\makeatletter
\renewenvironment{thebibliography}[1]
{\section*{\refname
\@mkboth{\MakeUppercase\refname}{\MakeUppercase\refname}}%
\list{\@biblabel{\@arabic\c@enumiv}}%
{\settowidth\labelwidth{\@biblabel{#1}}%
\leftmargin\labelwidth
\advance\leftmargin\labelsep
\itemsep\bibspace
\parsep\z@skip %
\@openbib@code
\usecounter{enumiv}%
\let\p@enumiv\@empty
\renewcommand\theenumiv{\@arabic\c@enumiv}}%
\sloppy\clubpenalty4000\widowpenalty4000%
\sfcode`\.\@m}
{\def\@noitemerr
{\@latex@warning{Empty `thebibliography' environment}}%
\endlist}
\makeatother
\renewcommand{\rmdefault}{ptm}
\numberwithin{equation}{section}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{remarks}[theorem]{Remarks}
\newtheorem{question}[theorem]{Question}
\def\Xint#1{\mathchoice
{\XXint\displaystyle\textstyle{#1}}%
{\XXint\textstyle\scriptstyle{#1}}%
{\XXint\scriptstyle\scriptscriptstyle{#1}}%
{\XXint\scriptscriptstyle\scriptscriptstyle{#1}}%
\!\int}
\def\XXint#1#2#3{{\setbox0=\hbox{$#1{#2#3}{\int}$ }
\vcenter{\hbox{$#2#3$ }}\kern-.6\wd0}}
\def\ddashint{\Xint=}
\def\dashint{\Xint-}
\newcommand{\qedbox}{\begin{flushright}\rule{3mm}{3mm} \end{flushright}}
\begin{document}
\title{Asymptotic expansion at infinity of solutions of Monge-Amp\`ere type equations}
\author{Zixiao Liu,\quad Jiguang Bao\footnote{Supported in part by the National Natural Science Foundation of China (11871102 and 11631002).}}
\date{\today}
\maketitle
\begin{abstract}
We obtain a quantitative expansion at infinity of solutions to a class of Monge-Amp\`ere type equations that originate from the mean curvature equations of the Lagrangian graph $(x,Du(x))$, refining the previous study on zero mean curvature equations and Monge-Amp\`ere equations.

{\textbf{Keywords:}} Monge-Amp\`ere equation, Mean curvature equation, Asymptotic expansion.

{\textbf{MSC~2020:}}~~ 35J60;~~35B40
\end{abstract}
\section{Introduction}
In 2018, Wang-Huang-Bao \cite{Wang.Chong-paper} studied the second boundary value problem of the Lagrangian mean curvature equation of the gradient graph $(x,Du(x))$ in $\left(\mathbb{R}^{n} \times \mathbb{R}^{n}, g_{\tau}\right)$, where $Du$ denotes the gradient of the scalar function $u$ and
\begin{equation*}
g_{\tau}=\sin \tau \delta_{0}+\cos \tau g_{0}, \quad \tau \in\left[0, \frac{\pi}{2}\right]
\end{equation*}
is the linear combination of the standard Euclidean metric
\begin{equation*}
\delta_{0}=\sum_{i=1}^{n} d x_{i} \otimes d x_{i}+\sum_{j=1}^{n} d y_{j} \otimes d y_{j},
\end{equation*}
with the pseudo-Euclidean metric
\begin{equation*}
g_{0}=\sum_{i=1}^{n} d x_{i} \otimes d y_{i}+ \sum_{j=1}^{n} d y_{j} \otimes d x_{j}.
\end{equation*}
They proved that for domain $\Omega\subset\mathbb R^n$, if $u\in C^2(\Omega)$ is a solution of
\begin{equation}\label{Equ-perturb}
F_{\tau}\left(\lambda\left(D^{2} u\right)\right)=f(x), \quad x \in \Omega,
\end{equation}
then $Df(x)$ is the mean curvature of the gradient graph $(x,Du(x))$ in $\left(\mathbb{R}^{n} \times \mathbb{R}^{n}, g_{\tau}\right)$. Previously, Warren \cite{Warren} proved that when $f(x)\equiv C_0$ for some constant $C_0$, the mean curvature of $(x,Du(x))$ is zero. In \eqref{Equ-perturb},
$f(x)$ is a scalar function with sufficient regularity, $\lambda\left(D^{2} u\right)=\left(\lambda_{1}, \lambda_{2}, \cdots, \lambda_{n}\right)$ denotes the eigenvalues of the Hessian matrix $D^{2} u$, and
$$
F_{\tau}(\lambda):=\left\{
\begin{array}{ccc}
\displaystyle \frac{1}{n} \sum_{i=1}^{n} \ln \lambda_{i}, & \tau=0,\\
\displaystyle \frac{\sqrt{a^{2}+1}}{2 b} \sum_{i=1}^{n} \ln \frac{\lambda_{i}+a-b}{\lambda_{i}+a+b},
& 0<\tau<\frac{\pi}{4},\\
\displaystyle-\sqrt{2} \sum_{i=1}^{n} \frac{1}{1+\lambda_{i}}, & \tau=\frac{\pi}{4},\\
\displaystyle\frac{\sqrt{a^{2}+1}}{b} \sum_{i=1}^{n} \arctan \displaystyle\frac{\lambda_{i}+a-b}{\lambda_{i}+a+b}, &
\frac{\pi}{4}<\tau<\frac{\pi}{2},\\
\displaystyle\sum_{i=1}^{n} \arctan \lambda_{i}, & \tau=\frac{\pi}{2},\\
\end{array}
\right.
$$
where $a=\cot \tau$ and $b=\sqrt{\left|\cot ^{2} \tau-1\right|}$.
If $\tau=0$, then \eqref{Equ-perturb} becomes the Monge-Amp\`ere type equation
\begin{equation}\label{equ-MA}
\operatorname{det} D^{2} u=e^{nf(x)}\quad\text{in }\mathbb R^n.
\end{equation}
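Indeed, for convex $u$ (so that $\lambda_i>0$) the reduction is a one-line computation:
\begin{equation*}
\frac{1}{n} \sum_{i=1}^{n} \ln \lambda_{i}=f(x)
\quad\Longleftrightarrow\quad
\ln \prod_{i=1}^{n} \lambda_{i}=n f(x)
\quad\Longleftrightarrow\quad
\operatorname{det} D^{2} u=e^{n f(x)},
\end{equation*}
since $\operatorname{det} D^{2} u=\prod_{i=1}^{n}\lambda_{i}$.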
For $f(x)$ being a constant $C_0$, there are Bernstein-type results by J\"orgens \cite{Jorgens}, Calabi \cite{Calabi} and Pogorelov \cite{Pogorelov}, which state that any convex classical solution of \eqref{equ-MA} must be a quadratic polynomial. See Cheng-Yau \cite{ChengandYau}, Caffarelli \cite{7}, Jost-Xin \cite{JostandXin} and Li-Xu-Simon-Jia \cite{AffineMongeAmpere} for different proofs and extensions. For $f(x)-C_0$ having compact support, there are exterior Bernstein-type results by Ferrer-Mart\'{\i}nez-Mil\'{a}n \cite{FMM99} for $n=2$ and by Caffarelli-Li \cite{CL}, which state that any convex solution must be asymptotic to a quadratic polynomial at infinity (for $n=2$ an additional $\ln$-term is needed). For $f(x)-C_0$ vanishing at infinity, there are similar asymptotic results by Bao-Li-Zhang \cite{BLZ}. For $f(x)-C_0$ being a periodic or asymptotically periodic function, there are classification results by Caffarelli-Li \cite{Peroidic_MA}, Teixeira-Zhang \cite{Peroidic_MA2} etc.
If $\tau=\frac{\pi}{2}$, then \eqref{Equ-perturb} becomes the Lagrangian mean curvature equation
\begin{equation}\label{equ-spl}
\sum_{i=1}^{n} \arctan \lambda_{i}\left(D^{2} u\right)=f(x)\quad\text{in }\mathbb R^n.
\end{equation}
For $f(x)$ being a constant $C_0$, there are Bernstein-type results by Yuan \cite{Yu.Yuan1,Yu.Yuan2}, which state that any classical solution of \eqref{equ-spl} satisfying
\begin{equation}\label{equ-cond-spl}
D^2u\geq \left\{
\begin{array}{lll}
-KI, & n\leq 4,\\
-(\frac{1}{\sqrt 3}+\epsilon(n))I, & n\geq 5,\\
\end{array}
\right.\quad\text{or}\quad C_0>\frac{n-2}{2}\pi,
\end{equation}
must be a quadratic polynomial, where $I$ denotes the $n\times n$ identity matrix, $K$ is a constant and $\epsilon(n)$ is a small dimensional constant.
For $f(x)-C_0$ having compact support, there is an exterior Bernstein-type result by Li-Li-Yuan \cite{ExteriorLiouville}, which states that any classical solution of \eqref{equ-spl} with \eqref{equ-cond-spl} must be asymptotic to a quadratic polynomial at infinity (for $n=2$ an additional $\ln$-term is needed).
For general $\tau\in [0,\frac{\pi}{2}]$ and $f(x)$ being a constant $C_0$, there are Bernstein-type results under suitable semi-convexity conditions by Warren \cite{Warren}, based on the results of J\"orgens \cite{Jorgens}-Calabi \cite{Calabi}-Pogorelov \cite{Pogorelov}, Flanders \cite{Flanders} and Yuan \cite{Yu.Yuan1,Yu.Yuan2}. For $f(x)-C_0$ having compact support, there are exterior Bernstein-type results for $n\geq 3$ in our earlier work \cite{bao-liu-2020}, which state that any classical solution of \eqref{Equ-perturb} with suitable semi-convexity conditions must be asymptotic to a quadratic polynomial at infinity. There are also higher order expansions at infinity, which give the precise gap between the exterior maximal/minimal gradient graph and the entire case. Such a higher order expansion problem was considered for the Yamabe equation and the $\sigma_k$-Yamabe equation by Han-Li-Li \cite{Han2019-Expansion}, which refines the studies by Caffarelli-Gidas-Spruck \cite{CGS}, Korevaar-Mazzeo-Pacard-Schoen \cite{KMPS}, Han-Li-Teixeira \cite{Han-Li-T-Simgak} etc.
In this paper, we obtain asymptotic expansion at infinity of classical solutions of
\begin{equation}\label{Equ-exterior}
F_{\tau}(\lambda(D^2u))=f(x)\quad\text{in }\mathbb R^n,
\end{equation}
where $n\geq 3$, $\tau\in [0,\frac{\pi}{4}]$ and $f(x)$ is a perturbation of $f(\infty):=\displaystyle\lim _{x \rightarrow \infty} f(x)$ at infinity. This partially refines the previous studies \cite{BLZ,CL,RemarkMA-2020,ExteriorLiouville,bao-liu-2020} etc.
Our first result concerns the asymptotic behavior and higher order expansions of general classical solutions of \eqref{Equ-exterior}. Hereinafter, we let $\varphi=O_m(|x|^{-k_1}(\ln|x|)^{k_2})$ with $m\in\mathbb N$, $k_1,k_2\geq 0$ denote
$$
|D^k\varphi|=O(|x|^{-k_1-k}(\ln|x|)^{k_2})
\quad\text{as}~|x|\rightarrow+\infty
$$
for all $0\leq k\leq m$. Let $x^T$ denote the transpose of a vector $x\in\mathbb R^n$, $\mathtt{Sym}(n)$ denote the set of symmetric $n\times n$ matrices,
$\mathcal H_k^n$ denote the space of $k$-th order spherical harmonics in $\mathbb R^n$,
$DF_{\tau}(\lambda(A))$ denote the matrix whose entries are the partial derivatives of $F_{\tau}(\lambda(M))$ with respect to the variable $M_{ij}$, evaluated at the matrix $A$, and $[k]$ denote the largest natural number no larger than $k$.
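To illustrate the notation, the fundamental-solution-type profile $\varphi(x)=|x|^{2-n}$ satisfies $\varphi=O_m(|x|^{2-n})$ for every $m\in\mathbb N$, since each differentiation lowers the homogeneity degree by one:
\begin{equation*}
|D^k(|x|^{2-n})|\leq C(n,k)|x|^{2-n-k},\quad\forall~0\leq k\leq m.
\end{equation*}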
\begin{theorem}\label{Thm-firstExpansion}
Let $u \in C^{2}\left(\mathbb{R}^{n}\right)$ be a classical solution of \eqref{Equ-exterior}, where
$f\in C^0(\mathbb{R}^n)
$ is $C^m$ outside a compact subset of $\mathbb{R}^n$ and satisfies \begin{equation}\label{Low-Regular-Condition}
\limsup _ { | x | \rightarrow \infty } | x | ^ {\zeta+k} | D^k( f ( x ) - f ( \infty ) ) | < \infty,\quad\forall~ k=0,1,2,\cdots,m
\end{equation}
for some $\zeta>2$ and $m\geq 2$.
Suppose either of the following holds
\begin{enumerate}[(1)]
\item \label{case-MA} $D^2u>0$ for $\tau=0$;
\item \label{case-small}
\begin{equation}\label{Condition-QuadraticGrowth-1}
u(x)\leq C(1+|x|^2)\quad\text{and}\quad D^2u>(-a+b)I,\quad\forall~ x\in\mathbb{R}^n
\end{equation}
for some constant $C$, for $\tau\in (0,\frac{\pi}{4})$;
\item \label{case-inverse}
\begin{equation}\label{Condition-QuadraticGrowth-2}
u(x)\leq C(1+|x|^2)\quad\text{and}\quad D^2u>-I,\quad \forall~ x\in\mathbb{R}^n
\end{equation}
for some constant $C$, for $\tau=\frac{\pi}{4}$.
\end{enumerate}
Then there exist $c\in\mathbb R, b\in\mathbb R^n$ and $ A\in\mathtt{Sym}(n)$ with $F_{\tau}(\lambda(A))=f(\infty)$ such that
\begin{equation}\label{equ-asym-Behavior}
u(x)-\left(\frac{1}{2} x^T A x+b x+c\right)=\left\{
\begin{array}{llll}
O_{m+1}(|x|^{2-\min\{n,\zeta\}}), & \zeta\not=n,\\
O_{m+1}(|x|^{2-n}(\ln|x|)), & \zeta=n,\\
\end{array}
\right.
\end{equation}
as $|x|\rightarrow+\infty$.
\end{theorem}
\begin{remark}
The matrix $A$ in Theorem \ref{Thm-firstExpansion} also satisfies $A> 0$ in case \eqref{case-MA}, $A> (-a+b)I$ in case \eqref{case-small} and $A> -I$ in case \eqref{case-inverse} respectively.
\end{remark}
\begin{remark}\label{example}
Notice that in condition \eqref{Low-Regular-Condition} we only require $m\geq 2$, which improves the results for $m\geq 3$ by Bao-Li-Zhang \cite{BLZ}. It would be interesting to determine the sharp lower bound for $m$ in Theorem \ref{Thm-firstExpansion}. An example in \cite{BLZ} shows that the decay rate assumption $\zeta>2$ in \eqref{Low-Regular-Condition} is optimal.
\end{remark}
We also have the following higher order expansions for $\zeta>n$, which give a finer characterization of the error term in \eqref{equ-asym-Behavior}.
\begin{theorem}\label{Thm-secondExpansion}
Under conditions of Theorem \ref{Thm-firstExpansion}, there exist $c_0\in\mathbb R$,
$c_k(\theta)\in\mathcal H_k^n$ with $k=1,2,\cdots,n-[2n-\zeta]-1$ such that
\begin{equation}\label{equ-asym-expan-1}
\begin{array}{llll}
&\displaystyle u ( x ) - \left( \frac { 1 } { 2 } x ^ TA x + bx + c \right)\\
-&\displaystyle
c_0(x^T(DF_{\tau}(\lambda(A)))^{-1}x)^{\frac{2-n}{2}}
-\sum_{k=1}^{n-[2n-\zeta]-1}c_k(\theta)\left(x^T(DF_{\tau}(\lambda(A)))^{-1} x\right)^{\frac{2-n-k}{2}}
\\=&
\left\{
\begin{array}{lllll}
O_m(|x|^{2-\min\{2n,\zeta\}}), & \min\{2n,\zeta\}-n\not\in\mathbb N,\\
O_m(|x|^{2-\min\{2n,\zeta\}}(\ln |x|)), & \min\{2n,\zeta\}-n\in\mathbb N,\\
\end{array}
\right.
\end{array}
\end{equation}
as $|x|\rightarrow+\infty$, where
\begin{equation*}
\theta=\frac{(DF_{\tau}(\lambda(A)))^{-\frac{1}{2}}x}{\left(x^T(
DF_{\tau}(\lambda(A)))^{-1}x\right)^{\frac{1}{2}}}.
\end{equation*}
\end{theorem}
\begin{remark}\label{thm-radial}
By computing $F_{\tau}(\lambda(D^2u))$ for radially symmetric $u$ of the form $\frac{C_1}{2}|x|^2+C_2|x|^{-k}$, we find that the expansions \eqref{equ-asym-Behavior} and \eqref{equ-asym-expan-1} are optimal for all $\zeta>2$, in the sense that
the series in $k$ does not exist or cannot be taken up to $n-[2n-\zeta]$
when $2<\zeta\leq n$ or $\zeta>n$ respectively,
since $c_{n-[2n-\zeta]}$ does not belong to the space $\mathcal H_{n-[2n-\zeta]}^n$ in general.
\end{remark}
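As a sketch of the computation behind Remark \ref{thm-radial}: for $u(x)=\frac{C_1}{2}|x|^2+C_2|x|^{-k}$ with $x\not=0$, the Hessian
\begin{equation*}
D^2u=\left(C_1+C_2k(k+1)|x|^{-k-2}\right)\frac{xx^T}{|x|^2}+\left(C_1-C_2k|x|^{-k-2}\right)\left(I-\frac{xx^T}{|x|^2}\right)
\end{equation*}
has the radial eigenvalue $C_1+C_2k(k+1)|x|^{-k-2}$ and the tangential eigenvalue $C_1-C_2k|x|^{-k-2}$ of multiplicity $n-1$. Expanding $F_{\tau}(\lambda(D^2u))$ in powers of $|x|^{-k-2}$ then produces, term by term, the decay rates compared in Remark \ref{thm-radial}.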
The paper is organized as follows. In Section \ref{sec-convergeHessian} we prove that the Hessian matrix $D^2u$ converges to some constant matrix $A\in\mathtt{Sym}(n)$ at infinity, in preparation for the proof of Theorem \ref{Thm-firstExpansion}. In the next two sections we prove Theorems \ref{Thm-firstExpansion} and \ref{Thm-secondExpansion} respectively, based on a detailed analysis of solutions of the non-homogeneous linearized equations.
Hereinafter, we let $B_r(x)$ denote the ball of radius $r$ centered at $x\in\mathbb R^n$; in particular, for $x=0$ we write $B_r:=B_r(0)$. For any open subset $\Omega\subset\mathbb R^n$, we let $\overline{\Omega}$ denote the closure of $\Omega$ and $\Omega^c$ denote the complement of $\Omega$ in $\mathbb R^n$.
\section{Convergence of Hessian at infinity}\label{sec-convergeHessian}
In this section, we study the asymptotic behavior at infinity of the Hessian matrix of classical solutions of \eqref{Equ-exterior}. Under a weaker assumption on $f$, we prove a convergence weaker than \eqref{equ-asym-Behavior} in Theorem \ref{Thm-firstExpansion}, and show that $D^2u$ has bounded $C^{\alpha}$ norm for some $0<\alpha<1$.
By the interior regularity result (Lemma 17.16 of \cite{GT}) and the extension theorem (Theorem 6.10 of \cite{EvansMeasureTheory}), we may modify the values of $u$ and $f$ on a compact subset of $\mathbb R^n$ and prove the results only for $u\in C^{2,\alpha}(\mathbb R^n)$ and $f\in C^{\alpha}(\mathbb R^n)$.
\begin{theorem}\label{thm-sec2}
Let $u$ be as in Theorem \ref{Thm-firstExpansion}, and let $f\in C^{\alpha}(\mathbb R^n)$ for some $0<\alpha<1$ satisfy
\begin{equation}\label{equ-temp-6}
\limsup_{|x|\rightarrow\infty}\left(
|x|^{\zeta}|f(x)-f(\infty)|+
|x|^{\alpha+\zeta'}[f]_{C^{\alpha}
(\overline{B_{\frac{|x|}{2}}(x)})}\right)<\infty
\end{equation}
\begin{enumerate}[(1)]
\item
with some $\zeta>1,\zeta'>0$ for $\tau=0$;
\item \label{case-2.1-2}
with some $\zeta>1,\zeta'>0$ for $\tau\in(0,\frac{\pi}{4})$;
\item
with some $\zeta>0,\zeta'>0$ for $\tau=\frac{\pi}{4}$.
\end{enumerate}
Then there exist $\epsilon>0, A\in\mathtt{Sym}(n)$ with $F_{\tau}(\lambda(A))=f(\infty)$ and $C>0$ such that
$$
||D^2u||_{C^{\alpha}(\mathbb R^n)}\leq C,
\quad\text{and}\quad \left|D^2u(x)-A\right|\leq \dfrac{C}{|x|^{\epsilon}},\quad\forall~|x|\geq 1.
$$
\end{theorem}
The proof is separated into three subsections according to three different ranges of $\tau$.
\subsection{$\tau=0$ case}
In the case $\tau=0$, \eqref{Equ-exterior} becomes the Monge-Amp\`ere equation \eqref{equ-MA}.
\begin{theorem}\label{roughestimate}
Let $u \in C ^ { 0 } \left( \mathbb { R } ^ { n } \right)$ be a convex viscosity solution of
\begin{equation}\label{Monge-Ampere}
\operatorname{det} D^{2} u=\psi(x)\quad\text{in } \mathbb{R}^{n}
\end{equation}
with $u(0)=\min_{\mathbb{R}^n}u=0$, where $
0<\psi\in C ^ { 0 } \left( \mathbb { R } ^ { n } \right)
$
and $$\psi^{\frac{1}{n}}-1\in L^n(\mathbb{R}^n).
$$
Then there exists a linear transform $T$ satisfying $\det T=1$ such that $v:=u\circ T$ satisfies
$$
\left|v-\dfrac{1}{2}|x|^2\right|\leq C|x|^{2-\varepsilon},\quad \forall~ |x|\geq 1
$$
for some $C>0$ and $\varepsilon>0$.
\end{theorem}
Theorem \ref{roughestimate} can be found in the proof of Theorem 1.2 in \cite{BLZ}, which is based on the level set method by Caffarelli-Li \cite{CL}.
\begin{corollary}\label{corollary_estimate}
Let $u\in C^0(\mathbb R^n)$ be a convex viscosity solution of \eqref{Equ-exterior} with $\tau=0$, where $f\in C^0(\mathbb R^n)$ satisfies
$$
\limsup_{|x|\rightarrow\infty}|x|^{\zeta}|f(x)-f(\infty)|<\infty
$$
for some $\zeta>1$. Then there exists a linear transform $T$ satisfying $\det T=1$ such that $v:=u\circ T$ satisfies
\begin{equation}\label{roughestimate_appendix}
\left|v-\dfrac{\exp(f(\infty))}{2}|x|^2\right|\leq C|x|^{2-\varepsilon},\quad \forall~ |x|\geq 1
\end{equation}
for some $C>0$ and $\varepsilon>0$.
\end{corollary}
\begin{proof}
By a direct computation,
$$
\widetilde u(x):=\dfrac{1}{\exp(f(\infty))}\left(u(x)-Du(0)x-u(0)\right)
$$
is a convex viscosity solution of
$$
\det D^2\widetilde u=e^{n(f(x)-f(\infty))}=:\widetilde f(x)\quad\text{in }\mathbb R^n.
$$
By a direct computation, $|\widetilde f(x)-1|\leq C|x|^{-\zeta}$ for some $C>0$ and
$$
\int _ { \mathbb{R}^n\setminus B_{1} } \left|(\widetilde f(x))^ { \frac { 1 } { n } } - 1 \right| ^ { n } d x \leq C
\int _ { \mathbb{R}^n\setminus B_{1} } \left| \widetilde f (x ) - 1 \right| ^ { n } d x \leq C \int _ { \mathbb { R } ^ { n } \backslash B _ { 1 } } |x|^{-\zeta n} d x <\infty.
$$
The result follows immediately by applying Theorem \ref{roughestimate} to $\widetilde u$.
\end{proof}
As a consequence, we have the following convergence of the Hessian matrix for solutions of \eqref{Monge-Ampere}. The proof is similar to those in Bao-Li-Zhang \cite{BLZ} and Caffarelli-Li \cite{CL}. Since there are some differences from their proofs, we provide the details for the reader's convenience.
\begin{theorem}\label{corollary_estimate3}
Let $u \in C ^ { 0 } \left( \mathbb { R } ^ { n } \right)$ be a convex viscosity solution of \eqref{Equ-exterior} with $\tau=0$, and let $f \in C^{\alpha}(\mathbb{R}^n)$ satisfy \eqref{equ-temp-6} for some $0<\alpha<1$, $\zeta>1$ and $\zeta'>0$.
Then $u\in C^{2,\alpha}(\mathbb R^n)$,
\begin{equation}\label{HolderRegularity_MA}
||D^2u||_{C^{\alpha}(\mathbb R^n)}\leq C,
\end{equation}
and
\begin{equation}\label{ConvergeRateofHessian}
u-\left(\frac{1}{2}x^TAx+b x+c\right)=O_2(|x|^{2-\epsilon})
\end{equation}
as $|x|\rightarrow \infty$,
where
$\epsilon:=\min\{\varepsilon,\zeta,\zeta'\}$,
$\varepsilon$ is the positive constant from Theorem \ref{roughestimate}, $A\in\mathtt{Sym}(n)$ with $\det A=\exp(nf(\infty))$, $b\in\mathbb R^n$, $c\in\mathbb R$ and $C>0$.
\end{theorem}
\begin{proof}
By Corollary \ref{corollary_estimate}, there exist a linear transform $T$, $\varepsilon>0$ and $C>0$ such that $v:=u\circ T$ satisfies (\ref{roughestimate_appendix}).
\textbf{Step 1:} prove the $C^{\alpha}$ boundedness \eqref{HolderRegularity_MA} of the Hessian.
Let
\begin{equation*}
v_{R}(y)=\left(\frac{4}{R}\right)^{2} v\left(x+\frac{R}{4} y\right), \quad|y| \leq 2
\end{equation*}
for $|x|=R>2$.
By (\ref{roughestimate_appendix}),
\begin{equation*}
\left\|v_{R}\right\|_{C^0\left(\overline{B_{2}}\right)} \leq C
\end{equation*}
for some $C>0$ for all $R\geq 2$. Then $v_R$ satisfies
\begin{equation}\label{equ-temp-4}
\operatorname{det}\left(D^{2} v_{R}(y)\right)=\exp\left(nf\left(x+\frac{R}{4} y\right)\right)=: f_{R}(y) \quad \text { in } B_{2}.
\end{equation}
By a direct computation, there exists $C>0$ independent of $x$ such that
$$
||f_{R}-\exp(nf(\infty))||_{C^{0}(\overline{B_2})}\leq C R^{-\zeta}
$$
and for all $y_1,y_2\in B_2$,
$$
\dfrac{|f_R(y_1)-f_R(y_2)|}{|y_1-y_2|^{\alpha}}=
\dfrac{|f(z_1)-f(z_2)|}{|z_1-z_2|^{\alpha}}\cdot (\frac{R}{4})^{\alpha}\leq CR^{-\zeta'},
$$
where $z_i:=x+\frac{R}{4}y_i\in B_{\frac{|x|}{2}}(x)$.
Applying the interior estimates of Caffarelli \cite{7} and Jian-Wang \cite{25} on $B_2$, we have
\begin{equation}\label{CalphaEstimate}
\left\|D^{2} v_{R}\right\|_{C^{\alpha}\left(\overline{B_{1}}\right)} \leq C
\end{equation}
and hence
\begin{equation}\label{2.9}
\frac{1}{C}I\leq D^{2} v_{R} \leq C I\quad \text {in}~ B_{1}
\end{equation}
for some $C$ independent of $R$.
For any $|x|=R\geq 2$, we have
\begin{equation}\label{equ-bddHessian}
|D^2v(x)|=|D^2v_R(0)|\leq
||D^2v_R||_{C^0(\overline{B_1})}\leq C.
\end{equation}
For any $x_1,x_2\in B_{2}^c$ with $0<|x_2-x_1|\leq \frac{1}{4}|x_1|$, let $R:=|x_1|>2$, by (\ref{CalphaEstimate}),
\begin{equation*}
\begin{array}{llll}
\dfrac{\left|D^{2} v\left(x_{1}\right)-D^{2} v\left(x_{2}\right)\right|}{\left|x_{1}-x_{2}\right|^{\alpha}}
&=&\dfrac{\left|D^{2} v_{R}\left(0\right)-D^{2} v_{R}\left(\frac{4(x_2-x_1)}{|x_1|}\right)\right|}{\left|x_{1}-x_{2}\right|^{\alpha}}\\
&\leq & [D^2v_{R}]_{C^{\alpha}(\overline{B_1})}\cdot \left(\frac{4}{|x_1|}\right)^{\alpha}\\
&\leq & CR^{-\alpha}.\\
\end{array}
\end{equation*}
For any $x_1,x_2\in B_{2}^c$ with $|x_2-x_1|\geq \frac{1}{4}|x_1|$, by \eqref{equ-bddHessian},
$$
\dfrac{|D^2v(x_1)-D^2v(x_2)|}{|x_1-x_2|^{\alpha}}
\leq 2^{\alpha}\cdot 2||D^2v||_{C^0(\mathbb R^n)}\leq C.
$$
Since the linear transform $T$ from Theorem \ref{roughestimate} is invertible, (\ref{HolderRegularity_MA}) follows immediately.
\textbf{Step 2:} prove the convergence rate \eqref{ConvergeRateofHessian} at infinity.
Let
$$
w(x):=v(x)-\dfrac{\exp(f(\infty))}{2}|x|^2\quad\text{and}\quad w_R(y):=
\left(\frac{4}{R}\right)^{2} w\left(x+\frac{R}{4} y\right), \quad|y| \leq 2
$$
for $|x|=R\geq 2$. By (\ref{roughestimate_appendix}) in Theorem \ref{roughestimate},
\begin{equation*}
\left\|w_{R}\right\|_{C^0\left(\overline{B_{2}}\right)} \leq C R^{-\varepsilon}.
\end{equation*}
Applying the Newton-Leibniz formula between \eqref{equ-temp-4} and $\operatorname{det}(\exp(f(\infty))I)=\exp(nf(\infty))$,
\begin{equation*}
\widetilde{a_{i j}}(y) D_{i j} w_{R}=f_{ R}(y)-\exp(nf(\infty))\quad \text{in }B_2,
\end{equation*}
where $\widetilde{a_{i j}}(y)=\int_{0}^{1} D_{M_{i j}}\left(\operatorname{det} \left(\exp(f(\infty))I+t D^{2} w_{R}(y)\right)\right) d t$.
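Here $D_{M_{ij}}\det M$ is the $(i,j)$-cofactor of $M$; for instance, at a multiple of the identity,
\begin{equation*}
D_{M_{ij}}\operatorname{det} M\Big|_{M=\mu I}=\operatorname{cof}_{ij}(\mu I)=\mu^{n-1}\delta_{ij},
\end{equation*}
so when $D^2w_R$ is small, $\widetilde{a_{ij}}$ is a small perturbation of a positive multiple of $\delta_{ij}$, which is the source of the uniform ellipticity asserted below.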
By (\ref{CalphaEstimate}) and (\ref{2.9}), there exists constant $C$ independent of $|x|=R>2$ such that
\begin{equation*}
\frac{I}{C} \leq \widetilde{a_{i j}} \leq C I\quad\text {in } B_{1}, \quad\left\|\widetilde{a_{i j}}\right\|_{C^{ \alpha}\left(\overline{B_{1}}\right)} \leq C.
\end{equation*}
By interior Schauder estimates, see for instance Theorem 6.2 of \cite{GT},
\begin{equation}\label{equ-interiorSchauder}
\begin{array}{llll}
\left\|w_{R}\right\|_{C^{2, \alpha}\left(\overline{B_{\frac{1}{2}}}\right)} &\leq & C\left(\left\|w_{R}\right\|_{C^0\left(\overline{B_{1}}\right)}+\left\|f_{ R}-\exp(nf(\infty))\right\|_{C^{\alpha}\left(\overline{B_{1}}\right)}\right)\\
& \leq & C R^{-\min\{\varepsilon,\zeta,\zeta'\}}.\\
\end{array}
\end{equation}
The result (\ref{ConvergeRateofHessian}) follows immediately by
scaling back.
\end{proof}
\begin{remark}
In the proof of Theorem \ref{corollary_estimate3}, the interior Schauder estimates used in \eqref{equ-interiorSchauder} can be replaced by the $W^{2,\infty}$ type estimates (see for instance Remark 1.3 of \cite{Dong-Xu-estimate}),
$$
||w_R||_{W^{2,\infty}(\overline{B_{\frac{1}{2}}})}\leq C \left(
\left\|w_{R}\right\|_{C^0\left(\overline{B_{1}}\right)}+\left\|f_{ R}-\exp(nf(\infty))\right\|_{C^{\alpha}\left(\overline{B_{1}}\right)}
\right)\leq C R^{-\min\{\varepsilon,\zeta,\zeta'\}}.
$$
\end{remark}
\begin{remark}\label{corollary_estimate2}
The condition \eqref{equ-temp-6} in Theorem \ref{corollary_estimate3} holds if
for some $C>0$,
\begin{equation}\label{1-orderCondition}
|x|^{\zeta}|f(x)-f(\infty)|+|x|^{1+\zeta'}
\left| Df ( x ) \right|\leq C,\quad\forall~|x|>2.
\end{equation}
Even for $f\in C^1$, condition \eqref{equ-temp-6} is weaker than (\ref{1-orderCondition}).
For example, consider $f(x):=e^{-|x|}\sin(e^{|x|})$. On the one hand, $Df(x)$ does not admit a limit at infinity, hence $f$ does not satisfy condition (\ref{1-orderCondition}).
On the other hand, $|f(z)|\leq e^{-\frac{R}{2}}$ for all $z\in B_{\frac{|x|}{2}}(x)$ with $|x|=R>1$, and $|Df|\leq 2$ away from the origin since
$$
Df(x)=\left(\cos (e^{|x|})-e^{-|x|} \sin (e^{|x|})\right) \frac{x}{|x|}.
$$
Hence for any $z_1,z_2\in B_{\frac{|x|}{2}}(x)$ with $z_1\not=z_2$,
$$
\dfrac{\left|f\left(z_{1}\right)-f\left(z_{2}\right)\right|}{\left|z_{1}-z_{2}\right|^{\alpha}}
\leq \min\left\{2\left|z_{1}-z_{2}\right|^{1-\alpha},~
2e^{-\frac{R}{2}}\left|z_{1}-z_{2}\right|^{-\alpha}\right\}
\leq C e^{-\frac{(1-\alpha)R}{2}}
$$
for a constant $C$ independent of $R$, where the last inequality uses $\min\{Ah^{1-\alpha},Bh^{-\alpha}\}\leq A^{\alpha}B^{1-\alpha}$ for all $A,B,h>0$. Hence $f$ satisfies condition \eqref{equ-temp-6} for all $\alpha\in(0,1)$ and any $\zeta,\zeta'>0$.
\end{remark}
This finishes the proof of Theorem \ref{thm-sec2} in the case $\tau=0$.
\subsection{$\tau\in(0,\frac{\pi}{4})$ case}
In this subsection, we deal with the case $\tau\in(0,\frac{\pi}{4})$ by the Legendre transform and the results of the previous subsection.
Let
$f\in C^{\alpha}(\mathbb R^n)$ satisfy \eqref{equ-temp-6} for some $0<\alpha<1$, $\zeta>1$, $\zeta'>0$, and let $u\in C^{2,\alpha}(\mathbb R^n)$ be a classical solution of \eqref{Equ-exterior} satisfying condition \eqref{Condition-QuadraticGrowth-1}.
Let
$$
\overline{u}(x):=u(x)+\dfrac{a+b}{2}|x|^2,
$$
then
\begin{equation}\label{equ-star-1}
D^{2} \overline{u}=D^{2} u+(a+b) I>2bI\quad\text{in }\mathbb{R}^n.
\end{equation}
Let $(\widetilde{x},v)$ be the Legendre transform of $(x,\overline{u})$, i.e.,
\begin{equation}\label{LegendreTransform2}
\left\{
\begin{array}{ccc}
\widetilde{x}:=D\overline{u}(x),\\
Dv(\widetilde{x}):=x,\\
\end{array}
\right.
\end{equation}
and we have
\begin{equation*}
D ^{2} v(\widetilde{x})=\left(D^{2} \overline{u}(x)\right)^{-1}=(D^2u(x)+(a+b)I)^{-1}<\frac{1}{2b}I.
\end{equation*}
Let \begin{equation}\label{LegendreTransform}
\bar v(\widetilde{x}):=\dfrac{1}{2}|\widetilde{x}|^2-2bv(\widetilde{x}).
\end{equation}
By a direct computation, $D\bar u(\mathbb R^n)=\mathbb R^n$ and
\begin{equation}\label{property-Legendre}
\lambda_{i}\left(D^{2} \bar v\right)=1-2 b \cdot \frac{1}{\lambda_{i}+a+b}=\frac{\lambda_{i}+a-b}{\lambda_{i}+a+b}\in (0,1),
\end{equation}
where $\lambda_i:=\lambda_i(D^2u(x))$.
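To verify \eqref{property-Legendre}, note from \eqref{LegendreTransform} and \eqref{LegendreTransform2} that
\begin{equation*}
D^2\bar v(\widetilde x)=I-2b\,D^2v(\widetilde x)=I-2b\left(D^2u(x)+(a+b)I\right)^{-1},
\end{equation*}
which is diagonalized by the eigenvectors of $D^2u(x)$; the membership in $(0,1)$ follows from $\lambda_i+a-b>0$ (by \eqref{Condition-QuadraticGrowth-1}) and $\lambda_i+a+b>2b>0$.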
Thus $\bar v(\widetilde{x})$ satisfies the following Monge-Amp\`ere type equation
\begin{equation}\label{temp-16}
\operatorname{det} D^{2} \bar v=\exp \left\{\frac{2 b}{\sqrt{a^{2}+1}} f\left(\frac{1}{2 b}(\widetilde{x}-D \bar v(\widetilde{x}))\right)\right\}=: g(\widetilde{x})\quad \text{in }\mathbb{R}^n.
\end{equation}
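Indeed, taking logarithms in \eqref{property-Legendre} and using equation \eqref{Equ-exterior},
\begin{equation*}
\ln \operatorname{det} D^{2} \bar v(\widetilde x)=\sum_{i=1}^{n} \ln \frac{\lambda_{i}+a-b}{\lambda_{i}+a+b}=\frac{2 b}{\sqrt{a^{2}+1}} F_{\tau}(\lambda(D^2u(x)))=\frac{2 b}{\sqrt{a^{2}+1}} f(x),
\end{equation*}
and $x=Dv(\widetilde x)=\frac{1}{2b}(\widetilde x-D\bar v(\widetilde x))$ by \eqref{LegendreTransform} and \eqref{LegendreTransform2}, which gives \eqref{temp-16} after exponentiating.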
\textbf{Step 1:} There exists $C_0>1$ such that
\begin{equation}\label{linear-of-X}
\frac{1}{C_0}|x|\leq |\widetilde{x}|\leq C_0|x|,\quad\forall~|x|> 1.
\end{equation}
We prove the two inequalities in \eqref{linear-of-X} separately.
By the definition of $\widetilde x=D\bar u(x)$ and \eqref{equ-star-1},
\begin{equation*}
|\widetilde{x}-\widetilde{0}|=|D \bar u(x)-D \bar u(0)|>2 b|x|.
\end{equation*}
Hence by triangle inequality,
\begin{equation}\label{limitofX}
|\widetilde{x}|\geq
-|\widetilde{0}|+|\widetilde{x}-\widetilde{0}|
> -|\widetilde{0}|+2b|x|,
\end{equation}
and the first inequality of \eqref{linear-of-X} follows immediately.
By the quadratic growth condition in \eqref{Condition-QuadraticGrowth-1}, we prove that $Du(x)$ grows at most linearly. In fact, for any $|x|\geq 1$ with $Du(x)\not=0$, let
$e:=\frac{Du(x)}{|Du(x)|}\in\partial B_1$. By the Newton-Leibniz formula and \eqref{Condition-QuadraticGrowth-1},
\begin{equation}\label{gradientestimate}
\begin{array}{lllll}
u(x+|x|e)&= &\displaystyle u(x)+ \int_0^{|x|}e\cdot Du(x+se)\mathtt{d}s\\
&=& \displaystyle u(x)+\int_0^{|x|}\int_0^se\cdot D^2u(x+te)\cdot e\mathtt{d}t\mathtt{d} s +\int_0^{|x|}e\cdot Du(x)\mathtt{d}s\\
&\geq & \displaystyle u(x)+\frac{(-a+b)}{2}|x|^2+|Du(x)|\cdot |x|.\\
\end{array}
\end{equation}
Furthermore by \eqref{Condition-QuadraticGrowth-1}, there exists $C>0$ independent of $|x|\geq 1$ such that
$$
|Du(x)|\leq \dfrac{1}{|x|} \left(
C(1+\left|(x+|x|e)\right|^2)+C(1+|x|^2)+\frac{a-b}{2}|x|^2
\right)
\leq C(1+|x|).
$$
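The lower bound $-u(x)\leq C(1+|x|^2)$ implicitly used above can be justified as follows: since $D^2u>(-a+b)I$, the function $u(x)+\frac{a-b}{2}|x|^2$ is convex and hence lies above its supporting plane at the origin,
\begin{equation*}
u(x)+\frac{a-b}{2}|x|^{2} \geq u(0)+Du(0)\cdot x \geq-C(1+|x|),\quad\forall~x\in\mathbb R^n.
\end{equation*}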
Hence there exists $C>0$ such that
\begin{equation}\label{Condition-LinearGrowth}
|Du(x)|\leq C(1+|x|),\quad \forall~x\in\mathbb{R}^n.
\end{equation}
By \eqref{Condition-LinearGrowth}, there exists $C>0$ such that
$$
|\widetilde x|=|D u(x)+(a+b) x| \leq|D u(x)|+(a+b)|x| \leq C(|x|+1).
$$
The second inequality of \eqref{linear-of-X} follows immediately.
Now we study equation \eqref{temp-16} by applying Theorem \ref{corollary_estimate3} and Remark \ref{corollary_estimate2}, which requires knowledge of the asymptotic behavior of $g(\widetilde x)$.
\textbf{Step 2: }$g(\widetilde{x})$ satisfies condition \eqref{equ-temp-6}.
By the equivalence (\ref{linear-of-X}), $$
\lim_{\widetilde{x}\rightarrow\infty}g(\widetilde{x})=\exp\left\{\frac{2 b}{\sqrt{a^{2}+1}}f(\infty)\right\}=:g(\infty)\in(0,1].
$$
By a direct computation,
$$
\begin{array}{lllll}
&\displaystyle
|\widetilde{x}|^{\zeta}|g(\widetilde{x})-g(\infty)|\\
=& \displaystyle e^{\frac{2 b}{\sqrt{a^{2}+1}}f(\infty)}
\dfrac{ |\widetilde{x}|^{\zeta}}{\left|\frac{\widetilde{x}-D\bar v(\widetilde{x})}{2b}\right|^{\zeta}}
\cdot\left|\frac{\widetilde{x}-D\bar v(\widetilde{x})}{2b}\right|^{\zeta}
\cdot \left|
e^{\frac{2b}{\sqrt{a^2+1}}(f(\frac{\widetilde{x}-D\bar v(\widetilde{x})}{2b})-f(\infty))}-1
\right|
\\
\leq &
\displaystyle C |x|^{\zeta}\left|
e^{\frac{2b}{\sqrt{a^2+1}}(f(x)-f(\infty))}-1
\right|
\\
\leq &\displaystyle C
|x|^{\zeta}\left|f(x)-f(\infty)\right|
<C.\\
\end{array}
$$
For any $\widetilde y,\widetilde z\in B_{\frac{|\widetilde x|}{2}\cdot 2b}(\widetilde x), \widetilde y\not=\widetilde z$ with $|\widetilde x|> C_0$, by \eqref{equ-star-1} we have
$$
y,z\in B_{\frac{|x|}{2}}(x),\quad
|\widetilde y-\widetilde z|\geq 2b|y-z|>0\quad\text{and}\quad y\not=z.
$$
Thus by condition \eqref{equ-temp-6},
\begin{equation}\label{equ-temp-10}
\dfrac{
|g(\widetilde y)-g(\widetilde z)|
}{|\widetilde y-\widetilde z|^{\alpha}}\leq (2b)^{-\alpha}\dfrac{\left|\exp\{\frac{2b}{\sqrt {a^2+1}}f(y)\}
-\exp\{\frac{2b}{\sqrt {a^2+1}}f(z)\}\right|
}{|y-z|^{\alpha}}\leq C[f]_{C^{\alpha}(\overline{B_{\frac{|x|}{2}}(x)})}.
\end{equation}
Thus $g(\widetilde x)$ satisfies
\eqref{equ-temp-6} for the given $0<\alpha<1$, $\zeta>1$ and $\zeta'>0$.
By Theorem \ref{corollary_estimate3}, we have
$$
||D^2\bar v||_{C^{\alpha}(\mathbb R^n)}\leq C
$$
and
\begin{equation}\label{equ-temp-1}
\bar v-\left(\frac{1}{2}\widetilde x^T\widetilde A\widetilde x+\widetilde b\cdot \widetilde x+\widetilde c\right)=O_2(|\widetilde x|^{2-\epsilon})
\end{equation}
for some $0<\widetilde{A} \in \mathtt{Sym}(n)$ satisfying $\det \widetilde{A}=g(\infty)$, $\widetilde b\in\mathbb R^n, \widetilde c\in \mathbb R$ and $C,\epsilon > 0$.
\textbf{Step 3: } we finish the proof of Theorem \ref{thm-sec2} \eqref{case-2.1-2}.
By a strip argument as in \cite{ExteriorLiouville,bao-liu-2020} etc., we prove that $I-\widetilde{A}$ is invertible. In fact,
by \eqref{property-Legendre}, $\widetilde A\leq I$, and it remains to prove that $\lambda_i(\widetilde A)<1$ for all $i=1,2,\cdots,n$.
Arguing by contradiction, after a rotation of the $\widetilde{x}$-space making $\widetilde{A}$ diagonal, we may assume that $\widetilde{A}_{11}=1$. By \eqref{equ-temp-1}, the definition (\ref{LegendreTransform}) of the Legendre transform and (\ref{limitofX}), there exists $\widetilde {b_1}$ such that
\begin{equation}\label{strip-argument}
x_1=D_1v(\widetilde x)=\widetilde {b_1}+O(|\widetilde x|^{1-\epsilon})
\quad\text{as }|\widetilde x|\rightarrow\infty.
\end{equation}
Taking $x=Re_1$ and letting $R\rightarrow\infty$, this contradicts \eqref{linear-of-X}.
Let $$
A:=2b\left( I - \widetilde{A} \right) ^ { - 1 } - ( a + b ) I.
$$
By a direct computation, $F_{\tau}(\lambda(A))=f(\infty)$ and $$
\begin{array}{llll}
\left| D ^ { 2 } u(x) - A \right|& =&2b\left|
\left( I - D ^ { 2 } \bar v ( \widetilde { x } )\right) ^ { - 1 }-
\left( I - \widetilde{A} \right) ^ { - 1 }
\right|\\
&\leq &C|D^2\bar v(\widetilde x)-\widetilde{A}|\\
&
\leq& \dfrac{C}{|\widetilde{x}|^{\epsilon}}\quad\forall~|x|\geq 1.\\
\end{array}
$$
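For completeness, the direct computation behind $F_{\tau}(\lambda(A))=f(\infty)$ reads: from the definition of $A$, $\lambda_i(A)+a+b=\frac{2b}{1-\lambda_i(\widetilde A)}$, hence
\begin{equation*}
\frac{\lambda_{i}(A)+a-b}{\lambda_{i}(A)+a+b}=1-\frac{2 b}{\lambda_{i}(A)+a+b}=\lambda_{i}(\widetilde{A}),
\end{equation*}
and therefore
\begin{equation*}
F_{\tau}(\lambda(A))=\frac{\sqrt{a^{2}+1}}{2 b} \ln \operatorname{det} \widetilde{A}=\frac{\sqrt{a^{2}+1}}{2 b} \ln g(\infty)=f(\infty).
\end{equation*}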
By the equivalence (\ref{linear-of-X}),
we have
\begin{equation}\label{Result_LimitofHessian}
\left| D ^ { 2 } u ( x ) - A \right| \leq \frac { C } { | x | ^ {\epsilon} } ,\quad\forall ~| x | \geq 1.
\end{equation}
Furthermore,
by (\ref{LegendreTransform}), for any $x,y\in\mathbb{R}^n$,
\begin{equation*}
\left|D^{2} u(x)-D^{2} u(y)\right|=2 b\left|\left(I-D^{2} \bar v(\widetilde{x})\right)^{-1}-\left(I-D^{2} \bar v(\widetilde{y})\right)^{-1}\right|.
\end{equation*}
By \eqref{Result_LimitofHessian} and \eqref{property-Legendre}, $D^2\bar v(\widetilde x)$ is bounded away from $0$ and $I$; it follows that there exists $C>0$ such that
\begin{equation}\label{equivalentHessian}
\left|D^{2} u(x)-D^{2} u(y)\right| \leq 2 b C\left|D^{2} \bar v(\widetilde{x})-D^{2} \bar v(\widetilde{y})\right|.
\end{equation}
Combining (\ref{equivalentHessian}) and the equivalence (\ref{linear-of-X}), we see that
$D^2u$ has a bounded $C^{\alpha}$ norm. This finishes the proof of Theorem \ref{thm-sec2} in the case $\tau\in (0,\frac{\pi}{4})$.
\subsection{The $\tau=\frac{\pi}{4}$ case}
In this subsection, we deal with the case $\tau=\frac{\pi}{4}$ by the Legendre transform and an analysis of the Poisson equation.
Let $f\in C^{\alpha}(\mathbb R^n)$ satisfy \eqref{equ-temp-6} for some $0<\alpha<1,\zeta,\zeta'>0$ and
$u\in C^{2,\alpha}(\mathbb R^n)$ be a classical solution of \eqref{Equ-exterior} satisfying \eqref{case-inverse}.
Let $$
\overline{u}(x):=u(x)+\dfrac{1}{2}|x|^2,
$$
then $D^2\overline u>0$ in $\mathbb R^n$.
By equation \eqref{Equ-exterior}, for all $i=1,2,\cdots,n$,
$$
-\dfrac{1}{\lambda_i(D^2\bar u)}\geq -\sum_{j=1}^n\dfrac{1}{\lambda_j(D^2\bar u)}\geq \frac{\sqrt 2}{2}\inf_{\mathbb R^n}f.
$$
Thus
there exists $\delta>0$ such that
$$
D^2\overline u(x)>\delta I,\quad\forall~x\in\mathbb R^n.
$$
Let $(\widetilde{x},v)$ be the Legendre transform of $(x,\overline{u})$ as in \eqref{LegendreTransform2}
and we have
\begin{equation*}
0<D ^2v(\widetilde{x})=(D^2\overline{u}(x))^{-1}<\dfrac{1}{\delta}I.
\end{equation*}
By a direct computation, $D\bar u(\mathbb R^n)=\mathbb R^n$ and $v(\widetilde{x})$ satisfies the following Poisson equation \begin{equation}\label{temp-modify3}
\Delta v=-\frac{\sqrt{2}}{2}f(Dv(\widetilde x))=:g(\widetilde{x})\quad\text{in }\mathbb R^n.
\end{equation}
\textbf{Step 1:} There exists $C_0>1$ such that \eqref{linear-of-X} holds. The proof is separated into two parts similarly.
By the definition of Legendre transform in \eqref{LegendreTransform2},
$$
|\widetilde x-\widetilde 0|=|D\bar u(x)-D\bar u(0)|>\delta |x|.
$$
Hence by triangle inequality,
$$
|\widetilde x|\geq -|\widetilde 0|+|\widetilde x-\widetilde 0|>-|\widetilde 0|+\delta|x|
$$
and the first inequality of \eqref{linear-of-X} follows immediately.
The second inequality of \eqref{linear-of-X} follows similarly by \eqref{Condition-QuadraticGrowth-2} and
\eqref{gradientestimate}.
\textbf{Step 2:} Asymptotic behavior of $g(\widetilde x)$ at infinity. By the equivalence \eqref{linear-of-X},
$$
g(\widetilde x)=-\frac{\sqrt 2}{2}f(x)\rightarrow -\frac{\sqrt 2}{2}f(\infty)=:g(\infty)
$$
as $|\widetilde x|\rightarrow+\infty$. Similar to the proof of \eqref{equ-temp-10},
we have
$$
\limsup_{|\widetilde x|\rightarrow+\infty}\left(|\widetilde x|^{\zeta}|g(\widetilde x)-g(\infty)|+|\widetilde x|^{\alpha+\zeta'}[g]_{C^{\alpha}(\overline{B_{\frac{|\widetilde x|}{2}}(\widetilde x)})}\right)<\infty
$$
for the given $0<\alpha<1$ and $\zeta,\zeta'>0$.
\textbf{Step 3:} Asymptotic behavior of $v(\widetilde x)$ at infinity.
Since \eqref{equ-temp-6} remains valid when $\zeta>0$ is replaced by a smaller value, it suffices to treat the case $0<\zeta<2$, which simplifies the presentation.
By a direct computation, $\Delta |x|^{2-\zeta}=c_{n,\zeta}|x|^{-\zeta}$ in $B_1^c$.
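For completeness, the constant can be read off from the radial formula $\Delta |x|^{\alpha}=\alpha(\alpha+n-2)|x|^{\alpha-2}$ for $x\neq 0$: taking $\alpha=2-\zeta$ gives
$$
c_{n,\zeta}=(2-\zeta)(n-\zeta),
$$
which is positive when $0<\zeta<2$ and $n\geq 3$.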
Thus there exist a subsolution $\underline v$ and a supersolution $\overline v$ of the Poisson equation
\begin{equation}\label{equ-Poisson}
\Delta \widetilde v=g(\widetilde x)-g(\infty)\quad\text{in }\mathbb R^n
\end{equation}
with
$
\underline{v},\overline v=O(|\widetilde x|^{2-\zeta})$ as $|\widetilde x|\rightarrow\infty.
$
By Perron's method (see for instance \cite{BLZ,UserGuide-intro,ExteriorDirichlet}) and interior regularity, we have a classical solution $\widetilde v\in C^{2,\alpha}(\mathbb R^n)$ of \eqref{equ-Poisson} with $\widetilde v=O(|\widetilde x|^{2-\zeta})$ as $|\widetilde x|\rightarrow\infty$.
For any $|\widetilde x|=R\geq 1$, let
$$
\widetilde v_R(y):=\left(\frac{2}{R}\right)^2 \widetilde v(\widetilde x+\frac{R}{2}y),\quad y\in B_1.
$$
Then $ \widetilde v_R$ satisfies
$$
\Delta \widetilde v_R=g(\widetilde x+\frac{R}{2}y)-g(\infty)=:g_R(y)\quad\text{in }B_1.
$$
By a direct computation,
$$
||g_R||_{C^{\alpha}(\overline{B_1})}\leq CR^{-\min\{\zeta,\zeta'\}}
\quad\text{and}\quad
|| \widetilde v_R||_{C^0(\overline{B_1})}\leq CR^{-\zeta}.
$$
By interior Schauder estimates, we have
$$
||\widetilde v_R||_{C^{2,\alpha}(\overline{B_{1/2}})}\leq CR^{-\min\{\zeta,\zeta'\}}
$$
and then
$$
\widetilde v(\widetilde x)=O_2(|\widetilde x|^{2-\min\{\zeta,\zeta'\}})
$$
as $|\widetilde x|\rightarrow\infty$.
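Here, with $R=|\widetilde x|$, the $O_2$ bound follows by undoing the scaling:
$$
|\widetilde v(\widetilde x)|=\frac{R^2}{4}|\widetilde v_R(0)|\leq CR^{2-\min\{\zeta,\zeta'\}},\quad
|D\widetilde v(\widetilde x)|=\frac{R}{2}|D\widetilde v_R(0)|\leq CR^{1-\min\{\zeta,\zeta'\}},\quad
|D^2\widetilde v(\widetilde x)|=|D^2\widetilde v_R(0)|\leq CR^{-\min\{\zeta,\zeta'\}}.
$$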
Then
$$
\Delta (v-\widetilde v)=g(\infty)\quad\text{in }\mathbb R^n
$$
and $D^2(v-\widetilde v)$ is bounded. By a Liouville-type theorem, $v-\widetilde v$ is a quadratic function and hence
$$
v-\left(
\frac{1}{2}\widetilde x^T\widetilde A\widetilde x+\widetilde b \widetilde x+\widetilde c
\right)=O_2(|\widetilde x|^{2-\min\{\zeta,\zeta'\}})
$$
for some $\widetilde A\in\mathtt{Sym}(n)$ with $\mathtt{trace}\,\widetilde A=g(\infty)$, $\widetilde b\in\mathbb R^n$ and $\widetilde c\in\mathbb R$.
Similarly, we obtain \eqref{strip-argument} and conclude that $\widetilde A$ is invertible. Taking $A:=\widetilde A^{-1}-I$, the result follows as in the case $\tau\in(0,\frac{\pi}{4})$.
\section{Asymptotics of solutions of \eqref{Equ-exterior}}\label{sec-linear}
In this section, we prove Theorem \ref{Thm-firstExpansion}. As a key preparatory step, we
analyze the linearized equation of \eqref{Equ-exterior} and obtain the asymptotic behavior of its solutions at infinity. The major difficulty is that the linearized equation is nonhomogeneous.
\subsection{Asymptotics of solutions of nonhomogeneous linear elliptic equations}
Consider the linear elliptic equation
\begin{equation}\label{Dirichlet}
Lu:=a_{i j}(x) D_{i j} u(x)=f(x)\quad\text{in }\mathbb R^n,
\end{equation}
where the coefficients are uniformly elliptic,
satisfying \begin{equation}\label{HolderCoefficient}
||a_{ij}||_{C^{\alpha}(\mathbb{R}^n)}<\infty,
\end{equation}
for some $0<\alpha <1$
and
\begin{equation}\label{short-RangeCoefficient}
|a_{ij}(x)-a_{ij}(\infty)|\leq C|x|^{-\varepsilon},
\end{equation}
for some $0<(a_{ij}(\infty))\in\mathtt{Sym}(n)$ and $\varepsilon,C>0$.
\begin{theorem}\label{exteriorLiouville}
Let $v$ be a classical solution of \eqref{Dirichlet} that is bounded from at least one side, where the coefficients satisfy \eqref{HolderCoefficient} and (\ref{short-RangeCoefficient}) and $f\in C^{0}(\mathbb{R}^n)$ satisfies
\begin{equation}
\label{decayoff_1}
\limsup_{|x|\rightarrow+\infty} |x|^{\zeta}|f(x)|<\infty
\end{equation}
for some $\zeta>2$.
Then there exists a constant $v_{\infty}$ such that \begin{equation}\label{Result_ExteriorLiouville-2}
v ( x ) = v _ { \infty } +
\left\{
\begin{array}{llll}
O \left( |x|^{2-\min\{n,\zeta\}} \right), & \zeta\not=n,\\
O \left( |x|^{2-n}(\ln|x|) \right), & \zeta=n,\\
\end{array}
\right.
\end{equation}
as $|x|\rightarrow\infty$.
\end{theorem}
The homogeneous version of Theorem \ref{exteriorLiouville} has been proved earlier;
see for instance Gilbarg-Serrin \cite{Gilbarg-Serrin} and Li-Li-Yuan \cite{ExteriorLiouville}. Hence we start by constructing a special solution of \eqref{Dirichlet} and reduce the question to the homogeneous case.
By the criterion in \cite{Equivalence}, under conditions \eqref{HolderCoefficient} and \eqref{short-RangeCoefficient} the Green's function of the operator $L$ is equivalent to that of the Laplacian. More precisely, let $G_L(x,y)$ be the Green's function of $L$ centered at $y$; then there exists a constant $C$ such that
\begin{equation}\label{Equivalence-Green}
\begin{array}{llll}
C^{-1}|x-y|^{2-n} \leq G_{L}(x, y) \leq C|x-y|^{2-n},& \forall~ x\not=y,\\
\left|D_{x_{i}} G_{L}(x, y)\right| \leq C|x-y|^{1-n}, \quad i=1, \cdots, n,& \forall~ x\not=y,\\
\left|D_{x_{i}}D_{x_{j}} G_{L}(x, y)\right| \leq C|x-y|^{-n}, \quad i, j=1, \cdots, n,& \forall~ x\not=y.
\end{array}
\end{equation}
By an elementary estimate as in Bao-Li-Zhang \cite{BLZ}, we construct a solution that vanishes at infinity. More rigorously, we introduce the following result.
\begin{lemma}\label{existence}
There exists a bounded strong solution $u\in W^{2,p}_{loc}(\mathbb R^n)$, $p>n$, of (\ref{Dirichlet}) satisfying $$
u(x)=
\left\{
\begin{array}{lllll}
O(|x|^{2-\min\{n,\zeta\}}), & \zeta\not=n,\\
O(|x|^{2-n}(\ln|x|)), & \zeta =n,\\
\end{array}
\right.
$$
as $|x|\rightarrow\infty$.
\end{lemma}
\begin{proof}
By \eqref{Equivalence-Green} and Calder\'on-Zygmund inequality,
$$
w(x):=\int_{\mathbb{R}^n}G_L(x,y)f(y)\mathtt{d}y
$$ belongs to $W^{2,p}_{loc}(\mathbb{R}^n)$ for $p>n$ and is a strong solution of \eqref{Dirichlet} (see for instance \cite{Adams,Ziemer}). It remains to compute the vanishing speed at infinity.
Let $$
E_1:=\left\{y\in\mathbb R^n:\ |y|\leq |x|/2\right\},\quad
E_2:=\left\{y\in\mathbb R^n:\ |y-x|\leq |x|/2\right\},\quad
E_3:=\mathbb R^n\setminus(E_1\cup E_2).
$$
By a direct computation,
$$
\int_{E_1}\dfrac{1}{|x-y|^{n-2}}f(y)\mathtt{d}y
\leq C\int_{B_{\frac{|x|}{2}}}f(y)\mathtt{d}y\cdot |x|^{2-n}
\leq \left\{
\begin{array}{lllll}
C|x|^{2-\min\{n,\zeta \}}, & \zeta\not=n,\\
C|x|^{2-n}(\ln|x|),& \zeta =n.\\
\end{array}
\right.
$$
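For the reader's convenience, the inner integral over $E_1$ can be bounded using \eqref{decayoff_1}:
$$
\int_{B_{\frac{|x|}{2}}}|f(y)|\,\mathtt{d}y\leq C+C\int_{1}^{|x|/2}r^{\,n-1-\zeta}\,\mathtt{d}r=
\left\{
\begin{array}{ll}
O(|x|^{n-\zeta}), & \zeta<n,\\
O(\ln|x|), & \zeta=n,\\
O(1), & \zeta>n,
\end{array}
\right.
$$
and multiplying by $|x|^{2-n}$ yields the three cases above.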
Similarly, we have $\frac{|x|}{2}\leq |y|$ in $E_2$ and hence
\begin{equation*}
\int_{E_2}\dfrac{1}{|x-y|^{n-2}}f(y)\mathtt{d}y
\leq C
\int_{|x-y|\leq \frac{|x|}{2}}\dfrac{1}{|x-y|^{n-2}}\mathtt{d}y
\cdot \dfrac{1}{|x|^{\zeta}}\leq C |x|^{2-\zeta}.
\end{equation*}
Now we separate $E_3$ into two parts
$$
E_3^+:=\{y\in E_3:|x-y|\geq |y|\},\quad E_3^-:=E_3\setminus E_3^+.
$$
Then
$$
\int_{E_3^+}\dfrac{1}{|x-y|^{n-2}\cdot|y|^{\zeta}}\mathtt{d}y
\leq \int_{|y|\geq \frac{|x|}{2}}
\dfrac{1}{|y|^{n+\zeta-2}}\mathtt{d}y\leq C|x|^{2-\zeta}
$$
and
$$
\int_{E_3^-}\dfrac{1}{|x-y|^{n-2}\cdot|y|^{\zeta}}\mathtt{d}y
\leq \int_{|y-x|\geq \frac{|x|}{2}}
\dfrac{1}{|y-x|^{n+\zeta -2}}\mathtt{d}y\leq C|x|^{2-\zeta}.
$$
Hence there exists $C>0$ such that
$$
|w(x)|\leq C
\left| \int_{E_1\cup E_2\cup E_3}\dfrac{1}{|x-y|^{n-2}}f(y)\mathtt{d}y\right|
\leq
\left\{
\begin{array}{lllll}
C|x|^{2-\min\{n,\zeta\}}, & \zeta \not=n,\\
C|x|^{2-n}(\ln|x|), & \zeta =n.\\
\end{array}
\right.
$$
\end{proof}
\begin{proof}[Proof of Theorem \ref{exteriorLiouville}]
We may assume without loss of generality that $v$ is bounded from below, otherwise consider $-v$ instead.
Let $w(x)$ be the bounded strong solution of (\ref{Dirichlet}) from Lemma \ref{existence}, then
$$
\widetilde v:=v-w-\inf_{\mathbb R^n}(v-w)\geq 0
$$
is a strong solution of \eqref{Dirichlet} with $f\equiv0$.
By interior regularity, $\widetilde{v}$ is a positive classical solution. By Theorem 2.2 in \cite{ExteriorLiouville},
$$
\widetilde{v}( x ) = \widetilde{v} _ { \infty } + O \left( |x|^{2-n}\right)\quad \text {as } | x | \rightarrow \infty,
$$
for some constant $\widetilde{v}_{\infty}$.
Then the result follows immediately from Lemma \ref{existence}.
\end{proof}
\begin{remark}\label{ExteriorLiouville_NonPositive-remark}
If $v$ is a classical solution of \eqref{Dirichlet} with $|Dv(x)|=O(|x|^{-1})$ as $|x|\rightarrow\infty$ and $f\in C^{0}(\mathbb{R}^n)$ satisfies
(\ref{decayoff_1}), then $v$ is bounded from at least one side. The proof is similar to the case $f\equiv 0$, which can be found in Corollary 2.1 of \cite{ExteriorLiouville}.
\end{remark}
\subsection{Proof of Theorem \ref{Thm-firstExpansion}}\label{sec-Pf-1.1}
Let $u \in C^{2}\left(\mathbb{R}^{n}\right)$ be a classical solution of \eqref{Equ-exterior}, where $f$ satisfies \eqref{Low-Regular-Condition} for some $\zeta>2,m\geq 2$ and either of cases \eqref{case-MA}-\eqref{case-inverse} holds.
By extension and interior estimates, we may assume that $u\in W^{4,p}_{loc}(\mathbb R^n)$ for some $p>n$.
By Theorem \ref{thm-sec2}, the Hessian matrix $D^2u$ has a finite $C^{\alpha}$ norm on $\mathbb R^n$ and converges to some $A\in\mathtt{Sym}(n)$ at a H\"older rate as in \eqref{Result_LimitofHessian}.
Let $v:=u(x)-\frac{1}{2}x^TAx$. Applying the Newton-Leibniz formula between $$
F_{\tau}\left(\lambda\left(D^{2} v+A\right)\right)=f(x)\quad\text{and}
\quad F_{\tau}(\lambda(A))=f(\infty),
$$
we have
\begin{equation}\label{linearized-equation-3}
\overline{a_{i j}}(x) D_{i j} v:=\int_{0}^{1} D_{M_{i j}} F_{\tau}\left(\lambda(t D^{2} v+A)\right) \mathrm{d} t \cdot D_{i j} v
=f(x)-f(\infty)=:
\overline{f}(x).
\end{equation}
For any $e\in\partial B_1$, by the concavity of the operator $F_{\tau}$, the partial derivatives $v_e:=D_ev$ and $v_{ee}:=D^2_ev$ are strong solutions of
\begin{equation}\label{linearized-equation-1}
\widehat{a_{ij}}(x)D_{ij}v_e:=D_{M_{i j}} F_{\tau} \left(\lambda( D^{2} v+A)\right) D_{i j} v_{e} =f_{e}(x),
\end{equation}
and
\begin{equation}\label{linearized-equation-2}
\widehat{a_{ij}}(x)D_{i j} v_{e e} \geq f_{e e}(x).
\end{equation}
By Theorem \ref{thm-sec2}, there exist $\epsilon>0$ and $C>0$ such that $$
\left|\overline{a_{ij}}(x)-D_{M_{ij}}F_{\tau}(\lambda(A))\right|+\left|\widehat{a_{ij}}(x)-D_{M_{ij}}F_{\tau}(\lambda(A))\right|\leq \frac{C}{|x|^{\epsilon}}.
$$
By condition (\ref{Low-Regular-Condition}) and constructing barrier functions for \eqref{linearized-equation-2}, there exists $C>0$ such that for all $x\in\mathbb R^n$,
$$
v_{ee}(x)\leq
\left\{
\begin{array}{ll}
C|x|^{2-\min\{n,\zeta+2\}}, & \zeta\not=n-2,\\
C|x|^{2-n}(\ln|x|), & \zeta=n-2.\\
\end{array}
\right.
$$
By the arbitrariness of $e$,
\begin{equation*}
\lambda_{\max }\left(D^{2} v\right)(x) \leq \left\{
\begin{array}{ll}
C|x|^{2-\min\{n,\zeta+2\}}, & \zeta\not=n-2,\\
C|x|^{2-n}(\ln|x|), & \zeta=n-2.\\
\end{array}
\right.
\end{equation*}
By \eqref{Low-Regular-Condition} and the ellipticity of equation (\ref{linearized-equation-3}),
\begin{equation*}
\lambda_{\min }\left(D^{2} v\right)(x) \geq-C \lambda_{\max }\left(D^{2} v\right)(x)-C|\overline{f}(x)| \geq
\left\{
\begin{array}{ll}
-C|x|^{2-\min\{n,\zeta+2\}}, & \zeta\not=n-2,\\
-C|x|^{2-n}(\ln|x|), & \zeta=n-2.\\
\end{array}
\right.
\end{equation*}
Hence $$
\left|D^{2}v(x)\right| \leq
\left\{
\begin{array}{ll}
C|x|^{2-\min\{n,\zeta+2\}}, & \zeta\not=n-2,\\
C|x|^{2-n}(\ln|x|), & \zeta=n-2.\\
\end{array}
\right.$$
By Theorem \ref{thm-sec2}, the coefficients $\overline{a_{ij}},\ \widehat{a_{ij}}$ have bounded $C^{\alpha}$ norms on the exterior domain. Since $\zeta>2$, applying Remark \ref{ExteriorLiouville_NonPositive-remark} to equation \eqref{linearized-equation-1}, for any $e\in\partial B_1$, $v_e(x)$ is bounded from one side and there exists $b_{e}\in\mathbb{R}$ such that
\begin{equation}\label{capture1-order} v_{e}(x)=b_{e}+
\left\{
\begin{array}{ll}
O\left(|x|^{2-\min\{n,\zeta+1\}}\right), & \zeta\not=n-1,\\
O\left(|x|^{2-n}(\ln|x|)\right), & \zeta=n-1,\\
\end{array}
\right.
\quad\text{as }|x|\rightarrow\infty.
\end{equation}
Picking $e$ as the $n$ unit coordinate vectors of $\mathbb R^n$, we obtain $b\in\mathbb R^n$ from \eqref{capture1-order} and let
$$\overline{v}(x):=v(x)-bx=u(x)-\left(\frac{1}{2}x^TAx+bx\right).$$
By \eqref{capture1-order},
$$
|D\overline{v}(x)|=|(\partial_{x_1}v-b_1,\cdots,\partial_{x_n}v-b_n)|=
\left\{
\begin{array}{ll}
O\left(|x|^{2-\min\{n,\zeta+1\}}\right), & \zeta\not=n-1,\\
O\left(|x|^{2-n}(\ln|x|)\right), & \zeta=n-1,\\
\end{array}
\right.
$$
as $|x|\rightarrow\infty$.
By \eqref{linearized-equation-3},
\begin{equation*}
\overline{a_{ij}}(x)D_{ij}\overline{v}=\overline{a_{ij}}(x)
D_{ij}v=\overline{f}(x).
\end{equation*}
By the arguments above again, there exists $c\in\mathbb R$ such that
\begin{equation*}
\overline{v}(x)=c+
\left\{
\begin{array}{llll}
O(|x|^{2-\min\{n,\zeta\}}), & \zeta\not=n,\\
O\left(|x|^{2-n}(\ln |x|)\right), & \zeta=n,\\
\end{array}
\right.
\quad\text{as }|x|\rightarrow\infty.
\end{equation*}
Notice that here we used $\zeta>2$ to guarantee $|D\bar v|=O(|x|^{-1})$ and $\overline f=O(|x|^{-\zeta})$ at infinity.
Let
$
Q(x):=\frac{1}{2}x^TAx+bx+c.
$ Then
$$
|u-Q|=|\overline{v}-c|=
\left\{
\begin{array}{llll}
O(|x|^{2-\min\{n,\zeta\}}), & \zeta\not=n,\\
O(|x|^{2-n}(\ln|x|)), & \zeta=n,\\
\end{array}
\right.
\text{ as }|x|\rightarrow\infty.
$$
Finally, we estimate the higher-order derivatives of $u$.
For $|x|\geq 1$, let
\begin{equation*}
E(y)=\left(\frac{2}{|x|}\right)^{2}(u-Q)\left(x+\frac{|x|}{2} y\right).
\end{equation*}
Then by the Newton-Leibniz formula,
\begin{equation*}
\underline{a^{i j}}(y) D_{ij}E(y)=F_{\tau}\left(\lambda(A+D^{2} E(y))\right)-F_{\tau}(\lambda(A))=f(x+\frac{|x|}{2}y)-f(\infty)=:\underline f(y)\quad\text{in }B_1,
\end{equation*}
where
\begin{equation*}
\underline{a^{i j}}(y)=\int_{0}^{1} D_{M_{i j}}F_{\tau}\left(\lambda(A+t D^{2} E(y))\right) \mathtt{d}t.
\end{equation*}
By the Evans-Krylov estimate and interior Schauder estimate (see for instance Chap.8 of \cite{FullyNonlinear} and Chap.6 of \cite{GT}), for all $0<\alpha<1$,
we have
$$
\begin{array}{llll}
||E||_{C^{2,\alpha}(\overline{B_{\frac{1}{2}}})}&\leq & C(||E||_{C^0(\overline{B_1})}+||\underline f||_{C^{\alpha}(\overline{B_1})})\\
&\leq & C(||E||_{C^0(\overline{B_1})}+||\underline f||_{C^{1}(\overline{B_1})})\\
&=&
\left\{
\begin{array}{llll}
O(|x|^{-\min\{n,\zeta\}}), & \zeta\not=n,\\
O(|x|^{-n}(\ln|x|)), & \zeta=n,\\
\end{array}
\right.
\text{ as }|x|\rightarrow\infty.
\end{array}
$$
By taking further derivatives and iterating, we have for all $k\leq m+1$,
$$
\begin{array}{lllll}
\left(\frac{|x|}{2}\right)^{k-2}\left|D^k(u-Q)(x)\right|&=&|D^kE(0)|\\
&\leq & C_k(||E||_{C^0(\overline{B_1})}+||\underline f||_{C^{k-2,\alpha}(\overline{B_1})})\\
&\leq & C_k(||E||_{C^0(\overline{B_1})}+||\underline f||_{C^{k-1}(\overline{B_1})})\\
&=& \left\{
\begin{array}{llll}
O(|x|^{-\min\{n,\zeta\}}), & \zeta\not=n,\\
O(|x|^{-n}(\ln|x|)), & \zeta=n,\\
\end{array}
\right.
\text{ as }|x|\rightarrow\infty.
\end{array}
$$
This finishes the proof of Theorem \ref{Thm-firstExpansion}.
\section{Proof of Theorem \ref{Thm-secondExpansion}}
In this section, we consider asymptotic expansion at infinity for classical solutions of \eqref{Equ-exterior}. Assume that $u,f$ are as in Theorem \ref{Thm-firstExpansion}. Let $\overline{a_{ij}}, \overline f$ and $v$ be as in \eqref{linearized-equation-3} and subsection \ref{sec-Pf-1.1} respectively.
In the following, we only need to focus on the case $\zeta>n$, as explained in Remark \ref{thm-radial}.
It follows from \eqref{equ-asym-Behavior} in Theorem \ref{Thm-firstExpansion} that
\begin{equation*}
\left|\overline{a_{i j}}(x)-D_{M_{i j}} F_{\tau}(\lambda(A))\right| \leq C\left|D^{2} v(x)\right|=O_{m-1}\left(|x|^{-n}\right)
\end{equation*}
and hence
$$
\begin{array}{llll}
D_{M_{i j}} F_{\tau}(\lambda(A))D_{ij}v &= & \overline f-(\overline{a_{ij}}(x)-D_{M_{i j}} F_{\tau}(\lambda(A)))D_{ij}v=:g(x)\\
&=&O_m(|x|^{-\zeta})+ O_{m-1}\left(|x|^{-2n}\right) \\
&=& O_{m-1}(|x|^{-\min\{2n,\zeta\}})
\end{array}
$$
by \eqref{Low-Regular-Condition}
as $|x|\rightarrow\infty$.
Let $$Q:=
[D_{M_{i j}} F_{\tau}(\lambda(A))]^{\frac{1}{2}}\quad\text{and}\quad \widetilde v(x):=v(Qx).
$$
Then
\begin{equation}\label{temp-1}
\Delta \widetilde v(x)=g(Qx)=:\widetilde g(x)\quad\text{in } \mathbb R^n.
\end{equation}
By a direct computation,
$$
\widetilde v=O_{m+1}(|x|^{2-n})\quad\text{and}\quad \widetilde g=O_{m-1}\left(|x|^{-\min\{2 n,\zeta\}}\right).
$$
Let $\Delta_{\mathbb{S}^{n-1}}$ be the Laplace-Beltrami operator on unit sphere $\mathbb{S}^{n-1}\subset\mathbb{R}^n$ and
$$
\Lambda_0=0,~\Lambda_1=n-1,~\Lambda_2=2n,~\cdots,~\Lambda_k=k(k+n-2),~\cdots,
$$
be the sequence of eigenvalues of $-\Delta_{\mathbb S^{n-1}}$ with eigenfunctions
\begin{equation*}Y_1^{(0)}=1,~Y_{1}^{(1)}(\theta),~Y_{2}^{(1)}(\theta),~\cdots,~ Y_{n}^{(1)}(\theta),~\cdots,~Y_{1}^{(k)}(\theta),~\cdots,~Y_{m_k}^{(k)}(\theta),~\cdots
\end{equation*}
i.e.,
$$
-\Delta_{\mathbb{S}^{n-1}}Y_m^{(k)}(\theta)=\Lambda_kY_m^{(k)}(\theta),\quad\forall~
m=1,2,\cdots,m_k.
$$
By Lemmas 3.1 and 3.2 of \cite{bao-liu-2020}, there exists a solution $\widetilde v_{\widetilde g}$ of $\Delta \widetilde v_{\widetilde g}=\widetilde g$ in $\mathbb R^n\setminus\overline{B_1}$ with
\begin{equation*}
\widetilde v_{\widetilde g}=\left\{\begin{array}{ll}
O_{m}\left(|x|^{2-\min\{2 n,\zeta\}}\right), & \min\{2 n,\zeta\}-n \notin \mathbb{N}, \\
O_{m}\left(|x|^{2-\min\{2 n,\zeta\}}(\ln |x|)\right), & \min\{2 n,\zeta\}-n \in \mathbb{N}.
\end{array}\right.
\end{equation*}
Thus
$\overline v(x):=\widetilde v-\widetilde v_{\widetilde g}$ is harmonic on $\mathbb R^n\setminus\overline{B_1}$ with $\overline v=O(|x|^{2-n})$ as $|x|\rightarrow\infty$.
By spherical harmonic expansions, there exist constants $C_{k,m}^{(1)}, C_{k,m}^{(2)}$ such that $$
\overline{v}=\sum_{k=0}^{\infty} \sum_{m=1}^{m_{k}} C_{k,m}^{(1)}Y_{m}^{(k)}(\theta) |x|^{k} +\sum_{k=0}^{\infty} \sum_{m=1}^{m_{k}} C_{k,m}^{(2)} Y_{m}^{(k)}(\theta) |x|^{2-n-k}.
$$
By the vanishing speed of $\overline v$, we have $C_{k,m}^{(1)}=0$ for all $k,m$. Thus similar to the proof of Lemma 3.3 in \cite{bao-liu-2020}, there exist constants $c_{k,m}$ with $k\in\mathbb N$, $m=1,\cdots,m_k$ such that
\begin{equation*}
\widetilde v=
\left\{
\begin{array}{llll}
\displaystyle \sum_{k=0}^{[\zeta]-n} \sum_{m=1}^{m_{k}} c_{k, m}Y_{m}^{(k)}(\theta)|x|^{2-n-k} +
O_{m}\left(|x|^{2-\zeta}\right), & n<\zeta<2n,~\zeta\not\in\mathbb N,\\
\displaystyle \sum_{k=0}^{\zeta-n-1} \sum_{m=1}^{m_{k}} c_{k, m} Y_{m}^{(k)}(\theta)|x|^{2-n-k}+
O_{m}\left(|x|^{2-\zeta}(\ln |x|)\right), & n<\zeta< 2n,~\zeta\in\mathbb N,\\
\displaystyle \sum_{k=0}^{n-1} \sum_{m=1}^{m_{k}} c_{k, m}
Y_{m}^{(k)}(\theta)|x|^{2-n-k} +
O_{m}\left(|x|^{2-2n}(\ln |x|)\right), & 2n\leq \zeta.\\
\end{array}
\right.
\end{equation*}
Changing variables back via $Q^{-1}$,
the results in Theorem \ref{Thm-secondExpansion} follow immediately.
\small
\bibliographystyle{plain}
\section{Structural Representations based on Von Neumann Entropy}
Next, we present VNEstruct\xspace for generating structural node representations, employing the Von Neumann entropy, a model-agnostic measure that quantifies the structural complexity of a graph. The Von Neumann graph entropy (VNE) has been shown to have a linear correlation with other graph entropy measures~\cite{anand}. Graph entropy methods have recently proved successful for computing graph similarity~\cite{lipan}.
\subsection{Von Neumann Entropy on Graphs}
In quantum mechanics, the state of a quantum mechanical system is described by a \emph{density} matrix $\rho$, i.e., a positive semidefinite, Hermitian matrix with unit trace~\cite{Braunstein2006}. The Von Neumann entropy of the quantum system is defined as:
\setlength\abovedisplayskip{1pt}
\setlength\belowdisplayskip{1.3pt}
\begin{equation}
\label{eq::1}
H(\rho) = -\Tr(\rho\log\rho) = -\sum_{i=1}^n \lambda_i\log\lambda_i,
\end{equation}
where $\Tr(\cdot)$ is the trace of a matrix, and $\lambda_i$'s are the eigenvalues of $\rho$.
To connect this notion to graphs, given a graph $G=(V,E)$ and its Laplacian $L_G= D-A$, the VNE, denoted by $H(G)$, is defined as in Equation \ref{eq::1}, by replacing $\rho$ with $\rho(L_G) = \frac{L_G}{\text{Tr}(L_G)} = \frac{L_G}{2|E|}$~\cite{Braunstein2006}.
Note that $\lambda_i = \frac{1}{\text{Tr}(L_G)}v_i$, where $\lambda_i$ and $v_i$ are the $i$-th eigenvalues of $\rho(L_G)$ and $L_G$, respectively. Therefore, $0 \leq \lambda_i \leq 1$ holds for all $i \in \{1,\ldots,n\}$~\cite{Passerini2009QuantifyingCI}.
This indicates that \Eqref{eq::1} is equivalent to the Shannon entropy of the probability distribution $\{\lambda_i\}_{i=1}^n$.
Hence, $H(G)$ serves as a skewness metric of the eigenvalue distribution and it has been shown that it provides information on the structural complexity of a graph~\cite{Passerini2009QuantifyingCI}.
\textbf{Efficient approximation scheme.}
The computation of VNE requires the eigenvalue decomposition of the density matrix which can be done in $\mathcal{O}(n^3)$ time. Recent works~\cite{fast_incr, choi2018fast} have proposed an efficient approximation of $H(G)$. Starting from \Eqref{eq::1} and following~\cite{minello}, we obtain:
\begin{equation}\label{eq::3}
H(G) \approx \text{Tr}\big(\rho(L_G)(I_n - \rho(L_G))\big) = Q,
\end{equation}
where $I_n$ is the $n \times n$ identity matrix, and
\setlength\abovedisplayskip{1pt}
\setlength\belowdisplayskip{1pt}
\begin{equation}\label{eq::4}
Q = \frac{\text{Tr}(L_G)}{2m} - \frac{\text{Tr}(L_G^2)}{4m^2}
= 1 - \frac{1}{2m} - \frac{1}{4m^2}\sum_{i=1}^n d_i^2\,,
\end{equation}
where $m=|E|$ and $d_i$ is the degree of the $i$-th node. Finally, as~\cite{fast_incr} suggests, we obtain a tighter approximation of $H(G)$:
\setlength\abovedisplayskip{1.3pt}
\setlength\belowdisplayskip{1.3pt}
\begin{equation}\label{eq::5}
\hat{H} = -Q\ln\lambda_{\max}\,,
\end{equation}
where $\lambda_{\max}$ is the largest eigenvalue of $\rho(L_G)$.
It can be shown that for any graph $G$, we have $H(G) \geq \hat{H}(G)$ where the equality holds if and only if $\lambda_{\max} = 1$~\cite{choi2018fast}.
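The degree-based approximation in Equations \eqref{eq::4} and \eqref{eq::5} needs only the degree sequence plus the largest eigenvalue of the density matrix. A minimal pure-Python sketch (the function name and the dense-matrix power iteration are our own illustrative choices, suitable only for small graphs):

```python
import math

def vne_approx(n, edges):
    """Approximate the Von Neumann graph entropy.

    Computes Q = 1 - 1/(2m) - (1/(4m^2)) * sum_i d_i^2  (Eq. (4))
    and H_hat = -Q * ln(lambda_max)                     (Eq. (5)),
    where lambda_max is estimated by power iteration on the density
    matrix rho = L_G / Tr(L_G).  Assumes the graph has m >= 1 edges."""
    m = len(edges)
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    q = 1.0 - 1.0 / (2 * m) - sum(d * d for d in deg) / (4.0 * m * m)

    # Build rho = L_G / (2m) as a dense list of lists (fine for small n).
    rho = [[0.0] * n for _ in range(n)]
    for i in range(n):
        rho[i][i] = deg[i] / (2.0 * m)
    for u, v in edges:
        rho[u][v] = rho[v][u] = -1.0 / (2.0 * m)

    # Power iteration for the largest eigenvalue of rho.
    x = [1.0] + [0.0] * (n - 1)
    for _ in range(500):
        y = [sum(rho[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(c * c for c in y))
        x = [c / norm for c in y]
    lam_max = sum(x[i] * sum(rho[i][j] * x[j] for j in range(n))
                  for i in range(n))
    return q, -q * math.log(lam_max)
```

For the triangle $K_3$, for example, $Q = 1 - \frac{1}{6} - \frac{12}{36} = \frac{1}{2}$ and $\lambda_{\max} = \frac{1}{2}$, so $\hat{H} = \frac{1}{2}\ln 2$, consistent with the bound $H(G)\geq\hat{H}(G)$ since the exact entropy of $K_3$ is $\ln 2$.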
\begin{table*}[t]
\centering
\def1.1{1.1}
\resizebox{\textwidth}{!}{
\begin{tabular}{|lc|l||ccc|cc|} \hline
Configuration & Shapes & Algorithm & Homogeneity & Completeness & Silhouette & Accuracy & F$1$-score \\ \hline
\multirow{7}{*}{Basic / Basic Perturbed} & \multirow{7}{*}{\includegraphics[width=.2\textwidth]{figures/shapes2.pdf}} & DeepWalk & 0.178 / 0.172 & 0.115 / 0.124 & 0.163 / 0.171 & 0.442 / 0.488 & 0.295 / 0.327 \\
& & RolX& 0.983 / 0.764 & 0.976 / 0.458 & 0.846 / 0.429 & \textbf{1.000} / 0.928 & \textbf{1.000} / \textbf{0.886} \\
& & struc2vec& 0.803 / 0.625 & 0.595 / 0.543 & 0.402 / 0.429 & 0.784 / 0.703 & 0.708 / 0.632 \\
& & GraphWave& 0.868 / 0.714 & 0.797 / 0.326 & 0.730 / 0.287 & 0.995 / 0.906 & 0.993 / 0.861 \\
& & VNEstruct& \textbf{0.986} / \textbf{0.882} & \textbf{0.983} / \textbf{0.701} & \textbf{0.891} / \textbf{0.478} & 0.920 / \textbf{0.940} & 0.901 / 0.881 \\ \hline
\multirow{7}{*}{Varied / Varied Perturbed} &
\multirow{7}{*}{\includegraphics[width=.2\textwidth]{figures/shapes5.pdf}} & DeepWalk& 0.327 / 0.300 & 0.220 / 0.231 & 0.216 / 0.221 & 0.329 / 0.313 & 0.139 / 0.128 \\
& & RolX& \textbf{0.984} / 0.682 & 0.939 / 0.239 & 0.748 / 0.062 & \textbf{0.998} / 0.856 & 0.996 / 0.768 \\
& & struc2vec& 0.805 / 0.643 & 0.626 / 0.524 & 0.422 / \textbf{0.433} & 0.738 / 0.573 & 0.592 / 0.412 \\
& & GraphWave& 0.941 / 0.670 & 0.843 / 0.198 & \textbf{0.756} / 0.005 & 0.982 / 0.793 & \textbf{0.965} / 0.682 \\
& & VNEstruct& 0.950 / \textbf{0.722} & \textbf{0.945} / \textbf{0.678} & 0.730 / 0.399 & 0.988 / \textbf{0.899} & 0.95 / \textbf{0.878} \\ \hline
\hline
\end{tabular}
}
\caption{Performance of the baselines and the VNEstruct method for learning structural embeddings, averaged over $20$ synthetically generated graphs. Values after the slash correspond to the perturbed graphs.}
\label{tab:shapes}
\end{table*}
\subsection{The VNEstruct\xspace Algorithm}
Based on the VNE and its approximation, we introduce our proposed approach to construct structural representations. The VNEstruct\xspace algorithm extracts ego-networks of increasing radius and computes their VNE.
Then, the representation of a node comprises the Von Neumann entropies of the node's ego-networks.
Therefore, the set of entropies of the ego-networks of a node serves as a ``signature'' of the structural identity of its neighborhood.
Let $R$ be the maximum considered radius. For each $r \in \{1,\ldots,R\}$ and each node $v \in V$, the algorithm extracts the $r$-hop neighborhood $G_v^r = (V',E')$, where $V' = \{u \in V \,|\, d(u,v) \leq r\}$ and $E' = \{(u,w) \,|\, u, w \in V', (u,w) \in E \}$.
Next, $H(G_v^r)$ of the $r$-hop neighborhood of $v$ is computed using \Eqref{eq::5}.
Finally, the $R$ entropies are arranged into a single vector $h_v \in \mathbb{R}^R$. As shown in Figure~\ref{fig:method}, VNEstruct\xspace identifies structural equivalences of nodes that are distant from each other. Specifically, nodes $u$ and $v$ share structurally identical $1$-hop neighborhoods.
Therefore, the entropies of their $1$-hop neighborhoods are equal to each other.
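The extraction-plus-entropy loop above can be sketched in a few lines of pure Python. This is our own illustrative implementation, not the authors' code: it uses a single BFS to record hop distances and, for compactness, scores each ego-network with the degree-based approximation $Q$ of Equation \eqref{eq::4} rather than the exact entropy; it assumes every extracted ego-network contains at least one edge.

```python
from collections import deque

def ego_entropy_signature(adj, v, R):
    """Sketch of VNEstruct: for radii r = 1..R, extract the r-hop
    ego-network of node v and compute the entropy approximation
    Q = 1 - 1/(2m) - (1/(4m^2)) * sum_i d_i^2 of the induced subgraph.
    `adj` maps each node to the set of its neighbours."""
    # One BFS from v records the hop distance of every reachable node.
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)

    signature = []
    for r in range(1, R + 1):
        nodes = {u for u, d in dist.items() if d <= r}
        # Edges of the induced subgraph, each counted once.
        edges = {frozenset((u, w)) for u in nodes
                 for w in adj[u] if w in nodes}
        m = len(edges)
        deg = {u: sum(1 for w in adj[u] if w in nodes) for u in nodes}
        q = (1.0 - 1.0 / (2 * m)
             - sum(d * d for d in deg.values()) / (4.0 * m * m))
        signature.append(q)
    return signature
```

On a star with center $0$ and leaves $1,2,3$, the signature of a leaf is $[0,\,0.5]$ for $R=2$: its $1$-hop ego-network is a single edge (entropy $0$), while its $2$-hop ego-network is the whole star.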
\textbf{Computational Complexity.}
The algorithm consists of: ($1$) the extraction of the ego-networks and ($2$) the computation of VNEs per subgraph.
The first step is linear in the number of edges of the node's neighborhood.
In the worst case, the total complexity is $\mathcal{O}(nm)$, but for sparse graphs the per-node cost is constant in practice.
For the second step, following the approximation scheme in subsection 2.1,
$\lambda_{\max}$ is computed through the power iteration method~\cite{power_iteration}, which requires $\mathcal{O}(n+m)$ operations, as the Laplacian matrix has $\mathcal{O}(n+m)$ nonzero entries.
Hence, the whole method exhibits linear complexity $\mathcal{O}(n+m)$, while for very sparse graphs, it becomes $\mathcal{O}(n)$.
\textbf{Robustness to ``small'' perturbations.} We next show that, utilizing the VNE, we acquire structural representations that are robust to possible perturbations of the graph structure. Clearly, if two graphs are isomorphic, then their entropies are equal. It is important, though, for structurally similar graphs to have similar entropies, too. So, let $\rho, \rho' \in \mathbb{R}^{n \times n}$ be the density matrices of two graph Laplacians $L_G,L_{G'}$, as described above.
Let also $\tilde{\rho} := P \rho' P^\top$, where $P$ is the $n \times n$ permutation matrix attaining $\argmin_{P} || \rho - P \rho' P^\top ||_F$, and let $\epsilon := \rho - \tilde{\rho}$, an $n \times n$ symmetric matrix.
If $G,G'$ are nearly isomorphic, then the Frobenius norm of $\epsilon$ is small. By applying the Fannes-Audenaert inequality~\cite{Audenaert_2007}, we have that:
\begin{equation*}
|H(\tilde{\rho}) - H(\rho)| \leq \frac{1}{2}T \ln(n-1) + S(T),
\end{equation*} where $T = ||\tilde{\rho}-\rho||_1$ is the trace distance between $\rho,\tilde{\rho}$ and $S(T) = -T\log T- (1-T)\log(1-T)$.
However, $||\tilde{\rho}-\rho||_1 = \sum_i|\lambda_i^{\tilde{\rho}-\rho}| \leq n||\tilde{\rho} - \rho ||_{op}$, where $|| \cdot ||_{op}$ is the operator norm.
Therefore, $|H(\tilde{\rho})- H(\rho)| \leq \frac{n}{2}\ln(n-1)||\epsilon||_{op} + S(T)$, leading to a size-dependent upper bound on the difference between the entropies of structurally similar graphs.
\subsection{Graph-level Representations}
Next, we describe how the structural representations generated by VNEstruct\xspace can be combined with node attributes to perform graph classification tasks.
The majority of the state-of-the-art methods learn node representations using message-passing schemes~\cite{hamilton, xu2018powerful}, where each node updates its representation according to its neighbors' representations, utilizing the graph structure information.
In this work, we do not use any message-passing scheme and we ignore the graph structure.
Instead, we augment the node attribute vectors of a graph with the structural representations generated by VNEstruct\xspace.
Thus, information about the graph structure is implicitly incorporated into the augmented node attributes. Given a matrix of node attributes $X \in \mathbb{R}^{n\times d}$, the approach performs the following steps:
\begin{itemize}
\itemsep0em
\item Computation of the entropy matrix $H \in \mathbb{R}^{n\times R}$, whose rows are the vectors $h_v$.
\item Concatenation of node attribute vectors with structural node representations: $X' = [X || H] \in \mathbb{R}^{n\times (d+R)}$.
\item Aggregation of node vectors $X'$ into:\\ $H_G = \psi( \sum_{v \in V_G} \phi(X'_v)) $, where $\phi$ and $\psi$ are MLPs.
\end{itemize}
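The three steps above can be sketched as follows. This is an illustrative pure-Python sketch of our own (function name included), with node vectors as plain lists and $\phi,\psi$ defaulting to the identity; in the paper they are MLPs.

```python
def graph_representation(X, H, phi=lambda z: z, psi=lambda z: z):
    """DeepSets-style readout: concatenate each node's attribute
    vector X[v] with its entropy vector H[v], apply phi per node,
    sum-pool over nodes, then apply psi to the pooled vector."""
    pooled = None
    for x_v, h_v in zip(X, H):
        z = phi(x_v + h_v)  # '+' on lists concatenates: [X || H]
        pooled = z if pooled is None else [a + b for a, b in zip(pooled, z)]
    return psi(pooled)
```

For two nodes with attributes $[1.0],[2.0]$ and entropy vectors $[0.5],[0.25]$, the pooled representation is the coordinate-wise sum $[3.0,\,0.75]$ of the concatenated vectors.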
This approach is on par with recent studies that propose to augment the node attributes with structural characteristics to avoid performing message-passing steps~\cite{DBLP:journals/corr/abs-1905-04579}.
In comparison to a GNN, this procedure reduces the computational complexity of the training procedure since each graph is represented as a set of node representations.
\section{Introduction}
The amount of data that can be represented as graphs has increased significantly in recent years. Graph representations are ubiquitous in several fields such as in biology, chemistry~\cite{pmlr-v70-gilmer17a}, and social networks~\cite{hamilton}.
Many applications require performing machine learning tasks on graph-structured data, such as graph classification~\cite{xu2018powerful}, semi-supervised node classification~\cite{Kipf:2016tc} and link prediction~\cite{kipf2016variational}.
The past few years have witnessed great activity in the field of learning on graphs. Graph Neural Networks (GNNs) have emerged as a successful approach on learning node-level and graph-level representations.
Recently, the interest in this field has focused on the expressiveness of GNNs~\cite{xu2018powerful,clip} and how these models can be deep enough to extract long-range information from distant nodes in the graph~\cite{deeperinsight,pairnorm,khop}. So far, most of the models are designed so that they preserve the proximity between nodes, i.e., nodes that are close to each other in the graph obtain similar representations~\cite{hamilton, perozzi2014deepwalk}. However, some tasks require assigning similar representations to nodes that can be distant in the graph, but structurally equivalent. For example, in chemistry, properties of a molecule often depend on the interaction of the atoms at its opposite sides and their neighborhood topology~\cite{deepatoms}.
These tasks require \emph{structural representations}, i.e. embeddings that can identify structural properties of a node's neighborhood. There is a growing literature that addresses this problem through different approaches. RolX~\cite{rolx} extracts features for each node and performs non-negative matrix factorization to automatically discover node roles. Struc2vec~\cite{DBLP:journals/corr/FigueiredoRS17} performs random walks on a constructed multi-layer graph to learn structural representations. GraphWave~\cite{donat} and DRNE~\cite{drne} employ diffusion wavelets and LSTM aggregation operators, respectively, to generate structural node embeddings.
However, most of these approaches suffer from high time or space complexity.
In this paper, we propose a novel and simple structural node representation algorithm, VNEstruct\xspace, that capitalizes on information-theoretic tools. The algorithm employs the Von Neumann entropy to construct node representations related to the structural identity of the neighborhood of each node. These representations capture the structural symmetries of the neighborhoods of increasing radius of each node. We show empirically the ability of VNEstruct\xspace to identify structural roles and its robustness to graph perturbations through a node classification and node clustering study on highly symmetrical synthetic graphs.
Moreover, we introduce a method of combining the generated representations by VNEstruct\xspace with the node attributes of a graph, in order to avoid the incorporation of the graph topology in the optimization, contrary to the workflow of a GNN. Evaluated on real-world graph classification tasks, VNEstruct\xspace achieves state-of-the-art performance, while maintaining a high efficiency compared to standard GNN models.
\section{Experiments}
Next, we empirically show the robustness that VNEstruct\xspace exhibits to graph perturbations in subsection 3.1 and we evaluate its graph classification performance in subsection 3.2.
\subsection{Structural Role Identification}
In order to evaluate the robustness of the structural representations generated by our method, we measure its performance on perturbed synthetic datasets, which were introduced in~\cite{donat, DBLP:journals/corr/FigueiredoRS17}.
We perform both classification and clustering tasks with the same experimental setup as in~\cite{donat}.\\
\textbf{Dataset setup.} The generated synthetic datasets are identical to those used in~\cite{donat}. They consist of basic symmetrical shapes, as shown in Table~\ref{tab:shapes}, that are regularly placed along a cycle of length $30$. The \textit{basic} setups use 10 instances of only one of the shapes of Table~\ref{tab:shapes}, while the \textit{varied} setups use 10 instances of each shape, randomly placed along the cycle.
The perturbed instances are formed by randomly rewiring edges.
The colors in the shapes indicate the different classes.\\
\textbf{Evaluation.} For the classification task, we measure the \textit{accuracy} and the \textit{F1-score}.
For the clustering task, we report the $3$ evaluation metrics, that were also calculated in~\cite{donat}: the \textit{Homogeneity} evaluates the conditional entropy of the structural roles in the produced clustering result, the \textit{Completeness} evaluates how many nodes with equivalent structural roles are assigned to the same cluster and the \textit{Silhouette} measures the intra-cluster distance vs. the inter-cluster distance.
As Table~\ref{tab:shapes} shows, on the basic and varied configurations VNEstruct\xspace outperforms the competitors in the node clustering evaluation and achieves performance comparable to RolX in node classification. On the perturbed configurations, VNEstruct\xspace exhibits stronger performance than its competitors. The results in Table~\ref{tab:shapes} motivate a comparison of VNEstruct\xspace, RolX, and GraphWave in noisy scenarios. This comparison is provided in \Figref{fig:performances}, where we report the performance with respect to the number of rewired edges (from $0$ to $20$). We see that VNEstruct\xspace is more robust than GraphWave and RolX in the presence of noise.
\begin{figure}[t]
\centering
\includegraphics[width=0.3\textwidth]{figures/performances_together2.jpeg}
\caption{Classification and clustering performance of VNEstruct\xspace and the baselines with respect to noise.}
\label{fig:performances}
\end{figure}
\subsection{Graph Classification}
Next, we evaluate VNEstruct\xspace and the baselines in the task of graph classification. We compare our proposed algorithm against well-established message-passing algorithms for learning graph representations.
Note that in contrast to most of the baselines, we pre-compute the entropy-based structural representations, and then we represent each graph as a set of vectors that encode structural characteristics. We used 4 common graph classification datasets (3 from bioinformatics: MUTAG, PROTEINS, PTC-MR and 1 from social networks: IMDB-BINARY~\cite{xu2018powerful}).
\textbf{Baselines.} The goal of the comparison is to show that by decomposing the graph structure and the attribute space, we can achieve comparable results to the state-of-the-art algorithms. Thus, we use as baselines graph neural network variants and specifically: DGCNN~\cite{Zhang2018AnED}, Capsule GNN~\cite{Xinyi2019CapsuleGN}, GIN~\cite{xu2018powerful}, GCN~\cite{Kipf:2016tc}, GAT~\cite{gat}. Moreover, GFN~\cite{DBLP:journals/corr/abs-1905-04579} augments the attributes with structural features and ignores the graph structure during the learning procedure.
\textbf{Model setup.}
For the baselines, we followed the same experimental setup, as described in~\cite{DBLP:journals/corr/abs-1905-04579} and, thus, we report the achieved accuracies. For GAT, we used a summation operator as an aggregator of the node vectors into a graph-level representation. Regarding VNEstruct\xspace, we performed 10-fold cross-validation with the Adam optimizer and a 0.3 learning rate decay every 50 epochs. In all experiments, we set the number of epochs to 300. We choose the radius of the ego-networks from $r \in \{1,2,3,4\}$ and the hidden dimensionality of the MLPs from $d \in \{8,16,32\}$.
Table~\ref{tab::graphclass} illustrates the average classification accuracies of the proposed approach and the baselines on the $4$ graph classification datasets.
Interestingly, the proposed approach achieves accuracies comparable to those of the state-of-the-art message-passing models. VNEstruct\xspace outperformed all the baselines on $3$ out of $4$ datasets, while it achieved the second-best accuracy on the remaining dataset (i.e., PROTEINS).
\begin{figure}[h!]
\centering
\includegraphics[width=1.\columnwidth, height=4.2cm]{figures/speed2.jpeg}
\caption{Training time per epoch (in sec) of VNEstruct and competitors for the graph classification tasks. }
\label{fig:speed}
\end{figure}
\begin{table}[t]
\centering
\resizebox{0.47\textwidth}{!}{
\begin{tabular}{|c|cccc|}
\hline
Method & MUTAG & IMDB-BINARY & PTC-MR & PROTEINS \\ \hline
DGCNN & 85.83 $\pm$ 1.66 & 70.03 $\pm$ 0.86 & 58.62$\pm$2.34 & 75.54 $\pm$ 0.94 \\
CapsGNN & 86.67 $\pm$ 6.88 & 73.10 $\pm$ 4.83 & - & 76.28 $\pm$ 3.63 \\
GAT & 88.90 $\pm$ 3.21 & 75.39 $\pm$ 1.30 & 63.87 $\pm$ 5.31 & 76.1 $\pm$ 2.89 \\
GIN & 89.40 $\pm$ 5.60 & 75.10 $\pm $ 5.10 & 64.6 $\pm$ 7.03 & 76.20 $\pm$ 2.60 \\
GCN & 87.20 $\pm$ 5.11 & 73.30 $\pm$ 5.29 & 64.20 $\pm$ 4.30 & 75.65 $\pm$ 3.24 \\
\hline
GFN & 90.84 $\pm$ 7.22 & 73.00 $\pm$ 4.29 & - & \textbf{77.44 $\pm$ 3.77}\\
VNEstruct\xspace & \textbf{91.08 $\pm$ 5.65 } & \textbf{75.40 $\pm$ 3.33} & \textbf{65.39 $\pm$ 8.57} & 77.41 $\pm$ 3.47 \\
\hline
\end{tabular}
}
\caption{Average classification accuracy ($\pm$ standard deviation) of the baselines and the proposed VNEstruct\xspace.}
\label{tab::graphclass}
\end{table}
Figure~\ref{fig:speed} illustrates the average training time per epoch of VNEstruct\xspace and the baselines that apply message-passing schemes.
The proposed approach is generally more efficient than the baselines.
Specifically, its training time per epoch is on average $0.31$ times that of GIN and $0.60$ times that of GCN.
This improvement in efficiency is mainly because the graph structural features are computed in a preprocessing step and are then concatenated with the node attributes. Moreover, the computational cost of the preprocessing step is negligible, as it is performed only once in the experimental setup. Furthermore, due to the low dimensionality of the generated embeddings ($R \leq 4$), our method has no significant memory requirements.
\section{Conclusion}
In this paper, we proposed VNEstruct\xspace to generate structural node representations, based on the entropies of ego-networks. We showed empirically the robustness of VNEstruct\xspace under the presence of noise in the graph. We, also, proposed an approach for performing graph classification, that combines the representations of VNEstruct\xspace with the nodes' attributes, avoiding the computational cost of message passing schemes. The proposed approach exhibited a strong performance in real-world datasets, maintaining high efficiency.
\section{Structural Representations based on Von Neumann Entropy}
Next, we present VNEstruct\xspace for generating structural node representations, employing the Von Neumann entropy, a model-agnostic measure that quantifies the structural complexity of a graph. The Von Neumann graph entropy (VNE) has been shown to have a linear correlation with other graph entropy measures~\cite{anand}, and graph entropy methods have recently proven successful for computing graph similarity~\cite{lipan}.
\subsection{Von Neumann Entropy on Graphs}
In quantum mechanics, the state of a quantum mechanical system is described by a \emph{density} matrix $\rho$, i.e., a positive semidefinite, hermitian matrix with unit trace~\cite{Braunstein2006}. The Von Neumann entropy of the quantum system is defined as:
\setlength\abovedisplayskip{1pt}
\setlength\belowdisplayskip{1.3pt}
\begin{equation}
\label{eq::1}
H(\rho) = -\Tr(\rho\log\rho) = -\sum_{i=1}^n \lambda_i\log\lambda_i,
\end{equation}
where $\Tr(\cdot)$ is the trace of a matrix, and $\lambda_i$'s are the eigenvalues of $\rho$.
To connect this notion to graphs, given a graph $G=(V,E)$ with Laplacian $L_G= D-A$, the VNE of $G$, denoted by $H(G)$, is defined as in Equation~\ref{eq::1} by replacing $\rho$ with $\rho(L_G) = \frac{L_G}{\text{Tr}(L_G)} = \frac{L_G}{2|E|}$~\cite{Braunstein2006}.
Note that $\lambda_i = \frac{\mu_i}{\text{Tr}(L_G)}$, where $\lambda_i$ and $\mu_i$ are the $i$-th eigenvalues of $\rho(L_G)$ and $L_G$, respectively. Therefore, $0 \leq \lambda_i \leq 1$ holds for all $i \in \{ 1,\ldots,n \}$~\cite{Passerini2009QuantifyingCI}.
This indicates that \Eqref{eq::1} is equivalent to the Shannon entropy of the probability distribution $\{\lambda_i\}_{i=1}^n$.
Hence, $H(G)$ serves as a skewness metric of the eigenvalue distribution and it has been shown that it provides information on the structural complexity of a graph~\cite{Passerini2009QuantifyingCI}.
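For concreteness, a direct (non-approximate) evaluation of \Eqref{eq::1} on a graph Laplacian might look as follows; this is a sketch assuming an unweighted adjacency matrix, with the convention $0\log 0 = 0$ handled by dropping near-zero eigenvalues.

```python
import numpy as np

def von_neumann_entropy(A):
    """Exact VNE of a graph given its (unweighted) adjacency matrix A:
    H(G) = -sum_i lambda_i log(lambda_i), where lambda_i are the
    eigenvalues of rho(L_G) = L_G / Tr(L_G) and L_G = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    lam = np.linalg.eigvalsh(L / np.trace(L))
    lam = lam[lam > 1e-12]             # 0 * log(0) := 0
    return -np.sum(lam * np.log(lam))

# triangle graph: rho(L_G) has eigenvalues {0, 1/2, 1/2}, so H(G) = ln(2)
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
```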
\textbf{Efficient approximation scheme.}
The computation of VNE requires the eigenvalue decomposition of the density matrix which can be done in $\mathcal{O}(n^3)$ time. Recent works~\cite{fast_incr, choi2018fast} have proposed an efficient approximation of $H(G)$. Starting from \Eqref{eq::1} and following~\cite{minello}, we obtain:
\begin{equation}\label{eq::3}
H(G) \approx \text{Tr}\big(\rho(L_G)(I_n - \rho(L_G))\big) = Q,
\end{equation}
where $I_n$ is the $n \times n$ identity matrix, and
\setlength\abovedisplayskip{1pt}
\setlength\belowdisplayskip{1pt}
\begin{equation}\label{eq::4}
Q = \frac{\text{Tr}(L_G)}{2m} - \frac{\text{Tr}(L_G^2)}{4m^2}
= 1 - \frac{1}{2m} - \frac{1}{4m^2}\sum_{i=1}^n d_i^2\,,
\end{equation}
where $m=|E|$ and $d_i$ is the degree of the $i$-th node. Finally, as~\cite{fast_incr} suggests, we obtain a tighter approximation of $H(G)$:
\setlength\abovedisplayskip{1.3pt}
\setlength\belowdisplayskip{1.3pt}
\begin{equation}\label{eq::5}
\hat{H} = -Q\ln\lambda_{\max}\,,
\end{equation}
where $\lambda_{\max}$ is the largest eigenvalue of $\rho(L_G)$.
It can be shown that for any graph $G$, we have $H(G) \geq \hat{H}(G)$ where the equality holds if and only if $\lambda_{\max} = 1$~\cite{choi2018fast}.
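A sketch of this approximation scheme, assuming an unweighted adjacency matrix: $Q$ follows directly from the degree sequence via \Eqref{eq::4}, and $\hat{H}$ from \Eqref{eq::5}; here $\lambda_{\max}$ is obtained with a dense eigensolver purely for brevity.

```python
import numpy as np

def vne_approx(A):
    """Approximate VNE: Q = 1 - 1/(2m) - sum(d_i^2) / (4 m^2), then the
    tighter estimate H_hat = -Q * ln(lambda_max), where lambda_max is
    the largest eigenvalue of rho(L_G)."""
    d = A.sum(axis=1)
    m = d.sum() / 2.0                                  # m = |E|
    Q = 1.0 - 1.0 / (2 * m) - (d ** 2).sum() / (4 * m ** 2)
    L = np.diag(d) - A
    lam_max = np.linalg.eigvalsh(L / np.trace(L))[-1]
    return -Q * np.log(lam_max)

# triangle graph: Q = 1/2 and lambda_max = 1/2, so H_hat = (ln 2) / 2,
# which indeed lower-bounds the exact value H(G) = ln 2
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
```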
\begin{table*}[t]
\centering
\def1.1{1.1}
\resizebox{\textwidth}{!}{
\begin{tabular}{|lc|l||ccc|cc|} \hline
Configuration & Shapes & Algorithm & Homogeneity & Completeness & Silhouette & Accuracy & F$1$-score \\ \hline
\multirow{7}{*}{Basic / Basic Perturbed} & \multirow{7}{*}{\includegraphics[width=.2\textwidth]{figures/shapes2.pdf}} & DeepWalk & 0.178 / 0.172 & 0.115 / 0.124 & 0.163 / 0.171 & 0.442 / 0.488 & 0.295 / 0.327 \\
& & RolX& 0.983 / 0.764 & 0.976 / 0.458 & 0.846 / 0.429 & \textbf{1.000} / 0.928 & \textbf{1.000} / \textbf{0.886} \\
& & struc2vec& 0.803 / 0.625 & 0.595 / 0.543 & 0.402 / 0.429 & 0.784 / 0.703 & 0.708 / 0.632 \\
& & GraphWave& 0.868 / 0.714 & 0.797 / 0.326 & 0.730 / 0.287 & 0.995 / 0.906 & 0.993 / 0.861 \\
& & VNEstruct& \textbf{0.986} / \textbf{0.882} & \textbf{0.983} / \textbf{0.701} & \textbf{0.891} / \textbf{0.478} & 0.920 / \textbf{0.940} & 0.901 / 0.881 \\ \hline
\multirow{7}{*}{Varied / Varied Perturbed} &
\multirow{7}{*}{\includegraphics[width=.2\textwidth]{figures/shapes5.pdf}} & DeepWalk& 0.327 / 0.300 & 0.220 / 0.231 & 0.216 / 0.221 & 0.329 / 0.313 & 0.139 / 0.128 \\
& & RolX& \textbf{0.984} / 0.682 & 0.939 / 0.239 & 0.748 / 0.062 & \textbf{0.998} / 0.856 & 0.996 / 0.768 \\
& & struc2vec& 0.805 / 0.643 & 0.626 / 0.524 & 0.422 / \textbf{0.433} & 0.738 / 0.573 & 0.592 / 0.412 \\
& & GraphWave& 0.941 / 0.670 & 0.843 / 0.198 & \textbf{0.756} / 0.005 & 0.982 / 0.793 & \textbf{0.965} / 0.682 \\
& & VNEstruct& 0.950 / \textbf{0.722} & \textbf{0.945} / \textbf{0.678} & 0.730 / 0.399 & 0.988 / \textbf{0.899} & 0.95 / \textbf{0.878} \\ \hline
\hline
\end{tabular}
}
\caption{Performance of the baselines and the VNEstruct method for learning structural embeddings averaged over $20$ synthetically generated graphs. Values after the slash correspond to the perturbed graphs.}
\label{tab:shapes}
\end{table*}
\subsection{The VNEstruct\xspace Algorithm}
Based on the VNE and its approximation, we introduce our proposed approach to construct structural representations. The VNEstruct\xspace algorithm extracts ego-networks of increasing radius and computes their VNE.
Then, the representation of a node consists of the Von Neumann entropies of the node's ego-networks.
Therefore, the set of entropies of the ego-networks of a node serves as a ``signature'' of the structural identity of its neighborhood.
Let $R$ be the maximum considered radius. For each $r \in \{1,\ldots,R\}$ and each node $v \in V$, the algorithm extracts the $r$-hop neighborhood $G_v^r = (V',E')$, where $V' = \{u \in V \mid d(u,v) \leq r\}$ and $E' = \{(u,w) \mid u, w \in V',\ (u,w) \in E \}$.
Next, $H(G_v^r)$ of the $r$-hop neighborhood of $v$ is computed using \Eqref{eq::5}.
Finally, the $R$ entropies are arranged into a single vector $h_v \in \mathbb{R}^R$. As shown in Figure~\ref{fig:method}, VNEstruct\xspace identifies structural equivalences of nodes that are distant to each other. Specifically, nodes $u$ and $v$ share structurally identical $1$-hop neighborhoods.
Therefore, the entropies of their $1$-hop neighborhoods are equal to each other.
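The per-node procedure can be sketched as follows. For self-containment, the ego-networks are recovered from BFS distances rather than a graph library, and `entropy` is any VNE routine (exact or approximate); the node-counting "entropy" in the toy usage is only a placeholder.

```python
import numpy as np
from collections import deque

def ego_entropies(A, v, R, entropy):
    """For r = 1..R, extract the r-hop ego-network of node v from the
    adjacency matrix A (via BFS distances) and record its entropy; the
    resulting vector h_v in R^R is the structural representation of v."""
    n = A.shape[0]
    dist = np.full(n, np.inf)
    dist[v] = 0
    queue = deque([v])
    while queue:                          # BFS distances from v
        u = queue.popleft()
        for w in np.nonzero(A[u])[0]:
            if dist[w] == np.inf:
                dist[w] = dist[u] + 1
                queue.append(w)
    h_v = []
    for r in range(1, R + 1):
        keep = np.nonzero(dist <= r)[0]   # V' = {u : d(u, v) <= r}
        sub = A[np.ix_(keep, keep)]       # induced subgraph G_v^r
        h_v.append(entropy(sub))
    return np.array(h_v)

# toy check on a path 0-1-2-3-4 with a placeholder "entropy" that just
# counts nodes: the 1- and 2-hop ego-networks of node 2 have 3 and 5 nodes
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1
sizes = ego_entropies(A, 2, 2, lambda sub: sub.shape[0])
```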
\textbf{Computational Complexity.}
The algorithm consists of: ($1$) the extraction of the ego-networks and ($2$) the computation of VNEs per subgraph.
The first step is linear in the number of edges of the node's neighborhood.
In the worst case, the complexity is $\mathcal{O}(nm)$, but for sparse graphs the complexity is constant in practice.
For the second step, following the approximation scheme in subsection 2.1,
$\lambda_{\max}$ is computed through the power iteration method~\cite{power_iteration}, which requires $\mathcal{O}(n+m)$ operations, as the Laplacian matrix has $n+m$ nonzero entries.
Hence, the whole method exhibits linear complexity $\mathcal{O}(n+m)$, while for very sparse graphs, it becomes $\mathcal{O}(n)$.
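A minimal power-iteration sketch for $\lambda_{\max}$: with a sparse matrix-vector product, each iteration costs $\mathcal{O}(n+m)$, matching the stated complexity (dense NumPy is used here only for brevity, and the random start vector avoids starting inside the null space of $\rho$).

```python
import numpy as np

def lambda_max_power(rho, iters=200, seed=0):
    """Largest eigenvalue of the (symmetric PSD) density matrix rho via
    power iteration; each step is one matrix-vector product."""
    x = np.random.default_rng(seed).standard_normal(rho.shape[0])
    for _ in range(iters):
        x = rho @ x
        x /= np.linalg.norm(x)
    return x @ rho @ x                    # Rayleigh quotient

# density matrix of the triangle graph: largest eigenvalue is 1/2
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
rho = L / np.trace(L)
```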
\textbf{Robustness over ``small'' perturbations.} We next show that, utilizing the VNE, we can acquire structural representations that are robust to perturbations of the graph structure. Clearly, if two graphs are isomorphic, their entropies are equal. It is important, though, for structurally similar graphs to have similar entropies, too. So, let $\rho, \rho' \in \mathbb{R}^{n \times n}$ be the density matrices of two graph Laplacians $L_G,L_{G'}$, as described above.
Let also $\tilde{\rho} = P \rho' P^\top$, where $P$ is the $n \times n$ permutation matrix $\argmin_{P} || \rho - P \rho' P^\top ||_F$, and write $\rho = \tilde{\rho} + \epsilon$ with $\epsilon$ an $n \times n$ symmetric matrix.
If $G$ and $G'$ are nearly isomorphic, then the Frobenius norm of $\epsilon$ is small. By applying the Fannes-Audenaert inequality~\cite{Audenaert_2007}, we have that:
\begin{equation*}
|H(\tilde{\rho}) - H(\rho)| \leq \frac{1}{2}T \ln(n-1) + S(T),
\end{equation*} where $T = ||\tilde{\rho}-\rho||_1$ is the trace distance between $\rho,\tilde{\rho}$ and $S(T) = -T\log T- (1-T)\log(1-T)$.
However, $||\tilde{\rho}-\rho||_1 = \sum_i|\lambda_i^{\tilde{\rho}-\rho}| \leq n||\tilde{\rho} - \rho ||_{op}$, where $|| \cdot ||_{op}$ is the operator norm.
Therefore, $|H(\tilde{\rho})- H(\rho)| \leq \frac{n}{2}\ln(n-1)||\epsilon||_{op} + S(T)$, which yields a size-dependent upper bound on the entropy difference between structurally similar graphs.
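As a numeric sanity check (not part of the method itself), the bound can be verified directly on a graph and a lightly perturbed copy; the helper below assumes unweighted adjacency matrices and uses the full trace norm $T=||\tilde{\rho}-\rho||_1$ as in the text, which stays below $1$ for small perturbations.

```python
import numpy as np

def vne(rho):
    """Exact VNE of a density matrix."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return -np.sum(lam * np.log(lam))

def bound_holds(A, A_pert):
    """Check |H(rho~) - H(rho)| <= (T/2) ln(n-1) + S(T) for the density
    matrices of two graph Laplacians, where T is the trace distance and
    S the binary entropy."""
    def density(A):
        L = np.diag(A.sum(axis=1)) - A
        return L / np.trace(L)
    r1, r2 = density(A), density(A_pert)
    T = np.abs(np.linalg.eigvalsh(r2 - r1)).sum()   # ||r2 - r1||_1
    S = 0.0 if T <= 0 or T >= 1 else -T * np.log(T) - (1 - T) * np.log(1 - T)
    n = A.shape[0]
    return abs(vne(r2) - vne(r1)) <= 0.5 * T * np.log(n - 1) + S + 1e-12

# 6-cycle vs. the same cycle with one added chord (0, 3)
C6 = np.zeros((6, 6))
for i in range(6):
    C6[i, (i + 1) % 6] = C6[(i + 1) % 6, i] = 1
C6_pert = C6.copy()
C6_pert[0, 3] = C6_pert[3, 0] = 1
```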
\section{Introduction}
Absorption imaging is a standard technique for observations in quantum gas experiments; it relies on resonant atom-light interaction in ideally closed optical cycling schemes.\cite{ketterleMakingProbingUnderstanding1999}
At very high magnetic fields, in the Paschen-Back regime, such closed cycles exist for every ground state.
At moderate fields, where only the excited state is well in the Paschen-Back regime, this is only possible for the atom's stretched states with maximal or minimal magnetic quantum number. Efficient optical pumping schemes to reach these stretched states \cite{berningerUniversalThreeFourbody2011, berningerPaper}
are not available for arbitrary initial states.
However, when using Feshbach resonances to tune the atomic interaction strength,\cite{chinFeshbachResonancesUltracold2010} the choice of atomic states is fixed.
Recently, a scheme for fluorescence imaging has been developed that improves the single-atom detection in these states.\cite{BergschneiderImaging} It makes use of two atomic transitions in order to obtain an approximately closed four-level optical cycle. Here, we adapt this scheme to absorption imaging of dense atomic clouds.
\begin{figure}
\centering
\includegraphics{figure1.pdf}
\caption{\textbf{Absorption imaging of a BEC of $^{39}$K.}
The number of scattered photons $N_\mathrm{scatt}$ levels off within $\sim 5\,\mu \mathrm{s}$ when imaging the atoms with a single laser frequency ($\sigma^-$, red points). By adding a second frequency ($\sigma^+$), the signal can be enhanced drastically (blue points). The difference is clearly visible in the absorption images of the atom cloud after $20\,\mu\mathrm{s}$ (same color scale used for both images; in the dark blue regions no photons are scattered). The upper inset shows the energy eigenstates of the ground state $\mathrm{S}_{1/2}$ and the excited state $\mathrm{P}_{3/2}$ hyperfine manifold as a function of the magnetic field $B$. The two imaging transitions are indicated with arrows. The lower inset shows a simplified schematic of the experimental setup. From left to right: laser light impinges on the atomic cloud, which is imaged via an objective and a secondary lens onto a CCD camera.
}
\label{fig1}
\end{figure}
\section{Experimental setting}
We exemplify the technique with a Bose-Einstein condensate (BEC) of $^{39}$K in the state which corresponds to $\lvert F,m_F\rangle =\lvert 1,-1\rangle$ at low magnetic fields. For our experiments we work at $550\,\mathrm{G}$, close to a broad Feshbach resonance.\cite{derricoFeshbachResonancesUltracold2007}
Figure~\ref{fig1} compares the absorption signal obtained with the improved absorption scheme (blue points) to the signal using only one laser frequency (red points). The latter results in vanishing scattering after $\sim 5\,\mu\mathrm{s}$. With the addition of the second frequency, a drastic enhancement is achieved. In the experimental setup, each laser frequency is generated by a dedicated external cavity diode laser that is offset-locked\cite{offsetlock} to the cooling laser stabilized on the D2 line of $^{39}$K. After double-pass AOM paths for pulsing, both laser frequencies are coupled into the same single-mode optical fiber with orthogonal polarizations and pass through the same quarter-wave plate after the fiber. A CCD camera detects the total absorption signal (see Fig.~\ref{fig1}, lower inset).
The number of scattered photons is estimated by
$N_{\mathrm{scatt}} = - \mathcal{G}\,(C_{\mathrm{f}}-C_{\mathrm{i}})$,
where $C_{\mathrm{f}}$ and $C_{\mathrm{i}}$ are the number of integrated counts on the CCD camera with and without atoms, respectively. The factor $\mathcal{G}$ includes the camera gain as well as a correction factor for the solid angle of the objective, reflection loss along the imaging beam path, and the quantum efficiency of the camera.
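As a minimal sketch of this estimate (the count values and gain below are hypothetical, not from the experiment), $N_{\mathrm{scatt}} = -\mathcal{G}\,(C_{\mathrm{f}}-C_{\mathrm{i}})$ can be written as:

```python
def scattered_photons(counts_with_atoms, counts_without_atoms, gain):
    """N_scatt = -G * (C_f - C_i): atoms absorb photons, so the frame
    with atoms (C_f) has fewer integrated counts than the reference
    frame (C_i).  `gain` lumps the camera gain, the solid angle of the
    objective, reflection losses, and the quantum efficiency, as
    described in the text."""
    return -gain * (counts_with_atoms - counts_without_atoms)

# Hypothetical numbers: 1.0e6 reference counts, 8.0e5 counts with atoms.
n_scatt = scattered_photons(8.0e5, 1.0e6, gain=2.5)
```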
\begin{figure}
\centering
\includegraphics{figure2.pdf}
\caption{\textbf{Four-level scheme for imaging.}
The imaging transition $\sigma^-$ transfers atoms from the initial state $\lvert g_-\rangle = \sqrt{p}\,\lvert -1/2, -1/2\rangle + \sqrt{1-p}\,\lvert 1/2, -3/2\rangle$ (black dot) to the excited state $\lvert e_-\rangle\simeq\lvert -3/2,-1/2\rangle$. Here, the states $\lvert m_J,m_I\rangle$ are the basis states of the electron's total angular momentum $\mathbf{J}$ and the nuclear spin $\mathbf{I}$.
$p = 0.98$ at a magnetic field of $550\,\mathrm{G}$. Most of the atoms decay back to the initial state (dashed arrow), but a small leakage populates the state $\lvert g_+\rangle = \sqrt{p}\,\lvert 1/2, -3/2\rangle + \sqrt{1-p}\,\lvert -1/2, -1/2\rangle$ (dotted arrow). A second laser frequency drives the transition $\sigma^+$, which couples the state $\lvert g_+\rangle$ to the excited state $\lvert e_+\rangle\simeq\lvert 3/2, -3/2\rangle$ from where the atoms can decay back into $\lvert g_+\rangle$ and $\lvert g_-\rangle$ only. This results in a closed optical cycle when the excited states are sufficiently pure in quantum numbers $m_J,m_I$.
}
\label{fig2}
\end{figure}
\section{Principle of the method}
The upper inset of Fig.~1 shows the Breit-Rabi diagram of the $\mathrm{S}_{1/2}$ ground state and $\mathrm{P}_{3/2}$ excited state manifold, with the employed transitions depicted as arrows.
The relevant four-level scheme is depicted in Fig.~2. The atoms are initially prepared in $\lvert g_-\rangle \sim \lvert m_J,m_I\rangle = \lvert -1/2,-1/2\rangle$ with $m_J$ and $m_I$ denoting the magnetic quantum numbers of the electron's total angular momentum $\mathbf{J}$ and the nuclear spin $\mathbf{I}$, respectively.
Imaging at a single frequency involves a $\sigma^-$ transition to the state $\lvert e_-\rangle\simeq\lvert -3/2,-1/2\rangle$ in the $\mathrm{P}_{3/2}$ excited state manifold. The nearby states ($<15\,\mathrm{MHz}$) with the same $m_J$ are not addressed, since the nuclear spin quantum number $m_I$ is not changed by electric dipole transitions and the atomic eigenstates are pure up to $10^{-4}$ in the $\lvert m_J, m_I\rangle$ states. The excited state $\lvert e_-\rangle$ has a small leakage into a dark state $\lvert g_+\rangle\sim\lvert 1/2, -3/2\rangle$, which causes the quick saturation of the signal in Fig.~1.
Specifically, the two ground states read
\begin{align}
\lvert g_-\rangle = \sqrt{p}\,\lvert -1/2, -1/2\rangle + \sqrt{1-p}\,\lvert 1/2, -3/2\rangle , \nonumber \\
\lvert g_+\rangle = \sqrt{p}\,\lvert 1/2, -3/2\rangle + \sqrt{1-p}\,\lvert -1/2, -1/2\rangle ,\label{eq:groundstate}
\end{align}
with $p \simeq 0.98$ at a field of $550\,\mathrm{G}$. As both ground states have an admixture of $\lvert -1/2,-1/2\rangle$, the excited state $\lvert e_-\rangle\simeq\lvert -3/2,-1/2\rangle$ can decay into both. The $2\,\%$ admixture is consistent with the observed time scale of $2.2\,\mu\mathrm{s}$, after which half of the atoms are transferred into the dark state.
To enhance the signal we address the state $\lvert g_+\rangle$ with the second laser frequency. This $\sigma^+$ light couples $\lvert g_+\rangle$ to the excited state $\lvert e_+\rangle\simeq\lvert 3/2, -3/2\rangle$. It closes the optical cycle to good approximation and results in the effective four-level system shown in Fig.~2. During a typical $10\,\mu\mathrm{s}$ imaging pulse we expect to lose only $2\,\%$ of the atoms into the ground states $\sim\lvert -1/2,1/2\rangle$ and $\sim\lvert 1/2,-1/2\rangle$, which are not addressed by the imaging light. This results from the limited purity of the excited states in the $ \lvert m_J,m_I\rangle$ basis. Off-resonant coupling to other excited states is negligible for typical imaging intensities since the closest transitions are detuned by at least $350\,\mathrm{MHz}$.
\begin{figure}
\centering
\includegraphics{figure3.pdf}
\caption{\textbf{Optimization of the intensity ratio.}
The number of scattered photons is measured at different ratios $r=I_-/I_\mathrm{tot}$ and different total intensities $I_\mathrm{tot}=I_-+I_+$. The imaging pulse length is $10\,\mu\mathrm{s}$, and the data points correspond to total intensities of
$23\,\mathrm{mW/cm^2}$ (triangle),
$42\,\mathrm{mW/cm^2}$ (diamond),
$60\,\mathrm{mW/cm^2}$ (square),
$79\,\mathrm{mW/cm^2}$ (circle).
For the highest intensities, the largest signal is found at $r\simeq0.5$.
For decreasing light intensities this optimum slightly shifts to larger ratios as it takes longer to reach the steady state. For the highest intensities we can compare the data to numerical solutions of the optical Bloch equations for the four-level system, scaled by a global factor (solid curves).
}
\label{fig3}
\end{figure}
\section{Optimal intensity ratio}
We optimize the absorption signal by varying the ratio $r=I_-/I_\mathrm{tot}$ between the intensities $I_-$ and $I_+$ on the two imaging transitions $\sigma^-$ and $\sigma^+$, respectively. Here, the total imaging beam intensity $I_\mathrm{tot} = I_- + I_+$ is kept constant. Figure~\ref{fig3} shows the number of scattered photons $N_\mathrm{scatt}$ for different configurations and compares the results to numerical solutions of the optical Bloch equations for the four-level system (scaled by a constant factor). In the case without $\sigma^+$ light ($r=1$) the total signal is limited by the decay into the dark state. Imaging without $\sigma^-$ light ($r=0$) results in no signal, as the initial state of the atoms is not addressed by this light. For the highest imaging intensities the maximum number of scattered photons is obtained at $r\simeq 0.5$, as expected from the steady state solution.
For smaller intensities, the optimum is at higher ratios $r$. This results from the initial pumping dynamics starting in $\lvert g_-\rangle$ that are still relevant for the short imaging duration of $10\,\mu\mathrm{s}$.
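The qualitative behavior described above can be reproduced with a much simpler resonant rate-equation model (populations only, no coherences) instead of the full optical Bloch equations used for the curves in Fig.~3; the linewidth and the saturation parametrization below are illustrative assumptions, not the paper's values:

```python
import math

def scattered_photons_per_atom(r, s_tot, t_pulse, p=0.98,
                               gamma=2 * math.pi * 6.0e6, steps=20000):
    """Euler integration of simplified resonant rate equations for the
    four-level system {g-, g+, e-, e+}: sigma- light (weight r) couples
    g- <-> e-, sigma+ light (weight 1-r) couples g+ <-> e+, and each
    excited state decays with branching ratio p into 'its' ground state
    and 1-p into the other one.  Returns scattered photons per atom."""
    g_m, g_p, e_m, e_p = 1.0, 0.0, 0.0, 0.0   # all atoms start in |g->
    pump_m = gamma / 2 * r * s_tot            # stimulated rate on sigma-
    pump_p = gamma / 2 * (1 - r) * s_tot      # stimulated rate on sigma+
    dt = t_pulse / steps
    n_scatt = 0.0
    for _ in range(steps):
        n_scatt += gamma * (e_m + e_p) * dt   # photons from spontaneous decay
        de_m = pump_m * (g_m - e_m) - gamma * e_m
        de_p = pump_p * (g_p - e_p) - gamma * e_p
        dg_m = -pump_m * (g_m - e_m) + gamma * (p * e_m + (1 - p) * e_p)
        dg_p = -pump_p * (g_p - e_p) + gamma * (p * e_p + (1 - p) * e_m)
        g_m += dg_m * dt
        g_p += dg_p * dt
        e_m += de_m * dt
        e_p += de_p * dt
    return n_scatt
```

For a $10\,\mu\mathrm{s}$ pulse this sketch shows the main features of Fig.~3: no signal at $r=0$, a dark-state-limited signal at $r=1$, and a substantially larger signal near $r\simeq0.5$.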
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figure4.pdf}
\caption{\textbf{Calibration of the imaging system.}
For each ratio $r=I_-/I_\mathrm{tot}$ the effective saturation intensity $I_{\mathrm{sat}}^{\mathrm{eff}} = \alpha I_{\mathrm{sat}}$ is chosen such that the resulting atomic column density $n_c$ is invariant under changes of the imaging intensity $I_\mathrm{tot}$. The inset shows this procedure for $r=0.5$. The theoretical predictions obtained from the steady state solution and the numerical simulation of the dynamics are shown by the dashed and the solid curves, respectively. The experimental values are scaled by the mean of the three points around $r=0.5$, and the theoretical curves by their respective values at $r=0.5$. The error bars are estimated by bootstrap resampling.
}
\end{figure}
\section{Calibration}
To obtain an accurate estimate of the atomic density, we calibrate the imaging system following the method presented in Reinaudi et~al.\cite{reinaudiStrongSaturationAbsorption2007} Each atom in the cloud is described as an effective two-level system, which includes saturation effects. One solves the resulting Beer-Lambert-type differential equation for resonant light and integrates along the direction of the imaging beam. This leads to the atomic column density
\begin{equation}
n_\mathrm{c}=\frac{1}{\sigma_\mathrm{eff}}\left[\ln \left(\frac{I_{\mathrm{i}}}{I_{\mathrm{f}}}\right)+\frac{I_{\mathrm{i}}-I_{\mathrm{f}}}{I_{\mathrm{sat}}^{\mathrm{eff}}}\right].
\end{equation}
Here, the final intensity $I_{\mathrm{f}}$ and the initial intensity $I_{\mathrm{i}}$ are the total intensities measured via the signal on the CCD camera with and without the presence of atoms, respectively.
$\sigma_\mathrm{eff}$ is the effective scattering cross-section and $I_{\mathrm{sat}}^{\mathrm{eff}}=\alpha I_{\mathrm{sat}}$ is the effective saturation intensity. The deviation from the bare saturation intensity $I_{\mathrm{sat}}$
of a single closed two-level optical cycle
captures effects of polarization, detuning fluctuations of the laser from atomic resonance, and optical pumping effects. We estimate the effective saturation intensity by taking absorption images for a constant atom number with different total imaging intensities.
$I_{\mathrm{sat}}^{\mathrm{eff}}$ is optimized such that the column density $n_\mathrm{c}$ is invariant under changes in intensity. As shown in the inset of Fig.~4, we find an optimum for $I_\mathrm{sat}^\mathrm{eff} = (18\pm 4) I_\mathrm{sat}$, where $I_\mathrm{sat}$ is the saturation intensity of a single transition.
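The column density of Eq.~(1) is straightforward to evaluate once $\sigma_\mathrm{eff}$ and $\alpha$ are known; a minimal sketch (intensities in arbitrary but consistent units, $\sigma_\mathrm{eff}$ in units of area, so that $n_\mathrm{c}$ comes out per unit area):

```python
import math

def column_density(i_initial, i_final, sigma_eff, alpha, i_sat):
    """Atomic column density from the Beer-Lambert-type result:
    n_c = (1/sigma_eff) * [ ln(I_i/I_f) + (I_i - I_f)/(alpha*I_sat) ],
    where I_i / I_f are the intensities without / with atoms and
    alpha*I_sat is the effective saturation intensity."""
    i_sat_eff = alpha * i_sat
    return (math.log(i_initial / i_final)
            + (i_initial - i_final) / i_sat_eff) / sigma_eff
```

A larger calibration factor $\alpha$ suppresses the linear (saturation) term, so for the same measured intensities the inferred column density is smaller.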
With the value of $I_\mathrm{sat}^\mathrm{eff}$ at hand, an absolute atom number can be calibrated by comparing atomic density distributions with theoretical predictions \cite{TwoDYefsah} or by detecting atomic shot noise.\cite{ReadoutMussel}
To predict a value for $I_{\mathrm{sat}}^{\mathrm{eff}}$ we scale the theoretical results for the scattering rate versus intensity of the four-level system to the expectation for an effective two-level system. For the steady state the analytic solution reads
\begin{equation}
I_{\mathrm{sat}}^{\mathrm{eff}} (r) = \frac{I_\mathrm{sat}}{2r(1-r)}\;.
\end{equation}
We use the fact that the coupled four-level system can be described by two two-level systems with equal $I_\mathrm{sat}$ which are only coupled to each other via the incoherent spontaneous decay of their excited states. Thus, no coherence is built up and the two subsystems can be described as being independent. In the steady state this leads to an imbalance in population of the two systems for $r\neq0.5$. In the case of $r=0.5$ the two populations are equal and the effective saturation intensity is twice the value of the single two-level system. We attribute the remaining deviation in the absolute value between experimental and theoretical $I_{\mathrm{sat}}^{\mathrm{eff}}$ mainly to instabilities of the imaging laser frequencies.
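The analytic steady-state result of Eq.~(2) is easy to inspect numerically; the helper below returns $I_{\mathrm{sat}}^{\mathrm{eff}}$ in units of $I_\mathrm{sat}$:

```python
def i_sat_eff_over_i_sat(r):
    """Steady-state effective saturation intensity in units of I_sat
    for intensity ratio r = I_- / I_tot:  I_sat_eff / I_sat = 1 / (2 r (1 - r))."""
    return 1.0 / (2.0 * r * (1.0 - r))
```

The function has its minimum value of $2$ at $r=0.5$, consistent with the two balanced two-level subsystems described above, is symmetric under $r \to 1-r$, and diverges as $r$ approaches $0$ or $1$.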
In Fig.~4 we show experimental results for the dependence of $I_{\mathrm{sat}}^{\mathrm{eff}}$ on the ratio $r$ and compare them to the analytic solution (dashed curve). At the largest and smallest ratios, deviations between experimental and analytic behavior arise due to the initial population dynamics of the four-level system. These can be captured by a numerical simulation, as shown by the solid curve in Fig.~4. As before, the numerical results for the scattering rate versus total intensity are scaled to those of an effective two-level system. From $r\sim0.4$ to $0.6$ the effective saturation intensity varies only slightly, making the calibration of the column density robust against small changes of the imaging intensities.
\section{General perspectives}
Finally, we note that the imaging procedure can be generalized to all alkali-like atoms. The ground states can always be written as a superposition of at most two $\lvert m_J,m_I\rangle$ states. This is a consequence of the fact that the spin operator $F_z$ commutes with the Hamiltonian
\begin{equation}
H = a_{hf}/\hbar^2\ \mathbf{J}\cdot\mathbf{I} + \mu_B B_z/\hbar\ (g_J J_z + g_I I_z)
\end{equation}
of the ground state hyperfine manifolds. Here, $a_{hf}$ is the magnetic dipole constant and $g_J, g_I$ are the electron and nuclear $g$-factors, respectively. This means that the $z$-projection $m_F = m_J + m_I$ of $\mathbf{F}$ is always a good quantum number. Since $J = 1/2$ for the ground states of all alkali atoms (i.e. $m_J=\pm1/2$), there are at most two states with the same $m_F$. Except for the stretched states with maximal $\vert m_F \vert$, all states can be written in the form of Eq.~\ref{eq:groundstate}, and the imaging scheme can be applied.
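For $^{39}$K this block structure makes the admixture $p$ a simple $2\times2$ diagonalization. The sketch below assumes standard literature values for the hyperfine constant and $g_J$ (the small nuclear Zeeman term is neglected) and diagonalizes the $m_F=-1$ block of the Hamiltonian above:

```python
import math

# Assumed literature values for the 39K ground state (4S_1/2):
A_HF = 230.8599   # MHz, magnetic-dipole constant a_hf/h
G_J = 2.00230     # electron g-factor (nuclear g_I neglected here)
MU_B = 1.399625   # MHz/G, Bohr magneton mu_B/h

def ground_state_admixture(b_field):
    """Admixture p in |g_-> = sqrt(p)|-1/2,-1/2> + sqrt(1-p)|+1/2,-3/2>,
    from the 2x2 m_F = -1 block of H = a_hf J.I + mu_B B g_J J_z
    for I = 3/2, J = 1/2 (b_field in gauss, energies in MHz)."""
    h11 = A_HF / 4 - MU_B * b_field * G_J / 2       # <-1/2,-1/2| H |-1/2,-1/2>
    h22 = -3 * A_HF / 4 + MU_B * b_field * G_J / 2  # <+1/2,-3/2| H |+1/2,-3/2>
    v = A_HF * math.sqrt(3) / 2                     # (a_hf/2) <J+ I-> coupling
    theta = 0.5 * math.atan2(2 * v, h22 - h11)      # mixing angle of the block
    return math.cos(theta) ** 2
```

At $550\,\mathrm{G}$ this gives $p \approx 0.98$, in agreement with the value quoted for Eq.~(3); at higher fields, deeper in the Paschen-Back regime, $p$ approaches 1 and the states become pure in $\lvert m_J, m_I\rangle$.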
\begin{acknowledgments}
We thank Selim Jochim, Martin Gärttner and Benedikt Erdmann for discussions. This work was supported by the DFG Collaborative Research Center SFB1225 (ISOQUANT), the ERC Advanced Grant Horizon 2020 EntangleGen (Project-ID 694561), and the Deutsche Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy EXC-2181/1 - 390900948 (the Heidelberg STRUCTURES Excellence Cluster). M.~H.~acknowledges support from the Landesgraduiertenförderung Baden-Württemberg, C.~V. and N.~L.~acknowledge support from the Heidelberg graduate school for physics.
\end{acknowledgments}
\section*{Data Availability}
The data that support the findings of this study are available from the corresponding author upon reasonable request.
\chapter{Conclusions and Future Work}
\label{ch:conclusions}
\section{Conclusions}
In this project, a deep learning method based on a convolutional neural network has been considered. Several software tools were learned and used to achieve the main objectives of the project, namely Linux, the robot operating system (ROS), C++, Python, and the GAZEBO simulator.
The software simulation realizes the output of the method using laser sensor data, preprocessed in such a way that the network can decide which direction to follow in order to move nearer to the target.
The goal-oriented motion problem of the DroNet approach has been solved using mapping and path planning. In addition, this thesis proposed a low-cost marketing-service application for restaurants. Finally, the simulation results are very promising and the robot performance is good.
\section{Future Work}
We now list some possible directions for future work:
\begin{itemize}
\item Realizing the project in a hardware implementation.
\item Extending this work to agricultural applications by combining it with an Internet of Things (IoT) approach.
\item Solving the goal-oriented motion problem of the DroNet approach via a potential-field approach.
\end{itemize}
\chapter*{\centering Abstract}
\thispagestyle{empty}
Mobile robotics is a research area that has witnessed incredible advances over the last decades. Robot navigation is an essential task for mobile robots, and many methods have been proposed for allowing robots to navigate within different environments. This thesis studies different deep learning-based approaches, highlighting the advantages and disadvantages of each scheme. These approaches are promising in that some of them can navigate the robot in unknown and dynamic environments. In this thesis, one of the deep learning methods, based on a convolutional neural network (CNN), is realized in a software implementation. Several preparatory studies were carried out for this thesis, including introductions to Linux, the robot operating system (ROS), C++, Python, and the GAZEBO simulator. Within this work, we modified the drone network (DroNet) approach to be used in an indoor environment by a ground robot in different cases. Indeed, the DroNet approach suffers from the absence of goal-oriented motion. Therefore, this thesis mainly focuses on tackling this problem via mapping, using simultaneous localization and mapping (SLAM), and path planning, using Dijkstra's algorithm. Afterward, the combination of the ground-robot-based DroNet, mapping, and path planning leads to goal-oriented motion that follows the shortest path while avoiding dynamic obstacles. Finally, we propose a low-cost approach for indoor applications such as restaurants and museums, based on using a monocular camera instead of a laser scanner.
\chapter*{\centering Acknowledgements}
\thispagestyle{empty}
In the Name of ALLAH, the Most Merciful, the Most Compassionate; all praise be to ALLAH, and prayers and peace be upon the Prophet Mohamed. First and foremost, we are certain that this work would never have come true without the help of ALLAH.
Secondly, we would like to express our deepest appreciation to our supervisors, Professor Mostafa Elshafei and Dr. Mohamed Sobhy, for their continuous stimulating suggestions and encouragement. We would like to express our special thanks to Professor Mostafa Elshafei, who granted us the opportunity to carry out our graduation project at Zewail City, and our deepest appreciation to Dr. Mohamed Sobhy for his patient academic guidance through the research and preparation of this thesis. Because of their invaluable advice and constructive direction, we have been able to finish this dissertation for our graduation project.
Third, we would like to acknowledge with much appreciation the crucial role of Eng. Ihab S. Mohamed. This work would not have been finalized at this level without his continuous support in every aspect of the project, as well as his fruitful discussions and bright suggestions from the start. He has taught us so much about robotics and has helped us develop a stronger desire to continue to pursue research in the field. Many thanks for his meticulous review of our thesis.
Finally, we must express our very profound gratitude to our parents, sisters, brothers, and families for providing us with unfailing support and continuous encouragement throughout our years of study and through the process of working on this project. This accomplishment would not have been possible without them. Thank you.
\chapter{Convolutional Neural Networks}
\label{ch:CNN}
\section{Artificial Neural Networks}
Even though computers are designed by and for humans, the concept of a computer is very different from that of a human brain. The human brain is a complex, non-linear system whose way of processing information is highly parallelized. It is based on structural components known as neurons, each designed to perform certain types of computations. The brain can handle a huge number of recognition tasks, usually within 100 to 200 ms. Tasks of this kind are still very difficult for computers to process, and just a few years ago performing these computations on a CPU could take days \cite{haykin2004comprehensive}.
Inspired by this amazing system, a new way of handling such problems arose in order to make computers more suitable for these kinds of tasks: the artificial neural network (ANN). An ANN is a model based on a potentially massive interconnected network of processing units, suitably called neurons. In order for the network and its neurons to know how to handle incoming information, the model has to acquire knowledge, which it does through a learning process. The connections between the neurons in the network are represented by weights, and these weights store the knowledge learned by the model. This kind of structure results in high generalization, and the fact that the neurons can handle data non-linearly is beneficial for a whole range of applications. It opens up completely new approaches for input-output mapping and enables the creation of highly adaptive models for computation \cite{haykin2004comprehensive}. The learning process itself generally becomes a case of what is called supervised learning, which is described in the next segment.
\section{What is CNN?}
\label{sec:cnn}
\textit{Convolutional Neural Network (CNN, or ConvNet)} is a type of artificial neural network inspired by biological processes \cite{lecun2013deep}. In machine learning, it is a class of deep, feed-forward artificial neural networks that has successfully been applied to analyzing visual imagery. It can be seen as a variant of the Multilayer Perceptron (MLP). In computer vision, a traditional MLP connects each hidden neuron with every pixel in the input image, trying to find global patterns. However, such connectivity is inefficient because pixels distant from each other are often less correlated, and the patterns found are thus less discriminative when fed to a classifier. In addition, due to this dense connectivity, the number of parameters grows rapidly as the size of the input image increases, resulting in substantial increases in both computational complexity and memory usage.
However, these problems can be alleviated in CNNs. A hidden neuron in CNNs only connects to a local patch in the input image. This type of sparse connectivity is more effective to discover local patterns and these local patterns learned from one part of an image are also applicable to other parts of the image.
CNNs have been widely used for vision-based classification applications. In recent years, a series of R-CNN methods have been proposed to apply CNNs to object detection tasks \cite{girshick2014rich, ren2015faster, girshick2015fast, mohamed2019detection}. In the original version of R-CNN \cite{girshick2014rich}, the network takes the full image and object proposals as input. The regional object proposals can come from a variety of methods; in their work the authors use Selective Search \cite{uijlings2013selective}. Each proposed region is then cropped from the original image and warped to a unified $227 \times 227$ pixel size. A 4096-dimensional feature vector is extracted by forward propagating the mean-subtracted region through a fine-tuned CNN with \textit{five convolutional layers} and \textit{two fully connected layers}. With these feature vectors, a set of class-specific linear support vector machines (\textit{SVMs}) is trained for classification.
R-CNN achieves excellent object detection accuracy; however, it has notable drawbacks. First, training and testing have multiple stages, including fine-tuning the CNN with a \textit{Softmax} loss, training SVMs, and learning bounding-box regressors. Second, the CNN part is slow because it performs a forward pass for each object proposal without sharing computation. To address the speed problem, the Spatial Pyramid Pooling network (SPPnet) \cite{he2014spatial} and Fast R-CNN \cite{girshick2015fast} were proposed. Both methods compute a single convolutional feature map for the entire input image, do the cropping on the feature map instead of on the original image, and then extract feature vectors for each region. For feature extraction, SPPnet pools the feature maps into multiple sizes and concatenates them as a spatial pyramid \cite{lazebnik2006beyond}, while Fast R-CNN uses only a single scale of the feature maps. The feature sharing of SPPnet accelerates R-CNN by 10 to 100x in testing and 3x in training; however, it still has the same multi-stage pipeline as R-CNN. In Fast R-CNN, Girshick proposes a new type of layer, the region of interest (RoI) pooling layer, to bridge the gap between feature maps and classifiers. With this layer, a \textit{semi} end-to-end training framework is built that relies only on the full image input and object proposals.
All the aforementioned methods rely on external object proposals as input. In \cite{ren2015faster}, the authors proposed a proposal-free framework called Faster R-CNN. In Faster R-CNN, a region proposal network (RPN) slides over the last convolutional feature maps to generate bounding-box proposals at different scales and aspect ratios. These proposals are then fed back to Fast R-CNN as input. Another proposal-free work, \textit{You Only Look Once} (YOLO), is proposed in \cite{redmon2016you}. This network uses features from the entire image to predict object bounding boxes. Instead of sliding windows on the last convolutional feature maps, the network connects the feature-map output to a 4096-dimensional fully connected layer, followed by another fully connected layer reshaped into a $7\times7\times24$ tensor. The tensor is a $7\times7$ mapping of the input image, and each grid cell of the tensor is a 24-dimensional vector that encodes the bounding boxes and class probabilities of the object whose center falls into that cell in the original image. The YOLO network is 100 to 500x faster than Fast R-CNN based methods, though with less than an 8\% \textit{mean Average Precision} (mAP) drop on the VOC 2012 test set \cite{everingham2015pascal}.
Some other R-CNN variants have also been proposed to solve specific problems. The work in \cite{gkioxari2014r} presents an R-CNN based network with three combined loss functions for keypoint prediction (as a representation of pose) and action classification of people. R*CNN \cite{gkioxari2015contextual} adapts R-CNN to use not only the primary region but also contextual subregions for human detection and action classification. In \cite{ouyang2015deepid}, the authors proposed DeepID-Net with a deformation-constrained pooling layer, which models the deformation of object parts with geometric constraints and penalties. Furthermore, a broad survey of the recent advances in CNNs and their applications in computer vision, speech, and natural language processing is presented in \cite{gu2015recent}.
On the other hand, there are also effective laser-based methods for object detection, estimation, and tracking using machine learning approaches \cite{barbiereal, teichman2011practical, xiaoa2016simultaneous, pinto2013object}. A multi-modal system for detecting, tracking, and classifying objects in outdoor environments was presented in \cite{premebida2007lidar}.
\section{Network Structures and Essential Layers}
In this section, some important concepts related to general CNNs including the structure of networks and essential layers will be covered.
\subsection{CNN Architectures}
\label{subsec:cnnArch}
In general, the architecture of a CNN can be decomposed into two stages: a hierarchical feature extraction stage and a classification stage. A typical architecture of a CNN is shown in Figure \ref{fig:typicalcnn}. An input image is convolved with a set of trainable filters (kernels), each followed by a nonlinear mapping (e.g. ReLU \cite{nair2010rectified}), to produce so-called \textit{feature maps}. Each feature map, containing particular features, is then partitioned into equal-sized, non-overlapping regions, and the maximum (or average) of each region is passed to the next layer (sub-sampling layer), resulting in resolution-reduced feature maps with the depth unchanged. This operation tolerates small translations of the input image, so robust features that are invariant to translations are more likely to be found \cite{goodfellowdeep}. These two steps, convolution and subsampling, are alternated for two iterations in the CNN in Figure \ref{fig:typicalcnn}, and the resulting feature maps are fully connected with an MLP to perform classification. In some applications, the final fully connected layer that performs classification is replaced with another classifier, e.g. an SVM. For example, the state-of-the-art object detector R-CNN \cite{girshick2014rich} extracts high-level features from the penultimate fully connected layer and feeds them to SVMs for classification \cite{braun2016pose}.
\begin{figure}[h]
\centering
\includegraphics[scale=0.38]{TypicalCnn}
\caption{A typical architecture of CNN (Wikipedia).}
\label{fig:typicalcnn}
\end{figure}
\subsection{Layers in CNNs}
\label{subsec:layercnn}
As mentioned in Section \ref{subsec:cnnArch}, CNNs are commonly made up of mainly three layer types: \textit{convolutional layer, pooling layer (usually subsampling) and fully connected layer}. These layers are explained below, together with other auxiliary layers that are not shown in Figure \ref{fig:typicalcnn}.
\begin{itemize}
\item \textit{Convolution Layer}\vspace{0.3cm}\\
The convolutional layer is the core building block of a CNN. The layer's parameters consist of a set of filters or kernels, which have a small receptive field, but extend through the full depth of the input volume or image. The convolution operation replicates a filter across the entire image field to get the response of each location and form a response feature map. Given multiple filters, the network will get a stack of features maps to form a new 3D volume.
Formally, the convolution layer accepts a volume (image) of size $W_1 \times H_1 \times D_1$ from the previous layer as input, where $W_1$, $H_1$, $D_1$ are the image width, image height, and number of channels (depth), respectively. The layer defines $K$ filters, each of shape $F \times F \times D_1$, where $F$ is the kernel size. The convolution of the input volume with the filters produces an output volume of size $W_2 \times H_2 \times K$, where $W_2$ and $H_2$ depend on the filter size, stride, and padding settings of the convolution operation. In general, the formulas for calculating the output size, $W_2$ and $H_2$, of any given convolution layer are:
\begin{itemize}
\item width: $W_2 = \frac{(W_1 - F + 2P )}{S} + 1$,
\item height: $H_2 = \frac{(H_1 - F + 2P )}{S} + 1$,
\end{itemize}
where $F$ is the filter size, $P$ is the padding, and $S$ is the stride.
For instance, Figure \ref{fig:convol} illustrates a 2D version of convolution, where a $7 \times 7 \times 1$ input volume is convolved with one $3 \times 3$ filter. With padding $P=0$ and stride $S=1$, it produces a $5 \times 5 \times 1$ output volume.
\begin{figure}[h]
\centering
\includegraphics[scale=1.3]{convol}
\caption{An example of convolution operation in 2D \cite{mohamed2017detection}.}
\label{fig:convol}
\end{figure}
\begin{itemize}
\item \textit{Stride and Padding}\\
There are two main parameters that must be tuned, after choosing the filter size $F$, in order to modify the behavior of each layer: the \textit{stride} and the \textit{padding}. The \textit{stride}, $S$, controls how the filter convolves around the input volume. In the previous example, $S=1$, meaning that the filter shifts one unit at a time; the amount by which the filter shifts is the stride. Moreover, as convolution layers are stacked, the spatial size of the volume decreases faster than we would like. In order to preserve as much information about the original input volume as possible, so that low-level features can still be extracted, zero-padding is applied. Suppose we want to apply the same convolution layer but keep the output volume at $7 \times 7 \times 1$, equal to the input size. To do this, we can apply a zero-padding of size 1 to that layer. Zero-padding pads the input volume with zeros around the border; for stride 1, the size-preserving padding is:
\begin{equation}
P = \frac{F-1}{2}.
\end{equation}
\end{itemize}
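The effect of this padding choice can be verified numerically (a minimal sketch with our own helper name \texttt{same\_padding}, assuming stride 1 and an odd filter size):

```python
def same_padding(f):
    """Zero-padding P = (F - 1) / 2 that preserves spatial size at stride 1."""
    return (f - 1) // 2

# With F = 3, a padding of 1 keeps a 7x7 input at 7x7 (stride 1):
f, w1 = 3, 7
p = same_padding(f)
w2 = (w1 - f + 2 * p) // 1 + 1
print(p, w2)  # 1 7
```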
\item \textit{Pooling Layer}\vspace{0.3cm}\\
Another important concept of CNNs is pooling, which is a form of non-linear down-sampling. It partitions the input image into a set of non-overlapping rectangles and, for each such sub-region, outputs the maximum (in the case of \textit{max-pooling}). The function of the pooling layer is to reduce the spatial size of the representation, and hence the number of parameters and the amount of computation in the network, and also to control over-fitting. There are several non-linear functions to implement pooling, such as \textit{max-pooling, average-pooling and stochastic-pooling}. The pooling layer operates independently on every depth slice of the input and resizes it spatially. Pooling is a translation-invariant operation, and the pooled image keeps the structural layout of the input image.
Formally, a pooling layer accepts a volume of size $W_1 \times H_1 \times D_1$ as input and outputs a volume of size $W_2 \times H_2 \times D_1$. The output width $W_2$ and height $H_2$ depend on the kernel size, stride and padding settings, as shown in Figure \ref{fig:maxpool}. The produced output has dimensions:
\begin{itemize}
\item width: $W_2 = \frac{(W_1 - F)}{S} + 1$, and
\item height: $H_2 = \frac{(H_1 - F)}{S} + 1$.
\end{itemize}
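A naive max-pooling over a 2D array can be sketched with NumPy to illustrate these output dimensions (the function \texttt{max\_pool} is our own illustrative helper, not a library routine):

```python
import numpy as np

def max_pool(x, f=2, s=2):
    """Max-pool a 2D array with an f x f window and stride s;
    output size follows W2 = (W1 - F)/S + 1."""
    w2 = (x.shape[0] - f) // s + 1
    h2 = (x.shape[1] - f) // s + 1
    out = np.empty((w2, h2))
    for i in range(w2):
        for j in range(h2):
            out[i, j] = x[i * s:i * s + f, j * s:j * s + f].max()
    return out

x = np.arange(16).reshape(4, 4)
print(max_pool(x))  # a 4x4 input reduces to 2x2
```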
\begin{figure}[h]
\centering
\includegraphics[scale=0.7]{Max-pool}
\vspace{-0.7cm}
\caption{An example of pooling with a $2\times2$ filter and a stride of 2 \cite{mohamed2017detection}.}
\label{fig:maxpool}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{relu}
\vspace{-0.4cm}
\caption{The ReLU activation function.}
\label{fig:relu}
\end{figure}
\item \textit{ReLU Layer}\vspace{0.3cm}\\
The Rectified Linear Unit (ReLU) is one of the most notable non-saturated activation functions, and it can be used by neurons just like any other activation function. The ReLU activation function is defined as (Figure \ref{fig:relu}):
\begin{equation}
f(x) = \text{max}(0, x).
\end{equation}
ReLU is an element-wise operation (applied per pixel) that replaces all negative pixel values in the feature map by zero. It increases the nonlinear properties of the decision function and of the overall network without affecting the receptive fields of the convolution layer. For that reason, it is conventional to apply a ReLU layer immediately after each convolution layer. The main reason ReLU is used is that it can be computed much more efficiently than conventional activation functions like the \textit{sigmoid function} $f(x) = \frac{1}{1+e^{-x}}$ and the \textit{hyperbolic tangent function} $f(x) = \tanh(x)$, without making a significant difference to generalization accuracy. Many works have shown empirically that ReLU works better than other activation functions \cite{maas2013rectifier, he2015delving}. Moreover, recently used ReLU-based activation functions in CNNs, such as Leaky ReLU \cite{maas2013rectifier}, Parametric ReLU \cite{he2015delving}, Randomized ReLU \cite{xu2015empirical}, and the Exponential Linear Unit (ELU) \cite{clevert2015fast}, were reviewed in \cite{gu2015recent}.
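The element-wise behaviour of ReLU is straightforward to illustrate with NumPy (a minimal sketch):

```python
import numpy as np

def relu(x):
    """Element-wise ReLU: f(x) = max(0, x)."""
    return np.maximum(0, x)

fmap = np.array([[-2.0, 3.0], [0.5, -1.0]])
print(relu(fmap))  # negative entries become 0; positives pass through
```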
\item \textit{Fully Connected Layer}\vspace{0.3cm}\\
Eventually, after several convolutional and max pooling layers, the high-level reasoning in the neural network is done via fully connected layers. Neurons in a fully connected layer have full connections to all neurons in the previous layer. It provides a form of dense connectivity and loses the structural layout of the input image. Fully connected layers are usually inserted after the last convolution layer to reduce the amount of features and creating vector-like representation.
\item \textit{Loss Layer}\vspace{0.3cm}\\
It is important to choose an appropriate loss function for a specific task. The loss layer specifies the learning process by comparing the output of the network with the true label (or target) and minimizing the cost. Generally, the loss is calculated in the forward pass, and the gradients of the loss with respect to the network parameters are calculated by backpropagation. For multi-class classification problems, the softmax classifier with cross-entropy loss is commonly used. It takes multi-class scores as input and uses the softmax function to normalize them into a distribution-like output. The loss is then computed as the cross-entropy between the target class probability distribution and the estimated distribution. The softmax function is defined as:
\begin{equation}
y(x)_i = \frac{\text{exp}(x_i)}{\sum_{j=1}^{n}\text{exp}(x_j)},
\end{equation}
where:
\begin{itemize}
\item $0 \leq y(x)_i \leq 1$,
\item $\sum_{j=1}^{n}y(x)_j =1$,
\item $i = 1,\ldots, n$ \& $n$ is the number of classes.
\end{itemize}
The cross-entropy between the target distribution $p$ and the estimation distribution $q$ is given by
\begin{equation}
H(p,q) = -\sum_i p_i \log q_i.
\end{equation}
The purpose of the softmax classification layer is simply to transform all the network activations in the final output layer into a series of values that can be interpreted as probabilities. The softmax function is also known as the normalized exponential function. Recently used loss layers (e.g. Hinge loss \cite{zhang2004solving}, L-Softmax loss \cite{liu2016large}, Contrastive loss \cite{chopra2005learning, hadsell2006dimensionality}, and Triplet loss \cite{schroff2015facenet}) were presented in \cite{gu2015recent}.
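The softmax and cross-entropy formulas above can be sketched in NumPy as follows (an illustrative implementation of our own; the max-shift in \texttt{softmax} is a standard numerical-stability trick, and \texttt{eps} guards against $\log 0$):

```python
import numpy as np

def softmax(x):
    """y_i = exp(x_i) / sum_j exp(x_j); shifting by max(x) avoids overflow."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def cross_entropy(p, q, eps=1e-12):
    """H(p, q) = -sum_i p_i log q_i, for target p and estimate q."""
    return -np.sum(p * np.log(q + eps))

scores = np.array([2.0, 1.0, 0.1])
q = softmax(scores)                 # distribution-like output, sums to 1
p = np.array([1.0, 0.0, 0.0])       # one-hot target for class 0
print(q.sum(), cross_entropy(p, q))
```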
\end{itemize}
\chapter{Preparation Studies}
\label{ch:preparation-studies}
\section{Mobile Robots}
Mobile robots are vehicles with the ability to change their position. These robots can move on the ground, on the surface of water, under water and in the air. Mobile robots can be operated in two modes. One is the tele-operated mode, where movement instructions are given externally. The other is the autonomous mode, where robots operate on the information they get from their sensors, with no external instructions. Wheeled mobile robots are one of the types of mobile robots extensively used in research and industry, as the wheel is the most popular locomotion mechanism in mobile robotics. One advantage of wheeled robots is that balancing is not a problem, as the robots are designed in such a way that all wheels are on the ground. Figure \ref{fig:mobile robots examples} shows examples of mobile robots.
\begin{figure}[H]
\centering
\includegraphics[scale=0.2]{images/mobile-robots.png}
\caption{Mobile robots\protect\footnotemark{}.}
\label{fig:mobile robots examples}
\end{figure}
\footnotetext{\url{https://robohub.org/robot-teams-create-supply-chain-to-deliver-energy-to-explorer-robots/}}
\section{Kinematics of Differential Drive Robots}
The differential drive robot is, probably, the most common and most used mobile robot in the current times. A differential drive robot consists of two independently driven wheels that rotate about the same axis, as well as one or more caster wheels, ball casters, or low-friction sliders that keep the robot horizontal. This is the case with the robot in our simulation. Figure \ref{fig:differential robot model} shows a visual representation of the system.
\begin{figure}[H]
\centering
\includegraphics[scale=0.75]{images/differential_drive_model.png}
\caption{Differential robot model \cite{rashid2014simulation}.}
\label{fig:differential robot model}
\end{figure}
For the differential drive system, two parameters need to be known:
\begin{itemize}
\item L: The distance between the wheels of the robot, also known as Wheel Base.
\item R: The radius of the wheels of the robot.
\end{itemize}
These parameters are relatively easy to measure in any system. On a real robot, you can just measure them with a ruler; on a simulated robot, you can extract these values from the unified robot description format (URDF) file of the robot.
From the visual representation above, the system has two inputs:
\begin{itemize}
\item $v_R$: the rate at which the right wheel is turning.
\item $v_L$: the rate at which the left wheel is turning.
\end{itemize}
So, in order to have the kinematic model of our system, a set of equations that connect the inputs of our system with the outputs are required. For the differential drive robot, these are the equations:
\begin{equation}
\begin{array}{l}
\dot{x}=\frac{R}{2}\left(v_{r}+v_{l}\right) \cos (\theta) \\
\dot{y}=\frac{R}{2}\left(v_{r}+v_{l}\right) \sin (\theta) \\
\dot{\theta}=\frac{R}{L}\left(v_{r}-v_{l}\right)
\end{array}
\end{equation}
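Integrating these kinematic equations with a simple Euler step gives a minimal pose-update sketch (the function name and the parameter values below are our own illustrative choices):

```python
import math

def step(x, y, theta, v_r, v_l, R, L, dt):
    """One Euler step of the differential-drive model:
    x'     = R/2 (v_r + v_l) cos(theta)
    y'     = R/2 (v_r + v_l) sin(theta)
    theta' = R/L (v_r - v_l)"""
    x += (R / 2) * (v_r + v_l) * math.cos(theta) * dt
    y += (R / 2) * (v_r + v_l) * math.sin(theta) * dt
    theta += (R / L) * (v_r - v_l) * dt
    return x, y, theta

# Equal wheel rates: the robot drives straight along its heading.
x, y, th = step(0.0, 0.0, 0.0, v_r=2.0, v_l=2.0, R=0.05, L=0.3, dt=1.0)
print(x, y, th)
```

Note that with $v_r = v_l$ the heading term vanishes, matching the intuition that a differential-drive robot moves straight when both wheels turn at the same rate.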
\section{The Purpose of Using Robot Operating System (ROS)}
Robot Operating System (ROS) allows you to stop reinventing the wheel, which is one of the main killers of new innovative applications. The goal of ROS is to provide a standard for robotics software development that you can use on any robot. Whether you are programming a mobile robot, a robotic arm, a drone, a boat or a vending machine, you can use the Robot Operating System. This standard allows you to focus on the key features of your application, using an existing foundation, instead of trying to do everything yourself. ROS is more of a middleware, something like a low-level “framework” based on an existing operating system. The main supported operating system for ROS is Ubuntu, and you have to install ROS on your operating system in order to use it. The Robot Operating System is mainly composed of two things:
\begin{itemize}
\item a core (middleware) with communication tools,
\item a set of plug \& play libraries.
\end{itemize}
Basically, a middleware is responsible for handling the communication between programs in a distributed system (as shown in Figure \ref{fig:ros with libraries}).
\begin{figure}[H]
\centering
\includegraphics[scale=0.43]{images/ros_with_libraries.png}
\caption{ROS with libraries \cite{dahl1972structured}\protect\footnotemark{}.}
\label{fig:ros with libraries}
\end{figure}
\footnotetext{\url{https://roboticsbackend.com/what-is-ros/}}
ROS comes with 3 main communication tools:
\begin{itemize}
\item Topics: Those will be used mainly for sending data streams between nodes. Example: you’re monitoring the temperature of a motor on the robot. The node monitoring this motor will send a data stream with the temperature. Now, any other node can subscribe to this topic and get the data.
\item Services: They allow you to create a simple synchronous client/server communication between nodes. They are very useful for changing a setting on your robot or asking for a specific action: enabling freedrive mode, requesting specific data, etc.
\item Actions: A little bit more complex, they are in fact based on topics. They exist to provide you with an asynchronous client/server architecture, where the client can send a request that takes a long time (ex: asking to move the robot to a new location). The client can asynchronously monitor the state of the server, and cancel the request anytime.
\end{itemize}
\section{Sensors}
Robots must sense the world around them in order to react to variations in tasks and environments. The sensors can range from minimalist setups designed for quick installation to highly elaborate and tremendously expensive sensor rigs.
Many successful industrial deployments use surprisingly little sensing. A remarkable number of complex and intricate industrial manipulation tasks can be performed through a combination of clever mechanical engineering and limit switches, which close or open an electrical circuit when a mechanical lever or plunger is pressed, in order to start execution of a pre-programmed robotic manipulation sequence. Through careful mechanical setup and tuning, these systems can achieve amazing levels of throughput and reliability. It is important, then, to consider these binary sensors when enumerating the world of robotic sensing. These sensors are typically either “on” or “off.” In addition to mechanical limit switches, other binary sensors include optical limit switches, which use a mechanical “flag” to interrupt a light beam, and bump sensors, which channel mechanical pressure along a relatively large distance to a single mechanical switch. These relatively simple sensors are a key part of modern industrial automation equipment, and their importance can hardly be overstated.
Another class of sensors return scalar readings. For example, a pressure sensor can estimate the mechanical or barometric pressure and will typically output a scalar value along some range of sensitivity chosen at time of manufacture. Range sensors can be constructed from many physical phenomena (sound, light, etc.) and will also typically return a scalar value in some range, which seldom includes zero or infinity!
Each sensor class has its own quirks that distort its view of reality and must be accommodated by sensor-processing algorithms. These quirks can often be surprisingly severe. For example, a range sensor may have a “minimum distance” restriction: if an object is closer than that minimum distance, it will not be sensed. As a result of these quirks, it is often advantageous to combine several different types of sensors in a robotic system.
\subsection{Visual Cameras}
Higher-order animals tend to rely on visual data to react to the world around them. If only robots were as smart as animals! Unfortunately, using camera data intelligently is surprisingly difficult. However, cameras are cheap and often useful for tele-operation, so it is common to see them on robot sensor heads.
Interestingly, it is often more mathematically robust to describe robot tasks and environments in three dimensions (3D) than it is to work with 2D camera images. This is because the 3D shapes of tasks and environments are invariant to changes in scene lighting, shadows, occlusions, and so on. In fact, in a surprising number of application domains, the visual data is largely ignored; the algorithms are interested in 3D data. As a result, intense research efforts have been expended on producing 3D data of the scene in front of the robot.
When two cameras are rigidly mounted to a common mechanical structure, they form a stereo camera. Each camera sees a slightly different view of the world, and these slight differences can be used to estimate the distances to various features in the image. This sounds simple, but as always, the devil is in the details. The performance of a stereo camera depends on a large number of factors, such as the quality of the camera’s mechanical design, its resolution, its lens type and quality, and so on. Equally important are the qualities of the scene being imaged: a stereo camera can only estimate the distances to mathematically discernible features in the scene, such as sharp, high-contrast corners. A stereo camera cannot, for example, estimate the distance to a featureless wall, although it can most likely estimate the distance to the corners and edges of the wall, if they intersect a floor, ceiling, or other wall of a different color. Many natural outdoor scenes possess sufficient texture that stereo vision can be made to work quite well for depth estimation. Uncluttered indoor scenes, however, can often be quite difficult.
Several conventions have emerged in the ROS community for handling cameras. The canonical ROS message type for images is sensor\_msgs/Image, and it contains little more than the size of the image, its pixel encoding scheme, and the pixels themselves. To describe the intrinsic distortion of the camera resulting from its lens and sensor alignment, the sensor\_msgs/CameraInfo message is used. Often, these ROS images need to be sent to and from OpenCV, a popular computer vision library.
\subsection{An Image Processing Example of ROS Architecture}
\begin{figure}[H]
\centering
\includegraphics[scale=0.4]{images/Visual-camera-diagram.png}
\caption{An image processing example of ROS architecture\protect\footnotemark{}.}
\label{Visual camera diagram.}
\end{figure}
\footnotetext{\url{https://www.researchgate.net/figure/An-image-processing-example-of-ROS-architecture-The-Camera-Node-publishes-images-in-a_fig3_341992473}}
The Camera Node publishes images in a message named image\_data which is subscribed by both Image Display Node and Image Processing Node. The ROS Master tracks publishers and subscribers enabling individual nodes to locate and message each other.
\subsection{Depth Cameras}
As discussed in the previous section, even though visual camera data is intuitively appealing, and seems like it should be useful somehow, many perception algorithms work much better with 3D data. Fortunately, the past few years have seen massive progress in low-cost depth cameras. Unlike the passive stereo cameras described in the previous section, depth cameras are active devices. They illuminate the scene in various ways, which greatly improves the system performance. For example, a completely featureless indoor wall or surface is essentially impossible to detect using passive stereo vision.
However, many depth cameras will shine a texture pattern on the surface, which is subsequently imaged by its camera. The texture pattern and camera are typically set to operate in near-infrared wavelengths to reduce the system’s sensitivity to the colors of objects, as well as to not be distracting to people nearby.
\begin{figure}[H]
\centering
\includegraphics[scale=0.55]{images/Depth_camera.png}
\caption{Depth camera.}
\label{Depth camera.}
\end{figure}
Some common depth cameras, such as the Microsoft Kinect shown in Figure \ref{Depth camera.}, project a structured light image. The device projects a precisely known pattern into the scene, its camera observes how this pattern is deformed as it lands on the various objects and surfaces of the scene, and finally a reconstruction algorithm estimates the 3D structure of the scene from this data. It’s hard to overstate the impact that the Kinect has had on modern robotics! It was designed for the gaming market, which is orders of magnitude larger than the robotics sensor market, and could justify massive expenditures for the development and production of the sensor. The launch price of \$150 was incredibly cheap for a sensor capable of outputting so much useful data. Many robots were quickly retrofitted to hold Kinects, and the sensor continues to be used across research and industry. Although the Kinect is the most famous (and certainly the most widely used) depth camera in robotics, many other depth-sensing schemes are possible. For example, unstructured light depth cameras employ “standard” stereo-vision algorithms with random texture injected into the scene by some sort of projector. This scheme has been shown to work far better than passive stereo systems in feature-scarce environments, such as many indoor scenes.
A different approach is used by time-of-flight depth cameras. These imagers rapidly blink an infrared light emitting diode (LED) or laser illuminator, while using specially designed pixel structures in their image sensors to estimate the time required for these light pulses to fly into the scene and bounce back to the depth camera. Once this “time of flight” is estimated, the (constant) speed of light can be used to convert the estimates into a depth image, as illustrated in Figure \ref{Principle of operation of a time-of-flight camera.} .
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{images/Depth_camera_diagram.png}
\caption{Principle of operation of a time-of-flight camera\protect\footnotemark{}.}
\label{Principle of operation of a time-of-flight camera.}
\end{figure}
\footnotetext{\url{https://en.wikipedia.org/wiki/Time-of-flight\_camera\#/media/File:Time\_of\_flight\_camera\_principle.svg}}
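The underlying arithmetic is simple: the one-way distance is the (constant) speed of light times half the measured round-trip time. A minimal sketch (the helper name is our own):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_depth(t_seconds):
    """Convert a round-trip time of flight into a one-way distance:
    d = c * t / 2."""
    return C * t_seconds / 2.0

# A pulse returning after roughly 6.67 ns corresponds to a surface
# about 1 m from the camera.
print(tof_depth(6.67e-9))
```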
Intense research and development is occurring in this domain, due to the enormous existing and potential markets for depth cameras in video games and other mass-market user-interaction scenarios. It is not yet clear which (if any) of the schemes discussed previously will end up being best suited for robotics applications. At the time of writing, cameras using all of the previous modalities are in common usage in robotics experiments.
Just like visual cameras, depth cameras produce an enormous amount of data. This data is typically in the form of point clouds, which are the 3D points estimated to lie on the surfaces facing the camera. The fundamental point cloud message is sensor\_msgs/PointCloud2 (so named purely for historical reasons). This message allows for unstructured point cloud data, which is often advantageous, since depth cameras often cannot return valid depth estimates for each pixel in their images. As such, depth images often have substantial “holes,” which processing algorithms must handle gracefully.
\subsection{Laser Scanners}
Although depth cameras have greatly changed the depth-sensing market in the last few years due to their simplicity and low cost, there are still some applications in which laser scanners (Figure \ref{Laser scanner diagram.}) are widely used due to their superior accuracy and longer sensing range. There are many types of laser scanners, but one of the most common schemes used in robotics involves shining a laser beam on a rotating mirror spinning around 10 to 80 times per second (typically 600 to 4,800 RPM). As the mirror rotates, the laser light is pulsed rapidly, and the reflected waveforms are correlated with the outgoing waveform to estimate the time of flight of the laser pulse for a series of angles around the scanner.
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{images/Laser-scanner-diagram.png}
\caption{Laser scanner diagram\protect\footnotemark{}.}
\label{Laser scanner diagram.}
\end{figure}
\footnotetext{\url{https://www.cognex.com/en-nl/what-is/industrial-barcode-reading/laser-scanners}}
Laser scanners used for autonomous vehicles are considerably different from those used for indoor or slow-moving robots. Vehicle laser scanners made by companies such as Velodyne must deal with the significant aerodynamic forces, vibrations, and temperature swings common to the automotive environment. Since vehicles typically move much faster than smaller robots, vehicle sensors must also have considerably longer range so that sufficient reaction time is possible. Additionally, many software tasks for autonomous driving, such as detecting vehicles and obstacles, work much better when multiple laser scan lines are received each time the device rotates, rather than just one. These extra scan lines can be extremely useful when distinguishing between classes of objects, such as between trees and pedestrians. To produce multiple scanlines, automotive laser scanners often have multiple lasers mounted together in a rotating structure, rather than simply rotating a mirror. All of these additional features naturally add to the complexity, weight, size, and thus the cost of the laser scanner.
The complex signal processing steps required to produce range estimates are virtually always handled by the firmware of the laser scanner itself. The devices typically output a vector of ranges several dozen times per second, along with the starting and stopping angles of each measurement vector. In ROS, laser scans are stored in sensor\_msgs/LaserScan messages, which map directly from the output of the laser scanner. Each manufacturer, of course, has its own raw message format, but ROS drivers exist to translate between the raw output of many popular laser scanners and the sensor\_msgs/LaserScan message format.
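As an illustration of how such a range vector can be consumed, the sketch below converts ranges and beam angles (the kind of fields a sensor\_msgs/LaserScan carries) into 2D points in the scanner frame; the function name is our own:

```python
import math

def scan_to_points(ranges, angle_min, angle_increment):
    """Convert a vector of ranges plus a start angle and per-beam angular
    increment into (x, y) points in the scanner frame."""
    pts = []
    for i, r in enumerate(ranges):
        a = angle_min + i * angle_increment
        pts.append((r * math.cos(a), r * math.sin(a)))
    return pts

# Three beams at -90, 0 and +90 degrees, each seeing an obstacle 2 m away:
pts = scan_to_points([2.0, 2.0, 2.0], -math.pi / 2, math.pi / 2)
print(pts)
```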
\section{TurtleBot}
The TurtleBot is the robot used in this thesis. It was designed in 2011 as a minimalist platform for ROS-based mobile robotics education and prototyping. It has a small differential-drive mobile base with an internal battery, power regulators, and charging contacts. Atop this base is a stack of laser-cut “shelves” that provide space to hold a netbook computer and depth camera, and lots of open space for prototyping. To control cost, the TurtleBot relies on a depth camera for range sensing; it does not have a laser scanner. Despite this, mapping and navigation can work quite well for indoor spaces. TurtleBots are available from several manufacturers for less than \$2,000. More information is available at \url{http://turtlebot.org}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.55]{images/turtlebot-burger}
\caption{Turtlebot burger\protect\footnotemark{}.}
\label{TurtleBot.}
\end{figure}
\footnotetext{\url{https://www.robot-advance.com/EN/art-turtlebot3-burger-1997.htm}}
Because the shelves of the TurtleBot (Figure \ref{TurtleBot.}) are covered with mounting holes, many owners have added additional subsystems to their TurtleBot, such as small manipulator arms, additional sensors, or upgraded computers. However, the “stock” TurtleBot is an excellent starting point for indoor mobile robotics. Many similar systems exist from other vendors, such as the Pioneer and Erratic robots, and thousands of custom-built mobile robots around the world. The examples in this thesis use the TurtleBot, but any other small differential-drive platform could easily be substituted.
\section{Navigation}
\subsection{ROS Navigation }
ROS has a set of resources that allow a robot to navigate through its environment, in other words, to plan and follow a path while avoiding obstacles that appear along the way. These resources are found in the navigation stack. Among the resources needed to complete this task are the localization systems, which allow a robot to locate itself, whether a static map is available or simultaneous localization and mapping is required. Adaptive Monte Carlo Localization (AMCL) is a tool that allows the robot to locate itself in an environment using a static, previously created map. The disadvantage of this resource is that, because it uses a static map, the environment surrounding the robot cannot undergo any modification: a new map would have to be generated for each change, which would consume computational time and effort. Being able to navigate only in modification-free environments is not enough, since robots should be able to operate in places like industries and schools, where there is constant movement. To bypass the lack of flexibility of static maps, two other localization systems are offered by the navigation stack: gmapping and hector mapping. Both are based on Simultaneous Localization and Mapping (SLAM), a technique that consists of mapping an environment while the robot is moving; in other words, as the robot navigates through an environment, it gathers information through its sensors and generates a map. This way, the mobile base is able not only to generate a map of an unknown environment but also to update an existing map, enabling the use of the device in more generic environments that are not immune to changes.
The difference between gmapping and hector mapping is that the former takes into account the odometry information to generate and update the map and the robot's pose; however, the robot needs to have encoders, preventing some robots (e.g. flying robots) from using it. The odometry information is valuable because it helps generate more precise maps, since by understanding the robot's dynamics we can estimate its pose. The dynamic behaviour of the robot is also known as its kinematics. Kinematics is influenced, basically, by the way the devices that produce the robot's movement are assembled. Some examples of mechanical features that influence the kinematics are: the wheel type, the number of wheels, the wheel positioning and the angle at which the wheels are mounted. However, as useful as the odometry information can be, it is not immune to faults. These faults are caused by a lack of precision in data capture, friction, slip, drift and other factors, and, over time, they may accumulate, producing inconsistent data and degrading the map construction, which tends to be distorted under these circumstances. Other data indispensable for generating a map are the sensors' distance readings, since they are responsible for detecting the external world and thus serve as a reference for the robot. Nonetheless, the data gathered by the sensors must be adjusted before being used by the device. These adjustments are needed because the sensors measure the environment relative to themselves, not relative to the robot; in other words, a geometric conversion is needed. To make this conversion simpler, ROS offers the TF tool, which makes it possible to express the sensors' positions relative to the robot and, in this way, adapt the measurements for the robot's navigation.
\subsubsection{The Navigation Stack}
The ROS Navigation Stack is generic. That means it can be used with almost any type of mobile robot, but there are some hardware considerations that will help the whole system perform better, so they must be taken into account. These are the requirements:
\begin{enumerate}
\item The Navigation package will work better in differential drive and holonomic robots. Also, the mobile robot should be controlled by sending velocity commands in the form $x$, $y$ (linear velocity), $z$ (angular velocity).
\item The robot should mount a planar laser somewhere around the robot. It is used to build the map of the environment and perform localization.
\item Its performance will be better for square and circular shaped mobile bases.
\end{enumerate}
\begin{figure}[H]
\centering
\includegraphics[height=3in, width=\linewidth]{images/The-Navigation-Stack-diagram.png}
\caption{The navigation stack diagram\protect\footnotemark{}.}
\label{The Navigation Stack diagram.}
\end{figure}
\footnotetext{\url{https://www.researchgate.net/figure/An-overview-of-the-ROS-Navigation-stack-8_fig1_340864490}}
According to the diagram, we must provide some functional blocks in order to work and communicate with the Navigation stack. The following are brief explanations of all the blocks which need to be provided as input to the ROS Navigation stack:
\begin{itemize}
\item Odometry source: Odometry data of a robot gives the robot position with respect to its starting position. Main odometry sources are wheel encoders, inertial measurement unit (IMU), and 2D/3D cameras (visual odometry). The odom value should publish to the Navigation stack, which has a message type of nav\_msgs/Odometry. The odom message can hold the position and the velocity of the robot.
\item Sensor source: Sensors are used for two tasks in navigation: one for localizing the robot in the map (using for example the laser) and the other one to detect obstacles in the path of the robot (using the laser, sonars or point clouds).
\item sensor transforms/tf: the data captured by the different robot sensors must be referenced to a common frame of reference (usually the base\_link) in order to be able to compare data coming from different sensors. The robot should publish the relationship between the main robot coordinate frame and the different sensors' frames using ROS transforms.
\item base\_controller: The main function of the base controller is to convert the output of the Navigation stack, which is a Twist (geometry\_msgs/Twist) message, into corresponding motor velocities for the robot.
\end{itemize}
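As an illustration of the base controller's role described above, the sketch below converts a Twist-style command into wheel speeds for a differential-drive base. The function name and the parameter values are illustrative assumptions, not part of any ROS package:

```python
def twist_to_wheel_speeds(linear_x, angular_z, wheel_separation, wheel_radius):
    """Convert a Twist-style command (m/s, rad/s) into left/right wheel
    angular velocities (rad/s) for a differential-drive base."""
    v_left = linear_x - angular_z * wheel_separation / 2.0
    v_right = linear_x + angular_z * wheel_separation / 2.0
    return v_left / wheel_radius, v_right / wheel_radius

# Pure rotation: the wheels spin in opposite directions at equal magnitude.
left, right = twist_to_wheel_speeds(0.0, 1.0, wheel_separation=0.3, wheel_radius=0.05)
```

In an actual base controller node, this conversion would sit inside the callback subscribed to the Twist messages published by the Navigation Stack.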
\subsubsection{The move\textunderscore base node}
This is the most important node of the Navigation Stack: it is where most of the ``magic'' happens. The main function of the move\_base node is to move a robot from its current position to a goal position with the help of the other Navigation nodes. It links the global planner and the local planner for path planning, calls the rotate-recovery package if the robot gets stuck at an obstacle, and connects to the global and local costmaps to obtain the obstacle map of the environment.
The following is the list of all the packages which are linked by the move\_base node:
\begin{itemize}
\item global-planner.
\item local-planner.
\item rotate-recovery.
\item costmap-2D.
\end{itemize}
The following are the other packages which are interfaced to the move\_base node:
\begin{itemize}
\item map-server.
\item AMCL.
\item gmapping.
\end{itemize}
\begin{figure}[H]
\centering
\includegraphics[scale=0.55]{images/move-base-node.png}
\caption{Move base node\protect\footnotemark{}.}
\label{move base node.}
\end{figure}
\footnotetext{\url{https://www.theconstructsim.com/robotigniteacademy_learnros/ros-courses-library/ros-courses-ros-navigation-in-5-days/}}
\subsection{Robot Localization}
Robot localization is the process of determining where a mobile robot is located with respect to its environment. Localization is one of the most fundamental competencies required by an autonomous robot, as knowledge of the robot's own location is an essential precursor to making decisions about future actions. In a typical robot localization scenario, a map of the environment is available and the robot is equipped with sensors that observe the environment as well as monitor its own motion. The localization problem then becomes one of estimating the robot's position and orientation within the map using the information gathered from these sensors. Robot localization techniques need to be able to deal with noisy observations and generate not only an estimate of the robot's location but also a measure of the uncertainty of that estimate. Robot localization provides an answer to the question: Where is the robot now? A reliable solution to this question is required for performing useful tasks, as the knowledge of the current location is essential for deciding what to do next.
\subsubsection{Monte Carlo Localization}
Because the robot may not always move as expected, it generates many random guesses as to where it is going to move next. These guesses are known as particles. Each particle contains a full description of a possible future pose. When the robot observes the environment it's in (via sensor readings), it discards particles that don't match with these readings, and generates more particles close to those that look more probable. This way, in the end, most of the particles will converge in the most probable pose that the robot is in. So the more you move, the more data you'll get from your sensors, hence the localization will be more precise. These particles are those arrows that are shown in RViz in the next figure.
\begin{figure}[H]
\centering
\includegraphics[scale=0.55]{images/Monte-Carlo-localization.png}
\caption{Monte Carlo localization.}
\label{Monte Carlo localization.}
\end{figure}
Monte Carlo localization (MCL) \cite{fox1999monte}, also known as particle filter localization is an algorithm for robots to localize using a particle filter. Given a map of the environment, the algorithm estimates the position and orientation of a robot as it moves and senses the environment. The algorithm uses a particle filter to represent the distribution of likely states, with each particle representing a possible state, i.e., a hypothesis of where the robot is. The algorithm typically starts with a uniform random distribution of particles over the configuration space, meaning the robot has no information about where it is and assumes it is equally likely to be at any point in space. Whenever the robot moves, it shifts the particles to predict its new state after the movement. Whenever the robot senses something, the particles are resampled based on recursive Bayesian estimation, i.e., how well the actual sensed data correlate with the predicted state. Ultimately, the particles should converge towards the actual position of the robot.
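The predict/update/resample cycle described above can be sketched in a few lines of Python. This toy 1-D example, with a hypothetical sensor model that reads the position directly, is only meant to show the mechanics of the particle filter, not the amcl implementation:

```python
import random

def mcl_step(particles, motion, measurement, sense, noise=0.05):
    """One predict/update/resample cycle of Monte Carlo localization.
    `particles` are hypothesized 1-D positions; `sense(x)` is the expected
    sensor reading at position x (the map model)."""
    # Predict: shift every particle by the commanded motion plus noise.
    moved = [p + motion + random.gauss(0.0, noise) for p in particles]
    # Update: weight each particle by agreement with the actual measurement.
    weights = [1.0 / (1e-6 + abs(sense(p) - measurement)) for p in moved]
    # Resample: draw particles proportionally to their weights.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
sense = lambda x: x                                   # toy map: sensor reads the position
particles = [random.uniform(0.0, 10.0) for _ in range(500)]  # uniform prior
for _ in range(30):
    particles = mcl_step(particles, motion=0.0, measurement=4.0, sense=sense)
estimate = sum(particles) / len(particles)            # particles converge near 4.0
```

Starting from a uniform prior over the corridor, repeated sensing concentrates the particle cloud around the true position, exactly as the arrows in RViz converge around the robot.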
\subsubsection{The AMCL Package}
In order to navigate around a map autonomously, a robot needs to be able to localize itself in the map, and this is precisely the functionality that the amcl node (of the amcl package) provides. To achieve this, the amcl node uses the MCL (Monte Carlo Localization) algorithm. The AMCL (Adaptive Monte Carlo Localization) package provides the amcl node, which uses the MCL system to track the localization of a robot moving in a 2D space. This node subscribes to the data of the laser, the laser-based map, and the transformations of the robot, and publishes its estimated position in the map. On startup, the amcl node initializes its particle filter according to the parameters provided. Basically, the amcl node takes data from the laser and the odometry of the robot, together with the map of the environment, and outputs an estimated pose of the robot. The more the robot moves around the environment, the more data the localization system gets, and the more precise the estimated pose it returns becomes.
\subsection{Navfn}
The Navfn planner\footnote{For supplementary reading visit \url{http://wiki.ros.org/navfn}}
is probably the most commonly used global planner for ROS Navigation. It uses Dijkstra's algorithm in order to calculate the shortest path between the initial pose and the goal pose. navfn provides a fast interpolated navigation function that can be used to create plans for a mobile base. The planner assumes a circular robot and operates on a costmap to find a minimum cost plan from a start point to an end point in a grid. The navigation function is computed with Dijkstra's algorithm, but support for an $A^{*}$ heuristic may also be added in the near future.
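What navfn does on its costmap can be illustrated with a plain Dijkstra search on a small 4-connected grid. This is a conceptual sketch of the idea, not navfn's interpolated implementation; the grid and costs are made up:

```python
import heapq

def dijkstra_grid(costmap, start, goal):
    """Minimum-cost path on a 4-connected grid.
    `costmap[r][c]` is the cost of entering a cell; None marks an obstacle."""
    rows, cols = len(costmap), len(costmap[0])
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, cell = heapq.heappop(heap)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue                                  # stale heap entry
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and costmap[nr][nc] is not None:
                nd = d + costmap[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Walk the predecessor map back from the goal to recover the path.
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]

grid = [[1, 1, 1],
        [None, None, 1],   # obstacle row forces a detour
        [1, 1, 1]]
path = dijkstra_grid(grid, (0, 0), (2, 0))
```

Because Dijkstra expands cells in order of accumulated cost, the first time the goal is reached the path is guaranteed to be of minimum cost, which is why navfn's plans are optimal on the given costmap.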
\subsubsection{Carrot Planner}
The carrot planner\footnote{For supplementary reading visit: \url{http://wiki.ros.org/carrot_planner?distro=noetic}} takes the goal pose and checks whether it lies inside an obstacle. If it does, the planner walks back along the vector between the goal and the robot until a goal point that is not inside an obstacle is found.
It then passes this point as a plan to a local planner or controller; the carrot planner therefore does no global path planning. It is helpful if you require your robot to move close to a given goal even if the goal is unreachable, for instance when you want the robot to move as close as possible to an obstacle. In complicated indoor environments, however, this planner is not very practical, so we use another global planner instead.
\subsubsection{Global Planner}
The global planner\footnote{For supplementary reading visit: \url{http://wiki.ros.org/global_planner?distro=noetic}} is a more flexible replacement for the navfn planner. It allows the algorithm used by navfn (Dijkstra's algorithm) to be replaced by other path-calculation options, including $A^{*}$ \cite{hart1972correction}, toggling quadratic approximation, and toggling grid path.
\begin{figure}[H]
\centering
\subfigure[Standard behavior]{
\includegraphics[width=0.45\linewidth]{images/Standard-Behavior.png}}
\hspace{5pt}
\subfigure[Simple potential calculation]{
\includegraphics[width=0.45\linewidth]{images/Simple-Potential-Calculation.png}}
\caption{Standard behavior and simple potential calculation paths\protect\footnotemark{}.}
\label{fig:example-2}
\end{figure}
\footnotetext{\url{http://wiki.ros.org/global_planner}}
\chapter{Proposed Approach and Simulation Results}
\label{ch:proposed_approach}
\section{Simulation Setup}
\subsection{ROS}
ROS \cite{quigley2009ros} is a flexible platform for building robotics software applications. Its collection of tools, libraries and conventions greatly simplifies the task of building complex and robust robotics behaviours. In addition, ROS was created to encourage collaborative robotics software development across the world. The ecosystem of ROS is illustrated in Fig.~\ref{ROS ecosystem}.
The file system and nodes representation in ROS are extremely helpful in organizing and building robotics tasks.
ROS offers a message passing interface that provides inter-process communication, commonly referred to as middleware. The middleware provides facilities such as publish/subscribe anonymous message passing, recording and playback of messages, remote procedure calls, and a distributed parameter system. In addition, ROS provides common robot-specific features that help in running basic and core robotics functions. The offered features include standard message definitions for robots, a robot geometry library, a robot description language (URDF), and pose estimation and localization tools. Perhaps the most well-known tool in ROS is RViz, which provides general-purpose, three-dimensional visualization of many sensor data types and of any URDF-described robot. We can easily visualize the laser scan data, the robot's odometry, the environment map, and many other topics that the robot subscribes to; RViz can be seen as a tool to visualize what your robot can see. Another useful tool in ROS is rqt: using the rqt\_graph plugin we can introspect and visualize a live ROS system, showing nodes and the connections between them, which makes it easy to debug and understand how the running system is structured.
For all the mentioned features, we have chosen ROS as the software platform on which to develop and test our robot's navigation system. The ROS version we use is Kinetic Kame, the 10th official ROS release, which is supported by our operating system, Ubuntu Xenial.
\begin{figure}[H]
\centering
\includegraphics[scale=0.8]{images/ROS-ecosystem.png}
\caption{ROS ecosystem.}
\label{ROS ecosystem}
\end{figure}
\subsection{RVIZ}
Rviz stands for ROS visualization. It is a general-purpose 3D visualization environment for robots, sensors, and algorithms. Like most ROS tools, it can be used for any robot and rapidly configured for a particular application.
rviz can plot a variety of data types streaming through a typical ROS system, with heavy emphasis on the three-dimensional nature of the data. In ROS, all forms of data are attached to a frame of reference. For example, the camera on a Turtlebot is attached to a reference frame defined relative to the center of the Turtlebot's mobile base. The odometry reference frame, often called odom, is taken by convention to have its origin at the location where the robot was powered on, or where its odometers were most recently reset. Each of these frames can be useful for teleoperation, but it is often desirable to have a ``chase'' perspective, immediately behind the robot and looking over its ``shoulders''. This is because simply viewing the robot's camera frame can be deceiving: the field of view of a camera is often much narrower than we are used to as humans, and thus it is easy for tele-operators to bonk the robot's shoulders when turning corners. A sample view of \textit{rviz} configured to generate a chase perspective is shown in Figure~\ref{fig:rviz}. Observing the sensor data in the same 3D view as a rendering of the robot's geometry can make tele-operation more intuitive.
\begin{figure}[H]
\centering
\includegraphics[scale=0.55]{images/sample-view-of-rviz-configured-to-generate-a-chase-perspective.png}
\caption{Sample view of \textit{rviz} configured to generate a chase perspective \cite{quigley2015programming}.}
\label{fig:rviz}
\end{figure}
\subsection{GAZEBO}
In general, robot motions can be divided into mobility and manipulation. The mobility aspects can be handled by two- or three-dimensional simulations in which the environment around the robot is static. Simulating manipulation, however, requires a significant increase in the complexity of the simulator to handle the dynamics of not just the robot, but also the dynamic models in the scene. For example, at the moment that a simulated household robot is picking up a handheld object, contact forces must be computed between the robot, the object, and the surface the object was previously resting upon.
\begin{figure}[H]
\centering
\includegraphics[scale=0.55]{images/Gazebo.png}
\caption{Gazebo \cite{quigley2015programming}.}
\label{Gazebo.}
\end{figure}
Simulators often use rigid-body dynamics, in which all objects are assumed to be incompressible, as if the world were a giant pinball machine. This assumption drastically improves the computational performance of the simulator, but often requires clever tricks to remain stable and realistic, since many rigid-body interactions become point contacts that do not accurately model the true physical phenomena. The art and science of managing the tension between computational performance and physical realism are highly nontrivial. There are many approaches to this trade-off, many of them well suited to some domains but ill suited to others.
\section{Drone Network (DroNet)}\label{DroNet}
Civilian drones are soon expected to be used in a wide variety of tasks, such as aerial surveillance, delivery, or monitoring of existing architectures. Nevertheless, their deployment in urban environments has so far been limited: in unstructured and highly dynamic scenarios, drones face numerous challenges to navigate autonomously in a feasible and safe way. In contrast to traditional ``map-localize-plan'' methods, this is achieved by DroNet: a convolutional neural network that can safely drive a drone through the streets of a city. Designed as a fast 8-layer residual network, DroNet produces two outputs for each single input image: a steering angle to keep the drone navigating while avoiding obstacles, and a collision probability that lets the unmanned aerial vehicle (UAV) recognize dangerous situations and promptly react to them. The challenge, however, is to collect enough data in an unstructured outdoor environment such as a city.
Figure~\ref{fig:Dronet architecture} shows the architecture: DroNet is a forked convolutional neural network that predicts, from a single $200\times200$ gray-scale frame, a steering angle and a collision probability. The shared part of the architecture consists of a ResNet-8 with 3 residual blocks, followed by dropout and a ReLU non-linearity. Afterwards, the network branches into 2 separate fully-connected layers, one to carry out steering prediction and the other to infer collision probability. In the notation of the figure, each convolution is annotated first with the kernel size, then the number of filters, and finally the stride if it is different from 1.
\begin{figure}[H]
\centering
\includegraphics[scale=0.7]{images/DroNet.png}
\caption{Dronet architecture \cite{loquercio2018dronet}.}
\label{fig:Dronet architecture}
\end{figure}
\section{Methodology}\label{Methodology}
The approach aims at reactively predicting a steering angle and a probability of collision from the drone's on-board forward-looking camera. These are later converted into flying commands that enable a UAV to safely navigate while avoiding obstacles. Since the authors aim to reduce the bare image processing time, they advocate a single convolutional neural network (CNN) of relatively small size; the resulting network is called DroNet. The architecture is partially shared by the two tasks to reduce the network's complexity and processing time, but separates into two branches at the very end: steering prediction is a regression problem, while collision prediction is addressed as a binary classification problem. Due to their different nature and output range, they propose to separate the network's last fully-connected layer. During the training procedure, only images recorded by manned vehicles are used: steering angles are learned from images captured from a car, while the probability of collision is learned from a bicycle.
\section{DroNet Control}
The outputs of DroNet are used to command the UAV to move on a plane with forward velocity $v_k$ and steering angle $\theta_k$. More specifically, they use the probability of collision $p_t$ provided by the network to modulate the forward velocity: the vehicle is commanded to go at maximal speed $v_{max}$ when the probability of collision is null, and to stop whenever it is close to 1. They use a low-pass filtered version of the modulated forward velocity $v_k$ to provide the controller with smooth, continuous inputs ($0 \leq \alpha \leq 1$):
\begin{center}
\begin{equation}
\label{q1}
v_{k}=(1-\alpha)v_{k-1}+\alpha(1-p_{t})v_{max}
\end{equation}
\end{center}
Where:
\begin{itemize}
\item $v_{k}$: The required forward velocity.
\item $v_{k-1}$: The forward velocity from the previous iteration and zero for the first iteration.
\item $p_{t}$: Probability of collision provided by the neural network.
\item $v_{max}$: Max forward velocity of the robot.
\end{itemize}
Similarly, they map the predicted scaled steering $s_{k}$ into a rotation around the body $z$-axis (yaw angle $\theta$), corresponding to the axis orthogonal to the propellers' plane. Concretely, they convert $s_k$ from the range $[-1,1]$ into a desired yaw angle $\theta_k$ in the range $[-\pi/2, \pi/2]$ and low-pass filter it:
\begin{center}
\begin{equation}
\label{q2}
\theta_{k}=(1-\beta)\theta_{k-1}+\beta(\pi/2)s_{k}
\end{equation}
\end{center}
Where:
\begin{itemize}
\item $\theta_{k}$ : The required steering angle.
\item $\theta_{k-1}$: The steering angle from the previous iteration and zero for the first iteration.
\item $s_{k}$: The steering angle provided by the neural network.
\end{itemize}
In all their experiments they set $\alpha = 0.7$ and $\beta = 0.5$, while $v_{max}$ was changed according to the testing environment. These constants were selected empirically, trading off smoothness against reactiveness of the drone's flight. As a result, they obtain a reactive navigation policy that can reliably control a drone from a single forward-looking camera. An interesting aspect of the approach is that it can produce a collision probability from a single image without any information about the platform's speed. Indeed, they conjecture that the network makes decisions based on the distance to the observed objects in the field of view; convolutional networks are in fact well known to be successful at monocular depth estimation.
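Equations \ref{q1} and \ref{q2} can be written directly in Python. The sketch below uses the constants stated above; the input values in the example call are illustrative:

```python
import math

def dronet_command(v_prev, theta_prev, p_coll, steer, v_max, alpha=0.7, beta=0.5):
    """Low-pass-filtered forward velocity (Eq. 4.1) and yaw angle (Eq. 4.2)
    computed from the network's collision probability and scaled steering."""
    v_k = (1 - alpha) * v_prev + alpha * (1 - p_coll) * v_max
    theta_k = (1 - beta) * theta_prev + beta * (math.pi / 2) * steer
    return v_k, theta_k

# Certain collision (p_t = 1): the commanded speed decays toward zero
# instead of dropping instantly, thanks to the low-pass filter.
v, theta = dronet_command(v_prev=1.0, theta_prev=0.0, p_coll=1.0, steer=0.0, v_max=1.0)
```

Note how the filter makes the velocity decay geometrically rather than stop abruptly, which is exactly the smoothness/reactiveness trade-off governed by $\alpha$ and $\beta$.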
\subsection{How The DroNet Approach Implements The Proposed Control in ROS}
DroNet control is implemented as a node that receives the steering angle and the probability of collision from the ``/cnn\_out/predictions'' topic, which carries the output of the neural network.
The neural network runs in another node, ``/dronet\_perception'', which receives images from the camera, feeds them as input to the neural network, and then publishes the output (probability of collision and steering angle) to the ``/cnn\_out/predictions'' topic.
\subsection{Graphical Representation}
\begin{figure}[H]
\centering
\includegraphics[scale=0.6]{images/dronet.png}
\caption{The rqt\_graph of dronet.}
\label{dronet.}
\end{figure}
\section{Simulation Results for DroNet Using Ground Robot}
In this section, we discuss the simulation results acquired throughout the whole project. We divide the project into six stages; each stage contains a result sample and a description of the problem faced. The simulation tool we use is Gazebo. In the following subsections, the different scenarios for the ground-robot-based DroNet are simulated.
\subsection {Scenario \#1}
This scenario converts DroNet from drone-based autonomous navigation to ground-robot-based (TurtleBot) autonomous navigation, and then creates an environment that resembles the one the DroNet neural network was trained on. The DroNet neural network was trained on real-world data targeted at navigating through streets, so the simulated environment we created is a single-lane road, as shown in Figure~\ref{Single Lane Road.}.
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth,height=300px]{images/Single-Lane-Road.png}
\caption{Single lane road environment.}
\label{Single Lane Road.}
\end{figure}
As Figure~\ref{First Test Collision.} shows, the ground-robot-based DroNet collides with the wall; this is solved in the next subsection.
\begin{figure}[H]
\centering
\includegraphics[scale=0.35]{images/first-test-collision.png}
\caption{Robot collision.}
\label{First Test Collision.}
\end{figure}
\subsection{Scenario \#2}
\textbf{\large Objective } Avoiding the collision observed in Scenario \#1. This is achieved by tuning the low-pass filter (LPF) parameters of the linear and angular velocity filters in Equation \ref{q1} and Equation \ref{q2}, respectively. We set $\alpha = 0.3$ and $\beta = 0.5$, while $v_{max}$ was changed according to the testing environment.
\subsection{Scenario \#3}
This scenario tests the tuned ground-based DroNet in an environment, shown in Figure \ref{3rd stage environnmet.}, that the neural network was not trained on before, in order to see how it behaves.
\begin{figure}[H]
\centering
\includegraphics[scale=0.45]{images/3rd_stage_environnmet.png}
\caption{Indoor environment.}
\label{3rd stage environnmet.}
\end{figure}
\begin{figure}[H]
\centering
\subfigure[]{
\includegraphics[width=0.47\linewidth]{images/Start-Position.png}}
\hspace{5pt}
\subfigure[]{
\includegraphics[width=0.47\linewidth]{images/Robot-consdering-The-Path-Blocked-and-Turning.png}}
\caption{(a) Start position of the robot. (b) Robot direction.}
\label{fig:scenario3}
\end{figure}
In Figure~\ref{fig:scenario3} (a), the blue beam of the laser scanner above the TurtleBot shows its field of view. The ideal behaviour would be for the TurtleBot to take the path that perfectly fits the robot. Instead, the robot considered the whole area blocked and turned away, as shown in Figure~\ref{fig:scenario3} (b).
\subsection{Scenario \#4}
In this scenario, the neural network is retrained using a dataset generated from the same environment used in Scenario \#3, shown in Figure \ref{3rd stage environnmet.}. The dataset is divided into inputs and outputs: the input is the images, and the output is the probability of collision and the steering angle corresponding to each image.
Afterwards, the collected dataset is passed to the training script made by DroNet's creators.
\begin{figure}[H]
\centering
\subfigure[]{
\includegraphics[width=0.47\linewidth]{images/Start-Position.png}}
\hspace{5pt}
\subfigure[]{
\includegraphics[width=0.47\linewidth]{images/Heading-to-The-Perfect-Fit-Path.png}}
\caption{(a) Start position (b) Heading to the perfect fit path.}
\label{dd}
\end{figure}
Figure \ref{dd} (a) again shows the blue beam of the laser scanner indicating the robot's field of view, while Figure \ref{dd} (b) shows that the TurtleBot now moves to the path that perfectly fits the robot.
\section{How Mapping and Localization is Achieved?}
In order to perform autonomous navigation, the robot must have a map of the environment, which it uses for tasks such as planning trajectories and avoiding obstacles. Mapping and localization are achieved as follows:
\subsection{Simultaneous Localization and Mapping}
Simultaneous Localization and Mapping (SLAM) is the name that defines the robotic problem of building a map of an unknown environment while simultaneously keeping track of the robot's location on the map that is being built.
\subsection{The Gmapping Package}
The gmapping ROS package is an implementation of a specific SLAM algorithm called gmapping (\url{https://www.openslam.org/gmapping.html}). This means that somebody (\url{http://wiki.ros.org/slam\_gmapping}) has implemented the gmapping algorithm for us to use inside ROS, without our having to code it ourselves. So if we use the ROS Navigation Stack, we only need to know (and worry about) how to configure gmapping for our specific robot (in our case, the TurtleBot). The gmapping package contains a ROS node called slam\_gmapping, which allows us to create a 2D map using the laser and pose data that our mobile robot provides while moving around an environment. This node basically reads data from the laser and the transforms of the robot, and turns them into an occupancy grid map (OGM).
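The core idea of turning laser beams into an occupancy grid can be sketched with a log-odds update along a single ray. This is a heavily simplified illustration of the principle, not the actual gmapping algorithm; the grid representation and parameter values are arbitrary:

```python
import math

def update_ray(grid, robot, angle, rng, resolution=1.0, l_free=-0.4, l_occ=0.85):
    """Log-odds occupancy update for one laser beam: cells the beam passes
    through become more likely free, the cell at the hit more likely occupied.
    `grid` maps (x, y) cell indices to accumulated log-odds (0 = unknown)."""
    x0, y0 = robot
    steps = int(rng / resolution)
    for i in range(1, steps + 1):
        cx = int(round(x0 + i * resolution * math.cos(angle)))
        cy = int(round(y0 + i * resolution * math.sin(angle)))
        if i < steps:
            grid[(cx, cy)] = grid.get((cx, cy), 0.0) + l_free   # traversed: free
        else:
            grid[(cx, cy)] = grid.get((cx, cy), 0.0) + l_occ    # endpoint: occupied
    return grid

grid = {}
for _ in range(3):            # repeated identical readings reinforce the map
    update_ray(grid, robot=(0, 0), angle=0.0, rng=4.0)
```

Additive log-odds is what lets repeated, noisy scans accumulate into a confident map: positive values converge to occupied cells, negative values to free space.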
\subsection{Saving The Map}
Another of the packages available in the ROS Navigation Stack is the map\_server package. This package provides the map\_saver node, which allows us to access the map data from a ROS Service, and save it into a
file. When you request the map\_saver to save the current map, the map data is saved into two files: one is the YAML file, which contains the map metadata and the image name, and second is the image itself, which has the encoded data of the occupancy grid map.
\section{How Path Planning is Achieved?}
Moving from one place to another is a trivial task for humans: one decides how to move in a split second. For a robot, such an elementary and basic task is a major challenge. In autonomous robotics, path planning is a central problem: finding a safe path for a robot, whether a vacuum-cleaning robot, a robotic arm, or a flying vehicle, from a starting position to a goal position. This problem has been addressed in multiple ways in the literature, depending on the environment model, the type of robot, the nature of the application, etc. Safe and effective mobile robot navigation needs an efficient path planning algorithm, since the quality of the generated path enormously affects the robotic application. Typically, minimizing the traveled distance is the principal objective of the navigation process, as it influences other metrics such as processing time and energy consumption. Path planning is divided into global and local path planning.
\begin{figure}[H]
\centering
\includegraphics[scale=0.3]{images/global-and-local-path-planning.png}
\caption{Global and local path planning\protect\footnotemark{}.}
\label{global and local path planning.}
\end{figure}
\footnotetext{\url{https://www.researchgate.net/figure/Global-and-local-path-planning_fig4_322441239}}
\section{Simulation Results after Mapping and Path Planning for
Ground Robot}
In the following subsection, the mapping and localization scenario for the turtlebot will be presented.
\subsection{Scenario \#5}
In this scenario, the TurtleBot moves on a pre-defined path to the target without needing a laser range sensor. This is achieved by the following sequence:
\begin{enumerate}
\item Generating an obstacle map for the environment by using the Gmapping package as shown in Figure \ref{Mapping}.
\begin{figure}[H]
\centering
\includegraphics[width=1.0\textwidth,height=300px]{images/Mapping.png}
\caption{Mapping for the environment used.}
\label{Mapping}
\end{figure}
\item Generating the shortest path to the target using the Dijkstra path planner, then saving the obtained path as shown in Figure \ref{path}.
\begin{figure}[H]
\centering
\includegraphics[width=1.0\textwidth,height=300px]{images/Path-Planning.png}
\caption{Path planning for the environment used.}
\label{path}
\end{figure}
\item Finally, creating a script that retrieves the path from the file and moves along it without a laser range sensor, as shown in Figure \ref{Without Laser Range Sensor}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.37]{images/Moving-on-Path.png}
\caption{Moving the turtleBot on path without laser sensor.}
\label{Without Laser Range Sensor}
\end{figure}
\end{enumerate}
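A minimal sketch of such a path-following script is shown below. The waypoint format, gains and tolerance are hypothetical, since they depend on how the path was saved; the sketch only illustrates the steer-toward-next-waypoint logic:

```python
import math

def follow_step(pose, waypoints, idx, v_lin=0.2, k_ang=1.5, tol=0.1):
    """One control step of a simple waypoint follower. `pose` is (x, y, yaw);
    returns (linear velocity, angular velocity, next waypoint index)."""
    x, y, yaw = pose
    wx, wy = waypoints[idx]
    if math.hypot(wx - x, wy - y) < tol:              # waypoint reached: advance
        idx = min(idx + 1, len(waypoints) - 1)
        wx, wy = waypoints[idx]
    heading_err = math.atan2(wy - y, wx - x) - yaw
    heading_err = math.atan2(math.sin(heading_err), math.cos(heading_err))  # wrap to [-pi, pi]
    return v_lin, k_ang * heading_err, idx            # P-control on heading

# Robot at the origin facing +x, waypoint straight ahead: no turn is commanded.
v, w, idx = follow_step((0.0, 0.0, 0.0), [(1.0, 0.0), (1.0, 1.0)], 0)
```

In the combined system of Scenario \#6, the angular command from DroNet would override this heading controller whenever the collision probability rises, giving obstacle avoidance on top of path following.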
\section{Scenario \#6: Market Application}
In this scenario, the DroNet of Scenario \#4 is combined with the script of Scenario \#5, which retrieves a saved path and moves along it. This combination yields motion to a target while avoiding dynamic objects such as humans, without the need for an expensive laser range sensor; in its place, a very cheap RGB camera is used. This application could, for example, be deployed in a restaurant to serve clients at low cost.
\chapter{Introduction}
\label{ch:intro}
Recently, mobile robots have started to work in real-world scenarios. The applications of mobile robots are immense and acquiring importance: they include agricultural robotics such as fertilizing and planting, support for medical services such as transportation of medication, client support such as museum tours and exhibition guides, and military missions such as surveillance and monitoring. A group of mobile robots can work in parallel, which gives it advantages over single-robot systems: a multi-mobile-robot system can complete a given task faster than a single robot. In such tasks, all robots must navigate and avoid each other to reach their goal positions. Multi-mobile-robot systems can be used for material transportation in factories, defense, agricultural robotics, and service support.
\section{Motivation}\label{motivation}
A mobile robot is an autonomous agent capable of navigating intelligently anywhere using sensor-actuator control techniques. Applications of autonomous mobile robots in fields such as industry, space, defence, transportation and other social sectors are growing day by day. Navigation from one point to another is one of the most basic tasks in almost every robotic system nowadays, and many methods have been proposed over the last century to achieve this fundamental operation \cite{mohamed2020model}. There are also several challenges faced during navigation, including fluctuations in navigation accuracy depending on the complexity of the environment, as well as problems in mapping precision, localization accuracy, actuator efficiency, etc. To this day, navigation in dynamic environments remains one of the most important challenges in mobile robot systems, and it is currently a hot research area, with many approaches trying to achieve this task with the highest possible accuracy. This thesis therefore focuses on deep learning approaches, as they have shown the most auspicious results of all the various investigated methods.
\section{Definition of Autonomous Navigation}
Autonomous navigation means that a robot is able to plan its path and execute its
plan without human intervention. In some cases remote navigation aids are used
in the planning process, while at other times the only information available to
compute a path is based on input from sensors aboard the robot itself. An
autonomous robot is one which not only can maintain its own stability as it moves
but also can plan its movements. Autonomous robots use navigation aids when
possible but can also rely on visual, auditory, and olfactory cues. Once basic
position information is gathered in the form of triangulated signals or
environmental perception, machine intelligence must be applied to translate some basic motivation (reason for leaving the present position) into a route and motion plan. This plan may have to accommodate the estimated or communicated
intentions of other autonomous robots in order to prevent collisions, while
considering the dynamics of the robot's own movement envelope.
\section{Problem Statement}
The main core of this project is studying and evaluating state-of-the-art deep learning approaches for robot navigation, recently proposed for both static and dynamic environments. After studying the implementation of each approach together with its advantages and disadvantages, we decided to study and modify the DroNet approach \cite{loquercio2018dronet}. DroNet was proposed to meet the requirements of civilian drones, which are soon expected to be used in a wide variety of tasks such as aerial surveillance, delivery, or monitoring of existing architectures. DroNet is a convolutional neural network (CNN) that can safely drive a drone through the streets of a city.\footnote{For supplementary video see: \url{https://youtu.be/ow7aw9H4BcA}} The approach works in both outdoor and indoor environments, but it lacks goal-oriented motion. This thesis targets autonomous mobile robot navigation in dynamic environments.
\section{Thesis Objectives}
The main objective of this thesis is autonomous mobile robot navigation in dynamic environments. This is achieved by modifying the DroNet approach proposed in \cite{loquercio2018dronet} to navigate in an indoor environment using a ground robot, then retraining the CNN to enhance the performance of DroNet in this environment. In addition, we generate a path to the target, which requires a map and a path-planning technique: for mapping we use simultaneous localization and mapping (SLAM) via gmapping, and for path planning we use Dijkstra's algorithm. Finally, the modified DroNet is combined with the generated paths to obtain goal-oriented motion along the shortest path together with low-cost dynamic obstacle avoidance. We have implemented and tested the method in a ROS-based Gazebo simulation of the robotic system.
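As a minimal illustration of the path-planning step, the following Python sketch runs Dijkstra's algorithm on a small occupancy grid of the kind produced by gmapping; the grid, start, and goal here are toy values for illustration, not taken from an actual map.

```python
import heapq

def dijkstra(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (0 = free, 1 = occupied)."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Reconstruct the path by walking predecessors back from the goal.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# Toy 3x3 map: the middle row forces a detour around the wall.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = dijkstra(grid, (0, 0), (2, 0))
```

In the full pipeline, the resulting waypoint sequence would then be tracked while the modified DroNet handles reactive obstacle avoidance.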
\section{Thesis Structure}
This thesis is organized as follows:
Chapter~\ref{ch:intro} presents the thesis motivation, the definition of autonomous navigation, the problem statement, the thesis objectives, and the thesis structure. In Chapter~\ref{ch:Literature-Review}, a literature review of deep learning-based navigation schemes is presented. Chapter~\ref{ch:CNN} introduces the convolutional neural network (CNN). After that, Chapter~\ref{ch:preparation-studies} covers the tools used in this project, such as Linux, the robot operating system (ROS), C++, Python, and the Gazebo simulator. Chapter~\ref{ch:proposed_approach} presents our proposed approach and its simulation results. Finally, Chapter~\ref{ch:conclusions} includes the conclusions and future work directions.
\chapter{Literature Review}
\label{ch:Literature-Review}
\section{Virtual-to-Real Deep Reinforcement Learning: Continuous Control of Mobile Robots for Mapless Navigation}
\subsection{Approach}
The approach in \cite{tai2017virtual} presents a learning-based mapless motion planner that takes the sparse 10-dimensional range findings and the target position, expressed in the mobile robot's coordinate frame, as input and produces continuous steering commands as output. Traditional motion planners for mobile ground robots with a laser range sensor mostly depend on an obstacle map of the navigation environment, where both a highly precise laser sensor and the map-building work are indispensable. The authors show that, through an asynchronous deep reinforcement learning method, a mapless motion planner can be trained end-to-end without any manually designed features or prior demonstrations. The trained planner can be directly applied in unseen virtual and real environments. The experiments show that the proposed mapless motion planner can navigate a nonholonomic mobile robot to desired targets without colliding with any obstacles.
\subsection{Conclusion}
In this approach, a mapless motion planner was trained end-to-end from scratch through continuous-control deep reinforcement learning (RL). The authors revised a state-of-the-art continuous deep-RL method so that training and sample collection could be executed in parallel. By taking the 10-dimensional sparse range findings and the target position relative to the mobile robot's coordinate frame as input, the proposed motion planner can be directly applied in unseen real environments without fine-tuning, even though it is only trained in a virtual environment. Compared to a low-dimensional map-based motion planner, the approach proved to be more robust in extremely complicated environments.
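The input-output interface of such a mapless planner can be sketched as a small feed-forward policy. The layer sizes, random weights, and squashing choices below are illustrative assumptions only, not the trained architecture of \cite{tai2017virtual}, which learns its weights through asynchronous deep RL.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes. Input: 10 sparse range readings plus the
# target (distance, angle) in the robot frame = 12-dimensional state.
W1 = rng.standard_normal((12, 32)) * 0.1
b1 = np.zeros(32)
W2 = rng.standard_normal((32, 2)) * 0.1
b2 = np.zeros(2)

def policy(ranges, target):
    """Map sensor input to (linear, angular) velocity commands."""
    x = np.concatenate([ranges, target])
    h = np.maximum(W1.T @ x + b1, 0.0)   # ReLU hidden layer
    v, w = W2.T @ h + b2
    lin = 0.5 / (1.0 + np.exp(-v))       # sigmoid squashes to [0, 0.5] m/s
    ang = np.tanh(w)                     # tanh squashes to [-1, 1] rad/s
    return lin, ang

lin, ang = policy(np.full(10, 2.0), np.array([1.0, 0.3]))
```

Bounding the outputs with sigmoid/tanh mirrors how continuous-control policies keep commands inside the actuator limits of a nonholonomic robot.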
\section{GOSELO: Goal-Directed Obstacle and Self-Location Map for Robot Navigation Using Reactive Neural Networks}
\subsection{Approach}
Robot navigation using deep neural networks has been drawing a great deal of attention. Although reactive neural networks easily learn expert behaviors and are computationally efficient, they suffer from poor generalization of policies learned in specific environments. Reinforcement learning and value iteration approaches for learning generalized policies have therefore been proposed; however, these approaches are more costly. In \cite{kanezaki2017goselo}, the authors tackle the problem of learning reactive neural networks that are applicable to general environments. The key concept is to crop, rotate, and resize an obstacle map according to the goal location and the agent's current location, so that the map representation is better correlated with self-movement in the general navigation task rather than with the layout of the environment. Furthermore, in addition to the obstacle map, a map of visited locations containing the movement history of the agent is used as input, in order to avoid failures in which the agent travels back and forth repeatedly over the same location, as shown in Figure \ref{fig:goselo}. Experimental results reveal that the proposed network outperforms the state-of-the-art value iteration network in the grid-world navigation task. The authors also demonstrate that the proposed model generalizes well to unseen obstacles and unknown terrain, and that the proposed system enables a mobile robot to successfully navigate in a real dynamic environment.
\begin{figure}[H]
\centering
\includegraphics[scale=0.6]{images/goselo.png}
\caption{GOSELO \cite{kanezaki2017goselo}.}
\label{fig:goselo}
\end{figure}
The proposed method is based on a convolutional neural network (CNN) that estimates the next best step among neighboring pixels in a grid map. Such a CNN is referred to as a ``reactive CNN'' because it reacts to specific patterns on a map in order to determine the movement of the agent. Navigation based on a reactive CNN has three main advantages. First, a reactive CNN estimates the next best step in constant time in any situation. In contrast, the computational time of most existing path planning methods, such as $A^{*}$ search and the rapidly exploring random tree (RRT), depends on the scale and complexity of the map. Furthermore, such classical path planning methods fail when there is no path to the goal, whereas a CNN-based method can suggest a plausible direction in which to proceed at every moment, regardless of the existence of a path; this is important for navigation in cluttered, dynamic environments. Second, a reactive CNN can use graphics processing unit (GPU) acceleration due to its high potential for parallelization. This is a major advantage over many classical path planning methods, which cannot be wholly parallelized because every point on a path depends on other locations. Finally, a reactive CNN can efficiently learn expert behaviors, e.g., human controls, without modeling the rewards and the policy behind the behaviors.
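The goal-directed map transform described above can be sketched as follows. This is a simplified numpy illustration of the crop-and-rotate idea, using nearest-neighbor resampling and an assumed output size; it is not the exact GOSELO preprocessing pipeline.

```python
import numpy as np

def goal_directed_view(obstacle_map, agent, goal, out_size=8):
    """Resample the map so the agent sits at the centre of the output
    window and the agent-to-goal direction is aligned with a fixed axis
    (a simplified GOSELO-style transform)."""
    ay, ax = agent
    gy, gx = goal
    theta = np.arctan2(gy - ay, gx - ax) - np.pi / 2  # align goal direction
    c, s = np.cos(theta), np.sin(theta)
    h, w = obstacle_map.shape
    out = np.zeros((out_size, out_size), dtype=obstacle_map.dtype)
    half = out_size // 2
    for oy in range(out_size):
        for ox in range(out_size):
            # Inverse-rotate each output pixel back into map coordinates.
            dy, dx = oy - half, ox - half
            my = int(round(ay + c * dy - s * dx))
            mx = int(round(ax + s * dy + c * dx))
            if 0 <= my < h and 0 <= mx < w:
                out[oy, ox] = obstacle_map[my, mx]
    return out
```

Because the network always sees the goal in a canonical direction, the learned policy correlates with self-movement rather than with any particular environment layout.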
\subsection{Conclusion}
They proposed a novel navigation method for an online-editable 2D map via an image classification technique. The computation time required by the proposed method to estimate the best direction for the agent remains constant at each step. Another significant advantage is that the agent preferentially moves to new locations, which helps it avoid local minima traps. Experimental results demonstrated the effectiveness of the proposed goal-directed map representation, GOSELO, as well as its superiority to existing neural-network-based methods (such as the VIN method) in terms of both success rate and computational cost. The authors also demonstrated that the proposed method generalizes to unseen obstacles and unknown terrain. Experiments using the Peacock mobile robot demonstrated the robustness of the proposed navigation system in dynamic scenarios involving crowds of people: Peacock moved continuously, all day long for two days, while avoiding people. These experiments also illustrated the advantage of the proposed method over classical path planning methods, such as $A^{*}$ search, which fail to predict the next step when there is no path to the goal. Although a CPU was used for the prediction of a single future step, a GPU would leave room for predicting dozens of future steps; the authors plan such an extension to predict a more reliable direction to proceed. Extending GOSELO from 2D to 3D is another area for future study.
\section{End-to-End Deep Learning for Autonomous Navigation of Mobile Robot}
\subsection{Approach}
This paper \cite{kim2018end} proposes an end-to-end method for training convolutional neural networks for autonomous navigation of a mobile robot. The traditional approach to robot navigation consists of three steps. The first step is extracting visual features from the scene using the camera input. The second step is to determine the current position by applying a classifier to the extracted visual features. The last step is defining a rule for the moving direction manually or training a model to control the direction.
In contrast to the traditional multi-step method, the proposed visuo-motor navigation system directly outputs the linear and angular velocities of the robot from an input image in a single step. The trained model gives wheel velocities for navigation as outputs in real time, making it possible to deploy it on mobile robots such as robotic vacuum cleaners. The experimental results show an average linear velocity error of 2.2 cm/s and an average angular velocity error of 3.03 degrees/s. A robot deployed with the proposed model can navigate in a real-world environment using only the camera, without relying on any other sensors such as LiDAR, radar, IR, GPS, or IMU. The proposed system architecture is shown in Figure \ref{End-to-End Arch.}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.55]{images/End-to-End}
\caption{End-to-end deep architecture \cite{kim2018end}.}
\label{End-to-End Arch.}
\end{figure}
The input of the proposed architecture is a red-green-blue (RGB) image and the outputs are linear and angular velocities. The system does not require separate detection, localization, or planning modules for navigation. The CNN architecture used in this paper is AlexNet. Even though other well-known architectures such as VGGNet, GoogLeNet, or ResNet could be used, these networks are not suitable for real-time robot navigation due to their slow inference speed. The network performs multi-label regression, giving two real-valued outputs. The ground-truth velocities are in the range of 0 to 0.5 m/s for linear velocity and $-1.5$ to 1.5 rad/s for angular velocity; both were normalized to values between 0 and 1 for CNN training. Since the raw output values oscillate, using them directly makes the robot movement unstable, so post-processing for noise reduction was conducted to obtain consistent outputs and stable movement.
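The normalization and post-processing steps can be sketched as below. The velocity ranges are those stated in the paper; the moving-average filter is an assumed form of the noise reduction, since the paper does not specify the exact filter used.

```python
import numpy as np

LIN_RANGE = (0.0, 0.5)    # linear velocity range in m/s, from the paper
ANG_RANGE = (-1.5, 1.5)   # angular velocity range in rad/s, from the paper

def normalize(v, lo, hi):
    """Map a velocity in [lo, hi] to a training target in [0, 1]."""
    return (v - lo) / (hi - lo)

def denormalize(y, lo, hi):
    """Map a network output in [0, 1] back to a velocity command."""
    return lo + y * (hi - lo)

def smooth(values, window=5):
    """Moving-average post-processing to damp oscillating network outputs."""
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode="same")
```

At deployment, each predicted pair would be denormalized and passed through the smoothing filter before being sent to the wheel controllers.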
\subsection{Conclusion}
Traditional methods for robot navigation or path planning require multiple complex algorithms for localization, navigation, and action planning. The proposed end-to-end deep learning approach makes it possible to control the robot motors directly from the visual input, as a human does: a human can decide the path from only the local scene, without any information about the global map. This result verified the potential of the proposed system as a local path planner. For future work, the visuo-motor system can be developed into a global path planner. Moreover, the model can be compressed for direct deployment on an embedded board without a server.
\section{From Perception to Decision: A Data-driven Approach to End-to-end Motion Planning for Autonomous
Ground Robots}
\subsection{Approach}
This paper \cite{pfeiffer2017perception} presents a model that learns the complex mapping from raw 2D laser range findings and a target position to the steering commands required by the robot. A data-driven end-to-end motion planner based on a CNN model is proposed, shown in Figure \ref{fig:dnn_architecture}. The supervised training data is based on expert demonstrations generated using an existing motion planner. The system can navigate the robot safely through cluttered environments to reach the goal.
\begin{figure}[H]
\centering
\includegraphics[scale=0.55]{images/DNN_Architecture.png}
\caption{DNN architecture \cite{pfeiffer2017perception}.}
\label{fig:dnn_architecture}
\end{figure}
Their proposed solution does not require any global map for the robot to navigate. Given the sensor data and the relative target position, the robot is able to navigate to the desired location while avoiding the surrounding obstacles. By design, the approach is not limited to any kind of environment; however, in this paper, only navigation in static environments is considered. Their main contributions can be summarized in two points: first, a data-driven end-to-end motion planner from laser range findings to motion commands; second, deployment and tests on a real robotic platform in unknown environments.
The end-to-end relationship between input data and steering commands can result in an arbitrarily complex model. Among different machine learning approaches, DNNs/CNNs are well known for their capability as hyper-parametric function approximators to model complex and highly nonlinear dependencies. To avoid the problem of generating training data, simulation data is collected with a global motion planner acting as the expert. Since no pre-processing of the laser data is required, the computational complexity, and therefore also the query time for a steering command, depends only on the complexity of the model, which is constant once it is trained.
The paper also considers the case in which the robot is faced with a suddenly appearing object blocking its path. Their proposed deep planner reacted clearly to the object by swerving to the right, and after the obstacle was removed, it corrected its course of motion to reach the target as fast as possible.
\subsection{Conclusion}
This work showed some limitations in real-world experimentation. The authors found that the robot has weaknesses in wide open spaces with clutter around. However, they suggested that this may result from the fact that the model was trained purely on perfect simulation data, and that re-training with real sensor data might reduce this undesirable effect. In addition, once the robot enters a convex dead-end region, it is not capable of freeing itself. Moreover, the motion of the robot sometimes fluctuates noticeably, because the architecture does not contain any memory of past visited locations. Another drawback is that the motion is not fully autonomous: the robot sometimes needs help from a human with a joystick when it gets stuck.
\section*{\centering Acknowledgement}
\input{chapters/acknowledgement.tex}
\input{chapters/abstract.tex}
\cleardoublepage
\tableofcontents
\cleardoublepage
\input{chapters/intro.tex}
\cleardoublepage
\input{chapters/literature-review.tex}
\cleardoublepage
\input{chapters/CNN.tex}
\cleardoublepage
\input{chapters/Preparation-Studies.tex}
\cleardoublepage
\input{chapters/Proposed_Approach_and_Simulation_Results.tex}
\cleardoublepage
\cleardoublepage
\input{chapters/Conclusions-and-Future-Work.tex}
\cleardoublepage
\addcontentsline{toc}{chapter}{\biblabel}
\printbibliography[title=\biblabel]
\cleardoublepage
\end{document}
\section{Introduction}
Due to the increase in data traffic and in the number of communicating devices, there is a growing need for efficient communication strategies that boost the data rate and spectral efficiency and manage interference.
To that end, multi-antenna/multiple-input multiple-output (MIMO) processing is a key technology.
To deal with the interference problem in multi-user multi-antenna systems, perfect channel state information (CSI) at the receiver (CSIR) and at the transmitter (CSIT) is essential.
%
However, it is difficult to obtain accurate CSI due to quantization error, channel mobility, and estimation error.
%
Even with the ideal assumption of perfect CSIR, it is questionable whether a base station (BS) can obtain accurate CSIT.
Rate-splitting multiple access (RSMA) has recently emerged and has been found to have multiple advantages over conventional multiple access methods in terms of robustness against imperfect CSIT \cite{RS:rob}, and spectral and energy efficiencies \cite{RS:mag}, \cite{RS:eff}.
The key feature of RSMA is the split of the messages into common and private parts. The common parts are encoded in a common stream that can be decoded by multiple users.
%
On the other hand, each of the private messages is encoded in a private stream which is decoded by its respective receiver.
%
Each receiver first decodes the common stream, retrieves its intended common part, and then removes the common stream from the received signal using successive interference cancellation (SIC). %
%
After removing the common stream, each receiver decodes its intended private stream by treating the other private streams as interference.
%
From the common stream and the private stream, each receiver can reconstruct the original message.
%
The flexibility of RSMA lies in adjusting the content and power allocated to the common and private streams, so as to partially decode interference and partially treat interference as noise \cite{RS:Bridging}.
%
Such flexibility leads to more robustness and performance enhancements in various network and propagation conditions \cite{RS:Bridging}.
In \cite{RS:uni},\cite{RS:uni_multi}, it is shown that RSMA can outperform conventional approaches in terms of rate maximization under the perfect CSI assumption. In particular, \cite{RS:uni} shows that RSMA unifies four other strategies (i.e., non-orthogonal multiple access (NOMA), space-division multiple access (SDMA), orthogonal multiple access (OMA), and multicasting) and outperforms them in a two-user multiple-input single-output (MISO) broadcast channel (BC).
In the scenario of imperfect CSIT and perfect CSIR, the BS cannot accurately calculate the achievable rates at the receivers. Thus, the BS should adjust the precoding vectors and power allocation using the estimated channel and error information. In \cite{RS:imperfect}, sample average approximation combined with a weighted minimum mean square error (WMMSE) algorithm is used for sum-rate maximization in RSMA by generating channel error ensembles.
In \cite{RS:rob}, max-min fairness optimization using the worst case rate and WMMSE is proposed with bounded channel error. Both studies show robust transmission of RSMA in multi-user (MU) MISO compared to conventional approaches.
In this paper, we consider both imperfect CSIR and CSIT under the assumption that the BS obtains CSI from the receiver through lossless channel feedbacks.
This is the first paper studying the design and optimization of RSMA with both imperfect CSIT and CSIR.
We formulate the sum-rate maximization problem in RSMA based MU-MISO system.
%
To convert the non-convex problem into a convex one, an algorithm based on two methods, semidefinite relaxation (SDR) and the concave-convex procedure (CCCP), is proposed.
%
Using the proposed algorithm, we jointly optimize the precoding vectors and the power allocation.
%
Simulation results show the performance gains of the proposed RSMA scheme over existing techniques.
The remainder of this paper is organized as follows. In Section II, the system model and the achievable rate under imperfect CSI are described. In Section III, the optimization problem for maximizing the sum-rate is formulated, and joint precoding vector and power allocation optimization is conducted via the proposed algorithm based on SDR and CCCP. Simulation results are provided in Section IV. The paper is concluded in Section V.
\subsection{Notation}
A standard letter denotes a scalar, a lowercase boldface letter denotes a vector, and an uppercase boldface letter denotes a matrix.
The notation $\mathbf{A}\succeq \mathbf{B}$ indicates that the matrix $\mathbf{A}- \mathbf{B}$ is positive semidefinite.
The superscript $(\cdot)^{H}$ denotes the Hermitian (conjugate) transpose.
The trace and rank of a matrix $\mathbf{A}$ are denoted by $\mathrm{tr}(\mathbf{A})$ and $\mathrm{rank}(\mathbf{A})$, respectively.
The notations $|\cdot |$, $||\cdot ||$, and $ \mathbb{E}[\cdot]$ refer to the absolute value, the Euclidean norm, and the expectation operator, respectively. The matrix $\mathbf{I}_n$ denotes the $n \times n$ identity matrix.
\begin{figure}[t]
\includegraphics[width=1\linewidth]{System.pdf}
\centering
\caption{System architecture of rate-splitting multiple access with imperfect CSIR and CSIT in a MU-MISO BC.}
\label{System}
\end{figure}
\section{System Model}
\subsection{Rate-Splitting Multiple Access Based System
}
We consider a single-cell MU-MISO downlink system in which the BS, equipped with $N_{t}$ antennas, serves $K$ single-antenna users.
As shown in Fig. \ref{System}, the main idea of RSMA is to split the message $W_{k}$ for user-$k$, $k=1,\dots, K$,
into common and private parts, i.e. $W_{k}=\{W_{p,k},W_{c,k}\}$.
The common part can be decoded by all users, while the private part can be decoded only by the corresponding user. The common parts of all user messages are combined into one common message $W_{c}$, i.e. $W_{c}=\{ W_{c,1},\dots,W_{c,K} \}$. The common message $W_{c}$ is encoded into the common stream $s_{c}$ using a codebook known to all users, and each private message $W_{p,k}$ is encoded into the private stream $s_{k}$ using a codebook known only to the intended receiver.
%
Each stream is assumed to be an independent zero-mean, unit-variance Gaussian random variable, i.e. $s_{i}\sim \mathcal{CN}(0,1),~i \in \mathcal{I}\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptscriptstyle\Delta}}}\{c, 1,\dots, K\} .$ These $K+1$ streams are linearly precoded using the precoding vectors $\mathbf{p}_{i} \in \mathbb{C}^{N_{t}\times1},~i\in \mathcal{I}$.
The transmitted signal at the BS is expressed as
%
\begin{align}
\mathbf{x}&=\mathbf{p}_\mathrm{c}s_\mathrm{c}+\sum_{k=1}^{K}\mathbf{p}_{k}s_{k},\label{tranmit_signal}
\end{align}
where the transmit power constraint with total power $P_t$ is
\begin{align}
\sum\limits_{\substack{i\in\mathcal{I}}}\norm{\mathbf{p}_{i}}^2\leq P_{t}.
\end{align}
We denote by $\mathbf{h}_{k}\in \mathbb{C}^{N_{t}\times1}$ the downlink channel vector from the BS to user-$k$, and the received signal at user-$k$ is given by
\begin{equation}
y_{k}=\mathbf{h}_{k}^{H}\boldsymbol{\mathrm{x}}+n, ~k=1, \dots, K,\label{received_signal}
\end{equation}
where $n\sim \mathcal{CN}(0,\sigma_{{n}}^2)$ is additive white Gaussian noise (AWGN). Since the common stream can be decoded by all users, users can remove the common stream by SIC. Thus, users decode the private stream after SIC.
When decoding the common stream, all private streams are treated as interference. When decoding a private stream, only other private streams are treated as interference, provided that the common stream is completely removed.
Each user reconstructs the original message after retrieving the part of its message encoded in the common stream and the part encoded in the private stream.
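The transmit and receive model above can be simulated in a few lines of Python. The dimensions, unit-norm random precoders, and noise level below are arbitrary illustrative choices (with a total power budget implicitly $P_t \geq K+1$), not an optimized design.

```python
import numpy as np

rng = np.random.default_rng(1)
Nt, K = 4, 2                       # transmit antennas, single-antenna users
sigma_n = 0.1                      # noise standard deviation

# Arbitrary unit-norm precoders: column 0 is the common stream, columns
# 1..K are the private streams.
P = rng.standard_normal((Nt, K + 1)) + 1j * rng.standard_normal((Nt, K + 1))
P /= np.linalg.norm(P, axis=0)

# Independent unit-variance complex Gaussian streams s_c, s_1, ..., s_K.
s = (rng.standard_normal(K + 1) + 1j * rng.standard_normal(K + 1)) / np.sqrt(2)
x = P @ s                          # x = p_c s_c + sum_k p_k s_k

h = rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)   # channel of user-k
n = sigma_n * (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
y = h.conj() @ x + n               # y_k = h_k^H x + n
```

With unit-norm columns, the total transmit power $\sum_i \|\mathbf{p}_i\|^2$ equals $K+1$, so the power constraint is met whenever $P_t \geq K+1$.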
\subsection{Assumption on Channel State Information}
We assume that users cannot accurately estimate the channel vector, i.e. imperfect CSIR.
The channel model is given by
\begin{equation}
\mathbf{h}_{k}=\hat{\mathbf{h}}_{k}+\mathbf{e}_{k},\label{channel}
\end{equation}
where $\hat{\mathbf{h}}_{k}$ is an estimated channel and $\mathbf{e}_{k}\sim \mathcal{CN} (0,\mathbf{\Phi}_k)$ is a channel error.
Also, we assume that the BS has the same CSI as the users due to lossless channel feedback.
Thus, all users and the BS know the expectation of the channel, $\mathbb{E}[\mathbf{h}_k]=\hat{\mathbf{h}}_{k}$, and the covariance of the channel, $\mathbb{E}[(\mathbf{h}_k-\mathbb{E}[\mathbf{h}_k])(\mathbf{h}_k-\mathbb{E}[\mathbf{h}_k])^H]=\mathbf{\Phi}_k$.
In this paper, the covariance matrix of the channel error $\mathbf{e}_{k}$ is assumed to be $\mathbf{\Phi}_k=\sigma _{\mathrm{e},k}^2\mathbf{I}$. In other words, the channel error is assumed to be a vector of independent and identically distributed (i.i.d.) random variables.
\subsection{Achievable Rate}
It is difficult to determine an explicit achievable rate under the imperfect CSIR assumption, since the users do not know the actual channel. Thus, the concept of generalized mutual information (GMI) is used in order to characterize the achievable rate of a user with imperfect CSI \cite{GMI:1},\cite{GMI:2}. We first introduce the general form of the GMI by considering a point-to-point case for simplicity. When the input is Gaussian, $x\sim \mathcal{CN} (0,\epsilon_x)$, the output signal is expressed as
\begin{align}
y=hx+n,
\label{y:GMI}
\end{align}
where $h$ is the fading channel and $n\sim \mathcal{CN} (0,N)$ is the noise. Knowing the mean and variance of the channel, $h$ can be decomposed as ${h}=\hat{h}+e$, where $\mathbb{E}[h]=\hat{h}$ and $\mathbb{E}[e]=0$. We can intuitively regard $\hat{h}$ as an estimate of the channel and $e$ as a channel error with zero mean and variance $\sigma_h^2$. The GMI is defined by
\begin{align}
{I_\mathrm{GMI}}=\log_2\left(1+\frac{|\hat{h}|^2 \epsilon_x}{\mathbb{E}[|e|^2]\epsilon_x+N}\right),
\label{I:GMI}
\end{align}
where $\mathbb{E}[|e|^2]=\sigma^2_{h}$.
In the case of imperfect CSIR, the GMI corresponds to an achievable rate when a user uses a nearest-neighbor decoder and the input is Gaussian \cite{GMI:3}.
By using this property, we apply the GMI to the RSMA-based system and derive the achievable rate under imperfect CSI.
In the RSMA approach, a user first decodes the common stream and then, after SIC, decodes the corresponding private stream. Accordingly, the rate of the private stream is derived from the received signal after SIC.
The received signal at user-$k$ in (\ref{received_signal}) is rewritten as
\begin{align}
y_{k} &=\hat{\mathbf{h}}_{k}^{H}\mathbf{x}+\mathbf{e}_{k}^{H}{\mathbf{x}}+n, ~k=1,\dots ,K \label{yk2}\\
&=\hat{\mathbf{h}}_{k}^{H}\mathbf{p}_\mathrm{c}s_\mathrm{c}
+\mathbf{e}_{k}^{H}\mathbf{p}_\mathrm{c}s_\mathrm{c} \nonumber
\\ & \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,+\sum_{j=1}^{K}(\hat{\mathbf{h}}_{k}^{H}\mathbf{p}_{j}s_{j}+\mathbf{e}_{k}^{H}\mathbf{p}_{j}s_{j})+n.\label{yk3}
\end{align}
Considering the common stream, the signal received at user-$k$ in (\ref{yk3}) can be re-expressed in the form of (\ref{y:GMI}) as
%
\begin{align}
y_{k} &=\hat{h}_{k,c}s_\mathrm{c}
+e_{k,c} s_\mathrm{c}
+z_\mathrm{c}\\
&={h}_{k,c}s_\mathrm{c} +z_\mathrm{c},
\end{align}
where ${h}_{k,c}=\hat{h}_{k,c}+e_{k,c}$,
$\hat{h}_{k,c}=\hat{\mathbf{h}}_{k}^{H}\mathbf{p}_\mathrm{c}$,
$e_{k,c}=\mathbf{e}_{k}^{H}\mathbf{p}_\mathrm{c}$, and $z_\mathrm{c}=\sum_{j=1}^{K}(\hat{\mathbf{h}}_{k}^{H}\mathbf{p}_{j}s_{j}+\mathbf{e}_{k}^{H}\mathbf{p}_{j}s_{j})+n$.
Due to the independence between each stream and the noise, the expectation and variance of each component are derived as follows:
\begin{align}
&\mathbb{E}[{h}_{k,c}]=\hat{h}_{k,c}=\hat{\mathbf{h}}_{k}^{H}\mathbf{p}_\mathrm{c},\label{aver1}\\
&\mathbb{E}[e_{k,c}]=0,
~\mathbb{E}[z_\mathrm{c}]=0,\\
&\mathbb{E}[|e_{k,c}|^2]=\mathbb{E}[|\mathbf{e}_{k}^{H}\mathbf{p}_\mathrm{c}|^2],\label{aver2}\\
&\mathbb{E}[|z_\mathrm{c}|^2]=\sum\limits_{j=1}^{K} ({|\hat{\mathbf{h}}{}^{H}_{k}\mathbf{p}_{j}|}{}^{2}+\mathbb{E}[|\mathbf{e}_{k}^{H}\mathbf{p}_{j}|^2])+\sigma_n^2.\label{aver5}
\end{align}
Substituting (\ref{aver1}), (\ref{aver2}), and (\ref{aver5}) into (\ref{I:GMI}), the GMI of the common stream under imperfect CSIR is obtained as
\begin{align}
R_{c,k}&=\log_{2} \bBigg@{3}(1+\frac{|\hat{\bm{\mathrm{h}}}{}^{H}_{k}\mathbf{p}_\mathrm{c}|^{2}}{\sum\limits_{j=1}^{K} {|\hat{\mathbf{h}}{}^{H}_{k}\mathbf{p}_{j}|}{}^{2}+\sum\limits_{\substack{j\in \mathcal{I}}}{\mathbb{E}[|\mathbf{e}^{H}_{k}\mathbf{p}_{j}|^{2}]}+\sigma _{{n}}^2}\bBigg@{3}),
\label{Rck1}
\end{align}
where $\mathbb{E}[|\mathbf{e}^{H}_{k}\mathbf{p}_{j}|^{2}]
={\mathbf{p}_{j}^\mathrm{H}\mathbf{\Phi}_k\mathbf{p}_{j}}$, due to $\mathbb{E}[\mathbf{e}_{k}{\mathbf{e}_{k}}^H]=\mathbf{\Phi}_k$. Note that $|\mathbf{e}^{H}_{k}\mathbf{p}_{c}|^{2}$ is associated with not only the desired stream but also the channel error. Thus this term is considered as interference when decoding the desired stream, since users do not have any information on the channel error. The same phenomenon occurs when decoding the private streams.
When operating with perfect CSIR, the common stream can be removed perfectly by SIC.
However, under imperfect CSIR the common stream cannot be removed perfectly, since the users do not have accurate information about the actual channel.
Thus, the part of the common stream associated with channel error still remains after SIC. The received signal after SIC with imperfect CSIR is expressed by
\begin{align}
y_{k,\mathrm{SIC}} &=y_{k}-\hat{\mathbf{h}}_{k}^{H}\mathbf{p}_\mathrm{c}s_\mathrm{c}\\
&=\mathbf{e}_{k}^{H}\mathbf{p}_\mathrm{c}s_\mathrm{c} +\sum_{j=1}^{K}(\hat{\mathbf{h}}_{k}^{H}\mathbf{p}_{j}s_{j}+\mathbf{e}_{k}^{H}\mathbf{p}_{j}s_{j})+n.
\end{align}
The rate of the private stream can be obtained in a similar manner to that of the common stream. The received signal after SIC is rewritten as
\begin{align}
y_{k,\mathrm{SIC}} &=\hat{h}_{k}s_{k}
+e_{k} s_{k}
+z_{k}\\
&={h}_{k}s_{k} +z_{k},
\end{align}
where ${h}_{k}=\hat{h}_{k}+e_{k}$,
$\hat{h}_{k}=\hat{\mathbf{h}}_{k}^{H}\mathbf{p}_{k}$,
$e_{k}=\mathbf{e}_{k}^{H}\mathbf{p}_{k}$, and $z_{k}=\mathbf{e}_{k}^{H}\mathbf{p}_{c}s_\mathrm{c}+\sum_{\substack{j=1\\j\neq k}}^{K}(\hat{\mathbf{h}}_{k}^{H}\mathbf{p}_{j}s_{j}+\mathbf{e}_{k}^{H}\mathbf{p}_{j}s_{j})+n$.
Thus, the achievable rate of private stream for user-$k$ with imperfect CSIR is determined as
\begin{align}
R_{k}
&=\log_{2}\bBigg@{3}(1+\frac{{|\mathbf{\hat{h}}{}^{H}_{k}\mathbf{p}_{k}|}{}^{2}}{\sum\limits_{\substack{j=1\\j\neq k}}^{K} |\hat{\mathbf{h}}{}^{H}_{k}\mathbf{p}_{j}|^{2}\!\!+\!\!\sum\limits_{j\in \mathcal{I}}{\mathbb{E}[|\mathbf{e}^{H}_{k}\mathbf{p}_{j}|^{2}]}+\sigma _{{n}}^2}\bBigg@{3}).
\label{Rk2}
\end{align}
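The common- and private-stream rate expressions can be evaluated numerically as below. This is an illustrative transcription that, for compactness, assumes a single error covariance $\mathbf{\Phi}$ shared by all users (the paper allows a per-user $\mathbf{\Phi}_k$); variable names are our own.

```python
import numpy as np

def rates(H_hat, P, Phi, sigma_n2):
    """GMI rates under imperfect CSIR.
    H_hat: (K, Nt) estimated channels; P: (Nt, K+1) precoders with column 0
    the common stream; Phi: (Nt, Nt) error covariance; sigma_n2: noise power."""
    K = H_hat.shape[0]
    # E[|e_k^H p_j|^2] = p_j^H Phi p_j for each stream j (same for all users).
    err = np.real(np.einsum('ni,nm,mi->i', P.conj(), Phi, P))
    R_c, R_p = np.zeros(K), np.zeros(K)
    for k in range(K):
        g = np.abs(H_hat[k].conj() @ P) ** 2   # |h_hat_k^H p_j|^2 per stream
        err_sum = err.sum()
        # Common stream: all private streams plus error terms interfere.
        R_c[k] = np.log2(1 + g[0] / (g[1:].sum() + err_sum + sigma_n2))
        # Private stream: other private streams plus error terms interfere.
        interf = g[1:].sum() - g[1 + k]
        R_p[k] = np.log2(1 + g[1 + k] / (interf + err_sum + sigma_n2))
    return R_c, R_p
```

The sum-rate then follows as `R_c.min() + R_p.sum()`, since the common rate is limited by the weakest user.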
\section{Sum-Rate Maximization with Imperfect CSI}
In this section, the sum-rate maximization problem is formulated. We transform the optimization problem and propose a novel algorithm for solving this non-convex problem.
\subsection{Problem Formulation}
Our objective is to optimize the precoding vectors, which determine both power and direction, so as to maximize the sum-rate. The sum-rate is expressed as \begin{align}R_{s}=R_{c}+\sum\limits_{k=1}^{K}R_{k},
\end{align}
in which the common rate is defined as $R_{c}\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptscriptstyle\Delta}}} \min_{k} R_{c,k}$, $k=1,\dots,K$, because the common stream must be decoded by all users.
The optimization problem for sum-rate maximization is formulated as:
\begin{maxi!}|s|[0]
{\substack{\mathbf{p}_{i},\forall i }}{R_{c}+\sum\limits_{k=1}^{K}R_{k}}
{\label{P1}}{\bm{\mathsf{(P1):}} \nonumber}
%
\addConstraint{R_{c,k} \geq R_{c} \label{P1:const1}}
\addConstraint{\sum\limits_{\substack{\forall i \in \mathcal{I} }}\norm{\mathbf{p}_{i}}^2\leq P_{t}. \label{P1:const2}}
\end{maxi!}
We first simplify the expression of the objective function of $\bm{\mathsf{(P1)}}$ using the stacking method introduced in \cite{GMI:ex1}.
First, we equivalently transform the expressions of the rate (\ref{Rck1}) and (\ref{Rk2}) as
\begin{align}
R_{c,k}&=\log_{2}\bBigg@{4}(\frac{\sum\limits_{\substack{j\in \mathcal{I}}}{\mathbf{p}_{j}^{H}(\hat{\mathbf{h}}_{k}\hat{\mathbf{h}}{}^{H}_{k}+\mathbf{\Phi}_k)\mathbf{p}_{j}}+\sigma _{{n}}^2}{\sum\limits_{j=1}^{K} {\mathbf{p}_{j}^{H}\hat{\mathbf{h}}_{k}\hat{\mathbf{h}}{}^{H}_{k}\mathbf{p}_{j}}+\sum\limits_{\substack{j\in \mathcal{I}}}{\mathbf{p}_{j}^{H}\mathbf{\Phi}_k\mathbf{p}_{j}}+ \sigma _{{n}}^2}\bBigg@{4})\label{Rck1_2}
\end{align}
and
\begin{align}
R_{k}=\log_{2}\bBigg@{4}(\frac{\sum\limits_{\substack{j=1}}^{K} \mathbf{p}_{j}^{H}\hat{\mathbf{h}}_{k}\hat{\mathbf{h}}{}^{H}_{k}\mathbf{p}_{j}\!\!+\!\!\sum\limits_{j\in \mathcal{I}}{\mathbf{p}_{j}^{H}\mathbf{\Phi}_k\mathbf{p}_{j}}+\sigma _{{n}}^2}{\sum\limits_{\substack{j=1\\j\neq k}}^{K} \mathbf{p}_{j}^{H}\hat{\mathbf{h}}_{k}\hat{\mathbf{h}}{}^{H}_{k}\mathbf{p}_{j}+\!\!\sum\limits_{j\in \mathcal{I}}{\mathbf{p}_{j}^{H}\mathbf{\Phi}_k\mathbf{p}_{j}}+\sigma _{{n}}^2}\bBigg@{4}).
\label{Rk2_2}
\end{align}
By using a combined precoding vector $\mathbf{p}=[\mathbf{p}_{1}^{H},\dots,\mathbf{p}_{K}^{H},\mathbf{p}_{c}^{H}]^{H} \in \mathbb{C}^{N_{t}(K+1) \times 1}$, the numerator term in (\ref{Rck1_2}) can be expressed by
\begin{align}{\sum\limits_{\substack{j\in \mathcal{I}}}{\mathbf{p}_{j}^{H}(\hat{\mathbf{h}}_{k}\hat{\mathbf{h}}{}^{H}_{k}+\mathbf{\Phi}_k)\mathbf{p}_{j}}+\sigma _{{n}}^2}=\mathbf{p}^{H}\mathbf{A}_{k}\mathbf{p},
\end{align}
where $\mathbf{A}_{k}\in \mathbb{C}^{N_{t} (K+1) \times N_{t} (K+1)}$ is a block diagonal and positive definite matrix defined by
\begin{align}
\boldsymbol{\mathrm{A}}_{k}=
\begin{bmatrix}
\hat{\mathbf{h}}_{k}\hat{\mathbf{h}}{}^{H}_{k}& 0 &\cdots & 0 \\
0 & \hat{\mathbf{h}}_{k}\hat{\mathbf{h}}{}^{H}_{k} & \cdots &0 \\
\vdots & \vdots &\ddots & \vdots \\
0 &0 & \cdots & \hat{\mathbf{h}}_{k}\hat{\mathbf{h}}{}^{H}_{k}
\end{bmatrix}\nonumber\\
+\left(\frac{\sigma _{{n}}^2}{P_{t}}+\sigma _{\mathrm{e},k}^2\right)\mathbf{I}_{N_t(K+1)}
\label{Ak}
\end{align}
under the assumption that all transmission power is used, i.e., $\norm{\mathbf{p}}^{2}=P_t$, and $\mathbf{\Phi}_k=\sigma _{\mathrm{e},k}^2\mathbf{I}$.
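A minimal sketch of constructing $\mathbf{A}_{k}$ per (\ref{Ak}) (names are illustrative; the isotropic error model and full power use are assumed as stated above):

```python
import numpy as np

def build_Ak(h_hat_k, K, Pt, sigma_n2, sigma_e2):
    """A_k of (Ak): K+1 copies of h_hat h_hat^H on the block diagonal,
    plus (sigma_n^2 / Pt + sigma_e^2) I, assuming Phi_k = sigma_e2 * I
    and full power use ||p||^2 = Pt."""
    Nt = len(h_hat_k)
    block = np.outer(h_hat_k, h_hat_k.conj())      # h_hat h_hat^H
    A = np.kron(np.eye(K + 1), block)              # K+1 diagonal blocks
    A += (sigma_n2 / Pt + sigma_e2) * np.eye(Nt * (K + 1))
    return A
```

With $\lVert\mathbf{p}\rVert^2=P_t$, the quadratic form $\mathbf{p}^{H}\mathbf{A}_{k}\mathbf{p}$ reproduces the numerator of (\ref{Rck1_2}) term by term.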
We apply the same approach to the denominator and numerator terms of (\ref{Rck1_2}) and (\ref{Rk2_2}). Each term can be rewritten as
\begin{align}
{\sum\limits_{j=1}^{K} {\mathbf{p}_{j}^{H}\hat{\mathbf{h}}_{k}\hat{\mathbf{h}}{}^{H}_{k}\mathbf{p}_{j}}+\sum\limits_{\substack{j\in \mathcal{I}}}{\mathbf{p}_{j}^{H}\mathbf{\Phi}_k\mathbf{p}_{j}}+ \sigma _{{n}}^2}=\mathbf{p}^{H}\mathbf{B}_{k}\mathbf{p},\\
{\sum\limits_{\substack{j=1\\j\neq k}}^{K} \mathbf{p}_{j}^{H}\hat{\mathbf{h}}_{k}\hat{\mathbf{h}}{}^{H}_{k}\mathbf{p}_{j}+\!\!\sum\limits_{j\in \mathcal{I}}{\mathbf{p}_{j}^{H}\mathbf{\Phi}_k\mathbf{p}_{j}}+\sigma _{{n}}^2}=\mathbf{p}^{H}\mathbf{D}_{k}\mathbf{p},
\end{align}
where the positive definite matrices $\mathbf{B}_{k}$ and $\mathbf{D}_{k}$ are defined as
\begin{align}
\boldsymbol{\mathrm{B}}_{k}=\boldsymbol{\mathrm{A}}_{k}-
\begin{bmatrix}
0 & \cdots & \cdots & 0 \\
\vdots & \ddots & \ddots & \vdots \\
0 & \cdots & 0 &0\\
0 & \cdots & 0 & \hat{\mathbf{h}}_{k}\hat{\mathbf{h}}{}^{H}_{k}
\end{bmatrix},
\label{Bk}
\end{align}
\begin{align}
\boldsymbol{\mathrm{D}}_{k}=\boldsymbol{\mathrm{B}}_{k}-
\begin{bmatrix}
{0} & \cdots & 0 & \cdots & 0 \\
\vdots& \ddots &\vdots & \ddots &\vdots \\
0 & \cdots &
{\hat{\mathbf{h}}_{k}\hat{\mathbf{h}}{}^{H}_{k}}
& \cdots & 0 \\
\vdots & \ddots & \vdots & \ddots & \vdots \\
0 & \cdots & 0 & \cdots & {0}
\end{bmatrix}.
\label{Dk}
\end{align}
The matrix in the second term of (\ref{Dk}) is the block diagonal matrix $\mathrm{diag}(\bm{0},\dots,\hat{\mathbf{h}}_{k}\hat{\mathbf{h}}{}^{H}_{k},\dots,\bm{0})$,
in which $\hat{\mathbf{h}}_{k}\hat{\mathbf{h}}{}^{H}_{k}$ is the $k$th diagonal sub-block.
As a result, the achievable rates are simplified to
\begin{align}
R_{c,k}=\log_{2}\left(\frac{\mathbf{p}^{H}\mathbf{A}_{k}\mathbf{p}}{\mathbf{p}^{H}\mathbf{B}_{k}\mathbf{p}}\right),~R_{k}=\log_{2}\left(\frac{\mathbf{p}^{H}\mathbf{B}_{k}\mathbf{p}}{\mathbf{p}^{H}\mathbf{D}_{k}\mathbf{p}}\right),\label{RR3}
\end{align}
and the objective function of the optimization problem $\bm{\mathsf{(P1)}}$ can also be simplified to
\begin{align}
f(\mathbf{p})=\sum\limits_{k=1}^{K}\log_{2}\left(\frac{\mathbf{p}^{H}\mathbf{B}_{k}\mathbf{p}}{\mathbf{p}^{H}\mathbf{D}_{k}\mathbf{p}}\right)+\min_{j}\log_{2}\left(\frac{\mathbf{p}^{H}\mathbf{A}_{j}\mathbf{p}}{\mathbf{p}^{H}\mathbf{B}_{j}\mathbf{p}}\right).
\end{align}
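The objective involves only ratios of quadratic forms; a hedged numerical sketch of evaluating it (illustrative names, not the paper's code):

```python
import numpy as np

def f_obj(p, A, B, D):
    """f(p) = sum_k log2(p^H B_k p / p^H D_k p)
            + min_j log2(p^H A_j p / p^H B_j p),
    with A, B, D lists of positive definite matrices."""
    quad = lambda M: np.real(np.vdot(p, M @ p))            # p^H M p
    private = sum(np.log2(quad(Bk) / quad(Dk)) for Bk, Dk in zip(B, D))
    common = min(np.log2(quad(Aj) / quad(Bj)) for Aj, Bj in zip(A, B))
    return private + common
```

Since every quadratic form scales by $|\alpha|^2$ under $\mathbf{p}\to\alpha\mathbf{p}$, the ratios cancel the scaling and a quick check confirms $f(\mathbf{p})=f(2.7\,\mathbf{p})$.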
In this case, because $f(\mathbf{p})=f(\alpha \mathbf{p})$ for any non-zero scalar $\alpha$, the power constraint (\ref{P1:const2}) can be ignored. Each rate is written as a difference of concave functions, e.g., $R_k=\log_2(\mathbf{p}^{H}\mathbf{B}_{k}\mathbf{p})-\log_2(\mathbf{p}^{H}\mathbf{D}_{k}\mathbf{p})$, which is non-convex. Thus, finding the optimal solution is difficult due to the non-convexity of the objective function.
In order to solve the non-convex problem, we propose an algorithm based on alternating optimization.
\subsection{Proposed Optimization Algorithm}
Before finding a solution, to transform the problem, we derive upper and lower bounds of the denominator and numerator in (\ref{RR3}) by using auxiliary variables $a_{k},b_{k},c_{k},d_{k}$:
\begin{align}
\mathbf{p}^{H}\mathbf{A}_{k}\mathbf{p}\geq e^{a_{k}},
\,\mathbf{p}^{H}\mathbf{B}_{k}\mathbf{p}\geq e^{c_{k}},
\label{AC:bound}
\end{align}
\begin{align}
\mathbf{p}^{H}\mathbf{B}_{k}\mathbf{p}\leq e^{b_{k}},
\,\mathbf{p}^{H}\mathbf{D}_{k}\mathbf{p}\leq e^{d_{k}}.
\label{BD:bound}
\end{align}
Using these bounds, we can derive a lower bound of the objective function $f(\mathbf{p})$:
\begin{align}
f(\mathbf{p})&\geq \frac{1}{\ln{2}}\left[\sum\limits_{k=1}^{K}\ln \left(\frac{e^{c_k}}{e^{d_k}}\right)+\min_{j}\ln\left(\frac{e^{a_j}}{e^{b_j}}\right)\right]\nonumber
\\
&=\frac{1}{\ln{2}}\left[\sum\limits_{k=1}^{K}\left(c_{k}-d_{k}\right)+\min_{j}(a_{j}-b_{j})\right].
\label{f:bound}
\end{align}
Finally, by adding one more slack variable $l_{c}$ for the term $\min_{j}(a_{j}-b_{j})$,
the optimization problem $\bm{\mathsf{(P1)}}$ can be transformed into $\bm{\mathsf{(P2)}}$, given by
\begin{maxi!}|s|[0]
{\substack{\mathbf{a},\mathbf{b},\mathbf{c},\mathbf{d}\\\mathbf{p},l_{c}}}{\sum\limits_{k=1}^{K}(c_{k}-d_{k})+l_{c}}
{\label{P2}}{\bm{\mathsf{(P2):}}\nonumber}
\addConstraint{a_{k}-b_{k}\geq l_{c}, k=1,\dots,K}
\addConstraint{(\ref{AC:bound}), (\ref{BD:bound})},\nonumber
\end{maxi!}
where $\mathbf{a}\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptscriptstyle\Delta}}}[a_{1},\dots,a_{K}]$, $\mathbf{b}\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptscriptstyle\Delta}}}[b_{1},\dots,b_{K}]$, $\mathbf{c}\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptscriptstyle\Delta}}}[c_{1},\dots,c_{K}]$, and $\mathbf{d}\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptscriptstyle\Delta}}}[d_{1},\dots,d_{K}]$.
The problem is still non-convex, since the constraints (\ref{AC:bound}), (\ref{BD:bound}) are non-convex.
For constraint (\ref{AC:bound}), we apply the SDR technique, which obtains a solution close to the optimal one for non-convex quadratically constrained quadratic programs (QCQPs) \cite{SDR}.
SDR converts a non-convex problem into a convex one by removing the rank-one constraint that causes the non-convexity.
To apply SDR, we transform the quadratic term of (\ref{AC:bound}) as
\begin{align}
\mathbf{p}^{H}\mathbf{A}_{k}\mathbf{p}=\mathrm{tr}(\mathbf{p}^{H}\mathbf{A}_{k}\mathbf{p})=\mathrm{tr}(\mathbf{A}_{k}\mathbf{p}\mathbf{p}^{H})=\mathrm{tr}(\mathbf{A}_{k}\mathbf{X})
\end{align}
by replacing $\mathbf{p}\mathbf{p}^{H}$ with $\mathbf{X}$ subject to the constraints $\mathbf{X}\succeq 0$ and $\mathrm{rank}(\mathbf{X})=1$. The constraint (\ref{AC:bound}) then becomes convex once the rank constraint is removed.
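The lifting step can be checked numerically; the snippet below (illustrative only) verifies $\mathbf{p}^{H}\mathbf{A}\mathbf{p}=\mathrm{tr}(\mathbf{A}\mathbf{X})$ for a rank-one $\mathbf{X}=\mathbf{p}\mathbf{p}^{H}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = M.conj().T @ M                       # Hermitian PSD stand-in for A_k
p = rng.standard_normal(n) + 1j * rng.standard_normal(n)

X = np.outer(p, p.conj())                # lifted variable X = p p^H
quadratic = np.real(np.vdot(p, A @ p))   # p^H A p
lifted = np.real(np.trace(A @ X))        # tr(A X)

assert abs(quadratic - lifted) < 1e-8
assert np.linalg.matrix_rank(X) == 1     # the constraint that SDR drops
```

The identity holds for any $\mathbf{p}$; SDR relaxes only the rank-one structure, not the quadratic form itself.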
Generally, an optimal solution of the relaxed problem may not satisfy the rank constraint, which implies that an additional step is required to construct a feasible solution that does.
This issue will be tackled after the algorithm is described.
\begin{algorithm}[t]
\caption{Alternating Optimization based on SDR and CCCP}
\label{Algo}
\begin{algorithmic}[1]
\STATE Initialize $b_{k}^{(0)}, d_{k}^{(0)}$, set $s=0$, and set $\epsilon$ to a small value\\
\STATE \algorithmicrepeat
\STATE $s\leftarrow s+1$
\STATE Given $b_{k}^{(s-1)}, d_{k}^{(s-1)}$ , solve the problem $\mathsf{(P3)}$ and obtain optimal $\mathbf{X}^*,a_{k}^*,b_{k}^*,c_{k}^*,d_{k}^*,l_{c}^*$
\STATE Update $b_{k}^{(s)}\leftarrow b_{k}^*$, $d_{k}^{(s)}\leftarrow d_{k}^*$
\STATE \algorithmicuntil \,\, convergence of $b_{k}^{(s)}, d_{k}^{(s)}$
\STATE Decompose $\mathbf{X}^*=\mathbf{U \Sigma U}^{H}$
\STATE Generate a sufficient number of random vectors $\mathbf{r}\sim \mathcal{CN} (0,\mathbf{I}_{N_{t} (K+1)})$
\STATE Choose the best $\mathbf{\bar{r}}=\mathbf{U \Sigma}^{1/2}\mathbf{r}$ as the solution $\mathbf{p}^*$
\end{algorithmic}
\end{algorithm}
The constraint functions in (\ref{BD:bound}) are difference-of-convex (DC) functions, which are generally non-convex.
For this problem, we approximate each DC function by a convex function via the CCCP, which guarantees a locally optimal solution of the DC problem \cite{MM_algorithm}.
For this approximation, we linearize the convex exponential terms in (\ref{BD:bound}) by a first-order Taylor expansion; since the linearization globally under-estimates a convex function, this yields a convex restriction of the constraints.
Finally, constraints at the $s$th iteration can be denoted by
\begin{align}
\mathrm{tr}(\mathbf{A}_{k}\mathbf{X})\geq e^{a_{k}},
~\mathrm{tr}(\mathbf{B}_{k}\mathbf{X})\geq e^{c_{k}},
\label{AC:Relx}
\end{align}
\begin{align}
\mathrm{tr}(\mathbf{B}_{k}\mathbf{X})\leq e^{b_{k}^{(s-1)}}(b_{k}-b_{k}^{(s-1)}+1),
\label{B:Relx}
\end{align}
\begin{align}
\mathrm{tr}(\mathbf{D}_{k}\mathbf{X})\leq e^{d_{k}^{(s-1)}}(d_{k}-d_{k}^{(s-1)}+1).
\label{D:Relx}
\end{align}
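The linearization in (\ref{B:Relx}) and (\ref{D:Relx}) is a global under-estimator of the convex exponential, which is what makes the restricted constraints valid; a small sketch:

```python
import math

def lin_exp(t, t0):
    """First-order Taylor expansion of e^t around t0, as used in
    (B:Relx) and (D:Relx): e^{t0} (t - t0 + 1)."""
    return math.exp(t0) * (t - t0 + 1.0)

# Since e^t is convex, the linearization is a global under-estimator, so
# replacing e^{b_k} (resp. e^{d_k}) by lin_exp restricts the <= constraints,
# giving the convex inner approximation that CCCP iterates on.
```

The bound is tight at the expansion point $t=t_0$, which is why re-expanding around the previous iterate $b_k^{(s-1)}, d_k^{(s-1)}$ yields a monotonically improving sequence.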
The sub-problem $\bm{\mathsf{(P3)}}$ at the $s$th iteration is a convex problem expressed as
\begin{maxi!}|s|[0]
{\substack{\mathbf{a},\mathbf{b},\mathbf{c},\mathbf{d}\\\mathbf{p},l_{c}}}{\sum\limits_{k=1}^{K}(c_{k}-d_{k})+l_{c}}
{\label{P3}}{\bm{\mathsf{(P3):}} \nonumber}
\addConstraint{a_{k}-b_{k}\geq l_{c}, k=1,\dots,K}
\addConstraint{\mathbf{X}\succeq 0}
\addConstraint{(\ref{AC:Relx}), (\ref{B:Relx}), (\ref{D:Relx})}.\nonumber
\end{maxi!}
The values of $b_{k}^{(s-1)}$ and $d_{k}^{(s-1)}$ are updated by solving the sub-problem $\bm{\mathsf{(P3)}}$, and a locally optimal solution is obtained after a sufficient number of iterations. The detailed process is given in Algorithm \ref{Algo}. The convex sub-problem can be solved using the CVX toolbox \cite{CVX}.
Note that an obtained solution $\mathbf{X}^*$ may not satisfy the constraint $\mathrm{rank}(\mathbf{X})=1$. Therefore, we refine $\mathbf{X}^*$ to satisfy the rank constraint.
Since $\mathbf{X}^{*}$ is a Hermitian positive semidefinite matrix, it can be decomposed in the form $\mathbf{X}^*=\mathbf{U \Sigma U}^{H}$ by the singular value decomposition. Then, we generate a sufficient number of random vectors $\mathbf{r}\in \mathbb{C}^{N_{t}(K+1)\times1}$ and obtain $\mathbf{\bar{r}}=\mathbf{U \Sigma}^{1/2}\mathbf{r}$. Finally, we choose the $\mathbf{\bar{r}}$ maximizing the objective function $f(\mathbf{p})$ as the final solution $\mathbf{p}^*$.
It has been shown that SDR with a sufficiently large number of random vectors guarantees a solution close to the optimal one \cite{SDR}.
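Steps 7--9 of Algorithm \ref{Algo} can be sketched as follows. This is a minimal illustration, not the paper's implementation: \texttt{objective} stands for $f(\mathbf{p})$, and an eigendecomposition is used, which coincides with the SVD for a Hermitian positive semidefinite $\mathbf{X}$:

```python
import numpy as np

def gaussian_randomization(X, objective, n_draws=200, seed=0):
    """Recover a candidate p from a relaxed SDR solution X (Hermitian PSD):
    decompose X = U Sigma U^H, draw r ~ CN(0, I), form U Sigma^{1/2} r,
    and keep the draw that maximizes the objective."""
    w, U = np.linalg.eigh(X)                 # eigh == SVD for Hermitian PSD X
    w = np.clip(w, 0.0, None)                # guard tiny negative eigenvalues
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    best, best_val = None, -np.inf
    for _ in range(n_draws):
        r = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
        cand = U @ (np.sqrt(w) * r)          # U Sigma^{1/2} r
        val = objective(cand)
        if val > best_val:
            best, best_val = cand, val
    return best
```

When $\mathbf{X}$ happens to be rank one, every draw is collinear with the underlying $\mathbf{p}$, so the randomization is exact in that case.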
The overall procedure of the proposed algorithm is described in Algorithm \ref{Algo}.
\begin{figure}[t]
\includegraphics[width=1\linewidth]{converge.pdf}
\centering
\caption{Convergence of proposed algorithm according to the number of transmitter antennas and users}
\label{Converge}
\end{figure}
$\mathbf{Remark~(Convergence):}$ In Algorithm 1, we randomly initialize the values of $b_{k}^{(0)}$ and $d_{k}^{(0)}$.
In Fig. \ref{Converge}, it is numerically confirmed that the proposed algorithm converges to a finite value as the number of iterations increases, for SNR $=20$ dB and $\sigma_{\mathrm{e},k}^2=0.1$. The value of the objective function increases gradually without fluctuation.
We can observe that
as the number of transmitter antennas and users increases, the number of iterations required for convergence increases.
\section{Simulation Results}
We assume $\mathbf{h}_{k}$ has i.i.d.\ complex Gaussian entries with zero mean and unit variance, i.e., $\mathbf{h}_{k}\sim \mathcal{CN}(\bm{0},\mathbf{I})$. The variance of the AWGN is fixed as $\sigma_{n}^2=1$. The channel error is also complex Gaussian distributed, i.e., $\mathbf{e}_{k}\sim \mathcal{CN}(\bm{0},\sigma_{\mathrm{e},k}^2 \mathbf{I})$. The estimated channel $\hat{\mathbf{h}}_{k}$ is independent of the channel error and is complex Gaussian with zero mean and covariance $(1-\sigma_{\mathrm{e},k}^2)\mathbf{I}$, i.e., $\hat{\mathbf{h}}_{k}\sim \mathcal{CN}(\bm{0},(1-\sigma_{\mathrm{e},k}^2)\mathbf{I})$. We also assume that all channel errors have the same covariance matrix, $\sigma_{\mathrm{e},k}^2\mathbf{I}=\sigma_{\mathrm{e}}^2\mathbf{I}$.
%
\subsection{Comparison with Conventional Multiple Access Techniques}
In this section, we consider a 2-user scenario and provide simulation results to compare with existing multiple access strategies: SDMA, NOMA, and OMA. It has been shown that these conventional strategies are special cases of RSMA in a 2-user scenario \cite{RS:uni}. As shown in TABLE \ref{tab1}, RSMA can boil down to conventional strategies depending on the power levels allocated to the streams.
When user-1 has a stronger channel than user-2, the private stream of user-2 should be turned off, resulting in NOMA.
Regardless of the number of users, RSMA works as SDMA when the common stream is turned off. Thus, the optimal power allocation and precoding vectors in NOMA and SDMA can be carried out by modifying Algorithm 1.
Specifically, the precoding vector of the private stream of user-2 is set to zero vector in case of NOMA, while that of the common stream is set to zero vector in case of SDMA.
Also, RSMA can be reduced to OMA by allocating the total transmit power to one private stream within a time slot.
For OMA, we apply maximum ratio transmission (MRT) to precoding vector and assume that the same time resource is allocated to user-1 and user-2 for fairness.
\begin{figure}[t]
\includegraphics[width=1.03\linewidth]{SNR4.pdf}
\centering
\caption{Sum-rate comparison between RSMA and conventional schemes, where $\sigma_{\mathrm{e}}^2=0.05$, $K=2$, and $N_t=2$}
\label{SNR1}
\end{figure}
\begin{table}[t]
\caption{Power assigned to each stream according to different multiple access schemes}
\begin{center}
\begin{tabular}{|c|c|c|c|}\hline
\cellcolor{Gray}$\mathbf{Multiple~Access}$& ~~~~$s_1$~~~~&~~~~$s_2$~~~~& ~~~~$s_c$~~~~\\
\hhline{|=|=|=|=|}
\cellcolor{Gray}$\mathbf{RSMA}$&$P_1$& $P_2$& $P_c$ \\
\hline
\cellcolor{Gray}~~$\mathbf{NOMA}$~~&$P_1$& 0& $P_c$ \\
\hline
\cellcolor{Gray}$\mathbf{SDMA}$&$P_1$& $P_2$& 0 \\
\hline
\cellcolor{Gray}$\mathbf{OMA}$&$P_1$&0& 0 \\
\hline
\end{tabular}
\label{tab1}
\end{center}
\end{table}
In order to confirm the usefulness of the channel error information, a scenario in which the BS has no information about channel error, labeled as no-info $\sigma_e$, is further considered with the schemes described above.
In other words, the BS optimizes the precoding vectors by considering $\hat{\mathbf{h}}_{k}$ as the perfect channel estimate.
Note that in OMA, since the precoding vector and the power allocation are fixed, the sum-rate performance does not depend on whether the BS has information about the channel error.
We illustrate the achievable sum-rate of the proposed method according to SNR when the BS has two antennas and $\sigma_{\mathrm{e}}^2=0.05$.
As shown in Fig. \ref{SNR1}, RSMA has a better performance than other multiple accesses.
The gap in sum-rate between RSMA and the other schemes is notable in high SNR.
It is worth pointing out that these benefits come from the flexibility of RSMA that generalizes and bridges the conventional schemes for 2-user scenario.
Compared to SDMA, the use of the common stream in RSMA offers more design flexibility in jointly optimizing its precoding vector and power allocation.
Under perfect CSIR, increasing the transmit power of the desired signal does not increase the interference.
However, under imperfect CSIR, the interference from the channel error is also increased as the desired signal transmit power is increased.
Thus, GMI and the sum-rate are saturated at high SNR.
Furthermore, it is described in Fig. \ref{SNR1} that the performance is degraded in the absence of information about channel error and RSMA is less sensitive to the knowledge on channel errors than the other schemes.
\begin{figure}[t]
\includegraphics[width=0.97\linewidth]{SNR3.pdf}
\centering
\caption{Sum-rates of RSMA achieved by proposed and fixed precoding vectors according to SNR and $\sigma_e^2$, where $K=2$ and $N_t=2$}
\label{SNR2}
\end{figure}
\begin{table}[t]
\caption{Precoding vector of private stream with ZF and MRT}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
\cellcolor{Gray}$\mathbf{Precoding}$& Direction of precoding vector for private stream\\
\hhline{|=|=|}
\cellcolor{Gray}&{}\\[-0.7em]
\cellcolor{Gray}$\mathbf{ZF}$& \begin{tabular}{@{}c@{}}$\mathbf{v}_{k}=\mathbf{w}_{k}/\norm{\mathbf{w}_{k}}$ where $ \mathbf{W}=\hat{\mathbf{H}}^{H}(\hat{\mathbf{H}}\hat{\mathbf{H}}^{H})^{-1},$ \\ $\mathbf{W}=[\mathbf{w}_{1},\cdot\cdot\cdot,\mathbf{w}_{K}]$, and $\hat{\mathbf{H}}=[\hat{\mathbf{h}}_1,\cdot\cdot\cdot,\hat{\mathbf{h}}_K]^H$\end{tabular}
\\
\cellcolor{Gray}&{}\\[-0.7em]
\hline
\cellcolor{Gray}&{}\\[-0.7em]
\cellcolor{Gray}$\mathbf{MRT}$&$\mathbf{v}_{k}=\hat{\mathbf{h}}_{k}/||{\hat{\mathbf{h}}}_{k}||$\\[3pt]
\hline
\end{tabular}
\label{tab2}
\end{center}
\end{table}
\subsection{Impact of Proposed Precoding in RSMA}
In this section, the performance of the precoding vectors is analyzed by comparing simulation results of RSMA with the proposed precoding vectors against RSMA with existing fixed precoding vectors, zero-forcing (ZF) and MRT.
ZF beamforming aims to remove the interference by nulling, i.e. $|\hat{\mathbf{h}}{}^{H}_{k}\mathbf{p}_{j}|=0,~j \neq k$ \cite{ZF-BF} and MRT refers to precoding the stream in the same direction with the channel vector.
ZF and MRT are applied to the precoding vectors of the private streams. As shown in TABLE \ref{tab2}, ZF and MRT determine the direction of the precoding vector for the private stream based on the estimated channel.
As a result, the direction of the precoding vectors for the private stream, $\mathbf{v}_k=\mathbf{p}_k/\norm{\mathbf{p}_{k}},~k=1,\dots,K$, are fixed. Thus, power allocation, $P_k=\norm{\mathbf{p}_{k}}$, and precoding vector for the common stream ${\mathbf{p}_{c}}$ should be optimized.
We optimize the sum-rate of RSMA-ZF/MRT with an approach similar to that of the proposed RSMA, by formulating an optimization problem over $P_k$ and ${\mathbf{p}_{c}}$.
Fig. \ref{SNR2} demonstrates the changes in the performance of RSMA with respect to channel error covariance $\sigma_e^2$ and SNR, where $K=2$ and $N_t=2$.
The results confirm that the rate under imperfect CSI has different characteristics from that obtained under perfect CSI.
Under perfect CSI, it is well known that MRT is near optimal in low SNR and ZF is asymptotically optimal in high SNR \cite{ZF_MRT}.
However, imperfect knowledge of the channel brings out residual interference caused by the inaccurate operation of precoding (at the transmitter) as well as of coherent detection including SIC (at the receiver), which effectively corresponds to a low-SNR scenario.
When the variance of channel error, $\sigma_e^2$, is dominant, RSMA tends to operate as in low SNR under perfect CSI so that the optimized beamformers approximate to MRT.
On the other hand, when the channel error variance is very small, e.g., $\sigma_e^2\approx0$, ZF can provide a better sum-rate performance than MRT due to negligible residual interference in the high SNR regime.
As shown in Fig. \ref{SNR2}, RSMA with the proposed precoding vector performs strictly better than RSMA with ZF and MRT precoders, regardless of the variance of the channel error, over the entire range of SNR.
%
\section{Conclusion}
In this paper, we have studied a robust design of RSMA when perfect CSI is not available at both transmitter and receiver.
To tackle the sum-rate maximization problem, which turned out to be non-convex, the proposed algorithm utilizes SDR and the CCCP to jointly optimize the precoding vectors and power allocation.
The simulation results have numerically shown that the proposed RSMA achieves an enhanced sum-rate performance compared to the conventional multiple access schemes.
Also, RSMA with joint optimization of the power allocation and precoding vectors provides a sum-rate improvement over RSMA in which the fixed precoding schemes, ZF and MRT, are applied to the private streams.
From these results, it can be seen that the proposed RSMA is a powerful technique in terms of the sum-rate performance under imperfect CSIR and CSIT.
%
%
\section*{Acknowledgment}
This work was supported by the Basic Science Research Programs
under the National Research Foundation of Korea (NRF) through the Ministry
of Science and ICT under Grants NRF-2019R1C1C1006806.
\bibliographystyle{IEEEtran}
\section{INTRODUCTION}
\hspace{\parindent}
Magnetic metamaterials \cite{Heyderman_Review_2013} composed of mesospins as building blocks \cite{ostman_ising-like_2018}, offer the possibility of tailoring magnetic interactions and dynamics in almost arbitrary ways.
Previous investigations have mainly been focused on the collective magnetic order, dynamics and more exotic aspects such as frustration of artificial spin systems \cite{Heyderman_Review_2013, Nisoli_2013, Nisoli:2017hg, Rougemaille_2019}, while the internal magnetisation and dynamics of the mesospins are less explored \cite{Gliga_PRL_2013, Gliga_PRB_2015, Sloetjes_arXiv_2020}. For instance, extensive efforts have been made to mimic magnetic systems of various spatial and spin dimensionalities, where the shape of the elements has been used to enforce mesospins to be Ising- or XY-like \cite{ostman_ising-like_2018,sendetskyi_continuous_2019,ewerlin_magnetic_2013,streubel_spatial_2018,leo_collective_2018,arnalds2016new}. Even systems consisting of \textit{both} Ising- and XY-mesospins have been fabricated and investigated, which is a testament to the versatility of metamaterials \cite{Arnalds_XY,ostman_interaction_2018}. The common denominator in all these investigations, however, is that the spin dimensionality is treated as a static property of the elements. Yet, the mesospins offer access to continuous degrees of freedom and rich internal magnetic states that go well beyond their atomic analogues. This attribute is the focal point of the present study, as we turn our attention to arrays of mesospins that can exhibit two distinct magnetisation states: collinear and vortex spin textures \cite{shinjo_magnetic_2000, Klaui_vortx_2003}.
The mesospins can be thought of as having variable spin dimensionality, where the interaction strength depends on the inner magnetic textures. It is therefore possible to couple the changes in spin dimensionality to changes in the collective properties of the mesospins. The coupling of these internal and external degrees of freedom, in combination with the exploration of the role of topological effects on the observed transitions \cite{Mermin:1979io, Tchernyshyov:2005gs}, is the main motivation behind the current work.
\section{MATERIALS AND METHODS}
\subsection{Sample manufacturing}
\hspace{\parindent}
Two sets of samples were prepared: one set for photo-emission electron microscopy (PEEM) studies employing x-ray magnetic circular dichroism (PEEM-XMCD) and the other set for magneto-optical Kerr effect (MOKE) measurements. The samples were prepared by depositing elemental Fe (99.95 at\%) and Pd (99.95 at\%) in an ultra-high vacuum (base pressure below $\sim 2 \times 10^{-7}$ Pa) DC magnetron sputtering system, operated with high-purity argon gas (99.995 \%). The following sample structure was used: fused silica/Pd [40 nm]/Fe$_{13}$Pd$_{87}$ [10 nm]/Pd [2 nm] \cite{ostman_hysteresis-free_2014}. After growth, electron beam lithography (EBL) was used to pattern the Fe$_{13}$Pd$_{87}$ layers into circular islands arranged on square lattices (see Fig.~\ref{fig1}). The Ar$^+$-ion milling process, following development of the EBL resist and subsequent mask deposition, was stopped prior to penetration through the Pd seed layer, thereby providing electrical continuity across the whole sample surface. The PEEM-XMCD samples with disk diameters of $D = [75, 150, 350]$ nm were fabricated with two inter-disk distances: one with the nearest-neighbour distance set to $G = D + 40$ nm, rendering the inter-disk coupling sufficiently weak to be ignored, and the other with an edge-to-edge gap of $G = 20$ nm to promote interactions between the magnetic elements \cite{guslienko_coupling_2001}. The MOKE samples were patterned into disks with diameters of $D = [250, 350, 450]$ nm and a small edge-to-edge gap of $G = 20$ nm.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=1\linewidth]{Figure1.pdf}
\caption{The magnetic metamaterial - square arrays of circular islands. Scanning electron microscopy image of a representative structure with disks of a diameter $D = 150$ nm. (Left) Interacting disks with a gap of $G=20$ nm. The inset shows the same structure at a higher magnification. (Right) Non-interacting disks with $G = D + 40$ nm. The major symmetry axes of the patterned square lattices, [10] and [11] are also denoted.}
\label{fig1}
\end{center}
\end{figure}
\subsection{PEEM-XMCD}
\hspace{\parindent}
To determine the magnetic state of the non-interacting and strongly coupled disks, photoemission electron microscopy employing x-ray magnetic circular dichroism (PEEM-XMCD) was performed at the HERMES beamline at the SOLEIL synchrotron \cite{belkhou_hermes_2015}, and the 11.0.1 beamline at the Advanced Light Source \cite{doran_cryogenic_2012}. Prior to imaging, the samples were cooled from room temperature to roughly 100 K, in the absence of any external magnetic field. The energy of the synchrotron beam was tuned to the Fe L$_3$-edge (708.4 eV), and magnetic contrast was obtained from the asymmetry ratio of intensities between right- and left-handed circularly polarised synchrotron radiation.
\subsection{Micromagnetic simulations and topological considerations}
\hspace{\parindent}
To understand the role of the interactions, we chose to explore the energies of the vortex and collinear state for different disk sizes, using the MuMax$^3$ micromagnetic simulation software \cite{vansteenkiste_design_2014}. The simulations, used to mimic the experimental conditions, were performed using one single disk, as well as square arrays of disks ($G = 20$ nm), with periodic boundary conditions. The saturation magnetisation, $M_{\text{sat}} = 3.5 \cdot 10^{5}$ A/m, and exchange stiffness, $A_{\text{ex}} = 3.36 \cdot 10^{-12}$ J/m, were chosen based on previous work by Östman et al. \cite{ostman_hysteresis-free_2014} and Ciuciulkaite et al. \cite{ciuciulkaite_collective_2019}. The in-plane cell size was set to 0.50(1)$l_{\text{ex}}$, where $l_{\text{ex}}$ is the exchange length as defined by $M_{\text{sat}}$ and $A_{\text{ex}}$ \cite{vansteenkiste_design_2014}.
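For reference, the cell size quoted above can be reproduced from the stated material parameters, assuming the standard definition $l_{\text{ex}}=\sqrt{2A_{\text{ex}}/(\mu_0 M_{\text{sat}}^2)}$ used by MuMax$^3$ (a sketch; the rounding convention is ours):

```python
import math

MU0 = 4 * math.pi * 1e-7      # vacuum permeability [T*m/A]
M_SAT = 3.5e5                 # saturation magnetisation [A/m]
A_EX = 3.36e-12               # exchange stiffness [J/m]

# Exchange length defined from M_sat and A_ex (MuMax3 convention):
l_ex = math.sqrt(2 * A_EX / (MU0 * M_SAT ** 2))   # ~ a few nanometres
cell_size = 0.50 * l_ex       # in-plane cell size used in the simulations

print(f"l_ex = {l_ex * 1e9:.2f} nm, cell = {cell_size * 1e9:.2f} nm")
```

With these parameters the exchange length comes out to roughly 6--7 nm, so the in-plane cell size is on the order of 3 nm, well below the smallest disk diameter of 75 nm.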
\subsection{Hysteresis protocols}\label{hysprot}
\hspace{\parindent}
The magnetisation data, displayed in Fig.~\ref{Fig5}, was collected using a MOKE system in longitudinal configuration with the sample mounted on a cryostat, in a temperature range of 80 K $< T <$ 400 K. The $p$-polarised incident laser beam with $\lambda =$ 659 nm has a Gaussian profile with a spot diameter of roughly 2 mm. The number of islands contributing to the signal is therefore on the order of $10^7$. The reflected laser beam was passed through an analyser (extinction ratio of $10^5:1$) and then captured using a Si biased detector, connected to a pre-amplifier and lock-in amplifier. In addition, the lock-in amplifier was used to modulate the incident laser beam using a Faraday cell. The sinusoidal external magnetic field, applied along [10] of the square lattice (see Fig.~\ref{fig1}), had an amplitude of 40 mT and a frequency of 0.11 Hz unless otherwise stated. The magnetisation data in all figures have been binned (3:1) for aesthetic purposes.
\section{RESULTS AND DISCUSSION}\label{res}
\hspace{\parindent}
We begin by discussing the size dependence of internal spin textures and spontaneous magnetic order in circular islands. To this end, we fabricated samples with nominal disk diameters $D = [75, 150, 350]$ nm, having edge-to-edge gaps $G=[20, D+40]$ nm, on one and the same wafer. This approach allows us to disregard any uncertainty related to $e.g.$ composition and thickness of the islands. Representative parts of the samples with 150 nm islands are illustrated in Fig. \ref{fig1}. The results of photo-emission electron microscopy on these samples, with magnetic contrast obtained by using x-ray magnetic circular dichroism (PEEM-XMCD), are displayed in Fig. \ref{fig2}. As inferred from the black and white contrast, the disks with a diameter of 75 nm are all found to exhibit a collinear component for both small and large distances between the islands. Similarly, the 350 nm islands are found to be in a vortex state for both the short and long distance between the islands, with no preferred sense of rotation. When the disk diameter is 150 nm, a vortex state is obtained when the distance between the islands is large, while a substantial fraction of the mesospins have a collinear component when the distance between the islands is 20 nm. Thus, for this particular island diameter, the interaction appears to be strong enough to influence the inner magnetic states.
To understand the role of the interactions and the interplay between the involved length-scales, we need to have a look at the energy landscape of the magnetic textures.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=1\linewidth]{Figure2.pdf}
\caption{Diameter and interaction dependence of the disk magnetic texture. PEEM-XMCD images of interacting islands (left column), and non-interacting islands (right column), recorded at approximately 100 K after cooling from room temperature in the absence of external fields. Disks with a diameter of 350 nm have a preferred vortex texture for both interacting and non-interacting disks, while disks with 75 nm diameter end up in the collinear state in both cases. Disks with a diameter of 150 nm display a stabilisation of the collinear state when the distance between the islands is small, while otherwise exhibiting vortex textures. The red circles indicate the disk sizes and positions, obtained by overlapping the PEEM-X-ray absorption spectroscopy images.}
\label{fig2}
\end{center}
\end{figure}
The magnetic energy of the metamaterial can be expressed as $E = E_{\text{t}}+E_{\text{s}}+E_{\text{j}}$, where $E_{\text{t}}$ is the energy cost of the magnetic texture arising from exchange interactions within the islands, $E_{\text{s}}$ is the magnetostatic energy and $E_{\text{j}}$ is the energy associated with the magnetostatic coupling between the islands. By calculating the energy as a function of the vortex core displacement $r$ for a disk with radius $R$, a single path -- out of many -- in the energy landscape separating the collinear and the vortex state can be obtained. Ding et al. used the same reasoning when calculating the energy barrier separating the two states in a single Co dot analytically \cite{ding_magnetic_2005}. Here, we chose to do it numerically as the calculations can be generalised to include interactions of elements in an arbitrary array. The results obtained from calculations of interacting ($G = 20$ nm) and non-interacting ($G = D + 40$ nm) disks with $D = [75, 150, 350]$ nm are shown in Fig. \ref{fig3}.
When a vortex core is at the centre of an island, its displacement is defined to be zero and the energy is $E \approx E_{\text{t}}$ because $E_{\text{s}} \approx E_{\text{j}} \approx 0$. Shifting the core from the centre gives rise to a collinear component, and consequently a stray field with a corresponding magnetostatic energy. For the purpose of the calculation, the (virtual) vortex core can even be moved outside the disk ($r>R$), corresponding to a $C$-state (see Fig. \ref{fig3}), with varying degree of gradients in the magnetic texture.
The energy maxima obtained at $r \approx 0.9 R$ can be viewed as activation barriers separating the vortex and the collinear states.
The energy difference between the interacting and non-interacting islands
increases with increasing $r$, as illustrated in the figure.
In these simulations we use the energies obtained by MuMax$^3$'s built-in functions for the vortex and the collinear state, without allowing for relaxation of the magnetisation with respect to the total energy. Using this approach, the intermediate states can be calculated without the systems collapsing into either the vortex or the collinear state, enabling us to estimate the height of the activation barrier. Qualitatively, the same results can be obtained in relaxed systems by moving the core by means of an applied magnetic field. This approach, however, does not provide any information on the potential in the vicinity of the maximum of the activation barrier. In the simulations with interacting islands, all vortex cores were displaced such that the collinear component of each island was along [10], mimicking the influence of an applied field along [10] as in the MOKE experiments. The choice of either uniform or alternating vortex chirality had a negligible impact on the outcome of the simulations.
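The construction of intermediate states by displacing a (virtual) vortex core can be illustrated with a rigid-vortex ansatz, in which the in-plane moment simply curls around the displaced core position. This is a hedged, minimal sketch for illustration only, not the authors' MuMax$^3$ setup; the grid size, disk radius, and core displacements below are arbitrary choices.

```python
import math

def vortex_texture(n, R, r_core, chirality=1):
    """In-plane magnetisation of a disk of radius R (in cells) on an
    n x n grid, with the (virtual) vortex core displaced by r_core
    along y (rigid-vortex ansatz).
    Returns {(i, j): (mx, my)} for cells inside the disk."""
    m = {}
    for i in range(n):
        for j in range(n):
            x = (i + 0.5) - n / 2
            y = (j + 0.5) - n / 2
            if x * x + y * y > R * R:
                continue                      # outside the disk
            dx, dy = x, y - r_core            # position relative to the core
            norm = math.hypot(dx, dy) or 1.0
            # moment curls around the core: m = chi * z x (r - r_core)/|r - r_core|
            m[(i, j)] = (-chirality * dy / norm, chirality * dx / norm)
    return m

def net_mx(m):
    """Net collinear component along [10]; grows as the core moves out."""
    return sum(v[0] for v in m.values()) / len(m)
```

For `r_core = 0` the net moment vanishes (pure vortex); pushing the virtual core beyond the disk radius ($r > R$) leaves a $C$-state with a large collinear component, mirroring the construction described above.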
\begin{figure}[t!]
\begin{center}
\includegraphics[width=1\linewidth]{Figure3.pdf}
\caption{The energy landscape of the transition between the vortex and the collinear state. The landscape was obtained numerically by gradually moving the vortex core outwards from the centre of the disks. Filled symbols represent the energy values of interacting disks ($G$ = 20 nm), while empty symbols represent non-interacting disks. The shaded areas represent the magnetostatic coupling $E_{\text{j}}$. The energies plotted are normalised to the total energy of the vortex state $E_{\text{v}}$. The energy barrier between the two states is tunable by choice of disk diameter and inter-island interaction strength. The top part shows a schematic of the states for different values of $r$, where the red dot indicates the position of the vortex core; when $r > R$, a $C$-state is obtained.}
\label{fig3}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=1\linewidth]{Figure4.pdf}
\caption{The boundary between vortex and collinear states. The energy of the collinear state is lowered (blue squares) by inter-island interactions. As a result, it is possible to stabilise collinear states for a range of diameters which otherwise favour formation of vortices (gray shaded area). The energies plotted are normalised to the total energy of the vortex state $E_{\text{v}}$.}
\label{fig4}
\end{center}
\end{figure}
The size dependence in the energy of interacting and non-interacting mesospins is summarised in Fig. \ref{fig4}. For non-interacting islands the collinear state has the lowest energy when $D \lesssim 140$ nm, while a vortex state is favoured when the diameter is larger. With an inter-island distance of 20 nm, a collinear state is favoured up to about $D\approx 190$ nm. Consequently, the critical size at which the vortex state is favoured is shifted to larger diameters when interactions become prominent. Hence, a region of bistability (marked by a grey shading in the figure) is obtained. We have thereby rationalised the results displayed in Fig. \ref{fig2}, $i.e.$ why the 150 nm islands form vortices in the absence of interactions, while a significant fraction of the mesospins shows a collinear component when they interact.
When collinear, the mesospins can be viewed as being two-dimensional \cite{Arnalds_XY} (XY-rotor) and zero-dimensional \cite{ostman_hysteresis-free_2014} when in a vortex state. At the same time, the magnetic texture can be defined in terms of topological quasiparticles \cite{Zhang:2015de, Donnelly_2020fh}, as $e.g.$ illustrated in Fig. \ref{Fig6} and listed in Table~1 (Appendix~\ref{mesoclass}). The states displayed in Fig. \ref{Fig6} can thereby be defined as having a topological charge of +1, determined by $1-g$, with $g$ being the genus of the structure \cite{Tchernyshyov:2005gs}. The boundary of the islands can host magnetic charges with fractional winding numbers, $w$, which together with the winding number of the bulk charge, $q$, must add up to 1 (see Fig. \ref{Fig6}) \cite{Tchernyshyov:2005gs, Sloetjes:2020iw}. Consequently, the winding of the vortex can be seen as being transferred to the edge of the island when the magnetisation changes from a vortex to a collinear state ($r > R$).
The transition from a vortex to a collinear state therefore involves no change in the total winding number, while the magnetic cores and their polarity annihilate at the edges of the islands.
While it is trivial to annihilate vortices by applying an external field, the opposite is certainly not true. For this reason we chose to focus on the field and temperature dependence of islands with a diameter of 250 nm and larger ($D = [250, 350, 450]$ nm), with gaps $G$ = 20 nm. Disks in this size range spontaneously form vortices when cooled, while the application of an external magnetic field results in a collinear state, allowing us to control the magnetic texture of the mesospins. The stability of the $dressed$ collinear state can thus be investigated by removing the external field while monitoring the magnetisation of the samples. Representative magnetisation loops for the 450 nm islands, recorded at four different temperatures, are provided in the top half of Fig. \ref{Fig5}. The bottom part illustrates the temperature dependence of the magnetisation.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=1\linewidth]{Figure5.pdf}
\caption{Temperature dependence of the dimensionality transition in a magnetic metamaterial. Magnetisation measurements with the applied field along [10] (see Fig. \ref{fig1}) for the 450 nm islands having a gap of $G$~=~20~nm. (Top) Representative hysteresis curves at 80, 220, 250 and 275 K. The curves have been normalised for each individual temperature. The nucleation field crosses $\mu_0 H = 0$ at approximately 220 K, indicating stability of the collinear state below this temperature on the timescale of the measurement. (Bottom) The magnetisation as a function of temperature at 0 and 40 mT as well as the ratio $M_{\text{m}}$=$M_{\text{r}}$/$M_{\text{s}}$. The orange filled circles mark the temperatures at which the hysteresis curves, shown above, were taken. The size dependence of the transition temperature, $T_{\text{m}}$, for island sizes $D = [250, 350, 450]$ nm is shown in the inset. The error bars of the data in the inset are smaller than the data point symbols.}
\label{Fig5}
\end{center}
\end{figure}
At $T$ = 275 K the hysteresis loops display a typical vortex nucleation ($H_{\text{n}}$) and annihilation ($H_{\text{a}}$) signature with zero remanence \cite{cowburn_single-domain_1999}, in line with both the PEEM-XMCD results as well as the simulations. At temperatures below $\approx$ 250 K, the sample exhibits clear remanence, consistent with the presence of a ferromagnetic state. Ferromagnetic response requires alignment of mesospins with a net moment, in stark contrast to the zero-field PEEM-XMCD results. The reduction of the remanent magnetisation ($M_{\text{r}}$) with temperature contains both the decrease of the moment of the material (an intrinsic material property) and changes in the texture and orientation of the mesospins. To disentangle these contributions we need to identify their signatures. Changes in the material-related magnetisation can be described by a power law up to the ordering temperature of the materials ($T_{\mathrm{C}}$), while the temperature dependence of the texture and orientation of the mesospins is unknown. However, separation of the two contributions can be obtained by identifying the difference in their field dependence. For instance, it is sufficient to apply a relatively weak field to remove most of the magnetic texture within a disk, while weak fields only marginally affect the thermally induced excitations of the magnetisation in the material. This is seen in the field dependence displayed in Fig.~\ref{Fig5}: at $T$ = 275 K, a field of approximately 6 mT is sufficient for obtaining a transition from a vortex to a collinear state. Nevertheless, this field does little to alter the thermally induced reduction of the magnetisation. We can therefore consider the magnetisation at a field of 40 mT as predominantly representative of the material, $i.e.$ we define this magnetisation as the saturation of the mesoscopic texture ($M_{\text{s}}$).
Consequently, the temperature dependence of $M_{\text{m}}$=$M_{\text{r}}$/$M_{\text{s}}$ can be thought of as corresponding to the temperature dependence of the remanence of the mesospin texture.
The results displayed in the bottom half of Fig.~\ref{Fig5} (450 nm islands) illustrate $M_{\text{r}}$, $M_{\text{s}}$ and $M_{\text{m}}$=$M_{\text{r}}$/$M_{\text{s}}$, the inferred remanence of the mesospins. A plateau with $M_{\text{m}} = 3/4$ is observed below 220 K, consistent with a constant magnetic texture and thereby a fixed topology of the mesospins. At the same time, a substantial fraction of the moment (1/4) is perpendicular and/or antiparallel with respect to the direction of the net magnetisation. At 220 K there is an abrupt change in $M_{\text{m}}$, which vanishes at 270 K. From 270 K and up to the intrinsic Curie temperature of the material, the obtained hysteresis loops are consistent with the presence of vortex states. Thus, we observe a sharp transition from a collective state of interacting mesospins to non-interacting vortex states of the elements with zero net magnetisation. To this observation we assign a collapse from a state with a spin dimensionality of 2, to a state with zero spin dimensionality.
The temperature dependence of this transition can be determined by fitting an error function to the data, from which the temperature $T_{\text{m}}$ where $M_{\text{m}}(T_{\text{m}})=\frac{1}{2}M_{\text{m}}(T=0~\mathrm{K})$ (inset of Fig. \ref{Fig5}, bottom panel) is determined. An error function was chosen without assigning any physical interpretation to it. A clear size dependence ($T_{\text{m}} \propto 1-1/D$) is observed. The results imply an increase in stability of the collinear state with increasing disk size. Extrapolation to infinite size yields a transition temperature approaching $T_{\text{C}}$ of the continuous film, as expected. Extrapolating $T_{\text{m}}$ to zero yields an island size of $\approx$ 100 nm, which corresponds to the lower size limit for observing the transition.
However, as illustrated in Fig. \ref{fig4}, the collinear state is already
favoured when $D \lesssim 190$ nm, rendering this region unattainable.
As the vortex state is energetically favoured above this size, the observed transitions in 250, 350 and 450 nm elements are kinetically limited and do not correspond to thermodynamic phase transitions in a strict sense. This is further emphasised by the observed frequency dependence of the transition temperature, $T_{\text{m}}$ (see Appendix~\ref{kinetics}). The dressed mesospins (collinear state) in this size range, are therefore in an arrested metastable state with the boundaries of the mesospins being vital for the creation and annihilation process of the vortex cores.
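The extraction of $T_{\text{m}}$ from the error-function fit described above can be sketched as follows. This is an illustrative reconstruction with synthetic data; the profile parameters ($T_{\text{m}}$ = 245 K, width $w$ = 20 K) are assumptions for demonstration, not the measured values.

```python
import math

def mm_model(T, A, Tm, w):
    """Error-function profile for the mesospin remanence M_m(T): a
    plateau of height ~A at low T that vanishes above the transition.
    By construction M_m(Tm) = A/2, i.e. half the low-T value."""
    return 0.5 * A * (1.0 - math.erf((T - Tm) / w))

def fit_Tm(temps, data, A=0.75):
    """Brute-force least-squares scan for (Tm, w); T_m is the erf
    midpoint where M_m drops to half its low-temperature value."""
    best = (None, None, float("inf"))
    for Tm in range(150, 300):
        for w in range(5, 65, 5):
            err = sum((mm_model(T, A, Tm, w) - d) ** 2
                      for T, d in zip(temps, data))
            if err < best[2]:
                best = (Tm, w, err)
    return best[0], best[1]

# synthetic M_m(T) data with an assumed Tm = 245 K and w = 20 K
temps = list(range(80, 300, 10))
data = [mm_model(T, 0.75, 245, 20) for T in temps]
Tm_fit, w_fit = fit_Tm(temps, data)
```

With the plateau height fixed at $A = 3/4$, the fitted midpoint recovers the transition temperature directly, matching the definition $M_{\text{m}}(T_{\text{m}})=\frac{1}{2}M_{\text{m}}(T=0~\mathrm{K})$.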
In conclusion, the temperature dependence of the topologically homeomorphic transition is not only defined by the material properties and size of the mesospins but also by their mutual interactions and internal degrees of freedom. In particular, the distance between the elements alters the inter-island interactions and thereby the internal magnetic states, while at the same time, the internal magnetic states affect the interactions and thereby the magnetic order. The interplay between these two length-scales leads to an exotic transition, $i.e.$ one that is not based on thermally induced randomisation. Rather, the transition pertains to a thermally activated change within the elements, resulting in a collapse of the effective interaction strength and the mesospin dimensionality. The transition discussed here does not have any trivial classical counterpart, calling for a new conceptual framework involving mutual dependence of energy and length-scales \cite{Wilson:1979wn}. The results represent first steps towards understanding emergent properties and complexity that may extend beyond the immediate field of physics \cite{castellano2009statistical,michaud2018social}.
\section*{Data availability}
The data that support the findings of this study are available from the authors upon reasonable request.
\section*{Acknowledgments}
The authors would like to acknowledge the excellent user support provided to them at the Advanced Light Source (Dr. Andreas Scholl, Dr. Rajesh Chopdekar) and SOLEIL (Dr. Rahid Belkhou) synchrotrons, as well as Dr. Erik Östman for helping out with data collection. This research used resources of the Advanced Light Source, a U.S. DOE Office of Science User Facility under contract no. DE-AC02-05CH11231. The excellent support and infrastructure of the MyFab facility at the \AA ngstr\"om Laboratory of Uppsala University is also highly appreciated. B.H. acknowledges financial support from the Swedish Research Council and the Swedish Foundation for Strategic Research. V.K. acknowledges support from the Knut and Alice Wallenberg Foundation project ``{\it Harnessing light and spins through plasmons at the nanoscale}'' (2015.0060) and the Swedish Research Council (Project No. 2019-03581).
\section{Case Study 1: Comparing Frameworks}
\label{sec:compareAFEvaluation}
\subsection{Experimental Setup}
\label{sec:experimentalsetup}
We use a dual socket Intel Xeon E5 server system with 20 physical
cores at 2.9 GHz, hyperthreading, and 32 GB memory. Table
\ref{tbl:benchmarks} lists the used benchmarks, from Parsec 3.0
\cite{bienia11benchmarking2011} and Rodinia 3.1 \cite{Che2009}. Table
\ref{tbl:benchmarks} also contains the description, type, application
accuracy metric, and default run-time for each benchmark. Accuracy
loss is the error relative to the most accurate configuration. Shorter
runtime corresponds to higher performance.
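The two metrics just defined can be computed directly from raw measurements. A minimal sketch, using hypothetical numbers rather than measurements from Table \ref{tbl:benchmarks}:

```python
def trade_off_points(configs):
    """configs: list of (runtime_sec, error) pairs, one per configuration,
    with the default configuration first.  Returns (normalized_runtime,
    accuracy_loss) pairs: runtime is normalized to the default, and
    accuracy loss is the error relative to the most accurate config."""
    default_rt = configs[0][0]
    best_err = min(err for _, err in configs)
    points = []
    for rt, err in configs:
        loss = (err - best_err) / best_err if best_err else err - best_err
        points.append((rt / default_rt, loss))
    return points

# default config plus one approximated config (hypothetical numbers):
# the second config halves the runtime but doubles the error
pts = trade_off_points([(6.2, 0.02), (3.1, 0.04)])
```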
\texttt{blackscholes}'s only tunable parameter is the number of prices
to estimate and modifying it does not affect accuracy. Thus,
PowerDial has no effect on \texttt{blackscholes}. Similarly,
\texttt{canneal}, \texttt{heartwall}, \texttt{kmeans}, and
\texttt{x264} use math functions infrequently; the Approximate Math
Library is not applicable to them.
We evaluate each benchmark across multiple inputs and compare the median
across the inputs. In this section we evaluate known frameworks, so we
use the training inputs from Table \ref{tbl:benchmarks}. In subsequent
sections---where we present new techniques---we divide the inputs into
training and test sets, build combinations of frameworks using the
training data, and then use the test data to ensure our selected
combination works well on previously unseen data.
\def\tabularxcolumn#1{m{#1}}
\newcolumntype{C}[1]{>{\hsize=#1\hsize\centering\small\arraybackslash}X}%
\newcolumntype{D}[1]{>{\hsize=#1\hsize\centering\bfseries\footnotesize\arraybackslash}X}%
\newcolumntype{E}[1]{>{\hsize=#1\hsize\centering\footnotesize\arraybackslash}X}%
\newcolumntype{F}[1]{>{\hsize=#1\hsize\centering\tiny\arraybackslash}X}%
\newcolumntype{G}[1]{>{\hsize=#1\hsize\centering\scriptsize\arraybackslash}X}%
\newcolumntype{H}[1]{>{\hsize=#1\hsize\centering\bfseries\scriptsize\arraybackslash}X}%
\begin{table*}[tb]
\centering
\caption{Benchmarks used for evaluation.}
\footnotesize
\begin{tabularx}{\textwidth}{|D{0.6}|E{1.4}|E{1.35}|E{1.35}|E{0.3}|}
\hline
\cellcolor[gray]{0.8} {Benchmarks} & \cellcolor[gray]{0.8}{\textbf{Accuracy Metric}} & \cellcolor[gray]{0.8} {\textbf{Training inputs}} & \cellcolor[gray]{0.8}{\textbf{Test Inputs}} & \cellcolor[gray]{0.8}{\textbf{Runtime (sec)}} \\
\hline
Blackscholes & Average Relative Error of Prices & 30 lists with 1M initial prices & 90 lists with 1M initial prices & 3.2 \\
\hline
Bodytrack & Average Distance of Poses & sequence of 100 frames & sequence of 261 frames & 3.1 \\
\hline
Canneal & Average Relative Routing Cost & 30 netlists with 400K+ elements & 90 netlists with 400K+ elements & 6.88 \\
\hline
Fluidanimate & Distance between Particles & 5 fluids with 100K+ particles & 15 fluids with 500K+ particles & 17.2 \\
\hline
Heartwall & Average Relative Error of heart frames & sequence of 30 ultrasound images & sequence of 100 ultrasound images & 11.6 \\
\hline
Kmeans & Distance between Cluster Centers & 30 vectors with 256K data points & 90 vectors with 256K data points & 3.1 \\
\hline
Particlefilter & Distance between Particles & sequence of 60 frames & sequence of 240 frames & 12.9 \\
\hline
Srad & Image Diff (RMSE) & 3 images with 2560*1920 pixels & 9 images with 2560*1920 pixels & 22.6 \\
\hline
Streamcluster & Distance between Cluster Centers & 3 streams of 19k-100K data points & 9 streams of 100K data points & 30 \\
\hline
Swaptions & Average Relative Error of Prices & 40 swaptions & 160 swaptions & 6.2 \\
\hline
x264 & Relative PSNR+Bitrate & 4 HD videos of 200+ frames & 12 HD videos of 200+ frames & 7.7 \\
\hline
\end{tabularx}
\label{tbl:benchmarks}
\end{table*}
We evaluate three approximation frameworks. PowerDial (PD) transforms
an application's command line parameters into \textit{software knobs}
that are automatically manipulated to trade accuracy for performance
\cite{Hoffmann2011}. Each application has tunable \emph{knobs}, which
can take different values, and an assignment of values to knobs is a
configuration. Loop Perforation (LP) identifies \textit{perforatable
loops} whose iterations can be skipped to produce faster, but less
accurate results \cite{Sidiroglou-Douskos2011}. A set of loops and
perforation rates is a configuration. The Approximate Math Library
(AML) substitutes math functions with a variable Taylor series
expansion. A set of functions and their number of terms is the
configuration.
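As a toy illustration of the loop-perforation idea (not code from the benchmarks above), skipping iterations at a fixed rate trades a small accuracy loss for proportionally less work:

```python
def perforated_mean(xs, rate):
    """Keep only every `rate`-th loop iteration and average the kept
    samples; rate=1 is the exact computation.  A set of perforated
    loops and their rates forms one configuration."""
    kept = xs[::rate]
    return sum(kept) / len(kept)

# a synthetic workload: the perforated loop does ~4x less work
xs = [float(i % 17) for i in range(100_000)]
exact = perforated_mean(xs, 1)
approx = perforated_mean(xs, 4)
accuracy_loss = abs(approx - exact) / exact
```

On this workload the perforated result stays within a fraction of a percent of the exact mean, illustrating why perforation often lands on favourable points of the trade-off space.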
\subsection{Comparison by Difference of Coverage}
\label{sec:compareAFDOC}
To compare Loop Perforation and PowerDial, we calculate the average
coverage function ($C(X,Y)$ from Eq. 3) across all benchmarks
for both.
On average, Loop Perforation covers only a fraction $0.3664$ of
PowerDial's Pareto-optimal points, while PowerDial covers $0.4416$ of
Loop Perforation's. Thus, $DOC(LoopPerforation, PowerDial) = -0.075$,
which shows the slight superiority of PowerDial over Loop Perforation
on average. Likewise, the negative values $DOC(AML, PD)=-0.9135$ and
$DOC(AML, LP)=-0.929$ show the significant inferiority of the
Approximate Math Library to the other frameworks, on average.
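The comparison above can be reproduced from the two Pareto fronts. A sketch under an assumed reading of Eq. 3, where $C(X,Y)$ is the fraction of $Y$'s Pareto-optimal points weakly dominated by some point of $X$ (the tiny fronts below are hypothetical):

```python
def dominates(p, q):
    """p, q = (accuracy_loss, normalized_runtime); smaller is better."""
    return p[0] <= q[0] and p[1] <= q[1]

def coverage(X, Y):
    """C(X, Y): fraction of Y's Pareto points covered by X."""
    return sum(any(dominates(p, q) for p in X) for q in Y) / len(Y)

def doc(X, Y):
    """Difference of coverage; positive means X is superior on average."""
    return coverage(X, Y) - coverage(Y, X)

# tiny hypothetical fronts: pd covers one of lp's two points, lp covers none
lp = [(0.10, 0.90), (0.30, 0.50)]
pd = [(0.05, 0.70)]
```

With the averages reported above, $DOC(LP, PD) = 0.3664 - 0.4416 \approx -0.075$, matching the value in the text.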
\subsection{Comparison by Pareto-optimal Curves}
\label{sec:compareAFParetoOptimalCurves}
\captionsetup[subfigure]{labelformat=empty}
\begin{figure*}[t]
\newcolumntype{V}{>{\centering\arraybackslash} m{0.1\linewidth} }
\newcolumntype{C}[1]{>{\hsize=#1\hsize\centerline\arraybackslash}X}%
\centering
\subfloat[(a) Performance/AccuracyLoss trade-off spaces for PowerDial, Loop Perforation, and Approximate Math Library.]{\label{fig:cs1configSpace}
\hskip0.0cm \begin{tabular*}{\textwidth}{c @{\extracolsep{\fill}} c@{\extracolsep{\fill}} c@{\extracolsep{\fill}} c@{\extracolsep{\fill}} c@{\extracolsep{\fill}} c@{\extracolsep{\fill}}}
\subfloat{\input{img/cs1configSpace/Blackscholes-configSpace.tex}} &
\subfloat{\hspace*{3pt}\input{img/cs1configSpace/Bodytrack-configSpace.tex}} &
\subfloat{\hspace*{3pt}\input{img/cs1configSpace/Canneal-configSpace.tex}} &
\subfloat{\hspace*{3pt}\input{img/cs1configSpace/Fluidanimate-configSpace.tex}} &
\subfloat{\hspace*{-55pt}\input{img/cs1configSpace/Heartwall-configSpace.tex}} &
\subfloat{\hspace*{-55pt}\input{img/cs1configSpace/Kmeans-configSpace.tex}} \\[-3ex]
\subfloat{\hspace*{3pt}\input{img/cs1configSpace/Particlefilter-configSpace.tex}} &
\subfloat{\hspace*{3pt}\input{img/cs1configSpace/Srad-configSpace.tex}} &
\subfloat{\hspace*{3pt}\input{img/cs1configSpace/Streamcluster-configSpace.tex}} &
\subfloat{\hspace*{3pt}\input{img/cs1configSpace/Swaptions-configSpace.tex}} &
\subfloat{\hspace*{5pt}\input{img/cs1configSpace/X264-configSpace.tex}} \\
\end{tabular*}
}
\vspace{-1.5em}
\subfloat[(b) VIPER comparison of PowerDial, Loop Perforation, and Approximate Math Library.]{\label{fig:cs1viperLines}
\hskip0.0cm \begin{tabular*}{\textwidth}{c @{\extracolsep{\fill}} c@{\extracolsep{\fill}} c@{\extracolsep{\fill}} c@{\extracolsep{\fill}} c@{\extracolsep{\fill}} c@{\extracolsep{\fill}}}
\subfloat{\input{img/cs1viperLines/Blackscholes-cs1viperLine.tex}} &
\subfloat{\hspace*{3pt}\input{img/cs1viperLines/Bodytrack-cs1viperLine.tex}} &
\subfloat{\hspace*{3pt}\input{img/cs1viperLines/Canneal-cs1viperLine.tex}} &
\subfloat{\hspace*{3pt}\input{img/cs1viperLines/Fluidanimate-cs1viperLine.tex}} &
\subfloat{\hspace*{-55pt}\input{img/cs1viperLines/Heartwall-cs1viperLine.tex}} &
\subfloat{\hspace*{-55pt}\input{img/cs1viperLines/Kmeans-cs1viperLine.tex}} \\[-3ex]
\subfloat{\hspace*{3pt}\input{img/cs1viperLines/Particlefilter-cs1viperLine.tex}} &
\subfloat{\hspace*{3pt}\input{img/cs1viperLines/Srad-cs1viperLine.tex}} &
\subfloat{\hspace*{3pt}\input{img/cs1viperLines/Streamcluster-cs1viperLine.tex}} &
\subfloat{\hspace*{3pt}\input{img/cs1viperLines/Swaptions-cs1viperLine.tex}} &
\subfloat{\hspace*{5pt}\input{img/cs1viperLines/X264-cs1viperLine.tex}} \\
\end{tabular*}
}
\caption{Comparison of PowerDial, Loop Perforation, and Approximate
Math Library. (a) shows Pareto-optimal frontiers and (b) shows
VIPER.}
\label{fig:comparecs1}
\end{figure*}
\figref{cs1configSpace} illustrates the frameworks' trade-off spaces.
Each point represents a configuration. The y-axis is runtime
normalized to the default configuration and the x-axis is the accuracy
loss. Each framework's Pareto-optimal curve is shown in the same color
as the trade-off space. These plots highlight how configurations cover
wide ranges of runtime and accuracy loss. While in some cases---{\em e.g. }{}
\texttt{canneal}, \texttt{kmeans}, and \texttt{srad}---Pareto-optimal
curves are easy to compare, in other benchmarks---{\em e.g. }{}
\texttt{particlefilter} and \texttt{swaptions}---comparison is
infeasible. For \texttt{fluidanimate}, the Pareto-optimal curves
intersect multiple times; the best approximation framework differs
across the range of accuracy loss.
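The Pareto-optimal curves in these plots can be extracted from the raw trade-off points with a standard sweep. A minimal sketch (the sample points are hypothetical):

```python
def pareto_front(points):
    """Pareto-optimal subset of (accuracy_loss, normalized_runtime)
    pairs, smaller being better in both dimensions.  Sort by accuracy
    loss, then keep a point only if it strictly improves the best
    runtime seen so far."""
    front = []
    for p in sorted(points):
        if not front or p[1] < front[-1][1]:
            front.append(p)
    return front

# five hypothetical configurations; two are dominated and drop out
pts = [(0.0, 1.0), (0.1, 0.9), (0.1, 0.8), (0.2, 0.95), (0.3, 0.5)]
```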
\subsection{Comparison by VIPER{}}
\label{sec:compareAFPIR}
Just viewing the Pareto-optimal curves in \figref{cs1configSpace}
provides limited intuition as
differences are not always visible. We use VIPER{} to
compare these frameworks in \figref{cs1viperLines}. The y-axis
represents the performance improvement ratio (PIR), while the x-axis
illustrates accuracy loss. The horizontal line represents Loop
Perforation: points above that line mean the corresponding technique
is faster than Loop Perforation. The backdrop color indicates the
best method for a given accuracy loss.
VIPER shows only useful configurations: if one configuration dominates
the others, only a small range of accuracy loss is shown. Thus, some
accuracy loss ranges visible in \figref{cs1configSpace} are absent from
the \figref{cs1viperLines} plots, because there is no benefit to
increasing $accuracyLoss$ further.
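The PIR on the y-axis can be sketched as follows. This is an assumed reading of the metric for illustration, not VIPER's official definition: the baseline's best runtime divided by the framework's best runtime within an accuracy-loss budget. The fronts below are hypothetical.

```python
def best_runtime(front, budget):
    """Lowest normalized runtime reachable without exceeding the
    accuracy-loss budget; None if the budget admits no configuration."""
    ok = [rt for loss, rt in front if loss <= budget]
    return min(ok) if ok else None

def pir(framework, baseline, budget):
    """Performance improvement ratio over the baseline at a given
    accuracy loss; > 1 means the framework is faster there."""
    f = best_runtime(framework, budget)
    b = best_runtime(baseline, budget)
    return None if f is None or b is None else b / f

lp = [(0.0, 1.0), (0.2, 0.6)]     # hypothetical baseline (LP) front
pdial = [(0.0, 0.9), (0.1, 0.5)]  # hypothetical PowerDial front
```

Plotting `pir` against the accuracy-loss budget reproduces the shape of a VIPER line: the baseline itself sits on the horizontal line at 1.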
VIPER{} provides the following insights:
\begin{itemize}
\item It illustrates how frameworks perform within a
specific accuracy loss range. For instance, while PowerDial finds a
higher-performance configuration than Loop Perforation for
\texttt{bodytrack} and \texttt{canneal}, its performance is
worse for \texttt{kmeans} and \texttt{x264} over most of the accuracy loss range.
\item While the distinction between frameworks is not clear in
\figref{cs1configSpace} for \texttt{streamcluster} and
\texttt{swaptions}, VIPER{} allows quick, obvious comparison.
\item VIPER{} clearly illustrates the intersection of Pareto-optimal
curves, {\em e.g. }{} in \texttt{fluidanimate} and \texttt{srad}.
\end{itemize}
Since VIPER{} only requires trade-off spaces to compare, it can
be applied to any approximation frameworks regardless of
system level. We believe VIPER{} provides clear
insights, which are instantly visually recognizable and
mathematically meaningful.
VIPER is not a replacement for existing methods, but a
complement that simplifies comparison.
\iffalse
\tblref{coverageMetricPDLP} shows the average coverage function across
all benchmarks. In the table, each cell is $C(X,Y)$ where $X$ is the
first column and $Y$ is shown in the first row. Higher values of
$C(X,Y)$ implies the greater coverage of framework $X$ over framework
$Y$.
\begin{table}[t]
\centering
\caption{Average coverage function of frameworks over each other.}
\footnotesize
\begin{tabularx}{0.45\textwidth}{|D{1}|C{1}|C{1}|C{1}|}
\hline
$C(X,Y)$ & {\cellcolor[gray]{0.8}\textbf {Loop Perforation}} & {\cellcolor[gray]{0.8}\textbf {PowerDial}} & {\cellcolor[gray]{0.8}\textbf {Approximate Math Library}} \\
\hline
{\cellcolor[gray]{0.8} Loop Perforation} & - & 0.4419 & 0.9385 \\
\hline
{\cellcolor[gray]{0.8} PowerDial} & 0.3664 & - & 0.9385 \\
\hline
{\cellcolor[gray]{0.8}\textbf {Approximate Math Library}} & 0.0250 & 0.0095 & - \\
\hline
\end{tabularx}
\label{tbl:coverageMetricPDLP}
\end{table}
\fi
\iffalse
\begin{table*}[tb]
\centering
\caption{Benchmarks used for evaluation.}
\footnotesize
\begin{tabularx}{\textwidth}{|D{0.5}|E{0.8}|E{1.6}|E{1.40}|E{1.40}|E{0.3}|}
\hline
\cellcolor[gray]{0.8} {Benchmarks} & \cellcolor[gray]{0.8}{\textbf{Domain}} & \cellcolor[gray]{0.8}{\textbf{Accuracy Metric}} & \cellcolor[gray]{0.8} {\textbf{Training inputs}} & \cellcolor[gray]{0.8}{\textbf{Test Inputs}} & \cellcolor[gray]{0.8}{\textbf{Runtime (sec)}} \\
\hline
Blackscholes & Financial Analysis & Average Relative Error of Prices & 30 lists with 1M initial prices & 90 lists with 1M initial prices & 3.2 \\
\hline
Bodytrack & Computer Vision & Average Distance of Poses & sequence of 100 frames & sequence of 261 frames & 3.1 \\
\hline
Canneal & Engineering & Average Relative Error of Routing Cost & 30 netlists with 400K+ elements & 90 netlists with 400K+ elements & 6.88 \\
\hline
Fluidanimate & Animation & Distance between Particles & 5 fluids with 100K+ particles & 15 fluids with 500K+ particles & 17.2 \\
\hline
Heartwall & Medical Imaging & Average Relative Error of heart frames & sequence of 30 ultrasound images & sequence of 100 ultrasound images & 11.6 \\
\hline
Kmeans & Machine Learning & Distance between Cluster Centers & 30 vectors with 256K data points & 90 vectors with 256K data points & 3.1 \\
\hline
Particlefilter & Medical Imaging & Distance between Particles Coordinates & sequence of 60 frames & sequence of 240 frames & 12.9 \\
\hline
Srad & Image Processing & Image Diff (RMSE) & 3 images with 2560*1920 pixels & 9 images with 2560*1920 pixels & 22.6 \\
\hline
Streamcluster & Data Mining & Distance between Cluster Centers & 3 streams of 19k-100K data points & 9 streams of 100K data points & 30 \\
\hline
Swaptions & Financial Analysis & Average Relative Error of Prices & 40 swaptions & 160 swaptions & 6.2 \\
\hline
x264 & Media Processing & Average Relative Error of PSNR+Bitrate & 4 HD videos of 200+ frames & 12 HD videos of 200+ frames & 7.7 \\
\hline
\end{tabularx}
\label{tbl:benchmarks}
\end{table*}
\fi
\section{Case Study 2: BOA Evaluation}
\label{sec:Evaluation}
We compare variations of BOA{} to prior exploration techniques
using the same
experimental setup from
\secref{experimentalsetup}. We now split our inputs
into training and test data sets as shown in Table \ref{tbl:benchmarks}.
For each exploration
technique, we first use the training inputs to find Pareto-efficient
configurations, then we evaluate those points using the separate test data.
\subsection{Points of Comparison}
We compare BOA{} to state-of-the-art approaches for
locating Pareto-efficient points in large trade-off spaces:
\begin{itemize}
\item \textbf{MCKP}: The \emph{multiple choice knapsack problem} is a
variant of the classic knapsack problem in which items are partitioned
into classes and exactly one item must be chosen from each class. MCKP
has been used to find Pareto-efficient processor designs in the
performance-power space for application-specific embedded processors
\cite{Yang2003}. We declare each framework to be a class. MCKP then
selects the Pareto-optimal configurations of each class while keeping
the default values for the other classes. This creates a new, small
trade-off space which can be searched by brute force.
\item \textbf{NSGA-II}: The non-dominated sorting-based
multi-objective evolutionary algorithm (NSGA-II) explores large
trade-off spaces to find Pareto-optimal configurations using an
evolutionary genetic algorithm \cite{Deb2002}. NSGA-II is the
state of the art for multi-objective optimization of embedded
processors that navigate performance-power trade-offs; it has been
cited over 20,000 times because it finds better configurations in
less time than other evolutionary algorithms
\cite{Zitzler2012,Knowles1999}.
\end{itemize}
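All of these techniques, and BOA{} itself, rest on the same notion of Pareto dominance in the (accuracy loss, runtime) plane. As a minimal sketch (the trade-off tuples below are illustrative, not measured data):

```python
# A configuration dominates another if it is no worse in both objectives
# (accuracy loss, runtime) and strictly better in at least one.

def dominates(a, b):
    """True if trade-off point a dominates b; points are (accuracy_loss, runtime)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_frontier(points):
    """Return the non-dominated subset of (accuracy_loss, runtime) pairs."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

configs = [(0.00, 1.00), (0.02, 0.70), (0.05, 0.55), (0.05, 0.80), (0.10, 0.52)]
print(pareto_frontier(configs))  # (0.05, 0.80) is dominated by (0.05, 0.55)
```

Every search method below differs only in which subset of configurations it feeds into this dominance filter.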
\subsection{Comparison by Difference of Coverage}
\label{sec:coverageMetricEvaluation}
Recall from Section \ref{sec:compare1} that the difference of coverage
(DOC) quantifies the efficiency of one Pareto curve relative to another.
\figref{convergence} displays the difference of
coverage for various techniques over NSGA-II per benchmark and on
average. The y-axis shows $DOC(X, NSGA)$, the DOC of exploration
technique $X$ over NSGA-II (see \secref{coverageanddominance}).
Negative values of $DOC(X, NSGA)$ indicate that $X$ does not find as
many Pareto-efficient points as NSGA-II. Conversely, positive values
indicate that technique $X$ finds that many more points that dominate
those of NSGA-II. BOA{}-flex and BOA{}-prob on average locate
52.8-65.6\% more Pareto-efficient configurations than NSGA-II.
BOA{}'s superiority is due to its focus on the configurations
that have been shown to be Pareto-optimal on individual frameworks.
NSGA-II starts the exploration from a random set of points in the
combined trade-off space and iteratively looks for more efficient points.
MCKP uses the individual Pareto-optimal curves but keeps the remaining
frameworks at their default configurations. Since the frameworks are not fully
independent, we empirically find that some configurations that were not
Pareto-optimal in the original frameworks become part of a Pareto-efficient
curve of the combined trade-off space when we consider multiple frameworks
together.
exploration, we search more points, resulting in more efficient
configurations. In short, MCKP does not search enough combinations,
while NSGA-II searches too many. By restricting the search to points
likely to be near the Pareto-optimal frontier for individual frameworks,
BOA achieves the right balance and the best empirical results.
\emph{This data shows that, for approximate computations, BOA{}
produces many more efficient configurations than prior
state-of-the-art search techniques.}
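The DOC metric itself is defined in Section \ref{sec:compare1} (not reproduced here). Purely as an illustrative assumption, one common way to instantiate such a comparison is via set coverage: $C(X,Y)$ is the fraction of $Y$'s points dominated by some point of $X$, and the difference of coverage is $C(X,Y)-C(Y,X)$. A sketch under that assumption:

```python
# Hypothetical instantiation of difference of coverage via the classic
# set-coverage measure; the paper's exact DOC definition may differ.

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def coverage(X, Y):
    """C(X, Y): fraction of trade-off points in Y dominated by a point of X."""
    if not Y:
        return 0.0
    return sum(any(dominates(x, y) for x in X) for y in Y) / len(Y)

def doc(X, Y):
    """Positive means X covers more of Y's curve than vice versa."""
    return coverage(X, Y) - coverage(Y, X)

boa  = [(0.01, 0.60), (0.05, 0.50)]   # illustrative frontiers
nsga = [(0.01, 0.80), (0.05, 0.70), (0.08, 0.65)]
print(doc(boa, nsga))  # > 0: boa's points dominate nsga's
```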
\begin{figure}[tb]
\begin{center}
\input{img/coverageFunction/coverageFunction.tex}
\caption{Difference of coverage over NSGA-II. Higher bars are
better.}
\label{fig:convergence}
\end{center}
\end{figure}
\subsection{Comparison by VIPER}
\label{sec:perfImprovmentEvaluation}
\captionsetup[subfigure]{labelformat=empty}
\begin{figure*}[pt]
\newcolumntype{V}{>{\centering\arraybackslash} m{0.1\linewidth} }
\newcolumntype{C}[1]{>{\hsize=#1\hsize\centering\arraybackslash}X}%
\subfloat[(a) Pareto-optimal curves found by each exploration technique.]{
\hskip0.0cm \begin{tabular*}{\textwidth}{c @{\extracolsep{\fill}} c@{\extracolsep{\fill}} c@{\extracolsep{\fill}} c@{\extracolsep{\fill}}}
\subfloat{\input{img/cs2configSpace/Blackscholes-configSpace.tex}} &
\subfloat{\input{img/cs2configSpace/Bodytrack-configSpace.tex} } &
\subfloat{\hspace*{-85pt} \input{img/cs2configSpace/Canneal-configSpace.tex}} &
\subfloat{\hspace*{-85pt} \input{img/cs2configSpace/Fluidanimate-configSpace.tex}} \\[-3.5ex]
\subfloat{\input{img/cs2configSpace/Heartwall-configSpace.tex} } &
\subfloat{\input{img/cs2configSpace/Kmeans-configSpace.tex} } &
\subfloat{\hspace*{-85pt} \input{img/cs2configSpace/Particlefilter-configSpace.tex}} &
\subfloat{\hspace*{-85pt} \input{img/cs2configSpace/Srad-configSpace.tex} } \\[-3.5ex]
\subfloat{\input{img/cs2configSpace/Streamcluster-configSpace.tex}} &
\subfloat{\input{img/cs2configSpace/Swaptions-configSpace.tex} } &
\subfloat{\input{img/cs2configSpace/X264-configSpace.tex}} \\[-2.5ex]
\end{tabular*}
\label{fig:cs2configSpace}
}
\subfloat[(b) VIPER comparison of MCKP, NSGA-II, and different versions of BOA{}.]{
\hskip0.0cm \begin{tabular*}{\textwidth}{c @{\extracolsep{\fill}} c@{\extracolsep{\fill}} c@{\extracolsep{\fill}} c@{\extracolsep{\fill}}}
\subfloat{\input{img/cs2viperLines/Blackscholes-cs2viperLine.tex}} &
\subfloat{\input{img/cs2viperLines/Bodytrack-cs2viperLine.tex} } &
\subfloat{\hspace*{-85pt} \input{img/cs2viperLines/Canneal-cs2viperLine.tex}} &
\subfloat{\hspace*{-85pt} \input{img/cs2viperLines/Fluidanimate-cs2viperLine.tex}} \\[-3.5ex]
\subfloat{\input{img/cs2viperLines/Heartwall-cs2viperLine.tex} } &
\subfloat{\input{img/cs2viperLines/Kmeans-cs2viperLine.tex} } &
\subfloat{\hspace*{-85pt} \input{img/cs2viperLines/Particlefilter-cs2viperLine.tex}} &
\subfloat{\hspace*{-85pt} \input{img/cs2viperLines/Srad-cs2viperLine.tex}} \\[-3.5ex]
\subfloat{\input{img/cs2viperLines/Streamcluster-cs2viperLine.tex}} &
\subfloat{\input{img/cs2viperLines/Swaptions-cs2viperLine.tex} } &
\subfloat{\input{img/cs2viperLines/X264-cs2viperLine.tex}} \\[-2.5ex]
\end{tabular*}
\label{fig:cs2viperLines}
}
\vspace{-2ex}
\caption{Comparison of MCKP, NSGA-II, and BOA{} with different thresholds. (a) shows
comparison by Pareto-efficient frontiers and (b) shows comparison by
VIPER.}
\label{fig:compare}
\end{figure*}
\figref{cs2configSpace} shows the Pareto-efficient points for each
benchmark and search method. The y-axis shows runtime (normalized to
the default configuration) and the x-axis represents accuracy loss. We
use median runtime across test inputs.
These figures display the range of runtime and accuracy loss that a
method can achieve. For instance, NSGA-II and MCKP cannot provide
normalized runtime less than 78.1\% and 55.5\% of the default
configuration respectively for \texttt{particlefilter}.
\figref{cs2viperLines} illustrates the VIPER comparison of
NSGA-II, MCKP, and different variations of
BOA{}. The y-axis shows performance improvement ratio while the
x-axis shows accuracy loss. We use NSGA-II as the baseline, so it is
represented by a horizontal line. For the same accuracy loss, lines
above that horizontal represent better (more efficient)
configurations, and lines below represent configurations worse than
those found by NSGA-II.
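VIPER is defined earlier in the paper; as a rough sketch of how such a plot can be read (the helper names and frontiers below are our own illustrative assumptions, not the paper's implementation), the line value at an accuracy-loss level is the ratio of the baseline's best runtime to the technique's best runtime at that level:

```python
# Assumed reading of a VIPER line: ratio of baseline (NSGA-II) runtime to a
# technique's runtime at matched accuracy loss, so the baseline is a
# horizontal line at 1 and values above 1 mean more efficient configurations.

def best_runtime_at(frontier, accuracy_loss):
    """Best (lowest) runtime achievable within the given accuracy-loss budget."""
    feasible = [r for (a, r) in frontier if a <= accuracy_loss]
    return min(feasible) if feasible else None

def viper_line(technique, baseline, levels):
    """Performance-improvement ratio of `technique` over `baseline` per level."""
    out = []
    for level in levels:
        t = best_runtime_at(technique, level)
        b = best_runtime_at(baseline, level)
        if t is not None and b is not None:
            out.append((level, b / t))
    return out

nsga = [(0.01, 0.90), (0.05, 0.80)]   # illustrative frontiers
boa  = [(0.01, 0.60), (0.05, 0.40)]
print(viper_line(boa, nsga, [0.01, 0.05]))
```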
For most applications, MCKP stays below the horizontal, meaning it
is worse than NSGA-II. By comparing
BOA{}-simple and MCKP performance improvement ratio lines, we see
that BOA{}-simple outperforms MCKP.
From the VIPER{} plots we also find the maximum and minimum
performance improvement over NSGA-II. Considering
\texttt{fluidanimate}, the NSGA-II line is at 0.25, indicating that the
maximum performance is 4$\times$ better than NSGA-II and the minimum
performance is 25\% worse. In fact, for every benchmark BOA{}-flex
finds at least one configuration with higher performance for the same accuracy.
Whenever NSGA-II locates more Pareto-efficient points than
BOA{}-simple, by expanding the Pareto-efficient configurations we
reduce the performance improvement ratio gap. Benchmarks
\texttt{heartwall}, \texttt{kmeans}, and \texttt{particlefilter}
demonstrate how expanding the combined configurations provides higher
Pareto-efficiency. In total, we find that as the threshold increases,
the lines of BOA{}-flex are above the NSGA-II line more than 95\% of
the time.
\emph{These results provide visual
confirmation that BOA{} not only finds a greater number of
efficient points than prior techniques, but BOA{}'s points are
also significantly better, representing much more efficient
trade-offs. Furthermore, we believe this case study provides
further evidence of VIPER's value, as the VIPER charts are
visually intuitive in \figref{cs2viperLines}, but the Pareto frontiers (\figref{cs2configSpace})
do not immediately show which framework is best at a given accuracy
or by how much. }
\subsection{Exploration Time}
\label{sec:exploredConfigsEvaluation}
\def\tabularxcolumn#1{m{#1}}
\newcolumntype{C}[1]{>{\hsize=#1\hsize\centering\small\arraybackslash}X}%
\newcolumntype{D}[1]{>{\hsize=#1\hsize\centering\bfseries\footnotesize\arraybackslash}X}%
\newcolumntype{E}[1]{>{\hsize=#1\hsize\centering\footnotesize\arraybackslash}X}%
\newcolumntype{F}[1]{>{\hsize=#1\hsize\centering\tiny\arraybackslash}X}%
\begin{table*}[t]
\centering
\caption{Number of Explored Configurations.}
\resizebox{0.85\textwidth}{!}{
\begin{tabularx}{\textwidth}{|D{1.60}|E{0.85}|E{0.85}|E{0.85}|E{0.91}|E{0.91}|E{0.91}|E{1}|E{1}|E{0.91}|E{1.21}|}
\hline
\cellcolor[gray]{0.8}{Benchmarks} & \cellcolor[gray]{0.8} {LP} & \cellcolor[gray]{0.8} {PD} & \cellcolor[gray]{0.8} {AML} & \cellcolor[gray]{0.8} {MCKP} & \cellcolor[gray]{0.8} {NSGA-II} & \cellcolor[gray]{0.8} {BOA{}-simple} & \cellcolor[gray]{0.8} {BOA{}-flex {\footnotesize Th=0.05}} & \cellcolor[gray]{0.8} {BOA{}-flex {\footnotesize Th=0.1}} & \cellcolor[gray]{0.8} {BOA{}-prob} & Combined Configs \\
\hline
Blackscholes & 20 & - & 216 & 5 & 240 & 3 & 12 & 18 & 27 & 4,320 \\
\hline
Bodytrack & 768 & 200 & 36 & 15 & 800 & 30 & 224 & 256 & 144 & 5,529,600 \\
\hline
Canneal & 7 & 525 & - & 23 & 480 & 33 & 102 & 165 & 110 & 3,675 \\
\hline
Fluidanimate & 144 & 20 & 6 & 11 & 180 & 24 & 216 & 672 & 168 & 17,280 \\
\hline
Heartwall & 256 & 320 & - & 19 & 400 & 98 & 665 & 722 & 777 & 81,920 \\
\hline
Kmeans & 120 & 100 & - & 11 & 200 & 32 & 216 & 216 & 192 & 12,000 \\
\hline
Particlefilter & 200 & 380 & 216 & 12 & 1000 & 240 & 1760 & 7200 & 1728 & 16,416,000 \\
\hline
Srad & 256 & 10 & 36 & 11 & 320 & 48 & 288 & 396 & 160 & 92,160 \\
\hline
Streamcluster & 384 & 256 & 6 & 9 & 480 & 80 & 392 & 1380 & 640 & 589,824 \\
\hline
Swaptions & 768 & 100 & 36 & 15 & 1000 & 330 & 2310 & 11025 & 1584 & 2,764,800 \\
\hline
X264 & 768 & 400 & - & 17 & 1000 & 45 & 345 & 400 & 405 & 307,200 \\
\hline
\end{tabularx}
}
\label{tbl:exploredConfigs}
\end{table*}
The Pareto-efficiency of the located points depends on exploration
time. While Figures \ref{fig:convergence} and \ref{fig:compare} show
that BOA{} produces better configurations than other
techniques, it is important to know if that gain comes
from exploring more points or from a better exploration strategy.
Table \ref{tbl:exploredConfigs} presents the number of configurations
explored for each benchmark for different methods, including different
thresholds for BOA{}-flex.
To get an estimate of the time spent exploring the combined
trade-off space for a specific benchmark, we can multiply
the number of explored configurations by the average runtime
(from the last column in Table \ref{tbl:benchmarks}).
Comparing NSGA-II and BOA{}-simple
across all benchmarks, NSGA-II explores 2.05\% of all possible
configurations, while BOA{}-simple explores about 14$\times$ fewer.
BOA{}-flex and
BOA{}-prob search only 0.682\% and 0.692\% of all possible
configurations, respectively.
\emph{These results
indicate that BOA{} not only finds better combinations of
approximate frameworks, it does so with less searching.}
Since MCKP only chooses configurations from individual Pareto-optimal
curves rather than merging the configurations, the number of explored
configurations stays very low. In the worst case, MCKP explores up to
the sum of the Pareto-optimal points of PowerDial, Loop Perforation,
and the Approximate Math Library. Unfortunately, while MCKP searches
a small space, it is too small to find many useful points.
\subsection{Input Sensitivity}
\label{sec:inputSensitivity}
Since exhaustive exploration is not feasible, we use training and
test data to ensure robustness of BOA{} on unseen data. We show
how well the behavior on
training inputs predicts that on test inputs. For each search method,
we take the normalized runtime and accuracy loss, compute a linear
least squares fit of training data to test data, and compute the
correlation coefficient of each fit. Higher correlation coefficients
imply greater robustness; \textit{i.e.}, the behavior of configurations
found during training is a good predictor of test behavior.
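For a simple linear least-squares fit of training to test metrics, the fit's correlation coefficient is the Pearson $R$ of the paired samples, which can be computed directly (the sample values below are illustrative):

```python
# Pair each configuration's metric on the training inputs with the same
# configuration's metric on the test inputs, then report the correlation
# coefficient R of the least-squares fit (Pearson r for a simple linear fit).
import math

def pearson_r(xs, ys):
    """Correlation coefficient between training-input and test-input metrics."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly consistent behavior across inputs gives R = 1.
train = [0.10, 0.20, 0.40]
test  = [0.11, 0.21, 0.41]
print(round(pearson_r(train, test), 3))  # R = 1.0 for perfectly correlated data
```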
Table \ref{tbl:correlationTableAccuLoss} shows the correlation
coefficient ($R$-values) for accuracy loss for each exploration method
per benchmark. Table \ref{tbl:correlationTableRuntime} shows the
$R$-values for runtime. By harmonic mean, BOA{} has 17\% and 64\% higher
consistency than NSGA-II for accuracy loss and normalized runtime,
respectively. Since MCKP evaluates few
configurations, its predictions are quite robust---one advantage
of MCKP over other techniques.
Some benchmarks, such as \texttt{fluidanimate} and \texttt{streamcluster},
clearly stress the difference between training and test inputs:
NSGA-II's heuristic approach can select configurations
for the training data that produce bad results on the test
data. In contrast, BOA{} not only finds more efficient
configurations, its results are also much more robust when applied to
new inputs, producing uniformly high $R$-values.
\emph{These results indicate that BOA{} is a
sound method for combining approximation frameworks.}
\def\tabularxcolumn#1{m{#1}}
\newcolumntype{M}[1]{>{\centering\arraybackslash}m{#1}}
\begin{table}[t]
\centering
\caption{Correlation coefficients for accuracy loss.}
\resizebox{0.40\textwidth}{!}{
\begin{tabularx}{0.45\textwidth}{|D{1.40}|E{0.88}|E{0.90}|E{1.02}|E{0.90}|E{0.90}|}
\hline
\cellcolor[gray]{0.7} {\tiny Benchmark} & \cellcolor[gray]{0.7} {\tiny MCKP} & \cellcolor[gray]{0.7} {\tiny NSGA-II} & \cellcolor[gray]{0.7} {\tiny BOA{}-simple} & \cellcolor[gray]{0.7} {\tiny BOA{}-flex} & \cellcolor[gray]{0.7} {\tiny BOA{}-prob} \\
\hline
{\scriptsize Blackscholes} & 1 & 1 & 1 & 1 & 1\\
\hline
{\scriptsize Bodytrack} & 0.992 & 0.972 & 0.990 & 0.989 & 0.949 \\
\hline
{\scriptsize Canneal} & 0.999 & 0.999 & 1 & 1 & 1 \\
\hline
{\scriptsize Fluidanimate} & 0.539 & 0.463 & 0.594 & 0.943 & 0.772 \\
\hline
{\scriptsize Heartwall} & 0.951 & 0.341 & 0.964 & 0.999 & 0.959 \\
\hline
{\scriptsize Kmeans} & 0.999 & 0.996 & 0.999 & 1 & 0.999 \\
\hline
{\scriptsize Particlefilter} & 0.933 & 0.999 & 0.822 & 0.997 & 0.997 \\
\hline
{\scriptsize Srad} & 1 & 1 & 0.958 & 0.927 & 0.999 \\
\hline
{\scriptsize Streamcluster} & 1 & 0.403 & 0.573 & 0.873 & 0.562 \\
\hline
{\scriptsize Swaptions} & 1 & 0.999 & 0.999 & 0.999 & 0.999 \\
\hline
{\scriptsize X264} & 0.888 & 0.841 & 0.992 & 0.938 & 0.983\\
\hline
{\scriptsize Average} & 0.908 & 0.696 & 0.863 & 0.968 & 0.928 \\
\hline
\end{tabularx}
}
\label{tbl:correlationTableAccuLoss}
\end{table}
\def\tabularxcolumn#1{m{#1}}
\newcolumntype{M}[1]{>{\centering\arraybackslash}m{#1}}
\begin{table}[t]
\centering
\caption{Correlation coefficients for normalized runtime.}
\resizebox{0.40\textwidth}{!}{
\begin{tabularx}{0.45\textwidth}{|D{1.40}|E{0.88}|E{0.90}|E{1.02}|E{0.90}|E{0.90}|}
\hline
\cellcolor[gray]{0.7} {\tiny Benchmark} & \cellcolor[gray]{0.7} {\tiny MCKP} & \cellcolor[gray]{0.7} {\tiny NSGA-II} & \cellcolor[gray]{0.7} {\tiny BOA{}-simple} & \cellcolor[gray]{0.7} {\tiny BOA{}-flex} & \cellcolor[gray]{0.7} {\tiny BOA{}-prob} \\
\hline
{\scriptsize Blackscholes} & 0.993 & 0.725 & 1 & 0.988 & 0.982 \\
\hline
{\scriptsize Bodytrack} & 0.999 & 0.999 & 0.999 & 0.999 & 0.999 \\
\hline
{\scriptsize Canneal} & 0.985 & 0.319 & 0.988 & 0.989 & 0.998 \\
\hline
{\scriptsize Fluidanimate} & 0.999 & 0.999 & 0.999 & 0.953 & 0.985 \\
\hline
{\scriptsize Heartwall} & 0.999 & 0.999 & 0.999 & 0.999 & 0.999 \\
\hline
{\scriptsize Kmeans} & 0.981 & 0.935 & 0.992 & 1 & 0.997 \\
\hline
{\scriptsize Particlefilter} & 0.992 & 0.987 & 0.998 & 0.999 & 0.998 \\
\hline
{\scriptsize Srad} & 0.998 & 0.746 & 0.999 & 0.999 & 0.999 \\
\hline
{\scriptsize Streamcluster} & 1 & 0.056 & 0.998 & 0.999 & 0.999 \\
\hline
{\scriptsize Swaptions} & 1 & 0.991 & 0.999 & 0.973 & 0.971 \\
\hline
{\scriptsize X264} & 0.998 & 0.995 & 0.983 & 0.980 & 0.998 \\
\hline
{\scriptsize Average } & 0.995 & 0.358 & 0.996 & 0.989 & 0.993 \\
\hline
\end{tabularx}
}
\label{tbl:correlationTableRuntime}
\end{table}
\captionsetup[subfigure]{labelformat=empty}
\subsection{Combination Distribution}
\begin{table}[t]
\centering
\caption{Combinations of approximation frameworks found by BOA.}
\resizebox{0.35\textwidth}{!}{
\begin{tabularx}{0.40\textwidth}{|D{1.32}|E{0.92}|E{0.92}|E{0.92}|E{0.92}|}
\hline
\cellcolor[gray]{0.7} {\tiny Benchmark} & \cellcolor[gray]{0.7} {\tiny LP (Pareto-opt)} & \cellcolor[gray]{0.7} {\tiny PD (Pareto-opt)} & \cellcolor[gray]{0.7} {\tiny AML (Pareto-opt)} & \cellcolor[gray]{0.7} {\tiny BOA{}-simple} \\
\hline
{\scriptsize Blackscholes} & 1 & - & 3 & 3 \\
\hline
{\scriptsize Bodytrack} & 6 & 5 & 1 & 30 \\
\hline
{\scriptsize Canneal} & 3 & 11 & - & 33 \\
\hline
{\scriptsize Fluidanimate} & 4 & 3 & 2 & 24 \\
\hline
{\scriptsize Heartwall} & 7 & 14 & - & 98 \\
\hline
{\scriptsize Kmeans} & 4 & 8 & - & 32 \\
\hline
{\scriptsize Particlefilter} & 6 & 8 & 5 & 240 \\
\hline
{\scriptsize Srad} & 2 & 8 & 3 & 48 \\
\hline
{\scriptsize Streamcluster} & 8 & 5 & 2 & 80 \\
\hline
{\scriptsize Swaptions} & 11 & 5 & 6 & 330 \\
\hline
{\scriptsize X264} & 9 & 5 & - & 45 \\
\hline
\end{tabularx}
}
\label{tbl:distributedConfigs}
\end{table}
When BOA{} combines frameworks, it considers
multiple configurations from each framework rather than choosing from
only one or two. Table \ref{tbl:distributedConfigs}
includes the number of configurations BOA{}-simple selects from
each framework to generate the new, combined trade-off
space. As mentioned in \secref{compareAFPIR}, the
Approximate Math Library is never better than Loop Perforation
or PowerDial in any range of accuracy loss. However, BOA uses the
Approximate Math Library in combination with Loop Perforation and
PowerDial for 7 out of the 11 applications.
\emph{These
results show that there is real benefit to combining
frameworks, as even the Approximate Math Library---which is
uniformly the worst of the three techniques by
themselves---contributes to Pareto-efficient points in the combined
space found by BOA{}.}
\iffalse
\begin{table*}[th]
\centering
\caption{Summary of training and test inputs.}
\footnotesize
\begin{tabularx}{0.9\textwidth}{|D{0.5}|E{1.25}|E{1.25}|}
\hline
\cellcolor[gray]{0.8} {Benchmarks} & \cellcolor[gray]{0.8} {\textbf{Training inputs}} & \cellcolor[gray]{0.8}{\textbf{Test Inputs}}\\
\hline
Blackscholes & 30 lists with 1M initial prices & 90 lists with 1M initial prices \\
\hline
Bodytrack & sequence of 100 frames & sequence of 261 frames \\
\hline
Canneal & 30 netlists with 400K+ elements & 90 netlists with 400K+ elements \\
\hline
Fluidanimate & 5 fluids with 100K+ particles & 15 fluids with 500K+ particles \\
\hline
Heartwall & sequence of 30 ultrasound images & sequence of 100 ultrasound images \\
\hline
Kmeans & 30 vectors with 256K data points & 90 vectors with 256K data points \\
\hline
Particlefilter & sequence of 60 frames & sequence of 240 frames \\
\hline
Srad & 3 images with 2560*1920 pixels & 9 images with 2560*1920 pixels \\
\hline
Streamcluster & 3 streams of 19k-100K data points & 9 streams of 100K data points \\
\hline
Swaptions & 40 swaptions & 160 swaptions \\
\hline
X264 & 4 HD videos of 200+ frames & 12 HD videos of 200+ frames \\
\hline
\end{tabularx}
\label{tbl:inputs}
\end{table*}
\fi
\iffalse
\begin{table}[t]
\centering
\caption{Number of configurations used from each approximation framework.}
\footnotesize
\begin{tabularx}{0.4\textwidth}{|D{1.28}|C{0.93}|C{0.93}|C{0.93}|C{0.93}|}
\hline
& \cellcolor[gray]{0.8} {LP (Pareto-opt)} & \cellcolor[gray]{0.8} {PowerDial (Pareto-opt)} & \cellcolor[gray]{0.8} {AML (Pareto-opt)} & \cellcolor[gray]{0.8} {BOA{}({\footnotesize Th=0})} \\
\hline
Blackscholes & 1 & - & 3 & 3 \\
\hline
Bodytrack & 6 & 5 & 1 & 30 \\
\hline
Canneal & 3 & 11 & - & 33 \\
\hline
Fluidanimate & 4 & 3 & 2 & 24 \\
\hline
Heartwall & 7 & 14 & - & 98 \\
\hline
Kmeans & 4 & 8 & - & 32 \\
\hline
Particlefilter & 6 & 8 & 5 & 240 \\
\hline
Srad & 2 & 8 & 3 & 48 \\
\hline
Streamcluster & 8 & 5 & 2 & 80 \\
\hline
Swaptions & 11 & 5 & 6 & 330 \\
\hline
X264 & 9 & 5 & - & 45 \\
\hline
\end{tabularx}
\label{tbl:distributedConfigs}
\end{table}
\fi
\iffalse
\def\tabularxcolumn#1{m{#1}}
\newcolumntype{M}[1]{>{\centering\arraybackslash}m{#1}}
\begin{table}[t]
\centering
\begin{tabularx}{\textwidth}{|D{1.60}|E{0.94}|E{0.94}|E{0.94}|E{0.94}|E{0.94}|E{0.94}|E{0.94}|E{0.94}|E{0.94}|E{0.94}|}
\hline
& \multicolumn{2}{E{1}|}{MCKP} & \multicolumn{2}{E{1}|}{NSGA} & \multicolumn{2}{E{1}|}{BOA0} & \multicolumn{2}{E{1}|}{BOA2} & \multicolumn{2}{E{1}|}{BOA5} \\
\hline
App & AccuLoss & Runtime & AccuLoss & Runtime & AccuLoss & Runtime & AccuLoss & Runtime & AccuLoss & Runtime\\
\hline
BL & 1 & 0.98612 & 1 & 0.52657 & 1 & 1 & 1 & 1 & 1 & 0.97753 \\
\hline
BD & 0.98484 & 0.99965 & 0.94555 & 0.99888 & 0.9808 & 0.9994 & 0.9313 & 0.99975 & 0.97925 & 0.99971 \\
\hline
CA & 0.99998 & 0.9716 & 0.99826 & 0.10208 & 1 & 0.97547 & 0.99999 & 0.95763 & 1 & 0.97835 \\
\hline
FA & 0.29014 & 0.99917 & 0.21525 & 0.99864 & 0.35319 & 0.99896 & 0.90414 & 0.99894 & 0.88976 & 0.90978 \\
\hline
HW & 0.90353 & 0.99931 & 0.11613 & 0.99961 & 0.92919 & 0.99866 & 0.89167 & 0.99846 & 0.99959 & 0.99867 \\
\hline
KM & 0.99967 & 0.96238 & 0.99388 & 0.87361 & 0.99988 & 0.98416 & 1 & 0.97192 & 1 & 1 \\
\hline
PF & 0.87217 & 0.98457 & 0.99888 & 0.97599 & 0.67623 & 0.99734 & 0.60821 & 0.99295 & 0.99537 & 0.99987 \\
\hline
SR & 1 & 0.99712 & 1 & 0.55703 & 0.91922 & 0.99974 & 0.91687 & 0.99926 & 0.86099 & 0.99976 \\
\hline
ST & 1 & 1 & 0.16293 & 0.00317 & 0.32915 & 0.99779 & 0.1959 & 0.90346 & 0.76378 & 0.99943 \\
\hline
SW & 1 & 1 & 0.99999 & 0.98225 & 0.99985 & 0.99977 & 1 & 0.95006 & 0.99996 & 0.94858 \\
\hline
X264 & 0.78953 & 0.9968 & 0.70777 & 0.99157 & 0.98541 & 0.96762 & 0.89216 & 0.99396 & 0.88161 & 0.96219 \\
\hline
Average & 0.894 & 0.991 & 0.739 & 0.728 & 0.833 & 0.992 & 0.849 & 0.978 & 0.943 & 0.979 \\
\hline
\end{tabularx}
\caption{Correlation Table.}
\label{tbl:correlationTable}
\end{table}
\fi
\section{Combining Approximation Frameworks}
\label{sec:combine}
The prior section shows that none of Loop Perforation, PowerDial, or
the Approximate Math Library is uniformly best. This observation
motivates us to combine frameworks. At one level, this process is
quite easy---just create a new trade-off space that is
the cross product of all configurations in the original frameworks.
The challenge, of course, is quickly locating the Pareto-efficient
points in the resulting massive combined trade-off space.
We meet this challenge with the BOA{} family of search
algorithms. All BOA{} methods select a subset of the combined
trade-off space and exhaustively search that subspace.
The first algorithm,
BOA{}-simple, only considers configurations in the cross-product
of individual frameworks' Pareto-optimal configurations. This
technique produces a relatively small set of points to search, but may
be subject to local minima if the approximation
frameworks are not independent.
Unfortunately, most approximation frameworks are not independent.
For example, Loop Perforation changes the number of loop
iterations within an application; PowerDial may change convergence
criteria. When we combine configurations from these frameworks, we
find that some configurations that were Pareto-optimal when
considering only the original frameworks are now far from optimal.
Conversely, we empirically find that some configurations that were not
Pareto-optimal in the original frameworks combine to be Pareto-optimal
when we consider multiple frameworks together.
These observations motivate us to expand
BOA{}-simple to include more non-Pareto-optimal
configurations in the combination.
BOA{}-flex expands the combined search space to
consider the configurations that produce a trade-off
within a user-defined threshold
of Pareto-optimal. This technique searches more points and tends to
find more efficient combinations, but it is deterministic. A common
way to avoid local minima in large search spaces is expanding the
exploration area with some form
of randomization. We follow this approach with the last algorithm:
BOA{}-prob, which probabilistically selects
configurations from each individual framework to combine.
Specifically, it uses a sigmoid probability function, so that the
closer points are to Pareto-optimal, the more likely they are to be
included in the combined trade-off space. BOA{}-prob includes
most of the same points as the other BOA{} variants, but also includes some
outliers with small probability, making it more robust in the presence
of local minima.
\noindent \textbf{BOA-simple:}
The simple version of BOA{} forms the cross-product of all
Pareto-optimal configurations from the individual frameworks.
After executing these configurations on the evaluation platform, BOA{}-simple returns the
Pareto-efficient configurations found in this
combined trade-off space.
The worst-case complexity of BOA{}-simple is bounded by
$O\left((m+\log m)\cdot 2^{n/m}\right)$, where $m$ is the number of frameworks to
be merged and $n$ is the total number of parameters of all
approximation frameworks \cite{Givargis2001}.\footnote{For the purpose of time
complexity analysis, we assume each approximation knob can take on
only two values; in reality, parameters may be assigned a
larger number of values.} In our experiments, the number of input parameters is
the sum of the numbers of loop perforation rates, software knobs, and Taylor
series bounds that represent our three frameworks.
While the algorithm has exponential
complexity, it is practical because so few configurations lie on the
Pareto-optimal frontiers produced by individual frameworks (see
\figref{comparecs1} and Table \ref{tbl:distributedConfigs}).
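BOA{}-simple can be sketched as follows (the framework trade-off lists and the evaluation stub are illustrative assumptions; in reality each combined configuration is executed on the evaluation platform):

```python
# Sketch of BOA-simple: cross-product of per-framework Pareto-optimal
# configurations, evaluate each combination, keep the Pareto-efficient results.
from itertools import product

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

def boa_simple(frameworks, evaluate):
    """Exhaustively search the cross-product of per-framework frontiers."""
    subspace = product(*(pareto(f) for f in frameworks))
    evaluated = [evaluate(combo) for combo in subspace]   # stand-in for real runs
    return pareto(evaluated)

# Hypothetical evaluation stub: assume losses add and normalized runtimes
# compose multiplicatively (a real run would measure both).
def fake_evaluate(combo):
    loss = sum(a for a, _ in combo)
    runtime = 1.0
    for _, r in combo:
        runtime *= r
    return (round(loss, 4), round(runtime, 4))

lp = [(0.00, 1.00), (0.05, 0.60), (0.05, 0.90)]   # (accuracy loss, norm. runtime)
pd = [(0.00, 1.00), (0.02, 0.70)]
print(boa_simple([lp, pd], fake_evaluate))
```

Because only per-framework Pareto-optimal points enter the cross-product, the subspace stays small even though the full combined space is exponential.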
\noindent \textbf{BOA-flex:}
BOA{}-flex augments BOA{}-simple with a user-specified
selection threshold, as shown in
\algoref{paretoThreshold}. This threshold also removes some
inconsistency that may arise due to experimental noise; {\it i.e.,} it
is possible that for high variance applications, the true
Pareto-optimal configurations cannot be found with confidence, so
adding the threshold makes the search more robust. Specifically,
BOA{}-flex considers all configurations whose trade-off is within
the user-specified threshold of a Pareto-optimal trade-off.
This threshold is specified in terms of \emph{normalized Euclidean
distance}. All trade-offs are normalized so
that accuracy loss and runtime range from 0 to 1.
Accuracy loss of 1 means the lowest
quality.
A runtime of 1 is the slowest execution time.
A trade-off point is the output of executing a configuration and
is thus a pair of accuracy loss and runtime. Having
normalized all configurations' accuracy loss and runtime, we can then
compute the Euclidean distance between the trade-offs of two separate
configurations. Given this definition, the threshold specifies how
close to Pareto-optimal a trade-off must be for it to be included in
the search. For example, if the threshold is $0.05$, the
algorithm will include any configuration whose accuracy loss/runtime
trade-off is within 5\% of a Pareto-optimal point. If the threshold is
zero, this algorithm is equivalent to BOA{}-simple.
\begin{algorithm}[th]
\caption{BOA{}-flex: expands search space by $threshold$.}
\begin{algorithmic}[1]
\footnotesize \REQUIRE $frameworks$ \LineComment{trade-off spaces of frameworks}
\footnotesize \REQUIRE $threshold$ \LineComment{User-defined $threshold$}
\STATE $Combination$ = [] \LineComment{configurations to explore}
\FOR{$f$ in $frameworks$}
\STATE $\mathit{ParetoOpt}_f \leftarrow$ Get-Pareto-Opt($f$)
\FOR{Config $C_i$ in $\mathit{ParetoOpt}_f$}
\FOR{Config $C_j$ in $f$}
\IF {$NormalizedEuclideanDistance(C_i,C_j)$ $\leqslant$ $threshold$}
\STATE $Combination$.append($C_j$)
\ENDIF
\ENDFOR
\ENDFOR
\ENDFOR
\newline
\RETURN $Combination$ \LineComment{Set of points to explore} \newline
\end{algorithmic}
\label{algo:paretoThreshold}
\end{algorithm}
\noindent \textbf{BOA-prob:}
While BOA{}-flex expands the combined search space, it only
considers additional configurations that are close to an individual
framework's Pareto-optimal curve. To make BOA{} even more robust
to local minima, BOA{}-prob employs a sigmoid probability function
to include a few points that are further from the individual
frameworks' Pareto-optimal curves:
\begin{equation} \label{eq:5}
S(C_j) = \frac{1}{1 + \exp\left(\frac{-\Delta + \beta}{\gamma}\right)}
\end{equation}
where $\Delta$ is the normalized Euclidean distance between
configuration $C_j$ and the nearest Pareto-optimal configuration.
$\beta$ is the horizontal shift and $\gamma$ controls the curve's steepness.
\algoref{paretoRandom} shows how BOA{}-prob uses Equation \ref{eq:5}.
\begin{algorithm}[th]
\caption{BOA{}-prob: probabilistic search space expansion.}
\begin{algorithmic}[1]
\footnotesize \REQUIRE $frameworks$ \LineComment{trade-off spaces of frameworks}
\STATE $Combination$ = [] \LineComment{Pareto-efficient configurations}
\FOR{$f$ in $frameworks$}
\STATE ${Pareto-opt}_f \leftarrow$ Get-Pareto-Opt($f$)
\FOR{Config $C_i$ in ${Pareto-opt}_f$}
\FOR{Config $C_j$ in $f$}
\STATE $\Delta$ = $NormalizedEuclideanDistance$($C_i$,$C_j$)
\STATE $S({C_j})$ = $\frac{1}{1 + \exp\left(\frac{-\Delta + \beta}{\gamma}\right)}$
\STATE r=rand() \LineComment{Random number between 0 and 1}
\IF {($r$ < $S({C_j})$) or ($\Delta = 0$)}
\STATE $Combination$.append($C_j$)
\ENDIF
\ENDFOR
\ENDFOR
\ENDFOR
\newline
\RETURN $Combination$ \LineComment{Set of points to explore} \newline
\end{algorithmic}
\label{algo:paretoRandom}
\end{algorithm}
We use constants $\beta=0.2$ and $\gamma=0.01$ so there is a 92\%
chance of including a point with $\Delta < 0.05$, and a 50\%
chance of selecting a point with $\Delta < 0.1$. At $\Delta=0.2$,
there is less than a 1\% chance of including the point in the
combination. If $\Delta=0$, then $C_j$ is actually
Pareto-optimal and BOA{}-prob always includes it. The interdependent
parameters $\beta$ and $\gamma$ control the size of the combined
trade-off space and the exploration time.
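As an illustrative sketch, the selection test of BOA{}-prob can be transcribed into Python. The function names are ours; the constants are the $\beta=0.2$, $\gamma=0.01$ quoted above, and the sigmoid is a literal transcription of Equation~\eqref{eq:5}.

```python
import math
import random

BETA, GAMMA = 0.2, 0.01  # constants used in the paper's experiments

def inclusion_probability(delta, beta=BETA, gamma=GAMMA):
    """Sigmoid S(C_j): probability of keeping a configuration at
    normalized distance `delta` from the nearest Pareto-optimal point."""
    return 1.0 / (1.0 + math.exp((-delta + beta) / gamma))

def prob_include(delta, rng=random.random):
    """BOA-prob keeps a point if a random draw falls below S, and it
    always keeps exact Pareto-optimal points (delta == 0)."""
    return delta == 0 or rng() < inclusion_probability(delta)
```

Note the explicit `delta == 0` check mirrors the algorithm's disjunction: Pareto-optimal points are retained unconditionally rather than left to the random draw.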
\section{Comparing Approximation Frameworks}
\label{sec:compare1}
\subsection{Terminology}
To produce performance/accuracy tradeoffs, any
approximation framework must have one or more tunable
\emph{parameters}. The values assigned to the parameter set represent
a \emph{configuration} and the range of possible parameter settings is
a \emph{configuration space}. Each configuration represents a
\emph{trade-off} between performance and accuracy. The
\emph{trade-off space} (or \emph{design space}) is the set of all
possible trade-offs; {\it i.e.,} the range of achievable performance
and accuracy.
We consider large search spaces and often do not
know the true optimal values for which we are searching. We therefore
distinguish between \emph{Pareto-optimal}---meaning we know
that a point is on the true Pareto-optimal frontier---and
\emph{Pareto-efficient}---meaning a point on the estimated frontier,
since the true frontier is unknown. Thus, if we say a point is
Pareto-efficient, it is better than all
other points found so far, but we do not know that it is truly
Pareto-optimal.
\subsection{Numerical Comparisons}
\label{sec:coverageanddominance}
For large trade-off spaces, a point-by-point
comparison is not possible. Therefore, prior work
has introduced analytical methods
for comparing trade-off spaces based on the number of
Pareto-optimal---if the trade-off space is known---or
Pareto-efficient---if the trade-off space is estimated---points
from each framework.
A point in our accuracy-performance trade-off space is a 2D vector with
$runtime$ and $accuracyLoss$. Ideally, we would have zero
runtime and zero accuracy loss; {\em i.e.,} instantaneously get a
perfect answer, leading to:
\theoremstyle{definition}
\begin{definition}{Objective Function:}
Given points $x_{1}$ and $x_{2}$, the objective
to be minimized is $f(x)$, where:
\begin{equation}
\begin{gathered}
f(x_1) < f(x_2) \iff accuracyLoss(x_1) < accuracyLoss(x_2) \\
\& \enspace runtime(x_1) < runtime(x_2)
\end{gathered}
\end{equation}
\end{definition}
Points closer to the origin represent more
efficient configurations. Given the objective function $f(x)$, we
determine if a point is more efficient than another by:
\theoremstyle{definition}
\begin{definition}{Dominance:}
Given points $x_1$ and $x_2$, we say:
\begin{equation}
\begin{gathered}
x_1 \succeq x_2 \textnormal{ (weakly dominates) if } f(x_1) \leqslant f(x_2) \\
x_1 \succ x_2 \textnormal{ (dominates) if } f(x_1) < f(x_2)
\end{gathered}
\end{equation}
\end{definition}
A point is Pareto-optimal if it is not dominated by any other point. A
point is Pareto-efficient if we do not know of another point that
dominates it. \figref{coverageScenarios}(a) illustrates an example of
dominance where point $x_3$ is dominated by point $x_2$.
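These dominance relations translate directly into code. The following Python sketch is our own illustration (points are again assumed to be (accuracyLoss, runtime) pairs, both minimized):

```python
def weakly_dominates(a, b):
    """a weakly dominates b: a is no worse in both objectives."""
    return a[0] <= b[0] and a[1] <= b[1]

def dominates(a, b):
    """a dominates b: a is strictly better in both objectives."""
    return a[0] < b[0] and a[1] < b[1]

def pareto_front(points):
    """Points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

In the figure's example, $x_3$ would be removed by the filter because $x_2$ is both faster and more accurate.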
\emph{Coverage} quantifies the number of Pareto-efficient points
produced by different techniques \cite{Palermo2009}:
\theoremstyle{definition}
\begin{definition}{\emph{Coverage}}
is the dominance ratio of the Pareto-efficient curves induced by two
separate frameworks. If $X$ and $Y$ are two Pareto-efficient
curves, and $x$ and $y$ represent points on them respectively, then:
\begin{equation}
C(X,Y)=\frac{| \{\: y \in Y \: | \: \exists x \in X : x \succeq y \: \} |}{|Y|}
\end{equation}
\end{definition}
$C(X,Y)=1$ means that all points in $Y$ are weakly dominated by points
in $X$; {\it i.e.,} for every point of $Y$, some point of $X$ achieves
equal or lower runtime at no greater accuracy loss.
\begin{figure}[tb]
\centering
\includegraphics[width=0.35\textwidth]{figs/coverage_ver2.png}
\caption{Dominance (a) and Coverage (b) Functions (from
\cite{Palermo2009}). In (a) $x_2$ dominates $x_3$ as $x_2$ is
both faster and more accurate. In (b), $X$ covers $2/3$ of $Y$
because $y_2$ and $y_3$ are dominated by at least one point in
$X$. }
\label{fig:coverageScenarios}
\end{figure}
\figref{coverageScenarios}(b) illustrates the coverage of curve $X$
with respect to curve $Y$. The points $y_2$ and $y_3$ on the $Y$ curve
are dominated by at least one point on the curve $X$---$x_2$, for
example---therefore $C(X,Y)=\frac{2}{3}$. In contrast, no point on $X$
is dominated by a point on curve $Y$ which means $C(Y,X)=0$. By this
metric, we consider $X$ more efficient, but note that the curve $Y$
extends through a larger range within the trade-off space; {\em i.e.,}
$y_1$ is a useful point which neither
dominates nor is dominated by any points in $X$. The coverage
function is non-symmetric ($C(X,Y) \neq C(Y,X)$), and in general the
two values do not sum to 1 \cite{Holzer2007}. Hence, we need a metric that
considers both coverage functions simultaneously:
\theoremstyle{definition}
\begin{definition}{\em Difference Of Coverage}
compares coverage for two different Pareto-efficient curves.
\begin{equation}
DOC(X,Y)=C(X,Y)-C(Y,X)
\end{equation}
\end{definition}
As a result, when $DOC(X,Y) > 0$, the fraction of $Y$ points
dominated by $X$ is greater than the fraction of $X$ points dominated by
$Y$. A higher $DOC$ implies one set is more efficient than the other;
if $DOC(X,Y)$ is close to zero, both may provide the same efficiency.
This metric is widely used in multi-objective optimization problems
\cite{Ascia2004,Marti2007,Marti2009}.
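The coverage and DOC metrics defined above admit a direct implementation. A minimal Python sketch (our own illustration, with points once more as (accuracyLoss, runtime) pairs):

```python
def coverage(X, Y):
    """C(X, Y): fraction of points in Y weakly dominated by a point in X."""
    dominated = [y for y in Y
                 if any(x[0] <= y[0] and x[1] <= y[1] for x in X)]
    return len(dominated) / len(Y)

def doc(X, Y):
    """Difference of coverage: positive when X is the more efficient set."""
    return coverage(X, Y) - coverage(Y, X)

# A scenario shaped like the figure: X dominates two of Y's three points,
# while Y's extreme point is neither dominated nor dominating.
X = [(0.2, 2), (0.5, 1)]
Y = [(0.1, 5), (0.3, 4), (0.6, 3)]
```

Here `coverage(X, Y)` is 2/3 and `coverage(Y, X)` is 0, reproducing the asymmetry discussed above: DOC favors $X$ even though $Y$'s point at low accuracy loss remains useful.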
\subsection{VIPER}
\label{sec:compare}
Numerical comparisons suffer from two
major shortcomings. First, they do not show the full range of
accuracy loss induced by each framework. Second, as seen
in \figref{motivationFA}, the best approximation framework varies as
accuracy loss changes. Numerical metrics---like DOC---have
limited expressiveness; \figref{coverageScenarios}(b) shows that $y_1$
is a useful point, but DOC makes $X$ look uniformly better than $Y$.
As an alternative to numerical methods, researchers have used Pareto-optimal
curves to compare frameworks, but this graphical evaluation has
proven problematic \cite{Givargis2001,Knowles1999}.
While curves may look compact, they
can differ by orders of magnitude; {\em e.g. }{} when a curve has a
steep slope and covers a large range, a small change in
one dimension ({\em e.g. } accuracy loss) leads to a significant shift in the
other ({\em e.g. } runtime).
\newcommand{\LineComment}[1]{\hfill $\triangleright$ \scriptsize{#1}}
\begin{algorithm}[th]
\caption{VIPER{}.}
\begin{algorithmic}[1]
\footnotesize \REQUIRE $M,B$ \LineComment{Lower convex hulls for framework $M$ and baseline $B$}
\STATE $MinX = Max(Min(M.x), Min(B.x))$ \LineComment{lower bound}
\STATE $MaxX = Min(Max(M.x), Max(B.x))$ \LineComment{upper bound}
\STATE $step = (MaxX-MinX)/1000$
\FOR {$accuLoss = MinX; accuLoss<MaxX;accuLoss+=step$}
\STATE $M_i \leftarrow$ find point on $M$ where $M_{i}.x < accuLoss < M_{i+1}.x$
\STATE $B_j \leftarrow$ find point on $B$ where $B_{j}.x < accuLoss < B_{j+1}.x$
\STATE $\hat{y}_{M} \leftarrow$ interpolate runtime between $M_i$ and $M_{i+1}$ where $x=accuLoss$
\STATE $\hat{y}_{B} \leftarrow$ interpolate runtime between $B_j$ and $B_{j+1}$ where $x=accuLoss$
\STATE $perfImprovRatio[accuLoss]=\hat{y}_{M} / \hat{y}_{B}$
\ENDFOR
\STATE NORMALIZE($perfImprovRatio$) \LineComment{limit the ratio to [0,1]}
\newline
\RETURN $perfImprovRatio$ \LineComment{array of points}
\end{algorithmic}
\label{algo:perfImprovRatio}
\end{algorithm}
To provide an alternative visualization of approximation frameworks,
we introduce VIPER{}, which illustrates the relative performance
of frameworks for any range of accuracy loss.
\algoref{perfImprovRatio} explains how VIPER{} calculates the
\emph{performance improvement ratio} (PIR) of one framework $M$ over a
baseline $B$. The PIR is the ratio of the frameworks'
performance at a given accuracy loss.
To make the charts readable, PIR is in
the range [0,1]. PIR$=1$ means a configuration is the fastest in the
space, while PIR$=0$ is the slowest. A configuration with PIR$=0.6$
achieves 60\% of the maximum speedup.
First, VIPER{} finds the lower and upper bounds of the accuracy
loss, which define the range of comparison. This range is then
divided into steps according to a parameterized granularity. We use a
granularity of 1000 in this paper, as larger values produced no benefit
and smaller values make the charts less clear. For each $accuLoss$
value, VIPER{} finds the corresponding runtime in both frameworks: it
locates the nearest points on each lower convex hull whose accuracy
loss brackets $accuLoss$ (identified as $M_i$ and $B_j$,
respectively), interpolates the runtime at $accuLoss$ for both
frameworks (denoted $\hat{y}_{M}$ and $\hat{y}_{B}$), and divides
these interpolated $runtime$ values to compute the performance
improvement ratio. Finally, we normalize the ratio to [0,1]. When we
compare more than two frameworks, the ratio is normalized to the
lowest and highest among all.
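The interpolation loop described above can be sketched as follows. This is an illustrative Python version of the algorithm (function names are ours); it returns the raw runtime ratios before the final normalization step, and it assumes each hull is a list of (accuracyLoss, runtime) points sorted by accuracy loss.

```python
def interp_runtime(hull, x):
    """Linearly interpolate runtime at accuracy loss x on a lower convex
    hull given as (accuracyLoss, runtime) points sorted by loss."""
    for (x0, y0), (x1, y1) in zip(hull, hull[1:]):
        if x0 <= x <= x1:
            t = 0.0 if x1 == x0 else (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    raise ValueError("x outside hull range")

def pir(M, B, steps=1000):
    """Performance improvement ratio of framework M over baseline B,
    sampled across the shared accuracy-loss range."""
    lo = max(M[0][0], B[0][0])   # lower bound of comparison range
    hi = min(M[-1][0], B[-1][0])  # upper bound of comparison range
    step = (hi - lo) / steps
    xs = [lo + i * step for i in range(steps)]
    return [interp_runtime(M, x) / interp_runtime(B, x) for x in xs]
```

For instance, a framework whose hull is uniformly twice as fast as the baseline yields a constant ratio of 0.5 at every sampled accuracy loss.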
VIPER{} then charts the PIR across the
range of accuracy loss. The values for the baseline $B$ will be a
straight horizontal line. Values of $M$ above that line indicate that
$M$ achieves higher performance for that accuracy loss. If the line
for $M$ stays above that for $B$ for a greater range of accuracy loss,
it means $M$ has found more efficient configurations, on average. The
color shading on the plot background indicates the highest-performing
method at that accuracy loss across all frameworks. Therefore,
if the plot's background is dominated by a single color, the
corresponding method provides the most efficient configurations.
Thus, VIPER{} allows
users to see, at a literal glance, whether one approximation
framework is clearly superior to another.
\section{Conclusion}
\label{sec:conclusion}
Many approximation
frameworks have recently appeared that exploit different configurable
parameters to trade reduced accuracy for decreased resource consumption.
This paper proposes methods for both comparing and combining different
frameworks. VIPER{} is a visualization tool for comparing
approximation frameworks across their entire range of available
accuracies. We show this tool is useful for comparing existing
approximation frameworks regardless of their type and applied system
level. BOA{} is a family of algorithms that combine approximation
frameworks and quickly locate Pareto-efficient configuration
combinations.
\paragraph*{Acknowledgments}
The effort on this project is funded by the U.S. Government under the
DARPA BRASS program and by a DOE Early Career Award. Additional
funding comes from the NSF (CCF-1439156, CNS-1526304, CCF-1823032,
and CNS-1764039).
\section{Introduction}
Approximation frameworks \emph{configure} applications to operate
within a \emph{trade-off} space where the result accuracy is exchanged for
other benefits, typically increased performance. Different
approximation frameworks exist across the layers of the system stack.
Some focus on the circuit level
\cite{Palem2003,lingamneni2013,Ingole2015,Chippa2013,Muralidharan2016,Chakrapani2006}.
Others replace expensive hardware with approximations
\cite{samadi2014,Esmaeilzadeh2012,Temam2012,Esmaeilzadeh2006}. Still
others exist at the programming language and compiler level
\cite{bornholt2014,Kansal2013,Sampson2011,Oh2013,ansel2011,Baek2010,Fousse2007,Kwon2009}.
As approximation methods proliferate, it is natural to ask how they
interact; in particular:
\begin{itemize}
\item \emph{How to compare the trade-off spaces induced by different
techniques?} Comparing individual points in the trade-off
space is easy: simply compare all frameworks' performance at that point. Most
approximation frameworks, however, produce a trade-off space---with
a range of operating points---so we need techniques that compare
frameworks across that entire range.
\item \emph{How to combine different techniques' trade-off spaces and
locate Pareto-efficient points in the new space?} The challenge is
quickly locating more efficient configurations in the immense combined trade-off space, which is too big
to search exhaustively.
\end{itemize}
\noindent \textbf{VIPER and BOA: }
To compare approximation frameworks we propose VIPER: Visualizing
Improved PErformance Ratios. While existing techniques use numerical
comparisons \cite{Zhou2011,Holzer2007,Ascia2004,Marti2007,Marti2009}
or simply display Pareto-optimal curves, VIPER produces a visual
representation of the trade-off space. VIPER produces charts showing
normalized performance for different frameworks across all
possible accuracy loss ranges. A chart is divided into different,
mathematically meaningful regions that show how much one framework
out-performs others.
To combine frameworks, we propose BOA: Blending Optimal
Approximations. BOA is a family of algorithms that locate
Pareto-efficient points in the huge trade-off space produced by
multiple frameworks. In its simplest version, BOA-simple
searches the cross product of Pareto-optimal points from individual
frameworks. BOA extensions, evaluate more of the search space---either deterministically or probabilistically---including more
near-optimal points. BOA then returns the
Pareto-efficient points from this search space.
\noindent \textbf{Summary of Results:}
We consider two case studies. Both use prior
approximation frameworks from across the system stack: Loop Perforation---a compiler technique (LP)
\cite{Sidiroglou-Douskos2011}, PowerDial---an application-level technique (PD) \cite{Hoffmann2011}, and
the Approximate Math Library---a library that changes numerical precision (AML) \cite{Kwon2009}. We use eleven applications
covering domains from machine learning to image processing. Each
application includes multiple inputs that we divide into training and
test sets to evaluate whether combination methods produce
statistically sound results on unseen inputs.
The first case study uses VIPER to compare Loop Perforation,
PowerDial, and the Approximate Math Library. Loop Perforation simply discards loop iterations with no regard to original
intent. PowerDial, in
contrast, builds on
approximations that already exist in the application; {\it i.e.,}
those envisioned by the original programmer. The Approximate Math Library approximates math functions
({\it e.g.,} \texttt{exp}, \texttt{log}, and \texttt{sqrt}) using variable Taylor series
expansion. While these approximations work at different levels of the
stack, VIPER allows us to quickly compare them and produces more
intuitive visualizations than simply looking at Pareto-optimal curves.
The second case study combines the three approximation techniques and
locates Pareto-efficient points in this new, significantly larger
trade-off space. We compare BOA to two state-of-the-art design-space
exploration algorithms: Multiple Choice Knapsack Problem (MCKP)
\cite{Yang2003} and the Non-dominated Sorting Genetic Algorithm II (NSGA-II)
\cite{Deb2002}. Compared to these two approaches, BOA achieves:
\begin{itemize}
\item \textbf{More efficient configurations}: BOA{}-simple produces
48.2\% and 35.1\% more Pareto-efficient points than MCKP and
NSGA-II, respectively.
\item \textbf{More reliable behavior on unseen inputs}: BOA{}-simple
finds statistically meaningful Pareto-efficient configurations that are not sensitive
to input data and are more likely to be efficient on
an unseen set of inputs. That is, the correlation between BOA{}'s behavior on training (seen) and test (unseen) inputs is much higher than MCKP and NSGA-II.
\end{itemize}
Somewhat surprisingly, BOA produces much better results while using a
much simpler search technique than the prior works to which it is
compared. The fact that simpler search methods can produce better
results is a key contribution of this work. The primary insight is
that, empirically, we find that optimal combinations of approximation
frameworks tend to involve configurations that are near-optimal for
the individual frameworks. Therefore, BOA explores combinations derived from these points with high probability. In
contrast, MCKP does not consider enough non-optimal configurations and
gets stuck in local minima, while NSGA-II explores too many
non-optimal configurations---avoiding the worst local minima, but also
stopping short of the true optimal combinations. Thus, BOA's method
represents a compromise that works well for approximation frameworks,
whose optimal combinations tend to be near the individually optimal
points.
\noindent \textbf{Contributions:}
\label{sec:contributions}
\begin{itemize}
\item Introduction of VIPER to visually compare approximation
trade-off spaces over their entire range.
\item Comparison analysis---based on VIPER---of the three approximation
frameworks (Loop Perforation \cite{Sidiroglou-Douskos2011},
PowerDial~\cite{Hoffmann2011}, and Approximate Math
Library~\cite{Kwon2009}).
\item Proposal of variations of BOA for quickly locating Pareto-efficient configurations in
the combined trade-off space.
\item Open-source release of the VIPER and BOA tools.
\end{itemize}
\section{Motivation and Background}
\label{sec:background}
\subsection{Approximation Across the System Stack}
Approximation frameworks reduce runtime (or energy) by allowing
output quality degradation. Hardware approximation computes inexactly in return for
reduced energy, area, or time
\cite{lingamneni2013,Chippa2013,Esmaeilzadeh2011}. Many software
approximation techniques allow specific
software components to be replaced by approximate variants; {\em e.g.,}
skipping loop iterations or replacing
math operations with Taylor-series expansions
\cite{Omer,Patterns2010,ICSE2010,Hoffmann2009,Samadi2013,Sidiroglou-Douskos2011,Hoffmann2011,Park2016,Kwon2009,Mathar2009,Abad2015,JouleGuard,Fousse2007,Canino2018}.
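As a concrete illustration of the math-library style of approximation, a truncated Taylor (Maclaurin) series trades accuracy for speed by varying the number of retained terms. The sketch below is illustrative Python, not code from any cited framework; the function and its \texttt{terms} knob are our own naming.

```python
def exp_taylor(x, terms):
    """Truncated Maclaurin series for exp(x).
    Fewer terms -> cheaper evaluation but larger error: an accuracy knob."""
    total, term = 1.0, 1.0
    for n in range(1, terms):
        term *= x / n      # builds x**n / n! incrementally
        total += term
    return total
```

A configuration of such an approximate math library would then correspond to a choice of `terms` at each replaced call site.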
Some approaches use machine learning
to replace exact computation with a faster, less accurate learned variant
\cite{Esmaeilzadeh2012,Temam2012,Muralidharan2016,Sui2016,ALERT1,ALERT2}.
Language-level support for approximation allows
specification of variants for key functionality and formal
analysis of their effects \cite{Sorber2007,Baek2010,Sampson2011,ansel2011,Kansal2013,bornholt2014,Proteus,LAB,Ent}.
Other mechanisms ensure that
approximate programs maintain quality or energy guarantees,
either through program analysis \cite{Ringenburg2015,Carbin2013,Darulova2013} or through runtimes with formally analyzable dynamic adaptation \cite{JouleGuard,CoAdapt,FSE2017,FSE2015,Meantime,TAAS2017,ASPLOS2018,AdaptCap}.
\subsection{Comparing Approximation Frameworks}
Our
intuition is that some approximation frameworks will produce better results (e.g.,
higher performance for the same accuracy) than others in different
situations. Hence, we need a method to compare frameworks across their
full range of accuracy and choose the best one for a specific usage
scenario.
Points in these \emph{trade-off spaces} correspond to
\emph{configurations} of the approximation framework. Point-by-point
comparison is infeasible since trade-off spaces include numerous
configurations, many of which are not useful. Typically, only the
Pareto-optimal points are used for comparison.
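For concreteness, Pareto-optimality over (normalized runtime, accuracy loss) pairs, both minimized, can be computed with a simple dominance filter. This is an illustrative Python sketch under our own naming, not the paper's tooling:

```python
def dominates(a, b):
    """a dominates b if a is no worse in both metrics and strictly better in one.
    Each point is a (runtime, accuracy_loss) pair; lower is better for both."""
    return a[0] <= b[0] and a[1] <= b[1] and (a[0] < b[0] or a[1] < b[1])

def pareto_front(configs):
    """Keep only configurations that no other configuration dominates."""
    return [c for c in configs if not any(dominates(o, c) for o in configs)]
```

For example, `pareto_front([(1.0, 0.5), (0.8, 0.6), (1.2, 0.4), (1.1, 0.7)])` drops only `(1.1, 0.7)`, which `(1.0, 0.5)` dominates.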
As an example, we compare three approximation frameworks: PowerDial
\cite{Hoffmann2011}, Loop Perforation \cite{Hoffmann2009,ICSE2010,Sidiroglou-Douskos2011},
and the Approximate Math Library \cite{Kwon2009}. We pick these three
frameworks because (1) they are either easily recreated or publicly
available, requiring no specialized language or hardware support, and
yet (2) they are representative of approaches applied at different
levels of the system stack. PowerDial is an application-level
approach that exploits existing trade-offs envisioned by the
application developers. Loop Perforation creates approximate
applications by applying a compiler transformation to selectively skip
loop iterations. The Approximate Math Library changes computation, and
while implemented in software, it is a good proxy for approximation
techniques that change hardware arithmetic units.
\figref{motivationFA} illustrates the trade-off spaces induced by
these three approaches for the \texttt{fluidanimate} benchmark. Each
point represents the normalized runtime and accuracy loss.
\begin{figure}[t]
\begin{center}
\input{img/motive/motive-Fluidanimate-configSpace.tex}%
\caption{\texttt{fluidanimate}'s accuracy/runtime trade-off space
with Loop Perforation (LP), PowerDial (PD), and Approximate Math Library (AML). The closer a configuration is to the origin, the
more efficient it is. }
\label{fig:motivationFA}
\end{center}
\end{figure}
Comparison of approximation frameworks across their full range of
accuracy is necessary, as not all users have the same accuracy
requirements. We find, however, that looking at Pareto-optimal curves
like those in \figref{motivationFA} is unsatisfying and
rarely makes it obvious which approximation method is better across a
range of operating points. Moreover, while for a certain range of
accuracy one framework might perform better, another framework might
produce higher performance at different accuracy ranges.
This motivates VIPER, a tool that allows users to tell---at
a glance---which framework has the best performance for any range of
accuracy loss.
\subsection{Combining Approximation Frameworks}
\figref{motivationFA} suggests that none of the three approximation
frameworks is uniformly best. Furthermore, the fact
that the three are broadly representative of approaches from different
levels of the system stack motivates us to combine them for better
accuracy/runtime trade-offs than any individual framework achieves.
The challenge is that merging multiple
frameworks leads to an enormous trade-off space, which is infeasible
to explore exhaustively. Table \ref{tbl:exploredConfigs} lists the
number of points in the trade-off spaces of Loop Perforation,
PowerDial, and Approximate Math Library for sample benchmarks.
For example, the \texttt{x264} benchmark takes up to 4 weeks to test
all combined configurations with a single input. Since multiple
inputs must be tested for statistically sound results, exhaustive
exploration is clearly infeasible.
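The blow-up comes from the cross-product of the per-framework configuration spaces. The arithmetic below uses hypothetical per-framework counts and an assumed profiling cost (the measured values are in Table \ref{tbl:exploredConfigs}), chosen only to illustrate how a combined space reaches the multi-week scale mentioned above:

```python
# Hypothetical per-framework configuration counts -- illustrative only,
# not the measured values from the paper's Table tbl:exploredConfigs.
per_framework = {"LoopPerforation": 56, "PowerDial": 20, "ApproxMathLib": 18}

combined = 1
for count in per_framework.values():
    combined *= count          # cross-product of independent knobs

minutes_per_run = 2            # assumed cost of one profiling run on one input
total_days = combined * minutes_per_run / (60 * 24)
print(combined, total_days)    # 20160 configurations, 28.0 days (4 weeks)
```

Every additional framework, or additional test input, multiplies this total, which is why intelligent rather than exhaustive search is required.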
More formally, combining approximation frameworks requires
quickly locating Pareto-efficient configurations in the new, larger
trade-off space. Exploring large trade-off spaces is well-studied and
has produced
two broad classes of approach. The first is carefully selecting
and exhaustively searching a subset of the combined trade-off space
\cite{Givargis2001,Yang2003}. The second class intelligently
traverses the entire combined trade-off space---not limiting the
initial configuration combinations, but exploring only a small fraction of the
total \cite{Ascia2004,Palermo2009,Zitzler2012}. Among
these intelligent search techniques, NSGA-II---a genetic
algorithm-based approach---has repeatedly out-performed other proposals \cite{Deb2002,Zitzler2012}.
While prior work has proven effective for application-specific
processor design, we find that it is not the best match for combining
approximation frameworks. Specifically, the heuristic exploration of genetic algorithms appears to
cause two issues: (1) in an effort to avoid local minima, they produce less
efficient combinations (see Section
\ref{sec:coverageMetricEvaluation}) and (2) they add too much randomization
that leads to lower correlation between training and
test inputs (see Section \ref{sec:inputSensitivity}).
\section{Introduction}
Telerehabilitation involves leveraging technologies (e.g., the Internet) to facilitate \textit{the communication of information} between a patient and their clinician at a distance in order to provide rehabilitation services~\cite{Brennan2010}. However, the type of information that needs to be communicated is not well defined. Before telerehabilitation services became viable, researchers and developers designed non-internet \emph{home-based} therapy systems with the intention of increasing patients' access to, and sustained use of, rehabilitation services. These home-based therapy systems largely focused on motivating exercise and creating engagement at home (e.g., through gamification). As telerehabilitation systems' viability and development increased, they seem to have organically followed the focus of home-based therapy technologies, in which patients autonomously perform exercises and movement data is primarily captured and conveyed to rehabilitation specialists. Thus, research and development in both telerehabilitation and home-based systems have largely aimed at sensing movement data.
Our research was initially guided by this focus on collecting and sharing movement data. We set out to determine the types and exactness of movement data needed in stroke rehabilitation, with the goal of informing the design of telerehabilitation systems for low-resource communities. As we began to observe face-to-face rehabilitation sessions and interview physiatrists, physical therapists, and occupational therapists, we instead found that the information they dedicate effort to extracting, understanding, and integrating into their care plans is incongruent with the current design paradigm of telerehabilitation systems.
In this study, we investigate the information exchanged between stroke survivors, clinicians, and caregivers in co-located, in-clinic stroke rehabilitation sessions, with the goal of informing the design of future stroke telerehabilitation systems, such that patients and specialists can attain the benefits of co-located interaction. What we learned is that the information needed by rehabilitation specialists is not primarily a detailed understanding of movement data, but rather a deep understanding of \emph{experiential information}, such as the stroke survivor's emotions and motivations. We show that experiential information is central to what is shared in stroke rehabilitation. Therefore, we argue for a paradigm reconceptualization of telerehabilitation system design, in which telerehabilitation focuses on communicating the patient's situated context.
\newpage
Our contributions to HCI research on stroke telerehabilitation are: (1) The definition and composition of the \emph{experiential information} needed in stroke rehabilitation, (2) An explanation of our proposed paradigm reconceptualization for future stroke telerehabilitation research, and, (3) Implications for the design and development of stroke telerehabilitation systems.
\section{Background: Stroke Rehabilitation Process \& Specialists}
Stroke is one of the leading causes of long-term disability in the United States~\cite{benjamin2017heart}. The American Heart Association and American Stroke Association projected in 2013 that the costs associated with stroke will increase 129\% by 2030, and concluded that more rehabilitation and acute care services are needed to address stroke as national healthcare costs increase yearly~\cite{Ovbiagele2013}. Stroke survivors in particular will spend, directly and indirectly, an average of $\$103,576$ over their lifetime on treatment~\cite{Heidenreich2011}. Rehabilitation after a stroke has a high cost, as it involves a wide variety of experienced clinicians and specialized equipment. Unfortunately, limited access to specialized rehabilitation locations puts high-level care outside the reach of many US citizens, including those in rural and low-resource communities~\cite{benjamin2017heart}. The ongoing COVID-19 pandemic has exacerbated these challenges, as today remote healthcare is the only treatment option even for people for whom distance and cost were not barriers.
To give background, we describe the stroke rehabilitation process through the perspective of the multiple rehabilitation specialists that create rehabilitation care plans for outpatient stroke survivors. These specialists coordinate through an extensive care network that includes: (1) the stroke survivor, (2) caregivers (i.e., the stroke survivor's immediate care network), (3) medical specialists (e.g., physiatrists, cardiologists, and neurologists), and (4) allied health specialists (physical therapists, occupational therapists, and speech-language therapists). The network engages in an extensive amount of co-interpretation~\cite{Mentis2015}, a collaborative and interpretive process to assess movement and treatment efficacy, and care coordination~\cite{McDonald2007}, organizing the different aspects of care.
\textbf{\textsc{Physiatrists (PHY).}}
Physical Medicine and Rehabilitation (PM\&R), or physiatry, is the branch of medicine that treats individuals with physical impairments, functional limitations, pain, and disabilities that affect the brain, spinal cord, nerves, bones, joints, ligaments, muscles, and tendons~\cite{nih-pmr-clinical-center_2017, AmericanAcademyofPhysicalMedicineandRehabilitation2020}. Physiatrists are the primary medical doctors that guide stroke rehabilitation treatment, preceding physical and occupational therapy. They are key to acquiring a holistic perspective on functional and motor stroke rehabilitation, yet, to our knowledge, no other qualitative HCI study has included physiatrists.
\textbf{\textsc{Physical Therapist (PT) and Occupational Therapist (OT).}}
A PT provides care to restore and maintain a patient's sensory and motor abilities (e.g., improve gross motor movement), whereas an OT provides care to reduce the effects of a patient's disabilities through adaptation (e.g., retrain self-care skills)~\cite{nih-ot-clinical-center_2017,nih-pt-clinical-center_2019}. Both specialists have aligned objectives in providing goal-oriented care to their patients, to ultimately restore functional ability and mobility, and to improve quality of life through a \emph{patient-centered approach}.
\textbf{\textsc{Patient-Centered Care.}} Patient-centered care takes into account the individual needs, values, and expressed interests of patients; it has been identified as a gap in the US health system, with the Institute of Medicine urging the United States Congress to establish funds for this purpose~\cite{InstituteofMedicine2001}. Six primary dimensions make up patient-centered care~\cite{gerteis1993}: (1) respect for patients' values, (2) coordination and integrative care, (3) information, communication, and education, (4) physical comfort, (5) emotional support to combat fear and anxiety, and (6) involvement of patients' family and friends. Patient-centered care is enacted in face-to-face stroke rehabilitation sessions by specialists assessing the patient's rehabilitation progress, having their patients conduct interventions (exercises and activities), and then creating or modifying a rehabilitation care plan in concert with the patient. The care plan is typically a set of prescribed interventions that the specialists evaluate and update periodically to monitor progress, making it dynamic and evolving. Specialists typically document their assessment using the SOAP method (Subjective, Objective, Assessment, and Plan), a method widely used by healthcare professionals to fill out patient notes during an appointment and to promote continuity in health records~\cite{Weed1964}.
\section{Related Work on Home-based and Telerehabilitation System Design}
Technology for stroke rehabilitation can be classified as \emph{home-based therapy systems}, which are not connected online to remote specialists, and \emph{telerehabilitation systems}, which communicate rehabilitation information and data through the Internet to a remote specialist~\cite{Brennan2010}. Telerehabilitation systems vary in implementation, but they focus on connecting patients to remote specialists, asynchronously or synchronously, and transmitting: (1) communication data (e.g., audio, video, or text messages) and/or (2) sensor-based data (e.g., movement).
After reviewing systematic and scoping reviews of telerehabilitation systems from the last 10 years~\cite{Johansson2011,Santayayon2012,Laver2013,Rogante2015,Veras2017,Sarfo2018,Tchero2018,Appleby2019}, we noticed that current stroke telerehabilitation system design has centered on the asynchronous and synchronous communication of movement sensor data. This trend has its own complexities outside the scope of this work and deserves to be studied on its own in the future.
The following review of home-based and telerehabilitation system design exposes a clear trend of focusing on movement data, typically originating from sensors, rather than considering the importance of other information needs and goals.
Most of this research is driven by generating solutions that increase exercise motivation.
\subsection{Sensors and Motivation as the Design Focus}
Data needs in stroke rehabilitation systems were codified in 2009 in Egglestone \etal's~\cite{Egglestone2009} design framework for home-based stroke therapy systems. Through workshops with stroke survivors and clinicians, the authors identified (1) background information, such as the stroke's disruptive effects on patients or the contribution of professionals to recovery, and (2) exercise execution data for evaluation, which can be gathered through sensors or self-report. What has transpired since then is a litany of sensor-based systems focused on gathering data to support the second information need. What we have not seen is a consideration of the first: specifically, identifying the types of ``background'' information, and determining how to collect and present both types of information to rehabilitation specialists.
Sensor development has been an important step for telerehabilitation and home-based systems to work---we are not denying that. However, consider the following five recently published systems:
a low-cost, wireless home-based rehabilitation sensor that reliably captures upper-limb arm posture and movement~\cite{Lim2010};
\emph{Us'em}~\cite{Beursgens:2011:UMS:1979742.1979761}, a wristband-like activity monitor of arm--hand performance designed strictly for patients to motivate the use of an impaired arm during everyday activities;
\emph{mRes}~\cite{Weiss:2014:LCT:2686893.2686989}, a low-cost device that measures rotational movement, aimed at training dorsal wrist extension and finger manipulation (both in supination and pronation), with an API for information exchange with telerehabilitation systems;
The combination of Microsoft's Kinect\footnote{\url{https://developer.microsoft.com/en-us/windows/kinect/}} sensor data with machine learning to automatically assess stroke rehabilitation exercises~\cite{lee2019}; and,
\emph{ArmSleeve}~\cite{Ploderer2016}, a sensor-embedded sleeve that captures objective upper limb data in patients' daily life, outside rehabilitation exercises, creating a visualization for OTs in a dashboard.
In all of these, the initial focus is on the valid collection of data towards motivating correct movement. Interestingly, in the evaluation of this last system, the authors found a major limitation to interpreting this movement data: the lack of contextual information.
Motivating a patient to perform, or more so ``correctly'' perform, a movement or exercise has been a prevailing goal in many of the systems designed. For instance, Alankus \etal~\cite{Alankus2010} studied therapeutic games using Nintendo Wii remote controllers\footnote{\url{https://www.nintendo.co.uk/Wii/Accessories/Accessories-Wii-Nintendo-UK-626430.html}} and a webcam to sense movement, emphasizing the role of home-based stroke rehabilitation games in keeping monotony low while providing performance feedback to specialists. In their follow-up work, Alankus \etal~\cite{Alankus2012c} specifically used the Wii remote to reliably sense compensatory movement (i.e., ``incorrect'' movements that achieve the same outcome because the ``correct'' movement is impossible or tiring) during gameplay, and then recommended feedback mechanisms to reduce such compensatory movements. Likewise, mobile phone sensors have also been used to keep rehabilitation engaging, measuring movement in upper-limb rehabilitation while providing instant feedback on the screen through a game~\cite{10.1145/2700648.2811337}.
We do not disagree that giving immediate feedback to patients about the correctness of exercise execution can help improve their performance as well as motivate them to try harder; for instance, vibrotactile feedback~\cite{Held2017} in an Arm Usage Coach (AUC), or a Virtual Reality headset that immerses the patient in a 3D game that adapts its intensity to increase engagement~\cite{10.1145/3316782.3321545}. What we question in this paper is how useful movement information is in a telerehabilitation context, and how the focus on this design trend might miss the information needs of rehabilitation specialists.
\subsection{Data Needs in Telerehabilitation}
In the development of telerehabilitation systems, there is added complexity in determining what data/information to transmit to a care specialist, and how to present it in a meaningful way. For instance, after usability testing of \emph{TeleREHA}, Perry \etal~\cite{Perry2011} found there are data needs in planning (configuration, game parameterization, and scheduling), executing (measuring exercise data), and assessing (viewing data). In response, Postolache \etal~\cite{Postolache2011} proposed intelligent telerehabilitation assistants called \emph{Rehabilitative TeleHealthCare}, where sensor signals are processed and combined into visualizations for caregivers, including physical movement (e.g., posture and daily walking movement), physiological data (e.g., heart rate, oxygen saturation, and respiratory rate), and localization.
However, there have been indications that movement data is not enough in the telerehabilitation context. Dekker-van~Weering \etal~\cite{Dekker-vanWeering:2015:DTS:2838944.2838971} evaluated clinician needs using a telerehabilitation system and pointed to the system's need to integrate patient context; for instance, integrating a patient's mood to allow them to skip a day of exercise, or engaging with a therapist when choosing exercises. Lastly, in telerehabilitation co-design sessions with traumatic brain injury clinicians, How \etal~\cite{How2017c} concluded that a successful system needs to adapt to a patient's physical, cognitive, and emotional state, the evolution of their rehabilitation history, and the surrounding life context, such as social life. Note that the authors argue that \emph{designers} should take these aspects into account when building systems, but they do not study in depth what this contextual information is or what its role in the system should be.
In summary, stroke home-based therapy and telerehabilitation systems have followed a trend of collecting movement data. This trend leads to systems that provide little to no context about a patient's situated condition and exclude the collection of information such as subjective data. In contrast, our research aims to integrate all components of face-to-face stroke rehabilitation into a telerehabilitation system design. We conducted a field study focused on face-to-face sessions to reveal information and practices that might not be integrated into existing telerehabilitation tools.
\section{Method}
\subsection{Field Study Design}
As we were interested in the complex and rich information exchange that occurs in co-located rehabilitation, we conducted an ethnographically informed field study involving observations of rehabilitation sessions, interviews with specialists, and a review of stroke rehabilitation-related documents and artifacts. Before starting the study, we held four consultation meetings with a neurologist and a PT, both specializing in stroke rehabilitation and recovery, to inform its design.
The semi-structured interviews involved specialists at three different medical centers, and observations involved specialists who worked in two of them. Our initial observations were conducted in one hospital located in a major city in the mid-Atlantic region of the United States of America, later adding a hospital in a different system. All hospitals had Physical Medicine \& Rehabilitation departments. All of the medical centers served patients who were from (1) low socio-economic areas, (2) technologically low-resourced locations, and/or (3) surrounding rural areas. Documents and artifacts were collected at both hospitals as our study progressed.
\subsection{Participants}
Participants were recruited through snowball sampling within the participating medical centers. We interviewed four physiatrists (PHYs), five physical therapists (PTs) and seven occupational therapists (OTs), who work with stroke survivors in an outpatient setting, and observed a subset of those based on availability. The participant demographics are detailed in \autoref{tab:demoandobs}. Participants had various specialties, evident by their degrees (including M.D./Ph.D., Masters in Engineering, DPT, and MBA), as well as experience in different medical settings, most notably in low resource and rural communities outside of the US (denoted with ** in \autoref{tab:demoandobs}).
\begin{table}[h]
\smaller
\begin{tabular}{cccc}
\toprule
\multicolumn{1}{c}{Participant} & \specialcell{Experience\\ (years)} & \specialcell{Practice\\Setting} & Data \\
\midrule
PHY1 & 15 & P, H, R & I7, O9*,O10* \\
PHY2 & 17 & P, H, R & I8 \\
PHY3** & 16 & P, H & I9 \\
PHY4 & 6 & H & I13 \\
PT1** & 21 & P, H & I4 \\
PT2 & 29 & P, H & I10,O12* \\
PT3 & 3 & H & I14,O18 \\
PT4 & 5 & H & I15,O13*,O14*,O15*,O16,O17 \\
PT5 & 4 & H & I16 \\
PT6 & 7 & H & O3\\
OT1 & 5 & H & I1,O4*,O5,O6,O7,O8\\
OT2 & 18 & P, H, R & I2 \\
OT3** & 2 & H & I3 \\
OT4** & 6 & P, H, R & I5 \\
OT5 & 1y \& 8m & P, H & I6 \\
OT6** & 25 & P, H & I11,O11 \\
OT7 & 2m & P & I12 \\
OT8 & 2 & H & O1, O2\\
\bottomrule
\end{tabular}
\caption{Participant demographics and collected data summary. ** denotes experience outside of the US. * denotes caregiver was in attendance. ``I'' = Interview. ``O'' = Observation. ``P'' = Private. ``H'' = Hospital. ``R'' = Research.}
\label{tab:demoandobs}
\end{table}
\subsection{Data Collection}
\subsubsection{Observations}
We observed and video/audio recorded 18 stroke survivors' rehabilitation sessions from beginning to end, to understand in-clinic rehabilitation, focusing on the information exchange between stroke survivors, caregivers, and rehabilitation specialists as they discuss rehabilitation.
These sessions included specialists assessing the dexterity, spasticity, and cognitive function of stroke survivors, and subsequently, treatments, exercises, and therapies were prescribed.
The exercises and activities varied in the abilities they were attempting to address: motor, strength, cognitive, robotic, and aquatic.
Due to a COVID-19 state of emergency, we were unable to observe further physical therapy sessions, and we were not allowed to attend virtual sessions due to hospital policy.
\subsubsection{Semi-Structured Interviews}
We conducted semi-structured interviews with 16 of the 18 participants. We were unable to complete interviews with PT6 and OT8 due to workflow constraints before and after the observed rehabilitation sessions. For the participants we observed, interviews were conducted before and after the rehabilitation session at the convenience of the specialists. Before observations, interviews primarily focused on \textbf{(A)} gaining insight into the current work practices and data needs of the rehabilitation specialist and \textbf{(B)} eliciting their perceived needs for a telerehabilitation system. After observations, interviews focused on validating our interpretation of their practices and the information shared within the session. For those we were unable to observe, only one interview was conducted.
At the start of this study, the semi-structured interview protocol questions included:
\begin{enumerate}
\item\textbf{A:} What are you trying to accomplish through the rehabilitation evaluations?
\item\textbf{A:} What are you looking for when patients complete a task? (repetition, completion or form?)
\item\textbf{B:} What would you like to know about your patients when they complete activities at home?
\end{enumerate}
Following the first set of observations and interviews with OT1 and OT2, and reflecting on our consultations, we realized the specialists' interests lay beyond information on repetition, completion, and form, as they described a much broader set of information types. We thus added a new prompt, asking participants to rank four data types we had collected to that point according to importance.
\begin{enumerate}
\setcounter{enumi}{3}
\item \textbf{B:}
In the order of most importance to least, what data (activity repetition, frustration, motivation, and stress) are you interested in knowing about your patient's health at home while completing rehabilitation exercises?
\end{enumerate}
\begin{itemize}
\item \textit{Activity Repetition}: Completion of the prescribed repetitions, computed from sensor data.
\item \textit{Frustration}: Level of frustration when completing a prescribed exercise.
\item \textit{Motivation}: Level of motivation when completing a prescribed exercise.
\item \textit{Stress}: Levels of stress due to stroke or home environment.
\end{itemize}
\subsubsection{Documentation and Artifact Review}
We collected documentation provided to stroke survivors to complement our understanding, including: a patient take-home exercise packet, a brochure for patients explaining stroke, a cognitive impairment assessment form called Montreal Cognitive Assessment (MOCA)\footnote{\url{https://www.mocatest.org/}}~\cite{Nasreddine2005}, and a medical note template that uses the \emph{Subjective, Objective, Assessment and Plan} (SOAP) method (example from~\cite{Susan2002} in~\autoref{app:SOAP}).
We also examined the \emph{Nine Hole Peg Test}, a quantitative assessment that measures finger dexterity\footnote{\url{https://www.sralab.org/rehabilitation-measures/nine-hole-peg-test}}, and the \emph{Box and Block Test}, which assesses unilateral gross movement\footnote{\url{https://www.sralab.org/rehabilitation-measures/box-and-block-test}}.
\newpage
\subsection{Ethical Considerations}
We obtained IRB approval from the University of Maryland, Baltimore County Institutional Review Board, and received approval to be on site from administrators at the partnering clinical institutions. We obtained consent from the clinician participants before the initial interview. Before observations began, we gained verbal consent from the patient and caregiver (if present) to observe and record the rehabilitation session. We did not document any identifiable patient data. The patients were not the focus of the field study; observations were primarily to get insight on the work practices of the rehabilitation specialists. Participants were not compensated for their participation.
\subsection{Data Analysis}
We performed a qualitative data analysis focusing on the \emph{information} used in rehabilitation and the needs for a telerehabilitation system. Two researchers analyzed the data: a PhD student (information scientist) with over 5 years of research and development experience in HCI, and an undergraduate student in mechanical engineering. One author also has experience as a caregiver. We compiled our field notes, transcribed all the observations and interviews, then performed open coding of all data in three iterations, refining the codes as we progressed. The two coders (the first and third authors) compared concepts informally as they coded, discussing differences in interpretation and either reaching a consensus or documenting their differences.
After open coding, we used an inductive analysis to create categories based on behaviors exhibited by the participants. Our observation codes included: Personal Check In/Conversation (when the specialist prompts the patient), RS Describe Task(s)/Goals, Start Activity, End Activity, Rehabilitation Specialist Demonstrates Task with Body, Rehabilitation Specialist Asks Patient to Move a Certain Body Part, and Rehabilitation Specialists Taking Notes. After an initial round of coding, we noticed that Personal Check In/Conversation was the most-used code, which turned our analysis efforts towards the details of the specialists' conversations with patients. Our final set of interview and observation codes included: Describes Caregivers Involvement, Information Documented During Session, Interdisciplinary Interaction, Medication Management Conversation, and Comorbidity Discussion. Lastly, the first author grouped codes and found relations among them to create the four high-level themes presented. One overarching theme described all the data: experiential information about a stroke survivor plays an important role in rehabilitation.
\section{Experiential Information Is Essential within Stroke Rehabilitation}
Our research study began under the assumption that the ideal telerehabilitation system optimally uses sensors to accurately measure movement and present it to a rehabilitation specialist. However, we quickly challenged this assumption, as our findings revealed that although movement data is an important component of stroke rehabilitation, it is not the only information specialists need.
\emph{Experiential information} in stroke rehabilitation is information describing a stroke survivor's lived experience, collected through the \emph{patient-centered} approach and used to inform the rehabilitation care plan. Note, in our field study, multiple participants (e.g., {} PT1, PHY2, OT6, OT7) used a synonym of patient-centered, \emph{client-centered}.
Ultimately, experiential information is key to rehabilitation specialists' ability to build context around the patient's health status and movement data. We now detail how experiential information is built and describe its components.
\subsection{Experiential Information Is Gathered Subjectively}
Rehabilitation specialists use a patient-centered approach to subjectively gather experiential information to build context. The approach includes strategies like conducting interviews during the rehabilitation session. It is important to understand that the different specialties gather experiential information and build context similarly, as they all use a patient-centered approach, but they prescribe and perform different interventions and evaluations.
We observed throughout our field study that the main job of specialists is not limited to objectively assessing stroke survivors, as OT2 commented during an interview: \textit{``You have to remember that our job is more than exercising their [patients'] arm.''} During rehabilitation sessions, we observed specialists used interviews to build context on their patients, OT3 explained: \textit{``Therapists first start with a subjective interview. They then ask them [stroke survivor] where their life situation is currently, who is looking after them, and what they can do for themselves.''} The experiential information gathered from these subjective interviews directly influences the rehabilitation plan that specialists prescribe to their patients, and also helps specialists navigate the nuance and complexities of the stroke rehabilitation process. PHY1 said: \textit{``A number of underlying issues affect stroke rehabilitation. Physical, cognitive, social. A challenge is how to distill. There isn't always a set path, you have to play it by ear, you have to get the medical history and the physical[sic]. And the approach cannot always be solved algorithmically [sic].''}
Since the process is not built \textit{algorithmically}, specialists put the majority of their effort during rehabilitation sessions into gathering experiential information rather than movement data. \autoref{fig:observation-time} (visualization made with Noldus ObserverXT\footnote{\url{https://www.noldus.com/observer-xt}}) shows a 14-minute snapshot of an example movement-based rehabilitation task taken from a 45-minute session. Here, the patient was required to complete a \emph{Nine Hole Peg Test} while the therapist observed. In the 14-minute span, the specialist acquired experiential information for $\sim$7 minutes (green) and took notes for $\sim$6.5 minutes (orange), whereas the actual activity lasted only $\sim$4 minutes (red). The remaining 30 minutes consisted of additional rehabilitation tasks, a medication management conversation, checking the patient's blood pressure, and creating new goals.
\begin{figure}[htbp]
\centering
\includegraphics[width=1\linewidth]{Figures/observation-time.png}
\caption{A 14-minute visualization of how time was spent during a rehabilitation session. The different activities are shown on the left, and bars represent their duration.}
\Description{This figure is a timeline. On the y-axis is a list of events that occurred during a rehabilitation evaluation session. On the x-axis is the elapsed time, ranging from 0 minutes to 14 minutes. Multiple colored bars represent when each category of event occurred. The events on the y-axis are: Personal Check In/Conversation (i.e., experiential information) (when the specialist prompts the patient), RS Describe Task(s)/Goals, Start Activity, End Activity, Rehabilitation Specialist Demonstrates Task with Body, Rehabilitation Specialist Asks Patient to Move a Certain Body Part, and Rehabilitation Specialists Taking Notes.}
\label{fig:observation-time}
\end{figure}
Rehabilitation specialists documented their subjectively gathered experiential information by taking meticulous and systematic notes. One specialist shared with us a template used for session notes, based on the SOAP method. Here, specialists collect and document \emph{Subjective} data (e.g., {} experiential information and context) and \emph{Objective} data during an appointment, to ultimately form an \textit{Assessment} and create a \textit{Plan}. Due to patient confidentiality, our research team did not have access to patient medical notes, so we provide an example patient note that was created by Susan \etal~\cite{Susan2002} using the SOAP method (\autoref{app:SOAP}).
Objective information can also become experiential information through subjective contextualization. OT5 explained she puts her health assessment (e.g., {} comorbidities) under the \textit{Subjective} section, including her professional medical opinion on how this affects physical abilities (e.g., {} fatigue related to medication), as this impacts the care plan. In an interview, PHY3 shared with us the importance of documenting subjective insight in his medical notes. PHY3 gave the example of a musculoskeletal evaluation: what the patient reports to him goes under the \textit{Subjective} section, and the results of the evaluation go under \textit{Objective}.
\subsection{Stroke Survivor's Motivation, Stress, and Frustration Levels Are Experiential Information}
We found that factors like a stroke survivor's \textit{motivation, stress, and frustration levels} are a top priority for specialists when gathering experiential information. These levels are important because they all impact a stroke survivor's ability to make progress on, and comply with, a care plan. We observed PHY4 explain this impact to her patient: \emph{``Things that make you feel like you're going downhill, or not recovering as fast, is depression, stress and fatigue. All of those things can make your symptoms feel a lot worse.''} In fact, when asked to rank the various types of information, 14 of the 16 participants ranked motivation, stress and frustration as more important than activity repetition (\autoref{fig:ranking-chart}). All specialists expressed how important it is to gauge the patient's motivation level, as it informs the specialists' approach to prescribing a motivating and satisfying rehabilitation plan.
\begin{figure}[htbp]
\centering
\includegraphics[width=.67\linewidth]{Figures/HeatMap-01.png}
\caption{Heat map showing the ranking of importance of experiential information related to activity repetitions, stress, frustration and motivation. Visualization made using the Bertifier tool~\cite{6875988}.}
\Description{The heat map shows a trend where the majority of participants ranked as most important motivation, then stress, then frustration, and finally activity repetition.}
\label{fig:ranking-chart}
\end{figure}
PT1 articulated how a rehabilitation plan is not bound by objective assessments, but has to be tailored to the motivations of the patient: \textit{``If they just do it [exercise] just once a day. Well, I am happy. What I try do is, because life is so on the go, make the rehab just part of their day. Like walking around the block.''} Simply, PT1 was sharing that the rehabilitation process is about creating a plan around the \emph{situated} conditions. PHY2 had similar sentiments as PT1: \textit{``Just getting a patient to exercise is good enough for me. I do not care about the reps.''} PHY2 shared examples of situating a care plan, such as assigning activities like walking around the neighborhood for patients that have a dog, or walking to a place she knows the patient will enjoy. The importance of the rankings can also vary, because different factors dynamically change for each patient. PT2 said: \textit{``It [the order] depends on their home life, and things like that. Motivation is first because you don't get that, you don't get anything.''} She added that her ranking would change for one of her patients who has a stressful home life.
Stress levels are also included as experiential information because they can have a direct impact on the stroke survivor's physical health, such as blood pressure. OT7 expanded on the importance of also understanding the source of the stress such as \textit{family stress} and \textit{personal stress}, thus ensuring that prescribed rehabilitation plans do not exacerbate the patient's stress level, blood pressure and/or heart rate levels. PT4 shared, \emph{``I tend to look at the big picture, and I might give them a home program [rehabilitation plan] that doesn't stress their lives too much, just so they do it. I try to prioritize quality of life and compliance.''} She went further to expand on her role, and what she considers the big picture. PT4 said: \emph{``My role as a PT is to get them as independent as possible, and try to return them to fun things. For example, I tried to get him [the patient] back to golfing because it is a stress reliever for him and he enjoys it. I am particularly passionate to get them back to their normal role as much as possible''.} Including stress levels as experiential information allows specialists to gauge quality of life, and prescribe an appropriate care plan.
Finally, frustration levels play an important role \emph{during} prescribed rehabilitation exercises and tasks. While observing OT6 conduct a rehabilitative exergame with her patient, she informed us about her process of managing frustration during activities: \textit{``We have to have an idea of the game, and then we pair it with their [patients] level of function. We give them a challenge, but not so much that they can't be successful, otherwise it is just frustrating.''} What we see here is the specialist paying more attention to rising frustration levels than focusing on the movements themselves. Her goal is not to achieve a certain number of movements, but instead she is looking for the right balance of effort against frustration. Thus, frustration levels are experiential information that better inform specialists, so they prescribe attainable activities. Simply, having context on what motivates, stresses, and frustrates the stroke survivor is crucial for both devising an exercise plan, as well as assessing the survivor's success at following the plan. It is more important for the specialists to learn if there is a moderate level of motivation, low number of stressors, and low frustration than to gather any further information on what movement the patient actually performed.
\subsection{Stroke Survivor's Acclimation \& Goals Are Experiential Information}
The survivor's \textit{acclimation} after a stroke and their rehabilitation \textit{goals} are considered experiential information. A survivor's acclimation is simply how they are coping in their home after surviving a stroke; for example, what kind of accessibility or health-related issues they are having. Goals are a set of health milestones patients want to achieve through rehabilitation, such as returning to work or driving themselves. These two aspects are tightly connected because a survivor's acclimation can inform the progress made towards accomplishing a goal.
To evaluate the patient's acclimation at home, specialists ask questions to determine if their patients are successfully performing Activities of Daily Living (ADLs). ADLs are basic daily self-care tasks (e.g., {} managing medication, meal planning, or shopping). Typically OTs are the specialists that assess ADLs~\cite{KATZ1983} as a data point, but PT4 mentioned that she also references ADLs with her patients. PT4 said, \textit{``I do blend some with OT on the ADLs. Like I work on toileting with patients. The OT might work on using assistive device, but I do more of the gross motor stuff. Like can you transfer yourself from the wheelchair to the toilet.''} PT4 essentially uses the patient's experience with ADLs as a reference to inform the type of intervention she prescribes towards a goal.
It is important to evaluate acclimation, as this helps specialists create or modify the care plan while staying aligned with the patient's goals (e.g., {} dressing independently). However, it is more complex than simply asking if, when, or how exercises were completed; experiential information is required.
One strategy of evaluating and validating acclimation is using sensor data to probe further about the patient's experience at home. PHY2, for example, asks some of her patients to wear a \emph{Fitbit}\footnote{\url{https://www.fitbit.com/}}, so she can review the data during the rehabilitation sessions. PHY4 recalled a time when a patient's step count steadily declined, causing her to inquire if the change in activity was due to a comorbidity or decreased motivation, a crucial difference when evolving the care plan. Validating home exercises is important for specialists to determine if rehabilitation plan goals are being met. Validation can also be performed by patients themselves through self-reporting, as OT4 explained: \emph{``Validating home exercises is a combination of objective and subjective measures [..] I actually tell my patients to complete a weekly report for me''.}
Anticipating acclimation is necessary to adapt goals before it is too late, so specialists use contextual information about a patient's schedule.
For example, during an observation, the stroke survivor and caregiver informed PT2 that the survivor was having their family over later that evening for their birthday.
In response, PT2 reduced the exercise for her patient because she, \textit{``wanted him to survive tonight.''}
Taking into account what the patient's acclimation will be later on (tired), she altered the rehabilitation plan for that particular session, recognizing that the goal for the evening was to enjoy time with their family.
\subsection{Stroke Survivor's Health is Experiential Information}
\emph{Physical} and \emph{mental} health are key experiential information when prescribing a rehabilitation plan, as both impact the ability to complete the plan. Additionally, mental health can impact compliance.
\subsubsection{Physical Health}
Multiple comorbidities are common in stroke survivors. Some examples we heard during discussions include: hypertension, diabetes, malnutrition, sleep apnea, depression, insomnia, pseudo dementia, atrial fibrillation and vertigo. Comorbidities require close monitoring, management, and coordinated care amongst the stroke survivor's care network, because they dictate how successfully and safely exercises are performed. Typically, these are managed through medication and situational changes, such as changing diet or minimizing stressful activities.
Some of the participants, such as PHY1, PHY4 and OT1, took a patient's vital signs at the end of an activity to check for fatigue (\autoref{fig:rp-bp}), determining the situational changes for the exercise program. For example, in an observation with OT1, she noticed signs of discomfort and fatigue, so she took vitals and recorded them in her medical notes, ending the exercise early. In the followup interview, she told us that this patient's hypertension limits their ability for certain physical and/or cognitive interventions.
\begin{figure}[htbp]
\centering
\includegraphics[width=1\linewidth]{Figures/rp-bp.png}
\caption{Specialist taking the patient's blood pressure after an activity.}
\Description{In this figure, a patient is getting their blood pressure read by the rehabilitation specialist. The specialist is standing, and the patient is sitting. The patient appears to be an African-American male, and the specialist appears to be a white woman.}
\label{fig:rp-bp}
\end{figure}
\subsubsection{Mental Health}
Specialists have a particular interest in understanding cognitive abilities and depression when inquiring about mental health.
To assess cognitive ability, we observed how PHY4 and OT1 refer to the Montreal Cognitive Assessment (MoCA). PHY1 and PT3 later explained during interviews that such an assessment is important in determining what the patient can do, thus impacting the care plan, but also in understanding the patient's own ability to comprehend the exercises. In cases where cognitive impairments hinder compliance, the specialists will often coordinate with a caregiver, so they can assist the stroke survivor with completing prescribed interventions while at home.
\newpage
Depression in particular can impact the patient's motivation to comply with the prescribed rehabilitation plan while at home. In our observations, we observed physiatrists (PHY1 and PHY4) speak extensively with stroke survivors and their caregivers about the survivors' ongoing battle with depression, and then prescribed interventions. In the observation with PHY4, she modified the patient's rehabilitation plan by first reaching out directly to the patient's neuropsychologist to discuss and coordinate a response to the patient's bout with depression.
\subsection{Caregiver's Assessment is Experiential Information}
Many specialists considered the rehabilitation process as a team effort, which includes the caregiver.
Caregivers regularly assess the patient, which becomes important experiential information. They are deeply involved with multiple aspects of a stroke survivor's life; they can be a family member, friend, or hired professional. Their role is so important in the survivor's recovery process that PT3 went as far as saying, \textit{``I think caregiver support is a major predictor into how much someone can recover. Fortunately, they can offer support, encourage [them], [overcome] cognitive deficits, [monitor] their schedule, [help with] exercise, and [cook] meals. The biggest area is compliance.''}
Caregivers play an important role in the care plan, both facilitating and validating the plan. For example, in a session with PHY4, the patient self-reported that they felt they were speaking much better. However, the caregiver felt differently, and provided a more in-depth assessment of the progress: the patient's speech had indeed improved, but had slowly begun to decline in the weeks leading up to the appointment. PT4 expanded on the value of this assessment: \emph{``It is pretty nice to get the caregiver's perspective, because they're a little bit more honest than the stroke survivor. I think they are really important, because they encourage the patient to get better, and they are the key for their patient [to] live their life again. I like them to be in the therapy session.''}
Having the caregiver present during a rehabilitation session is highly valued because they provide insight on the patient's experience at home and offer assistance. We observed caregivers providing assistance to the specialists (e.g., {} supporting the stroke survivor while walking), and providing insight when the specialists are completing an assessment. OT4 elaborated on how the experiential information from the caregiver complements the patient information: \emph{``Sometimes people [rehabilitation specialists] will give out an assessment to the patients and caregivers. That way, we will get the patient's insight on how they performed different ADLs, and we will get the family members' insight on how they felt the patient performed the ADLs.''} In this situation, the family member is reporting not on the exercise \emph{execution}, but on how the patient is \emph{faring} in their daily lives---what they are able to do and how they are doing it. This constitutes key experiential information.
\newpage
\section{Discussion}
We found that experiential information is essential to rehabilitation specialists in co-located stroke rehabilitation, and that they use a large variety of information beyond exercise movement data.
However, the focus of home-based therapy systems is on movement data, as they aim at motivating (e.g., {} Us'em~\cite{Beursgens:2011:UMS:1979742.1979761}) and monitoring patients (e.g., {} ArmSleeve~\cite{Ploderer2016}).
This means that they (1) present movement data to specialists front and center, (2) have not enabled capturing experiential information alongside the movement data, and (3) do not provide means for annotating movement data with context. This has led to a paradigm that puts the emphasis on movement data (\autoref{fig:paradigm-reconceptualization} - left), and, as a consequence, this legacy has led telerehabilitation systems to focus on the computational work of recognizing movement and quantifying it for remote specialists to visualize (e.g., {} TeleREHA~\cite{Perry2011}, mRes~\cite{Weiss:2014:LCT:2686893.2686989}, Rehabilitative TeleHealthCare~\cite{Postolache2011}, and systems covered by Santayayon \etal~\cite{Santayayon2012}).
Our study shows that this paradigm is incongruent with how specialists actually work in face-to-face stroke rehabilitation. This does not invalidate the motivation in the current paradigm to track movement and count repetitions accurately for telerehabilitation. Instead, we posit that movement data needs to exist within a more sophisticated understanding of patients' experiential information, when designing telerehabilitation systems.
\subsection{Paradigm Reconceptualization}
Previous work hints towards the need to capture information beyond movement.
For example, How \etal~\cite{How2017c} stated that successful rehabilitation systems should adapt to the surrounding life context, and Ploderer \etal~\cite{Ploderer2016} recognized that a lack of contextual information hinders the interpretation of movement data. However, the two studies do not go into detail defining or modeling what ``context'' information is beyond movement exercise data. Through our work, we studied such information needs, which has enabled us to move forward the paradigm through which systems are designed.
We propose a paradigm reconceptualization for \emph{situated} telerehabilitation systems (\autoref{fig:paradigm-reconceptualization} - right), that capture the experiential information used in rehabilitation: a stroke survivor's (1) motivation, stress, and frustration levels, (2) acclimation and goals, (3) their health status, and (4) the caregiver's assessment; where exercise movement data complements the experiential information through a semantic layer that gives meaning to otherwise incomplete data. In this way, the sharing of a stroke survivor's lived experience within their care network is now taken into consideration.
This brings telerehabilitation system design closer to how co-located rehabilitation takes place, by integrating the patient-centered approach~\cite{gerteis1993} and giving meaning to movement data by incorporating experiential information on the stroke survivor's lived experience.
\begin{figure*}[htbp]
\centering
\includegraphics[width=1\textwidth]{Figures/final-figure-four.jpeg}
\caption{Current (left) and proposed (right) paradigms for telerehabilitation systems.}
\Description{This figure shows two paradigms designed by the authors, one on the left (labeled “Current paradigm”) and one on the right (labeled ”Proposed Paradigm). All the way to the right, there is a list of six dimensions of patient-centeredness [[reference number 12], including 1. Respect for patients' values, 2. Coordination and integrative care, 3. Information, communication, and education, 4. Physical comfort, 5. Emotional support to combat fear and anxiety and 6. Involvement of patients' family and friends. The left “Current telerehabilitation paradigm” has three main components. The first component, on the left side, is a pentagon. This pentagon represents a telerehabilitation system at the home of the stroke survivor. At the center of the Pentagon, is a circle that is labeled stroke survivor, and there is a solid line connecting the stroke survivor’s circle to a circle at the bottom right point of a pentagon. This additional circle at the bottom right is titled "exercise movement data". The other four points of the Pentagon are empty. The second component, in the middle, is an icon of the earth with a solid one way arrow cutting through the middle of the earth. The arrow is pointing from the first component to the third component. The second component represents the one way flow of "data" over the Internet. The third component, on the right side, is a circle that says "rehabilitation specialists". This third and final component represents rehabilitation specialists receiving data at their respective medical center or working location. In summary, the current paradigm of telerehabilitation represent data that is collected at home is exercise movement data, and that data is then sent one way over the internet to the rehabilitation specialists. It is important to note, that the data does not flow backwards from the third component to the first component. This is important because the authors’ proposed paradigm has a two way communication. 
The authors’ proposed telerehabilitation paradigm reconceptualization. There are three main components to this figure. The first component, on the left, is a pentagon. This pentagon represents a telerehabilitation system at the home of the stroke survivor. At the center of the Pentagon is a circle that is labeled stroke survivor, and there is a solid line connecting the stroke survivor’s circle to five circles that are revolving around the stroke survivor center circle. The five circles are at the five points of the pentagon. The five circles represent the five aspects of experiential information, and they are titled, "Exercise movement data", "Motivation, Stress, & Frustration", "Patient’s Acclimation & Goals", "Physical, Mental & Emotional Health", "care givers' assessment". This propose paradigm is different from the current paradigm because there are 5 data points that are revolving around the stroke survivor, compared to the current paradigm that only has one (i.e. Movement data). The second component, in the middle of the figure, is an icon of the earth with a dotted two way arrow cutting through the middle of the earth. The arrow is pointing between from first component to the third component. The second component represents the two way flow of "information" over the Internet. The third component, on the right side of the figure, is a circle that says "rehabilitation specialists". This third and final component represents rehabilitation specialists receiving information at their respective medical center or working location, but also sending back information . In summary, this figure represent the author’s proposed paradigm of telerehabilitation. Simply, experiential information is collected at home, that information is then sent over the internet to the rehabilitation specialists, and the specialists have the opportunity to send back relevant information to the stroke survivor (i.e. Two way dialogue) . 
It is important to note, that the information does flow backwards from the third component to the first component. The last important aspect of this figure is on the bottom the right of this figure, there is a list of six dimensions of patient-centeredness [reference number 12]. Again, The six dimensions are 1. Respect for patients' values, 2. Coordination and integrative care, 3. Information, communication, and education, 4. Physical comfort, 5. Emotional support to combat fear and anxiety and 6. Involvement of patients' family and friends. The proposed paradigm further illustrates how the authors’ paradigm integrates the six dimensions of patient centeredness into the five aspects of experiential information. The connections are: "Motivation, Stress, & Frustration" integrate 1. Respect for patients' values, 4. Physical comfort, 5. Emotional support to combat fear and anxiety , "Patient’s Acclimation & Goals" integrate 1. Respect for patients' values,3. Information, communication, and education,5. Emotional support to combat fear and anxiety , "Physical, Mental & Emotional Health" integrate 4. Physical comfort, 5. Emotional support to combat fear and anxiety, "care givers’ assessment" integrates 2. Coordination and integrative care, 3. Information, communication, and education, and 6. Involvement of patients' family and friends. In sum, this figure illustrates that the author’s paradigm incorporates experiential information that integrates patient centeredness, and encourages information to be shared back and forth between the stroke survivor and their rehabilitation specialists.}
\label{fig:paradigm-reconceptualization}
\end{figure*}
\newpage
\subsection{Implications for Design}
Below, we provide design implications for capturing context, leveraging our findings on the types of experiential information.
\subsubsection{Capturing Experiential Information to be Congruent with Rehabilitation}
For telerehabilitation systems to be congruent with the existing rehabilitation practices, they need to provide ways to capture and share various forms of experiential information.
\textsc{Motivation, Stress, and Frustration}. The insight we received from specialists during the interviews (e.g., {} \autoref{fig:ranking-chart}) reveals the importance of motivation, stress, and frustration, and shows that keeping track of these factors on a regular basis can help trace the cause of changes in exercise movement and daily activity. Capturing a patient's motivation, stress and frustration levels is essential for specialists to build life context for their patients. Capturing the three could look like a system autonomously prompting questions for the patient or caregiver to answer when a deviation in any of the three factors is detected. Capturing this information in the moment would allow contextualization of the patient's exercise/movement data and overall experience. The prompts would ideally elicit responses that inform the specialists of the reason behind the deviation. Additionally, specialists could predetermine alternative exercises and activities they know are engaging for a particular patient, and the system could automatically propose the appropriate activity to respond to the contextual reasons underlying a change in movement.
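One way the deviation-triggered prompting described above could work is sketched below. This is a minimal illustration, not a system from our study: the 1-to-10 self-report scale, the threshold, and the prompt wording are all assumptions for the sake of the example.

```python
# Hypothetical sketch of deviation-triggered prompting.
# Scale, threshold, and wording are illustrative assumptions.

def detect_deviation(history, latest, threshold=2):
    """Flag a factor (e.g., motivation on a 1-10 self-report scale)
    whose latest rating deviates from the patient's recent average."""
    if not history:
        return False
    baseline = sum(history) / len(history)
    return abs(latest - baseline) >= threshold

def prompt_for_context(factor):
    """Question the system would send to the patient or caregiver."""
    return (f"We noticed a change in your {factor} level. "
            f"Is anything going on at home that might explain it?")

# Example: steady motivation ratings from past sessions, then a sharp drop.
past_motivation = [8, 7, 8, 7]
if detect_deviation(past_motivation, latest=4):
    print(prompt_for_context("motivation"))
```

The response to such a prompt would then travel back to the specialist as experiential information, rather than the raw rating alone.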
\textsc{Acclimation \& Goals}. Telerehabilitation systems would benefit from including features that capture acclimation and goals to help specialists evaluate patients' progress. How \etal~\cite{How2017c} briefly discussed how there may be better ways in which telerehabilitation systems can seek to complement rehabilitation goals after traumatic brain injury (e.g., {} stroke), but did not offer concrete design suggestions. To support acclimation and goal tracking, systems can implement a feature that allows patients to create a profile or virtual diary that acts as a virtual acclimation report. This report would capture how a patient is getting acclimated to a new environment and everyday tasks, and would be accessible to their care network.
For example, data from a smart watch or \emph{Fitbit} could be used to populate a home acclimation report instead of being used simply as a measure of exercise. The \emph{Fitbit} movement data can be tagged either as performing prescribed exercises or simply ADLs in the report. In this way, an event such as a decrease in tagged ADL movement data would signal that the survivor is probably not acclimating well. Additionally, the specialist would use this report to discuss the patient's goals immediately or during a session.
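The tagged acclimation report described above could be sketched as follows. The data shapes, the "adl" versus "exercise" tags, and the decline rule are illustrative assumptions, not derived from any deployed wearable platform.

```python
# Illustrative sketch of a tagged acclimation report.
# Record format and decline rule are assumptions.

def weekly_adl_steps(tagged_records):
    """Sum step counts of records tagged 'adl' (vs. 'exercise'), per week."""
    totals = {}
    for week, tag, steps in tagged_records:
        if tag == "adl":
            totals[week] = totals.get(week, 0) + steps
    return totals

def acclimation_flag(totals, drop_ratio=0.5):
    """Flag if the most recent week's ADL movement fell below
    drop_ratio of the prior week's, suggesting poor acclimation."""
    weeks = sorted(totals)
    if len(weeks) < 2:
        return False
    prev, last = totals[weeks[-2]], totals[weeks[-1]]
    return last < prev * drop_ratio

records = [(1, "adl", 3000), (1, "exercise", 1200),
           (2, "adl", 3100), (3, "adl", 1200)]
totals = weekly_adl_steps(records)
print(acclimation_flag(totals))  # week 3 ADL movement dropped sharply
```

A flag raised this way would not replace the specialist's judgment; it would prompt the kind of follow-up conversation our participants described.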
\textsc{Physical Health.} There have been examples of systems that collect exercise movement data for physical health: TeleREHA~\cite{Perry2011} monitors arm reach, and mRes~\cite{Weiss:2014:LCT:2686893.2686989} monitors wrist function. However, our study showed that physical health is not limited to exercise movement data. A telerehabilitation system that acquires both physiological and exercise movement data brings additional benefits when working with survivors with comorbidities (e.g., diabetes and hypertension).
By periodically checking heart rate (e.g., via smart watch) and/or blood pressure (e.g., via a built-in blood pressure monitor), a system can alert the appropriate specialist when these levels reach concerning thresholds in combination with movement data during rehabilitation exercises. This would enable specialists to intervene remotely and modify the rehabilitation plan based on accurate information. In a synchronous telerehabilitation system, the specialist might suggest a rest; in an asynchronous system, the data could be used to start a discussion at a later time.
Rehabilitative TeleHealthCare~\cite{Postolache2011} was the one telerehabilitation system we found that aligned with our finding to combine physiological and movement data. TeleHealthCare uses sensors to capture physiological data (heart rate, oxygen saturation (SpO2), and respiration rate) and movement data for health monitoring. However, this system does not take into account a patient's comorbidities (e.g., diabetes and atrial fibrillation), which limits how an expert can use these data in combination.
\textsc{Mental Health.} As our study showed, mental health is another aspect that impacts the creation of a rehabilitation plan, and compliance with it given a patient's cognitive function. A system that leverages existing standardized assessments (e.g., MOCA) to collect data will better match the current rehabilitation practices we observed. To keep track of how responses to such assessments evolve, a telerehabilitation system can require patients to complete cognitive assessments whenever prescribed by the specialist over time. It is important that systems collect, keep track of, and transmit the mental health status of the patient to specialists, as this information is an experiential data point used to contextualize deviations in exercise movement data, along with motivation, stress, and frustration levels.
Moreover, it is particularly important to consider the collaboration within the care network when it comes to health data and adherence to existing workflows.
As Ng \etal~\cite{Ng2019} recently stressed, health care providers are concerned about how to adjust their practices to better guide the use of sensor data within mental telehealth.
By keeping the whole care network updated with mental health information, a telehealth system can enable the care network to collaboratively interpret the collected exercise movement and mental health data, avoiding misinterpretation while also allowing for shared decision making when it comes to adjusting how sensor data are used.
\textsc{Caregiver Assessment}. As we learned, caregivers provide crucial support for a successful execution of a rehabilitation plan at home, and thus are the ones that hold key information about the day-to-day issues around the plan. However, no system we know of considers incorporating information from a caregiver. Telerehabilitation systems should incorporate functionalities that capture caregivers' insights around the execution of exercises, so that specialists can compare and contrast this view with both the patient's view of their own execution and the movement data. More importantly, caregivers can act as surrogates for filling out reports or reporting issues when the stroke survivor is not able to do so, such as when the survivor's health is negatively impacted by a comorbidity. In this way, the rehabilitation specialist can better evaluate the health progress of the stroke survivor.
\subsubsection{Annotating Experiential Information to Create Shared Meaning}
One of our main insights is that specialists create meaning subjectively. We observed strategies for creating meaning on top of movement data, such as reviewing \emph{Fitbit} data together with a specialist. Ploderer \etal~\cite{Ploderer2016} concluded in their study that a major limitation of \emph{ArmSleeve} is the lack of contextual information presented across the different dashboard pages, and Mentis \etal~\cite{Mentis2017} showed that clinicians require patient contextual information to make sense of \emph{Fitbit} data---for instance, whether a high walking day was due to a vacation or a low walking day was due to a poor medication reaction. Telerehabilitation systems should have a dedicated semantic information layer for all the information stored. It is likely that different specialists have a unique perspective on the data and want to annotate different data (e.g., the data related to their own profession). Having these annotations can lead to less confusion about what other specialists are doing and how they deal with issues, and, most importantly, it will facilitate coordination.
\subsubsection{Supporting Coordination to Consolidate the Care Plan}
Related to our last point, we believe that telerehabilitation systems have the potential to be the central hub of information around rehabilitation. We observed the level of care coordination involved between those in the stroke survivor's care network. Therefore, it is important that telerehabilitation systems provide functionalities that best facilitate the communication and coordination of the different specialists by allowing them to access each other's data, add their own interpretation, and thus coordinate their actions. We assert this can play a major role in the care plan, as this is the one artifact where all the actors have an influence, making it dynamic.
Accommodating dynamic, always up-to-date care plans with transparent decision making imprinted on the plan itself (through annotations) is a way to keep care consolidated.
\subsection{Limitations \& Future Work}
Our work is limited to function and mobility rehabilitation specialists within the Physical Medicine \& Rehabilitation specialty. It does not include the other specialists that are involved in the rehabilitation processes of stroke survivors, or other illnesses that require rehabilitation. Now that we have working relationships with our partners, we will conduct observations and interviews with more rehabilitation specialists. We will also begin eliciting feedback from additional stakeholders (stroke survivors and caregivers) on our paradigm refocus, ultimately with an eye to developing a prototype of a situated telerehabilitation system.
\section{Conclusion}
Our research reveals that experiential information is an essential need in stroke rehabilitation. This paper provides a more holistic view of the practices of rehabilitation specialists within the Physical Medicine \& Rehabilitation medical specialty. Additionally, this paper proposes a paradigm reconceptualization in stroke telerehabilitation system development to address the complex and dynamic nature surrounding stroke rehabilitation. We posit that our proposed refocus can lead to the development of telerehabilitation systems that will be on par with the type of interactions and evaluations that exist in a face-to-face stroke rehabilitation session. We do not suggest that sensors have no role in telerehabilitation, but we say to our research community that the exercises are a means to an end, and it is important to understand what that end is.
\begin{acks}
We would like to thank the patients and medical personnel that participated in our study. We would like to thank Dr. Wittenberg for their insight and assistance. This work has been supported in part by NIH/NIGMS R25 GM055036 IMSD Meyerhoff Graduate Fellows Program, NSF CAREER award IIS-1552837 and SaTC award CNS-1714514, NIH/NIGMS MARC U*STAR T34 08663 National Research Service Award, and by Dean Keith J Bowman and the UMBC Constellation Professorship.
\end{acks}
\newpage
\bibliographystyle{ACM-Reference-Format}
\hypertarget{introduction}{%
\section{Introduction}\label{introduction}}
\emph{Proof assistants} are tools that provide a syntax to rigorously
specify mathematical statements and their proofs, in order to
mechanically verify them. A strong motivation to use proof assistants is
to increase the trust in the correctness of mathematical results, such
as the Kepler conjecture \citep{hales2017}, which has been verified
using the proof assistants HOL Light \citep{DBLP:conf/tphol/Harrison09a}
and Isabelle \citep{DBLP:conf/tphol/WenzelPN08}, and the Four-Colour
Theorem \citep{gonthier2008}, which has been verified using Coq
\citep{DBLP:conf/tphol/Bertot08}. However, why should we believe that a
proof is indeed correct when a proof assistant says so? We might trust
such a statement if we were certain that the proof assistant was
correct, i.e. that the proof assistant only accepts valid proofs. To
verify the correctness of the proof assistant, we can either inspect it
by hand or verify it with another proof assistant in whose correctness
we trust. However, many proof assistants are too complex and change too
often to make such an endeavour worthwhile. Still, even if we ignore the
correctness of a proof assistant, we may trust its statements, provided
that the proof assistant justifies all statements in such a way that we
can comprehend the justifications and write a program to verify them. A
proof assistant ``satisfying the possibility of independent checking by
a small program is said to satisfy the \emph{de Bruijn} criterion''
\citep{barendregt2005}. We call such small programs \emph{proof
checkers}.
The logical framework Dedukti has been suggested as a universal proof
checker for many different proof assistants \citep{expressing}. Its
underlying calculus, the lambda-Pi calculus modulo rewriting
\citep{DBLP:conf/tlca/CousineauD07}, is sufficiently powerful to
efficiently express a variety of logics, such as those underlying the
proof assistants HOL and Matita \citep{DBLP:phd/hal/Assaf15}, PVS
\citep{DBLP:phd/hal/Gilbert18}, and the B method
\citep{DBLP:phd/hal/Halmagrand16}.
The Dedukti theories generated by proof assistants and automated theorem
provers can be in the order of gigabytes and take considerable amounts
of time to verify. The current architecture of Dedukti, which is written
in OCaml, allows only for a limited form of concurrent proof checking,
restricting the efficiency of proof checking on multi-core processors.
Like Dedukti, most other existing small proof checkers do not (fully)
exploit multiple cores.
Rust is a functional systems programming language that aims to combine
safety, performance, and concurrency. These properties make Rust an
interesting candidate to implement proof checkers in. This article
evaluates the effectiveness of Rust as an implementation language for proof
checkers by reimplementing a fragment of Dedukti in Rust, and uses the
opportunity to explore meaningful uses of concurrency.
The major difficulty when porting Dedukti to Rust is sharing of values.
Functional programming languages such as OCaml and Haskell use a garbage
collector, which allows them to implicitly share values. In contrast,
systems programming languages such as Rust or C do not use a garbage
collector, and thus do not share values implicitly. In return, such
languages allow for fine-grained sharing; for example, data can be
marked to never be shared, shared within a single thread, or shared
between multiple threads. Using just the right amount of sharing enables
higher performance, in particular when introducing concurrency. However,
due to implicit sharing, it is difficult to establish where sharing is
actually used in functional programs, including proof checkers such as
Dedukti.
This paper deals with the following research questions: Where and which
kinds of sharing are necessary in a proof checker? Which constraints
does concurrency impose on sharing? How to implement a proof checker
that uses the appropriate amount of sharing for both concurrent and
non-concurrent use, while keeping the virtues of being small, memory-
and thread-safe, and fast? How much performance can be gained by using
such a proof checker?
I make the following contributions in this paper: I present a generic
term data type that can be instantiated to vary the sharing behaviour
for constants and terms, yielding a family of term types for efficient
concurrent and non-concurrent parsing and verification. I refine this
term type by reducing the number of pointers, improving performance
especially of concurrent verification (\autoref{terms}). I study
reduction of terms and show that concurrent reduction implies
significant overhead, making it slower than non-concurrent reduction
(\autoref{reduction}). I study verification of theories and show that it
can be parallelised neatly by breaking it into two parts, where the more
time-intensive part can be delayed and executed in parallel. This is the
most successful use of concurrency explored in this work
(\autoref{verification}). I show that parsing of theories can be
accelerated by an efficient representation of constants, and that
concurrent parsing incurs such a large overhead that it is slower than
non-concurrent parsing (\autoref{parsing}). I implement all the
presented techniques in a new proof checker called \emph{Kontroli},
supporting a fragment of Dedukti that is sufficient to verify HOL-based
theories. Kontroli is written in Rust, which combines the safety of
functional programming languages with the fine-grained control over
sharing of system programming languages. This is crucial in assuring
that Kontroli is small, memory- and thread-safe, and fast
(\autoref{implementation}). I evaluate Kontroli and Dedukti on five
different datasets stemming from interactive and automated theorem
provers. On all datasets, the non-concurrent version of Kontroli is
consistently faster than both concurrent and non-concurrent versions of
Dedukti. When concurrently checking theories, Kontroli speeds up the
most time-consuming part of proof checking by up to 6.6x when using
eight threads (\autoref{evaluation}).
\hypertarget{background}{%
\section{Background}\label{background}}
\hypertarget{lpmr}{%
\subsection{\texorpdfstring{The \(\lambda\Pi\)-Calculus Modulo
Rewriting}{The \textbackslash{}lambda\textbackslash{}Pi-Calculus Modulo Rewriting}}\label{lpmr}}
Let \(\mathcal{C}\) denote a set of constants. A term has the shape
\[t \coloneqq c \mid s \mid t u \mid x \mid \,
\lambda x\!:\!t.\, u \, \mid \,
\Pi x\!:\!t.\, u,\] where \(c \in \mathcal{C}\) is a constant,
\(s \coloneqq \Type \mid \Kind\) is a sort, \(t\) and \(u\) are terms,
and \(x\) is a bound variable. If \(x\) does not occur freely in \(u\),
we may write \(t \to u\) for \(\Pi x\!:\!t.\, u\).
A rewrite pattern has the shape \(p \coloneqq x \mid c p_1 \dots p_n\),
where \(x\) is a variable, \(c \in \mathcal{C}\) is a constant, and
\(p_1 \dots p_n\) is a potentially empty sequence of rewrite patterns
applied to \(c\).
A rewrite rule has the shape
\(r \coloneqq c p_1 \dots p_n \hookrightarrow t\), where we call
\(c p_1 \dots p_n\) the left-hand side, \(t\) the right-hand side, and
\(c\) the head symbol of \(r\). The free variables of the right-hand
side are required to be a subset of the free variables of the left-hand
side, i.e. \(\bigcup_i \FVar(p_i) \supseteq \FVar(t)\).\footnote{To
simplify the presentation, I only introduce first-order rewriting.
Note that Dedukti uses higher-order rewriting
\citep{DBLP:journals/logcom/Miller91}.}
A global context \(\Gamma\) contains statements of the form \(c: A\) and
\(c p_1 \dots p_n \hookrightarrow t\). A local context \(\Delta\)
contains statements of the form \(x: A\).
We beta-reduce terms via \((\lambda x. t) u \to _\beta t[u/x]\), where
\(t[u/x]\) denotes the substitution of \(x\) in \(t\) by \(u\).
Additionally, we reduce \(t \to _{\gamma\Gamma} u\) iff there exists a
term rewrite rule \((t' \hookrightarrow u') \in \Gamma\) and a
substitution \(\sigma\), so that \(t' \sigma = t\) and
\(u' \sigma = u\).
Let \(\to _{\Gamma}\, =\, \to _\beta \cup \to _{\gamma\Gamma}\) be our
reduction relation.\footnote{The implementations of the calculus
optionally eta-reduce terms via \((\lambda x. t x) \to _\eta t\).} We
say that two terms \(t, u\) are \(\Gamma\)-convertible, i.e.
\(t \sim _{\Gamma} u\), when there exists a term \(v\) such that
\(t \to _{\Gamma}^* v\) and \(u \to _{\Gamma}^* v\).
\begin{figure}
\includegraphics{prftree/inference-rules.tex}
\caption{Inference rules.}
\label{fig:inference-rules}
\end{figure}
We write \(\Gamma \vdash t: A\) and say that the term \(t\) has the type
\(A\) in the global context \(\Gamma\) if we can find a derivation of
\(\Gamma, \Delta \vdash t: A\) using the rules in
\autoref{fig:inference-rules} \citep[adapted from][Figure
2.4]{DBLP:phd/hal/Saillard15a}, where \(\Delta\) is an empty local
context. Type inference determines a unique type \(A\) for a term \(t\)
and a global context \(\Gamma\) such that \(\Gamma \vdash t : A\). Type
checking verifies for terms \(t\) and \(A\) and a global context
\(\Gamma\) whether \(\Gamma \vdash t : A\). If the reduction relation
\(\to _\Gamma^*\) is type-preserving, terminating, and confluent, then
type inference and type checking terminate \citep[Theorem
6.3.1]{DBLP:phd/hal/Saillard15a}.
A \emph{command} introduces either a new constant \(c: A\) or a rewrite
rule \(c p_1 \dots p_n \hookrightarrow t\). A \emph{theory} is a
sequence of commands.
We check a theory as follows: We start with an empty set of constants
\(\mathcal{C} = \emptyset\) and an empty global context
\(\Gamma = \emptyset\). For every command in the theory, we distinguish:
If the command introduces a constant \(c: A\), we verify that
\(c \notin \mathcal{C}\) and that \(\Gamma \vdash A: A'\) for some
\(A'\), then we add \(c\) to \(\mathcal{C}\) and extend the global
context such that \((c: A) \in \Gamma\). If the command introduces a
rewrite rule \(c p_1 \dots p_n \hookrightarrow t\), we verify the
existence of a local context \(\Delta\) and a type \(A\) such that
\(\Gamma, \Delta \vdash c p_1 \dots p_n : A\) and
\(\Gamma, \Delta \vdash t : A\), then we extend the global context such
that \((c p_1 \dots p_n \hookrightarrow t) \in \Gamma\).
\begin{example}Consider the following theory: \begin{align}
\prop &: \Type \\
\impl &: \prop \to \prop \to \prop \\
\prf &: \prop \to \Type \\
\prf &\,(\impl x\, y) \hookrightarrow \prf x \to \prf y \label{prfimpl} \\
\imprefl &: \Pi x\!:\!\prop.\, \prf\, (\impl x\, x) \label{imprefl-def} \\
\imprefl &\hookrightarrow \lambda x\!:\!\prop.\, \lambda p\!:\!\prf x.\, p \label{imprefl-prf}
\end{align} This theory first defines types of propositions,
implications, and proofs. Next, (\ref{prfimpl}) introduces a rewrite
rule that interprets proofs of implications. (\ref{imprefl-def}) asserts
that implication is reflexive, and (\ref{imprefl-prf}) proves it via a
rewrite rule.\end{example}
\hypertarget{cv}{%
\subsection{Concurrent Verification}\label{cv}}
Concurrent verification designates the simultaneous verification of
different parts of a theory. Following Wenzel's terminology
\citep{DBLP:conf/itp/Wenzel13}, concurrency can happen at different
levels of \emph{granularity}. I distinguish concurrent verification on
the level of theories (granularity 0) and on the level of
commands/proofs (granularity 1).\footnote{Wenzel gives yet another level
of granularity, namely sub-proofs. However, there is no concept of
sub-proofs in Dedukti.} This work focuses on command-concurrent
verification. I will evaluate the two approaches in
\autoref{evaluation}.
\hypertarget{theory-concurrent-verification}{%
\subsubsection{Theory-Concurrent
Verification}\label{theory-concurrent-verification}}
A theory can be divided into smaller theories, as long as the theory
dependencies form a directed acyclic graph. To verify a theory, all of
its (transitive) dependencies must be verified before. Theory-concurrent
verification exploits that theories that do not transitively depend on
each other can be checked concurrently.
An example of a theory dependency graph is shown in \autoref{fig:matita}
for a formalisation of Fermat's little theorem in Matita. \begin{figure}
\includegraphics{tikz/matita.tex}
\caption{Theory dependency graph of Fermat's little theorem in Matita,
encoded in STTfa.}
\label{fig:matita}
\end{figure} The ``breadth'' of the graph determines the maximum amount
of theories that can be concurrently verified; for example, for
\autoref{fig:matita} we can verify at most six theories concurrently,
namely \texttt{exp}, \texttt{bigops}, \texttt{gcd}, \texttt{cong},
\texttt{fact}, and \texttt{permutation}.
Theory-concurrent verification can be implemented by launching a
verification process for every theory, producing for every theory a
global context that contains the commands in that theory. To verify a
theory, it is necessary to load the global contexts of the theory's
dependencies. As loading of global contexts comes with some overhead,
dividing a theory into smaller theories increases the number of theories
that can be verified concurrently, at the cost of the individual
theories taking longer to verify.
\hypertarget{command-concurrent-verification}{%
\subsubsection{Command-Concurrent
Verification}\label{command-concurrent-verification}}
The verification of a command can be broken into multiple tasks. Where
theory-concurrent verification exploits that independent \emph{theories}
can be checked concurrently, command-concurrent verification exploits
that independent tasks to verify a \emph{command} can be performed
concurrently.
\begin{figure*}
\includegraphics{tikz/execution.tex}
\caption{Execution strategies.}
\label{fig:execution}
\end{figure*}
This is illustrated in \autoref{fig:execution}. We consider a proof
checker that performs four tasks for every command of a theory, namely
parsing, sharing, (type) inference, and (type) checking, which will be
further explained in the remainder of this paper. Sequential or
non-concurrent processing (\autoref{fig:sequential}) checks a command
only once all tasks have been performed for preceding commands. Parallel
parsing (\autoref{fig:parallelc}) moves parsing to a different thread,
and parallel checking (\autoref{fig:parallelj}) distributes checking
among an arbitrary number of threads. For both parallel parsing and
checking, multiple operations for different commands are executed at the
same time; for example, the second command may be parsed while the first
command is still being checked, or the first and second command may be
checked while the third command is being shared and the fourth command
is parsed. Theoretically, the combination of parallel parsing and
checking could reduce wall-clock time to check a theory by the time
taken for parsing and checking. In practice, however, the overhead of
concurrency often leads to much smaller gains, as I will show in
\autoref{evaluation}.
Command-concurrent verification allows for the concurrent verification
of commands regardless of the theory graph. Where the maximum number of
concurrently verifiable \emph{theories} is bounded by the graph breadth,
the maximum number of concurrently verifiable \emph{commands} is bounded
by the total number of commands to verify. Where theory-concurrent
verification lends itself well to processes, command-concurrent
verification lends itself well to threads, because threads allow the global
context to be shared between concurrent verifications, thus omitting the
I/O overhead of loading global contexts, which would become
noticeable if done for every command. However, this comes at the cost of
using thread-safe data structures for the global context, as I will
discuss in \autoref{verification}.
\hypertarget{sharing}{%
\subsection{Sharing and Concurrency}\label{sharing}}
Sharing enables multiple references to the same memory region. We call
such references \emph{physically equal}. Sharing and physical equality
are exploited in Dedukti; for example, we immediately know that
physically equal terms are convertible. In many garbage-collected
programming languages, such as Haskell and OCaml, sharing is
\emph{implicit}, i.e.~members of any type may be shared, whereas in many
programming languages without garbage collector, such as C++ and Rust,
sharing is \emph{explicit}, i.e.~only members of special types are
shared. Such special types include C++'s \texttt{shared\_ptr} and Rust's
\texttt{Rc}. To check for physical equality in Rust, we need to
explicitly wrap objects with a type such as \texttt{Rc}
(\autoref{lst:eqrust}), whereas in OCaml, such wrapping is implicit
(\autoref{lst:eqocaml}).
\begin{listing}\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{let}\NormalTok{ a = }\DataTypeTok{Some}\NormalTok{(}\DecValTok{0}\NormalTok{) }\KeywordTok{in}
\KeywordTok{let}\NormalTok{ b = a }\KeywordTok{in}
\KeywordTok{let}\NormalTok{ c = }\DataTypeTok{Some}\NormalTok{(}\DecValTok{0}\NormalTok{) }\KeywordTok{in}
\KeywordTok{assert}\NormalTok{ (a = b);}
\KeywordTok{assert}\NormalTok{ (b = c);}
\KeywordTok{assert}\NormalTok{ (a == b);}
\KeywordTok{assert}\NormalTok{ (}\DataTypeTok{not}\NormalTok{ (b == c));}
\end{Highlighting}
\end{Shaded}
\caption{Structural and physical equality in OCaml.}
\hypertarget{lst:eqocaml}{%
\label{lst:eqocaml}}%
\end{listing}
\begin{listing}\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{let}\NormalTok{ a = }\PreprocessorTok{Rc::}\NormalTok{new(}\ConstantTok{Some}\NormalTok{(}\DecValTok{0}\NormalTok{));}
\KeywordTok{let}\NormalTok{ b = a.clone();}
\KeywordTok{let}\NormalTok{ c = }\PreprocessorTok{Rc::}\NormalTok{new(}\ConstantTok{Some}\NormalTok{(}\DecValTok{0}\NormalTok{));}
\PreprocessorTok{assert!}\NormalTok{(a == b);}
\PreprocessorTok{assert!}\NormalTok{(b == c);}
\PreprocessorTok{assert!}\NormalTok{( }\PreprocessorTok{Rc::}\NormalTok{ptr_eq(&a, &b));}
\PreprocessorTok{assert!}\NormalTok{(!}\PreprocessorTok{Rc::}\NormalTok{ptr_eq(&b, &c));}
\end{Highlighting}
\end{Shaded}
\caption{Structural and physical equality in Rust.}
\hypertarget{lst:eqrust}{%
\label{lst:eqrust}}%
\end{listing}
\emph{Reference counting} is a technique that is commonly used in
languages without garbage collection to manage memory of shared objects:
A reference-counted object keeps a counter to register how often it is
referenced. Whenever a reference to an object is created, its counter is
increased, and whenever a reference to an object goes out of scope, its
counter is decreased. Finally, when an object's counter turns zero, the
object is freed.
We call data structures that can be safely shared between threads
\emph{thread-safe}. When a reference-counted object is shared between
multiple threads, its counter has to be modified \emph{atomically}, to
ensure that multiple concurrent modifications to the counter do not
interfere. Non-atomic modifications can result in memory corruption (a
counter turning 0 despite the object still being referenced) and memory
leaks (a counter remaining greater than 0 despite the object not being
referenced). However, atomic modifications imply a significant runtime
overhead, which makes thread-safe reference counting markedly more
expensive than its single-threaded counterpart.
Languages that do not share values implicitly allow us to minimise
concurrency overhead by choosing appropriate types for sharing. In Rust,
wrapping objects with different smart pointer types marks them as either
shareable only within one thread (\texttt{Rc}, i.e.~reference-counted),
shareable between multiple threads (\texttt{Arc}, i.e.~atomically
reference-counted), or not shareable at all (\texttt{Box}). Any of these
smart pointer types has two out of three properties: thread-safety
(\texttt{Box}, \texttt{Arc}), sharing (\texttt{Rc}, \texttt{Arc}), and
performance (\texttt{Box}, \texttt{Rc}), see \autoref{fig:pointers}. In
addition, we have a non-smart pointer type, namely references
(\texttt{\&}), which has all three desiderata mentioned above, but
requires us to prove that it points to a valid object.\footnote{Rust is
a memory-safe language, so unlike e.g.~C/C++, the compiler throws an
error if we attempt to use a reference pointing to an invalid object.
This protects against a large class of memory-related bugs.}
\begin{figure}
\includegraphics{tikz/pointers.tex}
\caption{Venn diagram of common Rust pointer types and their
properties.}
\label{fig:pointers}
\end{figure}
In summary, for concurrent type checking, we need to carefully choose
our pointer types, as this choice has a direct impact on performance.
\hypertarget{terms}{%
\section{Terms}\label{terms}}
The central data structure of our proof checker is the term. Let us have a
closer look at how they are defined. See \autoref{sharing} for an
explanation of the pointer types used here.
\begin{listing}\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{type}\NormalTok{ term =}
\NormalTok{ Kind | Type}
\NormalTok{ | Const }\KeywordTok{of} \DataTypeTok{string}\NormalTok{ | Var }\KeywordTok{of} \DataTypeTok{int}
\NormalTok{ | App }\KeywordTok{of}\NormalTok{ term * term }\DataTypeTok{list}
\NormalTok{ | Lam }\KeywordTok{of}\NormalTok{ term }\DataTypeTok{option}\NormalTok{ * term}
\NormalTok{ | Pi }\KeywordTok{of}\NormalTok{ term * term}
\end{Highlighting}
\end{Shaded}
\caption{Original terms in OCaml.}
\hypertarget{lst:oterm}{%
\label{lst:oterm}}%
\end{listing}
\begin{listing}\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{enum}\NormalTok{ Term<C, Tm> }\OperatorTok{\{}
\NormalTok{ Kind, Type,}
\NormalTok{ Const(C), Var(}\DataTypeTok{usize}\NormalTok{),}
\NormalTok{ App(Tm, }\DataTypeTok{Vec}\NormalTok{<Tm>),}
\NormalTok{ Lam(}\DataTypeTok{Option}\NormalTok{<Tm>, Tm),}
\NormalTok{ Pi(Tm, Tm),}
\OperatorTok{\}}
\KeywordTok{struct}\NormalTok{ BTerm<C>(}\DataTypeTok{Box}\NormalTok{<Term<C, BTerm<C>>>);}
\KeywordTok{struct}\NormalTok{ RTerm<C>(Rc <Term<C, RTerm<C>>>);}
\KeywordTok{struct}\NormalTok{ ATerm<C>(Arc<Term<C, ATerm<C>>>);}
\end{Highlighting}
\end{Shaded}
\caption{Original terms in Rust.}
\hypertarget{lst:term}{%
\label{lst:term}}%
\end{listing}
\autoref{lst:oterm} shows the definition of Dedukti terms in OCaml, and
\autoref{lst:term} shows its direct translation to Rust. I call the
constructors \texttt{Kind}, \texttt{Type}, \texttt{Const}, and
\texttt{Var} \emph{atomic}, and the constructors \texttt{App},
\texttt{Lam}, and \texttt{Pi} non-atomic. Unlike the OCaml terms, the
Rust terms are generic over the type of constants \texttt{C} and the
type of term references \texttt{Tm}. We will see in \autoref{parsing}
how the choice of \texttt{C} is useful. Based on the non-inductive
\texttt{Term} type, the Rust version defines three inductive term types,
namely \texttt{BTerm}, \texttt{RTerm}, and \texttt{ATerm}. In
\texttt{BTerm}, term references are unshared, whereas in \texttt{RTerm}
and \texttt{ATerm}, term references are shared, using non-atomic and
atomic reference counting, respectively. As discussed in
\autoref{sharing}, the term types satisfy the following properties
(under the assumption that the constant type \texttt{C} is thread-safe
and can be copied and compared in constant time):
\begin{itemize}
\tightlist
\item
Unlike \texttt{RTerm}, both \texttt{BTerm} and \texttt{ATerm} can be
used across threads.
\item
Unlike \texttt{BTerm}, both \texttt{RTerm} and \texttt{ATerm} can be
compared for physical equality, taking constant time.
\item
Copying a \texttt{BTerm} deep clones the term, whereas copying
\texttt{RTerm} and \texttt{ATerm} modifies their reference counter,
which is faster for \texttt{RTerm} than for \texttt{ATerm}.
\item
\texttt{BTerm}, \texttt{RTerm}, and \texttt{ATerm} are increasingly
slow to create.
\end{itemize}
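The properties above follow directly from the semantics of Rust's standard pointer types. A minimal standalone sketch, using \texttt{String} as a stand-in payload:

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    // Box: unique ownership; cloning deep-clones the payload.
    let b: Box<String> = Box::new("term".to_string());
    let b2 = b.clone();
    assert_eq!(*b, *b2); // equal contents, but two separate allocations

    // Rc: shared ownership within one thread; cloning bumps a counter.
    let r: Rc<String> = Rc::new("term".to_string());
    let r2 = Rc::clone(&r);
    assert!(Rc::ptr_eq(&r, &r2)); // constant-time physical equality

    // Arc: shared ownership across threads via an atomic counter.
    let a: Arc<String> = Arc::new("term".to_string());
    let a2 = Arc::clone(&a);
    let handle = thread::spawn(move || a2.len()); // Arc may cross threads
    assert_eq!(handle.join().unwrap(), 4);
}
```

An analogous attempt to send an \texttt{Rc} into \texttt{thread::spawn} is rejected by the compiler, which is exactly the first property in the list above.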
Using \texttt{\&} as pointer type in place of \texttt{Box} etc., it
is possible to create a term datatype that is thread-safe, shareable,
and fast. This is particularly interesting for concurrent verification
of commands. However, such a term type requires us to specify at
compile-time the lifetime of each term. Because we cannot precisely
predict how long each term is going to be used, we have to
over-approximate its lifetime to be as long as the verification of a
command. That means that throughout the verification of a command, we
have to keep in memory every term that is created. Compared to using
reference-counted terms, this significantly increases memory usage,
because verification may create a large number of intermediate terms.
Therefore, I did not further pursue using \texttt{\&} as pointer type
for terms.
The three inductive term types require us to wrap every term constructor
with a pointer type (\texttt{Box}, \texttt{Rc}, or \texttt{Arc}).
However, the atomic constructors \texttt{Kind}, \texttt{Type},
\texttt{Const}, and \texttt{Var} do not contain any terms and can be
cloned in constant time, so wrapping them in a pointer
type is pointless. For this reason, I give a refined term type in
\autoref{lst:terma}, in which the non-atomic constructors have moved to
the \texttt{TermC} type. The refined \texttt{RTerm} and \texttt{ATerm}
can be defined analogously to the original \texttt{RTerm} and
\texttt{ATerm}. Using the refined \texttt{Term}, we do not need to wrap
atomic constructors with a pointer type, but in exchange, we need to
wrap non-atomic constructors in a \texttt{Comb}.
\begin{listing}\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{enum}\NormalTok{ Term<C, Tm> }\OperatorTok{\{}
\NormalTok{ Kind, Type,}
\NormalTok{ Const(C), Var(}\DataTypeTok{usize}\NormalTok{),}
\NormalTok{ Comb(Tm),}
\OperatorTok{\}}
\KeywordTok{enum}\NormalTok{ TermC<Tm> }\OperatorTok{\{}
\NormalTok{ App(Tm, }\DataTypeTok{Vec}\NormalTok{<Tm>),}
\NormalTok{ Lam(}\DataTypeTok{Option}\NormalTok{<Tm>, Tm),}
\NormalTok{ Pi(Tm, Tm),}
\OperatorTok{\}}
\KeywordTok{struct}\NormalTok{ BTermC<C>(}\DataTypeTok{Box}\NormalTok{<TermC<BTerm<C>>>);}
\KeywordTok{type}\NormalTok{ BTerm<C> = Term<C, BTermC<C>>;}
\end{Highlighting}
\end{Shaded}
\caption{Refined Rust terms.}
\hypertarget{lst:terma}{%
\label{lst:terma}}%
\end{listing}
\begin{figure}
\includegraphics{tikz/proptype.tex}
\caption{Original \texttt{BTerm} encoding of \(\prop \to \Type\).}
\label{fig:proptype}
\end{figure} \begin{figure}
\includegraphics{tikz/proptypea.tex}
\caption{Refined \texttt{BTerm} encoding of \(\prop \to \Type\).}
\label{fig:proptypea}
\end{figure}
\begin{example}\autoref{fig:proptype} and \autoref{fig:proptypea} show
the encoding of the term \(\prop \to \Type\) in the original and refined
\texttt{BTerm}, where \(\prop\) is a user-defined constant. In these
graphical representations, a box is shown by a rectangle with rounded
corners. We can see that the original \texttt{BTerm} uses three
constructors and three boxes, whereas the refined \texttt{BTerm} uses
four constructors and one box.\end{example}
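The refined datatype of \autoref{lst:terma} compiles as-is; the following sketch builds the example term of \autoref{fig:proptypea} and confirms that only the \texttt{Pi} node requires a box:

```rust
// Refined terms: atomic constructors are unboxed; non-atomic ones
// are wrapped in a boxed TermC via the Comb constructor.
#[allow(dead_code)]
enum Term<C, Tm> {
    Kind,
    Type,
    Const(C),
    Var(usize),
    Comb(Tm),
}
#[allow(dead_code)]
enum TermC<Tm> {
    App(Tm, Vec<Tm>),
    Lam(Option<Tm>, Tm),
    Pi(Tm, Tm),
}
struct BTermC<C>(Box<TermC<BTerm<C>>>);
type BTerm<C> = Term<C, BTermC<C>>;

fn main() {
    // prop -> Type, i.e. Pi(Const("prop"), Type):
    // four constructors (Comb, Pi, Const, Type), but only one Box.
    let t: BTerm<&str> = Term::Comb(BTermC(Box::new(TermC::Pi(
        Term::Const("prop"),
        Term::Type,
    ))));
    assert!(matches!(&t, Term::Comb(BTermC(c)) if matches!(**c, TermC::Pi(_, _))));
}
```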
Using fewer pointers (boxes) benefits performance most when using
\texttt{Arc} and less when using \texttt{Rc}, because \texttt{Arc} has
the largest overhead of the pointer types. On one dataset, using the
refined term types reduced total proof checking time by 20\% when using
\texttt{RTerm} and by 29\% when using \texttt{ATerm}.
\hypertarget{reduction}{%
\section{Reduction}\label{reduction}}
Asperti et al.~\citep{asperti2009} have introduced \emph{abstract
machines} to efficiently reduce terms to WHNF. Of all Dedukti
components, reimplementing abstract machines in Rust was the most
complicated, because they involve sharing, mutability, and lazy
evaluation. This section studies the feasibility of concurrent
reduction.
\begin{listing}\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{type}\NormalTok{ state = \{}
\NormalTok{ ctx : term }\DataTypeTok{Lazy}\NormalTok{.t }\DataTypeTok{list}\NormalTok{;}
\NormalTok{ term : term;}
\NormalTok{ stack : state }\DataTypeTok{ref} \DataTypeTok{list}\NormalTok{;}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}
\caption{Abstract machine state in OCaml.}
\hypertarget{lst:state-ocaml}{%
\label{lst:state-ocaml}}%
\end{listing}
An abstract machine encodes a term \(u\) via a substitution \(\sigma\)
called context, a term \(t\), and a stack \([t_1, \dots, t_n]\), such
that \(u = (t \sigma) t_1 \dots t_n\). \autoref{lst:state-ocaml} shows
the definition of an abstract machine state in Dedukti. The context is a
list of lazy terms, and the stack of arguments is a list of mutable
references to states.
\begin{example}Consider an abstract machine \(m\) consisting of an empty
context, a term \(t\) and a stack \([t_1, t_2]\). Suppose that
\(t = \add\) and \(t_1\) and \(t_2\) are states encoding the terms
\(\fib 5\) and \(\fib 6\), respectively. Then the machine \(m\) encodes
the term \(\add\, (\fib 5)\, (\fib 6)\). Now suppose that we try to
match \(m\) with the left-hand side of a rewrite rule
\(\add 0\, n \hookrightarrow n\). This will evaluate the state \(t_1\)
corresponding to \(\fib 5\) to some new state \(t_1'\) and replace
\(t_1\) with \(t_1'\). Because the stack is implemented as a list of
mutable references, all copies of the original machine \(m\) will also
contain \(t_1'\). This avoids recomputing \(\fib 5\) in copies of
\(m\).\end{example}
\begin{listing}\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{struct}\NormalTok{ State<C> }\OperatorTok{\{}
\NormalTok{ ctx: }\DataTypeTok{Vec}\NormalTok{<LazyTerm<C>>,}
\NormalTok{ term: RTerm<C>,}
\NormalTok{ stack: }\DataTypeTok{Vec}\NormalTok{<StatePtr<C>>,}
\OperatorTok{\}}
\KeywordTok{type}\NormalTok{ StatePtr<C> = Rc<RefCell<State<C>>>;}
\KeywordTok{type}\NormalTok{ LazyTerm<C> =}
\NormalTok{ Rc<Thunk<StatePtr<C>, RTerm<C>>>;}
\end{Highlighting}
\end{Shaded}
\caption{Abstract machine state in Rust.}
\hypertarget{lst:state-rust}{%
\label{lst:state-rust}}%
\end{listing}
\autoref{lst:state-rust} shows the corresponding definition of abstract
machines in Rust. The counterpart to Dedukti's \texttt{state\ ref} is an
\texttt{Rc}-shared mutable reference (\texttt{RefCell}) to a state, and
the counterpart to Dedukti's \texttt{term\ Lazy.t} is an
\texttt{Rc}-shared \texttt{Thunk} from a state pointer to a term. A
\texttt{Thunk\textless{}T,\ U\textgreater{}} is a delayed one-time
transformation from a type \texttt{T} to \texttt{U}. Here, evaluating a
lazy term transforms a pointer to a state
\((\sigma, t, [t_1, \dots, t_n])\) to the WHNF of a term
\((t \sigma) t_1 \dots t_n\).
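The behaviour of such a thunk can be sketched with \texttt{std::cell::OnceCell}; this is an illustrative minimal version, not the actual Kontroli type:

```rust
use std::cell::{Cell, OnceCell, RefCell};

// A minimal one-shot thunk: holds the input until forced, then caches the result.
struct Thunk<T, U> {
    input: RefCell<Option<T>>,
    result: OnceCell<U>,
}

impl<T, U> Thunk<T, U> {
    fn new(input: T) -> Self {
        Thunk { input: RefCell::new(Some(input)), result: OnceCell::new() }
    }

    // Force the thunk with evaluation function `f`; `f` runs at most once.
    fn force(&self, f: impl FnOnce(T) -> U) -> &U {
        if self.result.get().is_none() {
            let t = self.input.borrow_mut().take().expect("forced only once");
            let _ = self.result.set(f(t));
        }
        self.result.get().unwrap()
    }
}

fn main() {
    let calls = Cell::new(0);
    let th = Thunk::new(21);
    let double = |x: i32| { calls.set(calls.get() + 1); x * 2 };
    assert_eq!(*th.force(double), 42);
    assert_eq!(*th.force(double), 42); // cached: `double` did not run again
    assert_eq!(calls.get(), 1);
}
```

In Kontroli, the input type \texttt{T} is a state pointer and the output type \texttt{U} is a term in WHNF, so forcing a lazy term evaluates its abstract machine at most once.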
Two operations performed during reduction can be trivially parallelised:
\begin{itemize}
\tightlist
\item
Substitution: We can parallelise a substitution
\((t\ t_1 \dots t_n) \sigma = (t \sigma) (t_1 \sigma) \dots (t_n \sigma)\)
by calculating \(t_i \sigma\) for multiple \(i\) in parallel. Here,
\(t_i\) is a shared lazy term (\texttt{LazyTerm}).
\item
Matching: We can parallelise matching a term \(c t_1 \dots t_n\) with
a pattern \(c p_1 \dots p_n\) by matching \(t_i\) with \(p_i\) for
multiple \(i\) in parallel. Here, \(t_i\) is a shared mutable
reference to an abstract machine state (\texttt{StatePtr}).
\end{itemize}
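The parallel pattern shared by both operations can be sketched with scoped threads from the standard library; \texttt{subst} here is a stand-in for computing \(t_i \sigma\), not the actual substitution code:

```rust
use std::thread;

// Stand-in for applying a substitution σ to one argument term.
fn subst(t: &str) -> String {
    format!("{t}σ")
}

fn main() {
    let args = ["t1", "t2", "t3"];
    // Apply `subst` to every argument in parallel using scoped threads.
    let results: Vec<String> = thread::scope(|s| {
        let handles: Vec<_> = args.iter().map(|t| s.spawn(move || subst(t))).collect();
        handles.into_iter().map(|h| h.join().unwrap()).collect()
    });
    assert_eq!(results, ["t1σ", "t2σ", "t3σ"]);
}
```

This compiles only because \texttt{\&str} is thread-safe; replacing the argument type with a thread-unsafe term type such as \texttt{RTerm} makes the compiler reject the \texttt{spawn}.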
Both substitution and matching involve evaluation of abstract machines.
Therefore, parallelising either of these operations requires thread-safe
abstract machines. However, the shown definition of abstract machine
states uses several thread-unsafe types, namely \texttt{RTerm},
\texttt{Rc}, \texttt{RefCell}, and \texttt{Thunk}. We can obtain
thread-safe abstract machines by replacing these types with thread-safe
types, such as \texttt{RTerm} with \texttt{ATerm}, \texttt{Rc} with
\texttt{Arc}, and \texttt{RefCell} with \texttt{Mutex}. However, each of
these types adds some overhead. My experiments showed that this overhead
is so large that even with concurrent substitution and matching, the
proof checker is significantly slower. Therefore, I did not further
pursue concurrent reduction.
\hypertarget{verification}{%
\section{Verification}\label{verification}}
This section describes how to verify a sequence of commands, and how to
parallelise it. The resulting approach will perform command-concurrent
verification as introduced in \autoref{cv}.
Let us revisit the verification procedure outlined in \autoref{lpmr}: We
start with an empty global context \(\Gamma\) and perform the following
for every command: If the command introduces a constant \(c: A\), we
infer the type \(A'\) such that \(\Gamma \vdash A : A'\). If the command
introduces a rewrite rule \(l \hookrightarrow r\) in a local context
\(\Delta\), we infer the type \(A\) such that
\(\Gamma, \Delta \vdash l : A\) and check that
\(\Gamma, \Delta \vdash r : A\). Finally, we add the command to the
global context \(\Gamma\).
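The procedure can be rendered as a skeleton; \texttt{Term}, \texttt{infer}, and \texttt{check} are illustrative stand-ins here, not the actual Kontroli API:

```rust
// Illustrative skeleton of the verification loop.
type Term = String;
type Ctx = Vec<(String, Term)>;

#[allow(dead_code)]
enum Command {
    Constant(String, Term), // c : A
    Rule(Ctx, Term, Term),  // Δ ⊢ l ↪ r
}

// Stand-ins for type inference and checking.
fn infer(_gamma: &Ctx, _delta: &Ctx, _t: &Term) -> Term { "Type".to_string() }
fn check(_gamma: &Ctx, _delta: &Ctx, _t: &Term, _ty: &Term) -> bool { true }

fn verify(commands: Vec<Command>) -> Result<Ctx, String> {
    let mut gamma: Ctx = Vec::new();
    let empty: Ctx = Vec::new();
    for cmd in commands {
        match cmd {
            Command::Constant(c, a) => {
                let _a_ty = infer(&gamma, &empty, &a); // Γ ⊢ A : A'
                gamma.push((c, a));
            }
            Command::Rule(delta, l, r) => {
                let a = infer(&gamma, &delta, &l); // Γ, Δ ⊢ l : A
                if !check(&gamma, &delta, &r, &a) { // Γ, Δ ⊢ r : A
                    return Err("rule right-hand side ill-typed".into());
                }
                // the rewrite rule would also be recorded in Γ here
            }
        }
    }
    Ok(gamma)
}

fn main() {
    let cmds = vec![Command::Constant("prop".into(), "Type".into())];
    assert_eq!(verify(cmds).unwrap().len(), 1);
}
```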
Proof checking usually spends the largest portion of time checking that
\(\Gamma, \Delta \vdash r : A\). Using this observation, we can
parallelise verification by deferring these checks and performing them
in parallel in a thread pool. This puts certain constraints on the used
data types: Because we are sending type checking tasks
\(\Gamma, \Delta \vdash r : A\) across threads, the global and local
contexts \(\Gamma\) and \(\Delta\) as well as the terms \(r\) and \(A\)
need to be thread-safe. However, type checking uses thread-unsafe shared
terms (\texttt{RTerm}). I am going to discuss two approaches to resolve
this dilemma.
The first approach is to use \emph{thread-safe shared} terms
(\texttt{ATerm}) in \(\Gamma\), \(\Delta\) as well as for \(r\) and
\(A\). This implies that type checking and all algorithms performed as
part of it (such as reduction and substitution) should operate on
\texttt{ATerm}. However, if all these algorithms accept \texttt{ATerm},
then sequential proof checking would also be forced to use
\texttt{ATerm}, which would result in an unnecessary overhead compared
to using \texttt{RTerm}. This can be circumvented by creating a
sequential and a parallel version of the kernel; the only difference
between these is that the parallel version uses \texttt{ATerm} wherever
the sequential version uses \texttt{RTerm}. This allows us to use the
same kernel code for both overhead-free sequential as well as for
parallel verification. One downside to this approach is that concurrent
access of multiple check threads to the same shared term has to be
synchronised. This is why it is important to reduce the amount of
sharing in terms, as done with the optimised term type in
\autoref{terms}. But even with this optimisation, multiple check threads
accessing the same shared term simultaneously can become a bottleneck.
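The idea of one kernel serving both term types can be sketched by making the kernel code generic over the term type; \texttt{SharedTerm} is an illustrative bound, not the actual Kontroli trait:

```rust
use std::rc::Rc;
use std::sync::Arc;

// Illustrative bound satisfied by both shared term types.
trait SharedTerm: Clone {
    fn size(&self) -> usize;
}

#[derive(Clone)]
struct RTerm(Rc<String>);
#[derive(Clone)]
struct ATerm(Arc<String>);

impl SharedTerm for RTerm { fn size(&self) -> usize { self.0.len() } }
impl SharedTerm for ATerm { fn size(&self) -> usize { self.0.len() } }

// One generic "kernel" function, usable sequentially (RTerm) or in parallel (ATerm).
fn kernel_step<T: SharedTerm>(t: &T) -> usize {
    t.clone().size() // cloning bumps a (possibly atomic) reference counter
}

fn main() {
    assert_eq!(kernel_step(&RTerm(Rc::new("x".into()))), 1);
    assert_eq!(kernel_step(&ATerm(Arc::new("x".into()))), 1);
}
```

Instantiating the generic code with \texttt{RTerm} pays no atomic-operation overhead, while the \texttt{ATerm} instantiation is the one that may be sent across check threads.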
The second approach is to use \emph{unshared} terms (\texttt{BTerm}) in
\(\Gamma\), \(\Delta\) as well as for \(r\) and \(A\). As will be
explained in \autoref{implementation}, only atomic terms contained in
\(\Gamma\), \(\Delta\) are shared. Because \texttt{BTerm} preserves the
sharing of \emph{atomic} terms, using \texttt{BTerm} for the terms in
\(\Gamma\), \(\Delta\) preserves the sharing of \texttt{ATerm} or
\texttt{RTerm}. However, during type checking, we also want to share
\emph{non-atomic} terms, which we cannot do with \texttt{BTerm}.
Therefore this approach requires us to convert the unshared terms in
\(\Gamma\), \(\Delta\) to shared terms before we can use them for type
checking. Unlike the \texttt{ATerm} approach, this approach allows us to
keep using \texttt{RTerm} (as opposed to \texttt{ATerm}) for type
checking, because the converted terms remain in the same thread. That
means that in the \texttt{ATerm} approach, we have some continual
overhead from using \texttt{ATerm}, whereas in the \texttt{BTerm}
approach, we have overhead whenever we convert a term from \(\Gamma\),
\(\Delta\) to \texttt{RTerm}, but once it is converted, we have less
continual overhead from using \texttt{RTerm} (compared to
\texttt{ATerm}). I evaluated the following strategy: Whenever type
checking requests a term from \(\Gamma\), \(\Delta\), it converts it
from a \texttt{BTerm} to an \texttt{RTerm}. Using this strategy, type
checking is much slower than using the \texttt{ATerm} approach, because
conversions from \texttt{BTerm} to \texttt{RTerm} happen very
frequently. An alternative strategy is to cache the converted terms,
such that multiple requests to the same term result in only a single
conversion. The cache could be persistent for each check task or even
across check tasks. To limit memory consumption, such a cache could be
limited to contain only e.g. the \(n\) most frequently or recently
requested terms. All of these strategies, however, are significantly
more complex to implement than the \texttt{ATerm} approach, and make it
more challenging to create a kernel that can be also used without
overhead for sequential verification. For that reason, I did not further
investigate the \texttt{BTerm} approach and use the \texttt{ATerm}
approach instead.
\hypertarget{parsing}{%
\section{Parsing}\label{parsing}}
Parsing of theories is a surprisingly expensive operation that can take
up to half the time of proof checking, as will be shown in
\autoref{evaluation}. This section presents the design of a theory
parser that can be used both sequentially and concurrently.
The parser takes a reference to an input string (\texttt{\&str}) and
lazily yields a stream of commands. The type of terms contained in a
command is \texttt{BTerm\textless{}\&str\textgreater{}}, where
\texttt{\&str} is the type of constants (see \autoref{terms}). Using
\texttt{\&str} as constant type allows us to copy constants in constant
time and to store constants as slices of the original input string. This
is significantly more efficient than using \texttt{String}, which copies
constants in linear time and allocates new memory for every constant in
the term. For example, parsing the HOL Light dataset (introduced in
\autoref{evaluation}) takes 21.3 seconds
using \texttt{BTerm\textless{}\&str\textgreater{}} (this corresponds to
KO\(\cap p\) in the evaluation) and 28.4 seconds using
\texttt{BTerm\textless{}String\textgreater{}}.
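The effect of \texttt{\&str} constants can be sketched as follows; \texttt{parse\_constants} is a toy tokeniser, not the actual parser:

```rust
// Constants as slices of the input string: copying a constant copies only
// a pointer and a length, and no memory is allocated per constant.
fn parse_constants(input: &str) -> Vec<&str> {
    input.split_whitespace().collect()
}

fn main() {
    let input = String::from("id : A -> A");
    let consts = parse_constants(&input);
    assert_eq!(consts, ["id", ":", "A", "->", "A"]);
    // The two occurrences of "A" have equal contents ...
    assert_eq!(consts[2], consts[4]);
    // ... but point to different regions of the input string.
    assert_ne!(consts[2].as_ptr(), consts[4].as_ptr());
}
```

The last assertion illustrates why a later normalisation step (the sharer of \autoref{implementation}) is needed before constants can be compared by pointer address.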
We can parallelise parsing as follows: In the original thread, we parse
commands and send them via a channel to a new thread, in which we
perform all subsequent operations such as sharing (which will be
explained in \autoref{implementation}) and checking (see
\autoref{verification}). However, we still need to address one issue: We
cannot send references such as \texttt{\&str} through the channel,
because we cannot prove that the references remain valid, so we cannot
send the parsed commands, which contain
\texttt{BTerm\textless{}\&str\textgreater{}}. One solution to this
dilemma is the following: When parsing in a separate thread, convert
\texttt{BTerm\textless{}\&str\textgreater{}} to
\texttt{BTerm\textless{}String\textgreater{}} by duplicating the parts
of the input string that refer to constants. Allowing for this is the
main motivation for the \texttt{Term} type being generic over the
constant type.
Parallel parsing comes with considerable overhead, in particular from
sending commands through the channel. To recall, parsing the HOL Light
dataset to commands containing
\texttt{BTerm\textless{}String\textgreater{}} takes 28.4 seconds. This
increases to 96.4 seconds when additionally sending every command
through a channel. Not all of this overhead shows up in the runtime of
the proof checker, because parsing and sending is performed in a
separate thread. Still, the evaluation shows that the proof checker with
parallel parsing is slower than with sequential parsing.
\hypertarget{implementation}{%
\section{Implementation}\label{implementation}}
I implemented the techniques described in the previous sections in a
proof checker called \emph{Kontroli}.
\hypertarget{the-virtues-of-rust}{%
\subsection{The Virtues of Rust}\label{the-virtues-of-rust}}
Kontroli is implemented in the functional system programming language
\emph{Rust}. Rust combines the memory safety of functional programming
languages with the fine-grained sharing of system programming languages.
The safety of concurrency is verified by virtue of Rust's type system.
For example, the Rust compiler signals an error if we parallelise
reduction without replacing all thread-unsafe types used in the
underlying abstract machines (\autoref{reduction}), if we parallelise
verification using the kernel version with thread-unsafe terms
(\autoref{verification}), or if we parallelise a function that mutates
shared state without synchronisation, such as the inference operation
which mutates the global context. These safety checks rule out a large
class of bugs that other system programming languages, such as C and
C++, do not protect against. This is extremely useful when experimenting
with concurrency.
The kernel of Kontroli does not perform I/O; it is pure. This is
verified by the Rust compiler (using the \texttt{\#!{[}no\_std{]}}
attribute) and allows the kernel to be used in restricted computing
environments, such as web browsers.
\hypertarget{details}{%
\subsection{Details}\label{details}}
The parser that is outlined in \autoref{parsing} is implemented using a
lexer that is automatically generated by the \emph{Logos} library from
an annotated algebraic data type. All intermediate data structures
generated during parsing, such as lexemes, are free of reference-counted
sharing, which contributes to the performance. Before this approach, I
implemented Dedukti parsers with parser combinators (using the
\emph{Nom} library in Rust and the \emph{attoparsec} library in
Haskell). The parser in this work is significantly faster than these
approaches as well as the parser implemented in Dedukti using ocamllex
and Menhir, as I will show in \autoref{evaluation}.
All constants in terms yielded by the parser are physically unequal to
each other, because they all point to different regions in the input
string. For example, the parser transforms the input string
\(id\!:A \to A\) into a command that introduces the constant \(id\) with
the type \(A \to A\), where \(id\), \(A\), and the second \(A\) all
point to different parts of the input string. However, it is desirable
that equivalent constants are represented by physically equal string
references, because this allows us to compare and hash constants
(operations that are frequently performed during checking) using only
their pointer addresses, which takes constant time.
This constant normalisation is fulfilled by the \emph{sharer}: The
sharer maps every constant contained in a term of a command to an
equivalent canonical constant. Because the sharer is generic over the
used constant type, it works regardless of whether \texttt{\&str} or
\texttt{String} is used as constant type and can thus be used on terms
yielded by both sequential and parallel parsing. Furthermore, because
type inference and checking operate on shared terms (\texttt{RTerm} or
\texttt{ATerm}), the sharer converts from \texttt{BTerm} to
\texttt{RTerm} or \texttt{ATerm}, respectively. Finally, when a command
introduces a new constant \(c\), the sharer introduces \(c\) into the
set of canonical constants such that future references to this constant
will be all mapped to the same \texttt{\&str}.
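Interning can be sketched with a \texttt{HashSet} of canonical slices; this is an illustrative version, whereas the actual sharer is generic over the constant type:

```rust
use std::collections::HashSet;

// Map every occurrence of a constant to one canonical &str slice, so that
// later comparisons and hashes can use the pointer address.
struct Sharer<'a> {
    canonical: HashSet<&'a str>,
}

impl<'a> Sharer<'a> {
    fn new() -> Self {
        Sharer { canonical: HashSet::new() }
    }

    fn share(&mut self, c: &'a str) -> &'a str {
        if let Some(&canon) = self.canonical.get(c) {
            return canon; // constant already known: reuse the canonical slice
        }
        self.canonical.insert(c);
        c // c becomes the canonical slice for this constant
    }
}

fn main() {
    let input = String::from("id : A -> A");
    let mut sharer = Sharer::new();
    let a1 = sharer.share(&input[5..6]);   // first "A"
    let a2 = sharer.share(&input[10..11]); // second "A"
    assert_eq!(a1.as_ptr(), a2.as_ptr()); // now physically equal
}
```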
The sharer maps only equivalent atomic terms to physically equal terms;
it does not map equivalent \emph{non}-atomic terms to physically equal
terms. For example, for constants \(f\) and \(c\), the sharer maps
occurrences of the non-atomic term \(f c\) in different parts of a term
to physically unequal terms, even though it maps \(f\) and \(c\) to
physically equal terms. Reduction preserves sharing, but does not
introduce new sharing. For example, if the substitution \(t \sigma\) is
equivalent to \(t\), then \(t \sigma\) is physically equal to \(t\). On
the other hand, if two non-equivalent terms \(t \neq u\) reduce to two
equivalent non-atomic terms \(t' = u'\), then \(t'\) is not physically
equal to \(u'\). This approach to sharing is also implemented in
Dedukti.
Implementing parsing and sharing as separate steps enables a compact and
modular implementation that achieves high performance. On the other
hand, when parsing to terms that contain references to the input string
(as done during sequential parsing), the separation of parsing and
sharing forces us to read the whole input file before we can start
parsing and to keep the whole input file in memory until parsing and
sharing is finished. These restrictions could be overcome by integrating
the sharing step into the parser, at the cost of a more complicated and
less modular implementation.
Checking tasks of the shape \(\Gamma, \Delta \vdash r: A\), as
introduced in \autoref{verification}, are distributed among a thread
pool using the \emph{Rayon} library. This involves creating a copy of
the global context \(\Gamma\) for every checking task. The global
context is implemented as hash map that maps every constant \(c\) to the
type of \(c\) and the rewrite rules having \(c\) as head symbol. The
hash map type in Rust's standard library takes \(\mathcal{O}(n)\) to
copy, making it unsuitable as a hash map for the global context, because
the global context may grow quickly and need frequent copying. I
therefore use an immutable hash map type from the \emph{im} library,
which takes \(\mathcal{O}(1)\) to copy.
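The \(\mathcal{O}(1)\) copy can be approximated in the standard library by sharing an immutable snapshot behind an \texttt{Arc}; unlike \texttt{im}'s persistent map, this sketch does not support cheap insertion on a copy:

```rust
use std::collections::HashMap;
use std::sync::Arc;
use std::thread;

fn main() {
    // Build the global context once, then freeze it behind an Arc.
    let mut gamma = HashMap::new();
    gamma.insert("prop", "Type");
    let snapshot = Arc::new(gamma);

    // Each checking task receives an O(1) copy of the snapshot.
    let task_copy = Arc::clone(&snapshot);
    let handle = thread::spawn(move || task_copy.get("prop").copied());
    assert_eq!(handle.join().unwrap(), Some("Type"));
}
```

A persistent map such as \texttt{im::HashMap} improves on this by additionally allowing each copy to be extended without cloning the whole map, which is what the growing global context requires.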
\hypertarget{kernel-size-supported-features}{%
\subsection{Kernel Size \& Supported
Features}\label{kernel-size-supported-features}}
The Kontroli kernel consists of 663 lines of code, whereas the Dedukti
kernel consists of 3470 lines.\footnote{Dedukti was obtained from
\url{https://github.com/Deducteam/Dedukti}, rev. 38e0c57. Kontroli
was obtained from \url{https://github.com/01mf02/kontroli-rs}, rev.
c980688. Lines of code include neither comments nor blank lines. I
used Tokei 11.0.0 to count the lines for Kontroli by
\texttt{tokei\ src/kernel} and for Dedukti by \texttt{tokei\ kernel}.}
The size of several other proof checkers is given in
\autoref{related-work}.
To obtain such a small kernel, I omitted in Kontroli certain features of
Dedukti such as higher-order rewriting
\citep{DBLP:journals/logcom/Miller91}, matching modulo AC
\citep{DBLP:conf/rta/Contejean04}, and type inference of variables in
rewrite rules. While there is no particular obstacle to implementing
these features, they neither offer a challenge for the concurrency of
proof checking, nor significantly increase the number of datasets that
can be evaluated. On the other hand, these features increase the kernel
size, making experiments with alternative data structures more
time-consuming.
I also omitted several optimisations present in Dedukti, the most
prominent one being decision trees: Decision trees accelerate the
matching of terms in the presence of many rewrite rules
\citep{DBLP:conf/fscd/HondetB20}. However, for the theories I evaluate
in \autoref{evaluation}, decision trees are not strictly necessary for
performance.
\hypertarget{evaluation}{%
\section{Evaluation}\label{evaluation}}
I evaluate the performance of Dedukti and Kontroli on five datasets
derived from theorem provers.
A dataset is a set of theories whose dependencies form a directed
acyclic graph, as illustrated in \autoref{cv}. Every evaluated dataset
consists of two parts, namely a human-written encoding of its underlying
logic and propositions and proofs automatically generated from a theorem
prover. Compared to the second part, the first part is negligibly
small and takes negligible time to check.
I evaluate Kontroli and Dedukti on two kinds of datasets: problems from
automated theorem provers (ATPs) and interactive theorem provers (ITPs)
\citep{faerber2021-koeval}. ATP datasets consist of theory files that
can be checked independently, whereas ITP datasets consist of theory
files that depend on each other. Among the ATP datasets, I evaluate
proofs of TPTP problems generated by iProver Modulo and proofs of
theorems from B method set theory generated by Zenon Modulo
\citep{DBLP:journals/jar/BurelBCDHH20}. For the ITP datasets, I evaluate
parts of the standard libraries from HOL Light (up to finite Cartesian
products) and Isabelle/HOL (up to \texttt{HOL.List}), as well as
Fermat's little theorem proved in Matita
\citep{DBLP:journals/corr/abs-1807-01873}. An evaluation of Coq datasets
is unfortunately not possible because its encoding relies on
higher-order rewriting.
\begin{table}
\caption{Statistics for evaluated datasets. \label{tab:statistics}}
\begin{tabular}{lrrr}
\toprule
Dataset & Size & Theories & Commands\tabularnewline
\midrule
Matita & 2.0MB & 19 & 478\tabularnewline
HOL Light & 2.0GB & 25 & 1776535\tabularnewline
Isabelle/HOL & 2.5GB & 1 & 116927\tabularnewline
iProver & 431.4MB & 6613 & 2549602\tabularnewline
Zenon & 15.4GB & 10330 & 5032442\tabularnewline
\bottomrule
\end{tabular}
\end{table}
\begin{figure}
\includegraphics{tikz/itp-size.tex}
\caption{Size of commands for evaluated ITP datasets.}
\label{fig:itp-size}
\end{figure}
Statistics for the datasets are given in \autoref{tab:statistics}. The
distribution of the sizes of the commands for the ITP datasets is shown
in \autoref{fig:itp-size}. A data point \((x, y)\) on the figure means
that the \(x\)th largest command in a dataset is \(y\) bytes in size.
Therefore, given a graph for a dataset, the \(y\)-coordinate of its
leftmost point is the size of the largest command in the dataset, and
the \(x\)-coordinate of its rightmost point is the total number of
commands in the dataset. The figure shows us for example that the
largest command of the Isabelle/HOL dataset is several orders of
magnitude larger than the largest command of the HOL Light dataset, and
that each of the approximately \(10^3\) largest commands of the
Isabelle/HOL dataset is larger than the largest command of the HOL Light
dataset.
I evaluate different configurations of Dedukti and Kontroli that
correspond to the types of verification introduced in \autoref{cv}.
First, I evaluate \emph{sequential verification}; that is, processing
always at most one command from a single theory. The configurations of
Dedukti and Kontroli that perform sequential verification are called DK
and KO. The configurations DK\(\cap p\) and KO\(\cap p\) perform only
the parsing step of DK and KO. This serves to measure the impact of
parsing on overall performance. Similarly, KO\(\setminus c\) omits the
(type) checking step of KO. This serves as a lower bound for parallel
checking, as will be explained below. Next, I evaluate \emph{concurrent
verification}. The configuration DK\(_{t=n}\) performs theory-concurrent
verification; that is, processing at most \(n\) theories concurrently,
but processing at most one command from every theory at the same time.
When \(n\) is \(\infty\), an unlimited number of theories is processed
concurrently. The remaining configurations perform command-concurrent
verification; that is, processing at most one theory at the same time,
but processing several commands from this theory concurrently.
KO\(_{p=1}\) performs parallel parsing using a single separate parse
thread. KO\(_{c=n}\) performs parallel (type) checking of at most \(n\)
commands simultaneously, using \texttt{Arc}-shared terms with one
checking thread per command. As mentioned above, the runtime of
KO\(\setminus c\) (KO without type checking) serves as lower bound for
the runtime of KO\(_{c=n}\). The \emph{type checking time} of a Kontroli
configuration is the difference between the runtime of the configuration
and the runtime of KO\(\setminus c\).
For the ATP datasets, theory-concurrent verification is trivial because
the theories in these datasets are independent. Therefore, to evaluate
the ATP datasets, I use theory-concurrent verification for both Dedukti
and Kontroli, limiting the number of simultaneously verified theories to
24.
The evaluation system features 32 Intel Broadwell CPUs at 2.2 GHz and 32
GB RAM. Dedukti and Kontroli are compiled with OCaml 4.08.1 and Rust
1.54. I evaluate all datasets ten times and obtain their average running
time as well as the standard deviation.
\begin{figure*}
\includegraphics{tikz/itp-time.tex}
\caption{ITP dataset evaluation, runtime.}
\label{fig:eval-itp}
\end{figure*}
I now discuss the results for the ITP datasets shown in
\autoref{fig:eval-itp}. The sequential Kontroli configuration KO is
always faster than both sequential and concurrent Dedukti configurations
DK and DK\(_{t=\infty}\). Furthermore, the parser of Kontroli
(KO\(\cap p\)) is significantly faster than the parser of Dedukti
(DK\(\cap p\)); on the Isabelle/HOL dataset, it is 4.5x as fast.
Parallel parsing (KO\(_{p=1}\)), however, increases runtime on all
datasets. Like KO, KO\(_{c=1}\) processes only one command at a time;
however, KO\(_{c=1}\) uses \texttt{ATerm} where KO uses \texttt{RTerm},
so it serves to measure the overhead incurred by \texttt{ATerm}. For the
HOL Light dataset, for example, we see that it increases runtime by
28.2\%. Despite this overhead, already the configuration that uses two
threads for type checking (KO\(_{c=2}\)) is faster than the
single-threaded KO configuration on all datasets. To measure how well
type checking parallelises, we compare the type checking times of two
configurations. Type checking parallelises moderately on the Matita and
HOL Light datasets; using \(n = 2\) threads, KO\(_{c=n}\) reduces type
checking time compared to KO by 1.4x on HOL Light and 1.5x on Matita,
and there is no statistically significant improvement between \(n = 4\)
and \(n = 8\) threads. Type checking parallelises best on Isabelle/HOL,
where KO\(_{c=n}\) reduces type checking time compared to KO by 1.6x for
\(n = 2\) threads, 3.3x for \(n = 4\) threads, and 6.6x for \(n = 8\)
threads.\footnote{The factor 6.6x can be obtained by taking the ratio of
the type checking times of KO (\(306 - 87 = 219\)) and KO\(_{c=8}\)
(\(120 - 87 = 33\)). }
\begin{figure*}
\includegraphics{tikz/itp-ram.tex}
\caption{ITP dataset evaluation, peak memory consumption.}
\label{fig:ram-itp}
\end{figure*}
The peak memory consumption of a few configurations is shown in
\autoref{fig:ram-itp}. On the Matita dataset, all Kontroli
configurations consume less memory than Dedukti, and memory usage
slightly increases when increasing the number of checking threads. On
the HOL Light dataset, we have the interesting case that all
configurations consume roughly the same amount of memory, regardless of
concurrency. This can be explained by the relatively small size of the
commands in that dataset. On the Isabelle/HOL dataset, we note two
peculiarities: First, KO uses significantly more memory than DK. As
explained in \autoref{implementation}, this is because Kontroli's parser
keeps the whole input file in memory until the theory is checked,
whereas Dedukti's parser loads the input file as needed and discards the
parts that were parsed. If we subtract the size of the Isabelle/HOL
dataset (a single theory of 2.5 GB) from KO's memory consumption, we
arrive at a memory consumption close to DK. Second, with increasing
number of checking threads, the memory consumption of KO rises
drastically. This can be explained as follows: \autoref{fig:itp-size}
shows that the Isabelle/HOL dataset features larger commands than the
HOL Light dataset. Larger commands tend to take more space and time to
check than smaller commands. The total memory consumption is composed of
the memory consumption of all checking threads. Therefore, when
increasing the number of checking threads, in a dataset with larger
commands such as Isabelle/HOL, a high peak memory consumption is
likelier to occur than in a dataset with smaller commands such as HOL
Light.
\begin{figure}
\includegraphics{tikz/eval-atp.tex}
\caption{ATP dataset evaluation.}
\label{fig:eval-atp}
\end{figure}
For the ATP datasets shown in \autoref{fig:eval-atp}, Kontroli is
faster than Dedukti. Kontroli checks the Zenon dataset in
62.1\% of the time taken by Dedukti.
In conclusion, on the evaluated datasets, Kontroli consistently improves
performance over Dedukti, both in sequential and in concurrent settings.
\hypertarget{related-work}{%
\section{Related Work}\label{related-work}}
The related work can be divided along two criteria, namely size and
concurrency. Work related to small size is mostly about proof checkers,
and work related to concurrency is about proof assistants. To the best
of my knowledge, this work is the first that combines the two aspects by
creating a proof checker that is both concurrent and small.
\hypertarget{proof-checkers-size}{%
\subsection{Proof Checkers \& Size}\label{proof-checkers-size}}
The type-theoretic logical framework LF is closely related to Dedukti,
being based on the lambda-Pi calculus by Harper et
al.~\citep{DBLP:journals/jacm/HarperHP93}. Appel et al.~have created a
proof checker for LF that is similar to this work due to their pursuit
of small size \citep{DBLP:journals/jar/AppelMSV03}. Their proof checker
consists of 803 LOC, where the kernel (dealing with type checking, term
equality, DAG creation and manipulation) consists of only 278 LOC and
the prekernel (dealing with parsing) consists of 428 LOC. The small size
of the proof checker is remarkable considering that it is written in C
and does not rely on external libraries.
LFSC is a logical framework that extends LF with side conditions. It is
used for the verification of SMT proofs, where LFSC acts as a meta-logic
for different SMT provers, similarly to Dedukti acting as meta-logic for
different proof assistants \citep{DBLP:journals/fmsd/StumpORHT13}. Stump
et al.~have created a proof checker \emph{generator} for LFSC that
creates a proof checker from a signature of proof rules
\citep{stump2012}. The size of the generator is 5912 LOC of C++, and the
kernel of a proof checker generated for SAT problems is 600 LOC of
C++.\footnote{Obtained from \url{https://github.com/CVC4/LFSC}, rev.
11fefc6. Measured with \texttt{tokei\ src/\ -e\ CMake*} and
\texttt{lfscc\ -\/-compile-scc\ sat.plf\ \&\&\ tokei\ scccode.*}.}
Checkers is a proof checker based on foundational proof certificates
(FPCs) developed by Chihani et
al.~\citep{DBLP:conf/tableaux/ChihaniLR15}. Unlike Dedukti, which
requires a \emph{translation} of proofs into its calculus, FPCs allow
for the \emph{interpretation} of the proofs in the original proof
calculus (modulo syntactic transformations), given an interpretation for
the original calculus. The proof checker is implemented in
\(\lambda\)Prolog and is the smallest work evaluated in this section,
consisting of only 98 LOC\footnote{Obtained from
\url{https://github.com/proofcert/checkers}, rev. 241b3c8. Measured
with
\texttt{sed\ -e\ \textquotesingle{}/\^{}\$/d\textquotesingle{}\ -e\ \textquotesingle{}/\^{}\%/d\textquotesingle{}\ lkf-kernel.mod\ \textbar{}\ wc\ -l}.}.
Where LFSC generates a proof checker from a signature, Checkers
generates a problem checker from a signature and a proof certificate,
due to relying on \(\lambda\)Prolog for parsing signatures and proof
certificates. Chihani et al.~evaluated Checkers on a set of proofs
generated by E-Prover, which unfortunately permits a comparison with
neither Dedukti nor Kontroli, as neither currently supports E-Prover
proofs.
Metamath is a language for formalising mathematics based on set theory
\citep{megill2019}. There exist several proof verifiers for Metamath,
one of the smallest being written in 308 LOC of Python.\footnote{Obtained
from \url{https://github.com/david-a-wheeler/mmverify.py}, rev.
fb2e141. Measured with \texttt{tokei\ mmverify.py}.} Furthermore,
Metamath can import OpenTheory proofs and thus verify proofs
from HOL Light, HOL4, and Isabelle
\citep{DBLP:journals/jfrea/Carneiro16}.
The \texttt{aut} program is a proof checker for the Automath system
developed by Wiedijk \citep{DBLP:journals/jar/Wiedijk02}. It is written
in C and consists of 3048 LOC. It can verify the formalisation of
Landau's ``Grundlagen der Analysis''.
HOL Light is a proof assistant whose small kernel (396 LOC of OCaml)
qualifies it as a proof checker \citep{DBLP:conf/tphol/Harrison09a}.
However, the code in HOL Light that extends the syntax of its host
language OCaml is comparatively large (2753 LOC).\footnote{Obtained from
\url{https://github.com/jrh13/hol-light}, rev. 4c324a2. Measured with
\texttt{tokei\ fusion.ml} and \texttt{tokei\ pa\_j\_4.xx\_7.xx.ml}.}
Among others, HOL Light has been used to certify SMT
\citep{DBLP:journals/fmsd/StumpORHT13} as well as tableaux proofs
\citep{DBLP:conf/cpp/KaliszykUV15, DBLP:conf/tableaux/0002K19}. Checking
external proofs in a proof assistant also benefits its users, who can
use external tools as automation for their own work and have their
proofs certified.
\hypertarget{proof-assistants-concurrency}{%
\subsection{Proof Assistants \&
Concurrency}\label{proof-assistants-concurrency}}
Concurrent proof checking is nowadays mostly found in interactive
theorem provers. Early work includes the Distributed Larch Prover
\citep{DBLP:conf/rta/VandevoordeK96} and the MP refiner
\citep{DBLP:conf/tphol/Moten98}.
The Paral-ITP project improved parallelism in provers that were
initially designed to be sequentially executed, such as Coq and Isabelle
\citep{DBLP:conf/mkm/BarrasGHRTWW13}. Among others, as part of the
Paral-ITP project, Barras et al.~introduced parallel proof checking in
Coq that resembles this work in the sense that it delegates checking of
opaque proofs \citep{DBLP:conf/itp/BarrasTT15}. However, unlike this
work, Coq checks the opaque proofs using processes instead of threads,
requiring marshalling of data between the prover and the checker
processes.
Isabelle features concurrency on multiple levels: Aside from
concurrently checking both theories and toplevel proofs (similar to
Dedukti and Kontroli), it also concurrently checks sub-proofs.
Furthermore, it executes some tactics in parallel, for example the
simplification of independent subgoals
\citep{wenzel2009, DBLP:conf/itp/Wenzel13}.
Like Isabelle, ACL2 checks theories and toplevel proofs in parallel, but
differs from Isabelle by automatically generating subgoals that are
verified in parallel \citep{DBLP:conf/itp/RagerHK13}. In both Isabelle
and ACL2, threads are used to handle concurrent verification.
\hypertarget{conclusion}{%
\section{Conclusion}\label{conclusion}}
In this work, I presented several techniques to parallelise proof
checking. I introduced a term type that abstracts over the type of
constants and term references, allowing it to be used both sequentially
and concurrently in parsing and checking. I further refined the term
type by reducing the number of pointers, especially improving concurrent
performance. I showed that parallelising reduction using abstract
machines involves replacing several thread-unsafe data types by
thread-safe ones, which adds too much overhead to reduce checking time in
practice. I showed that command-concurrent verification can be achieved
by breaking verification into an inference and a checking operation,
where multiple checking operations can be executed concurrently. This
necessitates thread-safe global contexts and terms. To allow for both
overhead-free sequential and concurrent verification, I created two
versions of the kernel that differ only by the used term type. I showed
that parsing can be parallelised by moving it to a separate thread, from
which the parsed commands are sent to the main thread via a channel. The
overhead of sending commands through a channel is unfortunately so high
that parallel parsing does not improve performance.
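Kontroli itself is written in Rust; purely to illustrate the parser-thread pattern described above, the following Python sketch hands parsed commands from a parser thread to the main thread over a channel. All names here are illustrative, not Kontroli's actual API.

```python
import queue
import threading

def parse(lines, chan):
    """Parser thread: 'parse' each line and send it over the channel."""
    for line in lines:
        chan.put(("cmd", line.strip()))  # stand-in for a parsed command
    chan.put(("done", None))             # signal end of input

def check_all(lines):
    """Main thread: receive parsed commands from the channel and check
    them (here: just collect), overlapping parsing with checking."""
    chan = queue.Queue()
    threading.Thread(target=parse, args=(lines, chan), daemon=True).start()
    checked = []
    while True:
        kind, cmd = chan.get()
        if kind == "done":
            return checked
        checked.append(cmd)

result = check_all(["def a", "def b", "def c"])
```

Note that every command incurs one channel send and one receive; this per-command synchronisation is the overhead referred to above.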
I implemented these techniques in a new proof checker called Kontroli.
Kontroli is written in the programming language Rust, which played a
crucial role in ensuring memory- and thread-safety, while allowing for a
small kernel that can be used for efficient sequential and parallel
checking. The evaluation shows that on all datasets, sequential Kontroli
is faster than sequential and theory-concurrent Dedukti. On the
Isabelle/HOL dataset, the command-concurrent Kontroli speeds up type
checking by 6.6x when using eight threads.
\begin{acks}
I would like to thank François Thiré for the inspiration to write this
article. Furthermore, I would like to thank Gaspard Ferey, Guillaume
Genestier and Gabriel Hondet for explaining to me the inner workings of
Dedukti, and Emilie Grienenberger for providing me with the Dedukti
export of the HOL Light standard library. Finally, I would like to thank
the anonymous CPP reviewers, David Cerna, Thibault Gauthier, Guillaume
Genestier, Emilie Grienenberger, Gabriel Hondet, Fabian Mitterwallner,
and François Thiré for their helpful comments on drafts of this article.
This research was funded in part by the Austrian Science Fund (FWF) {[}J
4386{]}.
\end{acks}
\balance
\section{Introduction}
Since the early days of computer science, there has been significant interest in developing an algorithmic theory of molecular and biological systems~\citep{turing1990chemical}. In distributed computing, \emph{population protocols}~\citep{angluin2006computation} have become a popular model for investigating the collective computational power of large collections of communication-bounded agents
with limited computational capabilities. This model consists of $n$ identical agents, seen as finite state machines, and computation proceeds via pairwise interactions of the agents, which trigger local state transitions. The sequence of interactions is provided by a scheduler, which picks pairs of agents to interact.
Upon every interaction, the selected agents observe each other's states, and then update their local states. The goal is to have the system reach a configuration satisfying a given predicate, while minimising the number of interactions (time complexity) and the number of states per node (space complexity) required by the protocol.
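As a concrete illustration of the model, the following minimal Python sketch (our own toy example, not a protocol from the literature) simulates a one-way epidemic protocol on the complete interaction graph under the uniform stochastic scheduler; the function name and the epidemic rule are illustrative assumptions.

```python
import random

def run_clique(n, steps, seed=0):
    """Simulate a one-way epidemic under the uniform stochastic
    scheduler on the complete graph: in each step a uniformly random
    ordered pair (u, v) interacts, and v copies u's bit if u is
    'infected'.  Returns the number of interactions until all agents
    are infected, or None if the budget runs out."""
    rng = random.Random(seed)
    state = [0] * n
    state[0] = 1  # one initially infected agent
    for t in range(1, steps + 1):
        u, v = rng.sample(range(n), 2)  # a uniformly random pair
        if state[u] == 1:
            state[v] = 1
        if all(state):
            return t
    return None

interactions = run_clique(100, 100_000)
```

On the clique, such an epidemic completes in $\Theta(n \log n)$ interactions with high probability, i.e., $O(\log n)$ parallel time under the convention that parallel time is interactions divided by $n$.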
Early work on population protocols focused on the computational power of the model, i.e., the class of predicates which can be computed by population protocols under various interaction graphs~\citep{angluin2006computation,AAER07}.
More recently, the focus has shifted to understanding complexity thresholds, often in the form of fundamental complexity trade-offs between time and space complexity, e.g.~\citep{angluin2008fast-computation,alistarh2015-fast,doty2018stable,gasieniec2018fast, alistarh2018space-optimal,BEFKKR18,berenbrink2020optimal,gasieniec2020time}; for recent surveys please see~\citep{elsaesser-survey, alistarh-survey}.
This line of work almost exclusively focuses on the \emph{uniform} stochastic scheduler, where each interaction pair is chosen uniformly at random \emph{among all pairs} of agents in the population, and the time complexity of a protocol is measured by the number of interactions needed to solve a task.
This is analogous to having a large \emph{well-mixed} solution of interacting particles, an assumption often used for modelling chemical reactions.
However, many natural systems exhibit spatial structure, and this structure can significantly influence the system dynamics.
Indeed, there is a separation in terms of computational power for population protocols in the clique versus other interaction graphs: connected interaction graphs can simulate adversarial interactions on the clique graph by shuffling the states of the nodes~\citep{angluin2006computation} and population protocols on some interaction graphs can compute a strictly larger set of predicates than protocols on the clique; see e.g.~\citep{aspnes2009introduction} for a survey of computability results.
By comparison, surprisingly little is known about the \emph{complexity} of basic tasks in general interaction graphs under the stochastic scheduler. So far, only a handful of protocols have been analysed on general graphs. Existing analyses tend to be complex, and specialised to specific algorithms on limited graph classes~\citep{draief2012-convergence,cooper2016-fast,MNRS14,mertzios2017determining,BFKMW16}. This is natural: given the intricate dependencies which arise due to the underlying graph structure, the design and analysis of protocols in the spatial setting is understood to be challenging.
\subsection{Contributions}
In this work,
we provide a general approach showing that standard problems in population protocols can be solved \emph{efficiently} under \emph{graphical} stochastic schedulers, by leveraging solutions designed for complete graphs.
Our results are as follows:
\begin{enumerate}
\item We give a general framework for simulating a large class of \emph{synchronous} protocols designed for \emph{fully-connected networks}, in the graphical stochastic population protocol model (see \figureref{fig:pp-model}). Thus, the user can design efficient (and simple to analyse) synchronous algorithms on a clique model, and transport the analysis automatically to the population protocol model on a large class of interaction graphs. For instance, on any $d$-regular graph with edge expansion $\beta > 0$, the resulting overhead in parallel time and state complexity is in the order of $(d/\beta)^2 \cdot \polylog n$.
\item As concrete applications, we show that for any $d$-regular graph with edge expansion $\beta > 0$, there exist protocols for leader election and exact majority that stabilise both in expectation and with high probability\footnote{
The phrase ``with high probability'' (w.h.p.) means that we can choose constants so that the probability that the protocol fails to stabilise is at most $1/n^\lambda$ for any given constant $\lambda > 0$.} in $(d/\beta)^2 \cdot \polylog n$ parallel time, using $(d/\beta)^2 \cdot \polylog n$ states.
\item To complement the results following from the simulation, we also show that, on any graph $G$ with diameter $\diam(G)$ and $m$ edges, leader election can be solved both in expectation and with high probability in $O(\diam(G) \cdot m n^2 \log n)$ parallel time, using a constant-state protocol. This result provides the first running time analysis of the protocol of~\cite{beauquier2013self}.
\end{enumerate}
\begin{figure}[t]
\centering
\includegraphics[page=3,width=\textwidth]{figures.pdf}
\caption{The graphical population protocol model. In each step, a random edge $\{u,v\}$ is selected and the nodes $u$ and $v$ interact (blue nodes). Examples of graph classes covered by our construction: (a) regular high-girth expanders, (b) bipartite complete graphs, (c) toroidal grids. \label{fig:pp-model}}
\end{figure}
\subsection{Technical overview}
Our reduction framework combines several techniques from different areas, and can be distilled down to the following ingredients.
We start by defining a simple \emph{synchronous, fully-connected} model of communication for the $n$ nodes, called the \emph{$k$-token shuffling model}.
This is the model in which the algorithm should be designed and analysed; it is similar to, and in some ways simpler than, the standard population model.
Specifically, nodes proceed in \emph{synchronous} rounds, in which every node $v$ first generates $k$ tokens based on its current state. Tokens are then shuffled uniformly at random among the nodes.
At the end of a round, every node $v$ updates its local state based on its current state, and the tokens it received in the round. \figureref{fig:token} illustrates the model.
This simple model is quite powerful, as it can simulate both \emph{pairwise} and \emph{one-way} interactions between all sets of agents, for well-chosen settings of the parameter~$k$.
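One synchronous round of the model can be sketched as follows; the \texttt{generate}/\texttt{update} rules shown are toy stand-ins of our own, not the protocols developed later in the paper.

```python
import random

def shuffle_round(states, k, generate, update, rng):
    """One synchronous round of the k-token shuffling model: each node
    emits k tokens from its state, all n*k tokens are permuted
    uniformly at random, and each node updates its state from the k
    tokens it receives."""
    tokens = [t for s in states for t in generate(s, k)]
    rng.shuffle(tokens)  # uniform random permutation of all n*k tokens
    return [update(states[v], tokens[v * k:(v + 1) * k])
            for v in range(len(states))]

# Toy instantiation: each node holds a bit, emits k copies of it, and
# takes the majority of the k bits it receives -- a crude
# approximate-majority dynamic, for illustration only.
rng = random.Random(1)
states = [1] * 7 + [0] * 3
for _ in range(20):
    states = shuffle_round(states, 3,
                           lambda s, k: [s] * k,
                           lambda s, recv: int(sum(recv) * 2 > len(recv)),
                           rng)
```

Note that the round conserves the multiset of tokens: every token generated is received by exactly one node.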
Our key technical result is that any algorithm specified in this round-synchronous $k$-token shuffling model can be \emph{efficiently} simulated in the graphical population model.
Although intuitive, formally proving this result, and in particular obtaining bounds on the efficiency of the simulation, is non-trivial.
First, to show that simulating \emph{a single round} of the $k$-token shuffling model can be done efficiently, we introduce a new type of \emph{card shuffling process}~\cite{diaconis1993comparison, wilson2004mixing,caputo2010aldous,jonasson2012interchange}, which we call the $k$-stack interchange process, and analyse its mixing time by linking it to random walks on the symmetric group.
Second, to allow correct and efficient asynchronous simulation of the synchronous token shuffling model, we introduce two new gadgets: (1)~a \emph{graphical} version of \emph{decentralised phase clocks}~\cite{alistarh2018space-optimal, gasieniec2018almost, gasieniec2018fast}, combined with (2)~an \emph{asynchronous} token shuffling protocol, which simulates the $k$-token interchange process in a graphical population protocol.
The latter ingredient is our main technical result, as it requires both efficiently combining the above components, and carefully bounding the probability bias induced by simulating a synchronous model under asynchronous pairwise-random interactions.
Finally, we instantiate this framework to solve exact majority and leader election in the graphical setting. We provide simple token-shuffling protocols for these problems, as well as backup protocols to ensure their correctness in all executions.
\begin{figure}[t]
\centering
\includegraphics[page=2,width=0.9\textwidth]{figures.pdf}
\caption{The synchronous $k$-token shuffling model with 5 nodes for $k=1$ and $k=2$. Rectangles are nodes and the small circles are tokens. In each round, nodes generate $k$ tokens based on their current state. Then all $nk$ tokens are shuffled randomly. After this, nodes update their state based on the vector of $k$ tokens they hold. (a) An execution of a protocol in the 1-token shuffling model. The arrows between tokens represent the random permutation used to shuffle tokens. (b) An execution of a protocol for $k=2$. Each node sends and receives two tokens. \label{fig:token}}
\end{figure}
\subsection{Implications}
Our results imply new and improved upper bounds on the time and state complexity of majority and leader election for a wide range of graph families. In some cases, they improve upon the best known upper bounds for these problems. Please see Table~\ref{table:comparison} for a systematic comparison. Specifically, our results show that:
\begin{itemize}
\item In \emph{sparse} graphs with good expansion properties, such as constant-degree graphs with constant edge expansion (\figureref{fig:pp-model}a), our simulation has polylogarithmic time and state complexity overhead, relative to clique-based algorithms. Thus, good expanders admit fast protocols using polylogarithmic states, despite being sparser than the~clique.
\item In \emph{dense} graphs, we obtain similar bounds whenever $d/\beta \in \polylog n$ holds. This is the case for instance in $d$-dimensional hypercubes with $n=2^d$ nodes,
but also in highly-dense clique-like graphs, such as regular complete multipartite graphs (\figureref{fig:pp-model}b), where the degree and expansion are both $\Theta(n)$.
\item In $D$-dimensional toroidal grids, we get algorithms with $n^{2/D} \polylog n$ parallel time and state complexity. These graphs include cycles (1-dimensional toroidal grids), two-dimensional grids (\figureref{fig:pp-model}c), three-dimensional lattices, and so on.
\end{itemize}
While our protocols guarantee \emph{fast} stabilisation in regular graphs with high expansion, they will stabilise in polynomial expected time in \emph{any connected graph}. The results can be carried over to certain classes of \emph{non-regular} graphs provided that they are not highly irregular and have high expansion; we discuss this in Section~\ref{sec:conclusions}, and provide examples in Appendix~\ref{app:non-regular}.
\begin{table}[t]
\centering
\small
\begin{tabular}{@{}llllll@{}}
\toprule
Graphs & Task & States & Parallel time & Ref. & Note \\ \midrule
cliques & EM & 4 & $O(n \log n)$ & \cite{draief2012-convergence} & $\Omega(n)$ parallel time necessary~\cite{alistarh2017time}. \\
& EM & $O(\log n)$ & $\Theta(\log n)$ & \cite{doty2020majority} & Optimal for certain protocols~\cite{alistarh2018space-optimal}. \\
& LE & $2$ & $\Theta(n)$ & \cite{doty2018stable} & Optimal $O(1)$-state protocol. \\
& LE & $\Theta(\log \log n)$ & $\Theta(\log n)$ & \cite{berenbrink2020optimal} & Lower bounds in~\cite{alistarh2017time,sudo2020leader}. \\
\midrule
connected & EM & $4$ & $\poly(n)$ & \cite{draief2012-convergence,BFKMW16} & Various bounds (*) \\
& LE & $6$ & $O(\diam(G) \cdot m n^2 \log n)$ & {\bf new} & Complexity analysis of \cite{beauquier2013self}. \\
\midrule
$d$-regular & EM & $(d/\beta)^2 \cdot \polylog n$ & $(d/\beta)^2 \cdot \polylog n$ & {\bf new} & Also stabilises in non-reg.\ graphs. \\
& LE & $(d/\beta)^2 \cdot \polylog n$ & $(d/\beta)^2 \cdot \polylog n$ & {\bf new} & Also stabilises in non-reg.\ graphs. \\
\bottomrule
\end{tabular}
\caption{Protocols for exact majority (EM) and leader election (LE) for different graph classes. The state complexity is the number of states used by the protocol. The parallel time column gives the expected parallel time (expected number of steps divided by $n$) to stabilise. (*) In~\cite{draief2012-convergence}, the running time of the protocol is bounded by the initial discrepancy in the inputs and the spectral properties of the contact rate matrix; bounds in terms of $n$ are only given for select graph classes (paths, cycles, stars, random graphs and cliques). No sublinear in $n$ bounds on parallel time are given~in~\cite{draief2012-convergence}.}
\label{table:comparison}
\end{table}
It is known that, in the clique setting, constant-state protocols are necessarily slower than protocols with super-constant states~\cite{doty2018stable, alistarh2018space-optimal}.
Our results suggest the existence of a similar complexity gap in the graphical setting. Specifically, on $d$-regular graphs with good expansion, such that $d/\beta \in \polylog n$, we provide polylogarithmic-time protocols for both leader election and exact majority.
This opens a significant complexity gap relative to known constant-state protocols on graphs.
For instance, the 4-state exact majority protocol for general graphs~\cite{draief2012-convergence} requires $\Omega(n)$ parallel time even in regular graphs with high expansion, if node degrees are $\Theta(n)$. (A simple example is the complete bipartite graph given in \figureref{fig:pp-model}b.) Yet, our protocols guarantee stabilisation in only $\polylog n$ parallel time in both low and high degree graphs, as long as $d/\beta$ is at most $\polylog n$.
\subsection{Roadmap}
We overview related work in \sectionref{sec:rw}. \sectionref{sec:prelim} defines the model and notation, while Sections \ref{sec:interchange} to \ref{sec:applications} develop our framework, from shuffling processes, to the simulation, and applications. \sectionref{sec:constant-le} gives an analysis for a constant-state protocol for leader election that stabilises in polynomial expected time in any connected graph.
We conclude in \sectionref{sec:conclusions} by discussing some open problems.
\section{Related Work}
\label{sec:rw}
\paragraph{Computability for graphical population protocols.}
A variant of the graphical setting was already considered in the foundational work of Angluin et al.~\cite{angluin2006computation}, which also uses a state shuffling approach. However, the resulting line of work focused on \emph{computational power} in the case where the number of states per node is constant~\cite{angluin2006computation,angluin2006stably, angluin2008self, AAER07, CMNPS11, blondin2018large}.
A key difference is that we aim to simulate pairwise interactions under the uniform stochastic scheduler, as fast protocols in the clique require that pairwise interactions are uniformly random~\cite{elsaesser-survey,alistarh-survey}. Thus, one of the main technical challenges is to devise an \emph{efficient} shuffling procedure that guarantees that the simulated interactions are (almost) uniform.
In addition, self-stabilising population protocols on graphs have been investigated particularly in the context of leader election~\cite{angluin2008self,beauquier2013self,yokota2020time,chen2019self,chen2020ssle}.
While the problem is not always solvable on all graph families~\cite{angluin2008self}, Chen and Chen~\cite{chen2019self} gave a constant-state protocol for leader election with exponential stabilisation time in directed cycles and 2-dimensional toroidal grids. Later, they gave a protocol for $d$-regular graphs using $O(d^{12})$ states~\cite{chen2020ssle}.
Beauquier, Blanchard and Burman~\cite{beauquier2013self} noted that without the requirement of self-stabilisation,
leader election can be solved on every connected graph by a constant-state protocol. We provide the first running-time upper bounds for this protocol here.
Please see Table~\ref{table:comparison} for additional references, and bound comparison.
\paragraph{Complexity in the clique model.} A parallel line of work has focused on determining the fundamental space-time trade-offs for key tasks, such as majority and leader election, when the interaction graph is a \emph{clique}~\cite{doty2018stable,draief2012-convergence,MNRS14,alistarh2017time,alistarh2018space-optimal, BKKO18,BEFKKR18,berenbrink2020optimal,gasieniec2020time}. In this case, tight or almost-tight complexity trade-offs are now known for these problems~\cite{berenbrink2020optimal, gasieniec2020time, doty2020majority, alistarh2018space-optimal}.
The vast majority of the work on complexity has focused on the clique case~\cite{elsaesser-survey,alistarh-survey}. Two natural justifications for this choice are that: (1)~the clique is a good approximation for well-mixed solutions, and (2)~the analysis of population protocols can be difficult enough even without additional complications due to graph structure.
Bounds on non-complete graphs have been studied for exact~\cite{draief2012-convergence} and approximate majority~\cite{MNRS14,mertzios2017determining}, with some recent work considering \emph{plurality consensus}~\cite{cooper2013coalescing,cooper2016-fast, BFKMW16} in a related model.
The recent survey of~\cite{elsaesser-survey} points out that running time on general graphs is poorly understood, and sets this as an open question. We take a first step towards addressing this gap.
\paragraph{Interacting particle systems.}
Another related line of work investigated the dynamics of interacting particle systems on graphs, e.g.~\cite{aldous-fill-2014}. However, in this context dynamics are often assumed to be round-synchronous,
which allows the use of more powerful techniques, related to independent random walks on graphs~\cite{lovasz1993random}.
Cooper, Els\"asser, Ono and Radzik~\cite{cooper2013coalescing} analysed the coalescence time of independent random walks on a graph in terms of the expansion properties of the graph, where each node initially holds a unique particle, and in each step particles randomly move to another node. Whenever two particles meet, they coalesce into a single one, which continues its walk.
We also employ token-based protocols on graphs, but in our case tokens are shuffled between nodes instead of coalescing.
Token-based processes have also been used to implement efficient, randomised rumour spreading protocols. For example, Berenbrink, Giakkoupis and Kling~\cite{berenbrink2018tight} analysed the cover time of a synchronous coalescing-branching random walk on regular graphs. Similarly to our work, they use conductance to bound the behaviour of this process in regular graphs.
In this work, we use token-based population protocols on graphs, where tokens are shuffled between nodes during an interaction and, instead of coalescing, may also interact in other ways.
\paragraph{Plurality consensus on expanders.} In plurality consensus, there are $k>1$ opinions and the task is to agree on the opinion supported by the most nodes. Berenbrink, Friedetzky, Kling, Mallmann-Trenn and Wastell~\cite{BFKMW16} present a protocol for the plurality consensus problem in a synchronous pull-based interaction model. Their protocol also circulates tokens, and samples their count periodically (after mixing) to estimate opinion counts, running into the issue that the token movements are correlated. The authors provide a generalisation of a result by Sauerwald and Sun~\cite{SS} in order to show that the joint token distribution is negatively correlated, and therefore the token counting mechanism~concentrates.
In this work, we also employ a token exchange protocol, and encounter non-trivial correlation issues.
However, we resolve these issues differently: we characterise the distribution of the token interactions using the $k$-stack interchange process, and bound its total variation distance relative to the uniform distribution, showing that the two distributions are indistinguishable in polynomial time with high probability.
More generally, the goal of our construction is different, as we aim to provide a general framework to efficiently simulate pairwise random node interactions.
\paragraph{Shuffling processes.} Our results also connect to the work on card shuffling processes, which have a long and rich history~\cite{diaconis1981generating,aldous1983random,diaconis1993comparison,wilson2004mixing,caputo2010aldous,dieker2010interlacings,jonasson2012interchange,oliveira2013mixing}. While many of these processes are simple to describe, they are often surprisingly challenging to analyse. Here, we focus on key results related to the interchange process, where the cards are placed on the nodes of a graph and shuffling is performed by randomly exchanging cards between adjacent nodes. We note that much of the work has aimed to identify sharp bounds on the mixing time for the interchange process on various graphs.
Diaconis and Shahshahani~\cite{diaconis1981generating} gave sharp bounds of the order $\Theta(n \log n)$ on the mixing time of the random transpositions shuffle, i.e., interchange process on the clique. Aldous~\cite{aldous1983random} established that the mixing time of the interchange process on the path is bounded by $\Omega(n^3)$ and $O(n^3 \log n)$; later Wilson~\cite{wilson2004mixing} showed that the mixing time is in fact $\Theta(n^3 \log n)$. Diaconis and Saloff-Coste~\cite{diaconis1993comparison} developed a powerful technique for upper bounding the mixing time of a random walk on a finite group by comparing it to another walk with known behaviour via certain Dirichlet forms. Our analyses of the $k$-stack interchange process also rely on this comparison technique.
A decade later, Wilson~\cite{wilson2004mixing} gave a general technique for proving lower bounds for many shuffling processes. In particular, he showed that the mixing time on the two-dimensional $\sqrt{n} \times \sqrt{n}$ grid is $\Theta(n^2 \log n)$, and that on the hypercube it is $\Omega(n \log^2 n)$. Subsequently, Jonasson~\cite{jonasson2012interchange} gave additional upper and lower bounds on the interchange process on various graphs, showing among others that the mixing time is at most $O(n \log^3 n)$ on the hypercube and constant-degree expanders, and at most $O(\rho m n \log n)$ on any $m$-edge graph with radius $\rho$.
For a further exposition of this area, we refer to~\cite{levin2017markov}.
In this work, we introduce and analyse a generalisation of the interchange process, called the $k$-stack interchange process, where each node holds $k > 0$ cards instead of one.
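For intuition, the classic interchange process (the $k=1$ case, one card per node) can be simulated as follows; this sketch is ours, and it does not reproduce the paper's $k$-stack update rule.

```python
import random

def interchange(adj, steps, rng):
    """Classic interchange process: cards start one per node; at each
    step a uniformly random edge is chosen and the cards at its two
    endpoints are swapped.  The resulting arrangement is a random
    permutation of the cards; its distribution approaches uniform as
    steps grow (at a rate given by the mixing time)."""
    n = len(adj)
    cards = list(range(n))  # card i starts on node i
    edges = [(u, v) for u in range(n) for v in adj[u] if u < v]
    for _ in range(steps):
        u, v = rng.choice(edges)
        cards[u], cards[v] = cards[v], cards[u]
    return cards

# Interchange on a 6-cycle; the final arrangement is a permutation.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
perm = interchange(adj, 1000, random.Random(2))
```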
\paragraph{Clique emulation.}
The general idea of clique emulation over a general communication graph is a classic one, and has been used in other stronger models of distributed computing, e.g.~\cite{avin2017distributed, ghaffari2017distributed}.
For example, Ghaffari, Kuhn and Su~\cite{ghaffari2017distributed} utilise parallel random walks on graphs to come up with an efficient permutation routing scheme for the synchronous CONGEST model, with running time bounded by the mixing time of the random walk on the graph. In contrast, we bound the running times of our asynchronous protocols using the mixing time of the $k$-stack interchange process.
\section{Preliminaries}
\label{sec:prelim}
\paragraph{Graphs.}
A graph $G = (V,E)$ is $d$-regular if every node $v \in V$ is adjacent to exactly $d$ other nodes.
The edge boundary of a set $S \subseteq V$ is the set $\partial S \subseteq E$ of edges with exactly one endpoint in~$S$. The \emph{edge expansion} of the graph $G$~is defined as
\[
\beta = \min \left\{ \frac{|\partial S|}{|S|} : S \subseteq V,\ 0 < |S| \le n/2 \right\}.
\]
If $G$ is regular, its \emph{conductance} is $\beta/d$.
Unless otherwise mentioned, all graphs are assumed to be regular and connected.
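For small graphs the edge expansion can be computed directly from the definition above. The following Python sketch brute-forces $\beta$ over all nonempty sets $S$ with $|S| \le n/2$ (exponential in $n$, so intended only as an illustration of the definition, not as a practical algorithm):

```python
from itertools import combinations

def edge_expansion(n, edges):
    """Brute-force beta = min |boundary(S)| / |S| over all nonempty
    S with |S| <= n/2 (exponential; only for tiny graphs)."""
    best = float("inf")
    for size in range(1, n // 2 + 1):
        for S in combinations(range(n), size):
            S = set(S)
            # count edges with exactly one endpoint in S
            boundary = sum(1 for (u, v) in edges if (u in S) != (v in S))
            best = min(best, boundary / len(S))
    return best

# 4-cycle: a single vertex cuts 2 edges (ratio 2), an adjacent pair
# cuts 2 edges (ratio 1), so beta = 1.
cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(edge_expansion(4, cycle4))  # 1.0
```

For the $4$-cycle, which is $2$-regular, this gives conductance $\beta/d = 1/2$.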
\paragraph{Probability distributions.}
Let $E$ be a finite set. We say $\mu \colon E \to [0,1]$ is a probability distribution on $E$ if $\sum_{x \in E} \mu(x) = 1$ holds.
For $A \subseteq E$ we write $\mu(A) = \sum_{x \in A} \mu(x)$.
The \emph{uniform distribution} on $E$ is the distribution $\nu$ defined by $\nu(x) = 1/|E|$. The \emph{support} of $\mu$ is the set $\{ x : \mu(x) > 0 \}$.
The \emph{total variation distance} between distributions $\mu_1$ and $\mu_2$ on $E$~is
\[
\tvnorm{ \mu_1 - \mu_2} = \frac{1}{2} \sum_{x \in E} | \mu_1(x) - \mu_2(x) | = \max_{A \subseteq E} | \mu_1(A) - \mu_2(A)|.
\]
We say that $\mu$ is $\varepsilon$-uniform on $E$ if $\tvnorm{\mu - \nu} \le \varepsilon$.
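The total variation distance and the $\varepsilon$-uniformity test can be sketched directly from the first equality above (a small illustrative Python snippet; the distributions here are arbitrary examples, not from the text):

```python
def tv_distance(mu1, mu2):
    """Total variation distance between two distributions on a finite
    set, each given as a dict mapping elements to probabilities."""
    support = set(mu1) | set(mu2)
    return 0.5 * sum(abs(mu1.get(x, 0.0) - mu2.get(x, 0.0)) for x in support)

E = ["a", "b", "c", "d"]
uniform = {x: 1 / len(E) for x in E}
mu = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}
print(tv_distance(mu, uniform))  # ~0.2, so mu is eps-uniform for eps >= 0.2
```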
\paragraph{Permutations and the symmetric group.}
Let $N > 0$ be a positive integer and $[N] = \{0, \ldots, N-1\}$.
A permutation on $[N]$ is a bijection from $[N]$ to $[N]$.
The symmetric group $S_N$ over $[N]$ is the group consisting of the set of all permutations on $[N]$ with function composition as the group operation and identity element $\operatorname{id}$ defined by $\operatorname{id}(i) = i$.
The inverse $x^{-1}$ of an element $x \in S_N$ is the map satisfying $x^{-1} \cdot x = x \cdot x^{-1} = \operatorname{id}$.
A \emph{transposition} $(i~j) \in S_N$ of $i$ and $j$ is the permutation that swaps the elements $i$ and $j$, but leaves other elements in place.
We say that a set $H \subseteq S_N$ generates $S_N$ if every element of $S_N$ can be expressed as a finite product of elements in $H$ and their inverses. We use $\cdot$ and $\circ$ interchangeably to denote function composition.
Let $\mu$ be a symmetric probability distribution on $S_N$, i.e., $\mu(x) = \mu(x^{-1})$. The \emph{random walk on $S_N$} with increment distribution $\mu$ is a discrete-time Markov chain with state space $S_N$. In each step, a random element $x$ is sampled according to $\mu$ and the chain moves from state $y$ to state $xy$.
Thus, the probability of transitioning from state $x$ to state $yx$ is $\mu(y)$.
The holding probability of the random walk is $\alpha = \mu(\operatorname{id})$. The following remark summarises some useful properties of such random walks; see e.g.~\cite{levin2017markov} for proofs.
\begin{remark}
Let $\mu$ be an increment distribution for a random walk on $S_N$.
\begin{enumerate}[noitemsep]
\item The uniform distribution $\nu$ on $S_N$ is a stationary distribution for the random walk.
\item The random walk is reversible if and only if $\mu$ is symmetric.
\item The random walk is irreducible if and only if the support of $\mu$ generates $S_N$.
\item If $\mu(\operatorname{id}) > 0$, then the random walk is aperiodic.
\end{enumerate}
\end{remark}
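As a concrete instance, the random transpositions walk of Diaconis and Shahshahani~\cite{diaconis1981generating} can be simulated as follows (a Python sketch; the increment distribution picks $i,j$ uniformly and applies $(i~j)$, so $\mu(\operatorname{id}) = 1/N > 0$ and the walk is aperiodic, and the transpositions generate $S_N$, so it is irreducible):

```python
import random

def random_transposition_step(perm):
    """One step of the random transpositions walk on S_N: pick i, j
    uniformly at random and swap positions i and j (i = j yields the
    identity increment, giving holding probability 1/N)."""
    N = len(perm)
    i, j = random.randrange(N), random.randrange(N)
    perm = list(perm)
    perm[i], perm[j] = perm[j], perm[i]
    return tuple(perm)

random.seed(0)
state = tuple(range(6))  # start at the identity
for _ in range(100):
    state = random_transposition_step(state)
print(sorted(state) == list(range(6)))  # the state is always a permutation
```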
\paragraph{Mixing times.}
Let $\nu$ be the uniform distribution on $S_N$ and let $p^{(t)}$ be the probability distribution over states of the chain after $t$ steps.
Following~\cite{diaconis1993comparison}, we define the $\ell^s$-norm and the normalised $\ell^s$-distance to stationarity for $s > 0$ as:
\[
\left\| \mu \right\|_s = \left( \sum_{x} |\mu(x)|^s \right)^{1/s} \quad \textrm{ and } \quad d_s(t) = |S_N|^{1-1/s} \cdot \| p^{(t)} - \nu \|_s.
\]
The total variation distance and the normalised distances satisfy
$2 \tvnorm{ p^{(t)} - \nu} = d_1(t) \le d_2(t)$,
where the latter inequality follows from the Cauchy-Schwarz inequality. We define the $\varepsilon$-mixing time as
$ \tau(\varepsilon) = \min \{ t : d_1(t) \le 2\varepsilon \}$.
We refer to the value $\tau_\textrm{mix}=\tau(1/2)$ as the \emph{mixing time} of the walk.
Note that $\tau(\varepsilon) \le \lceil \log_2 \varepsilon^{-1} \rceil \cdot \tau_\textrm{mix}$.
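For very small $N$ the mixing time $\tau(\varepsilon)$ can be computed exactly by convolving the walk's distribution step by step. The following Python sketch does this for the random transpositions increment distribution on $S_3$ (the stopping rule is $d_1(t) = 2\tvnorm{p^{(t)} - \nu} \le 2\varepsilon$, matching the definition above; feasibility is limited to tiny $N$ since $|S_N| = N!$):

```python
from itertools import permutations

def exact_mixing(N, mu, eps=0.5):
    """Exact eps-mixing time of the random walk on S_N with increment
    distribution mu (dict: permutation tuple -> probability)."""
    group = list(permutations(range(N)))
    uniform = 1 / len(group)
    compose = lambda x, y: tuple(x[y[i]] for i in range(N))  # x after y
    p = {g: (1.0 if g == tuple(range(N)) else 0.0) for g in group}
    t = 0
    # stop once the total variation distance drops to eps
    while 0.5 * sum(abs(p[g] - uniform) for g in group) > eps:
        q = {g: 0.0 for g in group}
        for x, px in mu.items():       # chain moves from y to x.y
            for y, py in p.items():
                q[compose(x, y)] += px * py
        p, t = q, t + 1
    return t

# random transpositions on [3]: pick i, j uniformly, swap them
N = 3
mu = {}
for i in range(N):
    for j in range(N):
        x = list(range(N)); x[i], x[j] = x[j], x[i]
        x = tuple(x)
        mu[x] = mu.get(x, 0.0) + 1 / N**2
print(exact_mixing(N, mu))  # 1: one step already brings TV down to 1/3
```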
\paragraph{Tasks.}
Let $\Sigma$ and $\Gamma$ be nonempty finite sets of input and output labels, respectively.
A task $\Pi$ on a set $V$ of $n$ nodes is a function $\Pi$ that maps any input labelling $z \colon V \to \Sigma$ to a set $\Pi(z) \subseteq \Gamma^V$ of feasible output labellings.
If $\Pi(z) = \emptyset$, then we say that $z$ is an infeasible input. We focus on two tasks:
\begin{itemize}
\item In leader election, the input is the constant function $z(v) = 1$ and the output labelling~$z'$ is feasible iff there exists $v \in V$ such that $z'(v) = 1$ and $z'(u) = 0$ for all $u \neq v$.
That is, exactly one node should output 1 and all others should output 0.
\item In the majority task, the inputs are given by $z \colon V \to \{0,1\}$ and $z' \in \Pi(z)$ if $z'(v) = b$ for every $v \in V$, where $b$ is the input value held by the majority of the nodes. As is conventional, an input with equally many zeros and ones is taken to be infeasible.
\end{itemize}
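The feasibility predicates for the two tasks above are simple enough to state as code (an illustrative Python sketch of the definitions, with $\Sigma = \Gamma = \{0,1\}$ and labellings represented as lists):

```python
def leader_election_feasible(z_out):
    """Feasible iff exactly one node outputs 1 and all others output 0."""
    return sum(z_out) == 1 and all(b in (0, 1) for b in z_out)

def majority_feasible(z_in, z_out):
    """All nodes must output the majority input bit; ties are infeasible."""
    ones = sum(z_in)
    if 2 * ones == len(z_in):
        return False  # equally many zeros and ones: infeasible input
    b = 1 if 2 * ones > len(z_in) else 0
    return all(v == b for v in z_out)

print(leader_election_feasible([0, 1, 0, 0]))   # True
print(majority_feasible([1, 1, 0], [1, 1, 1]))  # True
print(majority_feasible([1, 0], [1, 1]))        # False (tie is infeasible)
```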
\paragraph{Graphical stochastic population protocols.}
Let $G = (V,E)$ be a graph. In the graphical stochastic population model, abbreviated as $\mathsf{PP}(G)$, the computation proceeds \emph{asynchronously}, where in each time step $t > 0$:
\begin{enumerate}[noitemsep]
\item a stochastic scheduler picks uniformly at random a pair $e_t = (u,v)$ of neighbouring nodes,
\item the nodes $u$ and $v$ read each other's states and update their local states.
\end{enumerate}
As is common in population protocols, we assume that the node pairs are \emph{ordered}, which will allow us to distinguish the two nodes: node $u$ is called the \emph{initiator} and $v$ is the \emph{responder}.
We assume that nodes have access to independent and uniform random bits. Specifically, upon each interaction, both $u$ and $v$ are provided with a single random bit each. We note that this assumption is common in the context of population protocols, e.g.~\cite{gasieniec2018fast}, and can be justified practically by the fact that chemical reaction network (CRN) implementations can directly obtain random bits given the structure of their interactions~\cite{brijder2019computing}.
Formally, a protocol for a task $\Pi$ is a tuple $\vec A = (f, \ell_\textrm{in}, \ell_\textrm{out})$, where $f \colon S \times \{0,1\} \times S \times \{0,1\} \to S \times S$ is the state transition function and $S$ is the set of states, $\ell_\textrm{in} \colon \Sigma \to S$ maps inputs to initial states, and $\ell_\textrm{out} \colon S \to \Gamma$ maps states to outputs. A configuration is a map $x \colon V \to S$ and $x_0 = \ell_\textrm{in} \circ z$ is the initial configuration on input $z$.
An asynchronous schedule is a random sequence $(e_t )_{t \ge 1}$ of the interaction pairs. An execution is the sequence $(x_t)_{t \ge 0}$ of configurations given by
\[
x_{t+1}(u), x_{t+1}(v) = f\left( x_t(u), q_{t+1}(u), x_t(v), q_{t+1}(v) \right) \textrm{ and } x_{t+1}(w) = x_t(w) \textrm{ for } w \in V \setminus \{u,v\},
\]
where $(u,v) = e_{t+1}$ and $q_{t+1}(u) \in \{0,1\}$ is the random bit provided to the node $u$ during the interaction. The output of the protocol at step $t$ is given by $z'_t = \ell_\textrm{out} \circ x_t$.
We say that $\vec A$ stabilises on input $z$ by step $T$ if
$ z'_{t+1} = z'_t$ and $z'_t \in \Pi(z)$
holds for all $t \ge T$. Moreover, $\vec A$ solves the task $\Pi$ with probability at least $p$ in $T(\vec A)$ steps if the protocol stabilises by step $T(\vec A)$ on any feasible input with probability at least $p$. The state complexity of the protocol is $S(\vec A) = |S|$, i.e., the number of states used by the protocol.
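The scheduler and execution semantics above can be sketched in a few lines of Python. The toy transition function below (pairwise-elimination leader election, where two interacting leaders keep only the initiator as leader) is purely illustrative and is not a protocol from this paper; it simply exercises the $f \colon S \times \{0,1\} \times S \times \{0,1\} \to S \times S$ interface:

```python
import random

def simulate_pp(edges, f, x0, steps, seed=0):
    """Simulate the PP(G) scheduler: each step picks a uniformly random
    ordered pair of neighbours and applies the transition function f,
    feeding each endpoint one fresh random bit."""
    rng = random.Random(seed)
    pairs = [(u, v) for (u, v) in edges] + [(v, u) for (u, v) in edges]
    x = dict(x0)  # configuration: node -> state
    for _ in range(steps):
        u, v = rng.choice(pairs)  # u is the initiator, v the responder
        x[u], x[v] = f(x[u], rng.getrandbits(1), x[v], rng.getrandbits(1))
    return x

# Toy pairwise-elimination leader election (illustrative, not from the
# paper): when two leaders meet, only the initiator stays a leader.
def elim(su, qu, sv, qv):
    return (su, 0) if su == 1 and sv == 1 else (su, sv)

n = 8
clique = [(u, v) for u in range(n) for v in range(u + 1, n)]
final = simulate_pp(clique, elim, {v: 1 for v in range(n)}, steps=500)
print(sum(final.values()))  # with high probability a single leader remains
```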
\paragraph{Synchronous token protocols.}
In the synchronous $k$-token shuffling model, we assume that there are $n$ agents which communicate in a round-based fashion using \emph{tokens}. In each round,
\begin{enumerate}[noitemsep]
\item every node $v$ generates exactly $k$ tokens based on its current state,
\item all $nk$ tokens are shuffled uniformly at random so that each node gets exactly $k$ tokens,
\item every node $v$ updates its local state based on its current state and the $k$ tokens it received.
\end{enumerate}
Let $X$ be the set of states a node can take and $Y$ be a set of distinct token types.
An algorithm in the token shuffling model is a tuple $\vec B = (f,g,\ell_\textrm{in}, \ell_{\textrm{out}})$. The map $f \colon X \times Y^k \to X$ is a state transition function, and $g \colon X \to Y^k$ determines which tokens each node creates at the start of each round.
As before, $\ell_{\textrm{in}} \colon \Sigma \to X$ maps input values to initial states and $\ell_{\textrm{out}} \colon X \to \Gamma$ maps the state of a node onto an output value. The initial configuration on input $z$ is $x_0 = \ell_{\textrm{in}} \circ z$.
A \emph{synchronous schedule} is a sequence $(\sigma_r)_{r \ge 1}$, where the permutation $\sigma_r \in S_{nk}$ describes how the tokens are shuffled in round~$r$.
For any $y \colon [nk] \to Y$, we let $y(v_0, \ldots, v_{k-1}) = (y(v_0), \ldots, y(v_{k-1}))$. A synchronous execution induced by $(\sigma_r)_{r \ge 1 }$ on input $z$
is defined by
\[
y_{r+1}(v_0, \ldots, v_{k-1}) = (g \circ x_r)(v) \quad \textrm{ and } \quad x_{r+1}(v) = f\left( x_r(v), \left(y_{r+1} \circ \sigma_{r+1}\right)\left(v_0, \ldots, v_{k-1}\right) \right),
\]
where
$y_{r}(v_0, \ldots, v_{k-1})$ and $(y_{r} \circ \sigma_{r})(v_0, \ldots, v_{k-1})$, respectively, are the $k$ tokens generated and received by node~$v$ during round $r$.
We assume the uniform synchronous scheduler, which picks each permutation $\sigma_r$ independently and uniformly at random from the set of all permutations $S_{nk}$. The output of node $v$ at the end of round $r$ is $z'_r(v) = (\ell_{\textrm{out}} \circ x_r)(v)$.
The synchronous algorithm $\vec B$ stabilises on input $z$ in $R$ rounds if $z'_{r+1} = z'_r$ and $ z'_r \in \Pi(z)$ hold for all $r \ge R$. The algorithm solves the task $\Pi$ with probability at least $p$ in $R$ rounds if it stabilises in $R$ rounds on any feasible input with probability at least $p$.
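One round of the $k$-token shuffling model can be sketched as follows. The concrete dynamics used below (each node emits $k$ copies of a counter and adopts the maximum it sees) are an illustrative placeholder, not an algorithm from this paper; they only exercise the interface of $f$ and $g$:

```python
import random

def shuffle_round(states, f, g, k, rng):
    """One round of the synchronous k-token shuffling model: every node
    generates k tokens via g, all nk tokens are permuted uniformly at
    random, and each node updates its state via f on the k tokens
    landing in its slots."""
    tokens = [t for s in states for t in g(s)]  # the nk generated tokens
    rng.shuffle(tokens)                         # uniform sigma in S_{nk}
    return [f(s, tuple(tokens[v * k:(v + 1) * k]))
            for v, s in enumerate(states)]

# Illustrative dynamics (not from the paper): each node holds a counter,
# emits k copies of it, and adopts the maximum counter it receives.
rng = random.Random(1)
k = 3
states = [0, 0, 0, 7, 0]
for _ in range(4):
    states = shuffle_round(states, lambda s, ts: max((s,) + ts),
                           lambda s: (s,) * k, k, rng)
print(states)  # the maximum value spreads; eventually all nodes hold 7
```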
\section{Shuffling on graphs: the \texorpdfstring{$k$}{k}-stack interchange process}\label{sec:interchange}
We now describe a shuffling process on graphs, which we call the \emph{$k$-stack interchange process}. This process will be useful in our analysis, and is a variant of the classic graph interchange process, e.g.~\cite{dieker2010interlacings, jonasson2012interchange}.
We analyse its mixing time using the path comparison method of Diaconis and Saloff-Coste~\cite{diaconis1993comparison}, leveraging a classical flow result of Leighton and Rao~\cite{leighton1999multicommodity}.
\paragraph{\boldmath The $k$-stack interchange process.}
Let $G = (V,E)$ be a graph with $n$ vertices $\{0, \ldots, n-1\}$ and let $N = kn$ for some $k > 0$.
Assume each node of $G$ holds a stack of exactly $k$ cards, and consider the shuffling process where,
in every time step, one of the following actions is taken:
\begin{enumerate}[noitemsep]
\item with probability $1/2$, move the top card of a random node to the bottom of its stack,
\item with probability $1/4$, choose a random edge $\{u,v\}$ and swap the top cards of $u$ and $v$,
\item with probability $1/4$, do nothing.
\end{enumerate}
We refer to this process as the \emph{$k$-stack interchange process on $G$}. The special case of $k=1$ is the classic interchange process on $G$ with holding probability $3/4$, as the first rule does not do anything on stacks of size 1. For $k > 1$, the holding probability will be $1/4$. Instances of the process for $k=1$ and $k=2$ are illustrated in \figureref{fig:interchange}.
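A single step of the process can be sketched directly from the three rules above (a Python illustration; stacks are lists with index $0$ as the top card):

```python
import random

def interchange_step(stacks, edges, rng):
    """One step of the k-stack interchange process: with prob. 1/2 rotate
    a random node's stack (top card to bottom), with prob. 1/4 swap the
    top cards across a uniformly random edge, with prob. 1/4 do nothing."""
    r = rng.random()
    if r < 0.5:
        s = stacks[rng.randrange(len(stacks))]
        s.append(s.pop(0))              # move top card to the bottom
    elif r < 0.75:
        u, v = rng.choice(edges)
        stacks[u][0], stacks[v][0] = stacks[v][0], stacks[u][0]
    return stacks                       # with prob. 1/4: no change

# 2-stack process on a 4-cycle: node v starts with cards 2v and 2v+1.
rng = random.Random(0)
stacks = [[2 * v, 2 * v + 1] for v in range(4)]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
for _ in range(200):
    interchange_step(stacks, edges, rng)
print(sorted(c for s in stacks for c in s))  # all N = 8 cards preserved
```

Note that for $k = 1$ the first rule leaves the configuration unchanged, which is why the holding probability becomes $3/4$ in that case.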
\begin{figure}[t]
\centering
\includegraphics[page=4,width=0.85\textwidth]{figures.pdf}
\caption{Interchange dynamics on a 4-cycle. In each step, blue cards are swapped. Top row: The 1-stack interchange process. Bottom row: The 2-stack interchange process. In each step, a randomly selected node either moves its top card to the bottom of its stack or exchanges it with the top card of a randomly selected neighbour.
\label{fig:interchange}}
\end{figure}
\begin{restatable}{theorem}{mixtheorem}\label{thm:d-beta-mixing}
Let $G$ be a $d$-regular graph with edge expansion $\beta > 0$. For any constant $k>0$, the mixing time of the $k$-stack interchange process on $G$ is
$O\left( \left(d/\beta\right)^2 n \log^3 n \right)$.
\end{restatable}
We prove this theorem in
\section{Introduction}
\label{sec:intro}
We consider finite, simple, undirected graphs, and we refer to \cite{BM08}
for terminology and notation not defined here.
We let $N(x)$ denote the set of all vertices adjacent to vertex $x$ in a given graph.
Considering a graph $G$ and a set $S$ of its vertices,
we recall that a subgraph of $G$
\emph{induced by $S$} is simply the graph obtained from $G$
by removing all vertices of $V(G) \setminus S$.
We say that a graph $H$ is an \emph{induced subgraph} of $G$ if
there is a set of vertices of $G$ which induces a graph isomorphic to $H$.
Given a family $\mathcal H$ of graphs and a graph $G$, we say that $G$ is
\emph{$\mathcal H$-free} if $G$ contains no graph from $\mathcal H$ as an induced subgraph.
In this context, the graphs of $\mathcal H$ are referred to as
\emph{forbidden subgraphs}.
We emphasise that the studied forbidden subgraphs are not necessarily connected.
We let $H_1 \cup H_2$ denote the disjoint union of graphs $H_1$ and $H_2$,
and let $kH$ denote the disjoint union of $k$ copies of a graph~$H$.
A cycle of length at least $4$ is called a {\it hole},
and a graph whose complement is a cycle of length at least $4$
is called an {\it antihole}.
A hole (antihole) is {\it odd} if it has an odd number of vertices.
(We usually talk about holes and antiholes as induced subgraphs.)
We recall that a graph is {\it $k$-colourable}
if each of its vertices can be coloured with one of $k$ colours
so that adjacent vertices are assigned distinct colours.
The smallest integer $k$ such that a given graph $G$ is $k$-colourable
is called the {\it chromatic number} of $G$, denoted by $\chi(G).$
We let $\omega(G)$ denote the \emph{clique number} of $G$,
that is, the order of a maximum complete subgraph of $G$.
(Clearly, $\chi(G) \geq \omega(G)$ for every graph $G$.)
A graph $G$ is {\it perfect} if $\chi(G')=\omega(G')$
for every induced subgraph $G'$ of $G.$
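For small graphs, both invariants can be computed by brute force, which makes it easy to exhibit imperfection. The Python sketch below (purely illustrative; exponential-time, for tiny graphs only) verifies that $C_5$, the smallest odd hole, satisfies $\chi(C_5) = 3 > 2 = \omega(C_5)$:

```python
from itertools import combinations, product

def chromatic_number(n, edges):
    """Smallest k admitting a proper k-colouring (brute force)."""
    for k in range(1, n + 1):
        for col in product(range(k), repeat=n):
            if all(col[u] != col[v] for (u, v) in edges):
                return k

def clique_number(n, edges):
    """Order of a maximum complete subgraph (brute force)."""
    E = {frozenset(e) for e in edges}
    for size in range(n, 0, -1):
        for S in combinations(range(n), size):
            if all(frozenset(p) in E for p in combinations(S, 2)):
                return size

# C_5: chi = 3 > 2 = omega, so C_5 (and any graph containing it as an
# induced subgraph) is imperfect.
c5 = [(i, (i + 1) % 5) for i in range(5)]
print(chromatic_number(5, c5), clique_number(5, c5))  # 3 2
```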
\medskip
Studying connected $K_{1,3}$-free graphs with independence number at least $3$,
Chudnovsky and Seymour~\cite{CS} showed that their chromatic number can be at
most twice as large as the clique number (and they also presented an infinite
family of such graphs whose chromatic number is almost this large).
Considering a $3K_1$-free graph $G$ (clearly, $K_{1,3}$-free and of independence
at most $2$), we recall that its chromatic number is at least
$\frac{1}{2}|V(G)|$; and for some such graphs, $|V(G)|$ has order of magnitude
$\frac{\omega(G)^2}{\log \omega(G)}$ (by a result of Kim~\cite{Kim} on Ramsey
numbers).
While we are focused mainly on $K_{1,3}$-free graphs, we should say that relating
forbidden induced subgraphs and colourings is a classical topic in graph theory.
Numerous results are known and, naturally, stronger colouring properties can be
obtained when considering a pair (or larger set) of forbidden induced subgraphs
(for instance, see survey papers~\cite{GJPS17,RS04,RS19}).
We investigate restricting the class of $K_{1,3}$-free graphs by additional
constraints (in particular, by different choices of an additional forbidden
induced subgraph $X$) so that the resulting class consists of perfect graphs.
To this end, we will use the classical result on perfect graphs by Chudnovsky et al.~\cite{ChRST06}
as the main tool.
\begin{theorem}[Chudnovsky et al.~\cite{ChRST06}]
\label{tSPGT}
A graph is perfect if and only if it contains neither an odd hole
nor an odd antihole as an induced subgraph.
\end{theorem}
We also use the following lemma due to Ben Rebea~\cite{BR}
(see also~\cite{ChS88,Fo93}).
\begin{lemma}[Ben Rebea~\cite{BR}]
\label{lBR}
Let $G$ be a connected $K_{1,3}$-free graph
with independence number at least $3$.
If $G$ contains an induced odd antihole,
then $G$ contains an induced $C_5$.
\end{lemma}
\medskip
Our former investigation of connected $\{ K_{1,3},X \}$-free graphs
(of independence at least $3$) resulted in the following characterisations.
\begin{theorem}[Brause et al.~\cite{alpha3}]
\label{thmA-characterization-alpha_3}
Let $X$ be a graph and
$\mathcal{G}$ be the class of all connected $\{K_{1,3},X\}$-free graphs
which are distinct from an odd cycle.
Then the following statements are satisfied.
\begin{itemize}[topsep=5pt, partopsep=5pt]
\item
If $X$ is an induced subgraph of $Z_1$ or $P_4$,
then all graphs of $\mathcal{G}$ are perfect.
\item
Otherwise, there are infinitely many graphs of $\mathcal{G}$
whose chromatic number is greater than the clique number.
\end{itemize}
Furthermore, the following are satisfied for the class $\mathcal{G}'$ of
all graphs of $\mathcal{G}$ whose independence number is at least $3$.
\begin{itemize}[topsep=5pt, partopsep=5pt]
\item
If $X$ is an induced subgraph of $Z_2$ or of $P_5$,
then all graphs of $\mathcal{G}'$ are perfect.
\item
Otherwise, there are infinitely many graphs of $\mathcal{G}'$
whose chromatic number is greater than the clique number.
\end{itemize}
\end{theorem}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.7]{clawPsZs.pdf}
\caption{The graphs $K_{1,3}$, $P_4$, $Z_1$, $P_5$ and $Z_2$
(considered in Theorem~\ref{thmA-characterization-alpha_3}).}
\label{figClawPsZs}
\end{figure}
The dichotomic nature of Theorem~\ref{thmA-characterization-alpha_3}
(for graphs with independence number at least $2$ and at least $3$)
raises a question on the nature of an analogous statement
for graphs of independence number at least $4$.
Motivated by this question, we look one step further in
this direction and investigate perfectness of these graphs.
\section{Main result}
\label{sec-main}
In this section, we answer the question motivated by the nature of Theorem~\ref{thmA-characterization-alpha_3}.
The dichotomic character of Theorem~\ref{thmA-characterization-alpha_3} does not extend to
$\{K_{1,3},X\}$-free graphs with independence number at least $4$.
We give a full characterisation and describe the finitely many imperfect exceptions,
which arise for exactly one of the forbidden pairs.
The main result of the present note is as follows.
\begin{theorem}
\label{t1}
Let $X$ be a graph
and $\mathcal G$ be the class of all connected $\{ K_{1,3},X \}$-free graphs
which are distinct from an odd cycle and have independence number at least $4$.
Let $\mathcal X$ be the set of graphs which consists of
$P_6$, $K_1 \cup P_5$, $2P_3$, $Z_2$, $K_1 \cup Z_1$
and all their induced subgraphs.
The following statements are satisfied.
\begin{enumerate}[topsep=5pt, partopsep=5pt]
\item
If $X$ belongs to $\mathcal X$,
then all graphs of $\mathcal G$ are perfect.
\item
If $X$ is $2K_1 \cup K_3$, then the only imperfect graphs of $\mathcal G$
are the graphs $E_1, \ldots, E_8$, depicted in Figure~\ref{figExceptions}.
\item
If $X$ does not belong to $\mathcal X \cup \{ 2K_1 \cup K_3 \}$,
then $\mathcal G$ contains infinitely many imperfect graphs.
\end{enumerate}
\end{theorem}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.7]{choicesOfX.pdf}
\caption{The graphs
$P_6$, $K_1 \cup P_5$, $2P_3$, $Z_2$, $K_1 \cup Z_1$, $2K_1 \cup K_3$ and
$B_{1,2}$
(considered in Theorems~\ref{t1} and~\ref{thm-bull-exceptions}).}
\label{figChoicesOfX}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.7]{exceptions.pdf}
\caption{The graphs $E_1, \ldots, E_8$.}
\label{figExceptions}
\end{figure}
We note that the assumption of being distinct from an odd cycle
is satisfied trivially for particular choices of $X$,
and that similar characterisations follow for the case when the graphs
considered are not necessarily distinct from an odd cycle.
(This concerns choosing $X$ as an induced subgraph of $P_4$ or $P_5$
in the respective parts of Theorem~\ref{thmA-characterization-alpha_3},
and as an induced subgraph of $P_6$, $K_1 \cup P_5$ or $2P_3$ in
Theorem~\ref{t1}.)
We also note that other choices of the graph $X$ (in item (3) of Theorem~\ref{t1})
can still admit a `nice' description of all (infinitely many) imperfect graphs in the class~$\mathcal G$.
This fact is illustrated on the example $X=B_{1,2}$ (see Figure~\ref{figChoicesOfX})
by proving Theorem~\ref{thm-bull-exceptions} in Section~\ref{sec:concl}.
\medskip
In order to prove Theorem~\ref{t1}, we will show three structural lemmas on $K_{1,3}$-free graphs.
\begin{lemma}
\label{l5}
Let $G$ be a connected $K_{1,3}$-free graph
with independence number at least $4$, and
$H_1, \ldots, H_7$ be the graphs depicted in Figure~\ref{figH}.
The following statements are satisfied.
\begin{enumerate}[topsep=5pt, partopsep=5pt]
\item
If $G$ is $H_1$-free,
then it is $C_7$-free.
\item
If $G$ is $\{H_2, \ldots, H_7 \}$-free,
then it is $C_5$-free.
\end{enumerate}
\end{lemma}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.75]{H.pdf}
\caption{The graphs $H_1, \ldots, H_7$.
We note that $H_6$ is isomorphic to the graph $E_1$,
depicted in Figure~\ref{figExceptions}.}
\label{figH}
\end{figure}
\begin{lemma}
\label{l6}
If $G$ is a connected $\{ K_{1,3}, 2K_1 \cup K_3 \}$-free graph with independence
number at least~$4$ which is distinct from the graphs $E_1, \ldots, E_8$
(depicted in Figure~\ref{figExceptions}), then $G$ is $\{ C_5, C_7 \}$-free.
\end{lemma}
\begin{lemma}
\label{l7}
Let $\mathcal G$ be the class defined in Theorem~\ref{t1},
and let $X$ be a graph such that $X$ is not an induced subgraph of any of
$P_6$, $K_1 \cup P_5$, $2P_3$, $Z_2$, $K_1 \cup Z_1$, $2K_1 \cup K_3$.
Then infinitely many graphs of $\mathcal G$ contain an induced odd hole.
\end{lemma}
We also recall the following fact observed in~\cite{alpha3}.
\begin{obs}[Brause et al.~\cite{alpha3}]
\label{o8}
Let $G$ be a $K_{1,3}$-free graph and $C$ be a set of its vertices
such that $C$ induces a cycle of length at least $5$.
If $x$ is a vertex that does not belong to $C$ but is adjacent to a vertex of $C$, then
$N(x) \cap C$ induces $K_2$ or $P_3$ or $P_4$,
or (in case $|C| = 5$) it can induce $C_5$,
or (in case $|C| \geq 6$) it can induce $2K_2$.
\end{obs}
The proofs of Lemmas~\ref{l5}, \ref{l6} and~\ref{l7} are included below.
Assuming the lemmas are true, we prove Theorem~\ref{t1}.
\begin{proof}[Proof of Theorem~\ref{t1}]
We assume that $X$ belongs to $\mathcal X \cup \{ 2K_1 \cup K_3 \}$,
and prove statements (1) and (2).
First, we show that $G$ is $\{ C_9, C_{11}, \ldots \}$-free.
For the sake of a contradiction, we suppose that $G$ contains an induced $C_{\ell}$
(where $\ell \geq 9$ and $\ell$ is odd).
Clearly, $G$ contains each of the graphs $P_6, K_1 \cup P_5, 2P_3$
as an induced subgraph.
Since $G$ is connected and distinct from $C_{\ell}$,
it contains an additional vertex which is adjacent to a vertex of this $C_{\ell}$.
Using Observation~\ref{o8}, we conclude that $G$ also contains $Z_2$, $K_1 \cup Z_1$, and $2K_1 \cup K_3$ as induced subgraphs, a contradiction.
Next, we show that $G$ is $\{ C_5, C_7 \}$-free.
For the case when $X$ belongs to $\mathcal X$,
we observe that each of the graphs $H_1, \ldots, H_7$
(depicted in Figure~\ref{figH}) contains $X$ as an induced subgraph.
In particular, $G$ is $\{ H_1, \ldots, H_7 \}$-free,
and thus $\{ C_5, C_7 \}$-free by Lemma~\ref{l5}.
For the case when $X$ is $2K_1 \cup K_3$,
we conclude that $G$ is $\{ C_5, C_7 \}$-free by Lemma~\ref{l6}.
In particular, we can now use the fact that $G$ is $C_5$-free as follows.
Since $G$ satisfies the assumptions of Lemma~\ref{lBR},
we get that $G$ cannot contain an odd antihole as an induced subgraph
(if $G$ contained an induced odd antihole, then it would contain
induced $C_5$, contradicting the fact that $G$ is $C_5$-free).
Consequently, $G$ contains neither an odd hole
nor an odd antihole as an induced subgraph,
and thus $G$ is a perfect graph by Theorem~\ref{tSPGT}.
To conclude the proof, we observe that statement (3) follows by Lemma~\ref{l7}.
\end{proof}
In the remainder of the present section,
we prove Lemmas~\ref{l5},~\ref{l6}, and~\ref{l7}.
\begin{proof}[Proof of Lemma~\ref{l5}]
We prove the lemma by considering a minimal counterexample.
In particular, for each of the two statements,
we consider a graph $G$ which satisfies the assumptions of the statement
and contains an induced $C_\ell$
(where $\ell = 7, 5$ for statement (1), (2), respectively)
and, subject to these properties,
has a minimal number of vertices.
We let $C$ be a set of vertices inducing $C_\ell$ in $G$,
and $N(C)$ be the set of all vertices not belonging to $C$
but adjacent to a vertex of $C$.
We let $I$ be a maximum independent set of $G$,
and $e$ be the number of edges going from $C$ to $I \setminus C$.
Before proving the statements, we show three claims on basic properties of $G$.
\begin{claim}
\label{cd2}
If $x$ is a vertex with a neighbour in $C$ and a neighbour outside $C \cup N(C)$,
then $N(x)\cap C$ induces $K_2$.
\end{claim}
\begin{proofcl}[Proof of Claim~\ref{cd2}]
Since $G$ is $K_{1,3}$-free, the set $N(x)\cap C$ cannot contain two
non-adjacent vertices, and $|N(x)\cap C| \neq 1$.
It follows that $N(x)\cap C$ induces~$K_2$.
\end{proofcl}
\begin{claim}
\label{ctype3}
For every vertex $x$ of $G$, the set $N(x) \cap C$ does not induce $P_3$.
\end{claim}
\begin{proofcl}[Proof of Claim~\ref{ctype3}]
For the sake of a contradiction, we suppose that there is a vertex $x$
whose neighbours in $C$ induce $P_3$.
We let $y$ be the central vertex of this $P_3$,
and we consider the graphs $G - x$ and $G - y$.
We note that both considered graphs are connected
(since $G$ is $K_{1,3}$-free),
and both contain an induced $C_\ell$.
Furthermore, at least one of the considered graphs is of independence at least~$4$
(since $x$ and $y$ cannot both belong to $I$),
and we conclude that this graph contradicts the choice of $G$ as a minimal counterexample.
\end{proofcl}
\begin{claim}
\label{cineq}
We have $e \leq 2\ell - 4|I \cap (C \cup N(C))| + 4|I \cap N(C)|$.
\end{claim}
\begin{proofcl}[Proof of Claim~\ref{cineq}]
We recall that $I$ is an independent set.
We consider a vertex $x$ of $C$ and discuss the number of its neighbours in $I \setminus C$,
that is, the contribution to the quantity $e$.
Clearly, $x$ is adjacent to at most two vertices of $I$
(since $G$ is $K_{1,3}$-free).
Furthermore, if $x$ is adjacent to a vertex of $I \cap C$,
then $x$ has at most one neighbour in $I \setminus C$.
Similarly, if $x$ is adjacent to two vertices of $I \cap C$ or $x$ belongs to $I$,
then $x$ has no neighbour in $I \setminus C$.
Consequently, we get that $e \leq 2\ell - 4|I \cap C|$,
and the desired inequality follows since
$|I \cap C| = |I \cap (C \cup N(C))| - |I \cap N(C)|$.
\end{proofcl}
We use Claims~\ref{cd2}, \ref{ctype3} and~\ref{cineq}, and show statements (1) and (2).
First, we consider a graph $G$ chosen as a minimal counterexample to statement (1),
and a set $C$ inducing $C_7$ in $G$.
We note that Claim~\ref{cd2} implies that every vertex of $V(G) \setminus C$ is adjacent to a vertex of $C$
(since $G$ is connected and $H_1$-free).
In particular, we have $I \cap N(C) = I \setminus C$.
Furthermore, every vertex of $V(G) \setminus C$ has precisely four neighbours in $C$
(by combining the fact that $G$ is $H_1$-free together with Claim~\ref{ctype3} and Observation~\ref{o8}).
In particular, we consider the vertices of $I \setminus C$,
and conclude that $e = 4|I \setminus C|$.
On the other hand, Claim~\ref{cineq} yields that $e \leq 14 - 16 + 4|I \cap N(C)| = 4|I \setminus C| - 2$, a contradiction.
Next, we consider a minimal counterexample $G$ for statement (2),
and a set $C$ inducing $C_5$.
We first show that every vertex of $V(G) \setminus C$ is adjacent to a vertex of $C$.
For the sake of a contradiction,
we suppose that there is a vertex, say~$w$, which has no neighbour in $C$.
We observe that $w$ is precisely at distance two from $C$
(using Claim~\ref{cd2} and the facts that $G$ is connected and $H_2$-free).
Furthermore, the graph $G - w$ is connected and it contains an induced $C_5$,
and hence $w$ belongs to~$I$ and $|I| = 4$
(since $G$ is a minimal counterexample).
We let $u$ be a vertex adjacent to $w$ and to a vertex of $C$.
In particular, $u$ does not belong to $I$ and $G - u$ contains an induced $C_5$.
We use the fact that $G$ is a minimal counterexample
and note that the graph $G - u$ is not connected,
and furthermore
$G - u$ has precisely two components, one of which consists only of the vertex $w$
(since $G$ is $K_{1,3}$-free and minimal).
Consequently,
we observe that $u$ is the only vertex of $G$ whose neighbours in $C$ induce $K_2$
(since $G$ is $\{ H_3, H_4, H_5 \}$-free
and the graphs depicted in Figure~\ref{figNotH345} are not $K_{1,3}$-free).
\begin{figure}[ht]
\centering
\includegraphics[scale=0.7]{notH345.pdf}
\caption{Possible graphs induced by $C \cup \{u,w,z\}$,
where $z$ is another vertex whose neighbours in $C$ induce $K_2$,
and distinct from $H_3, H_4$ and $H_5$.
We note that the graphs are not $K_{1,3}$-free
(induced copies of $K_{1,3}$ are depicted in bold).}
\label{figNotH345}
\end{figure}
It follows that every vertex of $I \cap N(C)$ has at least four neighbours in~$C$
(by Claim~\ref{ctype3} and Observation~\ref{o8}),
and thus $e \geq 4|I \cap N(C)|$.
On the other hand,
we note that $w$ is the only vertex of $I$ which has no neighbour in $C$
(by Claim~\ref{cd2}).
Thus, we have $|I \cap (C \cup N(C))| = 3$,
and Claim~\ref{cineq} gives that
$e \leq 10 - 12 + 4|I \cap N(C)| = 4|I \cap N(C)| - 2$, a contradiction.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.7]{notH7.pdf}
\caption{Possible graphs induced by $C \cup I$ which are distinct from $H_7$.
Each graph contains an induced $K_{1,3}$
(depicted in bold).}
\label{figNotH7}
\end{figure}
Hence, every vertex of $V(G) \setminus C$ is adjacent to a vertex of $C$.
In particular, every vertex of $V(G) \setminus C$ belongs to $I$
(since $G$ is a minimal counterexample).
We recall that $G$ is $\{ H_6, H_7 \}$-free, and we observe that at most two vertices of $I \setminus C$
have the property that its neighbourhood in $C$ induces $K_2$
(see Figure~\ref{figNotH7}).
Similarly as above, Claim~\ref{ctype3} and Observation~\ref{o8} imply that
$e \geq 4|I \setminus C| - 4$.
However, Claim~\ref{cineq} yields that
$e \leq 10 - 16 + 4|I \setminus C| = 4|I \setminus C| - 6$,
a contradiction.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{l6}]
We consider such a graph $G$
and note that $G$ is $\{ H_1, \dots, H_5 \}$-free and $H_7$-free
(since it is $2K_1 \cup K_3$-free).
We suppose that
$G$ is distinct from $E_1, \ldots, E_8$ (depicted in Figure~\ref{figExceptions}),
and show that $G$ is $H_6$-free.
For the sake of a contradiction, we suppose that $G$ contains a set $H$
of vertices inducing $H_6$.
We note that $G$ contains a vertex, say~$x$,
which does not belong to $H$ but is adjacent to a vertex of $H$
(since $G$ is connected and distinct from $E_1$, that is, $H_6$).
We discuss the adjacency of $x$ to the vertices of $H$
and show that there are essentially only three types of connecting $x$ to $H$
(see Figure~\ref{figH5andx}).
To this end,
we consider the set $I$ of four independent vertices of $H$
and the labelling of vertices of $H$ given in Figure~\ref{figH6}.
We note that $x$ is non-adjacent to at least two vertices of $I$
(since $G$ is $K_{1,3}$-free),
and we discuss the cases given by pairs of vertices of~$I$.
For each case, we use the assumption that $G$ is $\{ K_{1,3}, 2K_1 \cup K_3 \}$-free.
By symmetry, we need to consider four cases as follows.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.75]{H6.pdf}
\caption{Labelling the vertices of $H_6$.
The vertices of $I$ are labelled $i_1, i_1', i_2, i_3$.}
\label{figH6}
\end{figure}
First, we show that if $x$ is adjacent to neither $i_2$ nor $i_3$,
then $x$ is adjacent to all remaining vertices of $H$
(this gives type $B$ as depicted in Figure~\ref{figH5andx}).
We use that the set $\{i_2, i_3, v_1, v_1', x \}$ cannot induce
$2K_1 \cup K_3$, and hence we can assume that $x$ is adjacent to $v_1$.
Using the edge $x v_1$, we get that $x$ is also adjacent to $i_1$
(by considering the graph induced by $\{i_1, i_2, v_1, x \}$).
Similarly, $x$ is adjacent to $v_2$.
Consequently, $x$ is adjacent to $i_1'$
(by considering $\{i_1, i_1', i_2, v_2, x \}$),
and similarly it is adjacent to $v_2'$.
Finally, it is adjacent to $v_1'$
(by considering $\{i_3, v_1', v_2', x \}$).
Second, we assume that $x$ is adjacent to neither $i_1'$ nor $i_2$.
Using the previous case, we can assume that $x$ is adjacent to $i_3$.
It follows that $x$ is adjacent to neither $v_1'$ nor $v_2$,
and consequently it is not adjacent to $v_1$.
Finally, $x$ is adjacent to $v_2'$,
and thus it is adjacent to $i_1$
(this gives type $A$).
Third, we suppose that $x$ is adjacent to neither $i_1$ nor $i_3$,
and we can assume that $x$ is adjacent to $i_2$.
We note that $x$ is adjacent to neither $v_2$ nor $v_1'$.
Since $x$ is not adjacent to $v_1'$,
it is adjacent to neither $v_1$ nor $v_2'$.
Hence, $x$ is not adjacent to~$v_2$.
A contradiction follows by considering
$\{i_1, v_1, v_2, v_2', x \}$.
Fourth, we assume that $x$ is adjacent to neither $i_1$ nor $i_1'$,
and that $x$ is adjacent to $i_2$ and $i_3$.
In addition, we can assume that $x$ is adjacent to $v_2'$
(by considering $\{i_3, v_2, v_2', x \}$).
It follows that $x$ is not adjacent to $v_2$,
and hence adjacent to $v_1$, and thus adjacent to $v_1'$
(this gives type $C$).
With the three types on hand (as depicted in Figure~\ref{figH5andx}),
we continue the argument.
In particular, we note that
$G$ contains at least two vertices which do not belong to $H$
(since $G$ is distinct from $E_2, E_3$ and $E_4$).
\begin{figure}[ht]
\centering
\includegraphics[scale=0.75]{H5andx.pdf}
\caption{The only three types of connecting $x$ to $H_6$.
The types are labelled $A, B$ and $C$.
The vertices of $I$ are depicted in white.}
\label{figH5andx}
\end{figure}
In addition, we have that every vertex of $G$ is adjacent to a vertex of $H$
(since $G$ is connected and $K_{1,3}$-free, and two non-adjacent vertices of $H$
are adjacent to~$x$).
Hence, these three types (of connecting $x$) apply to every vertex of $V(G) \setminus H$.
We consider a pair of such vertices,
and note that their neighbourhoods in $H$ cannot be the same
(since $G$ is $\{ K_{1,3}, 2K_1 \cup K_3 \}$-free).
In particular, if two vertices of type $A$ are added, then
one has to be adjacent to $i_1, i_3, v_2'$
and the other to $i_1', i_3, v_2$.
We consider the possible graphs obtained by adding two vertices of type $A$
(there are two graphs to consider since the additional vertices may or may not be adjacent),
and we note that none of them is $\{ K_{1,3}, 2K_1 \cup K_3 \}$-free.
Similarly, we discuss all remaining cases and observe that
there are precisely three options of
connecting two vertices to $H$ (see Figure~\ref{figH5plus2}).
\begin{figure}[ht]
\includegraphics[scale=0.7]{H5plus2.pdf}
\caption{Graphs obtained by adding two vertices to $H_6$.
The labelling indicates types of the additional vertices.
For each pair of the types, the additional vertices
might be non-adjacent (top) or adjacent (bottom).
Three of the graphs are $\{ K_{1,3}, 2K_1 \cup K_3 \}$-free.
In the remaining graphs, induced $K_{1,3}$ or $2K_1 \cup K_3$ is
highlighted.}
\label{figH5plus2}
\end{figure}
Since $G$ is distinct from $E_5, E_6$ and $E_7$,
it contains at least three vertices which do not belong to $H$.
We consider pairs of such vertices and the above discussion,
and we conclude that $G$ contains precisely three such vertices
and, in fact, $G$ is exactly $E_8$, a contradiction.
Hence, we can assume that $G$ is $H_6$-free.
Consequently, $G$ is $\{ H_1, \ldots, H_7 \}$-free,
and thus $\{ C_5, C_7 \}$-free by Lemma~\ref{l5}.
\end{proof}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.75]{families.pdf}
\caption{Families $\mathcal F_1, \mathcal F_2, \mathcal F_3, \mathcal F_4$ of graphs.
The families $\mathcal F_1$ and $\mathcal F_2$ consist of graphs which are obtained
from $C_{\ell}$ (where $\ell \geq 9$ and $\ell$ is odd)
by adding a vertex such that the set of its neighbours on $C_{\ell}$
induces $P_2$, $P_3$, respectively.
The families $\mathcal F_3$ and $\mathcal F_4$ consist of graphs which are obtained as follows.
We consider the graph labelled $\mathcal F_3$, $\mathcal F_4$, respectively,
and its distinguished vertex (depicted as large),
and add (in sequence) an arbitrary number of vertices
adjacent exactly to the distinguished vertex and to its neighbours.}
\label{figFamilies}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.7]{DHBN.pdf}
\caption{The graphs $D$, $H, B$ and $N$.}
\label{figDHBN}
\end{figure}
\begin{proof}[Proof of Lemma~\ref{l7}]
We use the assumption that $X$ is not an induced subgraph of any of
$P_6$, $K_1 \cup P_5$, $2P_3$, $Z_2$, $K_1 \cup Z_1$, $2K_1 \cup K_3$,
and we show that $X$ contains at least one of the graphs
$5K_1$, $3K_1 \cup K_2$, $3K_2$, $K_2 \cup P_4$,
$K_{1,3}$, $C_4$, $C_5$, $C_6$, $C_7$,
$K_4$, $D$, $H$, $B$, $K_2 \cup K_3$, $K_1 \cup Z_2$
as an induced subgraph
(the graphs $D$, $H$ and $B$ are depicted in Figure~\ref{figDHBN}).
Clearly, we can assume that $X$ is a chordal graph
(otherwise it contains at least one of
$C_4$, $C_5$, $C_6$, $C_7$, $K_2 \cup P_4$
as an induced subgraph and the claim is satisfied).
In addition, we can assume that $X$ contains at most one triangle
(otherwise there is at least one of $K_4$, $D$, $H$, $K_2 \cup K_3$
as an induced subgraph or $X$ is not chordal).
We discuss two cases.
For the case when $X$ contains no triangle,
we can also assume that every component of $X$ is a path
(otherwise there is $K_{1,3}$).
We use the assumption that $X$ is not an induced subgraph of any of
$P_6$, $K_1 \cup P_5$, $2P_3$,
and we let $M$ denote a set of all vertices of a largest component of $X$
(that is, vertices of a longest path),
and we observe the following.
If $|M| \geq 7$, then $X$ contains $K_2 \cup P_4$ as an induced subgraph.
If $|M| = 6$, then $X$ has at least two components
(since $X$ is not $P_6$),
and hence it contains induced $3K_1 \cup K_2$.
If $5 \geq |M| \geq 4$,
then $X$ has at least two vertices which do not belong to $M$
(since $X$ is not an induced subgraph of $K_1 \cup P_5$),
and thus $X$ contains $3K_1 \cup K_2$ or $K_2 \cup P_4$ as an induced subgraph.
If $|M| = 3$,
then $X$ has at least three components and at least three vertices outside $M$
(since $X$ is not an induced subgraph of $2P_3$).
It follows that $X$ contains induced $3K_1 \cup K_2$.
If $|M| = 2$, then we note that $X$ contains $3K_2$ or $3K_1 \cup K_2$
as an induced subgraph.
Lastly, if $|M| = 1$, then $X$ has at least five components
and this gives $5K_1$.
For the other case, we consider the component of $X$ which contains the triangle.
We can assume that this component is a subgraph of $Z_2$
(otherwise there is $B$ or $K_2 \cup K_3$ as an induced subgraph),
and that every other component is trivial
(otherwise we have induced $K_2 \cup K_3$).
Since $X$ is not an induced subgraph of any of $Z_2$, $K_1 \cup Z_1$, $2K_1 \cup K_3$,
we conclude that $X$ contains $K_1 \cup Z_2$ or $3K_1 \cup K_2$
as an induced subgraph.
We proceed by considering the families $\mathcal F_1, \mathcal F_2, \mathcal F_3, \mathcal F_4$ of graphs
depicted in Figure~\ref{figFamilies},
and we note that each of the graphs is $K_{1,3}$-free,
of independence number at least $4$,
and contains an induced odd hole.
Furthermore, we observe that every graph of $\mathcal F_1$ is
$\{ C_4, C_5, C_6, C_7, K_4, D, H \}$-free,
and every graph of $\mathcal F_2$ is
$B$-free,
and every graph of $\mathcal F_3$ is
$\{ 5K_1, 3K_1 \cup K_2, K_1 \cup Z_2, 3K_2, K_2 \cup P_4 \}$-free,
and every graph of $\mathcal F_4$ is
$K_2 \cup K_3$-free.
\end{proof}
\section{Concluding remarks}
\label{sec:concl}
Finally, we remark that the class of all
imperfect connected $\{K_{1,3},B_{1,2}\}$-free graphs with
independence number at least $4$ admits a simple characterisation.
The graph $B_{1,2}$ is depicted in Figure~\ref{figChoicesOfX}
and the characterisation is given in Theorem~\ref{thm-bull-exceptions}.
(A similar, but more technical, structural statement can be shown for
$X$ chosen as the graph $N$ depicted in Figure~\ref{figDHBN}.)
We start by recalling the notion of an inflation of a cycle
(also used in~\cite{alpha3}).
We say that a graph is an {\it inflation of $C_k$}
if the graph can be obtained from $C_k$
by applying (in sequence) the following operation any number of times (possibly not at all).
Choose an arbitrary vertex of the graph on hand and add
a new vertex adjacent precisely to the chosen vertex and to all its neighbours.
An example of an inflation of $C_7$ is depicted in Figure~\ref{figInflation}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.7]{inflation.pdf}
\caption{An inflation of $C_7$.}
\label{figInflation}
\end{figure}
We show the following fact
(a similar statement considering $\{K_{1,3},B\}$-free graphs
was shown in~\cite{alpha3}).
\begin{theorem}
\label{thm-bull-exceptions}
Let $G$ be a connected $\{K_{1,3},B_{1,2}\}$-free graph with
independence number at least $4$.
Then $G$ is either perfect or
it is an inflation of $C_k$ such that $k$ is odd and $k\geq 9$.
\end{theorem}
\begin{proof}
We note that each of the graphs
$H_1,\ldots,H_7$ (depicted in Figure~\ref{figH}) contains an induced $B_{1,2}$,
and so $G$ is $\{ C_5, C_7 \}$-free by Lemma~\ref{l5}.
Thus, $G$ contains no induced odd antihole by Lemma~\ref{lBR}.
We conclude that the statement follows by the combination of
Theorem~\ref{tSPGT} and Observation~\ref{prop-bull-cycle-class_C}
(stated below).
\end{proof}
We let $B_{i,j}$ denote the graph obtained by identifying
end-vertices of two vertex-disjoint paths $P_{i+1}$, $P_{j+1}$ (one end of each)
with two distinct vertices of a triangle
(for instance, see the graph $B_{1,2}$ depicted in Figure~\ref{figChoicesOfX}).
We show the following structural
observation on $\{K_{1,3},B_{1,p}\}$-free graphs.
(The argument goes along similar lines as in
the proof of~\cite[Lemma 4.2]{alpha3}, where
$\{K_{1,3},B_{1,p}\}$-free
graphs and $k \geq 5$ were considered.)
\begin{obs}
\label{prop-bull-cycle-class_C}
Let $p$ be an integer greater than $1$ and $G$ be a connected $\{K_{1,3},B_{1,p}\}$-free
graph which contains an induced $C_k$ such that $k \geq 2p+3$.
Then $G$ is an inflation of $C_k$.
\end{obs}
\begin{proof}
We consider a set $C$ inducing $C_k$ in $G$.
Clearly, we can assume that there is a vertex, say $x$, of $V(G)\setminus C$
(otherwise, the statement is satisfied trivially),
and that $x$ is adjacent to a vertex of $C$
(since $G$ is connected).
We note that $N(x)\cap C$ cannot induce any of the graphs
$K_2$, $P_4$, $2K_2$ (since $G$ is $B_{1,p}$-free),
and thus it induces $P_3$ (by Observation~\ref{o8}).
In particular, we get that every vertex of $G$ is adjacent to at least one vertex of $C$
(since $G$ is connected and $K_{1,3}$-free).
We consider a pair of vertices, say $x$ and $y$, of $V(G)\setminus C$,
and let $c$ be the number of their common neighbours in $C$.
Using that $G$ is $\{K_{1,3},B_{1,p}\}$-free, we discuss the cases
given by $c = 0,1,2,3$ (see Figure~\ref{figXy}),
and we conclude that $x$ and $y$ are adjacent if and only if
$c \geq 2$.
It follows that $G$ is an inflation of $C_k$.
\end{proof}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.7]{Xy.pdf}
\caption{Adjacencies of $x$ and $y$ to $C$ giving one of the forbidden subgraphs
(induced copies of $K_{1,3}$ and $B_{1,p}$ are depicted in bold).}
\label{figXy}
\end{figure}
\section*{Acknowledgements}
We thank the anonymous referees for their helpful remarks and suggestions.
The research was partly supported by the DAAD-PPP project
`Colourings and connection in graphs'
with project-ID 57210296 (German) and 7AMB16DE001 (Czech), respectively.
The research of the third, fourth, fifth and seventh author was also partly supported by project
GA20-09525S of the Czech Science Foundation.
\section{Generalization of the toy-model}
In the Main Text we have examined the output of the ERT in an idealized working example. It consisted of a binary decision with options labeled $A$ and $B$. Here, we extend this idealized working example to a multiple-decision situation with options $A$, $B$, $C$ and $D$ (whose actual payoffs are $\mu_A$, $\mu_B$, $\mu_C$ and $\mu_D$, respectively). For this, the individual can successively sample data from the options, and from these data it obtains estimates $E_{A,n}$, $E_{B,n}$, $E_{C,n}$ and $E_{D,n}$ of the payoffs, following the same procedure as described in the main text. To simplify, we assume that the process generating the estimates $E_{A,n}$, $E_{B,n}$, $E_{C,n}$ and $E_{D,n}$ is such that at the $i$-th step, or sample, the piece of information obtained by the individual consists of four Gaussian variables $\epsilon_{A,i}$, $\epsilon_{B,i}$, $\epsilon_{C,i}$, $\epsilon_{D,i}$ with means $\mu_A$, $\mu_B$, $\mu_C$, $\mu_D$, respectively, and unit variance. The information obtained thus provides an approximation to the actual values $\mu_A$, $\mu_B$, $\mu_C$, $\mu_D$, and the estimated payoff can be computed as the average over the information sampled to date, so $E_{j,n}= \frac{1}{n} \sum_{i=1}^{n} \epsilon_{j,i}$, with $j \in \{A,B,C,D\}$.
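The sampling and estimation step above can be sketched as follows; the payoff values chosen for $\mu_{A,\dots,D}$ are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Actual payoffs of the four options (illustrative values only).
mu = {"A": 0.0, "B": 0.2, "C": 0.4, "D": 0.6}

def sample_step(estimates, counts):
    """Draw one Gaussian sample epsilon_{j,i} ~ N(mu_j, 1) per option
    and update the running means E_{j,n} incrementally."""
    for j, m in mu.items():
        eps = rng.normal(loc=m, scale=1.0)
        counts[j] += 1
        # incremental form of E_{j,n} = (1/n) * sum_i epsilon_{j,i}
        estimates[j] += (eps - estimates[j]) / counts[j]

estimates = {j: 0.0 for j in mu}
counts = {j: 0 for j in mu}
for _ in range(1000):
    sample_step(estimates, counts)
# After many samples each E_{j,n} is close to the corresponding mu_j.
```

With $n = 1000$ samples the standard error of each estimate is $1/\sqrt{n} \approx 0.03$, so the estimates lie well within the spacing of the means.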
Once the estimated payoffs are available, we can successively compute (with the help of equation $2$ in the Main Text) the Shannon entropy over the information sampled, and explore the decision dynamics when a threshold $S_{th}$ is used to trigger the decision. We explore the statistics of decision times, that is, the number of samples $n$ that the individual requires to reach the entropy threshold.
We carry out numerical experiments using the rules above and determine the distribution of decision times one typically finds when the ERT is used in this $4$-option scenario. The results in figure \ref{fig:gau_4} confirm that the multi-option case also reproduces, for several distances $d$ between the Gaussian means, the same exponent $-3$ that was reported for the binary case in the Main Text.
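A minimal simulation of the decision-time statistic is sketched below. Equation 2 of the Main Text is not reproduced in this supplement, so the mapping from estimates to choice probabilities is replaced by a softmax with an arbitrary inverse temperature `beta`; this is an assumption for illustration only, as are the specific values of the means.

```python
import numpy as np

rng = np.random.default_rng(1)

def decision_time(mu, s_th=0.5, beta=5.0, n_max=10_000):
    """Sample all options once per step, update the running estimates
    E_{j,n}, map them to probabilities (softmax stand-in for eq. 2 of
    the Main Text) and return the number of samples n at which the
    Shannon entropy first falls below s_th."""
    sums = np.zeros(len(mu))
    for n in range(1, n_max + 1):
        sums += rng.normal(loc=mu, scale=1.0)  # one sample per option
        e = sums / n                           # running estimates E_{j,n}
        w = np.exp(beta * (e - e.max()))       # numerically stable softmax
        p = w / w.sum()
        s = -np.sum(p * np.log(p))             # Shannon entropy
        if s < s_th:
            return n
    return n_max

d = 0.2  # distance between consecutive Gaussian means (assumed value)
times = [decision_time(np.array([0.0, d, 2 * d, 3 * d])) for _ in range(500)]
```

Building a histogram of `times` for several values of `d` would reproduce the kind of distribution analyzed in figure \ref{fig:gau_4}.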
\begin{figure}[h]
\centering
\includegraphics[width=0.99\linewidth]{figura_gaussianas_4opt_v2.png}
\caption{\textit{a) Probability distributions for the stochastic variables $\epsilon_{j,i}$, with $j \in \{A,B,C,D\}$. The means $\mu_{A,B,C,D}$ correspond to the real value of each option $A$, $B$, $C$ and $D$. b) Probability distribution of the number of necessary samples ($n$) to reach the corresponding entropy threshold, for different distances $d$ between the means.}}
\label{fig:gau_4}
\end{figure}
\clearpage
\clearpage
\section{Prospective algorithm}
\subsection{Definition}
We propose an algorithm in which virtual subjects are able to prospect the paths available within $d_p$ steps in the lattice (we call this parameter the prospection length). For each path prospected, the walker assigns a payoff $E_{i}$ to the neighbour node at which that path starts (for a simple visualization, see Fig. \ref{fig:prosp}). The payoff is taken to be equal to the fraction of unvisited nodes that the prospected path crosses (so $E$ is bounded between $0$ and $1$, with $E=1$ for a path that does not cross any visited patch, and $E=0$ if all patches covered by the path have been previously visited).
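This payoff rule can be written as a one-line function; the patch coordinates below are hypothetical, chosen only to reproduce the $E = 2/3$ arithmetic of the length-3 example:

```python
def path_payoff(path, visited):
    """Payoff assigned to the option where a prospected path starts:
    the fraction of patches along the path that are NOT remembered as
    visited (E = 1 if all patches are new, E = 0 if all were visited)."""
    new = sum(1 for patch in path if patch not in visited)
    return new / len(path)

# A length-3 path crossing two new patches and one remembered patch
# gives E = (1 + 1 + 0) / 3 = 2/3.
visited = {(1, 0)}
payoff = path_payoff([(2, 0), (2, 1), (1, 0)], visited)  # -> 2/3
```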
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\linewidth]{dibujo_prosp_final_dashed.png}
\caption{\textit{Scheme of three different prospection paths corresponding to three different prospection lengths $d_{p}=1$ (black), $d_{p}=2$ (yellow) and $d_{p}=3$ (green). The patches that have already been visited are marked in red (so they add $0$ to the payoff), while the unvisited ones appear in blue (so they add $1$ to the payoff). The payoffs assigned in the paths depicted would be a) $E_{f} = 0$ to option $f$ for $d_p=1$, b) $E_{h} = (1+1)/2 = 1$ to option $h$ for $d_{p}=2$, and c) $E_{b} = (1+1+0)/3 = 2/3$ to option $b$ for $d_p=3$. }}
\label{fig:prosp}
\end{figure}
The walker keeps its previous trajectory in memory during a characteristic number of steps. In particular, each visit is remembered by the virtual subject during a time $\tau$ drawn from the exponential distribution $P(\tau)=\frac{1}{\tau_{m}}e^{-\frac{\tau}{\tau_{m}}}$, with $\tau_m$ then representing the characteristic timescale of memory. After this time, the walker will forget that the particular patch has been previously visited, and the patch will contribute as a non-visited one when computing the corresponding payoffs.
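Sampling these memory lifetimes is straightforward with the standard exponential generator; the value of $\tau_m$ below is just the one used later in the robustness analysis:

```python
import random

random.seed(2)
tau_m = 100.0   # characteristic memory timescale

def forget_time():
    """Lifetime of a memorized visit, drawn from the exponential
    distribution P(tau) = (1/tau_m) * exp(-tau/tau_m)."""
    return random.expovariate(1.0 / tau_m)

samples = [forget_time() for _ in range(10_000)]
mean_tau = sum(samples) / len(samples)   # close to tau_m
```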
As a result, the algorithm will assign lower payoffs to the options that it remembers having visited and/or that are adjacent to regions that it remembers having visited. So, according to equation $2$ in the Main Text, the probability to choose those options will decrease, leading the virtual subject to regions that are still unvisited (or at least that it does not remember having visited before). A larger prospection length $d_p$ allows the walker to sample the state of farther regions and to compute the payoff using the information of distant patches, but this will only be efficient if the memory parameter $\tau_m$ is large enough.
Successive prospections of the paths available in each direction are carried out at random among all possible ones of length $d_p$, so the values of the payoffs and the probabilities $p_i$ are continuously updated. Note that for a given value of $d_p$ the number of paths that can be prospected is of the order of $\sim 4^{d_p}$ (assuming that all bonds between neighbours are available). The algorithm makes the virtual walker move to one of the available nodes according to the decision criterion described in the Main Text. After each single prospection of one path in each direction, the Shannon entropy $S=-\sum_{i} p_{i} \ln p_{i}$ is computed; if the value falls below a fixed threshold $S_{th}$, the walker makes a move according to the probabilities $p_i$ computed at that time (we have checked that deciding according to the highest probability instead does not qualitatively change the walker dynamics). Otherwise, if $S>S_{th}$, the prospection process continues. However, we additionally introduce a rule that limits the maximum number of prospections to $100$, to avoid (extremely unusual) situations in which $S$ would never decay below $S_{th}$ because all available options persistently exhibit very similar payoffs. We have carefully checked that this rule does not modify any of the reported results in a significant way.
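A single decision step of this loop can be sketched as follows. This is not the authors' code: the map from payoffs to probabilities (equation 2 of the Main Text) is again replaced by a softmax with an assumed inverse temperature, and `prospect(o)` stands for any routine returning the payoff of a randomly prospected path starting at option `o`.

```python
import math
import random

random.seed(3)

def choose_move(options, prospect, s_th=0.5, beta=5.0, max_rounds=100):
    """One decision step of the prospective walker (illustrative sketch).
    Payoff estimates are refined, one prospection per direction per
    round, until the Shannon entropy of the choice probabilities drops
    below s_th, with a hard cap of max_rounds rounds (100 in the paper).
    The softmax mapping is an assumption standing in for eq. 2."""
    payoff = {o: 0.0 for o in options}
    count = {o: 0 for o in options}
    p = {o: 1.0 / len(options) for o in options}
    for _ in range(max_rounds):
        for o in options:                       # one prospection per direction
            count[o] += 1
            payoff[o] += (prospect(o) - payoff[o]) / count[o]
        z = sum(math.exp(beta * payoff[o]) for o in options)
        p = {o: math.exp(beta * payoff[o]) / z for o in options}
        s = -sum(q * math.log(q) for q in p.values())
        if s < s_th:
            break
    # move according to the probabilities computed at this point
    r, acc = random.random(), 0.0
    for o, q in p.items():
        acc += q
        if r <= acc:
            return o
    return options[-1]

move = choose_move(["N", "E", "S", "W"], lambda o: random.random())
```

In the actual algorithm `prospect` would evaluate the fraction of remembered-unvisited patches along a random path of length $d_p$, as described above.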
\subsection{Coverage time study}
The results of the algorithm shown in the Main Text have been obtained under the same conditions as in the task presented to the human subjects, that is, for $49$-step trajectories through the $7\times7$ lattice with the same topological structure as presented in Fig. 3 in the Main Text. However, for the sake of completeness we also analyze here the dynamics of the prospective algorithm when the limitation of $49$ moves is removed, measuring instead the number of moves it takes to cover all the patches. This gives us additional insight into the navigation efficiency of the algorithm as a function of the memory and prospection parameters, $\tau_m$ and $d_p$. In particular, we study the mean coverage time $T_{Cov}$, that is, the mean time required to cover all sites in the lattice. Minimization of this quantity then gives an estimate of the navigation efficiency of the algorithm.
The main conclusion we can extract (as one can deduce from the results in Fig. \ref{fig:diag}) is that the ability to prospect future paths (that is, having a large $d_p$) is useless unless the individual has good memory skills (a large $\tau_m$ value in our context). This makes clear sense: when the walker cannot remember the previously visited patches (low values of $\tau_{m}$), the optimal strategy consists of removing prospection ($d_{p}=1$); in that case the information provided by farther patches is just useless noise, as the walker always sees them as non-visited. On the other hand, for large $\tau_m$ the walker can correctly identify the previously visited patches, and then progressively larger prospection lengths $d_{p}$ are found to optimize the coverage of the structure and the search for a target.
\begin{figure}[h!]
\centering
\includegraphics[width=0.69\linewidth]{best_S_fixed_TCOV.png}
\caption{\textit{Prospection length $d_p$ that minimizes the coverage process, i.e., the value of $d_p$ that minimizes the mean coverage time $T_{Cov}$, for different values of the memory time $\tau_{m}$.}}
\label{fig:diag}
\end{figure}
\subsection{Distributed prospection lengths}
Assigning a constant prospection length $d_p$ to all the prospected paths may seem rather unrealistic. Human individuals are instead expected to prospect paths of different lengths depending on the specific situation (complexity, number of choices available, etc.). The results reported in Fig. 5b in the Main Text also support this statement (the number of gazed patches is not fixed to a constant but exhibits a variation spanning almost one order of magnitude).
We have then studied our algorithm for the case when a distribution of $d_p$ is introduced instead of a constant value. In particular, we have tried a distribution $P(d_{p}) \propto \frac{1}{d_{p}^{\gamma}}$ (for $d_p \geq 1$), with $\sum_{d_{p}=1}^{\infty} P(d_{p}) =1$ to guarantee normalization. For $\gamma \to \infty$, the paths are fixed to $d_{p}=1$, so the prospection reduces to identifying whether the neighbour nodes have been visited or not. On the other hand, for $\gamma \rightarrow 0$ the probability is uniformly distributed among all $d_p$ values (in practice we limit $d_p$ to $1 \leq d_{p} \leq 6$, since much larger values would make no sense in the $7\times7$ maze we have used). Figure \ref{fig:gamma} shows that sampling a small (but not negligible) number of long paths combined with a majority of short paths (as happens for intermediate $\gamma$ values) is sufficient to recover the same dynamics as obtained for a large fixed $d_p$ value. This can be seen by comparing the results obtained for lower values of $\gamma$ with those for large values of $d_p$, which are extremely similar. This result is remarkable from an evolutionary perspective, since it suggests that improving navigation efficiency would not necessarily require continually processing much more information (note that the number of paths available for prospection grows in general as $n^{d_p}$ for a sequential decision task in which $n$ choices are given to the subject at any step, so processing costs grow exponentially with $d_p$). Instead, having the ability to carry out longer prospections and using it only occasionally would be enough to increase efficiency significantly.
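Drawing prospection lengths from the truncated power law $P(d_p) \propto d_p^{-\gamma}$, $1 \leq d_p \leq 6$, can be sketched as:

```python
import random

random.seed(4)

def sample_dp(gamma, d_max=6):
    """Draw a prospection length from P(d_p) proportional to
    d_p^(-gamma), truncated to 1 <= d_p <= d_max (the cap used
    for the 7x7 maze)."""
    weights = [d ** (-gamma) for d in range(1, d_max + 1)]
    r = random.random() * sum(weights)
    acc = 0.0
    for d, w in enumerate(weights, start=1):
        acc += w
        if r <= acc:
            return d
    return d_max

draws = [sample_dp(gamma=2.0) for _ in range(10_000)]
```

For an intermediate value such as $\gamma = 2$, short paths dominate ($P(1)/P(6) = 36$) while long prospections still occur with non-negligible frequency, which is the regime discussed above.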
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{scores_taus_horizontal.png}
\caption{\textit{Averaged total number of covered patches after the $49$-step trajectory as a function of the memory time $\tau_{m}$ for the virtual walker. Graph a) corresponds to a walker prospecting with a fixed prospection length $d_p$, and graph b) to a walker prospecting with a variable prospection length $d_p$ obtained from a power-law distribution with exponent $\gamma$.}}
\label{fig:gamma}
\end{figure}
By exploring the whole range of $\gamma$ and $\tau_m$ values, we can divide the parameter space into four regions (figure \ref{fig:comp_diag}b), analogously to the division shown in the Main Text for fixed $d_p$. Region I produces an averaged performance that visits fewer patches than the individuals in any of the experimental graphs. Region II produces a performance lying between the results obtained for the Circular Ordered and Circular Disordered geometries. Region III outperforms the Circular Ordered results but not the Rectangular ones. Region IV, finally, outperforms all the experimental results. The regions are equivalent to those obtained for fixed prospection lengths. Again, this shows that distributed values of $d_p$ can be used to obtain higher navigation efficiencies without consuming much more information-processing time.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{diagram_performance_entropy_both.png}
\caption{\textit{Diagram of the walker's covered patches in comparison with the experimental results. Regime $I$ corresponds to a worse averaged performance than all geometries. Regime $II$ corresponds to a better performance than in Circular Disordered. Regime $III$ corresponds to a better performance than in Circular Ordered and Disordered. Regime $IV$ corresponds to a better performance than in all geometries. Graph a) corresponds to a walker prospecting with a fixed prospection length $d_p$, and graph b) to a walker prospecting with a variable prospection length $d_p$ obtained from a power-law distribution with exponent $\gamma$.}}
\label{fig:comp_diag}
\end{figure}
\subsection{Robustness of the power-law exponent for the distribution of decision times}
We have reported in the Main Text that the decision time for the walker, defined as the number of prospected paths required to make $S < S_{th}$, exhibits the same power-law distribution (with exponent $-3$) as the Gaussian working example. The results in Fig. 5 in the Main Text correspond to the values of $\tau_{m}$ and $d_p$ obtained from fits to the experimental data. Here we provide an analysis checking that the $-3$ exponent remains a characteristic feature of the algorithm, independent of the memory and prospection parameters, as well as of the threshold $S_{th}$ used in the algorithm.
First, in Fig. \ref{fig:entropies}a we show the explicit dependence on the entropy threshold, and verify that the power-law behavior is kept as long as reasonable values of this parameter are chosen (extreme choices, with $S_{th} \rightarrow 0$ for example, would modify the results, but we stress that this represents a rather unrealistic case for our purposes). While the behavior of the walker is equivalent for different $S_{th}$, in the results of the Main Text we fix it to $S_{th}=0.5$, so that it can be conveniently applied to all choices in our maze, regardless of whether the algorithm has two, three or four options available at that move. On the other hand, we observe in Fig. \ref{fig:entropies}b that neither variations in $d_p$ nor in $\tau_{m}$ significantly modify the $\sim n^{-3}$ behavior, as long as some significant level of memory and prospection is kept.
We stress that the classical SPRT criterion, as well as other variations we have numerically explored, is unable to reproduce the $-3$ exponent and would lead to much smaller exponents and/or faster (exponential-like) decays in $P(n)$. This, together with the analysis reported here, lends significant support to the robustness of the entropy-threshold criterion proposed here.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{entropy_threshold_both.png}
\caption{\textit{a) Distribution of the number of prospections $n$ performed by the walker to force the entropy $S$ to fall below the threshold $S_{th}$. The parameters $d_p$ and $\tau_m$ are fixed to $d_{p}=6$ and $\tau_{m}=100$, respectively, while different $S_{th}$ are explored. b) Distribution of the number of prospections $n$ performed by the walker to force the entropy $S$ to fall below the threshold $S_{th}$. The parameter $S_{th}$ is fixed to $S_{th}=0.5$, while different $d_p$ and $\tau_m$ are explored.}}
\label{fig:entropies}
\end{figure}
\end{document}
\section{Introduction}
In our daily life, we constantly find ourselves in situations that imply making decisions: what I am going to eat, which film I will see, or whether I am on time for the next bus. In all these situations we need to evaluate the different options available as a way to elucidate the best one. While exploring such situations would lie within the field of psychology, in recent years there has been a growing interdisciplinary interest in decision-making. Determining the neural correlates of decision mechanisms constitutes an important subject in cognitive and behavioral neuroscience \cite{neural1,neural2,brain,biases}. Also, the mathematical study of decision strategies and their comparison to the subjects' performance represents an important subject in game theory and econophysics \cite{game,game2}. Last but not least, ideas from statistical physics and/or complex systems have also made their way into the field; while most contributions to date focus on decision-making at the level of groups or collectives (see \cite{active1,active2,active3,active4,voter1} for some reviews), tentative works suggesting physical principles that could be involved in individual decisions also exist \cite{ortega2013,yukalov2014,schwartenbeck2015,roldan2015,favre2016}.
Up until now, large efforts have been put into understanding the dynamics and the characteristics of perceptual decisions, that is, those where sensory information provides direct evidence for choosing between the options available, as in the famous random dot motion task \cite{dotmot1,dotmot2}. As a result, a correspondence between such sensory information and the neuronal responses responsible for evidence accumulation in the brain is assumed to be somehow identifiable. Alternatively, value-based or preferential decision-making (though the exact definition changes from one specific field to another) involves situations where a deliberative and (up to some level) subjective process is necessary to reach the decision, as for example when a subject is asked to choose between two food items. In such cases neural correlates obviously become more difficult to identify.
We can still introduce a third class of situations in which an objective answer to the task does exist but cannot be reached instantaneously from sensory information, because successive coupled decisions are involved. Following some existing literature (see, e.g., \cite{tartaglia2017,zhang2017} and references therein), we will denote these situations as \textit{sequential} decision-making. Since these obviously require a higher cognitive capacity and a more reflective response by the subject in order to process the information, such situations are essentially restricted to humans (or perhaps some other higher organisms). They include tasks like playing board games such as chess, or solving mazes or tasks presented in some intelligence tests. All these examples involve decisions where a tree-like structure of future possibilities must ideally be built. Hence, in the present work we will use the term \textit{prospection} to denote such hypothetical, or mental, simulations of future events requiring high memory and abstraction capacities \cite{gilbert2007,suddendorf2007,pfeiffer2013}.
For the simpler case of perceptual decision-making, most theoretical frameworks aimed at explaining the underlying mechanisms and dynamics lie within the so-called \textit{accumulator} framework. In it, cognitive evidence (described through some effective stochastic process) is accumulated over time until it reaches a given threshold, which then triggers the decision. The paradigmatic example is the Drift-Diffusion Model (DDM) \cite{ddm1}, where the relative evidence in favour of the different options is assumed to follow a Brownian diffusion process (which introduces cognitive fluctuations, or noise, into the process), with a drift that accounts for the trend towards the correct option. Nowadays, it is widely accepted among psychologists that the success of the DDM is overwhelming \cite{ratcliff,rangel}, though in many cases this requires non-trivial modifications or extensions, such as time-dependent thresholds \cite{ddm_boundaries} or dynamic changes in the drift \cite{fontanesi}. Furthermore, recent works have shown that value-based decisions can also be accommodated within this framework provided that the thresholds are assumed to collapse progressively over time \cite{valuev,roxin}.
In contrast, stochastic mechanisms able to capture the dynamics during sequential decision-making are scarce \cite{nguyen2019} due to their complexity. Here we will provide experimental evidence that those processes in humans, or at least those of a certain type, are compatible with a stochastic framework in which information (quantified through Shannon's entropy) could be implicitly computed by the individual as a way to measure the evidence gathered. To illustrate this, we study the performance of subjects during a particular navigation task through a maze on the computer screen, combined with eye-tracking data to assess the corresponding behavioral dynamics. We do not introduce any explicit costs for prospecting or analyzing information, and there are no time constraints in the task. We thus pose an extreme situation where decisions are mostly driven by optimization of the prospection process, rather than by any speed-accuracy trade-off or any other constraint. This represents an idealized scenario in which the underlying process used by the individuals to reach decisions can be observed without interference from other factors.
In Section \ref{theoretical} we will present our information-based framework and discuss its main conceptual differences with accumulator models used for perceptual decision-making. In Section \ref{sec:res} we will show our experimental results describing the performance of the subjects in the navigation task. Comparing those performances to those of virtual (random-walk) algorithms able to prospect information ideally allows us to infer the level of information that humans actually process during the task. This reveals that human performances can only be explained if prospection is actually being used in the task, and we can even quantify that level of prospection. Next, we explore the statistical properties of the response-time dynamics observed during the task to provide quantitative evidence that human performances are compatible with the entropy-based mechanism proposed here. The conclusions from these results are then discussed in Section \ref{conclusions}, and the experimental and numerical methods employed for the analysis are detailed in Section \ref{sec:methods}.
\section{Theoretical framework}\label{theoretical}
A relevant problem in decision-making is to establish a criterion to identify when we have enough information to discriminate between alternative options, e.g. options $A$ and $B$ (in a binary case). This can be accounted for by sequential analysis. Let $\mathbf{x_n} = \{ x_1, x_2, \ldots, x_n \}$ be a set of independent observations that provide some information about the options. We want to use this set to test the hypothesis $H_A$ (corresponding to the option $A$ being valid). Then we need a criterion to determine whether the set $\mathbf{x_n}$ provides a sufficient level of evidence in favor of (or against) $H_A$, or whether a larger set is required and additional information must be gathered. The solution to this problem, originally developed by Abraham Wald, is the well-known Sequential Probability Ratio Test (SPRT), which can be proved to minimize the size $n$ of the set required to accept or reject the hypothesis with a fixed level of reliability \cite{wald48}. Given the set $\mathbf{x_{n}}$, we can map all its information into the joint probabilities $p_{A,n}$ and $p_{B,n}$ (with $p_{A,n}=1-p_{B,n}$ if the two options are mutually exclusive) that we assign to options $A$ and $B$, respectively. The SPRT criterion then establishes from the corresponding cumulative log-likelihood function
\begin{equation}
W_n= \ln \left( \frac{p_{A,n}}{p_{B,n}} \right)
\label{eq:sprt}
\end{equation}
that a decision can be reliably taken as soon as $W_n$ exceeds (or falls below) a given threshold ($W_{th}$). Consequently, the SPRT criterion establishes that there is a minimum amount of evidence required to decide, and actually the DDM can be seen as a particular continuum implementation of it \cite{bogacz2006,wald48}.
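As a concrete illustration, the SPRT stopping rule for two Gaussian hypotheses (unit variance) can be sketched in a few lines of Python. The function name and parameter values here are ours, chosen purely for illustration:

```python
import random

def sprt(mu_a, mu_b, w_th, n_max=10_000, seed=1):
    """Minimal SPRT sketch for two Gaussian hypotheses (unit variance):
    W_n is the cumulative log-likelihood ratio ln(p_{A,n} / p_{B,n}),
    and a decision is triggered once it leaves the band (-w_th, +w_th)."""
    rng = random.Random(seed)
    w = 0.0
    for n in range(1, n_max + 1):
        # observation drawn under the (true) hypothesis H_A
        x = rng.gauss(mu_a, 1.0)
        # log-likelihood ratio of x under H_A versus H_B
        w += (mu_a - mu_b) * x + (mu_b ** 2 - mu_a ** 2) / 2.0
        if abs(w) >= w_th:
            return ("A" if w > 0 else "B", n)
    return (None, n_max)   # undecided within n_max observations

choice, n_d = sprt(mu_a=0.5, mu_b=0.0, w_th=3.0)
```

Because the increments of $W_n$ have positive mean under $H_A$, the cumulative ratio drifts towards the upper threshold, and the decision time $n_d$ fluctuates from run to run with the sampling noise.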
\begin{figure*}[]
\centering
\includegraphics[width=1\linewidth]{figura_esquema.png}
\caption{\textit{Scheme for the accumulator and reliability mechanisms. (a) Payoff estimation during successive prospected samples $n$. (b) Wald's ratio $W_n$ evolution according to the payoff estimators (the decision is taken at $n_d$ when $W_n$ reaches the threshold $W_{th}$). (c) Shannon's entropy $S_n$ evolution according to the payoff estimators (the decision is taken at $n_d$ when $S_n$ reaches the threshold $S_{th}$).}}
\label{fig:concept}
\end{figure*}
In controlled experiments of perceptual decision-making, the set $\mathbf{x_n}$ corresponds to direct sensory evidence that can be mapped into the probabilities $p_{A,n}, p_{B,n}$ in a relatively easy manner. For example, $\mathbf{x_n}$ can typically account for visual evidence in favour of one of the two options gathered by the subject during the task. However, in sequential decision-making the existence of such a mapping is far less obvious. Still, some relation between sensory evidence and the internal (mental) assessment of probabilities for options $A$ and $B$ is expected to be carried out by the individual. To simplify, we start by assuming that in sequential decision-making subjects mainly use sensory evidence as a way to estimate the average payoff associated with each option. Following, as above, the binary example for simplicity, we denote such estimated payoffs as $E_{A,n}$ and $E_{B,n}$ after the information in $\mathbf{x_n}$ has been gathered. If we are able to find a criterion to determine how the estimations $E_{A,n}$ and $E_{B,n}$ are carried out, this can be translated into a probability map using the Maximum Entropy Principle (MEP) from information theory. According to the prescriptions of the MEP \cite{jaynes1957}, if the only information we have about a stochastic variable (an estimation of a payoff, in this case) consists of its average $E_{i,n}$ (with $i \in \{A,B\}$), then the most neutral (or unbiased) choice of a probability map $p_{i,n}=p_{i,n}(E_{i,n})$ we can build out of it reads
\begin{equation}
p_{i,n}= \frac{e ^{\beta E_{i,n}}}{Z_n},
\label{canonical}
\end{equation}
where $\beta$ is a positive constant (which appears as a Lagrange multiplier when applying the formalism of the MEP) and $Z_n$ a normalization factor that guarantees that $\sum_i p_{i,n} =1$ holds.
Note that this formalism is equivalent to canonical or Maxwell-Boltzmann statistics in statistical physics (except for a minus sign in the exponential, which can be absorbed in the definition of $\beta$). Interestingly, combining (\ref{eq:sprt}) and (\ref{canonical}) leads to $W_n= \beta (E_{A,n} - E_{B,n})$, so the SPRT can be interpreted in this context as a criterion that imposes a threshold on the difference between the estimated payoffs to take the decision.
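The probability map (\ref{canonical}) and the identity $W_n = \beta (E_{A,n} - E_{B,n})$ can be checked numerically with a minimal sketch (all names and values are illustrative):

```python
import math

def canonical_probs(payoffs, beta=1.0):
    """Maximum-entropy (softmax) map from estimated payoffs to
    probabilities: p_i = exp(beta * E_i) / Z."""
    weights = [math.exp(beta * e) for e in payoffs]
    z = sum(weights)
    return [w / z for w in weights]

p_a, p_b = canonical_probs([1.0, 0.0], beta=1.0)
# W = ln(p_A / p_B) reduces to beta * (E_A - E_B) = 1.0 here
w = math.log(p_a / p_b)
```

The normalization factor $Z_n$ cancels in the ratio, which is why $W_n$ depends only on the payoff difference.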
In perceptual decisions for which $\mathbf{x_n}$ translates easily into an estimation of probabilities, and time constraints are strong (these are the most typical experimental conditions used), the criterion to minimize the size of the data set $\mathbf{x_n}$ given by the SPRT represents an adequate solution. However, when time is not a significant constraint and the decision process requires slow and reflective processing, as in sequential decision-making, alternative mechanisms should be explored.
Here, we argue that a plausible mechanism for such situations must be based on assessing the amount of information that the probability map (\ref{canonical}) contains. The most direct way of computing such information is obviously Shannon's entropy $S_n= - \sum_i p_{i,n} \log{(p_{i,n})}$ (where again $i \in \{A,B\}$ for the simplest case of binary decisions). Then, we hypothesize that the easiest way to address sequential decision-making would be to impose a threshold in entropy, $S_{th}$, such that the condition $S_n<S_{th}$ (with $S_n$ obtained, as explained above, from the evidence available) will trigger the decision in favor of the most likely option at that moment. Recall that Shannon's entropy reaches its maximum value when no information is yet available (so $p_{A,n} = p_{B,n}$), and decreases as evidence in favour of one particular option is gained.
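A minimal sketch of this entropy-threshold rule (the helper names and the threshold value are ours, for illustration only):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy S = -sum_i p_i ln(p_i) (in nats)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def ert_triggered(probs, s_th):
    """Entropy-threshold rule sketch: decide as soon as the entropy of
    the current probability map falls below the threshold S_th."""
    return shannon_entropy(probs) < s_th

# Entropy is maximal (ln 2) with no information and shrinks with evidence
assert not ert_triggered([0.5, 0.5], s_th=0.3)
assert ert_triggered([0.95, 0.05], s_th=0.3)
```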
Then, the \textit{evidence accumulation} mechanism typically associated with the SPRT is here replaced by an \textit{entropy refinement} mechanism (see Fig. \ref{fig:concept}). We note that this idea is not completely novel, as similar ideas have been discussed before by other authors \cite{entromec1,entromec2,entromec3}, though to our knowledge this specific criterion and its implications have never been tested experimentally.
\subsection{Working example}
\begin{figure*}[]
\centering
\includegraphics[width=0.99\linewidth]{figura_gaussianas_todas.png}
\caption{\textit{a) Probability distributions for the stochastic variables $\epsilon_{i,n}$ (with $n$ the number of samples and where $i$ labels the options $A$ and $B$). The means $\mu_{A}$, $\mu_{B}$ represent the actual payoffs for each option. b) Evolution of the estimator $E_{i,n}$ as a function of the number of samples $n$. c) Evolution of the cumulative $W_n$ with the number of samples $n$. d) Evolution of Shannon's entropy $S_n$ with the number of samples $n$. e) Probability distribution of the number of samples required to reach $S_{th}$ or $W_{th}$ for the ERT and the SPRT, respectively, and for different distances $d \equiv \mu_{A} - \mu_{B}$.}}
\label{fig:threshold_comparison}
\end{figure*}
We will illustrate some specific properties of the \textit{entropic refinement test} (ERT) through an idealized working example. Our main purpose is to detect some relevant (experimentally measurable) differences in the decision-time dynamics that allow us to discriminate between the ERT and the SPRT.
We will focus on the binary decision case again (though we will relax this assumption below). If the individual has to choose between options $A$ and $B$ (whose actual payoffs read $\mu_A$ and $\mu_B$, respectively), this will be done by successively sampling information from the two options to obtain estimates $E_{A,n}$, $E_{B,n}$ of the corresponding payoffs (with $n$ again representing the number of steps, or samples). At the $i$-th step the piece of information obtained by the individual will consist of two Gaussian variables $\epsilon_{A,i}$, $\epsilon_{B,i}$ with corresponding means $\mu_A$ and $\mu_B$, respectively, and unit variance. Then, the information obtained provides an approximation to the actual values $\mu_A$, $\mu_B$, and the estimated payoffs can be computed through the average over the information sampled, so $E_{A,n}= \frac{1}{n} \sum_{i=1}^{n} \epsilon_{A,i}$ and $E_{B,n}= \frac{1}{n} \sum_{i=1}^{n} \epsilon_{B,i}$. Note that this is in agreement with our assumption above that the individual essentially uses sampling of information to obtain an averaged estimation of the actual payoffs.
Once we have the estimations for the payoffs, we can compute (with the help of (\ref{canonical})) Shannon's entropy over the information sampled, and explore the decision dynamics if a particular threshold $S_{th}$ is used to trigger the decision. The main magnitude we will explore is, as usual in the field, the statistics of decision times, that is, the number of samples $n$ that the individual requires to reach the entropy threshold. Most works in decision-making experiments focus on the average values of the decision time, or alternatively decision-time histograms are fitted to gamma distributions \cite{gamma1,gamma2}. From that analysis, however, it would be extremely difficult to discriminate between different decision mechanisms like the ERT and the SPRT, since an appropriate tuning of the parameters could easily lead to similar estimates from both.
Instead, here we will focus on the behavior at the tail of the probability distribution of decision times. Previous works based on ideas similar to the ERT have suggested that this mechanism can account for power-law distributions of decision times \cite{medina2014,medina2015}. This represents a significant qualitative difference from other mechanisms (such as the SPRT), for which such distributions often decay exponentially.
Accordingly, we carry out numerical experiments using the rules above and determine the distribution of decision times for the ERT and the SPRT, as a function of the parameters $\mu_A$, $\mu_B$ and the thresholds $W_{th}$, $S_{th}$ (see Fig. \ref{fig:threshold_comparison}).
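A simplified version of such a numerical experiment can be sketched as follows; the parameter values are illustrative and do not reproduce the exact settings used for Fig. \ref{fig:threshold_comparison}:

```python
import math
import random

def ert_decision_time(d=1.0, beta=2.0, s_th=0.5, n_max=10_000, rng=random):
    """Sketch of the working example: two Gaussian evidence streams with
    means mu_A = d and mu_B = 0 (unit variance); the decision is taken
    when Shannon's entropy of the softmax map over the running-average
    payoff estimates drops below S_th."""
    sum_a = sum_b = 0.0
    for n in range(1, n_max + 1):
        sum_a += rng.gauss(d, 1.0)
        sum_b += rng.gauss(0.0, 1.0)
        ea, eb = sum_a / n, sum_b / n
        m = max(beta * ea, beta * eb)          # shift for numerical safety
        wa, wb = math.exp(beta * ea - m), math.exp(beta * eb - m)
        pa, pb = wa / (wa + wb), wb / (wa + wb)
        s = -sum(p * math.log(p) for p in (pa, pb) if p > 0)
        if s < s_th:
            return n
    return n_max

rng = random.Random(2)
times = [ert_decision_time(rng=rng) for _ in range(100)]
```

Repeating this over many independent runs yields the decision-time histogram whose tail behavior is compared against the SPRT in the text.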
In summary, we find that the SPRT exhibits a time distribution that depends strongly on the distance between the means of the payoffs $d \equiv \mu_A - \mu_B$ (Fig. \ref{fig:threshold_comparison} e)), and for most situations it eventually decays exponentially (though transient power-law behaviors with exponent $-1.5$ are also found). Instead, for the ERT the distribution exhibits a power law behavior $P(n) \propto n^{-3}$ for a wide range of $d$ and $S_{th}$ values. Remarkably, the power-law behavior with the $-3$ exponent persists when considering decisions between more than two options; in the Supplementary Material file we show equivalent results for decisions between 4 possible options.
So, at this point we have at least one qualitative difference we can use to discriminate between the SPRT and the ERT.
\section{Results}\label{sec:res}
Sequential decision-making requires a mental processing of the acquired information which can be hard to capture through monitoring of brain activity. However, simple situations in which behavioral information somehow reflects such information processing can help infer the actual mechanisms behind it. With this purpose, we have designed a particular navigation task through a maze on the computer screen. Commercial eye-trackers are used during the task to determine where the subjects are gazing, and from that we obtain information about the possible future paths that the subjects are prospecting at each moment.
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\linewidth]{dibujo_version_final.png}
\caption{\textit{Scheme of the experimental setup. First column: visualization of the $49$-node lattice used for the navigation task. The solid lines indicate the bonds allowed between neighbour nodes. Second column: a realization of an individual trajectory within the lattices (the color code denotes time, see legend). Third column: eye fixations obtained during the previous trajectory from eye-tracking data.}}
\label{fig:triored}
\end{figure*}
The subjects are asked to visit the maximum possible number of nodes of a discrete $7\times 7$ lattice on the screen within $49$ moves, starting from the center of the structure. Moves are only allowed between neighbour nodes, and they are carried out by clicking with the mouse on the node to which one wants to move next. Heterogeneities in the lattice and three different levels of visual complexity (Rectangular, Circular Ordered and Circular Disordered, see left column in Fig. \ref{fig:triored}) are introduced in order to evaluate the subjects' performance under different situations. However, all the structures presented to the subjects are topologically identical to facilitate comparison of the results; only the visual representation changes from one to another. Further details about the experimental design and protocol are provided below in Section \ref{sec:methods}.
\subsection{\label{sec:perf} Overall performance in the navigation task}
The overall performance of the individuals is computed as the number of nodes covered during the entire trajectory of $49$ moves (Fig. \ref{fig:performance} a)). For the Rectangular level, the subjects visited on average $37.1 \pm 3.8$ nodes (that is, $75.7\%$ of the total $49$ nodes). For the Circular Ordered level, they covered $29.1 \pm 4.8$ nodes ($59.4\%$), and for the Circular Disordered graph, $26.4 \pm 4.8$ nodes ($53.9\%$). These results confirm that the navigation task (and so the sequential decision-making involved) largely depends on the visual representation of the nodes in the lattice, with more complex representations probably preventing the subjects from planning their trajectories ahead (so suppressing prospection). Furthermore, analyzing the performances as a function of the averaged decision time shows that a higher performance is not a result of spending more time before deciding (Fig. \ref{fig:performance} b)); instead, the difficulty of the task seems to be the driving force explaining those differences (note that the decision time is here defined as the time between consecutive moves).
\subsection{\label{sec:time} Eye-tracking data capture prospection dynamics}
We next analyze the information gathering during the task with the help of the eye-tracking data. We define the distance $d_{b}$ as the minimum number of moves required to go from the current node of the lattice to the one the individual is gazing at. The corresponding distributions of $d_{b}$ are again completely different for the three levels of visual organization (Fig. \ref{fig:performance} c)). It is then clear that the individuals cannot prospect equally in the three cases. While for the Rectangular level a large amount of time is invested in gazing at nearby nodes, for the two Circular levels (especially the Disordered one) frequent gazes at distant nodes are observed. These must be attributed either to (i) distractions caused by the presence of nodes which are close in the screen configuration though not easily accessible from the current one, or (ii) the difficulty of easily identifying the nodes which are available in the next few steps. Ideally, an efficient prospection of the future paths should combine an intensive exploration of closer nodes and a smaller (but non-negligible) exploration of further ones. We illustrate this in the inset of Fig. \ref{fig:performance} c), where the cumulative probability of gazing at nearby nodes (defined as those with $d_{b} \leq 4$) is shown to decrease drastically as a function of the visual difficulty of the task.
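Computing $d_b$ amounts to a shortest-path search on the lattice graph. A minimal breadth-first-search sketch (with a toy adjacency structure standing in for the experimental lattice):

```python
from collections import deque

def graph_distance(adj, start, target):
    """Breadth-first search for the gaze distance d_b: minimum number
    of moves from the current node to the gazed node, given an
    adjacency dict for the (heterogeneous) lattice."""
    if start == target:
        return 0
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        for nb in adj[node]:
            if nb == target:
                return d + 1
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, d + 1))
    return None   # unreachable (isolated by removed bonds)

# Tiny 4-node path graph 0-1-2-3 as a stand-in for the 7x7 lattice
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
assert graph_distance(adj, 0, 3) == 3
```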
\begin{figure*}[]
\centering
\includegraphics[width=1\linewidth]{figura_per+prosp.png}
\caption{\textit{a) Performance of human subjects in the task for the three levels of visual organization presented in Fig. \ref{fig:triored}. b) Averaged decision times for the three levels. c) Distribution of the distance $d_{b}$ between the current node and the nodes gazed at between moves (inset: cumulative probability that the nodes gazed at satisfy $d_{b} \leq 4$). d) Performance of the virtual walker in comparison with the experimental ones, with regions II, III and IV accounting for virtual walker performances better than humans in the R, CO and CD cases, respectively. e) Best fit (lines) to the experimental distribution of performances (symbols) obtained from the virtual walker algorithm (see text for details of the fit). f) Evolution of the performance during the $49$-move trajectories obtained from experimental trajectories (symbols) and virtual walkers with best-fit parameters (lines).}}
\label{fig:performance}
\end{figure*}
\subsection{Quantifying prospection during navigation}
As a way to quantify and refine the ideas above, we compare the subjects' performance in our task to that of virtual subjects following an algorithm which is able to automatically prospect the information of the paths available within a certain number of moves $d_p$ (called the \textit{prospection length}) in the lattice. While a classical random-walk algorithm would select its path completely at random, making uninformed decisions, our virtual subjects (walkers) have the ability to use the information from the paths prospected to avoid revisits to previous nodes as much as possible, thus using a mechanism of self-avoidance. In practice, we follow rules similar to those of the true Self-Avoiding Random Walk \cite{tsaw1,tsaw2,tsaw3} and the Self-Attracting Random Walk \cite{asaw1,asaw2,asaw3} schemes to generate the payoffs $E_{i,n}$ (see Section \ref{sec:methods} for details), so that the procedure described in Section \ref{theoretical} can be applied within the lattice to generate our virtual random walks with prospection.
To enhance its performance, it is necessary that the virtual walker keeps its previous trajectory in memory. To implement this in a realistic way, we consider that previous visits to a node are kept in memory by the walker during a characteristic time $\tau_{m}$. As a result of this finite memory, the corresponding payoffs $E_{i,n}$ will be modified. For large values of $\tau_m$ memory remains untouched, and so all visited sites are remembered. On the contrary, for small $\tau_m$ previous nodes will be forgotten and so all values of $E_{i,n}$ will always be similar, leading to a very homogeneous probability map (note that if all $E_{i,n}$ values are the same, according to (\ref{canonical}) the same probability will apply to all neighbour nodes, and so the virtual walker will behave in a random, uninformed way).
The rules above allow the virtual random walkers to avoid overlaps in their trajectories. However, their performance, contrary to that of human subjects, is independent of the visual organization of the lattice (Rectangular or Circular). Thus, we can use the comparison between both to assess the prospection abilities that are being presumably used by the human subjects in each level of the experiment.
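The following sketch illustrates one plausible implementation of a single move of such a walker. It is not the authors' exact rule set, but combines the softmax map (\ref{canonical}) with a memory that decays on the scale $\tau_m$; all names and parameter values are illustrative:

```python
import math
import random

def walker_step(current, adj, last_visit, t, tau_m=70.0, beta=2.0, rng=random):
    """One move of a virtual walker with finite memory (hypothetical
    sketch): the payoff of each neighbour is penalized by how recently
    it was visited, with memory fading on the scale tau_m, and the next
    node is drawn from the softmax map over those payoffs."""
    neighbours = adj[current]
    payoffs = []
    for nb in neighbours:
        if nb in last_visit:
            # remembered visit: penalty decays with elapsed time
            payoffs.append(-math.exp(-(t - last_visit[nb]) / tau_m))
        else:
            payoffs.append(0.0)   # unvisited nodes keep the baseline payoff
    m = max(beta * e for e in payoffs)
    weights = [math.exp(beta * e - m) for e in payoffs]
    z = sum(weights)
    r, acc = rng.random() * z, 0.0
    for nb, w in zip(neighbours, weights):
        acc += w
        if r <= acc:
            return nb
    return neighbours[-1]

# Short trajectory on a toy 4-node path graph 0-1-2-3
rng = random.Random(3)
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
pos, last_visit, visited = 0, {0: 0}, {0}
for t in range(1, 10):
    pos = walker_step(pos, adj, last_visit, t, rng=rng)
    last_visit[pos] = t
    visited.add(pos)
```

With a large $\tau_m$ visited neighbours stay penalized, biasing the walk towards fresh nodes; with a small $\tau_m$ the payoffs equalize and the walk degenerates into an uninformed random walk, as described in the text.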
By exploring a reasonable range of $d_p$ and $\tau_m$ values in the algorithm, we observe that the parameter phase space can be divided into four regions (see Fig. \ref{fig:performance} d)). In region I the algorithm produces an averaged number of visited nodes lower than that of the individuals in any of the experiments. Region II produces a performance lying between the results obtained for the Circular Ordered and Circular Disordered levels. Region III exceeds the Circular Ordered performance but not the Rectangular one. Region IV, finally, outperforms all the experimental results.
Hence, we conclude that relatively large values of both $\tau_{m}$ and $d_p$ are necessary for the virtual walkers to equal or exceed the performance of the subjects in the Rectangular level. This seems to confirm that the subjects in this case do actually remember the previously visited nodes during the task, and predict future paths efficiently. The prospection ability, in particular, is indispensable to justify the performances seen in the experiments. Instead, for the Circular structures the individuals are probably not able to prospect the paths to distant nodes (information gathering is less efficient, as suggested before in Fig. \ref{fig:performance} c)); in consequence, the value of $d_p$ necessary to reproduce their performance is not necessarily high (though some level of memory $\tau_m$ is still necessary).
Next we determine those values of $d_p$ and $\tau_{m}$ that provide the best fit to the distribution of performances obtained from the experiments (see Fig. \ref{fig:performance} e)). These are (i) $\tau_{m}^{R}=70$, $d_{p}^{R}=5$, (ii) $\tau_{m}^{Co}=7$, $d_p^{Co}=3$, and (iii) $\tau_{m}^{Cd}=5$, $d_p^{Cd}=2$, for the Rectangular (R), the Circular Ordered (Co) and the Circular Disordered (Cd), respectively.
From this, we compare the evolution of the performance throughout the task between humans and the virtual walkers with the fitted parameters (Fig. \ref{fig:performance} f)). The performances increase almost linearly in the beginning, but the growth slows down as time advances and trajectory overlaps consequently appear. The experimental curves (symbols) and those obtained from the virtual walkers (lines) agree almost perfectly. This confirms that the behavior of virtual walkers with prospection reproduces in detail the dynamic performance of human subjects throughout the experiment, and so it provides a reliable approximation for it.
\begin{figure*}[]
\centering
\includegraphics[width=1\linewidth]{figura_eyetracker_exponente.png}
\caption{\textit{Distributions of the time between consecutive moves $t_m$ (a), the gazing times at a given node $t_g$ (b) and the number of different nodes gazed at between consecutive moves $n_g$ (c), obtained from the experimental data. d) Distribution of the number of prospections $n$ performed by the virtual random walker. In all cases, the exponent obtained from a power-law fit to the distributions is highlighted, with the different colors representing the difficulty levels R, CO and CD.}}
\label{fig:ojo}
\end{figure*}
\subsection{Human decisions during maze navigation are compatible with the ERT}
\label{dynamics}
The working example explored in Section \ref{theoretical} yields a power-law scaling (with exponent $-3$) for the tail of the decision-time distributions within the ERT framework. Actually, this result is not specific to that particular example (based on Gaussian estimations of the actual payoffs). Using the virtual random walks with prospection described in the previous section, we obtain exactly the same behavior (Fig. \ref{fig:ojo} d)) under a wide range of parameter values for $d_p$, $\tau_m$ and $S_{th}$, so we can infer that this represents a rather general property of the mechanism proposed (see Section \ref{sec:methods} for further details).
To check if the performance of human subjects in the navigation task shows also the same scaling, we use now the eye-tracking data from the experiments to analyse the distributions of $(i)$ times between consecutive moves in the experiment, $t_m$, $(ii)$ times during which the subjects gaze at the same patch, $t_g$, and $(iii)$ number of different nodes gazed before making the next move, $n_g$. The first one would represent our best estimation of the decision times in the experiment, while the other two are also provided as alternative measures for the sake of completeness.
The results show consistent evidence in favor of a power-law scaling with exponent close to $-3$ for the three cases $t_m$, $t_g$ and $n_g$ (Figs. \ref{fig:ojo} a) to \ref{fig:ojo} c)). Despite the different performances found above (Fig. \ref{fig:performance}) for the subjects in the three levels of organization (Rectangular, Circular Ordered and Disordered), it is remarkable that they all exhibit extremely similar behavior in this case. This suggests that a common underlying mechanism for decision-making is being used by the subjects in the experiment, though the different difficulty of each level also leads to differences in the performances. While the range over which the power-law scaling extends is not very wide (since the decision times in the experiment only span two orders of magnitude), the fits shown are quite robust. Only longer decision times (for which the statistics are not very significant, since very few decisions take so long) show significant departures from it. Furthermore, we noted in Section \ref{theoretical} that the classical SPRT often predicts gamma distributions of decision times with exponential decays, so we remark that this classical framework would be completely unable to explain these results.
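The tail exponent of such distributions can be estimated, for instance, with a Hill-type maximum-likelihood estimator, a standard technique sketched here on synthetic $P(x)\propto x^{-3}$ data rather than on the experimental one:

```python
import math
import random

def powerlaw_exponent(samples, x_min):
    """Maximum-likelihood (Hill-type) estimate of the tail exponent
    alpha for P(x) ~ x^(-alpha), x >= x_min (continuous approximation)."""
    tail = [x for x in samples if x >= x_min]
    return 1.0 + len(tail) / sum(math.log(x / x_min) for x in tail)

# Synthetic data with P(x) ~ x^-3 via inverse-transform sampling:
# the survival function (x/x_min)^-2 gives x = x_min * u^(-1/2), u in (0, 1]
rng = random.Random(4)
data = [(1.0 - rng.random()) ** -0.5 for _ in range(20_000)]
alpha = powerlaw_exponent(data, x_min=1.0)   # close to 3 for this sample
```

A simple log-log regression on the binned histogram gives a comparable estimate, but the likelihood-based form above is less sensitive to the binning choice.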
\subsection{Information statistics at the moment of the decision}
\begin{figure*}[]
\centering
\includegraphics[width=1\linewidth]{figura_acu_entro_v2.png}
\caption{\textit{a) Schematic representation of how to estimate evidence from experimental gazes. The asterisks denote eye fixations, so all fixations lying in the same quadrant as one option (e.g. option A) provide evidence in favor of that option. b) Maximum relative evidence between the options at the moment of the decision. Linear fits (for times larger than $3$ s) are given by $f(x)=0.26x +1.14$ (R), $g(x)=0.26x +1.69$ (CO) and $z(x)=0.16x +2.21$ (CD). c) Shannon's entropy $S_n$ at the moment of the decision. The horizontal lines correspond to the averaged entropy for times $>3$ s ($0.168$ (R), $0.131$ (CO) and $0.130$ (CD)). A statistical test for the null hypothesis that the entropy is non-constant for times $>3$ s yields the p-values $p=0.82$ (R), $p=0.69$ (CO) and $p=0.92$ (CD).}}
\label{fig:entro_ener}
\end{figure*}
To provide further evidence of the compatibility of the experimental results with the ERT against the SPRT, we explicitly plot the estimated cumulative evidence experimentally obtained by the subjects at the moment of the decision. Since we cannot know which paths the subject is really prospecting, as a proxy for this evidence we compute the fraction of time they have been gazing at regions which lie in the same direction as a particular node (see Fig. \ref{fig:entro_ener} a)). Starting from the current node, we divide the lattice into four regions, so each eye fixation that lies in a particular region provides further evidence in favour of the corresponding option (A, B, C or D, for the example in Fig. \ref{fig:entro_ener} a)).
As mentioned above, the SPRT criterion with canonical probabilities (\ref{canonical}) is equivalent to assuming that the decision is triggered once the relative evidence reaches a given threshold. Our data clearly show that the relative evidence estimators computed at the moment of making a decision/move increase monotonically with the time needed to take the decision. So, longer decisions involve longer evidence accumulation (Fig. \ref{fig:entro_ener} b)), which is in clear contradiction with the SPRT.
Instead, the ERT proposes that the decision is triggered by a threshold in the informational (Shannon's) entropy $S_n$. When plotting this entropy (computed from the procedure above) at the moment of the decision, it always reaches an approximately constant value (independent of the extent of the decision time), suggesting that this magnitude is really an invariant for all the decisions (Fig. \ref{fig:entro_ener} c)). The statistical significance of this result has been verified by testing the null hypothesis that the entropy is non-constant (see figure caption for details). Our conclusion is robust basically for intermediate and longer decisions, while the shorter ones ($<2$ seconds) are probably induced by an automatic response of the subject, or may be based on prior information gathered during the previous move, so they deviate from the decision dynamics above (actually, the $-3$ power-law scaling discussed above is essentially obtained for decision times in the same range, too).
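The qualitative contrast between the two panels (evidence growing with decision time versus flat entropy) can be captured by a simple least-squares slope. The numbers below are illustrative values consistent with the fits quoted in the caption of Fig. \ref{fig:entro_ener}, not the experimental data themselves:

```python
def ls_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / sum((x - mx) ** 2 for x in xs)

# Illustrative values: evidence at decision grows with decision time
# (slope ~ 0.26, as in the R-level fit), while entropy stays flat
times = [1.0, 2.0, 3.0, 4.0, 5.0]
evidence = [1.40, 1.66, 1.92, 2.18, 2.44]
entropy = [0.17, 0.16, 0.17, 0.17, 0.16]
```

A clearly positive slope for the evidence, together with a slope compatible with zero for the entropy, is precisely the SPRT-incompatible, ERT-compatible signature discussed above.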
\section{Conclusions}\label{conclusions}
Navigation efficiency in higher organisms (humans, in particular) must take into account the fact that they are able to prospect the future outcomes of their available options, and process the corresponding information in order to reach a decision. Here we have explored this idea within the context of human navigation through mazes when non-local information is available through visual inspection (and so information is processed in a tree-like fashion).
Our analysis (based on comparing the performances of human subjects and those of virtual walkers with the capacity to prospect future paths) provides evidence that prospection is actually being used by the humans, at least in those levels of visual organization that enable it (the Rectangular one, essentially). Besides, an approximate quantitative characterization of that prospection capacity ($d_p$) and the associated memory skills ($\tau_m$) has been obtained, reaching an estimation of the quantity of information that humans actually manage during the task.
Furthermore, the distribution of times between moves, or gazing times, together with the study of the values of the entropy at the moment of the decisions, allows us to conclude that the ERT hypothesis proposed here can account to a significant extent for how information is being processed by the subjects during the task. In this respect, we stress that traditionally mean decision times, as well as the ratio of the times corresponding to choosing option $A$ or $B$ (for binary decisions), have been studied in detail by psychologists. By contrast, the tails of the decision-time distributions are rarely explored in decision-making experiments. Here we have shown that this statistical analysis provides very significant information about the decision dynamics being used in the experiment (and actually non-exponential decays in those distributions clearly seem to indicate that the classical SPRT can hardly be used to explain this kind of tasks/experiments).
Regarding the $-3$ value of the power-law exponent that we have recurrently obtained from the ERT formalism and from experiments, a formal justification of its origin remains unknown for the moment. For the specific navigation task proposed here, note that decision times should be understood as the sum of the times that the individual has been gazing at each node before making a new move. Then, to explain the power-law scaling found for decision times, one should argue that either (i) the distribution of times the subject keeps looking at a given patch, or (ii) the number of patches gazed at between decisions, must have power-law tails. It is, however, the case (Fig. \ref{fig:ojo} a) and \ref{fig:ojo} b)) that both distributions present that scaling, so the underlying mechanism yielding the power-law distribution of decision times is apparently a non-trivial combination of the two. It is not yet clear how general these results may be, or whether they appear as a consequence of the particular conditions used in our experiment. However, we stress that similar results have also been found in other experiments of human navigation through mazes \cite{maze}, which altogether raises the need for a deeper and systematic exploration of these ideas in the future.
Finally, it is remarkable that all this information about sequential decision-making in humans has been obtained simply with the help of eye-tracking data and the monitoring of the decision times exhibited by the subjects on the computer screen, which require only widely available technology. It is likely that combining such methods and data with EEG or other advanced physiological sensors could refine our ideas and provide more reliable estimates of the dynamics during similar tasks, also in more realistic environments than the one used here. We expect that our results will stimulate further research in this line.
\section{\label{sec:methods}Methods}
\subsection{\label{sec:exp}Experimental Design}
$18$ clinically normal adults ($10$ women and $8$ men) aged from $18$ to $45$ carried out the experiment. In the first part of the task, subjects are presented with a discrete $7\times7$ regular lattice on the screen (Fig.~\ref{fig:triored}, upper panel on the left). The patches are linked through bonds connecting them only to neighbour patches ($4$ bonds per node, except at the boundaries, where nodes have only $2$ or $3$). However, we remove a fraction of the bonds between nodes ($20 \%$ of them, always preventing isolated regions in the structure from being formed) in order to introduce some level of heterogeneity in the lattice (Fig.~\ref{fig:triored}, left column), and the nodes are then reorganized in different configurations (Rectangular, Circular Ordered, and Circular Disordered, as mentioned above).
The subjects are asked to visit the maximum number of patches of the resulting lattice within $49$ moves, starting from the center of the structure (one step is defined as a transition between connected nodes in the graph). They are not required to complete the trajectory in any given time, so time constraints are absent from the task and information processing can be extended as much as the subject desires. They move to neighbour nodes in the lattice by clicking with the mouse on the patch to which they want to move next (Fig.~\ref{fig:triored}, middle columns, shows some realizations of the resulting trajectories). The heterogeneity of the lattice then makes the process non-trivial (for a homogeneous regular lattice the optimal strategy would simply be to perform a ladder-like trajectory until covering all nodes).
To facilitate visualization of the options available at each decision (especially in the Circular Disordered case, where visualization can be demanding), the current node of the individual was depicted in a different color (green, with the rest of the nodes appearing in blue) and the possible moves available at each moment were emphasized (with thicker solid lines). In contrast, the subjects have no visual guides to distinguish previously visited from non-visited patches, so they can only use their memory skills to avoid overlaps and increase their performance.
To assess the subjects' performance under different levels of difficulty, the nodes of the Rectangular lattice are then visually reorganized in a circular way. In one case (Circular Ordered), we keep in the circle the order of the rows of the original rectangular graph (Fig.~\ref{fig:triored}, middle row). For the other (Circular Disordered), we place the nodes following a circular structure but with a random reorganization of nodes (Fig.~\ref{fig:triored}, lower row). We remark that the three structures are topologically equivalent, while visually different. Additionally, we rotated the rectangular structure (with its corresponding Circular Ordered and Circular Disordered reorganizations) by $90^{\circ}$, $180^{\circ}$ and $270^{\circ}$ to randomize the task (so $12$ cases in total, all with the same topological structure, are presented to each subject). The final data set then comprised $216$ trajectories with a mean duration of $77.1 \pm 2.9$ s each.
As a proxy for information prospection during the task, we use eye fixations measured with a commercial eye-tracker (Tobii X2-30, at 30 Hz). An eye fixation corresponds to the visual gaze on a single location of the screen. See the right column of Figure \ref{fig:triored} for an example of a visual trajectory for each structure. We use these data to analyze (i) the number of nodes at which the subject gazes between consecutive steps, and (ii) the time the subject remains gazing at particular patches. For this, each node was assumed to be represented by a circle of radius $0.05$ (the screen size being equal to $1$) around the center of the node, so all eye fixations lying within the circle are assumed to indicate that the subject is gazing at that particular node. This circle size prevents a fixation from being assigned to more than one node at a time.
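The fixation-to-node assignment just described can be sketched as follows; only the radius $0.05$ is taken from the text, while the layout of the node centres on the unit screen is a hypothetical choice of ours:

```python
import math

NODE_RADIUS = 0.05  # radius on the unit screen, as in the Methods

def gazed_node(fixation, node_centres, radius=NODE_RADIUS):
    """Return the index of the node whose circle contains the fixation,
    or None if the fixation falls outside every circle."""
    fx, fy = fixation
    for i, (cx, cy) in enumerate(node_centres):
        if math.hypot(fx - cx, fy - cy) <= radius:
            return i
    return None

# A hypothetical 7x7 grid on the unit square with 0.1 margins; the spacing
# (0.8/6 > 2 * 0.05) guarantees the circles do not overlap.
centres = [(0.1 + 0.8 * c / 6, 0.1 + 0.8 * r / 6)
           for r in range(7) for c in range(7)]
print(gazed_node((0.1, 0.1), centres))  # fixation on node 0
```

Because the circles are disjoint, a fixation maps to at most one node, which is the property the radius was chosen for.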
\subsection{Virtual walkers with prospection}
An algorithm generating virtual random walks with prospection over the $7\times7$ lattice used in the experiment is proposed as a reference model against which to compare the experimental data. Our virtual walkers estimate the convenience of moving to a neighbour node $j$ by assigning successive values $ \epsilon_{j,1}, \epsilon_{j,2}, \ldots$ to that node, obtained by prospecting hypothetical paths that would use that node as a starting point. Thus, at each time step the walker prospects one particular path (chosen at random from all the possible ones) of fixed length $d_p$ (\textit{prospection length}) starting from each of the neighbour nodes. The specific value $\epsilon_{j,n}$ assigned to the $n$-th prospected path for the neighbour node $j$ corresponds to the fraction of non-visited nodes that the path would cover, with $\epsilon_{j,n}=1$ representing a prospected path for which all sites are still unvisited, and $\epsilon_{j,n}=0$ representing a path for which all nodes have already been visited before. The corresponding payoff associated with that neighbour node $j$ (after $n$ paths have been prospected) then reads $E_{j,n} = \frac{1}{n} \sum_{i=1}^{n} \epsilon_{j,i}$, in analogy with the working example discussed above.
Once payoffs have been defined, the procedure described in Section \ref{theoretical} can be applied within the lattice to generate our virtual random walks. After each single prospection of one path in each direction, the walker computes the corresponding Shannon entropy $S_n=-\sum_{j} p_{j,n} \ln p_{j,n}$, where the sum runs over the neighbour nodes $j$; if the computed value falls below a fixed threshold $S_{th}$, the walker makes the decision (that is, a move) according to the probabilities $p_{j,n}$ computed (we have checked that choosing instead the node with the highest probability leads to very similar results). On the contrary, if $S_n >S_{th}$ then the prospection process continues. However, in practice we introduce a rule limiting the maximum number of prospections to $100$, to avoid (extremely unusual) situations in which $S_n$ would never decay below $S_{th}$ because all options persistently exhibit very similar payoffs (this rule does not significantly modify any of the results reported here).
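As an illustration, a minimal sketch of one such decision step follows. The mapping from payoffs to probabilities ($p_j$ proportional to $E_{j,n}$) and the threshold value used below are assumptions of ours, made for concreteness:

```python
import math
import random

def decide(neighbours, prospect_payoff, s_th=0.5, max_prospections=100):
    """One decision of the virtual walker: each round, prospect one random
    path per neighbour, update the mean payoffs E_j, and move as soon as
    the Shannon entropy of the normalised payoffs drops below s_th.

    prospect_payoff(j) must return the fraction epsilon of unvisited nodes
    on a randomly drawn path of length d_p starting at neighbour j."""
    sums = {j: 0.0 for j in neighbours}
    for n in range(1, max_prospections + 1):
        for j in neighbours:
            sums[j] += prospect_payoff(j)
        total = sum(sums.values())
        if total == 0:               # nothing unvisited anywhere: move at random
            return random.choice(neighbours), n
        p = {j: sums[j] / total for j in neighbours}   # assumed p_j ∝ E_j
        entropy = -sum(q * math.log(q) for q in p.values() if q > 0)
        if entropy < s_th:
            r, acc = random.random(), 0.0              # sample the move from p
            for j in neighbours:
                acc += p[j]
                if r <= acc:
                    return j, n
            return neighbours[-1], n                   # float-rounding guard
    return max(p, key=p.get), max_prospections  # cap hit: options too similar
```

When one option clearly dominates, the entropy drops below the threshold after very few prospections; when all options look alike, the entropy stays near its maximum and the prospection cap is reached, mirroring the rule described in the text.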
\textbf{Distributed prospection lengths.}
Assigning a constant prospection length $d_p$ to all the prospected paths may seem rather unrealistic. Human subjects are expected instead to prospect paths with different lengths depending on the specific situation (complexity, number of choices available, etc). The results in Fig. \ref{fig:ojo} b) also support this, as the number of gazed patches exhibits a variation which spans almost one order of magnitude.
We have then studied our virtual random-walk algorithm for the case when a distribution of $d_p$ is introduced instead of a constant value. We have tried in particular a distribution $P(d_{p}) \propto \frac{1}{d_{p}^{\gamma}}$ (for $d_p \geq 1$ and with $\gamma$ ranging from $0$ to $\infty$), with $\sum_{d_{p}=1}^{\infty} P(d_{p}) =1$ to guarantee normalization. The results, summarized in the Supplementary Material file, clearly show that the conclusions obtained in this way are qualitatively the same as those presented for fixed $d_p$ values in the main text.
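Sampling prospection lengths from the normalised power law can be sketched as follows; the truncation of the infinite support at a cutoff is a practical choice of ours, not part of the text:

```python
import bisect
import itertools
import random

def make_dp_sampler(gamma, dmax=10_000):
    """Return a sampler for P(d_p) ∝ d_p^(-gamma), d_p = 1, 2, ...
    The support is truncated at dmax; for gamma > 1 the neglected
    tail mass is negligible for this cutoff."""
    weights = [d ** -gamma for d in range(1, dmax + 1)]
    cdf = list(itertools.accumulate(weights))
    total = cdf[-1]

    def sample(rng=random):
        # invert the (unnormalised) CDF with a binary search
        u = rng.random() * total
        return bisect.bisect_left(cdf, u) + 1   # d_p values start at 1

    return sample

sampler = make_dp_sampler(gamma=2.0)
draws = [sampler() for _ in range(10_000)]
```

For $\gamma = 2$ the normalisation constant is close to $\zeta(2)$, so roughly $61\%$ of the draws equal $1$, with a heavy tail of much longer prospected paths.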
\textbf{Robustness of the distribution of decision times with respect to the entropy threshold $S_{th}$.}
We have reported above that the decision times of the walker exhibit a power-law distribution with exponent $-3$. An analysis checking that this exponent remains approximately constant, independently of the memory and prospection parameters $d_p$ and $\tau_m$, as well as of the threshold $S_{th}$, has been carried out with our virtual random-walk algorithm. According to the results found (see the Supplementary Material file), the conclusions reached in the article remain quite robust. Only when very large or very small values of $S_{th}$ are considered (which would represent the case in which decisions are taken almost immediately, without barely any information gathering, or in which an extremely large amount of information is necessary to trigger the decision) does the $\sim n^{-3}$ scaling break down.
\section{Acknowledgements}
This research was supported by the Spanish government
through Grant No. CGL2016-78156-C2-2-R.
\section{Introduction}
A del Pezzo surface of degree four $X$ over a number field $k$ is a smooth projective surface in $\mathbb{P}^4$ given by the complete intersection of two quadrics defined over $k$. Such surfaces form the simplest class of del Pezzo surfaces that have a positive-dimensional moduli space and for which interesting arithmetic phenomena occur.
Indeed, del Pezzo surfaces of degree at least 5 with a $k$-point are birational to $\mathbb{P}^2_{k}$ and, in particular, have a trivial Brauer group; they satisfy the Hasse principle and weak approximation. The Brauer group $\Br X = \HH_{\text{\'{e}t}}^2(X,\mathbb{G}_m)$ of $X$ is a birational invariant which encodes important arithmetic information, such as failures of the Hasse principle and weak approximation, via the Brauer--Manin obstruction. We refer the reader to \cite[\S8.2]{P17} for an in-depth description of this obstruction. The image $\Br_0 X$ of the natural map $\Br k \rightarrow \Br X$ plays no r\^{o}le in detecting a Brauer--Manin obstruction, and thus one may consider the quotient $\Br X / \Br_0 X$ instead of $\Br X$. We say that $X$ has a trivial Brauer group when this quotient vanishes.
In contrast to del Pezzo surfaces of higher degree, the Hasse principle may fail for del Pezzo surfaces of degree four \cite{JS17}. Yet, they form a tractable class. Colliot-Th\'el\`ene and Sansuc conjectured in \cite{CTS80} that all failures of the Hasse principle and weak approximation are explained by the Brauer--Manin obstruction. This has been established conditionally for certain families (\cite{Wit07}, \cite{VAV14}).
In \cite{VAV14} V\'arilly-Alvarado and Viray proved that del Pezzo surfaces of degree four that are everywhere locally soluble have a vertical Brauer group. In particular, given a Brauer element $\mathcal{A}$, they show that there is a genus one fibration $g$, with at most two reducible fibres, for which $\mathcal{A}\in g^*(\Br(k(\mathbb{P}^1)))$. The aim of this paper is to study this fibration in detail for a special family of quartic del Pezzo surfaces which we investigated from an arithmetic and analytic point of view in \cite{MS20}.
Let $\mathbf{a} = (a_0, \dots, a_4)$ be a quintuple with coordinates in the ring of integers $O_k$ of $k$. Define $X_\mathbf{a} \subset \mathbb{P}_k^4$ by the complete intersection
\begin{equation}
\label{eq:dP4 main}
\begin{split}
x_0x_1 - x_2x_3 = 0, \\
a_0x_0^2 + a_1x_1^2 + a_2x_2^2 + a_3x_3^2 + a_4x_4^2 = 0
\end{split}
\end{equation}
and we shall assume from now on that $X_\mathbf{a}$ is smooth. The latter is equivalent to $(a_0a_1 - a_2a_3)\prod_{i = 0}^4 a_i \neq 0$. This altogether gives the following family of interest to us in this article:
\[
\mathcal{F} = \{X_\mathbf{a} \mbox{ as in } \eqref{eq:dP4 main} \ : \ \mathbf{a} \in O_k^5 \mbox{ and } (a_0a_1 - a_2a_3)\prod_{i = 0}^4 a_i \neq 0\}.
\]
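The smoothness criterion can be probed pointwise via the Jacobian criterion. The following sympy snippet is a sanity check of ours (not part of the text): it exhibits a singular geometric point on a surface with $a_0a_1 - a_2a_3 = 0$, and a smooth point on a surface satisfying the criterion.

```python
import sympy as sp

x = sp.symbols('x0:5')
a = sp.symbols('a0:5')

def smooth_at(avals, pt):
    """Jacobian criterion: X_a is smooth at pt iff the 2x5 Jacobian of
    the two defining quadrics has rank 2 there."""
    q1 = x[0]*x[1] - x[2]*x[3]
    q2 = sum(a[i]*x[i]**2 for i in range(5))
    J = sp.Matrix([[sp.diff(q, xi) for xi in x] for q in (q1, q2)])
    subs = {**dict(zip(a, avals)), **dict(zip(x, pt))}
    assert q1.subs(subs) == 0 and q2.subs(subs) == 0, "point not on X_a"
    return J.subs(subs).rank() == 2

# a = (1, 4, 2, 2, 1) has a0*a1 - a2*a3 = 0: the (geometric) point below is
# singular, illustrating why the factor a0*a1 - a2*a3 must not vanish.
print(smooth_at((1, 4, 2, 2, 1),
                (2, 1, -sp.I*sp.sqrt(2), sp.I*sp.sqrt(2), 0)))  # False
# a = (1, 1, 1, 1, 1) satisfies the criterion; this sample point is smooth.
print(smooth_at((1, 1, 1, 1, 1), (1, 1, 1, 1, 2*sp.I)))         # True
```

At the singular point the gradient of the second quadric is a scalar multiple of that of the first, so the Jacobian drops to rank 1.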
There are numerous reasons behind our choice of this family. Firstly, surfaces in $\mathcal{F}$ admit two distinct conic bundle structures, making their geometry and hence their arithmetic considerably more tractable. Moreover, for such surfaces the conjecture of Colliot-Th\'{e}l\`{e}ne and Sansuc is known to hold unconditionally \cite{CT90}, \cite{Sal86}. Secondly, our surfaces can be thought of as an analogue of diagonal cubic surfaces as they also satisfy the interesting equivalence of $k$-rationality and trivial Brauer group. This is shown in Lemma~\ref{rational} which is parallel to \cite[Lem.~1.1]{CTKS}.
Our aim is to take advantage of the two conic bundle structures present in the surfaces to give a thorough description of a genus one fibration with two reducible fibres for which a Brauer element is vertical. More precisely, after studying the action of the absolute Galois group on the set of lines on the surfaces, we show that the two reducible fibres are of type $I_4$ and that the field of definition of the Mordell--Weil group of the associated elliptic surface depends on the order of the Brauer group modulo constants which in our case is 1, 2 or 4 \cite{Man74}, \cite{SD93}. The presence of the two conic bundle structures plays an important r\^ole forcing a bound on the degree and shape of the Galois group of the field of definition of the lines. We show in Theorem~\ref{thm:MWBrauer} that surfaces with Brauer group of size 2 are such that the genus one fibration only admits a section over a quadratic extension of $k$, while those with larger Brauer group, namely of order 4, have a section for the genus 1 fibration already defined over $k$.
\begin{theorem}
\label{thm:MWBrauer}
Let $X_\mathbf{a} \in \mathcal{F}$ and let $\mathcal{E}$ be the genus one fibration on $X_\mathbf{a}$ described in \S4.2. Then the following hold.
\begin{enumerate}[label=\emph{(\roman*)}]
\item If $\Br X_\mathbf{a} / \Br_0 X_\mathbf{a} \simeq \mathbb{Z}/2\mathbb{Z}$, then the genus 1 fibration $\mathcal{E}$ is an elliptic fibration, i.e., admits a section, over a quadratic extension. Moreover, it admits a section of infinite order over a further quadratic extension. The Mordell--Weil group of $\mathcal{E}$ is fully defined over at most a third quadratic extension.
\item If $\Br X_\mathbf{a} / \Br_0 X_\mathbf{a} \simeq (\mathbb{Z}/2\mathbb{Z})^2$, then $\mathcal{E}$ is an elliptic fibration with a 2-torsion section and a section of infinite order over $k$. Moreover, the full Mordell--Weil group of $\mathcal{E}$ is defined over a quadratic extension.
\end{enumerate}
\end{theorem}
Not surprisingly, this is in consonance with the bounds obtained in our earlier paper \cite[\S1]{MS20} when $k = \mathbb{Q}$, as surfaces with Brauer group of size 2 are generic in the family while those with larger Brauer group are special.
This paper is organized as follows. Section~\ref{theconics} contains some generalities on quartic del Pezzo surfaces that admit two conic bundles. There we also describe the two conic bundles on the surfaces of interest to us. Section~\ref{sec:lines} is devoted to the study of the action of the absolute Galois group on the set of lines on $X_\mathbf{a}$. We have also included there a description of the Brauer elements in terms of lines, by means of results of Swinnerton-Dyer, giving the tools to describe, in Section~\ref{thegenus1}, a genus one fibration with exactly two reducible fibres for which a Brauer element is vertical.
\begin{acknowledgements}
We would like to thank Martin Bright, Yuri Manin and Bianca Viray for useful discussions. We are grateful to the Max Planck Institute for Mathematics in Bonn and the Federal University of Rio de Janeiro for their hospitality while working on this article. Cec\'ilia Salgado was partially supported by FAPERJ grant E-26/202.786/2019, Cnpq grant PQ2 310070/2017-1 and the Capes-Humboldt program.
\end{acknowledgements}
\section{Two conic bundles}\label{theconics}
Let $X$ be a quartic del Pezzo surface over a number field $k$. From this point on we assume that $X$ is $k$-minimal and, moreover, that it admits a conic bundle structure over $k$. It follows from \cite{Isk71} that there is a second conic bundle structure on $X$. In this context, a line $L\subset X$ plays simultaneously the r\^ole of a fibre component and of a section, depending on the conic bundle considered.
Fix a separable closure $\bar{k}$ of $k$. In what follows we analyse the possible orbits of lines under the action of the absolute Galois group $\Gal(\bar{k}/k)$ when $\Br X \neq \Br_0 X$, in the light of the presence of two conic bundle structures over $k$. Firstly, we recall \cite[Prop.~13]{BBFL07}, which tells us the possible sizes of the orbits of lines. In the statement of this proposition the authors consider a quartic del Pezzo surface over $\mathbb{Q}$, but its proof establishes the result for a del Pezzo surface of degree four over any number field.
\begin{lemma}[{\cite[Prop.~13]{BBFL07}}]\label{lem:BBFL}
\label{lem:sizes}
Let $X$ be a del Pezzo surface of degree four over $k$. Assume that $\Br X / \Br_0 X$ is not trivial. Then the sizes of the $\Gal(\bar{k}/k)$-orbits of lines on $X$ are given by one of the following partitions:
\[
(2,2,2,2,2,2,2,2), (2,2,2,2,4,4), (4,4,4,4), (4,4,8), (8,8).
\]
\end{lemma}
\begin{remark}\label{rmk:conic_orbit}
Recall that we have assumed that $X$ is minimal. In particular, every orbit contains at least two lines that intersect. Since each conic bundle is defined over $k$ and the absolute Galois group acts on the Picard group preserving intersection multiplicities, we can conclude further that each orbit is formed by conic bundle fibre(s). In other words, if a component of a singular fibre of a conic bundle lies in a given orbit, then the other component of the same fibre also lies in it.
\end{remark}
\subsection{A special family with two conic bundles}
We now describe the two conic bundle structures over $k$ on the del Pezzo surfaces given by \eqref{eq:dP4 main}. It suffices to consider $\mathbb{F}(1, 1, 0) = \mathbb{P}(\mathcal{O}_{\mathbb{P}^1}(1)\oplus\mathcal{O}_{\mathbb{P}^1}(1)\oplus\mathcal{O}_{\mathbb{P}^1})$, which one can think of as $((\AA^2 \setminus 0) \times (\AA^3 \setminus 0))/ \mathbb{G}_m^2$, where $\mathbb{G}_m^2$ acts on $(\AA^2 \setminus 0) \times (\AA^3 \setminus 0)$ as follows:
\[
(\lambda, \mu) \cdot (s, t; x, y, z) = (\lambda s, \lambda t; \frac{\mu}{\lambda}x, \frac{\mu}{\lambda} y, \mu z).
\]
The map $\mathbb{F}(1, 1, 0) \rightarrow \mathbb{P}^4$ given by $(s, t; x, y, z) \mapsto (sx: ty: tx: sy: z)$ defines an isomorphism between $X_\mathbf{a}$ and
\begin{equation}
\label{eqn:conic bundle}
(a_0s^2 + a_2t^2)x^2 + (a_3s^2 + a_1t^2)y^2 + a_4z^2 = 0 \subset \mathbb{F}(1, 1, 0).
\end{equation}
A conic bundle structure $\pi_1 : X_\mathbf{a} \rightarrow \mathbb{P}^1$ on $X_\mathbf{a}$ is then given by the projection to $(s, t)$.
Similarly, one obtains $\pi_2 : X_\mathbf{a} \rightarrow \mathbb{P}^1$ via the parametrization $(s, t; x, y, z) \mapsto (tx: sy: ty: sx: z)$. It gives a second conic bundle structure on $X_\mathbf{a}$, as shown by the equation
\begin{equation}
\label{eqn:conic bundle2}
(a_0t^2 + a_3s^2)x^2 + (a_1s^2 + a_2t^2)y^2 + a_4z^2 = 0 \subset \mathbb{F}(1, 1, 0).
\end{equation}
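Both parametrizations can be verified mechanically. The following sympy computation (a sanity check of ours, not part of the text) substitutes each map into the defining quadrics of \eqref{eq:dP4 main} and recovers the two conic bundle equations:

```python
import sympy as sp

s, t, x, y, z = sp.symbols('s t x y z')
a0, a1, a2, a3, a4 = sp.symbols('a0:5')

def pullback(pt):
    """Substitute a point of F(1,1,0) into the two defining quadrics of X_a."""
    x0, x1, x2, x3, x4 = pt
    q1 = sp.expand(x0*x1 - x2*x3)
    q2 = sp.expand(a0*x0**2 + a1*x1**2 + a2*x2**2 + a3*x3**2 + a4*x4**2)
    return q1, q2

# first map: (s,t; x,y,z) -> (sx : ty : tx : sy : z)
q1, q2 = pullback((s*x, t*y, t*x, s*y, z))
assert q1 == 0
assert q2 == sp.expand((a0*s**2 + a2*t**2)*x**2 + (a3*s**2 + a1*t**2)*y**2 + a4*z**2)

# second map: (s,t; x,y,z) -> (tx : sy : ty : sx : z)
q1, q2 = pullback((t*x, s*y, t*y, s*x, z))
assert q1 == 0
assert q2 == sp.expand((a0*t**2 + a3*s**2)*x**2 + (a1*s**2 + a2*t**2)*y**2 + a4*z**2)
print("both pullbacks match the conic bundle equations")
```

In each case the first quadric vanishes identically, and the second pulls back to the displayed binary-quadratic form in $(s,t)$ times $x^2$, $y^2$ and $z^2$.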
This puts us in a position to refine Lemma \ref{lem:BBFL} upon restricting our attention to surfaces in the family $\mathcal{F}$.
\begin{lemma}\label{lem:refine}
Let $X$ be a $k$-minimal del Pezzo surface of degree four described by equation \eqref{eq:dP4 main}. Then the sizes of the $\Gal(\bar{k}/k)$-orbits of lines on $X$ are given by one of the following partitions:
\[
(2,2,2,2,2,2,2,2), (2,2,2,2,4,4), (4,4,4,4).
\]
\end{lemma}
\begin{proof}
We only have to eliminate the possibility of orbits of size 8. One sees readily from \eqref{eqn:conic bundle} and \eqref{eqn:conic bundle2} that the singular fibres of each conic bundle lie over points defined by quadratic equations in $(s:t)$, and that their components are obtained by extracting one further square root; hence each line on $X$ is defined over at most a biquadratic extension of $k$, and its orbit has size at most 4.
\end{proof}
\section{Lines and Brauer elements}
\label{sec:lines}
Following Swinnerton-Dyer \cite{SD99}, we detect the double fours that give rise to Brauer classes. Firstly, we show that a del Pezzo surface of degree 4 given by \eqref{eq:dP4 main} has a trivial Brauer group if and only if it is rational over the ground field (see Lemma~\ref{rational}). In particular, no $k$-minimal del Pezzo surface of degree 4 given by \eqref{eq:dP4 main} has a trivial Brauer group. We take a step further after Lemma \ref{lem:refine} and note that for a del Pezzo surface of degree 4 with a conic bundle structure the sizes of the orbits of lines are determined by the order of the Brauer group (but, of course, not vice versa: a surface with eight pairs of conjugate lines, for example, can have either trivial or non-trivial Brauer group). On the other hand, if one assumes that the Brauer group is non-trivial, then the size of the orbits does determine that of the Brauer group (see Lemma~\ref{sizeorbit}). Moreover, given a non-trivial Brauer element, we describe in detail a genus one fibration with exactly two reducible fibres, as in \cite{VAV14}, for which this element is vertical. We obtain a rational elliptic surface by blowing up four points, namely two singular points of fibres of the conic bundle \eqref{eqn:conic bundle} together with two singular points of fibres of the conic bundle \eqref{eqn:conic bundle2}. The field of definition of the Mordell--Weil group of the elliptic fibration is determined by the size of the Brauer group of $X_\mathbf{a}$. In general, it is fully defined over a biquadratic extension. We also show that the reducible fibres are both of type $I_4$.
\subsection{Conic bundles and lines}
Let $X_\mathbf{a}$ be given by \eqref{eq:dP4 main}. Then it admits two conic bundle structures given by \eqref{eqn:conic bundle} and \eqref{eqn:conic bundle2}. Each conic bundle has two pairs of conjugate singular fibres, with Galois group $(\mathbb{Z}/2\mathbb{Z})^2$ acting on the 4 lines that form each of the two pairs. The intersection behaviour of the lines on $X_\mathbf{a}$ is described in Figure \ref{intersectionlines}. Together, these 8 pairs of lines give the 16 lines on $X_\mathbf{a}$.
We now fix notation for the lines. Given $i\in \{1, \dots, 4\}$, the two lines $L_i^{+}$ and $L_i^{-}$ will denote the components of a singular fibre of the conic bundle \eqref{eqn:conic bundle}. Similarly, the two lines $M_i^{+}$ and $M_i^{-}$ will denote the components of a singular fibre of the conic bundle \eqref{eqn:conic bundle2}. More precisely, using the variables $(x_0: x_1: x_2: x_3 : x_4)$ to describe the conic bundles, we have the following:
\tagsleft@true
\begin{align*}
\tag{$L_1^{\pm}$} \frac{x_0}{x_2}=\frac{x_3}{x_1}= -\sqrt{-\frac{a_2}{a_0}}, \quad x_4&=\pm \sqrt{\frac{d}{-a_0a_4}}x_1, \\
\tag{$L_2^{\pm}$} \frac{x_0}{x_2}=\frac{x_3}{x_1}= \sqrt{-\frac{a_2}{a_0}}, \quad x_4&=\pm\sqrt{\frac{d}{-a_0a_4}}x_1,\\
\tag{$L_3^{\pm}$} \frac{x_0}{x_2}=\frac{x_3}{x_1}= -\sqrt{-\frac{a_1}{a_3}}, \quad x_4&=\pm \sqrt{\frac{d}{a_3a_4}}x_2, \\
\tag{$L_4^{\pm}$} \frac{x_0}{x_2}=\frac{x_3}{x_1}= \sqrt{-\frac{a_1}{a_3}}, \quad x_4&=\pm \sqrt{\frac{d}{a_3a_4}}x_2, \\
\tag{$M_1^{\pm}$} \frac{x_3}{x_0}=\frac{x_1}{x_2}= -\sqrt{-\frac{a_0}{a_3}}, \quad x_4&=\pm \sqrt{\frac{d}{a_3a_4}}x_2,\\
\tag{$M_2^{\pm}$} \frac{x_3}{x_0}=\frac{x_1}{x_2}= \sqrt{-\frac{a_0}{a_3}}, \quad x_4&=\pm \sqrt{\frac{d}{a_3a_4}}x_2,\\
\tag{$M_3^{\pm}$} \frac{x_3}{x_0}=\frac{x_1}{x_2}= -\sqrt{-\frac{a_2}{a_1}}, \quad x_4&=\pm \sqrt{\frac{d}{-a_1a_4}}x_0,\\
\tag{$M_4^{\pm}$} \frac{x_3}{x_0}=\frac{x_1}{x_2}= \sqrt{-\frac{a_2}{a_1}}, \quad x_4&=\pm \sqrt{\frac{d}{-a_1a_4}}x_0.
\end{align*}
\tagsleft@false
One can readily determine the intersection behaviour of these lines, which we describe in Lemma \ref{lemma:fours}. We also take the opportunity to identify fours and double fours defined over small field extensions. Recall that a \emph{four} in a del Pezzo surface of degree 4 is a set of four skew lines that do not all intersect a fifth one. A \emph{double four} is a four together with the four lines that meet three lines from the original four \cite[Lemma~10]{SD93}.
\begin{lemma}\label{lemma:fours}
Let $i, j,k,l \in \{1,\cdots, 4\}$ with $j\neq i$. Consider $L_i^{+}, L_i^{-}, M_i^{+}$ and $M_i^{-}$ as above. Then
\begin{enumerate}[label=\emph{(\alph*)}]
\item $L_i^{+}$ intersects $L_i^{-}, M_i^{-}$ and $ M_j^{+}$, while $L_i^{-}$ intersects $L_i^{+}, M_i^{+}$, and $M_j^{-}$.
\item $M_i^{+}$ intersects $M_i^{-}, L_i^{-}$ and $L_j^{+}$, while $M_i^{-}$ intersects $M_i^{+}, L_i^{+}$ and $L_j^{-}$.
\item The lines $L_i^{+},L_j^{+},M_k^{-},M_l^{-}$ and the lines $L_i^{-},L_j^{-},M_k^{+},M_l^{+}$, with $i+j \equiv k+l\equiv 3 \bmod 4$, form two fours defined over the same field extension $L/ k$ of degree at most 2. Together they form a double four defined over $k$.
\end{enumerate}
\end{lemma}
\begin{proof}
Statements (a) and (b) are obtained by direct calculations. For the line $L_1^{+}$, for instance, one sees readily that it intersects $L_1^{-}, M_1^{-}, M_2^{+},M_3^{+}$ and $M_4^{+}$ respectively at the points $(-\sqrt{\frac{-a_2}{a_0}}:0:1:0:0),(-\sqrt{\frac{-a_2}{a_0}}:-\sqrt{\frac{-a_0}{a_3}}:1:-\sqrt{\frac{a_2}{a_3}}:-\sqrt{\frac{d}{{a_4a_3}}}),(-\sqrt{\frac{-a_2}{a_0}}:\sqrt{\frac{-a_0}{a_3}}:1:\sqrt{\frac{a_2}{a_3}}:\sqrt{\frac{d}{{a_4a_3}}}),(-\sqrt{\frac{-a_2}{a_0}}:-\sqrt{\frac{-a_2}{a_1}}:1:\frac{a_2}{\sqrt{a_0a_1}}:-\sqrt{\frac{d a_2}{a_4a_0a_1}})$ and $(-\sqrt{\frac{-a_2}{a_0}}:\sqrt{\frac{-a_2}{a_1}}:1:-\frac{a_2}{\sqrt{a_0a_1}}:\sqrt{\frac{d a_2}{a_4a_0a_1}})$.
Part (c) follows from (a) and (b). To see that one such four is defined over an extension of degree at most 2, note that the subsets $\{L_i^{+},L_j^{+}\}$ and $\{M_k^{-},M_l^{-}\}$ are defined over the same extension of degree at most 2. For instance, taking $i=1,j=2,k=3$ and $l=4$, we see that the four is defined over $k(\sqrt{-a_0a_4d})$. The double four is defined over $k$, since both $\{L_i^{+},L_j^{+},L_i^{-},L_j^{-}\}$ and $\{M_k^{+},M_l^{+},M_k^{-},M_l^{-}\}$ are Galois-invariant sets.
\end{proof}
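The computations behind Lemma \ref{lemma:fours} can be spot-checked symbolically. The linear parametrizations below are our reading of the fibre components $L_1^{+}$ and $M_1^{+}$ (with $d = a_0a_1 - a_2a_3$); the snippet verifies that both lines lie on $X_\mathbf{a}$:

```python
import sympy as sp

a0, a1, a2, a3, a4, u, v = sp.symbols('a0:5 u v')
al, be = sp.symbols('alpha beta')
d = a0*a1 - a2*a3

def quadrics(pt):
    """Evaluate the two defining quadrics of X_a at a point of P^4."""
    x0, x1, x2, x3, x4 = pt
    return (sp.expand(x0*x1 - x2*x3),
            sp.expand(a0*x0**2 + a1*x1**2 + a2*x2**2 + a3*x3**2 + a4*x4**2))

# L_1^+ read as x0 = alpha*x2, x3 = alpha*x1, x4 = beta*x1,
# with alpha^2 = -a2/a0 and beta^2 = d/(-a0*a4)
q1, q2 = quadrics((al*u, v, u, al*v, be*v))
q2 = q2.subs({al**2: -a2/a0, be**2: d/(-a0*a4)})
assert q1 == 0 and sp.simplify(q2) == 0

# M_1^+ read as x3 = alpha*x0, x1 = alpha*x2, x4 = beta*x2,
# with alpha^2 = -a0/a3 and beta^2 = d/(a3*a4)
q1, q2 = quadrics((u, al*v, v, al*u, be*v))
q2 = q2.subs({al**2: -a0/a3, be**2: d/(a3*a4)})
assert q1 == 0 and sp.simplify(q2) == 0
print("L1+ and M1+ lie on X_a")
```

The first quadric vanishes identically for each parametrization, and after imposing the quadratic relations on $\alpha$ and $\beta$ the second quadric simplifies to zero as well.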
\begin{figure}[h]
\[
\begin{tikzpicture}[inner sep=0,x=25pt,y=15pt,font=\footnotesize]
\draw[line width=2pt, white] (-4,-5) -- (-3,5);
\draw[very thick, white] (-3,-5) -- (-4,5);
\draw[very thick, white] (-2,-5) -- (-1,5);
\draw[very thick, white] (-1,-5) -- (-2,5);
\draw[very thick, white] (1,-5) -- (2,5);
\draw[very thick, white] (2,-5) -- (1,5);
\draw[very thick, white] (3,-5) -- (4,5);
\draw[very thick, white] (4,-5) -- (3,5);
\draw (-4,-5) -- (-3,5);
\node at (-4.25,5.25) {$L_1^{+}$};
\draw (-3,-5) -- (-4,5);
\node at (-3.25,5.25) {$L_1^{-}$};
\draw (-2,-5) -- (-1,5);
\node at (-2.25,5.25) {$L_2^{+}$};
\draw (-1,-5) -- (-2,5);
\node at (-1.25,5.25) {$L_2^{-}$};
\draw (1,-5) -- (2,5);
\node at (1.25,5.25) {$L_3^{+}$};
\draw (2,-5) -- (1,5);
\node at (2.25,5.25) {$L_3^{-}$};
\draw (3,-5) -- (4,5);
\node at (3.25,5.25) {$L_4^{+}$};
\draw (4,-5) -- (3,5);
\node at (4.25,5.25) {$L_4^{-}$};
\draw[line width=2pt, white] (-5,-4) -- (5,-3);
\draw[line width=2pt, white] (-5,-3) -- (5,-4);
\draw[line width=2pt, white] (-5,-2) -- (5,-1);
\draw[line width=2pt, white] (-5,-1) -- (5,-2);
\draw[line width=2pt, white] (-5,1) -- (5,2);
\draw[line width=2pt, white] (-5,2) -- (5,1);
\draw[line width=2pt, white] (-5,3) -- (5,4);
\draw[line width=2pt, white] (-5,4) -- (5,3);
\draw (-5,-4) -- (5,-3);
\node at (5.25,-4.25) {$M_1^{-}$};
\draw (-5,-3) -- (5,-4);
\node at (5.25,-3.25) {$M_1^{+}$};
\draw (-5,-2) -- (5,-1);
\node at (5.25,-1.75) {$M_2^{-}$};
\draw (-5,-1) -- (5,-2);
\node at (5.25,-0.75) {$M_2^{+}$};
\draw (-5,1) -- (5,2);
\node at (5.25,1.25) {$M_3^{-}$};
\draw (-5,2) -- (5,1);
\node at (5.25,2.25) {$M_3^{+}$};
\draw (-5,3) -- (5,4);
\node at (5.25,3.25) {$M_4^{-}$};
\draw (-5,4) -- (5,3);
\node at (5.25,4.25) {$M_4^{+}$};
\filldraw (-3.5,0) circle (2pt);
\filldraw (1.5,0) circle (2pt);
\filldraw (-1.5,0) circle (2pt);
\filldraw (3.5,0) circle (2pt);
\filldraw (0, -3.5) circle (2pt);
\filldraw (0,1.5) circle (2pt);
\filldraw (0,-1.5) circle (2pt);
\filldraw (0,3.5) circle (2pt);
\filldraw (-3.9,-3.9) circle (2pt);
\filldraw (-3.2,-3.2) circle (2pt);
\filldraw (-3.3,-1.82) circle (2pt);
\filldraw (-3.6,1.15) circle (2pt);
%
\filldraw (-3.81,3.15) circle (2pt);
\filldraw (-1.12,-3.63) circle (2pt);
\filldraw (-1.34,-1.34) circle (2pt);
\filldraw (-1.62,1.3) circle (2pt);
\filldraw (-1.82,3.28) circle (2pt);
\filldraw (1.83,-3.3) circle (2pt);
\filldraw (1.65,-1.35) circle (2pt);
\filldraw (1.37,1.37) circle (2pt);
\filldraw (3.2,3.2) circle (2pt);
\filldraw (1.16,3.63) circle (2pt);
\filldraw (3.83,-3.1) circle (2pt);
\filldraw (3.63,-1.15) circle (2pt);
\filldraw (3.3,1.8) circle (2pt);
\filldraw (-1.82,-3.3) circle (2pt);
\filldraw (1.15,-3.67) circle (2pt);
\filldraw (3.15,-3.8) circle (2pt);
\filldraw (1.35,-1.64) circle (2pt);
\filldraw (1.68,1.68) circle (2pt);
\filldraw (1.84,3.32) circle (2pt);
\filldraw (3.33,-1.81) circle (2pt);
\filldraw (3.62,1.17) circle (2pt);
\filldraw (3.87,3.87) circle (2pt);
\filldraw (-3.6,-1.12) circle (2pt);
\filldraw (-1.68,-1.7) circle (2pt);
\filldraw (-3.3,1.82) circle (2pt);
\filldraw (-3.12,3.8) circle (2pt);
\filldraw (-1.35,1.62) circle (2pt);
\filldraw (-1.13,3.64) circle (2pt);
\end{tikzpicture}
\]
\caption{The lines on $X_\mathbf{a}$ and their intersection behaviour. The intersection points of pairs of lines are marked with $\bullet$.}\label{intersectionlines}
\end{figure}
Among the 40 distinct fours on a del Pezzo surface of degree 4, the ones appearing in the previous lemma are special. More precisely, if a four as in Lemma \ref{lemma:fours} is such that its field of definition has degree $d\in \{1,2\}$, the smallest degree possible among such fours, then any other four is defined over an extension of degree at least $d$.
\begin{definition}
Given a four as in Lemma \ref{lemma:fours} part (c), we call it a \emph{minimal four} if the field of definition of its lines has the smallest degree among such fours.
\end{definition}
For the sake of simplicity and completeness, we state a result proved in \cite[Prop.~2.2]{MS20} that determines the Brauer group of $X_\mathbf{a}$ in terms of the coefficients $\mathbf{a}=(a_0,\dots, a_4)$. We remark that the statement of the proposition below does not require that the set of adelic points $X_\mathbf{a}(\mathbf{A}_k)$ of $X_\mathbf{a}$ be non-empty, and that the proof presented in \cite{MS20} works over an arbitrary number field $k$.
\begin{proposition}
\label{prop:BrXconic}
Let $(*)$ denote the condition that $-a_0a_4d \notin k(\sqrt{-a_0a_2})^{*2}$, $-a_1a_4d \notin k(\sqrt{-a_1a_3})^{*2}$ and that one of $-a_0a_2$, $-a_1a_3$ or $a_0a_1$ is not in $k^{*2}$. Then we have
\[
\Br X_{\mathbf{a}} / \Br_0 X_{\mathbf{a}} =
\begin{cases}
(\mathbb{Z}/2\mathbb{Z})^2 &\mbox{if } a_0a_1, a_2a_3, -a_0a_2 \in k^{*2} \mbox{ and } -a_0a_4d \not\in k^{*2}, \\
\mathbb{Z}/2\mathbb{Z} &\mbox{if } (*),\\
\{\id\} &\mbox{otherwise.}
\end{cases}
\]
\end{proposition}
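The case analysis above is mechanical enough to be scripted. The following is a hedged sketch, over $k=\mathbb{Q}$ only, that classifies the order of $\Br X_{\mathbf{a}}/\Br_0 X_{\mathbf{a}}$ from the coefficients; the helper names (\texttt{is\_sq}, \texttt{brauer\_quotient\_order}) are ours, $d$ is taken as an input (as defined earlier in the paper), and the quadratic-field square test uses the standard fact that $x\in k(\sqrt{m})^{*2}$ if and only if $x$ or $xm$ lies in $k^{*2}$.

```python
from fractions import Fraction
from math import isqrt

def is_sq(x):
    # Square test in Q^*: positive with square numerator and denominator.
    x = Fraction(x)
    if x <= 0:
        return False
    n, m = x.numerator, x.denominator
    return isqrt(n) ** 2 == n and isqrt(m) ** 2 == m

def is_sq_quad(x, m):
    # x lies in Q(sqrt(m))^{*2} iff x or x*m is a square in Q
    # (standard fact for quadratic extensions; trivial when m is a square).
    if is_sq(m):
        return is_sq(x)
    return is_sq(x) or is_sq(Fraction(x) * m)

def brauer_quotient_order(a, d):
    # Order of Br X_a / Br_0 X_a following the case analysis of the
    # proposition, for a = (a0, ..., a4) with entries in Q^*.
    a0, a1, a2, a3, a4 = a
    if (is_sq(a0 * a1) and is_sq(a2 * a3) and is_sq(-a0 * a2)
            and not is_sq(-a0 * a4 * d)):
        return 4  # (Z/2Z)^2
    star = (not is_sq_quad(-a0 * a4 * d, -a0 * a2)
            and not is_sq_quad(-a1 * a4 * d, -a1 * a3)
            and not (is_sq(-a0 * a2) and is_sq(-a1 * a3) and is_sq(a0 * a1)))
    return 2 if star else 1
```

For instance, $\mathbf{a}=(1,1,-1,-1,1)$ with $d=1$ falls into the first case and gives a quotient of order 4.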
Recall the definition of the rank of a fibration \cite{Sko96}, which (as in \cite{FLS18}), for the sake of clarity and to distinguish it from the Mordell--Weil rank and the Picard rank, we call \emph{complexity} here. It is the sum of the degrees of the fields of definition of the non-split fibres. It is clear that the conic bundles in $X_\mathbf{a}$ have complexity at most four. This allows us to obtain in our setting the following lemma.
\begin{lemma}\label{rational}
Let $k$ be a number field and $X_\mathbf{a}$ given by \eqref{eq:dP4 main}. Assume that $X_\mathbf{a}(\mathbf{A}_{k}) \neq \emptyset$. Then $X_\mathbf{a}$ is $k$-rational if and only if $\Br X_\mathbf{a} = \Br k$.
\end{lemma}
\begin{proof}
The \emph{if} implication holds for any $k$-rational variety since $\Br X_\mathbf{a}$ is a birational invariant. To prove the non-trivial direction, we make use of \cite{KM17}, which shows that conic bundles of complexity at most 3 with a rational point are rational. Firstly, note that the assumption $X_\mathbf{a}(\mathbf{A}_{k})\neq \emptyset$ implies that $\Br k$ injects into $\Br X_\mathbf{a}$. If $\Br X_\mathbf{a} / \Br k$ is trivial, then either $-a_0a_4d \in k(\sqrt{-a_0a_2})^{*2}$ or $-a_1a_4d \in k(\sqrt{-a_1a_3})^{*2}$ by Proposition~\ref{prop:BrXconic}. Thus the complexity of the conic bundle $\pi_1$ is at most 2. It remains to show that $X_\mathbf{a}$ admits a rational point. This follows from the independent work in \cite{CT90} and \cite{Sal86}, which shows that the Brauer--Manin obstruction is the only obstruction to the Hasse principle for conic bundles with 4 degenerate geometric fibres. There is no such obstruction when $\Br X_\mathbf{a} / \Br k$ is trivial. Under the assumption $X_\mathbf{a}(\mathbf{A}_{k})\neq \emptyset$ we conclude that $X_\mathbf{a}$ admits a rational point and hence is rational.
\end{proof}
\begin{remark}
Lemma \ref{rational} is parallel to \cite[Lem.~1]{CTKS} which deals with diagonal cubic surfaces whose Brauer group is trivial. Moreover, a simple exercise shows that in our case, if the Brauer group is trivial, then the surface is a blow up of a Galois invariant set of four points in the ruled surface $\mathbb{P}^1 \times \mathbb{P}^1$, while the diagonal cubic satisfying the hypothesis of \cite[Lem.~1]{CTKS} is a blow up of an invariant set of six points in the projective plane. The Picard group over the ground field of the former is of rank four while that of the latter has rank three.
\end{remark}
\subsection{Brauer elements and double fours}
The following two results of Swinnerton-Dyer allow one to describe Brauer elements via the lines in a double four, and to determine the order of the Brauer group.
The first result is contained in \cite[Lem. 1, Ex. 2]{SD99}.
\begin{theorem}
\label{doublefour-Brauer}
Let $X$ be a del Pezzo surface of degree 4 over a number field $k$ and $\alpha$ a non-trivial element of $\Br X$. Then $\alpha$ can be represented by an Azumaya algebra in the following way: there is a double-four defined over $k$ whose constituent fours are not rational but are defined over $k(\sqrt{b})$, for some non-square $b \in k$. Further, let $V$ be a divisor defined over $k(\sqrt{b})$ whose class is the sum of the classes of one line in the double-four and the classes of the three lines in the double-four that meet it, and let $V'$ be the Galois conjugate of $V$. Let $h$ be a hyperplane section of $X$. Then the $k$-rational divisor $D = V + V'-2h$ is principal, and if $f$ is a function whose divisor is $D$ then $\alpha$ is represented by the quaternion algebra $(f,b)$.
\end{theorem}
The following can be found in \cite[Lem.~11]{SD93}.
\begin{lemma}
\label{doublefour-sizeBrauer}
The Brauer group $\Br X$ cannot contain more than three elements of order 2. It contains as many as three if and only if the lines in $X$ can be partitioned into four disjoint cohyperplanar sets $T_i$, $i=1,\dots,4$, with the following properties:
\begin{enumerate}[label=\emph{(\arabic*)}]
\item the union of any two of the sets $T_i$ is a double-four;
\item each of the $T_i$ is fixed under the absolute Galois group;
\item if $\gamma$ is half the sum of a line $\lambda$ in some $T_i$, the two lines in the same $T_i$ that meet $\lambda$, and one other line that meets $\lambda$, then no such $\gamma$ is in $\Pic X \otimes \mathbb{Q} + \Pic \bar{X}$.
\end{enumerate}
\end{lemma}
We proceed to analyse how the conic bundle structures in $X_\mathbf{a}$ and the two results above can be used to describe the Brauer group of $X_\mathbf{a}$.
\subsection{The general case}
We first describe the general case, i.e., the case in which there are four Galois orbits of lines, each of size four.
\begin{proposition}\label{prop: generalfours}
Let $X_\mathbf{a} \in \mathcal{F}$ and assume that $\mathbf{a}$ satisfies hypothesis $(*)$ of Proposition \ref{prop:BrXconic}. Then there are exactly two distinct double fours on $X_\mathbf{a}$ defined over $k$ with constituent fours defined over a quadratic extension. In other words, there are exactly four minimal fours, which pair up in a unique way to form two double fours defined over $k$.
\end{proposition}
\begin{proof}
Part (c) of Lemma~\ref{lemma:fours} tells us that the minimal fours are given by the double four formed by the fours $\{L_1^{+},L_2^{+},M_3^{-},M_4^{-}\}$, $\{L_1^{-},L_2^{-},M_3^{+},M_4^{+}\}$ and that formed by $\{L_3^{+},L_4^{+},M_1^{-},M_2^{-}\}$ and $\{L_3^{-},L_4^{-},M_1^{+},M_2^{+}\}$. By the hypothesis, each four is defined over a quadratic extension and the two double fours are defined over $k$. The hypothesis on the coefficients of the equations defining $X_\mathbf{a}$ also implies that any other double four is defined over a non-trivial extension of $k$. For instance, consider a distinct four containing $L_1^{+}$. For a double four containing this four to be defined over $k$, we need the second four to contain $L_1^{-}$, and one of the fours to contain $L_2^{+}$ and the other $L_2^{-}$. The hypothesis that each four is defined over a degree two extension moreover gives that $L_2^{+}$ is in the same four as $L_1^+$; hence, since they each intersect one of those lines, $M_1^{+}$ and $M_2^{+}$ cannot be in the same four. We are left with $L_3^{+},L_4^{+},M_3^{+},M_4^{+}$ and their conjugates. But if $L_3^{+}$ is in one of the fours then $L_3^{-}$ would be in the other four. This is impossible, as neither $L_3^{+}$ nor $L_3^{-}$ intersects $L_1^{-}$ or $L_2^{-}$, and each line in a double four intersects three lines of the four that does not contain it.
\end{proof}
\begin{corollary}
Let $X_\mathbf{a}$ be as above. Then $\Br X_\mathbf{a}/\Br_0 X_{\mathbf{a}}$ is of order 2.
\end{corollary}
\begin{proof}
This is a direct consequence of Proposition~\ref{prop: generalfours} together with Theorem~\ref{doublefour-Brauer}.
\end{proof}
We shall now impose further assumptions on the coefficients of $X_\mathbf{a}$ to study how they influence the field of definition of double fours and hence the Brauer group.
\subsection{Trivial Brauer group}
Suppose that one of $-a_0a_4d, -a_1a_4d, a_2a_4d, a_3a_4d$ is in $k^{*2}$. Assume, to exemplify, that $-a_0a_4d$ is a square. Consider the conic bundle structure given by \eqref{eqn:conic bundle}. Then the lines $L_1^{+}$ and $L_2^{+}$ are conjugate and, clearly, do not intersect. Indeed, they are components of distinct fibres of \eqref{eqn:conic bundle}. Contracting them we obtain a del Pezzo surface of degree 6. If $X_\mathbf{a}$ has points everywhere locally, the same holds for the del Pezzo surface of degree 6 by Lang--Nishimura \cite{Lang}, \cite{Nishimura}. As the latter satisfies the Hasse principle, it has a $k$-point. In particular, $X_\mathbf{a}$ is rational, which gives us an alternative proof of Lemma~\ref{rational}.
\subsection{Brauer group of order four}
For the last case, assume that $a_0a_1, a_2a_3, -a_0a_2 \in k^{*2}$ and $-a_0a_4d, -a_1a_4d, a_2a_4d, a_3a_4d \not\in k^{*2}$. We produce two double fours that give distinct Brauer classes. Firstly, note that all the singular fibres of the two conic bundles are defined over $k$. In particular, their singularities are $k$-rational points, thus there is no Brauer--Manin obstruction to the Hasse principle and $\Br_0 X_\mathbf{a} = \Br k$. Moreover, every line is defined over a quadratic extension, but no pair of lines can be contracted since each line intersects its conjugate. Secondly, note that since $-a_0a_2$ is a square, we have $k(\sqrt{-a_0a_4d})=k(\sqrt{a_2a_4d})$. We have the double four as above, given by $L_1^{+},L_2^{+},M_3^{-},M_4^{-}$ and the corresponding intersecting components, and a new double four given by $\{L_1^{+},L_3^{+},M_2^{-},M_4^{-}\}, \{L_2^{+},L_4^{+},M_1^{-},M_4^{-}\}$, which under this hypothesis is formed by two \emph{minimal} fours.
The Picard group of $X_\mathbf{a}$ is generated by $L_1^{+},L_2^{+}, L_3^{+},L_4^{+}$, a smooth conic and a section, say $M_1^{+}$, of the conic fibration \eqref{eqn:conic bundle}. We can apply Lemma~\ref{doublefour-sizeBrauer} with $T_i=\{L_i^{+},L_i^{-}, M_i^{+}, M_i^{-}\}$ to check that in this case the Brauer group has indeed size four.
\begin{lemma}
\label{sizeorbit}
Let $X_\mathbf{a}$ be as in \eqref{eq:dP4 main}. Assume that $X_\mathbf{a}$ does not contain a pair of skew conjugate lines or, equivalently, that $X_\mathbf{a}$ is not $k$-rational. Then the following hold:
\begin{enumerate}[label=\emph{(\roman*)}]
\item $\# \Br X_\mathbf{a} /\Br_0 X_\mathbf{a} =4$ if and only if the set of lines on $X_\mathbf{a}$ has orbits of size \newline $(2,2,2,2,2,2,2,2)$.
\item $\# \Br X_\mathbf{a} /\Br_0 X_\mathbf{a} =2$ if and only if the set of lines on $X_\mathbf{a}$ has orbits of size \newline $(2,2,2,2,4,4)$ or $(4,4,4,4)$.
\end{enumerate}
\end{lemma}
\begin{proof}
This is an application of \cite[Lem.~11]{SD93}, or a reinterpretation of Proposition~\ref{prop:BrXconic} together with the description of the lines given in this section and the construction of Brauer elements via fours given by Swinnerton-Dyer (see for instance \cite[Lem.~10]{SD93} and \cite[Thm.~10]{BBFL07}).
\end{proof}
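The dichotomy in Lemma~\ref{sizeorbit} can be encoded as a small lookup. The sketch below is ours (the function name \texttt{brauer\_order\_from\_orbits} is hypothetical) and covers only the orbit patterns the lemma speaks about.

```python
def brauer_order_from_orbits(orbit_sizes):
    # Under the lemma's assumption that X_a contains no pair of skew
    # conjugate lines, read off #(Br X_a / Br_0 X_a) from the sizes of
    # the Galois orbits of the 16 lines.
    sizes = tuple(sorted(orbit_sizes))
    assert sum(sizes) == 16, "a del Pezzo surface of degree 4 has 16 lines"
    if sizes == (2,) * 8:
        return 4
    if sizes in ((2, 2, 2, 2, 4, 4), (4, 4, 4, 4)):
        return 2
    raise ValueError("orbit pattern not covered by the lemma")
```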
\section{A genus 1 fibration and vertical Brauer elements}\label{thegenus1}
In what follows we will give a description of the genus 1 fibration $X_{\mathbf{a}} \dashrightarrow \mathbb{P}^1$ from \cite{VAV14} for which a given Brauer element is vertical. First we recall some basic facts about elliptic surfaces. We then obtain the Brauer element and the genus 1 fibration as in \cite{VAV14}, and afterwards reinterpret it in our special setting of surfaces admitting two non-equivalent conic bundles over the ground field. We study how the order of the Brauer group influences the arithmetic of this genus 1 fibration. More precisely, after blowing up the base points of the genus one pencil, we show that the field of definition of its Mordell--Weil group depends on the size of the Brauer group.
\subsection{Background on elliptic surfaces}
Let $k$ be a number field.
\begin{definition}\label{def: ellsurf}
An \emph{elliptic surface} over $k$ is a smooth projective surface $X$ together with a morphism $\mathcal{E}: X \to B$ to some curve $B$ whose generic fibre is a smooth curve of genus $1$, i.e., a genus 1 fibration. If it admits a section, we call the fibration \emph{jacobian}. In that case, we fix a choice of section to act as the identity element for each smooth fibre. The set of sections is in one-to-one correspondence with the $k(B)$-points of the generic fibre; hence it has a group structure and is called the \emph{Mordell--Weil group} of the fibration, or of the surface if there is no ambiguity about the fibration considered.
\end{definition}
\begin{remark}
If $X$ is a rational surface and an elliptic surface, we call it a \emph{rational elliptic surface}. If the fibration is assumed to be minimal, i.e., no fibre contains $(-1)$-curves as components, then by the adjunction formula the components of reducible fibres are $(-2)$-curves. In that case, the sections are precisely the $(-1)$-curves and the fibration is jacobian over a field of definition of any of the $(-1)$-curves.
\end{remark}
Given a smooth, projective, algebraic surface $X$, its Picard group has a lattice structure with bilinear form given by the intersection pairing. If $X$ is an elliptic surface then, thanks to the work of Shioda, we know that its Mordell--Weil group also has a lattice structure, with a different bilinear pairing \cite{ShiodaMWL}. Shioda also described the N\'eron--Tate height pairing via intersections with the zero section and the fibre components. This allows us to determine, for instance, whether a given section is of infinite order, as well as the rank of the subgroup generated by a subset of sections. We give a brief description of the height pairing below.
\begin{definition}
Let $\mathcal{E}: X\rightarrow B$ be an elliptic surface with Euler characteristic $\chi$. Let $O$ denote the zero section and $P, Q$ two sections of $\mathcal{E}$. The N\'eron--Tate height pairing is given by
$$\langle P,Q \rangle= \chi+ P\cdot O +Q\cdot O- P\cdot Q -\sum_{F \in \text{ reducible fibres }} \text{contr}_F(P,Q),$$
where $\text{contr}_F(P,Q)$ denotes the contribution of the reducible fibre $F$ to the pairing and depends on the type of fibre (see \cite[\S8]{ShiodaMWL} for a list of all possible contributions).
Upon specializing at $P=Q$ we obtain a formula for the height of a section (point in the generic fibre):
$$h(P)= \langle P, P \rangle = 2\chi +2P\cdot O - \sum_{F \in \text{ reducible fibres}} \text{contr}_F(P).$$
\end{definition}
\begin{remark}
The contribution of a reducible fibre depends on the components that $P$ and $Q$ intersect. In this article we deal only with fibres of type $I_4$, thus for the sake of completeness and brevity we give only its contribution. Denote by $\Theta_0$ the component that is met by the zero section, by $\Theta_1$ and $\Theta_3$ the two components that intersect $\Theta_0$, and let $\Theta_2$ be the component opposite to $\Theta_0$. If $P$ and $Q$ intersect $\Theta_i$ and $\Theta_j$ respectively, with $i\leq j$, then $\text{contr}_{I_4}(P,Q)=\frac{i(4-j)}{4}$.
\end{remark}
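As a sanity check, the height formula and the $I_4$ contribution above are easy to evaluate numerically. The sketch below is ours (the function names \texttt{contr\_I4} and \texttt{height} are hypothetical) and computes $h(P)$ for a section on a surface all of whose reducible fibres are of type $I_4$.

```python
def contr_I4(i, j=None):
    # Contribution of an I4 fibre when P meets component Theta_i and Q
    # meets Theta_j (indices 0..3 around the cycle): i*(4-j)/4 for i <= j.
    if j is None:
        j = i
    i, j = min(i, j), max(i, j)
    return i * (4 - j) / 4

def height(chi, P_dot_O, components_met):
    # h(P) = 2*chi + 2*(P.O) - sum of fibre contributions, where
    # components_met lists, for each I4 fibre, the index of the
    # component met by P.
    return 2 * chi + 2 * P_dot_O - sum(contr_I4(i) for i in components_met)
```

On a rational elliptic surface ($\chi=1$) with two $I_4$ fibres, a section disjoint from $O$ that meets the component opposite the zero component in both fibres has height $2-1-1=0$, i.e., it is torsion.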
\subsection{Vertical elements}
\begin{definition}
Let $X$ be a smooth surface. Given a genus 1 fibration $\pi: X\rightarrow \mathbb{P}^1$, the vertical Picard group, denoted by $\Pic_{vert}$, is the subgroup of the Picard group generated by the irreducible components of the fibres of $\pi$. The vertical Brauer group $\Br_{vert}$ is given by the algebras in $\Br k(\mathbb{P}^1)$ that give Azumaya algebras when lifted to $X$ (see \cite[Def.~3]{Bri06}).
\end{definition}
There is an isomorphism $\Br X/ \Br_0 X \simeq \mathrm{H}^1(k, \Pic \bar{X})$ and, as described by Bright \cite[Prop.~4]{Bri06}, a further isomorphism between $B:=\{\mathcal{A} \in \Br k(\mathbb{P}^1); \pi^* \mathcal{A} \in \Br X\}$ and $\mathrm{H}^1(k, \Pic_{vert})$. Combining these with Theorem \ref{doublefour-Brauer} allows us to describe vertical Brauer elements as those for which the lines in Theorem \ref{doublefour-Brauer} are fibre components of $\pi$.
\begin{definition}
We call a Brauer element \emph{horizontal} with respect to $\pi$ if the lines used in Theorem \ref{doublefour-Brauer} to describe it are sections or multisections of $\pi$.
\end{definition}
\begin{remark} As a line cannot be both a fibre component and a (multi)-section simultaneously, a Brauer element that is horizontal cannot be vertical and vice-versa. For a general fibration $\pi$ some Brauer elements might be neither horizontal nor vertical.
\end{remark}
The following result shows that for a specific elliptic fibration, all Brauer elements are either horizontal or vertical.
\begin{lemma}\label{lemma: genusonefibration}
Assume that the Brauer group of $X_\mathbf{a}$ is non-trivial. Let $F=L_1^{+}+L_2^{+}+M_3^{+}+M_4^{+}$ and $F'=L_1^{-}+L_2^{-}+M_3^{-}+M_4^{-}$. The pencil of hyperplanes spanned by $F$ and $F'$ gives a genus one fibration on $X_\mathbf{a}$ with exactly two reducible fibres, which are of type $I_4$, and for which a non-trivial element of its Brauer group is vertical. The other Brauer elements are horizontal.
\end{lemma}
\begin{proof}
The linear system spanned by $F$ and $F'$ is a subsystem of $|-K_{X_\mathbf{a}}|$. Hence it gives a genus one pencil on $X_\mathbf{a}$. Its base points are precisely the four singular points of the following fibres of the conic bundle fibrations: $L_1^{+}\cup L_1^{-}, L_2^{+}\cup L_2^{-}, M_3^{+}\cup M_3^{-}, M_4^{+} \cup M_4^{-}$. The blow up of these four base points produces a geometrically rational elliptic surface\footnote{not necessarily with a section over the ground field} with two reducible fibres given by the strict transforms of $F$ and $F'$. Since each of the latter is given by four lines in a square configuration and the singular points of this configuration are not blown up, these are of type $I_4$. There are no other reducible fibres, as the only $(-2)$-curves are the ones contained in the strict transforms of $F$ and $F'$. Let $\mathcal{E}$ denote the fibration map.
The Azumaya algebra $(f,b)$, with $f$ and $b$ as in Theorem \ref{doublefour-Brauer} taking as double four the components of $F$ and $F'$, gives a Brauer element which is vertical for the genus one fibration $\mathcal{E}$. Indeed, the lines that give such a double four are clearly in $\Pic_{vert}$ and hence the algebra $(f,b)$ lies in the image of $\mathrm{H}^1(k, \Pic_{vert}) \rightarrow \mathrm{H}^1(k, \Pic \bar{X}_{\mathbf{a}})$. By \cite[Prop.~4]{Bri06} it gives an element of the form $\mathcal{E}^*\mathcal{A}$, where $\mathcal{A}$ is in $\Br k(\mathbb{P}^1)$.
The other Brauer elements on $X_\mathbf{a}$ are described by double fours, i.e., pairs of sets of four $(-1)$-curves on $X_\mathbf{a}$, subject to intersection conditions. Since each of these curves intersects each reducible fibre in exactly one point, after passing to its field of definition they give sections of the genus one fibration. That is, such Brauer elements are horizontal with respect to this genus one fibration.
\end{proof}
\begin{figure}\label{figure: badfibresgenus1}
\[
\begin{tikzpicture}[inner sep=0,x=25pt,y=15pt,font=\footnotesize]
\draw (-3.5,3.25) -- (0,3.5);
\draw (-2.5,1) -- (0,1.5);
\draw (-2.5,-1.75) -- (0,-1.5);
\draw (-3.5,-3.75) -- (0,-3.5);
\draw (3.5,3.25) -- (0,3.5);
\draw (2.5,1) -- (0,1.5);
\draw (2.5,-1.75) -- (0,-1.5);
\draw (3.5,-3.75) -- (0,-3.5);
\draw (-3.5,3.25) -- (-2.5,1);
\draw (-2.5,1) -- (-2.5,-1.75);
\draw (-2.5,-1.75) -- (-3.5,-3.75);
\draw (-3.5,-3.75) -- (-3.5,3.25);
\draw (3.5,3.25) -- (2.5,1);
\draw (2.5,1) -- (2.5,-1.75);
\draw (2.5,-1.75) -- (3.5,-3.75);
\draw (3.5,-3.75) -- (3.5,3.25);
\filldraw (-3.5,3.25) node[left=5pt]{$\Theta_{0,1}=L_1^{+}$} circle (2pt);
\filldraw (-2.5,1) node[left=7pt]{$\Theta_{1,1}=M_3^{+}$} circle (2pt);
\filldraw(-2.5,-1.75) node[left=9pt] {$\Theta_{2,1}=L_2^{+}$} circle (2pt);
\filldraw (-3.5,-3.75) node[left=5pt]{$\Theta_{3,1}=M_4^{+}$} circle (2pt);
\filldraw[fill=white] (0, -3.5) node[above=5pt]{$E_4$} circle (2pt);
\filldraw[fill=white] (0,1.5) node[above=5pt]{$E_3$} circle (2pt);
\filldraw[fill=white] (0,-1.5) node[above=5pt]{$E_2$} circle (2pt);
\filldraw[fill=white] (0,3.5) node[above=5pt]{$E_1$} circle (2pt);
\filldraw (3.5,3.25) node[right=5pt]{$\Theta_{0,2}=L_1^{-}$} circle (2pt);
\filldraw (2.5,1) node[right=5pt]{$\Theta_{1,2}=M_3^{-}$}circle (2pt);
\filldraw(2.5,-1.75) node[right=5pt]{$\Theta_{2,2}=L_2^{-}$} circle (2pt);
\filldraw (3.5,-3.75) node[right=5pt]{$\Theta_{3,2}=M_4^{-}$} circle (2pt);
\end{tikzpicture}
\]
\caption{The reducible fibres of the genus one fibration $\mathcal{E}$. The eight $\bullet$ denote fibre components and the four $\circ$ denote sections given by the blow up of the 4 base points.}
\end{figure}
\begin{remark}
The genus one fibration for which a Brauer element is vertical described in \cite{VAV14} has in general two reducible fibres given as the union of two geometrically irreducible conics, i.e., they are of type $I_2$. In our setting all the conics are reducible and hence give rise to fibres of type $I_4$. More precisely, let $C_1\cup C_2$ and $C'_1\cup C'_2$ be the two reducible fibres with $C_i$ and $C'_i$ conics; then $C_1\cup C'_1$ is linearly equivalent to one of the fours, say $L_1^{+}\cup L_2^{+}\cup L_1^{-}\cup L_2^{-}$, and $C_1\cup C'_2$ is linearly equivalent to $M_3^{+}\cup M_4^{+} \cup M_3^{-}\cup M_4^{-}$.
This seems to be very particular to the family considered in this note. More precisely, the presence of two conic bundle structures does not seem to be enough to guarantee that the reducible fibres of the genus one fibration are of type $I_4$. For that, one needs the largest Galois orbit of lines to have size at most four and, moreover, the fields of definition of two such orbits to coincide.
\end{remark}
\subsection{Mordell--Weil meets Brauer}
In what follows we will keep the letter $\mathcal{E}$ for the genus one fibration on the blow up surface just described. We now give a proof of our main result, Theorem~\ref{thm:MWBrauer}.
\begin{proof}
To prove (i), notice that the hypotheses of Proposition~\ref{prop:BrXconic} imply that the four blown up points form two distinct orbits of Galois conjugate points. To exemplify, we work with the genus one fibration given by $F$ and $F'$ as in Lemma~\ref{lemma: genusonefibration}. Let $P_i$ be the intersection point of $L_i^{+}$ and $L_i^{-}$ for $i=1,2$, and that of $M_i^{+}$ and $M_i^{-}$ for $i=3,4$. Denote by $E_i$ the exceptional curve after the blow up of $P_i$. Then $\{E_1,E_2\}$ and $\{E_3,E_4\}$ give two pairs of conjugate sections of $\mathcal{E}$. Moreover, the sections in a pair intersect opposite, i.e., disjoint, components of the fibres given by $F$ and $F'$. Fixing one of them, say $E_1$, as the zero section of $\mathcal{E}$, a height computation gives that $E_2$ is the 2-torsion section of $\mathcal{E}$. Indeed, as we have fixed $E_1$ as the zero section, the strict transforms of $L_1^{+}$ and $L_1^{-}$ are the zero components of the fibres $F$ and $F'$, respectively. We denote them by $\Theta_{0,j}$ with $j=1,2$, respectively. Keeping the standard numbering of the fibre components, the strict transforms of $L_2^{+}$ and $L_2^{-}$ are denoted by $\Theta_{2,j}$, with $j=1,2$, respectively. Finally, in this notation, $M_3^{+}$ and $M_3^{-}$ correspond to $\Theta_{1,j}$ while $M_4^{+}$ and $M_4^{-}$ correspond to $\Theta_{3,j}$, for $j=1,2$ respectively.
To compute the height of the section $E_2$ we need the contribution of each $I_4$ fibre to the pairing, which in this case is $1$ (see \cite[\S11]{ShiodaSchuett} for details on the height pairing on elliptic surfaces and the contribution of each singular fibre to it). We have thus
$$\langle E_2, E_2 \rangle= 2-0-1-1=0.$$
In particular, $E_2$ is a torsion section. Since $E_2$ is distinct from the zero section $E_1$ and such fibrations admit torsion of order at most $2$ (see \cite{Persson} for the list of fibre configurations and torsion on rational elliptic surfaces), we conclude that $E_2$ is a 2-torsion section.
The two other conjugate exceptional divisors $E_3$ and $E_4$ give sections of infinite order as one can see, for example, after another height pairing computation.
To show (ii) it is enough to notice that the hypotheses of Proposition~\ref{prop:BrXconic} imply that the four base points of the linear system spanned by $F$ and $F'$ are defined over $k$. From the discussion above, the zero section, the 2-torsion section and also a section of infinite order, say $E_3$, are defined over $k$, since each of them is an exceptional curve above a $k$-point. The height matrix of the sections $E_3$ and $E_4$ has determinant zero, hence the section $E_4$ is linearly dependent on $E_3$. Moreover, it follows from the Shioda--Tate formula for $\Pic(X)^{\Gal(\bar{k}/ k)}$ that any section of infinite order defined over $k$ is linearly dependent on $E_3$. Indeed, the rank of the Picard group of the rational elliptic surface is 6, since that of $X_{\mathbf{a}}$ has rank 2 and we blow up 4 rational points. The non-trivial components of the two fibres of type $I_4$ contribute 3 to the rank. The other 3 come from the zero section, a smooth fibre and a section of infinite order, say $E_3$. For a second section of infinite order, independent of $E_3$ in the Mordell--Weil group, one can consider the pull-back of a line in $X_\mathbf{a}$. The hypothesis on the Brauer group implies that $X_\mathbf{a}$ has no line defined over $k$, but each line is defined over a quadratic extension.
\end{proof}
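The rank bookkeeping used at the end of the proof can be checked mechanically with the Shioda--Tate formula; \texttt{mw\_rank} below is our own shorthand, not notation from the paper.

```python
def mw_rank(picard_rank, fibre_contributions):
    # Shioda-Tate: rank Pic = 2 + (sum over reducible fibres of the
    # number of components not meeting the zero section) + rank MW,
    # hence rank MW = rank Pic - 2 - sum(...).
    return picard_rank - 2 - sum(fibre_contributions)
```

Over $k$, the proof gives Picard rank 6 with a total fibre contribution of 3, hence Mordell--Weil rank $6-2-3=1$; geometrically, rank 10 with two full $I_4$ fibres contributing $3+3$ gives rank $10-2-6=2$.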
\bibliographystyle{amsalpha}{}
\section{Introduction}\label{sec:intro}
Red blood cells (RBCs) are biological cells made of a viscoelastic membrane enclosing a viscous fluid (cytoplasm): their main features are the biconcave shape and the absence of a nucleus and of most organelles, which allow them to carry oxygen even inside the smallest capillaries~\cite{popel2005microcirculation,skalak1969deformation,thesis:kruger}.
In fact, during circulation, RBCs deform multiple times, rearranging their shape to adapt to the physiological conditions of the blood flow.
The mechanical properties of the RBC membrane have been deeply investigated, both numerically~\cite{thesis:kruger,thesis:mountrakis,thesis:janoschek,gross2014rheology,art:kruger13,art:fedosov10,art:pan11,art:fedosov2010systematic,art:kruger14deformability,art:fedosov2011predicting,art:guckenberger2018numerical,fedosov2014multiscale} and experimentally~\cite{mills2004nonlinear,art:suresh2005connections,braunmuller2012hydrodynamic,chien1978theoretical,art:prado15,art:henon99,art:hochmuth79,art:baskurt96,art:bronkhorst95}.
Research interest in the mechanical response of the RBC membrane has been prompted by several reasons: among others, the link between its properties and the erythrocyte's health conditions~\cite{art:kruger14deformability,art:suresh2005connections}, or the role played by the membrane dynamics in the design of biomedical devices~\cite{murakami1979nonpulsatile,nonaka2001development,art:behbahani09,art:arora2006hemolysis}.
A huge effort has been devoted to the characterisation of the time-independent properties of the membrane, for the study of the corresponding steady-state configurations.
In recent years, the dynamical behaviour of RBCs has also been investigated in several works, both numerically~\cite{zhu2019response,cordasco2017shape,cordasco2016dynamics,cordasco2014intermittency,art:fedosov10,fedosov2010multiscale,D0SM00587H,guglietta2020lattice} and experimentally~\cite{braunmuller2012hydrodynamic,art:prado15,art:henon99,dao2003mechanics}. When dealing with time-dependent properties of biological cells (or capsules, in general), the membrane viscosity plays a crucial role~\cite{li2020finite,li2020similar,art:yazdanibagchi13,D0SM00587H,guglietta2020lattice,graessel2021rayleigh,noguchi2005dynamics,noguchi2007swinging,barthes2016motion,diaz2000transient}. Evans~\cite{evans19891} showed that the RBC relaxation time is affected by both the membrane viscosity and the dissipation in the adjacent aqueous phases (i.e., cytoplasm and external solution); neglecting the membrane viscosity, i.e., setting $\mu_{\mbox{\scriptsize m}}=0$, he predicted a relaxation time $t_{\mbox{\tiny R}}\approx 1\times 10^{-3}$~s (also confirmed by numerical simulations~\cite{D0SM00587H}), a value remarkably lower than those reported in other works in the literature~\cite{art:baskurt96,art:prado15,braunmuller2012hydrodynamic,hochmuth1979red,D0SM00587H}. Several works have aimed at a precise estimation of the membrane viscosity~\cite{evans1976membrane,chien1978theoretical,hochmuth1979red,tran1984determination,art:baskurt96,riquelme2000determination,art:tomaiuolo11,braunmuller2012hydrodynamic,art:prado15,fedosov2010multiscale}, finding that $\mu_{\mbox{\scriptsize m}}$ roughly ranges between $10^{-7}\mbox{ m Pa s}$ and $10^{-6}\mbox{ m Pa s}$.
Such variability may be ascribed to different factors, e.g., the different theoretical models used to infer $\mu_{\mbox{\scriptsize m}}$~\cite{art:prado15,evans1976membrane}, the different experimental apparatuses (such as micro-pipette aspiration~\cite{evans1976membrane,chien1978theoretical,hochmuth1979red,braunmuller2012hydrodynamic}, microchannel deformation~\cite{art:tomaiuolo11}, or other setups~\cite{tran1984determination,art:baskurt96,riquelme2000determination}), etc.
As a matter of fact, although $\mu_{\mbox{\scriptsize m}}$ is an essential parameter to quantitatively characterise the time dynamics of RBCs~\cite{guglietta2020lattice,li2020similar}, its precise value has not been accurately determined so far, which warrants a parametric investigation. Moreover, some earlier studies~\cite{keller1982motion} proposed to use an increased apparent viscosity ratio to account for the energy dissipation due to the presence of a viscous membrane: even though this assumption provides a qualitative description of the effects of the membrane viscosity, it does not provide a quantitative characterisation, as shown by recent studies~\cite{matteoli2021impact,li2020similar,guglietta2020lattice,noguchi2007swinging,noguchi2005dynamics}.
Our previous work~\cite{D0SM00587H} investigated the effect of the membrane viscosity $\mu_{\mbox{\scriptsize m}}$ on the relaxation dynamics of a single RBC: we found that increasing the value of $\mu_{\mbox{\scriptsize m}}$, as well as increasing the intensity of the loading strength, leads to faster recovery dynamics. Moreover, we simulated two experimental setups, i.e., stretching with optical tweezers and deformation due to an imposed shear flow, and we found a dependence on the kind of mechanical load when the load strength is large enough.\\
The relaxation dynamics, however, gives only a partial characterisation of the time-dependent response of RBCs to external forces: therefore, the loading process should be considered as well, as already pointed out in earlier literature. Chien {\it et al.}~\cite{chien1978theoretical} experimentally studied both the loading and the relaxation dynamics of the RBC membrane through micro-pipette aspiration, providing evidence that the two dynamics are not symmetrical in certain conditions; however, a systematic study involving different stress values and different types of mechanical loads was not performed. Diaz {\it et al.}~\cite{diaz2000transient} studied the dynamics of a purely elastic capsule with a hyperelastic membrane deformed by an elongational flow: they focused on both loading and relaxation, finding an asymmetry. However, their model did not take into account the membrane viscosity, and the asymmetry was not studied for different types of mechanical loads. Thus, although previous literature points to two distinct dynamics for loading and relaxation~\cite{chien1978theoretical,diaz2000transient,barthes2016motion}, a comprehensive parametric study on the effects of $\mu_{\mbox{\scriptsize m}}$ for different types of mechanical loads and flow conditions (such as simple shear flow or elongational flow) has never been attempted so far.
This paper aims at filling this gap with the help of mesoscale numerical simulations. Indeed, for this kind of characterisation, numerical simulations can be thought of as the appropriate tool of analysis~\cite{thesis:kruger,thesis:mountrakis,thesis:janoschek,gross2014rheology,art:kruger13,art:fedosov10,art:pan11,art:fedosov2010systematic,art:kruger14deformability,art:fedosov2011predicting,art:guckenberger2018numerical,fedosov2014multiscale}, due to the obvious experimental difficulties in carrying out such systematic investigation~\cite{braunmuller2012hydrodynamic,chien1978theoretical,art:prado15,art:hochmuth79,art:bronkhorst95}. We provide a quantitative characterisation of loading and relaxation dynamics exploring three typologies of mechanical loads. To do this, we built three different simulation setups: the stretching simulation (STS), which simulates the deformation with optical tweezers~\cite{art:suresh2005connections} (see Fig.~\ref{fig:sketch}, panel (a)); the shear simulation (SHS), i.e., the deformation in simple shear flow (see Fig.~\ref{fig:sketch}, panel (c)); the four-roll mill simulation (FRMS), where the deformation is induced by an elongational axisymmetric flow made by the rotation of four cylinders (see Fig.~\ref{fig:sketch}, panel (e)). These three numerical setups are chosen to inspect the different roles that the membrane rotation and/or membrane deformation have in the time-dependent dynamics. This information is summarised in Tab.~\ref{tab:summary}. First, we systematically study the characteristic times of both the loading ($t_{\mbox{\tiny L}}$) and the relaxation ($t_{\mbox{\tiny R}}$) processes and their ratio $\tilde{t}=t_{\mbox{\tiny L}}/t_{\mbox{\tiny R}}$, as a function of the load strength and membrane viscosity $\mu_{\mbox{\scriptsize m}}$. 
For small strengths, the two characteristic times are essentially equal and set by the value of $\mu_{\mbox{\scriptsize m}}$; for strengths large enough, however, the loading dynamics is found to be faster than the relaxation dynamics, leading to a non-universal behaviour when changing the typology of the mechanical load. Such non-universal contributions are further characterised in terms of the importance of rotation and deformation of the membrane, according to the different load mechanisms. Some useful parametrisations of both $t_{\mbox{\tiny R}}$ and $t_{\mbox{\tiny L}}$ as a function of the membrane viscosity are also provided.
Finally, since different pathologies that cause the reduction of RBC membrane elasticity are known~\cite{art:kruger14deformability, art:suresh2005connections,suresh2006mechanical,brandao2003optical,briole2021molecular,brandao2003elastic,fedosov2011quantifying,luo2013inertia,ye2013stretching,hosseini2012malaria}, we also study the dependency of both the loading ($t_{\mbox{\tiny L}}$) and the relaxation ($t_{\mbox{\tiny R}}$) times as well as their ratio $\tilde{t}$ on the elastic shear modulus $k_{\mbox{\scriptsize S}}$, for a fixed value of membrane viscosity.\\
\begin{figure*}
\centering
\begin{tabular}{c c}
\includegraphics[width=.4\linewidth]{Figures/sts.jpeg} & \includegraphics[width=.4\linewidth]{Figures/d_vs_t_stretching.jpeg}\\
\small (a) {\bf ST}retching {\bf S}imulation (STS) & \small (b) Stretching simulation (STS): deformation. \vspace{.5 cm}
\\
\includegraphics[width=.4\linewidth]{Figures/shs.jpeg} & \includegraphics[width=.4\linewidth]{Figures/d_vs_t_shs.jpeg}\\
\small (c) {\bf SH}ear {\bf S}imulation (SHS) & \small (d) Shear simulation (SHS): deformation. \vspace{.5 cm}\\
\includegraphics[width=.4\linewidth]{Figures/frms.jpeg} & \includegraphics[width=.4\linewidth]{Figures/d_vs_t_frms.jpeg}\\
\small (e) {\bf F}our-{\bf R}oll {\bf M}ill {\bf S}imulation (FRMS) & \small (f) Four-roll mill simulation (FRMS): deformation. \vspace{.5 cm}
\end{tabular}
\caption{Loading-relaxation (L-R) simulations for a red blood cell (RBC) upon changing the typology of mechanical load. \textit{\underline{Left panels}}: the three different L-R simulations investigated in the paper are sketched: grey arrows refer to the mechanical load, either an applied force $\vec{F}$ or an applied velocity $\text{U}_{\mbox{\scriptsize w}}$, while the RBC membrane forces ($\vec{F}_{\mbox{\scriptsize mem}}$) are sketched with green arrows. In all simulations, the deformation $D(t)$, i.e., the ratio between the difference and the sum of the axial and transversal diameters (see Eq.~\eqref{eq:d}), is used to fit the loading and relaxation times ($t_{\mbox{\tiny L}}$ and $t_{\mbox{\tiny R}}$, respectively; see Eqs.~\eqref{eq:lst},~\eqref{eq:lsh} and~\eqref{eq:r}).
\textit{\underline{Right panels}}: we report the deformation $D(t)$ (see Eq.~\eqref{eq:d}) as a function of time for two values of membrane viscosity~$\mu_{\mbox{\scriptsize m}}$.
{\it \underline{Panels (a-b)}}: we simulate the stretching with optical tweezers~\cite{art:suresh2005connections} (STS), in which two forces with the same intensity and opposite direction stretch the membrane in two areas at the ends of the RBC (see Sec.~\ref{sec:stretching}); the deformation $D(t)$ is reported for $F=90\times 10^{-12}$ N.
{\it \underline{Panels (c-d)}}: deformation induced by simple shear flow (SHS), with $\text{U}_{\mbox{\scriptsize w}} = (\pm\dot{\gamma}L_{\mbox {\tiny z}}/2,0,0)$, where $\dot{\gamma}$ is the shear rate (see Sec.~\ref{sec:shear}); the deformation $D(t)$ is reported for $\dot{\gamma}=86 \mbox{ s}^{-1}$.
{\it \underline{Panels (e-f)}}: in the four-roll mill simulation (FRMS) we simulate four rotating cylinders to reproduce an elongational flow that deforms the membrane~\cite{malaspinas2010lattice} (see Sec.~\ref{sec:four-roll_mill}); the deformation $D(t)$ is reported for $\gammadot_{\mbox{\tiny FRMS}}=80 \mbox{ s}^{-1}$.
Four videos showing these simulations are available (see ESI\dag).
\label{fig:sketch}}
\end{figure*}
The paper is organised in the following way: in Sec.~\ref{sec:method} we provide some details on the numerical method used to simulate both the fluid and the membrane of the RBC; in Sec.~\ref{sec:loading_relaxation} we analyse the three simulated loading mechanisms (the stretching simulation, Sec.~\ref{sec:stretching}; the shear simulation, Sec.~\ref{sec:shear}; the four-roll mill simulation, Sec.~\ref{sec:four-roll_mill}); a detailed discussion section with comparisons between the loading mechanisms is provided in Sec.~\ref{sec:comparison}; finally, conclusions are reported in Sec.~\ref{sec:conclusion}.\\
\begin{table}[]
\centering
\begin{tabular}{|c|c|c|}
\hline
Load Type & Rotation & Direct Forcing\\
\hline
STS & NO & YES
\\
SHS & YES & NO
\\
FRMS & NO & NO
\\
\hline
\end{tabular}
\vspace{0.5cm}
\caption{Summary of the main characteristics of the three kinds of applied mechanical loads (see Fig.~\ref{fig:sketch}). For each load type, we specify if the rotation is induced on the membrane while loading and if the forcing is directly applied on the nodes of the mesh used to discretise the membrane (otherwise, the membrane is forced indirectly via hydrodynamic flow).}\label{tab:summary}
\end{table}
\section{Numerical method} \label{sec:method}
We perform three-dimensional numerical simulations in the framework of the Immersed Boundary -- Lattice Boltzmann method (IB--LBM)~\cite{book:kruger,succi2001lattice}. The methodology, as well as the membrane model, is the same as that already used and validated in \cite{D0SM00587H}; here we report an essential summary.\\
The equation of motion for a fluid with viscosity $\mu$ is given by the Navier-Stokes (NS) equations:
\begin{equation}\label{eq:NS}
\rho\left(\frac{\partial\vec{u}}{\partial t}+(\vec{u}\cdot\pmb{\nabla})\vec{u}\right)=-\pmb{\nabla} p+\mu\pmb{\nabla}^2\vec{u}+\vec{F}\; ,
\end{equation}
where $\rho$ and $\vec{u}$ are the density and the velocity of the fluid, respectively; $p$ is the isotropic pressure; $\vec{F}$ is an external body force density. If the fluid is incompressible (as in the present work) the condition $\pmb{\nabla}\cdot\vec{u}=0$ holds.\\
In the LBM, instead of directly solving the NS equations by integrating Eq.~\eqref{eq:NS}, the fluid is represented by the so-called populations $f_i(\vec{x},t)$, that stand for the density of fluid molecules moving with velocity $\vec{c}_i$ at position $\vec{x}$ and time $t$. The populations evolve according to the Lattice Boltzmann equation:
\begin{equation}\label{LBMEQ}
\begin{split}
f_i(\vec{x}+\vec{c}_i\Delta t, t+ \Delta t) - f_i(\vec{x}, t) = \\
-\frac{\Delta t}{\tau}\left(f_i(\vec{x}, t) - f_i^{(\mbox{\tiny eq})}(\vec{x}, t)\right)& + f_i^{(F)}\; ,
\end{split}
\end{equation}
in which $\Delta t$ is the discrete time step, $\tau$ is the relaxation time, $f_i^{(F)}$ is the source term that takes into account the force density (it has been implemented according to the ``Guo'' scheme~\cite{PhysRevE.65.046308}), and $ f_i^{(\mbox{\tiny eq})}$ is the equilibrium distribution function (we refer back to~\cite{book:kruger,succi2001lattice} for the details). The fluid density $\rho$ and the velocity $\vec{u}$ are given by:
\begin{equation}
\begin{split}
\rho(\vec{x}, t) = \sum_{i} f_i(\vec{x}, t)\; , \\ \rho\vec{u}(\vec{x}, t) = \sum_{i} \vec{c}_i f_i(\vec{x}, t)\; .
\end{split}
\end{equation}
The link between NS and LB equations (Eq.~\eqref{eq:NS} and Eq.~\eqref{LBMEQ}, respectively) is given by the following relation:
\begin{equation}
\mu= \rho c_{\mbox{\scriptsize s}}^2\left(\tau-\frac{\Delta t}{2}\right)\; ,
\end{equation}
where $c_{\mbox{\scriptsize s}}=\Delta x/(\sqrt{3}\,\Delta t)$ is the speed of sound. In the following, we set both the lattice spacing $\Delta x$ and the time step $\Delta t$ equal to 1 (lattice units).
\\
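As an illustration, the moment relations above and the viscosity--relaxation-time link can be sketched in a few lines of Python. This is a minimal sketch assuming a two-dimensional D2Q9 stencil in lattice units ($\Delta x=\Delta t=1$), not the production three-dimensional code used in the paper:

```python
import numpy as np

# D2Q9 lattice: discrete velocities c_i and weights w_i (illustrative stencil,
# not the 3D one used in the paper).
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def moments(f):
    """Density and velocity from populations f_i (shape: (9, nx, ny))."""
    rho = f.sum(axis=0)
    u = np.einsum('ia,ixy->axy', c, f) / rho
    return rho, u

def viscosity(tau, rho=1.0, dx=1.0, dt=1.0):
    """Dynamic viscosity from the relaxation time: mu = rho c_s^2 (tau - dt/2)."""
    cs2 = (dx / dt)**2 / 3.0   # speed of sound squared, c_s^2 = 1/3 in lattice units
    return rho * cs2 * (tau - dt / 2.0)

# At equilibrium and zero velocity, f_i = w_i * rho recovers rho = 1 and u = 0.
f0 = w[:, None, None] * np.ones((1, 4, 4))
rho, u = moments(f0)
```

For instance, $\tau=1$ gives $\mu=1/6$ in lattice units.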
We simulate two fluids: one outside the membrane (the plasma, with viscosity $\mu_{\mbox{\scriptsize out}} = 1.2\times 10^{-3}$ Pa s) and one inside it (the cytosol, with viscosity $\mu_{\mbox{\scriptsize in}} = 6 \times 10^{-3}$ Pa s). The viscosity ratio is given by
\begin{equation}\label{visc_ratio}
\lambda = \frac{\mu_{\mbox{\scriptsize in}}}{\mu_{\mbox{\scriptsize out}}}\; ,
\end{equation}
providing $\lambda = 5$. We implement the parallel Hoshen-Kopelman algorithm to recognise which lattice sites are inside or outside the membrane (see~\cite{art:frijters15} for details). \\
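The inside/outside bookkeeping can be illustrated with a simple serial flood fill from the domain border; this is only a sketch of the idea (the paper uses the parallel Hoshen-Kopelman algorithm of~\cite{art:frijters15}), on a 2D lattice with a hypothetical membrane mask:

```python
from collections import deque
import numpy as np

def label_outside(solid):
    """Mark lattice sites connected to the domain border without crossing
    the membrane mask `solid`; the remaining fluid sites are 'inside'.
    A minimal serial stand-in for the parallel Hoshen-Kopelman labelling."""
    nx, ny = solid.shape
    outside = np.zeros_like(solid, dtype=bool)
    q = deque()
    # seed the fill with all non-solid border sites
    for i in range(nx):
        for j in range(ny):
            if (i in (0, nx-1) or j in (0, ny-1)) and not solid[i, j]:
                outside[i, j] = True
                q.append((i, j))
    # breadth-first propagation through 4-connected fluid neighbours
    while q:
        i, j = q.popleft()
        for di, dj in ((1,0),(-1,0),(0,1),(0,-1)):
            a, b = i+di, j+dj
            if 0 <= a < nx and 0 <= b < ny and not solid[a, b] and not outside[a, b]:
                outside[a, b] = True
                q.append((a, b))
    return outside
```

Sites that are neither solid nor `outside` are then assigned the cytosol viscosity $\mu_{\mbox{\scriptsize in}}$.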
The RBC membrane is described as a 3D triangular mesh of $\approx~4000$ elements, whose shape at rest is given by~\cite{evans1972improved}
\begin{equation}
\begin{split}
z(x,y) = \pm \sqrt{1-\frac{x^2+y^2}{r^2}}\cdot \\ \cdot\left(C_0+C_1\frac{x^2+y^2}{r^2}+ C_2\left(\frac{x^2+y^2}{r^2}\right)^2\right)\; ,
\end{split}
\end{equation}
with $C_0 = 0.81\times 10^{-6} \mbox{ m}$, $C_1 = 7.83\times 10^{-6} \mbox{ m}$ and $C_2 = -4.39\times 10^{-6} \mbox{ m}$; $r=3.91\times 10^{-6} \mbox{ m}$ is the large radius. \\
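For reference, the equilibrium profile with the quoted constants can be evaluated directly; a minimal sketch of the $+$ branch (all lengths in metres):

```python
import numpy as np

# Equilibrium RBC profile z(x, y) with the constants quoted in the text.
C0, C1, C2 = 0.81e-6, 7.83e-6, -4.39e-6   # m
r = 3.91e-6                                # large radius, m

def rbc_profile(x, y):
    """Upper (+) branch of the biconcave discocyte shape."""
    s = (x**2 + y**2) / r**2   # normalised squared radius, s in [0, 1]
    return np.sqrt(1.0 - s) * (C0 + C1 * s + C2 * s**2)
```

At the centre, $z(0,0)=\pm C_0$ (half the dimple thickness), and the profile vanishes at the rim $x^2+y^2=r^2$.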
The membrane is characterised by a resistance to shear deformation, area dilation and bending; a viscoelastic behaviour is implemented as well. The first two contributions form the {strain energy} $W_{\mbox{\scriptsize S}}$, described by the Skalak model~\cite{art:skalaketal73}:
\begin{equation}\label{eq:skalak}
W_{\mbox{\scriptsize S}} = \sum_j A_j\left[ \frac{k_{\mbox{\scriptsize S}}}{12}\left(I_1^2+2I_1-2I_2\right) + \frac{k_{\alpha}}{12} I_2^2\right]\; ,
\end{equation}
where $k_{\mbox{\scriptsize S}} = 5.3\times 10^{-6} \mbox{ N m}^{-1}$~\cite{art:suresh2005connections} and $k_{\alpha} = 50 k_{\mbox{\scriptsize S}}$~\cite{thesis:kruger} are the surface elastic shear modulus and the area dilation modulus, respectively; $I_1 = \lambda_1^2+\lambda_2^2-2$ and $I_2 = \lambda_1^2\lambda_2^2-1$ are the strain invariants for the $j$-th element, while $ \lambda_1$ and $ \lambda_2$ are the principal stretch ratios~\cite{art:skalaketal73,thesis:kruger}; $A_j$ is the surface area of the $j$-th element.
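A direct transcription of one term of the sum in Eq.~\eqref{eq:skalak}, as a sketch (SI units; the element area passed below is illustrative):

```python
def skalak_energy(lam1, lam2, A, kS=5.3e-6, kalpha=50 * 5.3e-6):
    """Skalak strain energy of a single triangular element with principal
    stretch ratios lam1, lam2 and surface area A (one term of W_S)."""
    I1 = lam1**2 + lam2**2 - 2.0          # shear invariant
    I2 = (lam1 * lam2)**2 - 1.0           # area-dilation invariant
    return A * (kS / 12.0 * (I1**2 + 2.0 * I1 - 2.0 * I2)
                + kalpha / 12.0 * I2**2)
```

The energy vanishes in the unstrained state ($\lambda_1=\lambda_2=1$) and is positive for any deformation.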
We adopt the Helfrich formulation to compute the free-energy $W_{\mbox{\scriptsize B}}$ related to the resistance to bending~\cite{art:helfrich73}. Following~\cite{thesis:kruger}, we discretise the {bending energy} as:
\begin{equation}\label{eq:helfrich}
W_{\mbox{\scriptsize B}} = \frac{k_{\mbox{\scriptsize B}}\sqrt{3}}{2}\sum_{\langle i,j\rangle}\left(\theta_{ij}-\theta_{ij}^{(0)}\right)^2\; ,
\end{equation}
where $k_{\mbox{\scriptsize B}} = 2\times 10^{-19}\mbox{ N m}$~\cite{book:gommperschick} is the bending modulus; the sum runs over all the neighbouring triangular elements, and $\theta_{ij}$ is the angle between the normals of the $i$-th and $j$-th elements ($\theta_{ij}^{(0)}$ is the same angle in the unperturbed configuration). Once we have the total free-energy $W = W_{\mbox{\scriptsize S}}+W_{\mbox{\scriptsize B}}$, we compute the force acting on the $i$-th node by taking the derivative of $W$ with respect to the coordinates of the node $\vec{x}_i$:
\begin{equation}\label{eq:nodal_force_energy}
\vec{F}_i = -\frac{\partial W(\vec{x}_i)}{\partial \vec{x}_i}\; .
\end{equation}
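In practice, analytic nodal forces of this kind can be validated against a central-difference evaluation of $-\partial W/\partial\vec{x}_i$. Below is a minimal sketch with a toy harmonic energy standing in for the full $W=W_{\mbox{\scriptsize S}}+W_{\mbox{\scriptsize B}}$ (the spring constant and rest length are illustrative, not model parameters):

```python
import numpy as np

def numerical_force(W, x, h=1e-7):
    """Central-difference approximation of F = -dW/dx for a node at
    position x (3-vector); useful to cross-check analytic nodal forces."""
    F = np.zeros(3)
    for a in range(3):
        e = np.zeros(3)
        e[a] = h
        F[a] = -(W(x + e) - W(x - e)) / (2.0 * h)
    return F

# Toy energy: harmonic spring of rest length L0 anchored at the origin.
L0, k = 1.0, 2.0
W = lambda x: 0.5 * k * (np.linalg.norm(x) - L0)**2

# Node at (2,0,0): analytic force is -k (|x|-L0) x/|x| = (-2, 0, 0).
F = numerical_force(W, np.array([2.0, 0.0, 0.0]))
```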
Note that we are implementing neither area nor volume conservation: in fact, as stated in \cite{D0SM00587H}, we checked that both area and volume were conserved, even without an explicit area or volume conservation law (see Electronic Supplementary Information in \cite{D0SM00587H}). \\
Regarding the viscoelastic term, we implement the Standard Linear Solid (SLS) model~\cite{art:lizhang19,li2020finite}. The viscous stress tensor is given by
\begin{equation}\label{eq:mv2}
\pmb{\tau}^\nu = \mu_{\mbox{\scriptsize s}} \left(2\dot{\vec{E}} -\mbox{tr}(\dot{\vec{E}})\mathbb{1}\right) + \mu_{\mbox{\scriptsize d}} \mbox{tr}(\dot{\vec{E}})\mathbb{1}\; ,
\end{equation}
where $\vec{E}$ is the strain tensor (see~\cite{art:lizhang19,D0SM00587H}); $\mu_{\mbox{\scriptsize s}}$ and $\mu_{\mbox{\scriptsize d}}$ are the shear and dilatational membrane viscosities: in this work, we assume $\mu_{\mbox{\scriptsize s}}~=~\mu_{\mbox{\scriptsize d}}~=~\mu_{\mbox{\scriptsize m}}$~\cite{D0SM00587H}. We refer to~\cite{art:lizhang19,D0SM00587H} for the computation of the stress tensor $\pmb{\tau}^\nu$ (Eq.~\eqref{eq:mv2}) as well as for the nodal force $\vec{F}_i$ (Eq.~\eqref{eq:nodal_force_energy}). \\
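Eq.~\eqref{eq:mv2} separates the shear (traceless) and dilatational (trace) contributions of the strain-rate tensor; a direct in-plane ($2\times 2$) transcription, as a sketch:

```python
import numpy as np

def viscous_stress(Edot, mu_s, mu_d):
    """Membrane viscous stress of the SLS model:
    tau = mu_s (2 Edot - tr(Edot) I) + mu_d tr(Edot) I  (2x2, in-plane)."""
    tr = np.trace(Edot)
    I = np.eye(2)
    return mu_s * (2.0 * Edot - tr * I) + mu_d * tr * I

# Pure area dilation (Edot proportional to the identity) activates only mu_d:
tau = viscous_stress(0.5 * np.eye(2), mu_s=1.0, mu_d=3.0)
```

Conversely, a traceless (pure shear) strain rate activates only $\mu_{\mbox{\scriptsize s}}$.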
Finally, once we have the nodal force $\vec{F}_i$ for each node $i$ of the 3D mesh, we spread this force to the lattice nodes via the IBM (see~\cite{book:kruger} for details). We adopt the same scheme as in~\cite{D0SM00587H}.
\begin{figure*}[ht!]
\centering
\includegraphics[width=1.\linewidth]{Figures/t_vs_simulations_ninepanels.pdf}
\caption{Characteristic times $t_{\mbox{\tiny L}}$ (first column of panels) and $t_{\mbox{\tiny R}}$ (second column of panels), as well as the ratio $\tilde{t}=t_{\mbox{\tiny L}}/t_{\mbox{\tiny R}}$ (third column of panels) are reported for the three simulations performed, i.e., stretching simulation (STS, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,regular polygon, regular polygon sides=4,fill=white!100!yellow](){};}}, panels (a-c), Sec.~\ref{sec:stretching}), shear simulation (SHS, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.4,regular polygon, regular polygon sides=3,fill=white!100!yellow,rotate=0](){};}}, panels (d-f), Sec.~\ref{sec:shear}), four-roll mill simulation (FRMS, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,circle,fill=white!100!yellow](){};}}, panels (g-i), Sec.~\ref{sec:four-roll_mill}), for different values of membrane viscosity $\mu_{\mbox{\scriptsize m}}$ (from lightest to darkest color): $\mu_{\mbox{\scriptsize m}}=0$~m~Pa~s (\protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,regular polygon, regular polygon sides=4,fill=white!60!yellow](){};}}, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.4,regular polygon, regular polygon sides=3,fill=white!60!yellow,rotate=0](){};}}, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,circle,fill=white!60!yellow](){};}}), $\mu_{\mbox{\scriptsize m}}=0.64 \times 10^{-7}$~m~Pa~s (\protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,regular polygon, regular polygon sides=4,fill=white!50!brown](){};}}, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.4,regular polygon, regular polygon sides=3,fill=white!50!brown,rotate=0](){};}}, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,circle,fill=white!50!brown](){};}}), $\mu_{\mbox{\scriptsize m}}=1.59 \times 10^{-7}$~m~Pa~s (\protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,regular polygon, regular polygon sides=4,fill=black!0!brown](){};}}, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.4,regular polygon, regular polygon 
sides=3,fill=black!0!brown,rotate=0](){};}}, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,circle,fill=black!0!brown](){};}}), $\mu_{\mbox{\scriptsize m}}=3.18 \times 10^{-7}$~m~Pa~s (\protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,regular polygon, regular polygon sides=4,fill=black!60!red](){};}}, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.4,regular polygon, regular polygon sides=3,fill=black!60!red,rotate=0](){};}}, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,circle,fill=black!60!red](){};}}). The red dashed line represents the reference value for the symmetric case, i.e., $\tilde{t}=1$.\label{fig:t_vs_simulations}}
\end{figure*}
\section{Loading and relaxation time}\label{sec:loading_relaxation}
In this section, we quantitatively study the loading time $t_{\mbox{\tiny L}}$ and the relaxation time $t_{\mbox{\tiny R}}$ with three different loading mechanisms (see Fig.~\ref{fig:sketch}): the stretching with optical tweezers (STS, see Sec.~\ref{sec:stretching}), the deformation in simple shear flow (SHS, see Sec.~\ref{sec:shear}) and the deformation in an elongational flow (FRMS, see Sec.~\ref{sec:four-roll_mill}).
These three simulations differ mainly in two aspects (summarised in Tab.~\ref{tab:summary}): first, the membrane can be deformed by an external force acting directly on it (as in the STS) or by the viscous friction with the fluid (SHS and FRMS); second, the membrane can rotate (as in the SHS) or not (STS and FRMS).
The idea underlying the choice of these three different setups is to identify which of the aforementioned characteristics affects the loading and relaxation dynamics. \\
To quantify the loading time $t_{\mbox{\tiny L}}$ and the relaxation time $t_{\mbox{\tiny R}}$, we define the deformation parameter
\begin{equation}\label{eq:d}
D(t) = \frac{d_{\mbox{\scriptsize A}}(t)-d_{\mbox{\scriptsize T}}(t)}{d_{\mbox{\scriptsize A}}(t)+d_{\mbox{\scriptsize T}}(t)}\; ,
\end{equation}
where $d_{\mbox{\scriptsize A}}$ and $d_{\mbox{\scriptsize T}}$ represent the lengths of the axial and transversal diameters, obtained from the largest and intermediate eigenvalues of the inertia tensor (see~\cite{D0SM00587H}).
In our computational domain, $d_{\mbox{\scriptsize A}}$ and $d_{\mbox{\scriptsize T}}$ lie in the $x-y$ plane. We define the average deformation $D_{\mbox{\tiny av}}$, i.e., the value of the deformation $D$ such that $\lim_{t\to \infty}D(t)=D_{\mbox{\tiny av}}$ in the loading simulation and $D(0)=D_{\mbox{\tiny av}}$ in the relaxation simulation.\\
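The computation of Eq.~\eqref{eq:d} can be sketched from a cloud of membrane points via the covariance (inertia-like) tensor; here the diameters are recovered under the illustrative assumption of points uniformly distributed on an ellipse boundary, which is not the actual 3D mesh post-processing of the paper:

```python
import numpy as np

def deformation(points):
    """Deformation D = (d_A - d_T)/(d_A + d_T), with axial and transversal
    diameters estimated from the eigenvalues of the covariance tensor of the
    membrane points in the x-y plane (ellipse-boundary convention)."""
    xy = points - points.mean(axis=0)
    cov = xy.T @ xy / len(points)
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]          # largest first
    dA = 2.0 * np.sqrt(2.0 * lam[0])                      # axial diameter
    dT = 2.0 * np.sqrt(2.0 * lam[1])                      # transversal diameter
    return (dA - dT) / (dA + dT)

# Points on an ellipse with semi-axes a=4, b=2 give D = (a-b)/(a+b) = 1/3.
th = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
pts = np.stack([4.0 * np.cos(th), 2.0 * np.sin(th)], axis=1)
```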
Qualitatively, the loading time $t_{\mbox{\tiny L}}$ is the characteristic time the deformation $D(t)$ takes to reach $D_{\mbox{\tiny av}}$; the relaxation time $t_{\mbox{\tiny R}}$ is the characteristic time needed to relax to the initial shape, after the arrest of the mechanical load.
Quantitatively, we can get $t_{\mbox{\tiny L}}$ and $t_{\mbox{\tiny R}}$ via a fit of $D(t)/D_{\mbox{\tiny av}}$ with the following functions:
\begin{equation}\label{eq:lst}
L_{\mbox{\tiny 1}}(t) = 1 - \exp\left\{-\left(\frac{t}{t_{\mbox{\tiny L}}}\right)^{\delta_{\mbox{\tiny L}}}\right\}\; ,
\end{equation}
\begin{equation}\label{eq:lsh}
L_{\mbox{\tiny 2}}(t) = 1 - \exp\left\{-\left(\frac{t}{t_{\mbox{\tiny L}}}\right)^{\delta_{\mbox{\tiny L}}}\right\}\ \cos\left(\frac{t}{t_{\mbox{\tiny L}}^{\mbox{\tiny cos}}}\right)\; ,
\end{equation}
\begin{equation}\label{eq:r}
R(t) = \exp\left\{-\left(\frac{t}{t_{\mbox{\tiny R}}}\right)^{\delta_{\mbox{\tiny R}}}\right\}\; ,
\end{equation}
where $L_{\mbox{\tiny 1}}$ is used to fit the loading time for the STS and the FRMS (see Sec.~\ref{sec:stretching}-\ref{sec:four-roll_mill}, respectively); $L_{\mbox{\tiny 2}}$ is used to fit the loading time for the SHS (see Sec.~\ref{sec:shear}); $R$ is used to fit the relaxation time $t_{\mbox{\tiny R}}$ for all three simulations; $\delta_{\mbox{\tiny L}}$ and $\delta_{\mbox{\tiny R}}$ are parameters introduced to improve the fit~\cite{fedosov2010multiscale} (see~\cite{D0SM00587H} for some more details) and will be characterised in Sec.~\ref{sec:comparison}.\\
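The fit of Eq.~\eqref{eq:r} can be sketched by linearising the stretched exponential, which avoids a nonlinear solver; a minimal example on synthetic data with known $t_{\mbox{\tiny R}}$ and $\delta_{\mbox{\tiny R}}$ (the values $0.8$ and $1.3$ below are illustrative only, not fitted results of the paper):

```python
import numpy as np

def fit_relaxation(t, Dn):
    """Recover t_R and delta_R from Dn(t) = exp(-(t/t_R)^delta_R) by
    linearising: ln(-ln Dn) = delta_R * (ln t - ln t_R).
    A minimal alternative to a nonlinear least-squares fit."""
    y = np.log(-np.log(Dn))          # requires 0 < Dn < 1
    x = np.log(t)
    delta, intercept = np.polyfit(x, y, 1)
    tR = np.exp(-intercept / delta)
    return tR, delta

# Synthetic relaxation curve with known parameters:
t = np.linspace(0.05, 5.0, 200)
Dn = np.exp(-(t / 0.8)**1.3)
```

On noisy data, a weighted or nonlinear fit would be preferable, since the log-log transform amplifies errors at the tails.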
Note that we propose two different functions to fit the data during the loading (i.e., Eq.~\eqref{eq:lst} and Eq.~\eqref{eq:lsh}); this is due to the different deformation process of the RBC: in the STS and FRMS, $D(t)$ is a monotonically increasing function with an asymptote at $D = D_{\mbox{\tiny av}}$ (see Fig.~\ref{fig:sketch}, panels (b) and (f)); in the SHS, $D(t)$ oscillates around $D_{\mbox{\tiny av}}$, and the amplitude of the oscillations varies with the value of the membrane viscosity $\mu_{\mbox{\scriptsize m}}$ (see Fig.~\ref{fig:sketch}, panel (d)). These oscillations have also been observed for viscoelastic capsules~\cite{art:yazdanibagchi13,art:lizhang19,D0SM00587H,barthes1985}. Before starting the relaxation, for the STS and FRMS, we waited long enough to achieve the steady value of the deformation $D_{\mbox{\tiny av}}$; for the SHS, we ensured the oscillations were small enough compared to $D_{\mbox{\tiny av}}$. Notice, however, that depending on the importance of viscous effects with respect to elastic ones, there may be cases where such oscillations are damped over very long times~\cite{barthes1985,rallison1980note,art:yazdanibagchi13}.\\
Moreover, while the values of the external mechanical loads we simulated are comparable to each other in terms of stress (see Sec.~\ref{sec:comparison}), the values of $D_{\mbox{\tiny av}}$ for the SHS are much smaller than for the STS and FRMS (see Fig.~\ref{fig:d_and_phi}, panel (c)).\\
Since we want to compare the loading time $t_{\mbox{\tiny L}}$ and the relaxation time $t_{\mbox{\tiny R}}$, we define the ratio
\begin{equation}\label{eq:t_tilde}
\tilde{t} = \frac{t_{\mbox{\tiny L}}}{t_{\mbox{\tiny R}}}.
\end{equation}
In all the following simulations, the membrane viscosity lies in the range $\mu_{\mbox{\scriptsize m}}\in[0,3.18]\times 10^{-7}\mbox{ m Pa s}$~\cite{evans1976membrane,chien1978theoretical,hochmuth1979red,tran1984determination,art:baskurt96,riquelme2000determination,art:tomaiuolo11,braunmuller2012hydrodynamic,art:prado15,fedosov2010multiscale}. In the STS, the applied force is in the range $F\in[5,70]\times 10^{-12}\mbox{ N}$; in the SHS, we simulated shear rates in the range $\dot{\gamma}\in[1.23, 123]\mbox{ s}^{-1}$; finally, in the FRMS, we simulated $\gammadot_{\mbox{\tiny FRMS}}\in[1,120]\mbox{ s}^{-1}$ (see Eq.~\eqref{eq:force_FRMS}).
Values for all parameters used in the simulations in both physical and lattice units are reported in Tab.~1 in ESI\dag.\\
\subsection{Stretching simulation (STS)}\label{sec:stretching}
In order to simulate the stretching with optical tweezers, we apply two forces with the same intensity $F$ and opposite directions at the ends of the RBC (see Fig.~\ref{fig:sketch}, panel (a)).
Simulations are performed in a 3D box $L_{\mbox {\tiny x}}\times L_{\mbox {\tiny y}}\times L_{\mbox {\tiny z}}=(28,12,12)\times 10^{-6}$~m.
In Fig.~\ref{fig:t_vs_simulations}, we report the loading time $t_{\mbox{\tiny L}}$ (panel (a)) and the relaxation time $t_{\mbox{\tiny R}}$ (panel (b)) as a function of $F$, for different values of membrane viscosity $\mu_{\mbox{\scriptsize m}}$. In both cases, increasing the loading strength (as well as decreasing the value of membrane viscosity $\mu_{\mbox{\scriptsize m}}$) results in faster dynamics. It is interesting to compare $t_{\mbox{\tiny L}}$ and $t_{\mbox{\tiny R}}$: in Fig.~\ref{fig:t_vs_simulations}, panel (c), we report the ratio $\tilde{t}$ (see Eq.~\eqref{eq:t_tilde}).
\subsection{Shear simulation (SHS)}\label{sec:shear}
In the shear simulation, the deformation is due to a linear shear flow with intensity $\dot{\gamma}$. We set the wall velocity $\text{U}_{\mbox{\scriptsize w}} = (\pm\dot{\gamma}L_{\mbox {\tiny z}}/2,0,0)$ on the two plane walls at $z=\pm L_{\mbox {\tiny z}}/2$, and the RBC is oriented as reported in Fig.~\ref{fig:sketch}, panel (c).
This choice is convenient for the purpose of the present study, since we can focus on the relaxation/loading process without any further complication. In real-world experiments, indeed, RBCs do not necessarily orient in the shear plane and may display a complex dynamics with many dynamical modes~\cite{minetti2019dynamics,cordasco2017shape}. In particular, for the values of shear rate $\dot{\gamma}$ we are interested in, a rolling dynamics is expected if the RBC is not oriented in the shear plane~\cite{mauer2018flow}. Further complications can be introduced by polydispersity, i.e., the fact that RBCs may have different sizes and viscoelastic properties~\cite{art:suresh2005connections,dell1983molecular}. We preferred to avoid all these extra complications, which would not allow for a fair comparison between a pure relaxation dynamics and a pure loading dynamics.
Simulations are performed in a 3D box $L_{\mbox {\tiny x}}\times L_{\mbox {\tiny y}}\times L_{\mbox {\tiny z}}=(20,32,20)\times 10^{-6}$~m.
In Fig.~\ref{fig:t_vs_simulations}, the loading time $t_{\mbox{\tiny L}}$ (panel (d)) and the relaxation time $t_{\mbox{\tiny R}}$ (panel (e)) as a function of $\dot{\gamma}$ for different values of membrane viscosity $\mu_{\mbox{\scriptsize m}}$ are reported, as well as the ratio $\tilde{t}$ (panel (f)). While the relaxation time $t_{\mbox{\tiny R}}$ shows a behaviour similar to the STS (see Fig.~\ref{fig:t_vs_simulations}, panel (b)), a few more words are needed regarding the loading time $t_{\mbox{\tiny L}}$. Unlike the STS, we now have two characteristic times, $t_{\mbox{\tiny L}}$ and $t_{\mbox{\tiny L}}^{\mbox{\tiny cos}}$ (see Eq.~\eqref{eq:lsh}): $t_{\mbox{\tiny L}}$ measures the time the membrane takes to reach the average deformation $D_{\mbox{\tiny av}}$, while $t_{\mbox{\tiny L}}^{\mbox{\tiny cos}}$ measures the period of the oscillations. Data for $t_{\mbox{\tiny L}}^{\mbox{\tiny cos}}$ are reported in ESI\dag, Fig.~1. In contrast to the STS, the loading time $t_{\mbox{\tiny L}}$ first decreases and then barely changes with the intensity of the mechanical load.
\begin{figure*}[ht!]
\centering
\includegraphics[width=1.\linewidth]{Figures/t_vs_sigma.pdf}
\caption{Comparison between the characteristic times $t_{\mbox{\tiny L}}$ (panel (a)), $t_{\mbox{\tiny R}}$ (panel (b)) and $\tilde{t}=t_{\mbox{\tiny L}}/t_{\mbox{\tiny R}}$ (panel (c)) as a function of the stress $\sigma$ (see Sec.~\ref{sec:comparison}) for the three simulations performed, i.e., stretching simulation (STS, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,regular polygon, regular polygon sides=4,fill=white!100!yellow](){};}}, Sec.~\ref{sec:stretching}), shear simulation (SHS, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.4,regular polygon, regular polygon sides=3,fill=white!100!yellow,rotate=0](){};}}, Sec.~\ref{sec:shear}), four-roll mill simulation (FRMS, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,circle,fill=white!100!yellow](){};}}, Sec.~\ref{sec:four-roll_mill}), for different values of membrane viscosity $\mu_{\mbox{\scriptsize m}}$ (from lightest to darkest color): $\mu_{\mbox{\scriptsize m}}=0$~m~Pa~s (\protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,regular polygon, regular polygon sides=4,fill=white!60!yellow](){};}}, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.4,regular polygon, regular polygon sides=3,fill=white!60!yellow,rotate=0](){};}}, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,circle,fill=white!60!yellow](){};}}), $\mu_{\mbox{\scriptsize m}}=0.64 \times 10^{-7}$~m~Pa~s (\protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,regular polygon, regular polygon sides=4,fill=white!50!brown](){};}}, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.4,regular polygon, regular polygon sides=3,fill=white!50!brown,rotate=0](){};}}, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,circle,fill=white!50!brown](){};}}), $\mu_{\mbox{\scriptsize m}}=1.59 \times 10^{-7}$~m~Pa~s (\protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,regular polygon, regular polygon sides=4,fill=black!0!brown](){};}}, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.4,regular polygon, regular polygon 
sides=3,fill=black!0!brown,rotate=0](){};}}, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,circle,fill=black!0!brown](){};}}), $\mu_{\mbox{\scriptsize m}}=3.18 \times 10^{-7}$~m~Pa~s (\protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,regular polygon, regular polygon sides=4,fill=black!60!red](){};}}, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.4,regular polygon, regular polygon sides=3,fill=black!60!red,rotate=0](){};}}, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,circle,fill=black!60!red](){};}}). The red dashed line represents the reference value for the symmetric case, i.e., $\tilde{t}=1$.\label{fig:t_vs_sigma}}
\end{figure*}
\begin{figure}[ht!]
\centering
\includegraphics[width=.8\linewidth]{Figures/phi.jpeg}\vspace{.5cm}
\includegraphics[width=.8\linewidth]{Figures/d_vs_sigma_AND_phi_vs_t.pdf}
\caption{{\it\underline{Panel (a)}}:
snapshots of the RBC at selected times. A point is selected on the membrane (green sphere) to perform a Lagrangian tracking and determine the time dependency of the angle $\phi$ that the point direction forms with the $y$ axis in the deformation plane. {\it \underline{Panel (b)}}: we report the angle $\phi$ (see panel (a) above) as a function of time for the stretching simulation (STS, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,regular polygon, regular polygon sides=4,fill=black!60!red](){};}}, Sec.~\ref{sec:stretching}), shear simulation (SHS, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.4,regular polygon, regular polygon sides=3,fill=black!60!red,rotate=0](){};}}, Sec.~\ref{sec:shear}), four-roll mill simulation (FRMS, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,circle,fill=black!60!red](){};}}, Sec.~\ref{sec:four-roll_mill}), for $\mu_{\mbox{\scriptsize m}}=3.18 \times 10^{-7}$~m~Pa~s. The red and blue shades represent loading and relaxation regions, respectively. {\it \underline{Panel (c)}}: average deformation $D_{\mbox{\tiny av}}$ (see text for details) as a function of the stress $\sigma$ for the three simulations performed (STS, SHS and FRMS). 
SHS data are displayed for different values of membrane viscosity $\mu_{\mbox{\scriptsize m}}$ (from lightest to darkest color): $\mu_{\mbox{\scriptsize m}}=0$~m~Pa~s (\protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.4,regular polygon, regular polygon sides=3,fill=white!60!yellow,rotate=0](){};}}), $\mu_{\mbox{\scriptsize m}}=0.64 \times 10^{-7}$~m~Pa~s (\protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.4,regular polygon, regular polygon sides=3,fill=white!50!brown,rotate=0](){};}}), $\mu_{\mbox{\scriptsize m}}=1.59 \times 10^{-7}$~m~Pa~s (\protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.4,regular polygon, regular polygon sides=3,fill=black!0!brown,rotate=0](){};}}), $\mu_{\mbox{\scriptsize m}}=3.18 \times 10^{-7}$~m~Pa~s (\protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.4,regular polygon, regular polygon sides=3,fill=black!60!red,rotate=0](){};}}). STS data (\protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,regular polygon, regular polygon sides=4,fill=black!60!red](){};}}) and FRMS data (\protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,circle,fill=black!60!red](){};}}) are reported only for $\mu_{\mbox{\scriptsize m}}=3.18 \times 10^{-7}$~m~Pa~s.
\label{fig:d_and_phi}}
\end{figure}
\subsection{Four-roll mill simulation (FRMS)}\label{sec:four-roll_mill}
In this case, we simulate the effect of four rotating cylinders~\cite{malaspinas2010lattice}, as shown in Fig.~\ref{fig:sketch}, panel (e), in order to create a flow similar to a pure elongational one.
Simulations are performed in a 3D box $L_{\mbox{\tiny x}}\times L_{\mbox{\tiny y}}\times L_{\mbox{\tiny z}}=(48,48,20)\times 10^{-6}$~m.
The idea is to simulate a loading mechanism that is a mixture of stretching with optical tweezers and deformation in simple shear flow (see Tab.~\ref{tab:summary}): in fact, in this case, the membrane does not rotate (like in the STS) and the deformation is caused by the flow (like in the SHS).\\
To create such a flow, we impose a force density~\cite{malaspinas2010lattice}
\begin{equation}\label{eq:force_FRMS}
\vec{F}(x,y) = 2k \mu \gammadot_{\mbox{\tiny FRMS}} \left(\begin{array}{c}
\sin(kx)\cos(ky) \\
-\cos(kx)\sin(ky) \\
0
\end{array}\right)\; ,
\end{equation}
where $k=2\pi/L_{\mbox {\tiny x}}$, $\mu$ is the local fluid viscosity, and $\gammadot_{\mbox{\tiny FRMS}}$ is used to tune the load strength.
We multiplied Eq.~\eqref{eq:force_FRMS} by $k$ to make the velocity gradient independent of the size of the fluid domain\footnote{The following result is valid in a homogeneous fluid with dynamic viscosity $\mu$.}:
\begin{equation}\label{eq:vel_grad_FRMS}
\frac{\partial \vec{u}}{\partial\vec{x}} =
\gammadot_{\mbox{\tiny FRMS}}\left(\begin{matrix}
\cos(kx)\cos(ky) & -\sin(kx)\sin(ky) \\
\sin(kx)\sin(ky) & -\cos(kx)\cos(ky)
\end{matrix}\right)\; ,
\end{equation}
where we have reported only the $x$ and $y$ components, i.e., the components in the plane of the shear. Note that Eq.~\eqref{eq:force_FRMS} gives a pure elongational flow only where $kx=\pi/2,\,3\pi/2$ or $ky=\pi/2,\,3\pi/2$.\\
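As a minimal numerical sketch of Eq.~\eqref{eq:vel_grad_FRMS} (the shear-rate value and evaluation point below are illustrative assumptions, not simulation parameters), one can evaluate the planar velocity gradient of the four-roll mill flow at a point and check that its antisymmetric (rotational) part vanishes there, i.e., the local flow is purely elongational:

```python
import numpy as np

def velocity_gradient(x, y, k, gdot):
    """Planar velocity gradient of the four-roll mill flow (Eq. vel_grad_FRMS)."""
    return gdot * np.array([
        [np.cos(k * x) * np.cos(k * y), -np.sin(k * x) * np.sin(k * y)],
        [np.sin(k * x) * np.sin(k * y), -np.cos(k * x) * np.cos(k * y)],
    ])

k = 2 * np.pi / 48e-6                 # k = 2*pi/Lx with Lx = 48 micrometres
G = velocity_gradient(0.0, 0.0, k, gdot=1.0)   # evaluation point chosen for illustration
W = 0.5 * (G - G.T)                   # antisymmetric (rotational) part
print(np.allclose(W, 0.0))            # True: the flow is locally purely elongational
```

The same function can be evaluated on a grid to map where the rotational part is non-negligible.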
In Fig.~\ref{fig:t_vs_simulations} we report the loading time $t_{\mbox{\tiny L}}$ (panel (g)) and the relaxation time $t_{\mbox{\tiny R}}$ (panel (h)) as a function of $\gammadot_{\mbox{\tiny FRMS}}$. As for the STS and the SHS, both $t_{\mbox{\tiny L}}$ and $t_{\mbox{\tiny R}}$ decrease when the loading force increases or the membrane viscosity $\mu_{\mbox{\scriptsize m}}$ decreases. In Fig.~\ref{fig:t_vs_simulations}, panel (i), the ratio $\tilde{t}$ is reported.
\section{Discussion\label{sec:comparison}}
In our simulations, the intensity of the three kinds of mechanical loads is changed by varying different quantities, i.e., $F$ for the STS, $\dot{\gamma}$ for the SHS and $\gammadot_{\mbox{\tiny FRMS}}$ for the FRMS. To facilitate a comparison between them, we first consider the characteristic times $t_{\mbox{\tiny L}}$ and $t_{\mbox{\tiny R}}$ as well as the ratio $\tilde{t}$ as a function of the characteristic simulation stress $\sigma$ (Fig.~\ref{fig:t_vs_sigma}). To evaluate the stress $\sigma$ for the STS, we computed the area $A$ at the end of the RBC where the force $F$ is applied. Then, the stress is given by $\sigma^{\mbox{\scriptsize STS}}=F/A$; for the SHS, we wrote the stress as $\sigma^{\mbox{\scriptsize SHS}}=2\dot{\gamma}\mu_{\mbox{\scriptsize out}}$~\cite{thesis:kruger}. Finally, for the FRMS, the stress is given by the stress-peak $\sigma^{\mbox{\scriptsize FRMS}}=\mu_{\mbox{\scriptsize out}}\gammadot_{\mbox{\tiny FRMS}}$. In all three simulations, the loading and relaxation times ($t_{\mbox{\tiny L}}$ and $t_{\mbox{\tiny R}}$, respectively) show qualitatively the same behaviour, i.e., they decrease when the loading strength increases or when the membrane viscosity $\mu_{\mbox{\scriptsize m}}$ decreases (see Fig.~\ref{fig:t_vs_sigma}, panels (a) and (b)); the ratio $\tilde{t}=t_{\mbox{\tiny L}} / t_{\mbox{\tiny R}}$ is reported in Fig.~\ref{fig:t_vs_sigma}, panel (c). For small forces ($\sigma\to 0$) we observe a clear tendency towards \textit{symmetry} between loading and relaxation ($\tilde{t}(\sigma,\mu_{\mbox{\scriptsize m}}) \rightarrow 1$), meaning that the characteristic times $t_{\mbox{\tiny L}}$ and $t_{\mbox{\tiny R}}$ tend to be equal. This is the limit where one expects to recover the {\it intrinsic} dynamics of the membrane, which depends only on the value of membrane viscosity $\mu_{\mbox{\scriptsize m}}$~\cite{D0SM00587H,art:prado15}. \\
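For concreteness, the three stress definitions above can be evaluated with a few lines; the force, contact area, outer viscosity, and shear rates below are assumed order-of-magnitude values for illustration, not the values used in our simulations:

```python
mu_out = 1.2e-3            # Pa s, outer fluid viscosity (assumed plasma-like value)
F, A = 1.0e-12, 1.0e-11    # N and m^2: tweezer force and contact area (assumed)
gamma_dot = 50.0           # s^-1, shear rate for the SHS (assumed)
gamma_dot_frms = 50.0      # s^-1, FRMS load parameter (assumed)

sigma_sts = F / A                        # optical-tweezer stress, F/A
sigma_shs = 2.0 * gamma_dot * mu_out     # simple-shear stress, 2*gdot*mu_out
sigma_frms = mu_out * gamma_dot_frms     # four-roll-mill stress peak
print(sigma_sts, sigma_shs, sigma_frms)  # all fall in the ~0.1 Pa range
```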
On the other hand, for force strengths large enough, loading and relaxation dynamics are \textit{asymmetrical}, i.e., $\tilde{t} \neq 1$. As already noticed elsewhere~\cite{barthes2016motion}, this asymmetry could be explained by energetic considerations: in fact, during the loading phase, the deformation is driven by the external load (i.e., an external source of energy), while during the relaxation, the membrane provides the whole energy. Beyond these qualitative considerations, results in Fig.~\ref{fig:t_vs_sigma} provide a systematic characterisation of the relaxation times, as a function of either the stress $\sigma$ or the membrane viscosity $\mu_{\mbox{\scriptsize m}}$: an important message conveyed by our analysis is that the {\it asymmetry is not universal}, i.e., for equal values of membrane viscosity $\mu_{\mbox{\scriptsize m}}$, the ratio $\tilde{t}$ depends on the kind of mechanical load. Just to give some numbers, the difference between the values of $\tilde{t}$ for the STS and FRMS is roughly constant ($\approx$ 30\%) and it goes to zero for small values of $\sigma$; for the SHS the situation is a bit more complex because $\tilde{t}$ depends on $\mu_{\mbox{\scriptsize m}}$. However, if we compare SHS against STS for $\mu_{\mbox{\scriptsize m}}=3.18\times 10^{-7}$~m~Pa~s, we find a difference of less than 30\% for small values of $\sigma$ (i.e., $\sigma<0.1$~Pa), while
such a difference exceeds 50\% for large values of $\sigma$ (i.e., $\sigma>0.1$~Pa).\\
If we think that the asymmetry comes from the presence of a mechanical load with load strength large enough~\cite{diaz2000transient,barthes2016motion}, it is natural to expect a non-universality and a dependency on the details of the loading mechanism. Thanks to our analysis, we are in a position to further characterise this non-universality: indeed, we observe that while $\tilde{t}$ does not depend on $\mu_{\mbox{\scriptsize m}}$ for the STS and FRMS ($\tilde{t}^{\mbox{\scriptsize{STS}}} = \tilde{t}^{\mbox{\scriptsize{STS}}}(\sigma)$, $\tilde{t}^{\mbox{\scriptsize{FRMS}}} = \tilde{t}^{\mbox{\scriptsize{FRMS}}}(\sigma)$), it actually does in the SHS ($\tilde{t}^{\mbox{\scriptsize{SHS}}} = \tilde{t}^{\mbox{\scriptsize{SHS}}}(\sigma,\mu_{\mbox{\scriptsize m}})$). The collapse shown by $\tilde{t}$ in the STS and FRMS (see Fig.~\ref{fig:t_vs_sigma}, panel (c)) suggests a factorisation of the loading and relaxation times in two contributions: one depending on the membrane viscosity $\mu_{\mbox{\scriptsize m}}$ and one on the load intensity $\sigma$:
\begin{equation}\label{eq:tl_k}
t_{\mbox{\tiny L}}^K(\sigma,\mu_{\mbox{\scriptsize m}})\approx t_{\mbox{\tiny L}}^{*K}(\sigma)t_0^K(\mu_{\mbox{\scriptsize m}})\qquad\mbox{for }K=\mbox{STS, FRMS}\; ,
\end{equation}
\begin{equation}\label{eq:tr_k}
t_{\mbox{\tiny R}}^K(\sigma,\mu_{\mbox{\scriptsize m}})\approx t_{\mbox{\tiny R}}^{*K}(\sigma)t_0^K(\mu_{\mbox{\scriptsize m}})\qquad\mbox{for }K=\mbox{STS, FRMS}\; ,
\end{equation}
where the superscript $K$ stands for the kind of mechanical load. Given this factorisation, we have
\begin{equation}\label{eq:ttilde_k}
\tilde{t}^K(\sigma) = \frac{t_{\mbox{\tiny L}}^{*K}(\sigma)}{t_{\mbox{\tiny R}}^{*K}(\sigma)}\qquad\mbox{for }K=\mbox{STS, FRMS}\; .
\end{equation}
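A toy numerical check illustrates why the factorisation in Eqs.~\eqref{eq:tl_k}-\eqref{eq:tr_k} makes $\tilde{t}$ independent of $\mu_{\mbox{\scriptsize m}}$; the functional forms below are hypothetical placeholders, not fits to our data:

```python
def t_load(sigma, mu_m):
    # hypothetical factorised form: t_L = t_L*(sigma) * t0(mu_m)
    return (1.0 / sigma ** 0.5) * (1.0 + 1e7 * mu_m)

def t_relax(sigma, mu_m):
    # same mu_m-dependent factor t0, different sigma-dependent prefactor
    return (2.0 / sigma ** 0.5) * (1.0 + 1e7 * mu_m)

# evaluate the ratio for several membrane viscosities at fixed stress
ratios = [t_load(0.01, m) / t_relax(0.01, m) for m in (0.0, 1.59e-7, 3.18e-7)]
print(ratios)   # the mu_m-dependent factor cancels: all ratios equal 0.5
```

Any common factor $t_0^K(\mu_{\mbox{\scriptsize m}})$ cancels in the ratio, which is exactly the collapse observed for STS and FRMS.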
\begin{figure*}[t!]
\centering
\includegraphics[width=1.\linewidth]{Figures/t_vs_mu.pdf}
\caption{Characteristic times $t_{\mbox{\tiny L}}$ (panels (a-c)) and $t_{\mbox{\tiny R}}$ (panels (d-f)) as a function of the membrane viscosity $\mu_{\mbox{\scriptsize m}}$ (see Sec.~\ref{sec:comparison}) for the three performed simulations: stretching simulation (STS, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,regular polygon, regular polygon sides=4,fill=white!100!yellow](){};}}, Sec.~\ref{sec:stretching}), shear simulation (SHS, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.4,regular polygon, regular polygon sides=3,fill=white!100!yellow,rotate=0](){};}}, Sec.~\ref{sec:shear}), four-roll mill simulation (FRMS, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,circle,fill=white!100!yellow](){};}}, Sec.~\ref{sec:four-roll_mill}), for different values of stress $\sigma$ (from lightest to darkest color):
$\sigma=0.001$~Pa (\protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,regular polygon, regular polygon sides=4,fill=white!95!blue](){};}}, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.4,regular polygon, regular polygon sides=3,fill=white!95!blue,rotate=0](){};}}, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,circle,fill=white!95!blue](){};}}),
$\sigma=0.01$~Pa (\protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,regular polygon, regular polygon sides=4,fill=white!45!blue](){};}}, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.4,regular polygon, regular polygon sides=3,fill=white!45!blue,rotate=0](){};}}, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,circle,fill=white!45!blue](){};}}),
$\sigma=0.1$~Pa (\protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,regular polygon, regular polygon sides=4,fill=white!5!blue](){};}}, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.4,regular polygon, regular polygon sides=3,fill=white!5!blue,rotate=0](){};}}, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,circle,fill=white!5!blue](){};}}).\label{fig:t_vs_mu}}
\end{figure*}
\begin{figure*}[ht!]
\centering
\includegraphics[width=1.\linewidth]{Figures/t_vs_simulations_ninepanels_ks.pdf}
\caption{Characteristic times $t_{\mbox{\tiny L}}$ (first column of panels) and $t_{\mbox{\tiny R}}$ (second column of panels) as well as the ratio $\tilde{t}=t_{\mbox{\tiny L}}/t_{\mbox{\tiny R}}$ (third column of panels) are reported for the three simulations performed, i.e., stretching simulation (STS, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,regular polygon, regular polygon sides=4,fill=white!100!yellow](){};}}, panels (a-c), Sec.~\ref{sec:stretching}), shear simulation (SHS, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.4,regular polygon, regular polygon sides=3,fill=white!100!yellow,rotate=0](){};}}, panels (d-f), Sec.~\ref{sec:shear}), four-roll mill simulation (FRMS, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,circle,fill=white!100!yellow](){};}}, panels (g-i), Sec.~\ref{sec:four-roll_mill}), for different values of surface elastic shear modulus $k_{\mbox{\scriptsize S}}$ (from lightest to darkest color): $k_{\mbox{\scriptsize S}}=5.3\times 10^{-6}\mbox{ N m}^{-1}$ (\protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,regular polygon, regular polygon sides=4,fill=white!95!green](){};}}, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.4,regular polygon, regular polygon sides=3,fill=white!95!green,rotate=0](){};}}, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,circle,fill=white!95!green](){};}}), $k_{\mbox{\scriptsize S}}=53\times 10^{-6}\mbox{ N m}^{-1}$ (\protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,regular polygon, regular polygon sides=4,fill=white!45!green](){};}}, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.4,regular polygon, regular polygon sides=3,fill=white!45!green,rotate=0](){};}}, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,circle,fill=white!45!green](){};}}), $k_{\mbox{\scriptsize S}}=530\times 10^{-6}\mbox{ N m}^{-1}$ (\protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,regular polygon, regular polygon sides=4,fill=black!50!green](){};}}, \protect 
\raisebox{0.5pt}{\tikz{\node[draw,scale=0.4,regular polygon, regular polygon sides=3,fill=black!50!green,rotate=0](){};}}, \protect \raisebox{0.5pt}{\tikz{\node[draw,scale=0.5,circle,fill=black!50!green](){};}}). The value of membrane viscosity $\mu_{\mbox{\scriptsize m}}$ was kept fixed in all the simulations ($\mu_{\mbox{\scriptsize m}}=1.59\times 10^{-7}\mbox{ m Pa s}$).
The red dashed line represents the reference value for the symmetric case, i.e., $\tilde{t}=1$.\label{fig:t_vs_simulations_ks}}
\end{figure*}
To make progress, we investigated which physical ingredient lies at the core of the factorisation given in Eqs.~\eqref{eq:tl_k}-\eqref{eq:tr_k} or, alternatively, why the SHS does not factorise as in Eqs.~\eqref{eq:tl_k}-\eqref{eq:tr_k}.
We investigated whether such non-factorisation could be related to the oscillations of $D(t)$ that appear only in the SHS (see Sec.~\ref{sec:loading_relaxation}): however, since $\tilde{t}^{\mbox{\scriptsize{SHS}}}$ does not factorise even when the deformation $D(t)$ does not oscillate (e.g., for small values of the membrane viscosity $\mu_{\mbox{\scriptsize m}}$ and/or the shear rate $\dot{\gamma}$), we conclude that these oscillations cannot be fully responsible for the non-factorisation. We rather think that the difference between SHS and STS/FRMS is mainly due to the different dynamics that are induced by the mechanical load (see Tab.~\ref{tab:summary}).
In fact, regarding the SHS, one can split the velocity gradient $\frac{\partial \vec{u}}{\partial\vec{x}}$ into its symmetric (elongational) and antisymmetric (rotational) parts:
\begin{equation}
\frac{\partial \vec{u}}{\partial\vec{x}} =
\left(\begin{matrix}
0 & \dot{\gamma} \\
0 & 0
\end{matrix}\right)=
\left(\begin{matrix}
0 & \frac{\dot{\gamma}}{2} \\
\frac{\dot{\gamma}}{2} & 0
\end{matrix}\right) + \left(\begin{matrix}
0 & \frac{\dot{\gamma}}{2} \\
-\frac{\dot{\gamma}}{2} & 0
\end{matrix}\right)\; ,
\end{equation}
where only the two components in the shear plane are reported.
The rotational part causes the rolling motion of the membrane (see Fig.~\ref{fig:d_and_phi}, panels (a) and (b)), while the elongational one tends to deform the RBC and pushes the main diameter to an angle of $\pi/4$ with respect to the shear direction; an increase in the membrane viscosity causes an increase in the time needed for the membrane to adapt to the flow and to deform; meanwhile, the rotational component promotes a rotation of the main diameter. Overall, the increase in membrane viscosity $\mu_{\mbox{\scriptsize m}}$ leads to a decrease of the average deformation $D_{\mbox{\tiny av}}$ (see also Fig.~\ref{fig:d_and_phi}, panel (c)). To make these arguments clearer, we have made two videos available in the ESI\dag: in one we show the simulation with $\dot{\gamma}=123$ s$^{-1}$ and $\mu_{\mbox{\scriptsize m}} = 3.18\times 10^{-7}\mbox{ m Pa s}$, while in the other one the simulation with $\dot{\gamma}=123$ s$^{-1}$ and $\mu_{\mbox{\scriptsize m}} = 0\mbox{ m Pa s}$ is reported. In both cases, the tank-treading motion of the membrane appears, but, for $\mu_{\mbox{\scriptsize m}} = 0\mbox{ m Pa s}$, the membrane deforms more than in the case with $\mu_{\mbox{\scriptsize m}} = 3.18\times 10^{-7}\mbox{ m Pa s}$. In Fig.~\ref{fig:d_and_phi}, panel (c), we report the average deformation $D_{\mbox{\tiny av}}$ as a function of the stress $\sigma$ for all three mechanical loads as the membrane viscosity $\mu_{\mbox{\scriptsize m}}$ is varied. As already observed in~\cite{D0SM00587H} for the STS, we found that $D_{\mbox{\tiny av}}$ is not sensitive to the value of membrane viscosity $\mu_{\mbox{\scriptsize m}}$. Moreover, in the FRMS we found that $D_{\mbox{\tiny av}}$ shows very little dependency on $\mu_{\mbox{\scriptsize m}}$, at least in the range of $\mu_{\mbox{\scriptsize m}}$ and $\sigma$ analysed. Hence, for STS and FRMS, we report only points for $\mu_{\mbox{\scriptsize m}}=3.18 \times 10^{-7}$~m~Pa~s.
It emerges that, in the SHS, the average deformation $D_{\mbox{\tiny av}}$ saturates at a constant value: an increase in the shear rate $\dot{\gamma}$ causes an initial increase of the average deformation $D_{\mbox{\tiny av}}$; then, $D_{\mbox{\tiny av}}$ reaches a plateau and increasing the shear rate $\dot{\gamma}$ beyond a certain value does not result in an increased deformation. The higher the membrane viscosity $\mu_{\mbox{\scriptsize m}}$, the lower the value of $\dot{\gamma}$ for which the plateau is reached. Furthermore, when compared to the STS and FRMS, we can see that for the same values of $\sigma$ the average deformation $D_{\mbox{\tiny av}}$ is much smaller in the SHS. Again, this is due to the rotation of the membrane during the loading. In both STS and FRMS, the membrane does not rotate, so that the energy injected by the flow is used to deform the membrane. These investigations reveal that it is impossible to predict the loading and relaxation times if we only know the deformation and have no information about the kind of mechanical load.\\
In view of the above considerations on the deformation, it appears also natural to study the characteristic times as a function of the average deformation $D_{\mbox{\tiny av}}$. We performed this analysis (see ESI\dag, Fig.~2), confirming the picture displayed in Fig.~\ref{fig:t_vs_sigma}: again, $\tilde{t}$ shows a collapse for the STS and the FRMS and does not depend on the value of the membrane viscosity $\mu_{\mbox{\scriptsize m}}$; for the SHS $\tilde{t}$ shows a dependency on both $D_{\mbox{\tiny av}}$ and the membrane viscosity $\mu_{\mbox{\scriptsize m}}$. The results on $t_{\mbox{\tiny L}}(D_{\mbox{\tiny av}})$ and $t_{\mbox{\tiny R}}(D_{\mbox{\tiny av}})$ (Fig.~2 in ESI\dag, panels (a) and (b), respectively) further confirm that in general there is no correlation between the degree of deformation of the membrane and the characteristic times for different kinds of mechanical loads.\\
In our previous work~\cite{D0SM00587H}, we have already seen that, for small forces, $t_{\mbox{\tiny R}}$ is linear in $\mu_{\mbox{\scriptsize m}}$, in agreement with literature predictions~\cite{art:prado15}. Now we can go further and study the dependency of both $t_{\mbox{\tiny L}}$ and $t_{\mbox{\tiny R}}$ on $\mu_{\mbox{\scriptsize m}}$ for different values of $\sigma$. This will further help to determine to what degree these two kinds of dynamics can be regarded as different dynamics~\cite{diaz2000transient,barthes2016motion}. In Fig.~\ref{fig:t_vs_mu}, we report both $t_{\mbox{\tiny L}}(\mu_{\mbox{\scriptsize m}})$ and $t_{\mbox{\tiny R}}(\mu_{\mbox{\scriptsize m}})$ (first and second row of panels, respectively) for three values of $\sigma$ spanning two orders of magnitude as well as their linear fit (whose coefficients are reported in ESI\dag, Tab.~2) for all three simulations (STS in panels (a) and (d); SHS in panels (b) and (e); FRMS in panels (c) and (f)). In all three simulations, for a fixed value of $\sigma$, the linear approximation is reasonably good. For small values of $\sigma$ (e.g., $\sigma=0.001$~Pa), both $t_{\mbox{\tiny L}}$ and $t_{\mbox{\tiny R}}$ are similar for all three simulations; that is not surprising, since at small values of stress $\sigma$ the intrinsic properties of the membrane emerge. Regarding the sensitivity of the linear trend with respect to a change in $\sigma$, we observe different behaviours in the two dynamics. Regarding the loading dynamics, we observe that for high values of $\sigma$, the three load mechanisms provide similar linear fits, while in the intermediate region of $\sigma$, the SHS shows a different behaviour than the STS and FRMS.
Regarding the relaxation dynamics, the variability of the linear trends with the value of the stress is more pronounced in the presence of hydrodynamical forces (i.e., SHS and FRMS), while in the STS the linear behaviour of $t_{\mbox{\tiny R}}$ with respect to $\mu_{\mbox{\scriptsize m}}$ is only slightly perturbed by a change in the stress if compared to the others. These quantitative observations are summarised in Tab.~2 in ESI\dag. \\
A dimensional analysis could be performed to try to gain deeper insight into the problem; however, if we make both the characteristic times $t_{\mbox{\tiny L}}$ and $t_{\mbox{\tiny R}}$ and the shear rates $\dot{\gamma}$ and $\gammadot_{\mbox{\tiny FRMS}}$ dimensionless by using the characteristic elastic time $t_{\mbox{\tiny el}}=\mu_{\mbox{\scriptsize out}} r/k_{\mbox{\scriptsize S}}$, as well as the force $F$ by using the characteristic elastic force $F_{\mbox{\tiny el}}=rk_{\mbox{\scriptsize S}}$, we get only a rescaling along the $x$ and $y$ axes. Making the membrane viscosity $\mu_{\mbox{\scriptsize m}}$ dimensionless by introducing the Boussinesq number $\text{Bq}=\mu_{\mbox{\scriptsize m}}/r\mu_{\mbox{\scriptsize out}}$ (see also~\cite{art:lizhang19,art:yazdanibagchi13,D0SM00587H}) results again in a constant rescaling of all the values, without giving new insight into the problem.
In principle, one can look for a more refined non-dimensionalisation procedure combining membrane viscosity and rotational/deformation contributions: in the case of the SHS, for example, this would mean finding a shear time dependent on both $\text{Bq}$ and $\text{Ca}$. This would, however, require a more precise knowledge (e.g., a phenomenological model~\cite{art:prado15}) of how the rotational/deformation contributions couple with the membrane viscosity effects.\\
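For reference, the characteristic scales and dimensionless groups mentioned above can be collected in one place; the outer viscosity, RBC radius, and bending modulus below are assumed order-of-magnitude values, not the exact simulation inputs:

```python
mu_out = 1.2e-3      # Pa s, outer fluid viscosity (assumed)
mu_m = 1.59e-7       # m Pa s, membrane viscosity (one of the simulated values)
k_S = 5.3e-6         # N/m, elastic shear modulus of the healthy RBC
k_B = 2.0e-19        # J, bending modulus (assumed order of magnitude)
r = 4.0e-6           # m, RBC characteristic radius (assumed)

t_el = mu_out * r / k_S           # characteristic elastic time
F_el = r * k_S                    # characteristic elastic force
Bq = mu_m / (r * mu_out)          # Boussinesq number
k_B_star = k_B / (r ** 2 * k_S)   # relative bending stiffness
print(t_el, F_el, Bq, k_B_star)
```

With these numbers $\text{Bq}$ is of order tens, i.e., membrane viscous stresses are far from negligible.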
Next, we discuss the parameters $\delta_{\mbox{\tiny L}}$ and $\delta_{\mbox{\tiny R}}$ used to improve the fit (see Eqs.~\eqref{eq:lst}-\eqref{eq:lsh}-\eqref{eq:r}). In all three setups, $\delta_{\mbox{\tiny L}}$ and $\delta_{\mbox{\tiny R}}$ are close to one, especially in the STS and FRMS (see Fig.~3 in ESI\dag). The biggest deviation can be found during the loading in the SHS, where the parameter $\delta_{\mbox{\tiny L}}$ seems to tend asymptotically to $\delta_{\mbox{\tiny L}}\approx 0.6$ at increasing values of shear rate $\dot{\gamma}$: this deviation from 1 reflects the effect of the kind of mechanical load also on $\delta_{\mbox{\tiny L}}$, showing that, during the loading in the SHS, $D(t)$ is not that close to an exponential function, and then multiple time scales arise~\cite{D0SM00587H}. Indeed, having fitting parameters $\delta_{\mbox{\tiny L}}$ and $\delta_{\mbox{\tiny R}}$ different from 1 is symptomatic of the presence of multiple loading and relaxation times, respectively. This issue has already been investigated for both SHS and STS during the relaxation dynamics in our earlier study~\cite{D0SM00587H}; here we go deeper with the investigation during the loading dynamics. We have monitored the time evolution of the deformation $\frac{D(t)}{D_{\mbox{\tiny av}}}$ in log-lin scale (see Figs.~4-5-6 in ESI\dag). The initial stage of the loading/relaxation process is well characterised by a single ``dominant'' time scale, and only later, when the difference of $D(t)$ from $D_{\mbox{\tiny av}}$ (during the loading) or from 0 (during the relaxation) is less (or even much less) than about $10\%$, other time scales appear. As already pointed out in~\cite{D0SM00587H}, it is difficult to make quantitative assessments on the ``late'' dynamics, because then the deformation is close to its steady value (for the loading) and/or to the rest value (for the relaxation), and in this situation, discretisation errors could have more influence.
One could make a deeper analysis of these multiple relaxation times and fit data with two (or more) characteristic times, both during loading and relaxation; however, we notice that the values of $t_{\mbox{\tiny L}}$ and $t_{\mbox{\tiny R}}$ found by fitting with the stretched exponential (i.e., by using $\delta_{\mbox{\tiny L}},\delta_{\mbox{\tiny R}}\neq 1$) are in good agreement with the ``dominant'' time scale (see Figs.~4-5-6 in ESI\dag). On the other hand, the small difference between the fitted characteristic times and the ``dominant'' characteristic times also explains why the ratio $\tilde{t}$ becomes greater than 1 for one case in the FRMS (see Fig.~\ref{fig:t_vs_simulations} panel (i) and Fig.~7 in ESI\dag).\\
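As an illustration of how a stretched-exponential fit extracts a single dominant time scale, the sketch below generates a synthetic loading curve and recovers its parameters by linearisation; the functional form and parameter values are assumptions for illustration only, not our fitted data:

```python
import numpy as np

D_av, tL_true, delta_true = 0.3, 0.12, 0.8       # assumed curve parameters
t = np.linspace(0.01, 1.0, 200)                  # time samples, s
D = D_av * (1.0 - np.exp(-(t / tL_true) ** delta_true))  # synthetic D(t)

# linearise: log(-log(1 - D/D_av)) = delta*log(t) - delta*log(tL)
y = np.log(-np.log(1.0 - D / D_av))
delta, intercept = np.polyfit(np.log(t), y, 1)
tL = np.exp(-intercept / delta)
print(round(delta, 3), round(tL, 3))   # recovers 0.8 and 0.12
```

On real, noisy data with multiple time scales, the same linearisation shows a straight initial segment (the dominant scale) and curvature at late times.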
Before closing this section, we also discuss the effect of the surface elastic shear modulus $k_{\mbox{\scriptsize S}}$ (see Eq.~\eqref{eq:skalak}) on the loading and relaxation times. As already stated in the introduction, some pathologies can affect the value of membrane elasticity~\cite{art:kruger14deformability, art:suresh2005connections,suresh2006mechanical,brandao2003optical,briole2021molecular,brandao2003elastic,fedosov2011quantifying,luo2013inertia,ye2013stretching,hosseini2012malaria}: for example, for RBCs infected with the malaria parasite Plasmodium falciparum, experiments with optical tweezers estimated values of elastic shear modulus ranging from $k_{\mbox{\scriptsize S}} = 5.3\times 10^{-6} \mbox{ N m}^{-1}$ (i.e., for the healthy RBC) to $k_{\mbox{\scriptsize S}} = 100\times 10^{-6} \mbox{ N m}^{-1}$~\cite{art:suresh2005connections}. For this purpose, we fixed the value of membrane viscosity $\mu_{\mbox{\scriptsize m}}=1.59\times 10^{-7}\mbox{ m Pa s}$ and we varied the elastic shear modulus $k_{\mbox{\scriptsize S}}$ in a range of two orders of magnitude: from $k_{\mbox{\scriptsize S}} = 5.3\times 10^{-6} \mbox{ N m}^{-1}$ to $k_{\mbox{\scriptsize S}} = 530\times 10^{-6} \mbox{ N m}^{-1}$. Results are reported in Fig.~\ref{fig:t_vs_simulations_ks}. For all three kinds of mechanical loads simulated, increasing the value of the elastic shear modulus by a factor of 10 or 100 results in a reduction of the characteristic times by about the same factor, as expected~\cite{evans1976membrane,art:prado15}. Moreover, we analysed the ratio $\tilde{t}$, finding that it gets closer to 1 with increasing values of $k_{\mbox{\scriptsize S}}$. Mechanical balance at the interface tells us that the relative importance of viscous to elastic effects rescales as the ratio between the shear rate $\dot{\gamma}$ developed in the fluid and the surface elastic modulus $k_{\mbox{\scriptsize S}}$ of the membrane.
The shear rate developed in the fluid during both loading and relaxation is larger at increasing load strength; thus, for a fixed $\dot{\gamma}$, by increasing $k_{\mbox{\scriptsize S}}$ we should have the same results observed for $k_{\mbox{\scriptsize S}} = 5.3\times 10^{-6} \mbox{ N m}^{-1}$ at shear rates that are smaller by a factor given by the ratio of the $k_{\mbox{\scriptsize S}}$'s. In other words, we expect $\tilde{t} \rightarrow 1$ when $k_{\mbox{\scriptsize S}}$ gets very large. This fact is confirmed by our results.
We hasten to remark that these are only preliminary results, for at least two reasons: first, a proper dimensionless analysis of the governing equations~\cite{art:yazdanibagchi13,luo2013inertia} reveals that the importance of the bending modulus $k_{\mbox{\scriptsize B}}$ with respect to $k_{\mbox{\scriptsize S}}$ also needs to be taken into account via a suitable dimensionless number $k_{\mbox{\scriptsize B}}^*$.
Since it is not known whether certain blood-related pathologies, such as Plasmodium falciparum malaria parasite infection, affect the value of the bending modulus $k_{\mbox{\scriptsize B}}$~\cite{fedosov2011quantifying,luo2013inertia,ye2013stretching,hosseini2012malaria}, we have kept it fixed at the value for the healthy RBC (see Sec.~\ref{sec:method}). Therefore, in our simulations, as $k_{\mbox{\scriptsize S}}$ changes, the dimensionless number $k_{\mbox{\scriptsize B}}^*=\frac{k_{\mbox{\scriptsize B}}}{r^2\ k_{\mbox{\scriptsize S}}}$ changes as well: a more comprehensive analysis of the effects of $k_{\mbox{\scriptsize S}}$ should also assess the impact of a variation in $k_{\mbox{\scriptsize B}}^*$ separately.
Second, the SLS model we implemented to take into account the viscoelastic effects (see Sec.~\ref{sec:method}) contains an artificial elastic contribution $k'$ that is proportional to $k_{\mbox{\scriptsize S}}$ and needs to be tuned in order to recover physical results (see~\cite{D0SM00587H,art:lizhang19} for further details and validations). Varying the value of the elastic shear modulus $k_{\mbox{\scriptsize S}}$ also modifies the value of $k'$: if and how this affects the physical dynamics of the RBC at the very large $k_{\mbox{\scriptsize S}}$ we considered requires a detailed computational analysis of its own, which is outside the scope of the present paper. All these considerations surely warrant dedicated studies in the future.
\section{Conclusions\label{sec:conclusion}}
A comprehensive characterisation of the viscoelastic properties of the RBC membrane, as well as the way the membrane responds to an external force, is of paramount interest in different fields, from the detection of pathologies~\cite{art:kruger14deformability,art:suresh2005connections,art:prado15} to the design of biomedical devices~\cite{murakami1979nonpulsatile,nonaka2001development,art:behbahani09,art:arora2006hemolysis}. A paradigmatic example is provided by ventricular assist devices~\cite{hassler2020finite}, where RBCs evolve in a complex flow and their fate is closely linked to their residence time inside the device: if the residence time inside the impeller (that is, the region where the RBCs experience a wide range of stress) is much shorter than the loading time, RBCs deform without reaching a steady state configuration; on the contrary, a higher residence time leads to a deformation that can cause hemolysis (that is, the release of the cytoplasm into the surrounding plasma due to damage or rupture of the membrane).\\
In general, when an external force acts on a viscoelastic membrane, two main kinds of dynamics arise: the {\it loading} and the {\it relaxation} dynamics, with associated times $t_{\mbox{\tiny L}}$ and $t_{\mbox{\tiny R}}$. Earlier investigations pointed to the fact that these two kinds of dynamics are two distinct processes, since during the relaxation there is no external force to drive the membrane, in contrast to the loading~\cite{diaz2000transient,barthes2016motion}. To the best of our knowledge, however, an exhaustive comparative characterisation of these two kinds of dynamics has never been conducted. This motivated our work to investigate these two kinds of dynamics with different setups that involve different types of mechanical load (whose main features are summarised in Tab.~\ref{tab:summary}) while performing a parametric study on the values of membrane viscosity $\mu_{\mbox{\scriptsize m}}$. The latter choice is motivated by the large variability of membrane viscosity values reported in the literature~\cite{evans1976membrane,chien1978theoretical,hochmuth1979red,tran1984determination,art:baskurt96,riquelme2000determination,art:tomaiuolo11,braunmuller2012hydrodynamic,art:prado15,fedosov2010multiscale}.\\
The two kinds of dynamics are {\it symmetrical} ($\tilde{t}=t_{\mbox{\tiny L}}/t_{\mbox{\tiny R}} \rightarrow 1$) in the limit of small load strengths ($\sigma \rightarrow 0$), i.e., in the limit where the response function of the RBC is dominated by the ``intrinsic'' properties of the membrane; in marked contrast, we found an {\it asymmetry} in the two kinds of dynamics for load strengths large enough ($\tilde{t}=t_{\mbox{\tiny L}}/t_{\mbox{\tiny R}}\ne 1$), meaning that the loading dynamics is always faster than the relaxation one. We found that the asymmetry profoundly depends on the kind of mechanical load and we have demonstrated this non-universality via a quantitative study in terms of the applied load strength $\sigma$ and the value of membrane viscosity $\mu_{\mbox{\scriptsize m}}$. There are some realistic load mechanisms, like shear flows, that make the membrane rotate during loading while leaving the membrane relaxing to the shape at rest without rotation: in this case, the contribution that the membrane viscosity gives to the characteristic times $t_{\mbox{\tiny L}}$ and $t_{\mbox{\tiny R}}$ differs, and then the ratio $\tilde{t}$ is a function of both the stress $\sigma$ and the membrane viscosity $\mu_{\mbox{\scriptsize m}}$.
On the other hand, there are other realistic load mechanisms, like the stretching with optical tweezers or the deformation with an elongational flow, in which the membrane deforms without rotating during both processes. In this case, the contribution given by the membrane viscosity $\mu_{\mbox{\scriptsize m}}$ to the characteristic times is the same during both loading and relaxation, and as a consequence, the ratio $\tilde{t}$ is a function of the stress $\sigma$ only. Even though we showed that both loading and relaxation dynamics are not universal, we found that for a given value of the stress $\sigma$, a linear increase of the characteristic times as a function of the membrane viscosity $\mu_{\mbox{\scriptsize m}}$ is a fair approximation in all cases.\\
Finally, since some blood-related diseases~\cite{art:kruger14deformability, art:suresh2005connections,suresh2006mechanical,brandao2003optical,briole2021molecular,brandao2003elastic,fedosov2011quantifying,luo2013inertia,ye2013stretching,hosseini2012malaria} can alter the values of the elastic shear modulus $k_{\mbox{\scriptsize S}}$, we also investigated the loading and relaxation dynamics at changing $k_{\mbox{\scriptsize S}}$ for a fixed value of $\mu_{\mbox{\scriptsize m}}$, finding that larger values of $k_{\mbox{\scriptsize S}}$ promote symmetrization ($\tilde{t} \rightarrow 1$) of the dynamics.\\
We argue that our findings offer valuable physical and practical insights into the response function and the unsteady dynamics of RBCs driven by realistic mechanical loads.
\section*{Conflicts of interest}
There are no conflicts to declare.
\section*{Acknowledgements}
The authors acknowledge L. Biferale and G. Koutsou. This project has received funding from the European Union Horizon 2020 Research and Innovation Program under the Marie Skłodowska-Curie grant agreement No. 765048. We also acknowledge support from the project ‘‘Detailed Simulation of Red blood Cell Dynamics accounting for membRane viscoElastic propertieS’’ (SorCeReS, CUP No. E84I19002470005) financed by the University of Rome ‘‘Tor Vergata’’ (‘‘Beyond Borders 2019’’ call).
\section{Introduction}\label{S:Intro}
With our attention placed on the tremendous data traffic demands that are expected to be brought together with the sixth generation (6G) application scenarios~\cite{A:LC_CR_vs_SS,A:ED_in_FD_with_Residual_RF_impairments,ref14_Al_hard_imperf,C:Energy_Detection_under_RF_impairments_for_CR}, two technological approaches are examined as candidate solutions~\cite{Dang2020,Bariah2020,PhD:Boulogeorgos}. The first one is to move to higher-frequency bands, with emphasis on the terahertz (THz) one~\cite{Boulogeorgos2018,Rappaport2019,Boulogeorgos2019,Boulogeorgos2020a,Boulogeorgos2018a,WP:Wireless_Thz_system_architecture_for_networks_beyond_5G,C:ADistanceAndBWDependentAdaptiveModulationSchemeForTHzCommunications,C:UserAssociationInUltraDenseTHzNetworks,Boulogeorgos2019a}, while the second one is to exploit reconfigurable intelligent surfaces (RISs) capable of devising a beneficial wireless propagation environment~\cite{A:Exploration_of_intercell_wireless_millimeter_wave_communication_in_the_landscape_of_intelligent_metasurfaces,A:Smart_radio_enviroments,Boulogeorgos2021}.
In the technical literature, several contributions appear on analyzing, optimizing, designing, and demonstrating wireless THz systems~\cite{jornet2011,EuCAP2018_cr_ver7,Merkle2017}. All of them agree that line-of-sight (LoS) channel attenuation and blockage are the main limiting factors of THz wireless systems. To break the barriers set by blockage, recently, some research works proposed the use of RIS~\cite{A:Performance_analysis_of_LISs,A:Reconfigurable_Intelligent_Surfaces_for_EE_in_WC,Thirumavalavan2020,Renzo2020,Bjornson2020,Boulogeorgos2020b}. In particular, in~\cite{Bariah2020}, and~\cite{A:Wireless_communications_through_RIS}, the authors explained how RIS can be used to mitigate the impact of blockage and introduced the idea of reflected LoS links. In this sense, in~\cite{A:Performance_analysis_of_LISs}, the authors conducted an asymptotic uplink ergodic capacity study, assuming that the transmitter (TX)-RIS and RIS-receiver (RX) channels follow Rician distribution. Similarly, in~\cite{A:Reconfigurable_Intelligent_Surfaces_for_EE_in_WC} the joint maximization of the sum-rate and energy efficiency was studied for a multi-user downlink scenario, in which connectivity was established by means of reflected LoS.
Additionally, in~\cite{Thirumavalavan2020}, an error analysis was performed for RIS-assisted non-orthogonal multiple access networks. Moreover, in~\cite{Renzo2020}, Di Renzo et al. highlighted the fundamental similarities and differences between RISs and relays. In the same direction, in~\cite{Bjornson2020}, the authors compared the performance of RIS-assisted systems against decode-and-forward relaying ones in terms of energy efficiency, while, in~\cite{Boulogeorgos2020b}, the authors conducted a performance comparison between RIS and amplify-and-forward (AF) relays in terms of average received signal-to-noise-ratio (SNR), outage probability, diversity order and gain, symbol error rate, and ergodic capacity, which revealed that, in general, RIS-assisted wireless systems can outperform the corresponding AF relaying ones.
Despite the paramount importance of combining THz wireless and RIS technologies, there are only a few published works that investigate the performance of RIS-assisted THz wireless systems~\cite{Ma2020,Qiao2020,Tekbiyik2020}. In~\cite{Ma2020} and~\cite{Qiao2020}, although the directional nature of the THz links was taken into account, the pathloss (PL) characteristics of the transmission path were neglected, while, in~\cite{Tekbiyik2020}, the impact of molecular absorption loss was ignored. The main reason behind this is the lack of a tractable PL model for RIS-assisted systems operating in the THz band.
To cover this research gap, this paper focuses on providing a low-complexity PL model that takes into account the particularities of the THz propagation medium as well as the physical characteristics of the RIS.
In more detail, the model takes into account not only the access point (AP)-RIS and RIS-user equipment (UE) distances, but also the RIS size, the radiation pattern and the reflection coefficient of the RIS reflection unit (RU), the AP and UE antenna gains, the transmission frequency, as well as the environmental conditions\footnote{Note that there are two already published contributions that provided the end-to-end (e2e) PL in RIS-assisted wireless systems~\cite{Ellingson2019,Tang2019}. However, both~\cite{Ellingson2019} and~\cite{Tang2019} refer to low-frequency-band communications; thus, they neglect the impact of molecular absorption loss.}.
Building upon the channel attenuation expression, we provide a closed-form expression that determines the phase shift that should be applied at each RIS element in order to steer the beam reflected by the RIS towards the~UE.
\section{System Model}\label{sec:SM}
As shown in Fig.~\ref{fig:SM}, the downlink scenario of a RIS-assisted wireless THz system is considered, where a single AP serves a UE through a RIS.
The AP and the UE are equipped with highly directional antennas of gains $G_{a}$ and $G_{u}$, respectively. Both the AP and UE antennas are assumed to point at the center of the RIS.
The RIS consists of $M\times N$ orthogonal RUs of dimensions $d_x$ and $d_y$.
Moreover, the UE position is assumed to be known to the RIS controller. A three-dimensional (3D) Cartesian system is defined centered at the RIS center. The RIS horizontal and vertical directions respectively define the $x$ and $y$ axes. Hence, the position of the RU $\mathcal{U}_{m,n}$ can be obtained~as
\begin{align}
\mathbf{d}_{m,n} = \left(n-\frac{1}{2}\right) d_x \mathbf{x}_o + \left(m-\frac{1}{2}\right) d_y \mathbf{y}_o + 0 \text{ } \mathbf{z}_o,
\label{Eq:d_mn}
\end{align}
with $n\in\left[1-\frac{N}{2}, \frac{N}{2}\right]$ and $m\in\left[1-\frac{M}{2}, \frac{M}{2}\right]$. Also, $\mathbf{x}_o$, $\mathbf{y}_o$, and $\mathbf{z}_o$ stand for the unitary vectors at the $x$, $y$, and $z$ direction,~respectively. Finally, let $d_1$ and $d_2$ respectively denote the AP-RIS and RIS-UE distances.
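The RU placement above can be sketched in a few lines; the following snippet (the function name \texttt{ru\_positions} is ours, not from the paper) builds the grid of RU centers for even $M$ and $N$, which is symmetric about the RIS center by construction.

```python
import numpy as np

def ru_positions(M, N, dx, dy):
    """Centers d_{m,n} of the M x N RUs in the RIS-centered frame,
    with n in [1 - N/2, N/2] and m in [1 - M/2, M/2] (even M, N assumed)."""
    n = np.arange(1 - N // 2, N // 2 + 1)
    m = np.arange(1 - M // 2, M // 2 + 1)
    x = (n - 0.5) * dx                      # (n - 1/2) d_x along x_o
    y = (m - 0.5) * dy                      # (m - 1/2) d_y along y_o
    X, Y = np.meshgrid(x, y)                # shape (M, N)
    return np.stack([X, Y, np.zeros_like(X)], axis=-1)

pos = ru_positions(4, 4, 0.3e-3, 0.3e-3)   # RUs with 0.3 mm pitch
```

Since the index sets are symmetric around zero, the mean RU position coincides with the origin of the Cartesian system.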
\begin{figure}
\centering
\includegraphics[width=0.85\linewidth,trim=0 0 0 0,clip=false]{images/SM.pdf}
\caption{System model.}
\label{fig:SM}
\end{figure}
\section{Path-loss model}\label{SS:CM}
Let $\theta_{m,n}^{t}$ and $\theta_{m,n}^{r}$ be the elevation angles from the $(m, n)$ RU, $\mathcal{U}_{m,n}$, to the AP and to the UE, respectively, while $\phi_{m,n}^{t}$ and $\phi_{m,n}^{r}$ stand for the corresponding azimuth angles. Finally, we use $l_{m,n}^t$ and $l_{m,n}$ to denote the distance from the AP to the $\mathcal{U}_{m,n}$ RU and that from the $\mathcal{U}_{m,n}$ RU to the UE, respectively. The following theorem returns the e2e pathloss.
\begin{thm}\label{Thm:PL_general}
The e2e PL can be evaluated~as
in~\eqref{Eq:L_GC}, given at the top of the following page.
\begin{figure*}
\begin{align}
L &= M^{-2} N^{-2} \frac{64\pi^3 d_1^2 d_{2}^2} {d_x d_y \lambda^2 |R|^2 U^{r}\left(\theta_i, \phi_i\right) U^{t}\left(\theta_r, \phi_r\right) G_{t} }
\frac{\mathrm{sinc}^2\left( \frac{\pi}{\lambda} \left( \sin\left(\theta_i\right) \cos\left(\phi_i\right)+ \sin\left( \theta_r \right) \cos\left(\phi_r\right) + \zeta_1\right) d_x \right)}{\mathrm{sinc}^2\left( \frac{N\pi}{\lambda} \left( \sin\left(\theta_i\right) \cos\left(\phi_i\right)+ \sin\left( \theta_r \right) \cos\left(\phi_r\right) + \zeta_1 \right) d_x \right)}
\nonumber \\ & \times
\frac{\mathrm{sinc}^2\left( \frac{\pi}{\lambda} \left( \sin\left(\theta_i\right) \sin\left(\phi_i\right)+ \sin\left( \theta_r \right) \sin\left(\phi_r\right) +\zeta_2 \right) d_y \right)}{\mathrm{sinc}^2\left( \frac{M\pi}{\lambda} \left( \sin\left(\theta_i\right) \sin\left(\phi_i\right)+ \sin\left( \theta_r \right) \sin\left(\phi_r\right) +\zeta_2\right) d_y \right)}
\exp\left(\kappa(f) \left( d_1 + d_{2}\right) \right)
\label{Eq:L_GC}
\end{align}
\hrulefill
\end{figure*}
In~\eqref{Eq:L_GC},
\begin{align}
\zeta_1 \left(n-\frac{1}{2}\right) d_x &+ \zeta_2 \left(m-\frac{1}{2}\right) d_y = \frac{\lambda\phi_{m,n}}{2\pi},
\label{Eq:phi_mn}
\end{align}
and
\begin{align}
G_{t} &= G_{a} G G_{u}.
\end{align}
Additionally, $\phi_{m,n}$ and $|R|$ are respectively the controllable phase shift and the absolute value of the reflection coefficient introduced by the $(m,n)$ RU, while $U^{r}\left(\theta, \phi\right)$, $U^{t}\left(\theta, \phi\right)$, and $G$ are the normalized received power pattern, the normalized transmitted power pattern, and the RU gain, respectively. Moreover, $\theta_i$ and $\phi_i$ are respectively the elevation and azimuth angles from the center of the RIS to the AP, while $\theta_r$ and $\phi_r$ respectively denote the elevation and azimuth angles from the center of the RIS to the UE.
Finally, in~\eqref{Eq:L_GC}, $\kappa(f)$ stands for the molecular absorption coefficient and can be obtained as in~\cite{Kokkoniemi2020}\footnote{Since, in practice, THz wireless systems are expected to operate in the $100$--$500\text{ }\mathrm{GHz}$ band, we employ a simplified model for this band, which was introduced in~\cite{EuCAP2018_cr_ver7} and then extended in~\cite{Kokkoniemi2020}.}.
\begin{align}
\kappa(f) &= \sum_{i=1}^{6}
\frac{A_i\left(\mu\right)}{B_i\left(\mu\right)+\left(\frac{f}{100c}-q_i\right)^2}
+C\left(\mu, f\right),
\label{Eq:Kappa_f}
\end{align}
where
$A_1\left(\mu\right) = a_1 \left(1-\mu\right) \left(a_2 \left(1-\mu\right) + a_3\right),$
$A_2\left(\mu\right) = c_1 \mu \left(c_2 \mu + c_3 \right),$
$A_3\left(\mu\right) = f_1 \mu \left(f_2 \mu + f_3 \right),$
$A_4\left(\mu\right) = i_1 \mu \left(i_2 \mu + i_3 \right),$
$A_5\left(\mu\right) = k_1 \mu \left(k_2 \mu + k_3 \right),$
$A_6\left(\mu\right) = m_1 \mu \left(m_2 \mu + m_3 \right),$
$B_1\left(\mu\right) = \left(b_1 \left(1-\mu\right) + b_2\right)^2,$
$B_2\left(\mu\right) = \left(e_1 \mu + e_2\right)^2,$
$B_3\left(\mu\right) = \left(g_1 \mu + g_2\right)^2,$
$B_4\left(\mu\right) = \left(j_1 \mu + j_2\right)^2,$
$B_5\left(\mu\right) = \left(l_1 \mu + l_2\right)^2,$
$B_6\left(\mu\right) = \left(n_1 \mu + n_2\right)^2,$
$C\left(\mu, f\right) = \frac{\mu}{r_1} \left(r_2 + r_3 f^{r_4}\right),$
with $a_1=5.159\times 10^{-5}$, $a_2 = - 6.65\times 10^{-5}$, $a_3=0.0159$, $b_1=-2.09\times 10^{-4}$, $b_2=0.05$, $c_1=0.1925$, $c_2=0.135$, $c_3=0.0318$, $e_1=0.4241$, $e_2=0.0998$, $f_1=0.2251$, $f_2=0.1314$, $f_3=0.0297$, $g_1=0.4127$, $g_2=0.0932$, $i_1=2.053$, $i_2=0.1717$, $i_3=0.0306$, $j_1=0.5394$, $j_2=0.0961$, $k_1=0.177$, $k_2=0.0832$, $k_3=0.0213$, $l_1=0.2615$, $l_2=0.0668$, $m_1=2.146$, $m_2=0.1206$, $m_3=0.0277$, $n_1=0.3789$, $n_2=0.0871$, $r_1=0.0157$, $r_2=2\times 10^{-4}$, $r_3=0.915\times 10^{-112}$, $r_4=9.42$, $q_1=3.96$, $q_2=6.11$, $q_3=10.84$, $q_4=12.68$, $q_5=14.65$, and $q_6=14.94$.
Moreover, $c$ is the speed of light, and $\mu$ is the volume mixing ratio of the water vapor, which can be obtained~as
$\mu = \frac{\phi}{100}\frac{p_w}{P}, \text{ with } p_w = p_1\left(p_2 + p_3 P\right) \exp\left(\frac{p_4\left(T-p_6\right)}{T+p_5-p_6}\right)$
being the saturated water vapor partial pressure, where $\phi$ denotes the relative humidity, $p_1=6.1121$, $p_2=1.0007$, $p_3=3.46\times 10^{-8}$, $p_4=17.502$, $p_5=240.97\text{ }\mathrm{K}$, and $p_6=273.15\text{ }\mathrm{K}$. Furthermore, $T$ stands for the air temperature, and $P$ is the atmospheric~pressure.
\end{thm}
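A numerical sketch of the absorption model in the theorem may help the reader; all names below are ours, and we assume the standard form of this model, in which the Buck-type fit yields the saturated water-vapor partial pressure that is scaled by the relative humidity and divided by the total pressure to obtain the mixing ratio $\mu$. The poles at $q_i \cdot 100c$ correspond to water absorption lines, e.g., $q_2$ to the well-known line near $183\text{ }\mathrm{GHz}$.

```python
import math

C_LIGHT = 299792458.0  # speed of light in m/s

# Polynomial coefficients of A_i(mu) and B_i(mu); the first row is i = 1,
# which uses (1 - mu) instead of mu.
A_COEF = [(5.159e-5, -6.65e-5, 0.0159),   # a_1, a_2, a_3
          (0.1925, 0.135, 0.0318),        # c_1, c_2, c_3
          (0.2251, 0.1314, 0.0297),       # f_1, f_2, f_3
          (2.053, 0.1717, 0.0306),        # i_1, i_2, i_3
          (0.177, 0.0832, 0.0213),        # k_1, k_2, k_3
          (2.146, 0.1206, 0.0277)]        # m_1, m_2, m_3
B_COEF = [(-2.09e-4, 0.05),               # b_1, b_2
          (0.4241, 0.0998),               # e_1, e_2
          (0.4127, 0.0932),               # g_1, g_2
          (0.5394, 0.0961),               # j_1, j_2
          (0.2615, 0.0668),               # l_1, l_2
          (0.3789, 0.0871)]               # n_1, n_2
Q = [3.96, 6.11, 10.84, 12.68, 14.65, 14.94]
R1, R2, R3, R4 = 0.0157, 2e-4, 0.915e-112, 9.42

def mixing_ratio(T, P, rel_hum):
    """Water-vapor volume mixing ratio (rel_hum/100) * p_w / P, with p_w the
    Buck-type saturated vapor pressure fit (T in K, P in Pa, p_w in hPa)."""
    p1, p2, p3, p4, p5, p6 = 6.1121, 1.0007, 3.46e-8, 17.502, 240.97, 273.15
    p_w = p1 * (p2 + p3 * P) * math.exp(p4 * (T - p6) / (T + p5 - p6))
    return rel_hum / 100.0 * p_w / (P / 100.0)  # P converted to hPa

def kappa(f, mu):
    """Molecular absorption coefficient kappa(f) for frequency f in Hz."""
    total = 0.0
    for i in range(6):
        x1, x2, x3 = A_COEF[i]
        y1, y2 = B_COEF[i]
        v = (1.0 - mu) if i == 0 else mu
        A = x1 * v * (x2 * v + x3)
        B = (y1 * v + y2) ** 2
        total += A / (B + (f / (100.0 * C_LIGHT) - Q[i]) ** 2)
    return total + mu / R1 * (R2 + R3 * f ** R4)

mu = mixing_ratio(T=296.0, P=101325.0, rel_hum=50.0)
```

As expected, the coefficient evaluated this way peaks near the resonance frequencies and grows with the water-vapor content.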
\begin{IEEEproof}
Please refer to Appendix~A.
\end{IEEEproof}
\begin{rem}
To steer the beam in the desired direction $\theta_r=\theta_o$ and $\phi_r=\phi_o$, the parameters $\zeta_1$ and $\zeta_2$ should be set~to
\begin{align}
\zeta_1 &= -\left( \sin\left(\theta_i\right) \cos\left(\phi_i\right)+ \sin\left( \theta_o \right) \cos\left(\phi_o\right) \right)
\end{align}
and
\begin{align}
\zeta_2&= -\left(\sin\left(\theta_i\right) \sin\left(\phi_i\right)+ \sin\left( \theta_o \right) \sin\left(\phi_o\right)\right).
\end{align}
In this case, based on~\eqref{Eq:phi_mn}, the phase shift of the $(m,n)$ element can be obtained as in~\eqref{Eq:phi_mn_o}, given at the top of the following page.
\begin{figure*}
\begin{align}
\phi_{m,n}^{o}=& -\frac{2\pi}{\lambda}\left(n-\frac{1}{2}\right) \left( \sin\left(\theta_i\right) \cos\left(\phi_i\right)+ \sin\left( \theta_o \right) \cos\left(\phi_o\right) \right) d_x
- \frac{2\pi}{\lambda} \left(m-\frac{1}{2}\right) \left(\sin\left(\theta_i\right) \sin\left(\phi_i\right)+ \sin\left( \theta_o \right) \sin\left(\phi_o\right)\right) d_y
\label{Eq:phi_mn_o}
\end{align}
\hrulefill
\end{figure*}
In this case, according to~\eqref{Eq:L_GC}, the minimum PL~is
\begin{align}
L_{o} &= M^{-2} N^{-2} \frac{64\pi^3 d_1^2 d_{2}^2}{d_x d_y \lambda^2 |R|^2 U^{r}\left(\theta_i, \phi_i\right) U^{t}\left(\theta_o, \phi_o\right) G_{t} }
\nonumber \\ & \times
\exp\left(\kappa(f) \left( d_1 + d_{2}\right) \right).
\label{Eq:L_n_u_max}
\end{align}
\end{rem}
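The steering rule of the remark, combined with the phase-shift relation of the theorem, can be sketched as follows; the helper name \texttt{steering\_phases} is ours, and even $M$, $N$ are assumed.

```python
import numpy as np

def steering_phases(M, N, dx, dy, lam, theta_i, phi_i, theta_o, phi_o):
    """Per-RU phase shift that steers the reflected beam towards
    (theta_o, phi_o) for incidence from (theta_i, phi_i)."""
    zeta1 = -(np.sin(theta_i) * np.cos(phi_i) + np.sin(theta_o) * np.cos(phi_o))
    zeta2 = -(np.sin(theta_i) * np.sin(phi_i) + np.sin(theta_o) * np.sin(phi_o))
    n = np.arange(1 - N // 2, N // 2 + 1)          # n in [1 - N/2, N/2]
    m = np.arange(1 - M // 2, M // 2 + 1)          # m in [1 - M/2, M/2]
    # phi_{m,n} = (2 pi / lambda) [zeta1 (n - 1/2) dx + zeta2 (m - 1/2) dy]
    px = (2 * np.pi / lam) * zeta1 * (n - 0.5) * dx
    py = (2 * np.pi / lam) * zeta2 * (m - 0.5) * dy
    return py[:, None] + px[None, :]               # shape (M, N)

phases = steering_phases(4, 4, 3e-4, 3e-4, 1e-3, 0.0, np.pi, 0.0, 0.0)
```

For normal incidence steered back to broadside, all phase shifts vanish, since no phase gradient across the surface is needed.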
\section{Numerical Results \& Discussion}\label{sec:Results}
In this section, we present numerical results, which verify the accuracy of the PL model and highlight the propagation characteristics of RIS-assisted THz wireless systems. In this direction, unless otherwise stated, we investigate the following insightful scenario. We consider standard environmental conditions, i.e., relative humidity $50\%$, atmospheric pressure $101325\text{ }\mathrm{Pa}$, and temperature $296\text{ }\mathrm{K}$. The AP transmit antenna gain is $50\text{ }\mathrm{dBi}$, which, according to~\cite{Boulogeorgos2020,A:Wireless_Sub_THz_Communication_System_With_High_Data_Rate,A:Advances_in_THz_communications_accelerated_by_photonics}, is a realistic value for THz wireless systems, while the UE receive antenna gain is $20\text{ }\mathrm{dBi}$. The antenna pattern of the RUs is described by~\cite{Stutzman2013}
\begin{align}
U\left(\theta, \phi\right) = \left\{
\begin{array}{l l}
\cos\left(\theta\right), & \theta\in[0, \frac{\pi}{2}] \text{ and } \phi\in[0, 2\pi]\\
0, & \text{otherwise}.
\end{array}
\right.
\label{Eq:U}
\end{align}
Thus, $G$ can be obtained~as
$G =4\pi\left(\int_{0}^{2\pi} \int_{0}^{\frac{\pi}{2}} U\left(\theta, \phi\right) \sin\left(\theta \right) \mathrm{d}\theta \, \mathrm{d}\phi\right)^{-1},$
which by substituting~\eqref{Eq:U} and performing the integration returns~$G=4$. Moreover, $|R|$ is set to $0.9$, which is in-line with~\cite{Asadchy2016}. Finally, note that, in what follows, we use continuous lines and markers to respectively denote theoretical and simulation results.
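The value $G=4$ follows from the maximum-directivity definition $G = 4\pi U_{\max}/\int U\,\mathrm{d}\Omega$: for the cosine pattern, the upper-hemisphere integral equals $\pi$, so $G=4$. A minimal midpoint-rule check (the function name is ours):

```python
import math

def ru_gain(u, samples=2000):
    """Maximum-directivity gain G = 4*pi*U_max / integral of u over the upper
    hemisphere, for an azimuth-independent pattern u(theta) with U_max = 1."""
    dtheta = (math.pi / 2) / samples
    # midpoint rule for the theta integral; the phi integral contributes 2*pi
    integral = 2 * math.pi * sum(
        u((k + 0.5) * dtheta) * math.sin((k + 0.5) * dtheta) * dtheta
        for k in range(samples))
    return 4 * math.pi / integral

G = ru_gain(math.cos)  # cosine pattern of the RUs
```

With the cosine pattern, the $\theta$ integral evaluates to $1/2$ analytically, and the numerical result converges to $G=4$.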
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth,trim=0 0 0 0,clip=false]{images/Graph25.eps}
\caption{PL vs $\phi_r$, for $\theta_r=\frac{\pi}{4}$, $\theta_o=\pi/6$, $\phi_o=\pi/3$, and different values of $f$.}
\label{fig:PL_vs_phir_f}
\end{figure}
In Fig.~\ref{fig:PL_vs_phir_f}, the PL is depicted as a function of $\phi_r$, for different transmission frequencies, assuming that $d_1=d_{2}=1\text{ }\mathrm{m}$, $d_x=d_y=0.3\text{ }\mathrm{mm}$, $|R|=0.9$, $G_{a}=50\text{ }\mathrm{dBi}$, $G_{u}=20\text{ }\mathrm{dBi}$, $\theta_i=\frac{\pi}{4}$, $\phi_i=\pi$, $\theta_o=\pi/6$, and $\phi_o=\pi/3$. As expected, the minimum PL is observed for $\phi_r=\pi/3$. Moreover, it is apparent that, for a fixed $\phi_r$, as the transmission frequency increases, the PL also increases. Finally, we observe that, as the transmission frequency increases, the azimuth half power beamwidth decreases.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth,trim=0 0 0 0,clip=false]{images/Graph10.eps}
\caption{PL vs $f$ for different RIS sizes.}
\label{fig:PL_vs_f}
\end{figure}
Figure~\ref{fig:PL_vs_f} illustrates the PL as a function of $f$ for different values of $M=N$, assuming that $\theta_i=\theta_r=\theta_o=\phi_r=\phi_o=\frac{\pi}{4}$, $\phi_i=\frac{3\pi}{4}$, $d_1=d_{2}=10\text{ }\mathrm{m}$, and $d_x=d_y=0.3\text{ }\mathrm{mm}$. From this figure, it is revealed that there exist two frequency regions, the first one from $370$ to $390\text{ }\mathrm{GHz}$ and the second one from $430$ to $455\text{ }\mathrm{GHz}$, in which the PL is maximized. This is due to the resonances of water molecules. In other words, from $100$ to $500\text{ }\mathrm{GHz}$, there exist three transmission windows: the first one from $100$ to $365\text{ }\mathrm{GHz}$, the second one from $375$ to approximately $430\text{ }\mathrm{GHz}$, and the third one from $460$ to $500\text{ }\mathrm{GHz}$.
Outside these regions, for fixed $M$ and $N$, as the transmission frequency increases, the PL also increases. For example, for $M=N=20$, as $f$ increases from $100$ to $300\text{ }\mathrm{GHz}$, the PL increases by about $10\text{ }\mathrm{dB}$. Finally, it is observed that, for a given transmission frequency, as the RIS size increases, the PL decreases. For example, as $M=N$ increases from $10$ to $100$, the PL decreases by about $40\text{ }\mathrm{dB}$.
\begin{figure}
\centering
\includegraphics[width=0.75\linewidth,trim=0 0 0 0,clip=false]{images/Graph7.eps}
\caption{PL vs temperature and relative humidity.}
\label{fig:PL_vs_T_phi}
\end{figure}
In Fig.~\ref{fig:PL_vs_T_phi}, the PL is plotted as a function of the atmospheric temperature and relative humidity, assuming $M=N=100$, $f=380\text{ }\mathrm{GHz}$, $d_1=1\text{ }\mathrm{m}$, $d_{2}=10\text{ }\mathrm{m}$, $d_x=d_y\approx0.3\text{ }\mathrm{mm}$, $\theta_i=45^o$, $\phi_i=180^o$, and $\theta_r=\theta_o=\phi_r=\phi_o=45^o$.
As expected, for a fixed atmospheric temperature, as the relative humidity increases, the water molecules' density increases; as a consequence, the molecular absorption and the PL increase. For instance, for $T=273\text{ }\mathrm{K}$, the PL increases by approximately $2\text{ }\mathrm{dB}$ as the relative humidity increases from $10\%$ to $90\%$. Similarly, for a given relative humidity, as the atmospheric temperature increases, the PL also increases. For example, for a $50\%$ relative humidity, the PL increases by $2\text{ }\mathrm{dB}$ as the atmospheric temperature increases from $270$ to $290\text{ }\mathrm{K}$. Finally, by taking into account that neglecting the molecular absorption loss would lead to a PL approximately equal to $34.4\text{ }\mathrm{dB}$, it becomes evident that in this case the PL computation error could exceed $20\text{ }\mathrm{dB}$. This indicates the importance of taking the molecular absorption loss into account when evaluating the PL and the performance of RIS-assisted THz systems.
\begin{figure}
\centering
\includegraphics[width=0.75\linewidth,trim=0 0 0 0,clip=false]{images/Graph11.eps}
\caption{PL vs temperature and frequency.}
\label{fig:PL_vs_T_fr}
\end{figure}
Figure~\ref{fig:PL_vs_T_fr} illustrates the PL as a function of the air temperature and the transmission frequency, assuming $M=N=100$, $d_1=1\text{ }\mathrm{m}$, $d_{2}=10\text{ }\mathrm{m}$, $d_x=d_y\approx0.3\text{ }\mathrm{mm}$, $\theta_i=45^o$, $\phi_i=180^o$, and $\theta_r=\theta_o=\phi_r=\phi_o=45^o$. As expected, for a given transmission frequency, as the air temperature increases, the PL also increases. For example, for $f=250\text{ }\mathrm{GHz}$, the PL increases by about $0.1\text{ }\mathrm{dB}$ as the air temperature increases from $270$ to $320\text{ }\mathrm{K}$. Moreover, from this figure, it is verified that there exist two frequency regions in which the PL is maximized. In these regions, temperature variations have a more severe impact on the PL. For instance, increasing the air temperature from $270$ to $280\text{ }\mathrm{K}$ results in a $0.02\text{ }\mathrm{dB}$ PL increase if $f=280\text{ }\mathrm{GHz}$, while the same temperature increase causes a $0.5\text{ }\mathrm{dB}$ PL increase when $f=383\text{ }\mathrm{GHz}$. This indicates the importance of taking into account the air temperature and its variations when selecting the transmission~frequency.
\section{Conclusions} \label{sec:Conclusions}
In this paper, we described the system model of a RIS-assisted THz wireless system and employed electromagnetic theory tools in order to extract a generalized formula for the e2e PL. This formula revealed the relationships between the RIS specifications (namely the RIS size, the number of RUs, the RU size and reflection coefficient, the RU radiation patterns, and the phase shift of each RU), the transmission parameters (such as the transmission frequency, the AP-RIS and RIS-UE distances, the AP transmission and UE reception antenna gains, and the azimuth and elevation angles from the AP to the RIS center as well as from the RIS center to the UE), and THz-specific parameters, like the environmental conditions that determine the molecular absorption. Building upon this expression, we determined the optimal phase shift of each RU in order to steer the RIS-generated beam in a desired direction. This work is expected to contribute to the analysis, simulation, and design of RIS-assisted THz systems.
\section*{Appendix}
\section*{Proof of Theorem 1}
As $l_{m,n}^{t}\gg \lambda$, where $\lambda$ is the wavelength of the transmitted signal, the incident signal power at $\mathcal{U}_{m,n}$ can be obtained~as
\begin{align}
P_{m,n}^{i} &=
\frac{G_{AP} U^{r}\left(\theta_{m,n}^{t}, \phi_{m,n}^{t}\right) d_x d_y P_{AP}}{4\pi \left(l_{m,n}^{t}\right)^2} \exp\left(-\kappa(f) l_{m,n}^{t}\right) .
\label{Eq:Pmn_i_s2}
\end{align}
Thus, the incident signal's electric field at $\mathcal{U}_{m,n}$ can be written~as
\begin{align}
E_{n,m}^{i} = \sqrt{\frac{2 Z_o P_{m,n}^{i}}{d_x d_y}} \exp\left(-j \frac{2\pi l_{m,n}^{t}}{\lambda}\right),
\end{align}
where $Z_o$ is the air characteristic impedance.
The total signal power reflected by $\mathcal{U}_{m,n}$ can be obtained~as
$P_{m,n}^{r} = |R_{m,n}|^2 P_{m,n}^{i},$
or
\begin{align}
P_{m,n}^{r} &= \exp\left(-\kappa(f) l_{m,n}^{t}\right) \frac{|R_{m,n}|^2 d_x d_y}{4\pi \left(l_{m,n}^{t}\right)^2}
\nonumber \\ & \times
U^{r}\left(\theta_{m,n}^{t}, \phi_{m,n}^{t}\right) G_{AP} P_{AP}.
\label{Eq:Pmn_r}
\end{align}
By assuming that $l_{m,n}\gg\lambda$, we can obtain the received signal power at the UE from $\mathcal{U}_{m,n}$~as
\begin{align}
P_{m,n} &= \exp\left(-\kappa(f)\left( l_{m,n}^{t} + l_{m,n}\right)\right)
\nonumber \\ & \times
\frac{G\text{ } U^{r}\left(\theta_i, \phi_i\right) P_{m,n}^{r}}{4 \pi \left(l_{m,n}\right)^2}
U^{t}\left(\theta_r, \phi_r\right) S_{r},
\label{Eq:P_m_n_nu}
\end{align}
where
$ S_{r} = \frac{G_{u} \lambda^2}{4\pi}$
is the aperture of the UE receive antenna.
Thus, the electrical field of the received signal at the UE from $\mathcal{U}_{m,n}$ can be expressed~as
\begin{align}
E_{m,n} =\sqrt{2 Z_o \frac{P_{m,n}}{S_{r}}} \exp\left(-j\frac{2\pi}{\lambda}\left( l_{m,n}^{t} + l_{m,n}\right) \right),
\end{align}
or
\begin{align}
E_{m,n} &= \frac{R_{m,n}\sqrt{{ 2 Z_o d_x d_y U^{r}\left(\theta_i, \phi_i\right) U^{t}\left(\theta_r, \phi_r\right) G G_{AP} P_{AP}}}}{4 \pi l_{m,n} l_{m,n}^{t}}
\nonumber \\ & \times
\exp\left(-\left(\frac{1}{2}\kappa(f) + j\frac{2\pi}{\lambda} \right)\left( l_{m,n}^{t} + l_{m,n}\right) \right).
\label{E_mnnu_s2}
\end{align}
Hence, by taking into account that $\left|R_{m,n}\right|\approx \left|R\right|$, the total electric field at the UE can be evaluated~as
in~\eqref{Eq:Enu}, given at the top of this page.
\begin{figure*}
\begin{align}
E_{r} & = \frac{|R|\sqrt{{ 2 Z_o d_x d_y U^{r}\left(\theta_i, \phi_i\right) U^{t}\left(\theta_r, \phi_r\right) G G_{AP} P_{AP}}}}{4 \pi }
\hspace{-0.3cm}
\sum_{m=-\frac{M}{2}+1}^{\frac{M}{2}} \sum_{n=-\frac{N}{2}+1}^{\frac{N}{2}} \frac{\exp\left(-\left(\frac{1}{2}\kappa(f) + j\frac{2\pi}{\lambda} \right)\left( l_{m,n}^{t} + l_{m,n}\right) + j \phi_{m,n} \right)}{l_{m,n} l_{m,n}^{t}}
\label{Eq:Enu}
\end{align}
\hrulefill
\end{figure*}
The AP position can be obtained~as
\begin{align}
\mathbf{r}_t &= d_1 \sin\left(\theta_i\right) \cos\left(\phi_i\right) \mathbf{x}_o + d_1 \sin\left(\theta_i\right) \sin\left(\phi_i\right) \mathbf{y}_o
\nonumber \\ &
+ d_1 \cos\left(\theta_i\right) \mathbf{z}_o.
\label{Eq:r_t}
\end{align}
By combining~\eqref{Eq:r_t} with the AP-$\mathcal{U}_{m,n}$ distance expression, applying a Taylor expansion to the resulting expression, and keeping only the first-order terms, the distance between the AP and $\mathcal{U}_{m,n}$ can be approximated~as
\begin{align}
l_{m,n}^{t}& \approx d_1 - \sin\left( \theta_i \right) \cos\left(\phi_i\right) \left(n-\frac{1}{2}\right) d_x
\nonumber \\ &
- \sin\left( \theta_i \right) \sin\left(\phi_i\right) \left(m-\frac{1}{2}\right) d_y.
\label{Eq:lmnt}
\end{align}
Following the same steps, we prove that
\begin{align}
l_{m,n}\approx & d_{2} - \sin\left( \theta_r \right) \cos\left(\phi_r\right) \left(n-\frac{1}{2}\right) d_x
\nonumber \\ &
- \sin\left( \theta_r \right) \sin\left(\phi_r\right) \left(m-\frac{1}{2}\right) d_y.
\label{Eq:lmnnu}
\end{align}
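The accuracy of this first-order expansion is easy to sanity-check against the exact AP-RU distance; for $d_1=1\text{ }\mathrm{m}$ and RU offsets of a few millimeters, the error stays below $10^{-4}\text{ }\mathrm{m}$. A sketch (function names ours):

```python
import numpy as np

def exact_ap_ru_distance(d1, theta_i, phi_i, n, m, dx, dy):
    """Exact distance between the AP position r_t and the (m, n) RU center."""
    r_t = d1 * np.array([np.sin(theta_i) * np.cos(phi_i),
                         np.sin(theta_i) * np.sin(phi_i),
                         np.cos(theta_i)])
    r_ru = np.array([(n - 0.5) * dx, (m - 0.5) * dy, 0.0])
    return float(np.linalg.norm(r_t - r_ru))

def approx_ap_ru_distance(d1, theta_i, phi_i, n, m, dx, dy):
    """First-order Taylor expansion of the AP-RU distance."""
    return (d1
            - np.sin(theta_i) * np.cos(phi_i) * (n - 0.5) * dx
            - np.sin(theta_i) * np.sin(phi_i) * (m - 0.5) * dy)

exact = exact_ap_ru_distance(1.0, np.pi / 4, np.pi, 10, 10, 3e-4, 3e-4)
approx = approx_ap_ru_distance(1.0, np.pi / 4, np.pi, 10, 10, 3e-4, 3e-4)
```

The neglected terms are of second order in the RU offset over $d_1$, which justifies the approximation for $d_1, d_2 \gg d_x, d_y$.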
By substituting~\eqref{Eq:lmnt} and~\eqref{Eq:lmnnu} into~\eqref{Eq:Enu}, and taking into account that in practice $d_x$ and $d_y$ are of the order of $\lambda/10$, while $d_1, d_{2}\gg\lambda$, we can tightly approximate the electric field at the UE~as in~\eqref{Eq:E_r_n_u}, given at the top of the following page.
\begin{figure*}
\begin{align}
E_{r} & \approx \frac{|R|\sqrt{{ 2 Z_o d_x d_y U^{r}\left(\theta_i, \phi_i\right) U^{t}\left(\theta_r, \phi_r\right) G G_{AP} P_{AP}}}}{4 \pi d_1 d_{2} } \exp\left(-\frac{1}{2}\kappa(f) \left( d_1 + d_{2}\right) \right)
\nonumber \\ & \times
\sum_{m=-\frac{M}{2}+1}^{\frac{M}{2}} \sum_{n=-\frac{N}{2}+1}^{\frac{N}{2}} \exp\left( j\frac{2\pi}{\lambda} \left( d_1 + d_{2}-\beta+\frac{\lambda}{2\pi}\phi_{m,n}\right) \right)
\label{Eq:E_r_n_u}
\end{align}
\hrulefill
\end{figure*}
Of note, in~\eqref{Eq:E_r_n_u},
\begin{align}
\beta &= d_1 - \sin\left(\theta_i\right) \cos\left(\phi_i\right) \left(n-\frac{1}{2}\right) d_x
\nonumber \\ &
- \sin\left(\theta_i\right) \sin\left(\phi_i\right) \left(m-\frac{1}{2}\right) d_y
\nonumber \\ &
+ d_{2} - \sin\left( \theta_r \right) \cos\left(\phi_r\right) \left(n-\frac{1}{2}\right) d_x
\nonumber \\ &
- \sin\left( \theta_r \right) \sin\left(\phi_r\right) \left(m-\frac{1}{2}\right) d_y.
\end{align}
The received signal power at UE can be evaluated~as
\begin{align}
P_r = \frac{|E_{r}|^2}{2 Z_o} S_{r},
\end{align}
which, with the aid of~\eqref{Eq:E_r_n_u}, can be written~as
\begin{align}
P_r &= \frac{d_x d_y \lambda^2 |R|^2 U^{r}\left(\theta_i, \phi_i\right) U^{t}\left(\theta_r, \phi_r\right) G G_{AP} G_{u} P_{AP} }{64\pi^3 d_1^2 d_{2}^2}
\nonumber \\ & \hspace{+2.3cm} \times
\exp\left(-\kappa(f) \left( d_1 + d_{2}\right) \right) \left|\gamma\right|^2.
\label{Eq:Pr}
\end{align}
In~\eqref{Eq:Pr},
\begin{align}
\gamma &= \gamma_1 \gamma_2,
\label{Eq:gamma_s2}
\end{align}
where $\gamma_1$ and $\gamma_2$ can respectively be obtained as in~\eqref{Eq:gamma1} and~\eqref{Eq:gamma_2_s2}, given at the top of the following page.
\begin{figure*}
\begin{align}
\gamma_1 = \sum_{n=-\frac{N}{2}+1}^{\frac{N}{2}}
\exp\left(j\frac{2\pi}{\lambda} \left( \sin\left(\theta_i\right) \cos\left(\phi_i\right) + \sin\left( \theta_r \right) \cos\left(\phi_r\right) + \zeta_1 \right) \left(n-\frac{1}{2}\right) d_x \right)
\label{Eq:gamma1}
\end{align}
\hrulefill
\end{figure*}
\begin{figure*}
\begin{align}
\gamma_2 = \sum_{m=-\frac{M}{2}+1}^{\frac{M}{2}}\exp\left(j\frac{2\pi}{\lambda} \left( \sin\left(\theta_i\right) \sin\left(\phi_i\right)+ \sin\left( \theta_r \right) \sin\left(\phi_r\right) +\zeta_2\right) \left(m-\frac{1}{2}\right) d_y \right)
\label{Eq:gamma_2_s2}
\end{align}
\hrulefill
\end{figure*}
In~\eqref{Eq:gamma1} and~\eqref{Eq:gamma_2_s2}, $\zeta_1$ and $\zeta_2$ are defined in~\eqref{Eq:phi_mn}.
By using the formula for the sum of a geometric progression, and after performing some simple mathematical manipulations, \eqref{Eq:gamma1} can be rewritten~as
\begin{align}
\gamma_1 = N \frac{\mathrm{sinc}\left( \frac{N\pi}{\lambda} \left( \sin\left(\theta_i\right) \cos\left(\phi_i\right)+ \sin\left( \theta_r \right) \cos\left(\phi_r\right) +\zeta_1\right) d_x \right)}{\mathrm{sinc}\left( \frac{\pi}{\lambda} \left( \sin\left(\theta_i\right) \cos\left(\phi_i\right)+ \sin\left( \theta_r \right) \cos\left(\phi_r\right) +\zeta_1\right) d_x \right)}.
\label{Eq:gamma_1_s5}
\end{align}
Similarly,~\eqref{Eq:gamma_2_s2} can be expressed~as
\begin{align}
\gamma_2 = M \frac{\mathrm{sinc}\left( \frac{M\pi}{\lambda} \left( \sin\left(\theta_i\right) \sin\left(\phi_i\right)+ \sin\left( \theta_r \right) \sin\left(\phi_r\right) +\zeta_2\right) d_y \right)}{\mathrm{sinc}\left( \frac{\pi}{\lambda} \left( \sin\left(\theta_i\right) \sin\left(\phi_i\right)+ \sin\left( \theta_r \right) \sin\left(\phi_r\right) +\zeta_2\right) d_y \right)}.
\label{Eq:gamma_2_s5}
\end{align}
Finally, by substituting~\eqref{Eq:gamma_1_s5} and~\eqref{Eq:gamma_2_s5} into~\eqref{Eq:gamma_s2} and then to~\eqref{Eq:Pr}, we obtain $P_r= \frac{P_{AP}}{L}$, where $L$ can be evaluated as in~\eqref{Eq:L_GC}. This concludes the proof.
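The geometric-progression step can be verified numerically: for the symmetric index set, $\sum_n \exp\left(jx\left(n-\frac{1}{2}\right)\right) = \sin(Nx/2)/\sin(x/2) = N\,\mathrm{sinc}(Nx/2)/\mathrm{sinc}(x/2)$, with the unnormalized sinc. A sketch (function names ours):

```python
import cmath
import math

def sinc(t):
    """Unnormalized sinc: sin(t)/t with sinc(0) = 1."""
    return 1.0 if t == 0.0 else math.sin(t) / t

def gamma1_direct(N, x):
    """Direct sum over n in [1 - N/2, N/2] of exp(j x (n - 1/2))."""
    return sum(cmath.exp(1j * x * (n - 0.5))
               for n in range(1 - N // 2, N // 2 + 1))

def gamma1_closed(N, x):
    """Closed form N sinc(N x / 2) / sinc(x / 2)."""
    return N * sinc(N * x / 2.0) / sinc(x / 2.0)
```

Here $x$ plays the role of $\frac{2\pi}{\lambda}\left(\sin(\theta_i)\cos(\phi_i)+\sin(\theta_r)\cos(\phi_r)+\zeta_1\right)d_x$; the two evaluations agree to machine precision.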
\bibliographystyle{IEEEtran}
\section{Introduction}
Collimated streams of particles, produced in interactions of quarks and gluons and reconstructed as jets, are described by the theory of strong interactions, quantum chromodynamics (QCD).
Multijet events provide exemplary signatures in high-energy collider experiments, and modeling their characteristics plays an important role in precision measurements, as well as in searches for new physics.
The understanding of the structure of multijet final states is therefore crucial for analyses of those events.
Theoretical predictions for multijet events are based on a matrix element (ME) expansion to a fixed perturbative order, supplemented by the parton shower (PS) approach to approximate higher-order perturbative contributions.
The ME expansion incorporates color correlations between quarks and gluons, including interference terms, as well as kinematic correlations between the partons, without any approximation at fixed perturbative order.
Its application is, however, currently limited to final states with fewer than $\mathcal{O}(10)$ partons.
The PS can simulate final states containing many partons, but with probabilities calculated using the approximations of soft and collinear kinematics and partial or averaged color structures.
The best descriptions of multijet final states are based on a combination of both approaches~\cite{Catani:2001cc,Buckley:2011ms,Bengtsson:1986hr,Mrenna:2003if}.
Other features implemented in simulations, such as multiple parton interactions (MPI) and hadronization, also play an important role, \eg, in describing angular correlations between jets~\cite{Chatrchyan:2013fha,Abe:1994nj,Abbott:1997bk}.
In this paper, we investigate collinear (small-angle) and large-angle radiation in different regions of jet transverse momentum (\pt) by concentrating on two different topologies, one using three-jet events and another with \PZ + two-jet\ events.
We label the hardest jet (or the \PZ boson) as $j_1$, the next hardest as $j_2$, and the softest as $j_3$.
We introduce two observables that are sensitive to the dynamic properties of multijet final states.
One observable is the \pt ratio of $j_3$ to $j_2$, \jetratio.
The other observable is the angular distance between the jet centers of $j_2$ and $j_3$ in the rapidity-azimuth ($y$-$\phi$) phase space, $\deltaR = \sqrt{\smash[b]{(y_{3} -y_{2})^{2} + (\phi_{3} - \phi_{2})^{2}}}$.
The definition of rapidity is $y = \ln\sqrt{(E+p_{z}c)/(E-p_{z}c)}$, and the definitions of other kinematic variables are given in Ref.~\cite{Chatrchyan:2008zzk}.
As indicated in Fig.~\ref{fig1}, we classify three-jet and \PZ + two-jet\ events into different categories using these two observables:
\begin{enumerate}
\item soft ($\jetratio < 0.3$) or hard ($\jetratio > 0.6$) radiation, depending on the ratio \protect\jetratio;
\item small-angle ($\deltaR < 1.0$) or large-angle ($\deltaR > 1.0$) radiation, depending on the angular separation \deltaR.
\end{enumerate}
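The kinematic definitions and the four-way classification above can be sketched in a few lines of code (an illustration only; the function names and the azimuthal wrapping convention are our own, not part of the CMS analysis software):

```python
import math

def rapidity(E, pz):
    """Rapidity y = ln sqrt((E + pz)/(E - pz)), in units with c = 1."""
    return 0.5 * math.log((E + pz) / (E - pz))

def delta_r(y2, phi2, y3, phi3):
    """Angular distance between jet centers in (y, phi) space."""
    dphi = abs(phi3 - phi2)
    if dphi > math.pi:              # wrap the azimuthal difference into [0, pi]
        dphi = 2.0 * math.pi - dphi
    return math.hypot(y3 - y2, dphi)

def radiation_category(pt2, pt3, dr):
    """Classify an event into the four radiation categories of Fig. 1.

    Events with 0.3 <= pt3/pt2 <= 0.6 fall between the soft and hard
    definitions and are left unclassified (None).
    """
    ratio = pt3 / pt2
    if ratio < 0.3:
        soft_hard = "soft"
    elif ratio > 0.6:
        soft_hard = "hard"
    else:
        return None
    angle = "small-angle" if dr < 1.0 else "large-angle"
    return (soft_hard, angle)
```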
According to these classifications, events in the soft and small-angle radiation region, as shown in Fig.~\ref{fig1} (a), can only be described if soft gluon resummation, \eg, in the form of a parton shower, is included, whereas events in the hard and large-angle radiation region, as shown in Fig.~\ref{fig1} (d), would be better described when including the ME calculations.
The events in Figs.~\ref{fig1} (b) and (c) are also of interest, since they should include effects from both the PS and ME.
We report on proton-proton ($\Pp\Pp$) collision data collected at the CMS experiment containing three-jet events at center-of-mass energies of 8 and 13\TeV, and \PZ + two-jet\ events at a center-of-mass energy of 8\TeV.
The measurements are compared to calculations based on a leading-order (LO) or next-to-leading-order (NLO) ME supplemented with effects from PS, MPI, and hadronization.
The NLO ME descriptions apply to the lowest parton multiplicities relevant to the selected events: 2 jets for the three-jet analysis and \PZ+1j for the \PZ + two-jet\ analysis.
The measurements using three-jet final states are complementary to those with \PZ + two-jet\ events in the sense that different kinematic regions and initial-state flavor compositions are being probed.
The jets are also fully color connected, while the \PZ boson is color neutral, so color coherence effects should not appear so strongly in \PZ + two-jet\ events.
The goals of the measurements are: (i) to untangle the different features of the radiation in the collinear and large-angle events;
(ii) to investigate how well the PS approach describes the hard and large-angle radiation patterns;
and (iii) to illustrate how ME calculations can attempt to describe the soft and collinear regions.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.7\textwidth]{Figure_001}
\caption{\label{fig1} Four categories of parton radiation. (a)~soft and small-angle radiation, (b)~hard and small-angle radiation, (c)~soft and large-angle radiation, (d)~hard and large-angle radiation.}
\end{figure*}
\section{The CMS detector}
{\tolerance=800
The central feature of the CMS detector is a superconducting solenoid of 6\unit{m} internal diameter, providing a magnetic field of 3.8\unit{T}.
A silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections, reside within the volume of the solenoid.
Charged-particle trajectories are measured in the tracker with full azimuthal acceptance within pseudorapidities $\abs{\eta} < 2.5$.
The ECAL, which is equipped with a preshower detector in the endcaps, and the HCAL cover the region $\abs{\eta} < 3.0$.
Forward calorimeters extend the pseudorapidity coverage provided by the barrel and endcap detectors to the region $3.0<\abs{\eta} < 5.2$.
Finally, muons are measured up to $\abs{\eta} < 2.4$ in gas-ionization detectors embedded in the steel flux-return yoke outside the solenoid.
Events of interest are selected using a two-tiered trigger system~\cite{Khachatryan:2016bia}.
The first level, composed of custom hardware processors, uses information from the calorimeters and muon detectors to select events at a rate of around 100\unit{kHz} within a fixed latency of about 4\mus.
The second level, known as the high-level trigger (HLT), consists of a farm of processors running a version of the full event reconstruction software optimized for fast processing, and reduces the event rate to around 1\unit{kHz} before data storage.
\par}
A more detailed description of the CMS detector, together with a definition of the coordinate system and the kinematic variables, is given in Ref.~\cite{Chatrchyan:2008zzk}.
\section{Event samples and selection}
The data in this study were collected with the CMS detector at the LHC using pp collisions at center-of-mass energies of 8 and 13\TeV.
The $\sqrt{s} = 8\TeV$ data, taken in 2012 during LHC Run 1, correspond to an integrated luminosity of 19.8\fbinv, and the $\sqrt{s} = 13\TeV$ data, taken in 2015 during LHC Run 2, correspond to an integrated luminosity of 2.3\fbinv.
Particles are reconstructed and identified using a particle-flow (PF) algorithm~\cite{bib:parflow}, which utilizes an optimized combination of information from the various elements of the CMS detector.
Jets are reconstructed by clustering the four-vectors of the PF candidates with the infrared and collinear-safe anti-\kt clustering algorithm~\cite{Cacciari:2008gp} using a distance parameter $R_{\mathrm{jet}} =$ 0.5 (0.4) at $\sqrt{s} = 8\,(13)\TeV$.
The clustering is performed with the \FASTJET software package~\cite{Cacciari:2011ma}.
The jets are ordered in \pt and all events with additional jets are analyzed.
In addition, three-jet events use the charged-hadron subtraction (CHS) technique~\cite{bib:parflow} to mitigate the effect of extraneous $\Pp\Pp$ collisions in the same bunch crossing (pileup, PU).
The CHS technique reduces the contribution to the reconstructed jets from PU by removing tracks identified as originating from PU vertices.
Muons are reconstructed using a simultaneous global fit performed with the hits in the silicon tracker and the muon system.
They are required to pass standard identification criteria~\cite{Chatrchyan:2013sba,Sirunyan:2018fpa} based on the minimum number of hits in each detector, quality of the fit, and the consistency with the primary vertex by requiring the longitudinal (transverse) impact parameters to be less than 0.5 (0.2) \cm.
The efficiency to reconstruct and identify muons is greater than 95\% over the entire region of pseudorapidity covered by the CMS muon system ($\abs{\eta} < 2.4$).
The overall momentum scale is measured to a precision of 0.2\% with muons from \PZ decays.
The transverse momentum resolution varies from 1\% to 6\% depending on pseudorapidity for muons with \pt from a few \GeV to 100\GeV and reaches 10\% for 1\TeV muons~\cite{Chatrchyan:2012xi}.
Observed distributions for muons are well reproduced by Monte Carlo (MC) simulation.
Corresponding scale factors for the difference between data and MC simulations are measured with good accuracy~\cite{CMS-DP-2013-009}.
Muons must be isolated from other activity in the tracker, requiring that the \pt sum of other tracks within a cone of radius $\Delta R = \sqrt{(\Delta\eta)^{2} + (\Delta\phi)^{2}} = 0.3$ centered on the muon candidate be less than 10\% of the muon \pt.
If the two muons with the highest \pt in an event are within the isolation cone of one another, each muon is removed from the isolation sum of the other.
{\tolerance=1600
Three-jet events are collected using single-jet HLT requirements that are not prescaled.
The $\sqrt{s} = 8\,(13)\TeV$ data use a 320 (450)\GeV trigger \pt threshold.
In the offline analyses, the \pt threshold starts at 510\GeV for both sets of data.
The \PZ + two-jet\ events with the \PZ boson decaying into a pair of muons are collected at $\sqrt{s} = 8\TeV$ with a single-muon HLT that requires a muon $\pt > 24\GeV$ and $\abs{\eta} < 2.1$.
\par}
In the three-jet systems, the leading jet is required to have $\pt > 510\GeV$, because of a decreasing efficiency for single-jet triggers below this value~\cite{Khachatryan:2016bia, Khachatryan:2016mlc, Khachatryan:2016wdh}.
Events with at least three jets of $\pt > 30\GeV$ are selected for further consideration.
The leading and subleading jets must be within a rapidity range of $\abs{y} < 2.5$, and the third jet is therefore implicitly restricted to $\abs{y} < 4$ by requiring $\deltaR < 1.5$.
A dijet topology with an extra jet is selected by requiring the difference in azimuthal angle between the first and second jet to be $ \pi-1 < \Delta \phi_{12} < \pi$.
The missing transverse momentum vector \ptvecmiss is defined as the projection onto the plane perpendicular to the beam axis of the negative vector sum of the momentum of all reconstructed PF objects in an event.
Its magnitude is referred to as \ptmiss.
Events in which \ptmiss divided by the scalar sum of all transverse momenta exceeds 0.3 are rejected to remove the contamination from \PW or \PZ boson decays~\cite{CMS-PAPERS-JME-10-009, CMS-PAPERS-JME-13-003, CMS-PAPERS-JME-17-001}.
To avoid an overlap between $j_2$ and $j_3$, \deltaR\ is required to be larger than the distance parameter $R_{\mathrm{jet}}$.
We thus require \deltaR\ to be larger than 0.6 (0.5) for $\sqrt{s} = 8\,(13)\TeV$ data. The maximum \deltaR\ is set to 1.5 to ensure that $j_3$ is closer to $j_2$ than to $j_1$.
We further require that $0.1<\jetratio\ < 0.9$ to avoid \ensuremath{p_{\mathrm{T3}}}\xspace threshold effects and to ensure \pt ordering for hard radiation.
In \PZ + two-jet\ events, the \PZ boson is reconstructed from a pair of oppositely charged, isolated muons with $\pt > 25~(5)\GeV$ and $\abs{y} < 2.1$ (2.4) for the leading (subleading) muon.
Muons are required to originate from the primary vertex, with transverse and longitudinal impact parameters $d_{r} < 0.2\cm$ and $d_{z} < 0.5\cm$, respectively.
The dimuon invariant mass is required to be $70 < m_{\mu^+\mu^-} < 110\GeV$ with the dimuon momentum satisfying $\ensuremath{p_{\mathrm{T1}}}\xspace > 80\GeV$ and $\abs{y_1} < 2$.
At least two jets are required in the final state with the leading jet (labeled $j_2$) satisfying $\ptnl > 80\GeV$ and $\abs{y_{2}} < 1$ and the subleading jet (labeled $j_3$) required to have $\ensuremath{p_{\mathrm{T3}}}\xspace > 20\GeV$ with $\abs{y_{3}} < 2.4$.
The distance between the muons from the \PZ boson and the jets is required to be larger than 0.5.
The \PZ + two-jet\ topology is further restricted by requiring a difference in the azimuthal angle between the \PZ boson and $j_{2}$ of $ \Delta\phi_{12} > 2$.
Table~\ref{tabphasespace} shows a summary of the event selection requirements for both samples.
\begin{table*}[htbp]
\centering
\topcaption{Phase space selection for the three-jet and \PZ + two-jet\ analyses.}
\label{tabphasespace}
\begin{tabular}{ l l }
Three-jet events & \\
\hline
Transverse momentum of the leading jet ($j_1$) & $\ensuremath{p_{\mathrm{T1}}}\xspace > 510\GeV$\\
Transverse momentum of each jet and rapidity of $j_{1,2}$ & $\pt > 30\GeV$ , $\abs{y_{1,2}} < 2.5 $\\
Azimuthal angle difference between $j_1$ and $j_2$ & $\pi-1 < \Delta\phi_{12} < \pi$ \\
Transverse momentum ratio between $j_2$ and $j_3$ & $0.1 <\jetratio < 0.9$ \\
Angular distance between $j_2$ and $j_3$ & $R_{\mathrm{jet}}+0.1 < \deltaR < 1.5$ \\
Number of selected events at $\sqrt{s} = 8\,(13) \TeV$ & 777\,618 (613\,254) \\
\\[-1.5ex]
\PZ + two-jet\ events & \\
\hline
Transverse momentum of the \PZ boson ($j_1$) & $\ensuremath{p_{\mathrm{T1}}}\xspace > 80\GeV$, $\abs{y_1} < 2$ \\
Transverse momentum and rapidity of $j_2$ & $\ptnl > 80 \GeV$ , $\abs{y_{2}} < 1 $\\
Transverse momentum and rapidity of $j_3$ & $\ensuremath{p_{\mathrm{T3}}}\xspace > 20 \GeV $, $\abs{y_{3}} < 2.4 $\\
Azimuthal angle difference between \PZ and $j_2$ & $ 2 < \abs{\Delta \phi_{12}} < \pi $ \\
Dimuon mass & $70< m_{\mu^+\mu^-} < 110 \GeV$ \\
Angular distance between $j_3$ and $j_2$ & $0.5 < \deltaR < 1.5$ \\
Number of selected events & 15\,466\\
\end{tabular}
\end{table*}
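The three-jet phase-space requirements of Table~\ref{tabphasespace} can be expressed as a single selection predicate. The following is a hedged sketch for the 8\TeV case with $R_{\mathrm{jet}} = 0.5$; the function and argument names are our own and not part of the analysis code:

```python
import math

def passes_three_jet_selection(pt1, pt2, pt3, y1, y2, dphi12, dr23, r_jet=0.5):
    """Three-jet phase-space selection (8 TeV, R_jet = 0.5).

    Jet transverse momenta in GeV, pt-ordered: pt1 >= pt2 >= pt3.
    dphi12 is the azimuthal angle difference between j1 and j2;
    dr23 is the (y, phi) distance between j2 and j3.
    """
    ratio = pt3 / pt2
    return (pt1 > 510.0                           # leading-jet pt threshold
            and min(pt1, pt2, pt3) > 30.0         # pt of each jet
            and abs(y1) < 2.5 and abs(y2) < 2.5   # rapidity of j1, j2
            and math.pi - 1.0 < dphi12 < math.pi  # dijet topology
            and 0.1 < ratio < 0.9                 # pt3/pt2 window
            and r_jet + 0.1 < dr23 < 1.5)         # j2-j3 angular distance
```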
Generator-level jets are reconstructed by clustering the four-vectors of stable particles, excluding neutrinos, with the anti-\kt algorithm.
The kinematic requirements for muons and jets are the same as those applied to reconstructed objects.
For \PZ + two-jet\ events, the distance between the muons from the \PZ boson and the jets must satisfy $\Delta R > 0.5$.
The \ptmiss selection is not applied at the generator level for QCD multijet events.
\section{Theoretical models} \label{sec:theory}
{\tolerance=1600
Reconstructed data are compared to predictions from MC event generators, where the generated events are passed through a full detector simulation based on \GEANTfour~\cite{bib:geant} and the simulated events are reconstructed using standard CMS software.
Reconstruction-level predictions are obtained for three-jet events at $\sqrt{s}= 8\TeV$ with the \MADGRAPH~\cite{bib:madgraph5} software package matched to \PYTHIA~6~\cite{Sjostrand:2006za} with the CTEQ6L1~\cite{Pumplin:2002vw} parton distribution function (PDF) set and the Z2Star tune~\cite{CMS-PAPERS-QCD-10-010}, as well as with standalone \PYTHIA~8.1~\cite{Sjostrand:2007gs} with the CTEQ6L1 PDF set and the 4C~\cite{Corke_2011} tune.
At 13\TeV, \MADGRAPH interfaced to \PYTHIA~8.2~\cite{Sjostrand:2014zea} and standalone \PYTHIA~8.2 are used with the NNPDF2.3LO~\cite{Ball:2012cx} PDF set and the CUETP8M1~\cite{Khachatryan:2015pea} tune.
The \SHERPA~\cite{Gleisberg:2008ta} event generator interfaced to {\textsc{csshower++}}~\cite{Schumann:2007mg} with the CT10~\cite{Lai:2010vv} PDF set and the AMISIC++~\cite{PhysRevD.36.2019} tune and \MADGRAPH interfaced to \PYTHIA~6 with the CTEQ6L1 PDF set and the Z2Star tune provide \PZ + two-jet\ events at 8\TeV.
Table~\ref{detMC} summarizes the event generator versions, PDF sets and tunes.
\par}
\begin{table*}[htbp]
\centering
\topcaption{Event generator versions, PDF sets, and tunes used to produce MC samples at reconstruction level.}
\label{detMC}
\begin{tabular}{l l l}
Event generator & PDF set & Tune \\
Three-jet events at $\sqrt{s} = 8\TeV$ & & \\
\hline
\MADGRAPH~5.1.3.30 + \PYTHIA~6.425 & CTEQ6L1 & Z2Star \\
\PYTHIA~8.153 & CTEQ6L1 & 4C \\
\\[-1.5ex]
Three-jet events at $\sqrt{s} = 13\TeV$ & & \\
\hline
\MADGRAPH~5.2.3.3 + \PYTHIA~8.219 & NNPDF2.3LO & CUETP8M1 \\
\PYTHIA~8.219 & NNPDF2.3LO & CUETP8M1 \\
\\[-1.5ex]
\PZ + two-jet\ events & & \\
\hline
\SHERPA~1.4.0 + {\textsc{csshower++}} & CT10 & AMISIC++ \\
\MADGRAPH~5.1.3.30 + \PYTHIA~6.425 & CTEQ6L1 & Z2Star \\
\end{tabular}
\end{table*}
Results corrected to stable-particle level are compared to predictions obtained with the models presented below.
An overview of these models is given in Table~\ref{tableMC}.
The \PYTHIA~8~\cite{Sjostrand:2014zea} event generator provides hard-scattering events using a ME calculated at LO supplemented with PS.
These event samples are labeled as ``\PYTHIA LO 2j+PS'' for the three-jet and as ``\PYTHIA LO Z+1j+PS'' for \PZ + two-jet\ events.
The PDF set NNPDF2.3LO and the CUETP8M1 parameter set for the simulation of the underlying event (UE) are used with free parameters adjusted to measurements in $\Pp\Pp$ collisions at the LHC and proton-antiproton collisions at the Fermilab Tevatron.
The Lund string model~\cite{Andersson:1998tv} is applied for the hadronization process.
The \MGvATNLO event generator, labeled as ``\MADGRAPH{}'' in the following, is used to simulate hard processes with up to 4 final-state partons at LO accuracy.
It is interfaced to \PYTHIA~8 with the CUETP8M1 tune and the NNPDF2.3LO PDF set for the simulation of PS, hadronization, and MPI, for three-jet, and to \PYTHIA~6 with the Z2Star tune and the CTEQ6L1 PDF set for \PZ + two-jet\ events.
The three-jet sample is labeled as ``\MADGRAPH \lofourps'' and the \PZ + two-jet\ sample is labeled as ``\MADGRAPH LO Z+4j+PS''.
The \kt-MLM procedure \cite{Alwall:2007fs} is used to match jets from the ME and PS with a matching scale of 10\GeV.
Predictions are also included using the \POWHEG\ {\textsc{box}} library \cite{bib:Nason:2004rx,bib:Frixione:2007vw,bib:Alioli:2010xd}, with the CT10 NLO~\cite{Lai:2010vv} PDFs and with the \PYTHIA~8 CUETP8M1 tune applied to simulate PS, MPI, and hadronization.
The \POWHEG generator is run in the dijet mode \cite{bib:POWHEG_Dijet} providing an NLO $2\to2$ calculation, labeled as ``\POWHEG \nlops''.
The matching between the \POWHEG ME calculations and the \PYTHIA UE~\cite{Khachatryan:2015pea} simulation is performed using the shower-veto procedure (UserHook option 2~\cite{Sjostrand:2014zea}).
The \SHERPA software package is used to simulate \PZ + two-jet\ events.
The hard process is calculated at LO for a ME with up to four final-state partons and the CT10 PDF set is used.
This sample is labeled as ``\SHERPA LO Z+4j+PS''. The \SHERPA\ generator has its own PS~\cite{Schumann:2007mg}, hadronization, and MPI tune~\cite{PhysRevD.36.2019}.
Finally, the \MGvATNLO generator is also used in the \MCATNLO mode, providing a \PZ + one-jet ME at NLO accuracy.
This event generator is interfaced to \PYTHIA~8, using the CUETP8M1 tune and the NNPDF3.0NLO~\cite{Ball:2014uwa} PDF set, to produce \PZ + two-jet\ events. The sample is labeled as ``a\MCATNLO NLO Z+1j+PS''.
The background from \PW, \PZ, top quark, and diboson production for the three-jet analysis is negligible and not further considered.
The main background for \PZ + two-jet\ events comes from \ttbar, single top, and diboson production.
The \ttbar, ZZ, and WZ events are simulated with \MADGRAPH 5.1.3.30 + \PYTHIA 6.425 using the same tune and PDF set as for generating \PZ + two-jet\ samples.
WW events are generated with \PYTHIA 6.425 with CTEQ6L1 PDF set and Z2Star tune.
Single top events are generated with \POWHEG (CT10 PDF set, Z2Star tune).
\begin{table*}[htbp]
\centering
\topcaption{MC event generators and version numbers, parton-level processes, PDF sets, and UE tunes used for the comparison with measurements.}
\label{tableMC}
\cmsTable{
\begin{tabular}{l l l l}
Event generator & {Parton-level process} & {PDF set} & {Tune}\\
\\[-1.5ex]
{Three-jet events} & & & \\
\hline \PYTHIA~8.219 & LO 2j+PS & NNPDF2.3LO & CUETP8M1 \\
\MADGRAPH~5.2.3.3 + \PYTHIA~8.219 & \lofourps & NNPDF2.3LO & CUETP8M1 \\
\POWHEG 2 + \PYTHIA~8.219 & \nlops & CT10 NLO & CUETP8M1 \\
\\[-1.5ex]
{\PZ + two-jet\ events} & & & \\
\hline
\PYTHIA~8.219 & LO Z+1j+PS & NNPDF2.3LO & CUETP8M1 \\
\MADGRAPH~5.1.3.30 + \PYTHIA~6.425 & LO Z+4j+PS & CTEQ6L1 & Z2Star \\
\SHERPA~1.4.0 + {\textsc{csshower++}} & LO Z+4j+PS & CT10 & AMISIC++ \\
a\MCATNLO + \PYTHIA~8.223 & NLO Z+1j+PS & NNPDF30\_nlo\_nf\_5\_pdfas & CUETP8M1 \\
\\[-1.5ex]
\end{tabular} }
\end{table*}
\section{Data correction and study of systematic uncertainties} \label{sec:systematic}
To facilitate the comparison of data with theory, the data are unfolded from reconstruction to stable-particle level, defined by a mean decay length larger than 1\unit{cm}, so that measurement effects are removed and that the true distributions in the observables are determined.
The unfolding is performed using the D'Agostini algorithm~\cite{DAgostini:1994zf} as implemented in the \RooUnfold\ software package~\cite{bib:RooUnfold} for three-jet events, while the singular value decomposition method~\cite{Hocker:1995kb} is used for \PZ + two-jet\ events.
The response matrices are obtained from the full detector simulation using \MADGRAPH for three-jet events and \SHERPA\ for \PZ + two-jet\ events.
We estimate the influence of \ttbar, single top, and diboson backgrounds by adding generated events produced with the \MADGRAPH LO Z+4j+PS event generator and comparing the predictions for the observables \jetratio\ and \deltaR\ with those from the same generator without the backgrounds.
For \ttbar production with fully leptonic decay and dibosons the probability of $j_3$ emission increases from 2\% (soft radiation) to 10\% (hard radiation) depending on the phase space.
For semileptonic and hadronic decays and single top production the change is negligible.
Since the background effect is comparable to the systematic uncertainties, it is not included in the theoretical estimations and it is not subtracted from the data.
The distributions are normalized to the integral of the spectra for three-jet events and to the number of inclusive $\PZ$ + one-jet events in the \PZ + two-jet\ analysis.
The \PZ + two-jet\ analysis normalization thus reflects the probability to have more than one jet in the event.
Systematic uncertainties associated with the jet energy scale (JES) calibration, the jet energy resolution (JER), PU modeling, model dependence, as well as the unfolding method, are estimated.
Muon-related uncertainties (single muon trigger efficiency, muon isolation, muon scale and resolution) for the \PZ + two-jet\ channel are negligible with respect to other systematic sources.
The treatment of the uncertainty depends on the uncertainty source and is estimated separately for each bin (see below).
The overall uncertainty for each bin is estimated by summing in quadrature the uncertainties from the various sources.
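For a given bin, this quadrature combination can be sketched as follows (an illustration only; the numerical values are the three-jet 8\TeV entries of Table~\ref{sys}, in \%):

```python
import math

def combine_in_quadrature(uncertainties):
    """Total per-bin systematic uncertainty as the quadrature sum
    of the individual sources (JES, JER, pileup, unfolding/model)."""
    return math.sqrt(sum(u * u for u in uncertainties))

# Three-jet sources at 8 TeV, in %: JES, JER, pileup, unfolding/model.
# The total is dominated by the unfolding/model term.
total_8tev = combine_in_quadrature([0.15, 0.16, 0.10, 1.1])
```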
The systematic uncertainty from the JES is 0.15 (0.24)\% at $\sqrt{s} = 8\,(13)\TeV$ for the three-jet case and 5--10\% for the \PZ + two-jet\ events.
The JER observed in data differs from that obtained from simulation and simulated jets are therefore smeared to obtain the same resolution as in the data~\cite{bib:jes8}.
The systematic uncertainty from JER is estimated by varying the simulated JER uncertainty up and down by one standard deviation, which results in a systematic uncertainty of 0.16 (0.12)\% at $\sqrt{s} = 8\,(13)\TeV$ for three-jet and 2--3\% for \PZ + two-jet\ events.
When the distributions of \PZ + two-jet\ events are normalized to the integrals of the histograms, instead of the number of \PZ + one-jet events, the systematic uncertainties due to the JES and JER decrease to 0.3--0.5\%, except for the \jetratio\ shape, which is still sensitive to the JES with changes of up to 3\%.
The distribution in the number of primary vertices is sensitive to the PU difference between data and simulation.
To estimate the uncertainty due to the PU modeling, the number of PU events in simulation is changed by shifting the total inelastic cross section by $\pm$5\%~\cite{Chatrchyan:2012nj}.
The resulting PU uncertainties are 0.10 (0.17)\% at $\sqrt{s} = 8\,(13)\TeV$ for the three-jet and 1\% for the \PZ + two-jet\ events.
The dependence on the event generator used for the unfolding is estimated with MC event samples from \MADGRAPH and \PYTHIA for three-jet, and \SHERPA\ and \MADGRAPH\ for the \PZ + two-jet\ events.
The means of both sets of unfolded data are used as the nominal values.
This uncertainty is $\approx 1.1$ (0.25)\% at $\sqrt{s} = 8\,(13)\TeV$ for the three-jet and 1\% for the \PZ + two-jet\ events, which is half of the difference between the results obtained with the respective event generators.
The difference in the results is due to statistical fluctuations from the limited number of events in the MC simulation.
Table~\ref{sys} summarizes the systematic uncertainties in the measurements.
\begin{table*}[htbp]
\centering
\topcaption{Systematic uncertainties in the measurements in \%.}
\label{sys}
\begin{tabular}{l c c}
{Source} & three-jet 8/13\TeV& {\PZ + two-jet\ 8\TeV} \\
\hline
Jet energy scale & 0.15/0.24 & 5--10 \\
Jet energy resolution & 0.16/0.12 & 2--3 \\
Pileup & 0.1/0.17 & 1 \\
Unfolding and model dependence & 1.1/0.25 & 1\\
\end{tabular}
\end{table*}
The systematic uncertainties from various sources are similar for the three-jet samples at $\sqrt{s} = 8$ and 13 \TeV, except for unfolding and model dependence at $\sqrt{s} = 8\TeV$.
The systematic uncertainties between the three-jet and \PZ + two-jet\ analysis cannot be compared directly because each analysis uses a different normalization and also differs in statistical significance.
The JES uncertainty is especially sensitive to the jet \pt range, and the \PZ + two-jet\ phase space has a lower \pt threshold than the one used in the three-jet events.
The figures of Sec.~\ref{sec:results} show the total systematic uncertainty as a band in the panels displaying the ratio of predictions over data.
\section{Results} \label{sec:results}
We compare the distributions in the ratio \jetratio\ in data to predictions for events with small-angle ($\deltaR < 1.0$) and large-angle radiation ($\deltaR > 1.0$).
We also compare the \deltaR\ distributions in data to predictions with soft ($\jetratio < 0.3$) and hard radiation ($\jetratio > 0.6$).
The events with $0.3 < \jetratio < 0.6$ are not used in the comparisons for the \deltaR\ observable because we focus on the limits in soft and hard radiation.
This classification is summarized in Fig.~\ref{fig1}, within the phase space defined in Table~\ref{tabphasespace}.
The data measurements are provided at the Durham High Energy Physics Database (HEPData)~\cite{hepdata}.
The uncertainties in the PDF and in the renormalization and factorization scales are investigated for the \POWHEG and a\MCATNLO\ models.
Other theoretical predictions are expected to have comparable uncertainties.
The PDF uncertainties are calculated as recommended in PDF4LHC~\cite{Butterworth:2015oua} following the description of the PDF sets: for CT10 using the Hessian approach; and for NNPDF using MC replicas.
The renormalization and factorization scales are varied by a factor 2 up and down, excluding the (2,1/2) and (1/2,2) cases.
Finally, the theoretical uncertainties are obtained as the quadratic sum of the PDF variance and the envelope of the scale variations, and are displayed as a band around the theoretical predictions in Figs.~\ref{fig:8TeVpt}--\ref{fig:ZfigDelta}.
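The scale-variation and combination procedure described above can be sketched as follows (illustrative only; the function names are our own):

```python
import itertools
import math

def retained_scale_points():
    """Seven-point scheme: vary mu_R and mu_F by factors (1/2, 1, 2),
    excluding the opposite-direction pairs (2, 1/2) and (1/2, 2)."""
    points = []
    for kr, kf in itertools.product((0.5, 1.0, 2.0), repeat=2):
        if (kr, kf) in ((2.0, 0.5), (0.5, 2.0)):
            continue
        points.append((kr, kf))
    return points

def scale_envelope(nominal, predictions):
    """Envelope of the scale variations: the maximum absolute deviation
    from the nominal prediction among the retained scale points."""
    return max(abs(x - nominal) for x in predictions)

def total_theory_uncertainty(pdf_unc, scale_unc):
    """Quadratic sum of the PDF uncertainty and the scale envelope."""
    return math.hypot(pdf_unc, scale_unc)
```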
\subsection{Three-jet selection}
We show the $\sqrt{s}=8\TeV$ measurements of \jetratio\ in Fig.~\ref{fig:8TeVpt} and of \deltaR\ in Fig.~\ref{fig:8TeVDelta}, and compare them to theoretical expectations.
In Figs.~\ref{fig:13TeVpt} and \ref{fig:13TeVDelta} the distributions are given for $\sqrt{s} = 13\TeV$.
Figure~\ref{fig:8TeVpt} (upper) shows the \jetratio\ distribution for the small \deltaR\ region.
All predictions show significant deviations from the measurements.
Interestingly, the \lofourps\ prediction shows different behavior compared with LO 2j+PS\ and \nlops.
We see that the number of partons in the ME calculation and the merging method with the PS in the present simulations lead to different predictions.
In Fig.~\ref{fig:8TeVpt} (lower) the \jetratio\ distribution is shown for large \deltaR.
This region of phase space is well described by the \lofourps~calculations, while the LO 2j+PS\ and \nlops\ predictions show large deviations from the measurements.
\begin{figure}[htb]
\centering
\includegraphics[width=0.4\textwidth]{Figure_002-a}
\includegraphics[width=0.4\textwidth]{Figure_002-b}
\caption{Three-jet events at $\sqrt{s} = 8\TeV$ compared to theory: (upper) \protect\jetratio\ for small-angle radiation ($\deltaR < 1.0$), (lower) \protect\jetratio\ for large-angle radiation ($\deltaR > 1.0$).}
\label{fig:8TeVpt}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=0.4\textwidth]{Figure_003-a}
\includegraphics[width=0.4\textwidth]{Figure_003-b}
\caption{Three-jet events at $\sqrt{s} = 8\TeV$ and comparison to theoretical predictions: (upper) \deltaR\ for soft radiation ($\jetratio < 0.3$), (lower) \deltaR\ for hard radiation ($\jetratio > 0.6$).}
\label{fig:8TeVDelta}
\end{figure}
In Fig.~\ref{fig:8TeVDelta}, the \deltaR\ distribution is shown for two regions of \jetratio .
Figure~\ref{fig:8TeVDelta} (upper) shows $\jetratio < 0.3$.
The predictions from LO 2j+PS\ and \nlops\ describe the measurement well, while the prediction from \lofourps\ shows a larger deviation from the data.
In Fig.~\ref{fig:8TeVDelta} (lower) the \deltaR\ distribution is shown for $\jetratio > 0.6$.
In contrast to Fig.~\ref{fig:8TeVDelta} (upper), the predictions for distributions from LO 2j+PS\ differ from the measurement, whereas the predictions from \nlops\ and \lofourps\ agree well with it.
This indicates that in this region the contribution from higher-multiplicity ME calculations supplemented with PS should be included.
The same comparisons are performed for the $\sqrt{s} = 13\TeV$ measurements as shown in Figs.~\ref{fig:13TeVpt} and \ref{fig:13TeVDelta}.
A behavior similar to that at $\sqrt{s} = 8\TeV$ is observed.
In conclusion, none of the simulations is able to simultaneously describe both the \jetratio\ and the \deltaR\ distributions in three-jet events.
\begin{figure}[htb]
\centering
\includegraphics[width=0.4\textwidth]{Figure_004-a}
\includegraphics[width=0.4\textwidth]{Figure_004-b}
\caption{\label{fig:13TeVpt} Three-jet events at $\sqrt{s} = 13\TeV$ compared to theory: (upper) \protect\jetratio\ for small-angle radiation ($\deltaR < 1.0$), (lower) \protect\jetratio\ for large-angle radiation ($\deltaR > 1.0$).}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=0.4\textwidth]{Figure_005-a}
\includegraphics[width=0.4\textwidth]{Figure_005-b}
\caption{\label{fig:13TeVDelta} Three-jet events at $\sqrt{s} = 13\TeV$ and comparison to theoretical predictions: (upper) \deltaR\ for soft radiation ($\jetratio < 0.3$), (lower) \deltaR\ for hard radiation ($\jetratio > 0.6$).}
\end{figure}
\subsection{\texorpdfstring{$\PZ$}{Z} + two-jet selection}
The measurement of \jetratio\ for \PZ + two-jet\ events is presented in Fig.~\ref{fig:Zfigpt} for data at $\sqrt{s} = 8\TeV$.
All distributions are normalized to the number of selected \PZ + one-jet events.
All predictions from \PYTHIA, \SHERPA, \MADGRAPH, and a\MCATNLO\ agree with data within the uncertainties of the measurement except for the phase space region with hard radiation.
\begin{figure}[htb]
\centering
\includegraphics[width=0.4\textwidth]{Figure_006-a}
\includegraphics[width=0.4\textwidth]{Figure_006-b}
\caption{\label{fig:Zfigpt} \PZ + two-jet\ events at $\sqrt{s} = 8\TeV$ compared to theory: (upper) \protect\jetratio\ for small-angle radiation ($\deltaR < 1.0$), (lower) \protect\jetratio\ for large-angle radiation ($\deltaR > 1.0$).}
\end{figure}
Figure~\ref{fig:ZfigDelta} shows the measurement as a function of \deltaR.
The a\MCATNLO\ prediction deviates from the data at high \deltaR\ and small \jetratio, while \PYTHIA, \SHERPA, \MADGRAPH, and a\MCATNLO\ describe the shape of the distribution in the high-\jetratio\ range, but underestimate the data due to a smaller contribution from production of $j_3$.
This feature is a consequence of normalizing the \PZ + two-jet\ distributions to the number of inclusive \PZ + one-jet events selected.
\begin{figure}[htb]
\centering
\includegraphics[width=0.4\textwidth]{Figure_007-a}
\includegraphics[width=0.4\textwidth]{Figure_007-b}
\caption{\label{fig:ZfigDelta} \PZ + two-jet\ events at $\sqrt{s} = 8\TeV$ compared to theory: (upper) \deltaR\ for soft radiation ($\jetratio < 0.3$), (lower) \deltaR\ for hard radiation ($\jetratio > 0.6$).}
\end{figure}
Figures~\ref{fig:Zfig6pt} and \ref{fig:Zfig6Delta} compare the event distributions with predictions from \PYTHIA~8 with the final-state PS and MPI switched off.
The initial-state PS was kept, because one of the jets must originate from PS when \PZ + two-jet\ events are selected.
Multiple parton interactions play a very minor role, while the final-state PS in \PYTHIA~8 is very important.
When the final-state PS is switched off, events where both jets come from the initial-state PS are kept with a tendency to be close to each other in \deltaR.
In general, the measurements with \PZ + two-jet\ events are well described by all theoretical predictions, except for the underestimation of the $j_3$ emission.
The contribution of background from \ttbar production and dibosons can partially compensate the lack of the $j_3$ emission.
The contribution of the background (\ttbar production with fully leptonic decay and dibosons) increases the probability of $j_3$ emission from 2\% (soft radiation) to 10\% (hard radiation) depending on the phase space region.
The effect of the other processes (\ttbar production with semileptonic and hadronic decays, single top production) is negligible.
In comparison with the three-jet measurements, we observe significant differences; only in the region of large \deltaR\ and large \jetratio\ (hard and large-angle radiation) do the theoretical predictions agree with the measurement.
The accessible range in \pt is rather small in \PZ + two-jet\ events because of the limit in the \pt of the \PZ bosons ($\ensuremath{p_{\mathrm{T1}}}\xspace > 80\GeV$), while the three-jet selection, on the contrary, can have a rather large range ($\ensuremath{p_{\mathrm{T1}}}\xspace > 510\GeV$).
This may explain why the region of small \jetratio\ is better described by predictions that include PS in the latter case.
In addition, the large-angle radiation is best described by fixed-order ME calculations.
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\textwidth]{Figure_008-a}
\includegraphics[width=0.4\textwidth]{Figure_008-b}
\caption{\label{fig:Zfig6pt} \PZ + two-jet\ events at $\sqrt{s} = 8\TeV$ compared to theoretical predictions from \PYTHIA~8 without initial-state parton showers (IPS), final-state parton showers (FPS), and MPI: (left) \protect\jetratio\ for small-angle radiation ($\deltaR < 1.0$), (right) \protect\jetratio\ for large-angle radiation ($\deltaR > 1.0$).}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\textwidth]{Figure_009-a}
\includegraphics[width=0.4\textwidth]{Figure_009-b}
\caption{\label{fig:Zfig6Delta} \PZ + two-jet\ events at $\sqrt{s} = 8\TeV$ compared to theoretical predictions from \PYTHIA~8 without initial-state parton showers (IPS), final-state parton showers (FPS), and MPI: (left) \deltaR\ for soft radiation ($\jetratio < 0.3$), (right) \deltaR\ for hard radiation ($\jetratio > 0.6$).}
\end{figure}
In conclusion, the \PZ + two-jet\ measurement exhibits a different \jetratio\ distribution than the three-jet events, originating from the different kinematic selection criteria, which reduce the sensitivity in the soft and collinear region.
Within the available phase space, the measurements are in reasonable agreement with both PS and ME calculations, apart from the emission of $j_3$ in the high-\jetratio\ region.
\section{Summary}
Two kinematic variables are introduced to quantify the radiation pattern in multijet events: (i) the transverse momentum ratio (\jetratio) of two jets, and (ii) their angular separation (\deltaR). The variable \jetratio\ is used to distinguish between soft and hard radiation, while \deltaR\ classifies events into small- and large-angle radiation types.
Events with three or more energetic jets as well as inclusive \PZ + two-jet\ events are selected for study using data collected at $\sqrt{s} = 8\TeV$ corresponding to an integrated luminosity of 19.8\fbinv.
Three-jet events at $\sqrt{s} = 13\TeV$ corresponding to an integrated luminosity of 2.3\fbinv are also analyzed.
No significant dependence on the center-of-mass energy is observed in the differential distributions of \jetratio\ and \deltaR.
{\tolerance=800
Overall, large-angle radiation (large \deltaR) and hard radiation (large \jetratio) are well described by the matrix element (ME) calculations (using \lofourps\ formulations), while the parton shower (PS) approaches (LO 2j+PS\ and \nlops) fail to describe the regions of large-angle and hard radiation.
The collinear region (small \deltaR) is not well described; LO 2j+PS, \nlops, and \lofourps\ distributions show deviations from the measurements.
In the soft region (small \jetratio), the PS approach describes the measurement also in the large-angle region (full range in \deltaR), while for large \jetratio\ higher-order ME contributions are needed to describe the three-jet measurements.
The distributions in \PZ + two-jet\ events are reasonably described by all tested generators.
Nevertheless, we find an underestimation of third-jet emission at large \jetratio\ both in the collinear and large-angle regions, for all of the tested models.
Contributions from \ttbar and diboson production may partially account for the difference.
These results illustrate how well the collinear/soft and large-angle/hard regions are described by the different approaches.
The different kinematic regions and initial-state flavor composition may be the reason why the three-jet measurements are less consistent with the theoretical predictions relative to the \PZ + two-jet\ final states.
These results clearly indicate that the methods of merging ME with PS calculations are not yet optimal for describing the full region of phase space.
The measurements presented here serve as benchmarks for future improved predictions from ME calculations combined with parton showers.
\par}
\begin{acknowledgments}
We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid and other centers for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC, the CMS detector, and the supporting computing infrastructure provided by the following funding agencies: BMBWF and FWF (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, FAPERGS, and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES and CSF (Croatia); RIF (Cyprus); SENESCYT (Ecuador); MoER, ERC PUT and ERDF (Estonia); Academy of Finland, MEC, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRT (Greece); NKFIA (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); MSIP and NRF (Republic of Korea); MES (Latvia); LAS (Lithuania); MOE and UM (Malaysia); BUAP, CINVESTAV, CONACYT, LNS, SEP, and UASLP-FAI (Mexico); MOS (Montenegro); MBIE (New Zealand); PAEC (Pakistan); MSHE and NSC (Poland); FCT (Portugal); JINR (Dubna); MON, RosAtom, RAS, RFBR, and NRC KI (Russia); MESTD (Serbia); SEIDI, CPAN, PCTI, and FEDER (Spain); MOSTR (Sri Lanka); Swiss Funding Agencies (Switzerland); MST (Taipei); ThEPCenter, IPST, STAR, and NSTDA (Thailand); TUBITAK and TAEK (Turkey); NASU (Ukraine); STFC (United Kingdom); DOE and NSF (USA).
\hyphenation{Rachada-pisek} Individuals have received support from the Marie-Curie program and the European Research Council and Horizon 2020 Grant, contract Nos.\ 675440, 724704, 752730, and 765710 (European Union); the Leventis Foundation; the Alfred P.\ Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the F.R.S.-FNRS and FWO (Belgium) under the ``Excellence of Science -- EOS" -- be.h project n.\ 30820817; the Beijing Municipal Science \& Technology Commission, No. Z191100007219010; the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Deutsche Forschungsgemeinschaft (DFG), under Germany's Excellence Strategy -- EXC 2121 ``Quantum Universe" -- 390833306, and under project number 400140256 - GRK2497; the Lend\"ulet (``Momentum") Program and the J\'anos Bolyai Research Scholarship of the Hungarian Academy of Sciences, the New National Excellence Program \'UNKP, the NKFIA research grants 123842, 123959, 124845, 124850, 125105, 128713, 128786, and 129058 (Hungary); the Council of Science and Industrial Research, India; the HOMING PLUS program of the Foundation for Polish Science, cofinanced from European Union, Regional Development Fund, the Mobility Plus program of the Ministry of Science and Higher Education, the National Science Center (Poland), contracts Harmonia 2014/14/M/ST2/00428, Opus 2014/13/B/ST2/02543, 2014/15/B/ST2/03998, and 2015/19/B/ST2/02861, Sonata-bis 2012/07/E/ST2/01406; the National Priorities Research Program by Qatar National Research Fund; the Ministry of Science and Higher Education, project no. 
0723-2020-0041 (Russia); the Programa Estatal de Fomento de la Investigaci{\'o}n Cient{\'i}fica y T{\'e}cnica de Excelencia Mar\'{\i}a de Maeztu, grant MDM-2015-0509 and the Programa Severo Ochoa del Principado de Asturias; the Thalis and Aristeia programs cofinanced by EU-ESF and the Greek NSRF; the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University and the Chulalongkorn Academic into Its 2nd Century Project Advancement Project (Thailand); the Kavli Foundation; the Nvidia Corporation; the SuperMicro Corporation; the Welch Foundation, contract C-1845; and the Weston Havens Foundation (USA).
\end{acknowledgments}
\section{Introduction}
Unmanned Aerial Vehicles (UAVs), also called \emph{drones}, are enabling a wide range of applications in smart cities~\cite{mohammed2014uavs}, such as traffic monitoring~\cite{kanistras2013survey}, construction surveys~\cite{george2019towards}, package delivery~\cite{sorbelli2020energy}, localization~\cite{sorbelli2020measurement}, and disaster (including COVID-19) management~\cite{costa2020covid}, assisted by 5G wireless roll-out~\cite{gapeyenko2018flexible}.
The mobility, agility, and hovering capabilities of drones allow them to rapidly fly to points of interest (i.e., \emph{waypoints}) in the city to accomplish specific \emph{activities}.
Usually, such activities involve hovering and recording a scene using the drone's camera, and analyzing the videos to make decisions.
Advancements of computer vision algorithms and \emph{Deep Neural Networks} (DNNs) enable video analytics to be performed over such recordings for automated decision-making.
Typically, such analyses are performed once the recordings are transferred to a ground station (GS) after the drones land. In-flight transfer of videos to a GS is limited by the intermittent bandwidth of current communication technologies.
However, certain activities may require low-latency analysis and decisions, as soon as the video is captured at a location. Hence, the on-board \emph{edge computing} capability~\cite{jung2018perception}
available on commercial drones can be leveraged to process the recorded videos, and quickly report concise results to the GS over 3/4/5G wireless networks~\cite{zeng2019accessing}.
Since the transferred results are brief and the \emph{on-board} processing times dominate, we ignore communication constraints like data rate, latency, and reliability that are affected by the UAV's altitude, antenna envelope, etc.
UAVs are \emph{energy-constrained vehicles} with limited battery capacity, and commercial drones can currently fly for less than an hour. The flying distance between waypoints will affect the number of activities that can be completed in one \emph{trip} on a full battery. Besides hovering and recording videos at waypoints, performing edge analytics also consumes energy. So, the drone's battery capacity should be judiciously managed for the flying, hovering and computing tasks. Nevertheless, once a drone lands, its exhausted battery can be quickly replaced with a full one, to be ready for a new trip.
This paper examines how a \emph{UAV fleet operator} in a city can plan \emph{missions} for a captive set of drones to accomplish activities periodically provided by the users. An \emph{activity} involves visiting a waypoint, hovering and capturing video at that location for a specific time period, and optionally performing on-board analytics on the captured data. Activities also offer \emph{utility} scores depending on how they are handled. The novel problem we propose here is for the fleet operator to \emph{co-schedule flight routing among waypoints \underline{and} on-board computation so that the drones complete (a subset of) the provided activities, within the energy and computation constraints of each drone, while maximizing the total utility.}
Existing works have examined routing of one or more drones for
capturing and relaying data to the backend~\cite{motlagh2019energy}, off-loading computations from mobile devices~\cite{hu2019uav}, and cooperative video surveillance~\cite{trotta2018uavs}.
There also exists literature on scheduling tasks for edge computing that are compute- and energy-aware, operate on distributed edge resources, and consider deadlines and device reliability~\cite{meng2019dedas}. However, none of these examine co-scheduling a fleet of physical drones and digital applications on them to meet the objective, while efficiently managing the energy capacity to maximize utility.
Specifically, our \emph{Mission Scheduling Problem (MSP)} combines elements of the \emph{Vehicle Routing Problem (VRP)}~\cite{clarke1964scheduling}, which generalizes the well known Traveling Salesman Problem (TSP) to find optimal routes for a set of vehicles and customers~\cite{toth2002vehicle}, and the \emph{Job-shop Scheduling Problem (JSP)}~\cite{manne1960job} for mapping jobs of different execution duration to the available resources, which is often used for parallel scheduling of computing tasks to multiprocessors~\cite{kwok1999static}.
We make the following contributions in this paper.
\begin{itemize}
\item We characterize the system and application model, and formally define the \textit{Mission Scheduling Problem (\prob)} to co-schedule routes and analytics for a fleet of drones, maximizing the obtained utility (Sections~\ref{sec:model} and~\ref{sec:prob-def}).
\item We prove that \prob is \textit{NP-hard}, and optimally solve it using a \textit{mixed integer linear programming (MILP)} design, \algopt, which is feasible for small inputs (Section~\ref{sec:algorithms:opt}).
\item We design \textit{two time-efficient heuristic algorithms, \algjsc and \algvrc,} that solve the MSP for arbitrary-sized inputs, and offer complexity bounds for their execution (Section~\ref{sec:algorithms:appox}).
\item We \textit{evaluate and analyze} the utility and scheduling runtime trade-offs for these three algorithms, for diverse drone workloads based on real drone traces (Section~\ref{sec:evaluation}).
\end{itemize}
\section{Related Work}\label{sec:related}
This section reviews literature on vehicle routing and job-shop scheduling, contrasting them with MSP and our solutions.
\subsection{Vehicle Routing Problem (VRP)}
VRP generalizes the TSP to multiple salespersons~\cite{clarke1964scheduling} and is NP-hard~\cite{lenstra1981complexity}.
This problem has had several extensions to handle realistic scenarios, such as temporal constraints that impose deliveries only at specific time-windows~\cite{desaulniers2016exact}, capacity constraints on vehicle payloads~\cite{uchoa2017new}, multiple trips for vehicles~\cite{cattaruzza2016vehicle}, profit per vehicle~\cite{stavropoulou2019vehicle} and traffic congestion~\cite{gayialis2018developing}.
VRP has also been adapted for route planning for a fleet of ships~\cite{fagerholt1999optimal}, and for drone assisted delivery of goods~\cite{khoufi2019survey}.
In~\cite{motlagh2019energy}, the scheduling of \emph{events} performed by UAVs at specific locations is studied, involving data sensing/processing and communication with the GS. The goal is to minimize the drone's energy consumption and operation time, while accounting for factors like wind and temperature that may affect the route and execution time.
While that work combines sensing and processing into one monolithic event, we treat them as independent tasks that need to be co-scheduled. Also, they minimize operating time and energy, whereas we maximize the utility of tasks performed within a time and energy budget.
In~\cite{hu2019uav}, the use of UAVs is explored to off-load computing from users' mobile devices and to relay data between mobile devices and the GS. The authors consider the drones' trajectory, bandwidth, and computing optimizations in an iterative manner, aiming to minimize the energy consumption of the drones and mobile devices; the approach is validated through simulation with four mobile devices. We instead consider a more practical problem for a fleet of drones with possibly hundreds of locations to visit and on-board computing tasks to perform.
Trotta et al.~\cite{trotta2018uavs} propose a novel architecture for energy-efficient video surveillance of points of interest (POIs) in a city by drones. The UAVs use bus rooftops for re-charging and being transported to the next POI based on known bus routes. Drones also act as relays for other drones capturing videos. The mapping of drones to bus routes is formulated as an MILP problem and a TSP-based heuristic is proposed. Unlike ours, their goal is not to schedule and process data on-board the drone. Similarly, we do not examine any data off-loading from the drone, nor any piggy-backing mechanisms.
\subsection{Job-shop Scheduling Problem (JSP)}
Scheduling of computing tasks on drones is closely aligned with scheduling tasks on edge and fog devices~\cite{varshney2020characterizing}, and more broadly with parallel workload scheduling~\cite{kwok1999static} and JSP~\cite{manne1960job}.
In~\cite{meng2019dedas}, an online algorithm is proposed for deadline-aware task scheduling for edge computing. It highlights that workload scheduling on the edge has several dimensions, and it jointly optimizes networking and computing to yield the best possible schedule.
Feng et al.~\cite{feng2018mobile} propose a framework for cooperative edge computing on autonomous road vehicles, which aims to increase their decentralized computational capabilities and task execution performance.
Others~\cite{li2019joint} combine optimal placement of data blocks with optimal task scheduling to reduce computation delay and response time for the submitted tasks while improving user experience in edge computing.
In contrast, we co-schedule UAV routing and edge computing.
There exist works that explore task scheduling for mobile clients, and off-load computing to another nearby edge or fog resource. These may be categorized based on their use of \emph{predictable} or \emph{unpredictable} mobility models. In~\cite{ning2019mobile}, the mobility of a vehicle is predicted and used to select the road-side edge computing unit to which the computation is off-loaded. Serendipity~\cite{shi2012serendipity} takes an alternate view and assumes that mobile edge devices interact with each other intermittently and at random. This makes it challenging to determine if tasks should be off-loaded to another proximate device for reliable completion. The problem we solve is complementary and does not involve off-loading. The possible waypoints are known ahead, and we perform predictable UAV route planning and scheduling of the computing locally on the edge.
Scheduling on the energy-constrained edge has also been investigated by Zhang et al.~\cite{zhang2017energy}, where an energy-aware off-loading scheme is proposed to jointly optimize communication and computation resource allocation on the edge, and to limit latency. Our proposed problem also considers the energy for the drone flight while meeting deadlines for on-board computing.
\section{Models and Assumptions}\label{sec:model}
This section introduces the UAV system model, application model, and utility model along with underlying assumptions.
\begin{figure*}[t]
\centering
\def\svgwidth{\textwidth}
\input{figures/msp_big_picture.pdf_tex}
\caption{Sample MSP scenario. a) shows a city with the depot ($\ensuremath{\widehat{\lambda}}\xspace$); 6 waypoints to visit ($\lambda_i$) with some utility; and possible trip routes for drones ($R^i_j$). b) has the corresponding 6 activities ($\alpha_i$) with data capture duration (shaded) and compute deadline (vertical line) and the two available drones.}
\label{fig:msp_big_picture}
\end{figure*}
\subsection{UAV System Model}
Let $\ensuremath{\widehat{\lambda}}\xspace=(0, 0, 0)$ be the \emph{location} of a UAV depot in the city (see Figure~\ref{fig:msp_big_picture}, left) centered at the origin of a 3D Cartesian coordinate system.
Let $D = \{d_1, \ldots, d_m\}$ be the set of $m$ available drones. For simplicity, we assume that all the drones are homogeneous. Each drone has a camera for recording videos, which is subsequently processed. This processing can be done using the on-board computing, or done offline once the drone lands (which is outside the scope of our problem). The on-board \emph{processing speed} is $\pi$ floating point operations per second (FLOPS). For simplicity, this is taken as cumulative across CPUs and GPUs on the drone, and this capacity is orthogonal to any computation done for navigation.
The battery on a drone has a fixed \emph{energy capacity} $E$, which is used both for flying and for on-board computation.
The drone's energy consumption has three components --
\emph{flying}, \emph{hovering} and \emph{computing}.
Let $\epsilon^f$ be the energy required for flying for a unit time duration at a constant energy-efficient speed $s$ within the Cartesian space;
let $\epsilon^h$ be the energy for hovering for a unit time duration;
and let $\epsilon^c$ be the energy for performing computation for a unit time duration.
For simplicity, we ignore the energy for video capture since it is negligible in practice.
Also, a drone that returns to the depot can swap in a full battery and immediately start a new trip.
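The per-trip energy budget above reduces to a simple linear check; a minimal sketch, with illustrative names for the unit-time consumption rates $\epsilon^f$, $\epsilon^h$, $\epsilon^c$ and the capacity $E$ (none of these identifiers come from the paper):

```python
def trip_energy(fly_time, hover_time, compute_time, eps_f, eps_h, eps_c):
    """Energy drawn in one trip: flying + hovering + on-board computing.
    Video-capture energy is ignored, as in the model above."""
    return eps_f * fly_time + eps_h * hover_time + eps_c * compute_time

def trip_feasible(fly_time, hover_time, compute_time,
                  eps_f, eps_h, eps_c, capacity):
    """A trip is feasible only if it fits within one battery charge E."""
    return trip_energy(fly_time, hover_time, compute_time,
                       eps_f, eps_h, eps_c) <= capacity
```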
\subsection{Application Model}
Let $A = ( \alpha_1, \ldots, \alpha_n )$ be the set of $n$ activities to be performed starting from time $\widehat{t}=0$, where each {\em activity} $\alpha_i$ is given by the tuple $\langle \lambda_i, t_i, \bar{t}_i, \kappa_i, \delta_i, \gamma_i, \bar{\gamma}_i, \bar{\bar{\gamma}}_i \rangle$. Here, $\lambda_i=(x_i, y_i, z_i)$ is the waypoint \emph{location} coordinates where the video data for that activity has to be captured by the drone, relative to the depot location $\ensuremath{\widehat{\lambda}}\xspace$. The \emph{starting and ending times} for performing the \emph{data capture task} are $t_i$ and $\bar{t}_i$. The \emph{compute requirements} for subsequently processing all of the captured data is $\kappa_i$ floating point operations.
Lastly, $\delta_i$ is the \emph{time deadline} by which the \emph{computation task} should be completed on the drone to derive on-time utility of processing, while $\gamma_i, \bar{\gamma}_i$, and $\bar{\bar{\gamma}}_i$ are the \emph{data capture}, \emph{on-time processing} and \emph{on-board processing utility} values that are gained for completing the activity. These are described in the next sub-section.
The computation may be performed incrementally on subsets of the video data, as soon as they are captured. This is common for analytics over constrained resources~\cite{bianco2018benchmark}.
Specifically, for an activity $\alpha_{i}$, the data captured between $(\bar{t}_{i} - t_{i})$ is divided into \emph{batches} of a fixed duration $\beta$, with the sequence of batches given by $B_{i} = (b_{i}^1, \ldots, b_{i}^{q_i})$, where $q_i = |B_i| = \big\lceil \frac{\bar{t}_{i} - t_{i}}{\beta} \big\rceil$. The computational cost to process each batch is $\kappa_{i}^k = \frac{\kappa_{i}}{q_i}$ floating-point operations, and is constant for all batches of an activity.
So, the \emph{processing time} for the batch, given the processing speed $\pi$ for a drone, is $\rho_{i}^k = \big\lceil\kappa_{i}^k \cdot \frac{1}{\pi}\big\rceil$; for simplicity, we discretize all time-units into integers.
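The batching arithmetic above can be sketched directly; a minimal illustration (variable names are ours, not the paper's):

```python
import math

def batch_params(t_start, t_end, kappa, beta, pi):
    """Split an activity's capture window into fixed-duration batches.
    Returns (q, kappa_k, rho_k): the batch count, FLOPs per batch, and
    the integer processing time per batch at on-board speed pi (FLOPS)."""
    q = math.ceil((t_end - t_start) / beta)   # q_i = ceil((t_bar - t)/beta)
    kappa_k = kappa / q                       # kappa_i^k = kappa_i / q_i
    rho_k = math.ceil(kappa_k / pi)           # rho_i^k = ceil(kappa_i^k / pi)
    return q, kappa_k, rho_k
```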
We make some simplifying assumptions. Only one batch may be executed at a time on-board a drone and it should run to completion before scheduling another. There is no concurrency, pre-emption or check-pointing. The data capture for an activity's batch may overlap with the computation of a previous batch of the same or a different activity.
All batches for a single activity should be executed in sequence, i.e., complete processing $b_{i}^{k}$ before processing $b_{i}^{k+1}$. Once a batch is processed, its compact results are immediately and deterministically communicated to the GS.
\subsection{Utility Model}
The primary goal of the drone is to capture videos at the various activity locations for the specified duration. This is a \emph{necessary} condition for an activity to be successful. We define this as the \emph{data capture utility ($\gamma_{i}$)} accrued by a drone for an activity $\alpha_{i}$.
The secondary goal is to opportunistically process the captured data using the on-board computing on the drone. Here, we have two scenarios. Some activities may not be time sensitive, and performing on-board computing is just to reduce the costs for offline computing. Here, processing the data captured by an activity using the drone's computing resources will provide an \emph{on-board processing utility ($\bar{\gamma}_{i}$)}. Other activities may be time-sensitive and have a \emph{soft-deadline} $\delta_i$ for completing the processing. For these, if we process its captured data on the drone by this deadline, we receive an extra \emph{on-time processing utility ($\bar{\bar{\gamma}}_{i}$)}. The processing utilities accrue \emph{pro rata}, for each batch of the activity completed.
\section{Problem Formulation}\label{sec:prob-def}
The Mission Scheduling Problem (\prob) is summarized as: \emph{Given a UAV depot in a city with a fleet of captive drones, and a set of observation and computing activities to be performed at locations in the city, each within a given time window and with associated utilities, the goal is to co-schedule the drones onto mission routes and the computation to the drones, within the energy and compute constraints of the drones, such that the total utility achieved is maximized.}
It is formalized below.
\subsection{Mission Scheduling Problem (\prob)}
A UAV fleet operator receives and queues activities. Periodically, a mission schedule is planned to serve some or all the activities using the whole fleet to maximize the utility. There is a fixed cost for operating the captive fleet that we ignore.
Multiple activities can be assigned to the same drone $d_j$ as part of the drone's \emph{mission},
and the same drone $d_j$ can perform multiple \emph{trips} from the depot for a mission.
The \emph{mission activities} for the $r^{th}$ trip of a drone $d_j$ is the ordered sequence $A^r_j = ( \alpha^r_{j_1}, \ldots, \alpha^r_{j_n} ) \subseteq A$ where $\alpha^r_{j_x} \in A$, $j_n \le n$, and no activity appears twice within a mission. Further, we have $\alpha^r_{j_x} \prec \alpha^r_{j_{x+1}}$, i.e., the observation start and end times of an activity in the mission sequence fully precede those of the next activity in it, $\bar{t}^r_{j_{x}} \le t^r_{j_{x+1}} $. Also, $A^x_j \cap A^y_k = \varnothing~\forall j,k,x,y$ to ensure that an activity is mapped to just one drone. Depending on the feasibility and utility, some activities may not be part of any mission and are dropped, i.e., $\sum_{j} \sum_{r} |A^r_j| \leq n$.
The \emph{route} for the $r^{th}$ trip of drone $d_j$ is given by $R^r_j = ( \ensuremath{\widehat{\lambda}}\xspace, \lambda^r_{j_1}, \ldots, \lambda^r_{j_n}, \ensuremath{\widehat{\lambda}}\xspace )$, where the starting and ending waypoints of the drone are the depot location $\ensuremath{\widehat{\lambda}}\xspace$, and each intermediate location corresponds to the video capture location $\lambda^r_{j_k}$ for the activity $\alpha^r_{j_k}$ in the mission sequence. For uniformity, we denote the first and the last depot location in the route as $\lambda^r_{j_0}$ and $\lambda^r_{j_{n+1}}$, respectively.
Clearly, $|R^r_j| = j_n + 2$.
A drone $d_j$, given the $r^{th}$ trip of its route $R^r_j$, starts at the depot, visits each waypoint in the sequence and returns to the depot, where it may instantly get a fresh battery and start the $(r+1)^{th}$ route.
Let drone $d_j$ leave a waypoint location in its route, $\lambda^r_{j_i}$, at \emph{departure time} $\tau^r_{j_i}$ and reach the next waypoint location, $\lambda^r_{j_{i+1}}$, at \emph{arrival time} $\bar{\tau}^r_{j_{i+1}}$.
Let the function $\mathcal{F}(\lambda_p, \lambda_q)$ give the \emph{flying time} between $\lambda_p$ and $\lambda_q$. Since the drone has a constant flying speed, we have $\bar{\tau}^r_{j_{i+1}} = \tau^r_{j_i} + \mathcal{F}(\lambda^r_{j_i}, \lambda^r_{j_{i+1}})$.
The drone must hover at each waypoint $\lambda^r_{j_i}$ between $t^r_{j_i}$ and $\bar{t}^r_{j_i}$ while recording the video, and it departs the waypoint after this, i.e., $\tau^r_{j_i} = \bar{t}^r_{j_i}$. If the drone arrives at this waypoint at time $\bar{\tau}^r_{j_i}$ before the observation start time $t^r_{j_i}$, it \emph{hovers} there for a duration of $t^r_{j_i} - \bar{\tau}^r_{j_i}$, and then continues hovering during the activity's video capture.
If a drone arrives at $\lambda^r_{j_i}$ after $t^r_{j_i}$, the trip is invalid since the video capture for the activity cannot be conducted for the whole duration. So, $\bar{\tau}^r_{j_i} \le t^r_{j_i} \le \tau^r_{j_i}$.
Also, since the deadline for on-time computation over the captured data is $\delta^r_{j_i}$, we require $\delta^r_{j_i} \ge \bar{t}^r_{j_i}$.
Once the drone finishes capturing video for the last activity in its $r^{th}$ trip, it returns back to the depot location at time $\bar{\tau}^r_{j_{n+1}} = \tau^r_{j_n} + \mathcal{F}(\lambda^r_{j_n}, \widehat{\lambda})$.
Hence, the \emph{total flying time} for a drone $d_j$ for its $r^{th}$ trip is:
\[ f^r_j = \sum_{i=0}^{n} (\bar{\tau}^r_{j_{i+1}} - \tau^r_{j_i}) \]
\noindent
and the \emph{total hover time} for the drone on that trip is:
\[ h^r_j = \sum_{i=1}^{n} (t^r_{j_i} - \bar{\tau}^r_{j_i}) + \sum_{i=1}^{n} (\bar{t}^r_{j_i} - t^r_{j_i}) = \sum_{i=1}^{n} (\bar{t}^r_{j_i} - \bar{\tau}^r_{j_i}) \]
which includes hovering due to early arrival at a waypoint, and hovering during the data capture.
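The timing recurrence and the two totals above can be sketched in a few lines; a minimal illustration assuming $\mathcal{F}$ is Euclidean distance over a constant speed (all identifiers are ours):

```python
import math

def fly_time(p, q, speed):
    """F(p, q): flying time between two 3D waypoints at a constant speed."""
    return math.dist(p, q) / speed

def trip_times(depot, activities, speed):
    """Propagate arrival/departure times along one trip and accumulate
    the total flying and hovering time.
    activities: list of (location, t_start, t_end) tuples in visit order.
    Returns (fly_total, hover_total), or None on a late arrival."""
    loc, depart = depot, 0.0
    fly_total = hover_total = 0.0
    for lam, t_start, t_end in activities:
        leg = fly_time(loc, lam, speed)
        arrive = depart + leg
        if arrive > t_start:            # capture window would be missed
            return None
        fly_total += leg
        hover_total += t_end - arrive   # early-arrival wait + capture time
        loc, depart = lam, t_end        # depart when the capture ends
    fly_total += fly_time(loc, depot, speed)  # return leg to the depot
    return fly_total, hover_total
```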
Let the scheduler assign the \emph{time slot} $[\theta_{j_i}^{k}, \bar{\theta}_{j_i}^{k})$ for executing a batch $b_{j_i}^{k}$ of activity $\alpha_{j_i}$ on drone $d_j$, where $\bar{\theta}_{j_i}^{k} = \theta_{j_i}^{k} + \rho_{i}^{k}$, based on the batch execution time.
We define a completion function for each activity $\alpha_{j_i}$, for the three utility values:
\begin{itemize}
\item The \emph{data capture completion} $u_{j_i} \in \{0, 1\}$ has a value of $1$ if the drone hovers at location $\lambda_{j_i}$ for the entire period from $t_{j_i}$ to $\bar{t}_{j_i}$, and is $0$ otherwise.
\item The \emph{on-board completion} $0.0 \le \bar{u}_{j_i} \le 1.0$ indicates the fraction of batches of that activity that are completed on-board the drone.
Let $\bar{\mu}^k_i=1$ if the batch $b^k_i$ of activity $\alpha_i$ is completed on-board,
and $\bar{\mu}^k_i=0$ if it is not completed on-board the drone. Then, $\bar{u}_{j_i} = \frac{\sum_k \bar{\mu}^k_i}{q_i}$.
\item The \emph{on-time completion} $0.0 \le \bar{\bar{u}}_{j_i} \le 1.0$ gives the fraction of batches of that activity that are fully completed within the deadline.
As before, let $\bar{\bar{\mu}}^k_i=1$ if the batch $b^k_i$ of activity $\alpha_i$ is completed
on-time, i.e., $\bar{\theta}_{i}^{k} \leq \delta_i$, and $\bar{\bar{\mu}}^k_i=0$ otherwise. So, $\bar{\bar{u}}_{j_i} = \frac{\sum_k \bar{\bar{\mu}}^k_i}{q_i}$.
\end{itemize}
The \emph{total utility} for an activity $\alpha_i$ is
$U_i = u_{i} \gamma_i + \bar{u}_{i} \bar{\gamma}_i + \bar{\bar{u}}_{i} \bar{\bar{\gamma}}_i$,
and the \emph{total computation time} of batches on a drone $d_j$ is:
\[c_j = \sum_{\alpha_i \in A} \sum_{k} {(\bar{\mu}^k_{i} + \bar{\bar{\mu}}^k_{i}) \cdot \rho^k_i} \]
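The per-activity utility $U_i$ can be tallied directly from the per-batch completion flags $\bar{\mu}^k_i$ and $\bar{\bar{\mu}}^k_i$. A small illustrative sketch (function and parameter names are hypothetical):

```python
def activity_utility(captured, onboard_flags, ontime_flags,
                     gamma, gamma_ob, gamma_ot):
    """U_i = u*gamma + ubar*gamma_ob + ubarbar*gamma_ot, where the
    fractional completions are averages of the per-batch 0/1 flags
    mu_bar and mu_barbar over the q_i batches of the activity."""
    q = len(onboard_flags)             # q_i: number of batches
    u = 1 if captured else 0           # data capture completion
    u_ob = sum(onboard_flags) / q      # fraction completed on-board
    u_ot = sum(ontime_flags) / q       # fraction completed on-time
    return u * gamma + u_ob * gamma_ob + u_ot * gamma_ot
```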
\subsection{Optimization of \prob}
Based on these, the \textbf{objective} of the optimization is $\arg \max \sum_{\alpha_i \in A} {U_i}$, i.e., assign drones to activity waypoints and activity batches to the drones' computing slots to maximize the utility from data capture, on-board and on-time computation.
These are subject to the following constraints on the execution slot assignments for a batch on a drone:
\[ (t_{j_i} + k \cdot \beta) \leq \theta_{j_i}^{k} ~\qquad~ \bar{\theta}_{j_i}^{k} \leq \theta_{j_i}^{k+1} ~\qquad~ \bar{\theta}_{j_i}^{k} \leq \bar{\tau}_{j_{n+1}}\]
i.e., the data capture for a duration of $\beta$ for the $k^{th}$ batch of the activity is completed before the execution slot of the batch starts; the batches for an activity are executed in sequence; and the execution completes before the drone lands.
Also, there can only be one batch executing at a time on a drone. So $\forall [\theta_{j_p}^{x}, \bar{\theta}_{j_p}^{x})$ and $[\theta_{j_q}^{y}, \bar{\theta}_{j_q}^{y})$ slots assigned to batches $b_p^x$ and $b_q^y$ on drone $d_j$, we have $[\theta_{j_p}^{x}, \bar{\theta}_{j_p}^{x}) \cap [\theta_{j_q}^{y}, \bar{\theta}_{j_q}^{y}) = \varnothing$.
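Since the execution slots are half-open, two slots that merely touch at an endpoint do not conflict, and the pairwise-disjointness requirement reduces to a check over slots sorted by start time. A minimal sketch (illustrative, not our scheduler):

```python
def slots_disjoint(slots):
    """Check that no two half-open execution slots [theta, theta_bar)
    on the same drone overlap; touching endpoints are allowed."""
    ordered = sorted(slots)                 # sort by start time
    return all(a_end <= b_start             # each slot ends before next starts
               for (_, a_end), (b_start, _) in zip(ordered, ordered[1:]))
```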
Lastly, the \emph{energy expended} by drone $d_j$ on the $r^{th}$ trip, to fly, hover and compute, should be within its battery capacity:
\[
E^r_j = f^r_j \epsilon^f + h^r_j \epsilon^h + c^r_j \epsilon^c \le E
\]
\noindent{\em Model Applicability:}
Our novel model can be abstracted to describe diverse applications.
In \emph{entity localization}~\cite{de2015board}, $\bar{\gamma}_{i}=0$ and $\bar{\bar{\gamma}}_{i}>0$ captures the importance of an entity being tracked.
In \emph{traffic monitoring}~\cite{kanistras2013survey} it is useful to have timely insights, appropriately tuning $\bar{\gamma}_{i}$ and $\bar{\bar{\gamma}}_{i}$.
In \emph{construction survey}~\cite{george2019towards} there are no strict time deadlines, so $\bar{\bar{\gamma}}_{i}=0$.
\begin{table*}[t]
\centering
\caption{Constraints for \algopt MILP formulation.}
\label{tab:constraints}
\begin{tabular}{p{0.15cm}|p{9.5cm}|p{7cm}}
\toprule
\bf C. & \bf Expression & \bf Meaning \\
\midrule
$1$ & $\sum_{k \in \mathcal{D}} \sum_{l \in \mathcal{R}} \sum_{j\in \overrightarrow{i}} x_{ij}^{kl} \leq 1,
\qquad \forall i \in \mathcal{V}'$ & The waypoint for an activity $\alpha_i$ is visited only once.\\
$2$ & $\sum_{j \in \overrightarrow{0}} x_{0j}^{kl} - \sum_{j \in \overleftarrow{0}} x_{j0}^{kl} = 0,
\qquad \forall k \in \mathcal{D}, l \in \mathcal{R}$ & A drone trip $l$ starting from the depot must also end there.\\
$3$ & $\sum_{j \in \overrightarrow{0}} x_{0j}^{kl} = 1 \iff \sum_{j \in \overrightarrow{i}} x_{ij}^{kl} = 1,
\qquad \forall i \in \mathcal{V}', k \in \mathcal{D}, l \in \mathcal{R}$ & A drone $k$ must visit at least one waypoint on each trip $l$.\\
$4$ & $\sum_{i \in \overleftarrow{j}} x_{ij}^{kl} - \sum_{i \in \overrightarrow{j}} x_{ji}^{kl} = 0,
\qquad \forall k \in \mathcal{D}, j \in \mathcal{V}', l \in \mathcal{R}$ & A drone $k$ visiting waypoint $j$ must also fly out from there.\\
$5$ & $\left(t_j - \mathcal{F}_{0j}\right) \cdot \sum_{k \in \mathcal{D}}\sum_{l \in \mathcal{R}} x_{0j}^{kl} \ge 0,
\qquad \forall j \in \mathcal{V}'$ & Any drone flying to waypoint $j$ from the depot must reach before its observation start time $t_j$.\\
$6$ & $ (t_j - \bar{t}_i - \mathcal{F}_{ij}) \cdot \sum_{k \in \mathcal{D}}\sum_{l \in \mathcal{R}} x_{ij}^{kl} \ge 0,
\qquad \forall i \in \mathcal{V}', j \in \overrightarrow{i}$ & Any drone flying to waypoint $j$ from $i$ must reach before its observation start time $t_j$.\\
${7}$ & $\bar{\tau}^l_{k_{n+1}} = \sum_{i \in \mathcal{V}'} x_{i0}^{kl} \cdot (\bar{t}_i + \mathcal{F}_{i0}),
\qquad \forall k \in \mathcal{D}, l \in \mathcal{R}$ & Decides the landing time of drone $k$ at the depot after trip $l$.\\
${8}$ & $\bar{\tau}^l_{k_{n+1}} \le \tau_{\max},
\quad \forall k \in \mathcal{D}, l \in \mathcal{R}$ & Depot landing times for all trips are within the maximum time.\\
\hline
$9$ & $t_i + (g + 1) \cdot \beta \le \theta^g_i,
\qquad \forall i \in \mathcal{V}', g \in \mathcal{B}_i$ & Batch $g$ of activity $\alpha_i$ must be observed before it is processed. \\
${10}$ & $\bar{\theta}^g_i < \theta^{g+1}_i,
\qquad \forall i \in \mathcal{V}', g \in \mathcal{B}_i$ & Processing of batch $g$ of activity $\alpha_i$ must precede batch $g+1$. \\
${11}$ & $\sum_{j \in \overrightarrow{i}} x_{ij}^{kl} + \sum_{b \in \overrightarrow{a}} x_{ab}^{kl} - 1 \leq w^{gh}_{ia} + w^{hg}_{ai},
\qquad \forall i,a \in \mathcal{V}', i < a, g \in \mathcal{B}_i, h \in \mathcal{B}_a, k \in \mathcal{D}, l \in \mathcal{R}$ & \multirow{2}{7cm}{Compute time slots of two batches $g$ and $h$ from activities $\alpha_i$ and $\alpha_a$ on the same drone $k$ and trip $l$ should not overlap \cite{manne1960job}.} \\
${12}$ & $\bar{\theta}^g_i - \theta^h_a \leq M \cdot (1 - w^{gh}_{ia}),
\qquad \forall i,a \in \mathcal{V}, i \neq a, g \in \mathcal{B}_i, h \in \mathcal{B}_a $ & \\
${13}$ & $y_{ig}^{kl} = 1 \Rightarrow \bar{\theta}^g_i + M \left(1-\sum_{j \in \overrightarrow{i}} x_{ij}^{kl}\right) \le \delta_i, ~~~~~\forall i \in \mathcal{V}', g \in \mathcal{B}_i, k \in \mathcal{D}, l \in \mathcal{R}$ & Decision variable for batches that complete before deadline.\\
${14}$ & $z_{ig}^{kl} = 1 \Rightarrow \bar{\theta}^g_i + M\left(1 - \sum_{j \in \overrightarrow{i}} x_{ij}^{kl}\right) \le \bar{\tau}^l_{k_{n+1}},
\forall i \in \mathcal{V}', g \in \mathcal{B}_i, k \in \mathcal{D}, l \in \mathcal{R}$ & Decision variable for batches that complete before landing.\\
\hline
${15}$ & $\sum_{i \in \mathcal{V}} \Big( \sum_{j \in \overrightarrow{i}} \left( x_{ij}^{kl} \cdot \mathcal{F}_{ij} \cdot \epsilon^f \right) ~+~ \allowbreak
\sum_{g \in \mathcal{B}_i} \left(z_{ik}^{lg} \cdot \kappa^g_i \cdot \epsilon^c \right) ~+~ \allowbreak
{\qquad \sum_{j \in \overrightarrow{i}} \left( x_{ij}^{kl} \cdot (\bar{t}_j \!-\! (\bar{t}_i + \mathcal{F}_{ij})) \cdot \epsilon^h \right)} \Big)
\le E,
\quad \forall k \in \mathcal{D}, l \in \mathcal{R}$ & Sum of energy consumed for flying, hovering and computing on trip $l$ of drone $k$ should be within the battery capacity. \\
\bottomrule
\end{tabular}
\end{table*}
\section{Optimal Solution for \prob}
\label{sec:algorithms:opt}
In this section, we prove that \prob is NP-hard, and we define an optimal, but computationally slow, algorithm called \textsc{Optimal Mission Scheduling}\xspace (\algopt) based on MILP.
\subsection{NP-hardness of \prob}
As discussed earlier, the \prob combines elements of the VRP and the JSP in assigning routes and batches to drones, for maximizing the overall utility, subject to energy constraints.
\begin{theorem}
\prob is NP-hard.
\end{theorem}
\begin{proof}
The VRP is NP-hard~\cite{lenstra1981complexity}.
In addition, \prob considers multiple-trips, time-windows, energy-constraints, and utilities.
The VRP variant with multiple-trips (MTVRP), which considers a maximum travel time horizon $T_{h}$, is NP-hard. Any instance of VRP can be reduced in polynomial time to MTVRP by fixing the number of vehicles to the number of waypoints, $m = n$, and setting the time horizon $T_{h} = \sum_{e \in \mathcal{E}} \mathcal{F}(e)$, where $\mathcal{E}$ is the set of edges and $\mathcal{F}(e)$ is the flying time for traversing an edge~\cite{olivera2007adaptive}, and limiting the number of trips to one.
The VRP variant with time-windows (TWVRP), which limits the start and end time for visiting a vertex, $[t_i, \bar{t}_i)$, is NP-hard. Any instance of VRP can be reduced in polynomial time to TWVRP by just setting $t_i = 0$ and $\bar{t}_i = + \infty$~\cite{toth2002vehicle}.
Clearly, a VRP variant with energy-constrained vehicles is still NP-hard, by just relaxing those constraints to match VRP.
In the above VRP variants, the goal is only to minimize the costs. But \prob aims at maximizing the utility while bounding the energy and compute budget.
In literature, the VRP variant with profits (PVRP) is NP-hard~\cite{cattaruzza2016vehicle} since any instance of MTVRP can be reduced in polynomial time to PVRP by just setting all vertices to have the same unit-profit.
Moreover, \prob has to deal with scheduling of batches for maximizing the profit.
The original JSP is NP-hard~\cite{graham1966bounds}. So, any variant which introduces constraints is again NP-hard by a simple reduction, by relaxing those constraints, to JSP.
As \prob is a variant of VRP and JSP, it is NP-hard too.
\end{proof}
\subsection{The \algopt Algorithm}
The \textsc{Optimal Mission Scheduling}\xspace (\algopt) algorithm offers an optimal solution to \prob
by modeling it as a multi-commodity flow problem (MCF), similar to~\cite{zmazek2006multiple, trotta2018uavs}.
We reformulate the \prob definition as an MILP formulation.
The paths in the city are modeled as a \emph{complete graph}, $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, over the vertex set $\mathcal{V} = \{0, 1, \ldots, n\}$, comprising the $n$ activity waypoints and the depot vertex $0$ at \ensuremath{\widehat{\lambda}}\xspace.
Let $\overrightarrow{i}$ and $\overleftarrow{i}$ be the set of \emph{out-edges} and \emph{in-edges} of a vertex $i$, and $\mathcal{V}' = \mathcal{V} \setminus \{ 0 \}$ be the set of all waypoint vertices.
We enumerate the $m$ drones as $\mathcal{D} = \{1, \ldots, m\}$.
Let $\tau_{\max}$ be the maximum time for completing all the missions, and
$r_{\max}$ the maximum trips a drone can do. Let $\mathcal{R} = \{1, \ldots, r_{\max}\}$ be the possible trips.
Let $x_{ij}^{kl} \in \{0,1\}$ be a decision variable that equals $1$ if the drone $k \in \mathcal{D}$ in its trip $l \in \mathcal{R}$ traverses the edge $(i,j)$, and $0$ otherwise. If $x_{ij}^{kl} = 1$ for $i \in \mathcal{V}'$, then the waypoint for activity $\alpha_i$ was visited by drone $k$ on trip $l$.
Let $\mathcal{B}_i = \{0, \ldots, q_i\}$ be the set of batches of activity $\alpha_i$.
Let $w^{gh}_{ia}$ be a binary decision variable used to linearize the batch computation whose value is $1$ if batch $b^g_i$ is processed before $b^h_a$, and $0$ otherwise~\cite{manne1960job}.
Let $y_{ig}^{kl}$ be a decision variable that equals $1$ if the drone $k \in \mathcal{D}$ in trip $l \in \mathcal{R}$ processes the batch $g$ of activity $\alpha_i$ within its deadline $\delta_i$, and $0$ otherwise; and similarly, $z_{ig}^{kl}$ equals $1$ if the batch is processed before the drone completes the trip and lands, and $0$ otherwise.
Let the per batch utility for on-board completion be $\bar{\Gamma}_i = \frac{\bar{\gamma}_i}{q_i}$, and on-time completion be $\bar{\bar{\Gamma}}_i = \frac{\bar{\bar{\gamma}}_i}{q_i}$, for activity $\alpha_i$.
Finally, let $M$ be a sufficiently large constant.
Using these, the MILP objective is:
{\small
\begin{align}
\max \sum_{k \in \mathcal{D}} \sum_{l \in \mathcal{R}} \sum_{i \in \mathcal{V}'} \bigg( \sum_{j \in \overrightarrow{i}} x_{ij}^{kl} \cdot \gamma_i + \sum_{g \in \mathcal{B}_i} \big( y_{ig}^{kl} \cdot \bar{\Gamma}_i + z_{ig}^{kl} \cdot \bar{\bar{\Gamma}}_i \big) \bigg) \label{eq:main}
\end{align}
}
\noindent subject to the constraints listed in Table~\ref{tab:constraints}.
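Constraints $11$ and $12$ in Table~\ref{tab:constraints} follow Manne's disjunctive big-M pattern~\cite{manne1960job}: the binary $w^{gh}_{ia}$ selects an ordering of two batches, and the big-M term deactivates the precedence inequality for the unselected order. The Python sketch below numerically illustrates this pattern for a single pair of slots (it is not part of the MILP; `M` is an arbitrary large constant and the names are ours):

```python
def manne_pair_ok(slot_a, slot_b, w_ab, M=10_000):
    """Manne's disjunctive big-M constraints for two batch execution
    slots (theta, theta_bar): if w_ab = 1, batch a must finish before
    batch b starts; if w_ab = 0, the first inequality becomes vacuous
    and the second forces b to precede a."""
    (sa, ea), (sb, eb) = slot_a, slot_b
    c1 = ea - sb <= M * (1 - w_ab)   # a before b, active when w_ab = 1
    c2 = eb - sa <= M * w_ab         # b before a, active when w_ab = 0
    return c1 and c2
```

For any overlapping pair, neither value of the ordering variable satisfies both inequalities, which is exactly how the MILP rules out concurrent batches on one drone.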
\section{Heuristic Algorithms for \prob}
\label{sec:algorithms:appox}
Since \prob is NP-hard, \algopt is tractable only for small-sized inputs.
So, time-efficient but sub-optimal algorithms are necessary for larger-sized inputs.
In this section, we propose two heuristic algorithms, called \textsc{Job Scheduling Centric}\xspace (\algjsc) and \textsc{Vehicle Routing Centric}\xspace (\algvrc).
\subsection{The \algjsc Algorithm}
The \textsc{Job Scheduling Centric}\xspace (\algjsc) algorithm aims to find near-optimal scheduling of batches while ignoring the optimizations of routing to conserve energy.
\algjsc is split into two phases: \emph{clustering} and \emph{scheduling}.
\subsubsection{Clustering Phase}
First, we use the ST-DBSCAN algorithm~\cite{birant2007st} to efficiently find spatio-temporal clusters of activities.
It returns a set of clusters $\mathbb{C}$ such that for activities within a cluster $C_i \in \mathbb{C}$, certain spatial and temporal distance thresholds are met.
Drones are then allocated to clusters depending on their availability.
For each cluster $C_i$, let $T_i^U = \max_{\alpha_j \in C_i}{(\bar{t}_j + \mathcal{F}(\lambda_j, \ensuremath{\widehat{\lambda}}\xspace))}$ be the upper bound for the \emph{latest landing time} for a drone servicing activities in $C_i$;
analogously, let $T_i^L = \min_{\alpha_j \in C_i}{(t_j-\mathcal{F}(\ensuremath{\widehat{\lambda}}\xspace, \lambda_j))}$
be the lower bound for the \emph{earliest take-off time}.
Then, all the temporal windows $[T_i^L, T_i^U]$ for each $C_i \in \mathbb{C}$ are sorted with respect to $T_i^L$.
Recalling that there are $m$ drones available at $\widehat{t}=0$, they are proportionally allocated to clusters depending on the current availability, which in turn depends on the temporal window.
So, $c_1 = \frac{m}{n}\cdot |C_1|$ drones are allocated to $C_1$ at time $T_1^L$ and released at time $T_1^U$; $c_2 = \frac{m-c_1}{n} \cdot |C_2|$ allocated to $C_2$ from $T_2^L$ to $T_2^U$ (assuming $T_2^L < T_1^U$), and so on.
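The proportional allocation above can be sketched as follows. Rounding to whole drones and granting at least one drone per cluster are our illustrative assumptions (the text leaves them unspecified), and the cluster records use hypothetical keys `TL`, `TU`, and `size`:

```python
def allocate_drones(clusters, m, n):
    """Proportionally allocate m drones over n activities to clusters,
    processed in order of earliest take-off time T^L; drones held by an
    earlier cluster whose window overlaps the next one stay unavailable."""
    alloc, busy, avail = [], [], m
    for c in sorted(clusters, key=lambda c: c["TL"]):
        # release drones whose cluster window ended before this one starts
        for entry in [b for b in busy if b[0] <= c["TL"]]:
            avail += entry[1]
            busy.remove(entry)
        give = min(avail, max(1, round(avail * c["size"] / n)))
        alloc.append(give)
        avail -= give
        busy.append((c["TU"], give))   # held until the window ends at T^U
    return alloc
```

With $m=10$ drones, $n=20$ activities, and two overlapping clusters of sizes $10$ and $8$, this yields the $c_1 = 5$, $c_2 = 2$ pattern described above.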
\subsubsection{Scheduling Phase}
Here, the activities are assigned to drones.
The \emph{feasibility} of assigning $\alpha_{i}$ to $d_j$, is tested by checking if the required flying and hovering energy is enough to visit $A_j \cup \{\alpha_i\}$;
here, we ignore the batch processing energy.
If feasible, the drone can update its take-off and landing times accordingly, and then schedule the subset of batches $\widehat{B_i} \subseteq B_i$ within the energy requirements.
Assignments are done in two steps: \emph{default assignment}\xspace and \algtas.
\noindent {\textbf{Default Assignment.}}
For each $b_i^k \in \widehat{B_i}$, let $P_{b^k_i} = [t_i + k \beta, \delta_i)$ be the \emph{preferred interval}; $Q_{b^k_i} \subseteq P_{b^k_i}$ be the \emph{available preferred sub-intervals}, i.e., the set of periods where no other batch is scheduled; and $S_{b^k_i} = [\delta_i, \bar{\tau}_{j_{n+1}})$ be the \emph{schedulable interval}, which exceeds the deadline but completes on-board.
Clearly, $P_{b^k_i} \cap S_{b^k_i} = \varnothing$.
The \emph{default schedule} determines a suitable time-slot for $b_i^k$.
If $Q_{b^k_i} \neq \varnothing$, $b_i^k$ is \emph{first-fit} scheduled within intervals of $Q_{b^k_i}$; else, if $Q_{b^k_i} = \varnothing$, the same first-fit policy is applied over intervals of $S_{b^k_i}$.
If $b_i^k$ cannot be scheduled even in $S_{b^k_i}$, it remains unscheduled.
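A minimal sketch of the first-fit policy over the free sub-intervals of $Q$, falling back to the schedulable interval $S$ past the deadline (intervals are half-open; all names are illustrative):

```python
def first_fit(free_intervals, duration):
    """Return the first [start, start + duration) slot that fits inside
    one of the free intervals, or None if no interval is long enough."""
    for start, end in free_intervals:
        if end - start >= duration:
            return (start, start + duration)
    return None

def default_assign(preferred_free, schedulable_free, duration):
    """Default assignment: first-fit in the available preferred
    sub-intervals Q; else first-fit in the schedulable interval S."""
    slot = first_fit(preferred_free, duration)
    return slot if slot else first_fit(schedulable_free, duration)
```

If `default_assign` returns `None`, the batch remains unscheduled, matching the behavior described above.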
\noindent {\textbf{Test and Swap Assignment.}}
If the \emph{default assignment}\xspace has batches that \emph{violate their deadline}, i.e., scheduled in $S$ but not in $P$, we use the \algtas to improve the schedule.
Let $P^+_i = \bigcup_k {P_{b^k_i}}$ be the union of the preferred intervals forming the \emph{total preferred interval} for an activity $\alpha_i$.
Each batch $b^k_i$ is tested for violating its deadline.
If it violates, then batches $b^h_j$ from other activities already scheduled in $P^+_i$ are identified and tested if they too violate their deadline.
If so, $b^h_j$ is moved to the next available slot in $S_{b^h_j}$, and its old time slot given to $b^k_i$.
If $b^h_j$ is in its \emph{preferred interval} but has more slots available in this interval, then $b^h_j$ is moved to another free slot in $P_{b^h_j}$ and $b^k_i$ assigned to the slot that is freed.
Else, the current configuration does not contain violations, except for the current batch $b^k_i$, but all available slots are occupied.
So, the utility for $b^k_i$ is compared with another $b^h_j$ in $P^+_i$,
and the batch with a higher utility gets this slot.
\subsubsection{The Core of \algjsc}
The \algjsc algorithm works as follows (Algorithm~\ref{alg:algjsc}).
After the initial \emph{clustering phase}, activities are tested for their feasibility.
If so, the \emph{default assignment}\xspace is initially evaluated in terms of total utility.
If this creates a deadline violation, the \algtas is performed, and the best scheduling is applied.
\begin{algorithm}[htbp]
$\mathbb{C} \gets $ \emph{clustering phase}\;\label{code:2-preproc}
\For {$C_k \in \mathbb{C}$} {
\For {$\alpha_i \in C_k$} {
\For {$d_j$ assigned to $C_k$} {
\If {$A_j \cup \{\alpha_i\}$ is feasible} {\label{code:2-feasibility}
apply best scheduling among \emph{default} and \algtas on $\widehat{B_i}$\;\label{code:2-best}
}
}
}
}
\caption{$\algjsc(A, D)$}
\label{alg:algjsc}
\end{algorithm}
\subsubsection{Time Complexity of \algjsc}
ST-DBSCAN's time complexity is $\mathcal{O}(n \log n)$ for $n$ waypoints.
Unlike $k$-means clustering, ST-DBSCAN automatically picks a suitable number of clusters,
$k$, with $\approx \frac{n}{k}$ waypoints each.
For $k$ times, we compute the min-max of sets of size $\frac{n}{k}$, sort the $k$ elements and finally make $\frac{n}{k}$ assignments.
So this drones-to-clusters allocation takes $\mathcal{O}(k \frac{n}{k} + k \log k + \frac{n}{k})$ time.
Hence, this \emph{clustering phase} takes $\mathcal{O}(n \log n)$ time.
For the \algtas, we maintain an interval tree for fast temporal operations.
If $l$ is the maximum number of batches to schedule per activity, building the tree costs $\mathcal{O}(\frac{nl}{k} \log(\frac{nl}{k}))$, while search, insertion and deletion cost $\mathcal{O}(\log(\frac{nl}{k}))$.
Finding free time slots makes a pass over the batches in $\mathcal{O}(\frac{nl}{k})$.
This is repeated for $l$ batches, to give an overall time complexity of $\mathcal{O}(\frac{nl}{k} \log(\frac{nl}{k}) + \frac{n}{k}l^2)$.
The \emph{default assignment}\xspace also relies on the same interval tree, and hence has the same complexity as \algtas.
Finally, for the $k$ clusters and each application in a cluster, two schedule assignments are calculated for all the drones.
Thus, the time complexity of \algjsc is $\mathcal{O}(n \log n) + \mathcal{O}(k \frac{n}{k} m (\frac{nl}{k} \log(\frac{nl}{k}) + \frac{n}{k}l^2))$.
However, since the clustering can result in a single cluster ($k = 1$) and the number of drones can approach the number of activities ($m \rightarrow n$), the \emph{overall complexity} of \algjsc is $\mathcal{O}(n^3 l^2)$ in the worst case.
\subsection{The \algvrc Algorithm}
The \textsc{Vehicle Routing Centric}\xspace (\algvrc) algorithm aims to find near-optimal waypoint routing while initially ignoring efficient scheduling of the batch computation.
\algvrc is split into three phases: \emph{routing}, \emph{splitting}, and \emph{scheduling}.
\subsubsection{Routing Phase}
In this phase, \algvrc builds routes while satisfying the \emph{temporal constraint} for activities, i.e., for any two consecutive activities $(\alpha_i, \alpha_{i+1})$ in the route, $\bar{t}_i + \mathcal{F}(\lambda_i, \lambda_{i+1}) \le t_{i+1}$.
This is done using a modified version of $k$-nearest neighbors ($k$\textsc{-nn}\xspace) algorithm, whose solution is then locally optimized using the \textsc{2-opt*}\xspace heuristic~\cite{potvin1995exchange}.
The modified $k$\textsc{-nn}\xspace works as follows:
Starting from \ensuremath{\widehat{\lambda}}\xspace, a route is iteratively built by selecting, from among the $k$ nearest waypoints which meet the \emph{temporal constraint},
the one, say, $\lambda_1$ whose activity has the earliest \emph{observation start time}.
This process resumes from $\lambda_1$ to find $\lambda_2$, and so on until there is no feasible neighbor. \ensuremath{\widehat{\lambda}}\xspace is finally added to conclude the route.
This procedure is repeated to find other routes until all the possible waypoints are chosen.
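The modified $k$\textsc{-nn}\xspace construction of a single route can be sketched as follows (illustrative Python with a Euclidean constant-speed flying-time model; the data layout and names are our assumptions):

```python
import math

def build_route(depot, activities, k=3, speed=4.0):
    """Modified k-NN: grow one route from the depot by repeatedly taking,
    among the k nearest unvisited waypoints reachable before their
    observation start time, the one with the earliest start time.
    activities: dict id -> (location, t_start, t_end)."""
    fly = lambda p, q: math.dist(p, q) / speed
    route, loc, clock = [], depot, 0.0
    remaining = dict(activities)
    while remaining:
        feasible = [(aid, a) for aid, a in remaining.items()
                    if clock + fly(loc, a[0]) <= a[1]]   # temporal constraint
        if not feasible:
            break                      # no feasible neighbor: close the route
        nearest = sorted(feasible, key=lambda x: fly(loc, x[1][0]))[:k]
        aid, (wp, t, tbar) = min(nearest, key=lambda x: x[1][1])
        route.append(aid)
        loc, clock = wp, tbar          # depart after the capture ends
        del remaining[aid]
    return route
```

Repeating `build_route` over the still-unvisited activities yields the initial route set, before the \textsc{2-opt*}\xspace local optimization.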
This initial set of routes is optimized to minimize the flying and hovering energy using \textsc{2-opt*}\xspace, which lets us find a local optimal solution from the given one~\cite{toth2002vehicle}.
However, routes found here may be infeasible for a drone to complete within its energy constraints.
\subsubsection{Splitting Phase}
Say $R_{i,j} = (\ensuremath{\widehat{\lambda}}\xspace, \lambda_i, \ldots, \lambda_j, \ensuremath{\widehat{\lambda}}\xspace)$ be an energy-infeasible route from the routing phase, which visits $\lambda_i$ and $\lambda_j$ as the first and last waypoints from \ensuremath{\widehat{\lambda}}\xspace.
The goal is to find a suitable waypoint $\lambda_g$ for $i \le g < j$ such that by splitting $R_{i,j}$ at $\lambda_g$ and $\lambda_{g+1}$, we can find an energy-feasible route while also improving the overall utility and reducing scheduling conflicts for batches.
For each edge $(\lambda_g,\lambda_{g+1})$, we compute a \emph{split score} whose value sums up three components: \emph{energy score}, \emph{utility score}, and \emph{compute score}.
\noindent {\textbf{Energy score.}}
Let $E(a,b)$ be the cumulative flying and hovering energy required for some route $R_{a,b} \subseteq R_{i,j}$.
Here we sequentially partition the route $R_{i,j}$ into multiple \emph{viable trips} $R_{(i,k_{1}-1)}$, $R_{(k_{1},k_{2}-1)}$, \ldots, $R_{(k_{x},j)}$ such that each is a maximal trip and is energy-feasible, i.e., $E(k_{y},k_{y+1}-1) \leq E$ while $E(k_{y},k_{y+1}) > E$.
For each edge $(\lambda_g, \lambda_{g+1}) \in R_{(k_{y},k_{y+1}-1)}$, the \emph{energy score} is the ratio $\frac{E(k_{y},g)}{E} \leq 1$.
A high value indicates that a split at this edge improves the battery utilization.
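Assuming the per-edge energies (flying plus hovering) are precomputed, the sequential partition into maximal viable trips and the per-edge energy score can be sketched as follows (names are illustrative):

```python
def energy_scores(edge_energy, E):
    """Energy score per edge: sequentially cut the route into maximal
    energy-feasible trips, then score each edge by the energy already
    spent within its trip, normalized by the capacity E (so <= 1)."""
    scores, acc = [], 0.0
    for e in edge_energy:
        if acc + e > E:        # edge would exceed capacity: start a new trip
            acc = 0.0
        acc += e
        scores.append(acc / E)
    return scores
```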
\noindent \textbf{{Utility score.}}
Say $U(a,b)$ gives the cumulative data capture utility from visiting waypoints in a route $R_{a,b} \subseteq R_{i,j}$. Say edge $(\lambda_g, \lambda_{g+1}) \in R_{(k_{y},k_{y+1}-1)} \subseteq R_{i,j}$ is also part of a viable trip from above. Here, we find the data capture utility of a sub-route of $R_{i,j}$ that starts a new maximal viable trip at $\lambda_{g+1}$ and spans until $\lambda_{l}$, as $U(g,l)$. The utility score of edge $(\lambda_g, \lambda_{g+1})$ is the ratio between this new maximal viable trip and the original viable trip the edge was part of, $\frac{U(g,l)}{U(k_{y},k_{y+1}-1)}$.
A value $>1$ indicates that a split at this edge improves the utility relative to the earlier sequential partitioning of the route.
\noindent \textbf{{Compute score.}}
We first do a soft scheduling of the batches of all waypoints in $R_{i,j}$ using the \emph{first-fit} scheduling policy, mapping them to their \emph{preferred interval}, which is assumed to be free. Say there are $|R_{i,j}|$ such batches.
Then, for each edge $(\lambda_g, \lambda_{g+1}) \in R_{i,j}$, we find the overlap count $O_{g}$ as the number of batches from $\alpha_g$ whose execution slot overlaps with batches from all other activities.
The overlap score for edge $(\lambda_g, \lambda_{g+1})$ is given as $\frac{O_{g}}{|R_{i,j}|}$.
If this value is higher, splitting the route at this point will avoid batches from having schedule conflicts in their preferred time slot.
Once the three scores are assigned, the edge with the highest \emph{split score} is selected as the split-point to divide the route into two sub-routes.
If a sub-route meets the energy constraint, it is selected as a \emph{valid trip}.
If either or both of the sub-routes exceed the energy capacity, the splitting phase is recursively applied to that sub-route till all waypoints in the original route are part of some valid trip.
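The recursive splitting can be sketched as follows, where `edge_scores` computes the per-edge split scores of a (sub-)route and `feasible` tests the energy constraint; both are supplied by the caller in this hedged illustration:

```python
def split_route(route, edge_scores, feasible):
    """Recursively split an energy-infeasible route at the edge with the
    highest split score until every sub-route is a valid trip."""
    if feasible(route) or len(route) == 1:
        return [route]
    scores = edge_scores(route)                       # one score per edge
    g = max(range(len(scores)), key=lambda i: scores[i])
    return (split_route(route[:g + 1], edge_scores, feasible)
            + split_route(route[g + 1:], edge_scores, feasible))
```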
\subsubsection{Scheduling Phase}
Trips are then sorted in decreasing order of their total utility, and drones are allocated to trips depending on their temporal availability.
Once assigned to a trip, the drone's scheduling is done by comparing the \emph{default assignment}\xspace and the \algtas used in \algjsc.
\subsubsection{The Core of \algvrc}
The \algvrc algorithm works as follows (Algorithm~\ref{alg:algvrc}).
After the initial \emph{routing phase}, energy-infeasible routes are split into feasible ones in the \emph{splitting phase}, and then drones are allocated to them.
Finally, the \emph{scheduling phase} is applied to find the best schedule between the \emph{default assignment}\xspace and the \algtas.
\begin{algorithm}[htbp]
$\mathbb{R} \gets $ \emph{routing phase}\;
\For {$R_{ij} \in \mathbb{R}$} {
\For {$(\lambda_g, \lambda_{g+1}) \in R_{ij}, i \le g < j$} {
$s(g) \gets $ \emph{energy score} + \emph{utility score} + \emph{compute score}\;
}
}
$\mathbb{R}' \gets $ \emph{splitting phase} based on scores $s(i), 1 \le i \le n$\;
\For {$d_j$ assigned to $R_{ij} \in \mathbb{R}'$} {
apply best scheduling among \emph{default assignment}\xspace and \algtas on $R_{ij}$\;
}
\caption{$\algvrc(A, D)$}
\label{alg:algvrc}
\end{algorithm}
\subsubsection{Time Complexity of \algvrc}
In the \emph{routing phase}, the modified $k$\textsc{-nn}\xspace takes $\mathcal{O}(kn)$ time, with $n$ waypoints and $k$ neighbors. The \textsc{2-opt*}\xspace algorithm has time complexity $\mathcal{O}(n^4)$. Hence this phase overall has a cost of $\mathcal{O}(n^4)$.
In the \emph{splitting phase}, calculating the energy score for a route with $n$ edges takes $\mathcal{O}(n)$, calculating the utility score takes $\mathcal{O}(n^2)$, and calculating the compute score takes $\mathcal{O}(n)$. Considering a recursion of depth $n-1$, the complexity of this phase is $\mathcal{O}(n^3)$.
Combining \emph{default assignment}\xspace and \algtas, \algvrc's \emph{overall complexity} is $\mathcal{O}(n^4)$ in the worst case.
\section{Performance Evaluation}\label{sec:evaluation}
\subsection{Experimental Setup}
The \algopt solution is implemented using IBM's CPLEX MILP solver v12~\cite{cplex2009v12}. It uses Python to wrap the objective and constraints, and invokes the parallel solver. Our \algjsc and \algvrc heuristics have a sequential implementation using native Python.
By default, these scheduling algorithms on our workloads run on an \emph{AWS c5n.4xlarge VM} with Intel Xeon Platinum 8124M CPU, $16$ cores, $3.0 \unit{GHz}$, and $42 \unit{GB}$ RAM.
\algopt runs on $16$ threads and the heuristics on $1$ thread.
We perform real-world benchmarks on flying, hovering, DNN computing and endurance, for a fleet of custom, commercial-grade drones. The X-wing quad-copter is designed with a top speed of $6 \unit{m/s}$ ($20 \unit{km/h}$), $<120 \unit{m}$ altitude, a $24000 \unit{mAh}$ Li-Ion battery and a payload capacity of $3 \unit{kg}$. It includes dual front and downward HD cameras, GPS and LiDAR Lite, and uses the Pixhawk2 flight controller. It also has an NVIDIA Jetson TX2 compute module with $4$-Core ARM64 CPU, $256$-core Pascal CUDA cores, $8 \unit{GB}$ RAM and $32 \unit{GB}$ eMMC storage.
The maximum flying time is $\approx 30 \unit{min}$ with a range of $3.5 \unit{km}$.
Based on our benchmarks, we use the following drone parameters in our analytical experiments.
\begin{center}
\small
\setlength\tabcolsep{2pt}
\begin{tabular}{c|c|c|c|c}
\toprule
$s$ & $\epsilon^f$ & $\epsilon^h$ & $\epsilon^c$ & $E$ \\%& Max.Flight & Max.Range\\
\midrule
$4 \unit{m/s}$ & $750 \unit{J/s}$ & $700 \unit{J/s}$ & $20 \unit{J/s}$ & $1350 \unit{kJ}$\\% & $\approx 30~mins$ & $3.5~km$ \\
\bottomrule
\end{tabular}
\end{center}
\begin{figure*}[t]
\centering
\subfloat[Utility per drone, RND]{
\includegraphics[width=0.42\textwidth]{figures/W1-util-per-drone}
\label{fig:exp:w1:util}
}
\subfloat[Utility per drone, DFS]{
\includegraphics[width=0.42\textwidth]{figures/W2-util-per-drone}
\label{fig:exp:w2:util}
}
\hfill
\subfloat[Alg. runtime, RND]{
\includegraphics[width=0.42\textwidth]{figures/W1-execution}
\label{fig:exp:w1:exec}
}
\subfloat[Alg. runtime, DFS]{
\includegraphics[width=0.42\textwidth]{figures/W2-execution}
\label{fig:exp:w2:exec}
}
\caption{\emph{Expected utility per drone} and \emph{algorithm runtime} of the three MSP algorithms, for the RND and DFS workloads on MNet. On the X axis, the number of drones (outer) and activities per drone (inner) increase. \algopt is solved on $16\times$ cores while \algjsc and \algvrc run on just $1$. DNF indicates \algopt did not finish.}
\label{fig:exp:util-exec}
\end{figure*}
\subsection{Workloads}
We evaluate the scheduling algorithms for two \emph{application workloads}: Random (RND) and Depth First Search (DFS). Both have a maximum mission time of $4 \unit{h}$ over multiple trips. In the \emph{RND workload}, $n$ waypoints are randomly placed within a $3.5 \unit{km}$ radius from the depot, each with a random activity start time within $(0,240] \unit{mins}$. This is an adversarial scenario with no spatio-temporal locality. The \emph{DFS workload} is motivated by realistic traffic monitoring needs. We perform a depth-first traversal over a $3.5 \unit{km}$ radius of our local city's road network, centered at the depot. With a $\mathcal{P}=\frac{1}{10}$ probability, we pick a visited vertex as an activity waypoint; $\mathcal{P}$ grows by $\frac{1}{10}$ for every vertex that is not selected, until $n$ waypoints are chosen. The start times of these activities grow monotonically.
The table below shows the activity and drone \emph{scenarios} for each workload. These are based on reasonable operational assumptions and schedule feasibility. We vary the data capture time ($\bar{t}-t$); batching interval ($\beta$); batch execution time on $2$ DNNs ($\rho_M$, $\rho_R$)\footnote{We run \emph{SSD Mobilenet v2 DNN} (MNet, $\rho_M$)~\cite{Sandler_2018_CVPR}, popular for analyzing drone footage~\cite{wang2018bandwidth}, and \emph{FCN Resnet18 DNN} (RNet, $\rho_R$)~\cite{long2015fully} on the TX2.}; deadline ($\delta$); utility ($\gamma$); and number of drones ($m$). The \emph{load factor} $x$ decides the count of activities per mission, $n = x \cdot m$. Drones take at most $r_{\max}=\frac{n}{m}$ trips.
\begin{center}
\small
\setlength\tabcolsep{2pt}
\begin{tabular}{c|c|c|c|c|c|c|c|c}
\toprule
$\bar{t}-t$ & $\beta$ & $\rho_M$ & $\rho_R$ & $\delta$ & $\gamma$ & $m$ & $x$ & $n= x \cdot m$ \\
\midrule
$[1,5]$ & $60\unit{s}$ & $11\unit{s}$ & $98\unit{s}$ &$120\unit{s}$ & $[1,5]$ & $5,10,20,50$ & $2, 4, 8$ & $10,\ldots,200$\\
\bottomrule
\end{tabular}
\end{center}
\noindent For brevity, RNet is only run on DFS. $10$ \emph{instances} of each of these $33$ viable workload scenarios are created. We run \algopt, \algjsc and \algvrc for each to return a schedule and expected utility.
\subsection{Experimental Results}
Figures~\ref{fig:exp:w1:util},~\ref{fig:exp:w2:util} and~\ref{fig:exp:w2r:util} show the \emph{expected utility per drone} for the schedules from the $3$ algorithms, for different drone counts and activity load factors. Similarly, Figures~\ref{fig:exp:w1:exec},~\ref{fig:exp:w2:exec}, and~\ref{fig:exp:w2r:exec} show the \emph{algorithm execution time} (log scale, in seconds) for them. Each bar is averaged over $10$ instances, with standard deviations shown as whiskers. The per drone utility lets us uniformly compare the schedules for different workload scenarios. The \emph{total utility} -- the MSP objective function -- is the product of the per drone utility shown and the drone count. \algopt \emph{did not finish} (DNF) within $7 \unit{h}$ for scenarios with $40$ or more activities.
\begin{figure}[t]
\centering
\subfloat[Utility per drone]{
\includegraphics[width=0.42\textwidth]{figures/W2-DNN2-util-per-drone}
\label{fig:exp:w2r:util}
}%
\hfill
\subfloat[Alg. runtime]{
\includegraphics[width=0.42\textwidth]{figures/W2-DNN2-execution}
\label{fig:exp:w2r:exec}
}
\caption{\emph{Expected utility per drone} and \emph{algorithm runtime} for RNet on DFS.}
\label{fig:exp:util-exec:w2r}
\end{figure}
\subsubsection{\algopt offers the highest utility, if it completes, followed by \algvrc and \algjsc}
Specifically, for the $5$-drone scenarios for which \algopt completes, it offers an average of $42\%$ higher expected utility than \algjsc. \algvrc gives $26\%$ more average utility than \algjsc for these scenarios, and $75\%$ more for all scenarios they run for.
This is as expected for \algopt. Since a bulk of the energy is consumed for flying and hovering, \algvrc, which starts with an energy-efficient route, schedules more activities within the time and energy budget, as compared to \algjsc.
This is evidenced by Figure~\ref{fig:exp:act}, which reports for MNet the \emph{average fraction of submitted activities} that are successfully scheduled by the algorithms. The remaining activities are not part of any trip. Across all workloads, \algjsc schedules only $60\%$ of activities, \algvrc $90\%$, and \algopt $98\%$. So \algopt and \algvrc are better at packing routes and analytics on the UAVs.
\algopt and \algvrc offer more utility for the DFS workload than for RND, since $\geq 96\%$ of DFS activities are scheduled. They exploit the spatial and temporal locality of activities in DFS.
\subsubsection{The average flying time per activity in each trip is higher for \algvrc compared to \algjsc} Interestingly, at $728 \unit{s}$ vs. $688 \unit{s}$ per activity, the route-efficient schedules from \algvrc manage to fly to waypoints farther away from the depot and/or from each other, within the energy constraints, when compared to the schedules from \algjsc.
As a result, it schedules a larger fraction of the activities to gain a higher expected utility.
\begin{figure}[tbp]
\centering
\subfloat[RND]{
\includegraphics[width=0.42\textwidth]{W1-app-sch.pdf}
\label{fig:exp:w1:act}
}
\hfill
\subfloat[DFS]{
\includegraphics[width=0.42\textwidth]{W2-app-sch.pdf}
\label{fig:exp:w2:act}
}
\caption{Fraction (\%) of submitted activities scheduled per mission for MNet.}
\label{fig:exp:act}
\end{figure}
\subsubsection{The execution times for \algvrc and \algjsc match their time complexity}
We use the execution times of \algjsc for scheduling the $300+$ workload instances to fit a \emph{cubic function} in $n$, the number of activities, matching its time complexity of $\mathcal{O}(n^3 \cdot l^2)$; since in our runs $l \in [1,5]$ and $l \leq n$, we omit the $l$ term from the fit.
Similarly, we fit a \emph{degree-4 polynomial} for \algvrc in $n$.
The \emph{correlation coefficients} for these two fits are high, at $0.86$ and $0.99$, respectively. So, the real-world execution times of our scheduling heuristics match our complexity analysis.
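The fitting procedure can be sketched as follows. This is an illustration only, using NumPy and synthetic runtimes in place of our measured ones; the function name is hypothetical.

```python
import numpy as np

def fit_runtime(n_activities, runtimes, degree):
    """Fit a polynomial of the given degree to measured scheduler
    runtimes and report the correlation between fit and data."""
    coeffs = np.polyfit(n_activities, runtimes, degree)
    predicted = np.polyval(coeffs, n_activities)
    r = np.corrcoef(runtimes, predicted)[0, 1]
    return coeffs, r

# Synthetic illustration: a roughly cubic runtime with 5% noise,
# standing in for the measured \algjsc execution times.
rng = np.random.default_rng(0)
n = np.arange(10, 210, 10)
t = 1e-5 * n**3 * (1 + 0.05 * rng.standard_normal(n.size))
coeffs, r = fit_runtime(n, t, degree=3)   # r close to 1 for a good fit
```

For \algvrc the same routine is called with \texttt{degree=4}, matching its degree-4 polynomial fit.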
\subsubsection{\algopt is the slowest to execute, followed by \algvrc and \algjsc}
Despite \algopt using $16\times$ more cores than \algjsc and \algvrc, its average execution times are $>100 \unit{s}$ for just $20$ activities. The largest scenario to complete in reasonable time is $40$ activities on $5$ drones, which took $7 \unit{h}$ on average. This is consistent with the NP-hard nature of MSP. As our mission window is $4 \unit{h}$, any algorithm slower than that is not useful.
\algjsc is fast, and on average completes within $1 \unit{s}$ for up to $80$ activities. Even for the largest scenario with $50$ drones and $200$ activities, it takes only $90 \unit{s}$ for RND and $112 \unit{s}$ for DFS.
\algvrc is slower but feasible for a larger range of activities than \algopt. It completes within $3 \unit{min}$ for up to $100$ activities. But, it takes $\approx 45 \unit{min}$ to schedule $200$ activities on $50$ drones.
\subsubsection{The choice of a good scheduling algorithm depends on the fleet size and activity count}
From these results, we can conclude that \algopt is well suited for small drone fleets with about $20$ activities scheduled per mission. It completes within minutes and offers about $20\%$ better utility than \algvrc.
\algvrc offers a good trade-off between utility and execution time for medium workloads with $100$ activities and $50$ drones. This too completes within minutes and gives on average about $75\%$ better utility than \algjsc and schedules over $80\%$ of all submitted activities.
For large fleets with $200$ or more activities being scheduled, \algjsc is well suited with fast solutions but has low utility and leaves a majority of activities unscheduled.
\subsubsection{A higher load factor increases the utility, but a smaller fraction of activities is scheduled}
As $x$ increases, we see that the utility derived increases. This is partly due to adequate energy and time being available for the drones to complete more activities in multiple trips.
E.g., for the 5-drone case, we use load factors of $x=\{2,4,8,16,32\}$ for \algjsc and \algvrc. There is a consistent growth in the total utility, from $109$ to $523$ for \algjsc, and from $121$ to $1080$ for \algvrc. There is also a corresponding growth in the number of trips performed per mission, e.g., from $7.5$ to $43.2$ in total for \algvrc.
However, the fraction of submitted activities that are scheduled falls. For \algjsc, the scheduled \% drops linearly with $x$, from $76\%$ to $23\%$. For \algvrc, the scheduled \% stays at about $80\%$ until $x=8$, at which point the activities saturate the drone fleet's capacity and it falls linearly to $37\%$ for $x=32$.
Interestingly, the utility increases faster than the number of activities scheduled for \algvrc. This is because the scheduler favors activities that offer a higher utility while avoiding those with a lower utility, causing a $20\%$ increase in the utility received per activity from $x=8$ to $x=32$.
\subsubsection{Longer-running edge analytics offer lower on-time utility}
We run the same scenarios using RNet and MNet DNNs for the DFS workload. For both \algjsc and \algvrc, the \emph{data capture utility} that accrues from their schedules for the two DNNs is similar. However, since the RNet execution time per batch is much higher than MNet, there is a drop in \emph{on-time utility}, by about $32\%$ for both \algjsc and \algvrc, due to more deadline violations.
As a result, this also causes a drop in total utility for RNet by about $15.9\%$ for \algjsc and $19\%$ for \algvrc, relative to MNet. Even for \algopt we see a similar trend with a $15.8\%$ drop in the total utility. The runtime of \algjsc and \algvrc do not exhibit a significant change between RNet and MNet.
\subsubsection{Effect of real-world factors}
The \textit{expected utilities} reported above are under ideal conditions. Here, we evaluate their practical efficacy by emulating these schedules using real drone traces to get the \textit{effective utility} and \textit{trip completion rate}.
Ideally, each trip generated by \algjsc and \algvrc should complete within a drone's energy capacity.
In practice, factors such as wind or non-linear battery performance can increase or decrease the actual energy consumed. Figure~\ref{fig:exp:real-trips} shows the \% of scheduled trips that do not complete when using the drone trace. With $<80$ activities, all trips complete (not plotted). But with $\geq 80$ activities, some trips in the planned schedule start to fail. At worst, $12\%$ of trips are incomplete in some schedules. So the effect of real-world factors can be significant. Interestingly, for the failed trips, an average of $3.6\%$ and a maximum of $7.9\%$ extra battery capacity would have allowed them to finish the trip.
So by maintaining a buffer battery capacity of $\approx 10\%$ when planning a schedule, we can ensure that the drones can complete a trip and return to the depot.
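A scheduler can apply this reserve by planning trips against a reduced energy budget rather than the full battery capacity. A minimal sketch, with a hypothetical function name and the $10\%$ buffer suggested above as the default:

```python
def planning_budget(capacity_wh, buffer_fraction=0.10):
    """Energy budget the scheduler should plan trips against: hold back
    a reserve so that real-world factors (wind, non-linear battery
    behavior) do not strand a trip before it returns to the depot."""
    assert 0.0 <= buffer_fraction < 1.0
    return capacity_wh * (1.0 - buffer_fraction)
```

For example, a $100$-unit battery planned with a $10\%$ buffer leaves $90$ units for flying, hovering and computing on that trip.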
\begin{figure*}[t]
\centering
\subfloat[MNet, RND]{
\includegraphics[width=0.31\textwidth]{figures/W1-prec-trips-mnet}
\label{fig:exp:w1:util_2}
}
\subfloat[MNet, DFS]{
\includegraphics[width=0.31\textwidth]{figures/W2-prec-trips-mnet}
\label{fig:exp:w2:util_2}
}
\subfloat[RNet, DFS]{
\includegraphics[width=0.31\textwidth]{figures/W2-prec-trips-rnet}
\label{fig:exp:w1:exec_2}
}
\caption{\% of incomplete trips using drone trace. It is $0\%$ for $<80$ activities.}
\label{fig:exp:real-trips}
\end{figure*}
\section{Conclusion and Future Work}
\label{sec:conclusions}
This paper introduces a novel Mission Scheduling Problem (\prob) that co-schedules routes and analytics for drones, maximizing the utility for completing activities.
We proposed an optimal algorithm, \algopt, and two time-efficient heuristics, \algjsc and \algvrc.
Evaluations using two workloads, varying drone counts and load factors, and real traces exhibit different trade-offs between utility and execution time. \algopt is best for $\leq20$ activities and $\leq 5$ drones, \algvrc for $\leq100$ activities and $\leq 50$ drones, and \algjsc for $>100$ activities.
The measured execution times of our algorithms match their time complexity analysis.
The schedules work well for fast and slow DNNs, though on-time utility drops for the latter.
The MSP proposed here is just one variant of an entire class of fleet co-scheduling problems for drones. Other architectures can be explored considering 4G/5G network coverage to send edge results to the back-end, or even off-load captured data to the cloud if it is infeasible to compute on the drone. This will allow more pathways for data sharing among UAVs and GS, but impose energy, bandwidth and latency costs for communications. Even the routing can be aware of cellular coverage to ensure deterministic off-loading on a trip.
We can use alternate cost models by assigning an operational cost per trip or per visit, and convert the MSP into a profit maximization problem. The activity time-windows may be relaxed rather than be defined as a static window. Drones with heterogeneous capabilities, in their endurance, compute capabilities, and sensors, will also be relevant for performing diverse activities such as picking up a package using an on-board claw and visually verifying it using a DNN.
Finally, we need to deal with dynamics and uncertainties like wind, obstacles and non-linear battery or compute behavior that affect flight paths, energy consumption and utilities. We can use probability distributions and stochastic approaches coupled with real-time information, which can decide and enact on-line rescheduling and rerouting while on a trip.
Such on-the-fly route updates for drones also allow us to accept and schedule activities continuously, rather than accumulating a mission over hours, and to prioritize the profitable activities. These will also need to be validated using more robust real-world experiments and traces.
\let\thefootnote\relax\footnotetext{\noindent \textbf{Acknowledgments.} \textit{This work is supported by AWS Research Grant, Intelligent Systems Center at Missouri S\&T, and NSF grants CCF-1725755 and SCC-1952045. A. Khochare is funded by a Ph.D. fellowship from RBCCPS, IISc, Bangalore.
S. K. Das was partially supported by a Satish Dhawan Visiting Chair Professorship at IISc. We thank RBCCPS for access to the drone, and Vishal, Varun and Srikrishna for helping collect the drone traces.}}
\clearpage
\bibliographystyle{ieeetr}
\section{Introduction}
Unmanned Aerial Vehicles (UAVs), also called \emph{drones}, are enabling a wide range of applications in smart cities~\cite{mohammed2014uavs}, such as traffic monitoring~\cite{kanistras2013survey}, construction surveys~\cite{george2019towards}, package delivery~\cite{sorbelli2020energy}, localization~\cite{sorbelli2020measurement}, and disaster (including COVID-19) management~\cite{costa2020covid}, assisted by 5G wireless roll-out~\cite{gapeyenko2018flexible}.
The mobility, agility, and hovering capabilities of drones allow them to rapidly fly to points of interest (i.e., \emph{waypoints}) in the city to accomplish specific \emph{activities}.
Usually, such activities involve hovering and recording a scene using the drone's camera, and analyzing the videos to take decisions.
Advancements of computer vision algorithms and \emph{Deep Neural Networks} (DNNs) enable video analytics to be performed over such recordings for automated decision-making.
Typically, such analytics are performed once the recordings are transferred to a ground station (GS) after the drones land. In-flight transfer of videos to a GS is limited by the intermittent bandwidth of current communications technologies.
However, certain activities may require low-latency analysis and decisions, as soon as the video is captured at a location. Hence, the on-board \emph{edge computing} capability~\cite{jung2018perception}
available on commercial drones can be leveraged to process the recorded videos, and quickly report concise results to the GS over 3/4/5G wireless networks~\cite{zeng2019accessing}.
Since the transferred results are brief and the \emph{on-board} processing times dominate, we ignore communication constraints like data rate, latency, and reliability that are affected by the UAV's altitude, antenna envelope, etc.
UAVs are \emph{energy-constrained vehicles} with limited battery capacity, and commercial drones can currently fly for less than an hour. The flying distance between waypoints will affect the number of activities that can be completed in one \emph{trip} on a full battery. Besides hovering and recording videos at waypoints, performing edge analytics also consumes energy. So, the drone's battery capacity should be judiciously managed for the flying, hovering and computing tasks. Nevertheless, once a drone lands, its exhausted battery can be quickly replaced with a full one, to be ready for a new trip.
This paper examines how a \emph{UAV fleet operator} in a city can plan \emph{missions} for a captive set of drones to accomplish activities periodically provided by the users. An \emph{activity} involves visiting a waypoint, hovering and capturing video at that location for a specific time period, and optionally performing on-board analytics on the captured data. Activities also offer \emph{utility} scores depending on how they are handled. The novel problem we propose here is for the fleet operator to \emph{co-schedule flight routing among waypoints \underline{and} on-board computation so that the drones complete (a subset of) the provided activities, within the energy and computation constraints of each drone, while maximizing the total utility.}
Existing works have examined routing of one or more drones for
capturing and relaying data to the backend~\cite{motlagh2019energy}, off-loading computations from mobile devices~\cite{hu2019uav}, and cooperative video surveillance~\cite{trotta2018uavs}.
There also exists literature on scheduling tasks for edge computing that are compute- and energy-aware, operate on distributed edge resources, and consider deadlines and device reliability~\cite{meng2019dedas}. However, none of these examine co-scheduling a fleet of physical drones and digital applications on them to meet the objective, while efficiently managing the energy capacity to maximize utility.
Specifically, our \emph{Mission Scheduling Problem (MSP)} combines elements of the \emph{Vehicle Routing Problem (VRP)}~\cite{clarke1964scheduling}, which generalizes the well known Traveling Salesman Problem (TSP) to find optimal routes for a set of vehicles and customers~\cite{toth2002vehicle}, and the \emph{Job-shop Scheduling Problem (JSP)}~\cite{manne1960job} for mapping jobs of different execution duration to the available resources, which is often used for parallel scheduling of computing tasks to multiprocessors~\cite{kwok1999static}.
We make the following contributions in this paper.
\begin{itemize}
\item We characterize the system and application model, and formally define the \textit{Mission Scheduling Problem (\prob)} to co-schedule routes and analytics for a fleet of drones, maximizing the obtained utility (Sections~\ref{sec:model} and~\ref{sec:prob-def}).
\item We prove that \prob is \textit{NP-hard}, and optimally solve it using a \textit{mixed integer linear programming (MILP)} design, \algopt, which is feasible for small inputs (Section~\ref{sec:algorithms:opt}).
\item We design \textit{two time-efficient heuristic algorithms, \algjsc and \algvrc,} that solve the MSP for arbitrary-sized inputs, and offer complexity bounds for their execution (Section~\ref{sec:algorithms:appox}).
\item We \textit{evaluate and analyze} the utility and scheduling runtime trade-offs for these three algorithms, for diverse drone workloads based on real drone traces (Section~\ref{sec:evaluation}).
\end{itemize}
\section{Related Work}\label{sec:related}
This section reviews literature on vehicle routing and job-shop scheduling, contrasting them with MSP and our solutions.
\subsection{Vehicle Routing Problem (VRP)}
VRP is a variant of TSP with multiple salespersons~\cite{clarke1964scheduling} and it is NP-hard~\cite{lenstra1981complexity}.
This problem has had several extensions to handle realistic scenarios, such as temporal constraints that impose deliveries only at specific time-windows~\cite{desaulniers2016exact}, capacity constraints on vehicle payloads~\cite{uchoa2017new}, multiple trips for vehicles~\cite{cattaruzza2016vehicle}, profit per vehicle~\cite{stavropoulou2019vehicle} and traffic congestion~\cite{gayialis2018developing}.
VRP has also been adapted for route planning for a fleet of ships~\cite{fagerholt1999optimal}, and for drone assisted delivery of goods~\cite{khoufi2019survey}.
In~\cite{motlagh2019energy}, UAVs schedule \emph{events} at specific locations, involving data sensing/processing and communication with the GS. The goal is to minimize the drone's energy consumption and operation time. Factors like wind and temperature that may affect the route and execution time are also considered.
While they combine sensing and processing into one monolithic event, we treat them as independent tasks that need to be co-scheduled. Also, they minimize the operating time and energy, while we maximize the utility of performing tasks within a time and energy budget.
In~\cite{hu2019uav} the use of UAVs is explored to off-load computing from the users' mobile devices, and for relaying data between mobile devices and GS. The authors considered the drones' trajectory, bandwidth, and computing optimizations in an iterative manner. The aim is to minimize the energy consumption of the drones and mobile devices. It is validated through simulation for four mobile devices. We instead consider a more practical problem for a fleet of drones with possibly hundreds of locations to visit and on-board computing tasks to perform.
Trotta et al.~\cite{trotta2018uavs} propose a novel architecture for energy-efficient video surveillance of points of interest (POIs) in a city by drones. The UAVs use bus rooftops for re-charging and being transported to the next POI based on known bus routes. Drones also act as relays for other drones capturing videos. The mapping of drones to bus routes is formulated as an MILP problem and a TSP-based heuristic is proposed. Unlike ours, their goal is not to schedule and process data on-board the drone. Similarly, we do not examine any data off-loading from the drone, nor any piggy-backing mechanisms.
\subsection{Job-shop Scheduling Problem (JSP)}
Scheduling of computing tasks on drones is closely aligned with scheduling tasks on edge and fog devices~\cite{varshney2020characterizing}, and broadly with parallel workload scheduling~\cite{kwok1999static} and the JSP~\cite{manne1960job}.
In~\cite{meng2019dedas}, an online algorithm is proposed for deadline-aware task scheduling for edge computing. It highlights that workload scheduling on the edge has several dimensions, and it jointly optimizes networking and computing to yield the best possible schedule.
Feng et al.~\cite{feng2018mobile} propose a framework for cooperative edge computing on autonomous road vehicles, which aims to increase their decentralized computational capabilities and task execution performance.
Others~\cite{li2019joint} combine optimal placement of data blocks with optimal task scheduling to reduce computation delay and response time for the submitted tasks while improving user experience in edge computing.
In contrast, we co-schedule UAV routing and edge computing.
There exist works that explore task scheduling for mobile clients, and off-load computing to another nearby edge or fog resource. These may be categorized based on their use of \emph{predictable} or \emph{unpredictable} mobility models. In~\cite{ning2019mobile}, the mobility of a vehicle is predicted and used to select the road-side edge computing unit to which the computation is off-loaded. Serendipity~\cite{shi2012serendipity} takes an alternate view and assumes that mobile edge devices interact with each other intermittently and at random. This makes it challenging to determine if tasks should be off-loaded to another proximate device for reliable completion. The problem we solve is complementary and does not involve off-loading. The possible waypoints are known ahead, and we perform predictable UAV route planning and scheduling of the computing locally on the edge.
Scheduling on the energy-constrained edge has also been investigated by Zhang et al.~\cite{zhang2017energy}, where an energy-aware off-loading scheme is proposed to jointly optimize communication and computation resource allocation on the edge, and to limit latency. Our proposed problem also considers energy for the drone flight while meeting deadlines for on-board computing.
\section{Models and Assumptions}\label{sec:model}
This section introduces the UAV system model, application model, and utility model along with underlying assumptions.
\begin{figure*}[t]
\centering
\def\svgwidth{\textwidth}
\input{figures/msp_big_picture.pdf_tex}
\caption{Sample MSP scenario. a) shows a city with the depot ($\ensuremath{\widehat{\lambda}}\xspace$); 6 waypoints to visit ($\lambda_i$) with some utility; and possible trip routes for drones ($R^i_j$). b) has the corresponding 6 activities ($\alpha_i$) with data capture duration (shaded) and compute deadline (vertical line) and the two available drones.}
\label{fig:msp_big_picture}
\end{figure*}
\subsection{UAV System Model}
Let $\ensuremath{\widehat{\lambda}}\xspace=(0, 0, 0)$ be the \emph{location} of a UAV depot in the city (see Figure~\ref{fig:msp_big_picture}, left) centered at the origin of a 3D Cartesian coordinate system.
Let $D = \{d_1, \ldots, d_m\}$ be the set of $m$ available drones. For simplicity, we assume that all the drones are homogeneous. Each drone has a camera for recording videos, which is subsequently processed. This processing can be done using the on-board computing, or done offline once the drone lands (which is outside the scope of our problem). The on-board \emph{processing speed} is $\pi$ floating point operations per second (FLOPS). For simplicity, this is taken as cumulative across CPUs and GPUs on the drone, and this capacity is orthogonal to any computation done for navigation.
The battery on a drone has a fixed \emph{energy capacity} $E$, which is used both for flying and for on-board computation.
The drone's energy consumption has three components --
\emph{flying}, \emph{hovering} and \emph{computing}.
Let $\epsilon^f$ be the energy required for flying for a unit time duration at a constant energy-efficient speed $s$ within the Cartesian space;
let $\epsilon^h$ be the energy for hovering for a unit time duration;
and let $\epsilon^c$ be the energy for performing computation for a unit time duration.
For simplicity, we ignore the energy for video capture since it is negligible in practice.
Also, a drone that returns to the depot can swap in a full battery and immediately start a new trip.
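The energy model above admits a simple per-trip feasibility check: a trip is valid only if the flying, hovering and computing draws together fit the battery. A minimal sketch with hypothetical names, where \texttt{eps\_f}, \texttt{eps\_h}, \texttt{eps\_c} stand for $\epsilon^f$, $\epsilon^h$, $\epsilon^c$:

```python
def trip_energy(fly_time, hover_time, compute_time, eps_f, eps_h, eps_c):
    """Energy drawn on one trip: flying + hovering + on-board computing.
    Video-capture energy is ignored, as in the model."""
    return eps_f * fly_time + eps_h * hover_time + eps_c * compute_time

def trip_feasible(fly_time, hover_time, compute_time,
                  eps_f, eps_h, eps_c, capacity):
    """A trip fits only if its total draw is within the capacity E;
    the battery is swapped for a full one between trips."""
    return trip_energy(fly_time, hover_time, compute_time,
                       eps_f, eps_h, eps_c) <= capacity
```

Any scheduler for MSP must apply this check to every candidate trip it constructs.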
\subsection{Application Model}
Let $A = ( \alpha_1, \ldots, \alpha_n )$ be the set of $n$ activities to be performed starting from time $\widehat{t}=0$, where each {\em activity} $\alpha_i$ is given by the tuple $\langle \lambda_i, t_i, \bar{t}_i, \kappa_i, \delta_i, \gamma_i, \bar{\gamma}_i, \bar{\bar{\gamma}}_i \rangle$. Here, $\lambda_i=(x_i, y_i, z_i)$ is the waypoint \emph{location} coordinates where the video data for that activity has to be captured by the drone, relative to the depot location $\ensuremath{\widehat{\lambda}}\xspace$. The \emph{starting and ending times} for performing the \emph{data capture task} are $t_i$ and $\bar{t}_i$. The \emph{compute requirements} for subsequently processing all of the captured data is $\kappa_i$ floating point operations.
Lastly, $\delta_i$ is the \emph{time deadline} by which the \emph{computation task} should be completed on the drone to derive on-time utility of processing, while $\gamma_i, \bar{\gamma}_i$, and $\bar{\bar{\gamma}}_i$ are the \emph{data capture}, \emph{on-time processing} and \emph{on-board processing utility} values that are gained for completing the activity. These are described in the next sub-section.
The computation may be performed incrementally on subsets of the video data, as soon as they are captured. This is common for analytics over constrained resources~\cite{bianco2018benchmark}.
Specifically, for an activity $\alpha_{i}$, the data captured during $[t_{i}, \bar{t}_{i}]$ is divided into \emph{batches} of a fixed duration $\beta$, with the sequence of batches given by $B_{i} = (b_{i}^1, \ldots, b_{i}^{q_i})$, where $q_i = |B_i| = \big\lceil \frac{\bar{t}_{i} - t_{i}}{\beta} \big\rceil$. The computational cost to process each batch is $\kappa_{i}^k = \frac{\kappa_{i}}{q_i}$ floating-point operations, and is constant for all batches of an activity.
So, the \emph{processing time} for the batch, given the processing speed $\pi$ for a drone, is $\rho_{i}^k = \big\lceil\kappa_{i}^k \cdot \frac{1}{\pi}\big\rceil$; for simplicity, we discretize all time-units into integers.
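The batching arithmetic above translates directly into code. A minimal sketch with hypothetical names, computing $q_i$, $\kappa_i^k$ and $\rho_i^k$ from the capture window, batch duration $\beta$, compute requirement $\kappa_i$ and processing speed $\pi$:

```python
import math

def batches(t_start, t_end, beta, kappa, pi):
    """Split an activity's capture window into fixed-duration batches.

    Returns the batch count q, the per-batch compute cost kappa_k
    (FLOPs), and the integer per-batch processing time rho_k, with
    time units discretized to integers as in the model."""
    q = math.ceil((t_end - t_start) / beta)   # q_i = ceil((t_bar - t) / beta)
    kappa_k = kappa / q                       # kappa_i^k = kappa_i / q_i
    rho_k = math.ceil(kappa_k / pi)           # rho_i^k = ceil(kappa_i^k / pi)
    return q, kappa_k, rho_k
```

For instance, a $300\unit{s}$ capture window with $\beta = 60\unit{s}$ yields $q_i = 5$ batches, each costing one fifth of $\kappa_i$.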
We make some simplifying assumptions. Only one batch may be executed at a time on-board a drone and it should run to completion before scheduling another. There is no concurrency, pre-emption or check-pointing. The data capture for an activity's batch may overlap with the computation of a previous batch of the same or a different activity.
All batches for a single activity should be executed in sequence, i.e., complete processing $b_{i}^{k}$ before processing $b_{i}^{k+1}$. Once a batch is processed, its compact results are immediately and deterministically communicated to the GS.
\subsection{Utility Model}
The primary goal of the drone is to capture videos at the various activity locations for the specified duration. This is a \emph{necessary} condition for an activity to be successful. We define this as the \emph{data capture utility ($\gamma_{i}$)} accrued by a drone for an activity $\alpha_{i}$.
The secondary goal is to opportunistically process the captured data using the on-board computing on the drone. Here, we have two scenarios. Some activities may not be time sensitive, and performing on-board computing is just to reduce the costs for offline computing. Here, processing the data captured by an activity using the drone's computing resources will provide an \emph{on-board processing utility ($\bar{\gamma}_{i}$)}. Other activities may be time-sensitive and have a \emph{soft-deadline} $\delta_i$ for completing the processing. For these, if we process its captured data on the drone by this deadline, we receive an extra \emph{on-time processing utility ($\bar{\bar{\gamma}}_{i}$)}. The processing utilities accrue \emph{pro rata}, for each batch of the activity completed.
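The three utility components can be combined per activity as sketched below. The function name is hypothetical, and we make one labeled assumption: since data capture is the necessary condition for an activity to succeed, no processing utility accrues when the capture fails.

```python
def activity_utility(u, batches_done, batches_on_time, q,
                     gamma, gamma_bar, gamma_bbar):
    """Utility accrued for one activity.

    u in {0, 1} is the all-or-nothing data capture completion; the
    processing utilities accrue pro rata per completed batch, and the
    on-time bonus counts only batches finished before the deadline.
    Assumption: a failed capture (u = 0) yields no utility at all."""
    if u == 0:
        return 0.0
    return (gamma
            + gamma_bar * (batches_done / q)          # on-board, pro rata
            + gamma_bbar * (batches_on_time / q))     # on-time bonus
```

A fully captured and fully processed on-time activity thus earns $\gamma_i + \bar{\gamma}_i + \bar{\bar{\gamma}}_i$.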
\section{Problem Formulation}\label{sec:prob-def}
The Mission Scheduling Problem (\prob) is summarized as: \emph{Given a UAV depot in a city with a fleet of captive drones, and a set of observation and computing activities to be performed at locations in the city, each within a given time window and with associated utilities, the goal is to co-schedule the drones onto mission routes and the computation to the drones, within the energy and compute constraints of the drones, such that the total utility achieved is maximized.}
It is formalized below.
\subsection{Mission Scheduling Problem (\prob)}
A UAV fleet operator receives and queues activities. Periodically, a mission schedule is planned to serve some or all the activities using the whole fleet to maximize the utility. There is a fixed cost for operating the captive fleet that we ignore.
Multiple activities can be assigned to the same drone $d_j$ as part of the drone's \emph{mission},
and the same drone $d_j$ can perform multiple \emph{trips} from the depot for a mission.
The \emph{mission activities} for the $r^{th}$ trip of a drone $d_j$ is the ordered sequence $A^r_j = ( \alpha^r_{j_1}, \ldots, \alpha^r_{j_n} ) \subseteq A$ where $\alpha^r_{j_x} \in A$, $j_n \le n$, and no activity appears twice within a mission. Further, we have $\alpha^r_{j_x} \prec \alpha^r_{j_{x+1}}$, i.e., the observation start and end times of an activity in the mission sequence fully precede those of the next activity in it, $\bar{t}^r_{j_{x}} \le t^r_{j_{x+1}} $. Also, $A^x_j \cap A^y_k = \varnothing~\forall j,k,x,y$ to ensure that an activity is mapped to just one drone. Depending on the feasibility and utility, some activities may not be part of any mission and are dropped, i.e., $\sum_{j} \sum_{r} |A^r_j| \leq n$.
The \emph{route} for the $r^{th}$ trip of drone $d_j$ is given by $R^r_j = ( \ensuremath{\widehat{\lambda}}\xspace, \lambda^r_{j_1}, \ldots, \lambda^r_{j_n}, \ensuremath{\widehat{\lambda}}\xspace )$, where the starting and ending waypoints of the drone are the depot location $\ensuremath{\widehat{\lambda}}\xspace$, and each intermediate location corresponds to the video capture location $\lambda^r_{j_k}$ for the activity $\alpha^r_{j_k}$ in the mission sequence. For uniformity, we denote the first and the last depot location in the route as $\lambda^r_{j_0}$ and $\lambda^r_{j_{n+1}}$, respectively.
Clearly, $|R^r_j| = j_n + 2$.
A drone $d_j$, given the $r^{th}$ trip of its route $R^r_j$, starts at the depot, visits each waypoint in the sequence and returns to the depot, where it may instantly get a fresh battery and start the $(r+1)^{th}$ route.
Let drone $d_j$ leave a waypoint location in its route, $\lambda^r_{j_i}$, at \emph{departure time} $\tau^r_{j_i}$ and reach the next waypoint location, $\lambda^r_{j_{i+1}}$, at \emph{arrival time} $\bar{\tau}^r_{j_{i+1}}$.
Let the function $\mathcal{F}(\lambda_p, \lambda_q)$ give the \emph{flying time} between $\lambda_p$ and $\lambda_q$. Since the drone has a constant flying speed, we have $\bar{\tau}^r_{j_{i+1}} = \tau^r_{j_i} + \mathcal{F}(\lambda^r_{j_i}, \lambda^r_{j_{i+1}})$.
The drone must hover at each waypoint $\lambda^r_{j_i}$ between $t^r_{j_i}$ and $\bar{t}^r_{j_i}$ while recording the video, and it departs the waypoint after this, i.e., $\tau^r_{j_i} = \bar{t}^r_{j_i}$. If the drone arrives at this waypoint at time $\bar{\tau}^r_{j_i}$, i.e., before the observation start time $t^r_{j_i}$, it \emph{hovers} here for a duration of $t^r_{j_i} - \bar{\tau}^r_{j_i}$, and then continues hovering during the activity's video capture.
If a drone arrives at $\lambda^r_{j_i}$ after $t^r_j$, it is invalid since the video capture for the activity cannot be conducted for the whole duration. So, $\bar{\tau}^r_{j_i} \le t^r_{j_i} \le \tau^r_{j_i}$.
Also, since the deadline for on-time computation over the captured data is $\delta^r_{j_i}$, we require $\delta^r_{j_i} \ge \bar{t}^r_{j_i}$.
Once the drone finishes capturing video for the last activity in its $r^{th}$ trip, it returns to the depot location at time $\bar{\tau}^r_{j_{n+1}} = \tau^r_{j_n} + \mathcal{F}(\lambda^r_{j_n}, \widehat{\lambda})$.
Hence, the \emph{total flying time} for a drone $d_j$ for its $r^{th}$ trip is:
\[ f^r_j = \sum_{i=0}^{n} (\bar{\tau}^r_{j_{i+1}} - \tau^r_{j_i}) \]
\noindent
and the \emph{total hover time} for the drone on that trip is:
\[ h^r_j = \sum_{i=1}^{n} (t^r_{j_i} - \bar{\tau}^r_{j_i}) + \sum_{i=1}^{n} (\bar{t}^r_{j_i} - t^r_{j_i}) = \sum_{i=1}^{n} (\bar{t}^r_{j_i} - \bar{\tau}^r_{j_i}) \]
which includes hovering due to early arrival at a waypoint, and hovering during the data capture.
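The trip timing above can be sketched in Python. This is a minimal illustration: the 2-D waypoint coordinates, the Euclidean distance model, and the $4 \unit{m/s}$ speed are assumptions for the example, not part of the formal model.

```python
import math

def flying_time(p, q, speed=4.0):
    # F(lambda_p, lambda_q): constant-speed flight between 2-D points (m, m/s).
    return math.dist(p, q) / speed

def simulate_trip(depot, waypoints, windows, speed=4.0):
    """Walk one trip: leave the depot, hover through each observation window
    [t, t_bar), and return. Gives (total fly time f, total hover time h),
    or None if the drone would arrive after an observation start time."""
    total_fly = total_hover = 0.0
    pos, clock = depot, 0.0
    for loc, (t, t_bar) in zip(waypoints, windows):
        f = flying_time(pos, loc, speed)
        arrive = clock + f
        if arrive > t:                     # late arrival: invalid assignment
            return None
        total_fly += f
        total_hover += t_bar - arrive      # early-arrival hover + capture hover
        pos, clock = loc, t_bar            # departure time tau = t_bar
    total_fly += flying_time(pos, depot, speed)
    return total_fly, total_hover
```

The hover accumulation mirrors the telescoped sum $\sum_{i} (\bar{t}^r_{j_i} - \bar{\tau}^r_{j_i})$ above.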
Let the scheduler assign the \emph{time slot} $[\theta_{j_i}^{k}, \bar{\theta}_{j_i}^{k})$ for executing a batch $b_{j_i}^{k}$ of activity $\alpha_{j_i}$ on drone $d_j$, where $\bar{\theta}_{j_i}^{k} = \theta_{j_i}^{k} + \rho_{i}^{k}$, based on the batch execution time.
We define a completion function for each activity $\alpha_{j_i}$, for the three utility values:
\begin{itemize}
\item The \emph{data capture completion} $u_{j_i} \in \{0, 1\}$ has a value of $1$ if the drone hovers at location $\lambda_{j_i}$ for the entire period from $t_{j_i}$ to $\bar{t}_{j_i}$, and is $0$ otherwise.
\item The \emph{on-board completion} $0.0 \le \bar{u}_{j_i} \le 1.0$ indicates the fraction of batches of that activity that are completed on-board the drone.
Let $\bar{\mu}^k_i=1$ if the batch $b^k_i$ of activity $\alpha_i$ is completed on-board,
and $\bar{\mu}^k_i=0$ if it is not completed on-board the drone. Then, $\bar{u}_{j_i} = \frac{\sum_k \bar{\mu}^k_i}{q_i}$.
\item The \emph{on-time completion} $0.0 \le \bar{\bar{u}}_{j_i} \le 1.0$ gives the fraction of batches of that activity that are fully completed within the deadline.
As before, let $\bar{\bar{\mu}}^k_i=1$ if the batch $b^k_i$ of activity $\alpha_i$ is completed
on-time, i.e., $\bar{\theta}_{i}^{k} \leq \delta_i$, and $\bar{\bar{\mu}}^k_i=0$ otherwise. So, $\bar{\bar{u}}_{j_i} = \frac{\sum_k \bar{\bar{\mu}}^k_i}{q_i}$.
\end{itemize}
The \emph{total utility} for an activity $\alpha_i$ is
$U_i = u_{i} \gamma_i + \bar{u}_{i} \bar{\gamma}_i + \bar{\bar{u}}_{i} \bar{\bar{\gamma}}_i$,
and the \emph{total computation time} of batches on a drone $d_j$ is:
\[c_j = \sum_{\alpha_i \in A} \sum_{k} {(\bar{\mu}^k_{i} + \bar{\bar{\mu}}^k_{i}) \cdot \rho^k_i} \]
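As a sanity check, the per-activity utility bookkeeping can be written out directly; the flag lists and weights below are illustrative inputs, not values from the model.

```python
def activity_utility(captured, onboard_flags, ontime_flags,
                     gamma, gamma_ob, gamma_ot):
    """U_i = u * gamma_i + u_bar * gamma_bar_i + u_bbar * gamma_bbar_i.

    captured: data-capture completion (bool); onboard_flags / ontime_flags:
    per-batch 0/1 indicators (mu_bar, mu_bbar), each of length q_i."""
    q = len(onboard_flags)
    u = 1 if captured else 0
    u_ob = sum(onboard_flags) / q        # fraction completed on-board
    u_ot = sum(ontime_flags) / q         # fraction completed on-time
    return u * gamma + u_ob * gamma_ob + u_ot * gamma_ot
```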
\subsection{Optimization of \prob}
Based on these, the \textbf{objective} of the optimization is $\arg \max \sum_{\alpha_i \in A} {U_i}$, i.e., assign drones to activity waypoints and activity batches to the drones' computing slots to maximize the utility from data capture, on-board and on-time computation.
These are subject to the following constraints on the execution slot assignments for a batch on a drone:
\[ (t_{j_i} + k \cdot \beta) \leq \theta_{j_i}^{k} ~\qquad~ \bar{\theta}_{j_i}^{k} \leq \theta_{j_i}^{k+1} ~\qquad~ \bar{\theta}_{i}^{k} \leq \bar{\tau}_{j_{n+1}}\]
i.e., the data capture for a duration of $\beta$ for the $k^{th}$ batch of the activity is completed before the execution slot of the batch starts; the batches for an activity are executed in sequence; and the execution completes before the drone lands.
Also, there can only be one batch executing at a time on a drone. So $\forall [\theta_{j_p}^{x}, \bar{\theta}_{j_p}^{x})$ and $[\theta_{j_q}^{y}, \bar{\theta}_{j_q}^{y})$ slots assigned to batches $b_p^x$ and $b_q^y$ on drone $d_j$, we have $[\theta_{j_p}^{x}, \bar{\theta}_{j_p}^{x}) \cap [\theta_{j_q}^{y}, \bar{\theta}_{j_q}^{y}) = \varnothing$.
Lastly, the \emph{energy expended} by drone $d_j$ on the $r^{th}$ trip, to fly, hover and compute, should be within its battery capacity:
\[
E^r_j = f^r_j \epsilon^f + h^r_j \epsilon^h + c^r_j \epsilon^c \le E
\]
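The battery check is simple arithmetic. A sketch, using the per-second energy rates from our drone benchmarks reported later (a trip is viable only when its flying, hovering and computing energy fits the capacity $E$):

```python
def trip_energy(fly_s, hover_s, compute_s,
                eps_f=750.0, eps_h=700.0, eps_c=20.0):
    # E^r_j = f * eps^f + h * eps^h + c * eps^c, in joules (J/s rates).
    return fly_s * eps_f + hover_s * eps_h + compute_s * eps_c

def trip_feasible(fly_s, hover_s, compute_s, capacity_j=1_350_000.0):
    # Battery constraint: E^r_j <= E (default capacity 1350 kJ).
    return trip_energy(fly_s, hover_s, compute_s) <= capacity_j
```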
\noindent{\em Model Applicability:}
Our novel model can be abstracted to describe diverse applications.
In \emph{entity localization}~\cite{de2015board}, $\bar{\gamma}_{i}=0$ and $\bar{\bar{\gamma}}_{i}>0$ captures the importance of an entity being tracked.
In \emph{traffic monitoring}~\cite{kanistras2013survey}, it is useful to have timely insights, so $\bar{\gamma}_{i}$ and $\bar{\bar{\gamma}}_{i}$ are tuned appropriately.
In \emph{construction survey}~\cite{george2019towards} there are no strict time deadlines, so $\bar{\bar{\gamma}}_{i}=0$.
\begin{table*}[t]
\centering
\caption{Constraints for \algopt MILP formulation.}
\label{tab:constraints}
\begin{tabular}{p{0.15cm}|p{9.5cm}|p{7cm}}
\toprule
\bf C. & \bf Expression & \bf Meaning \\
\midrule
$1$ & $\sum_{k \in \mathcal{D}} \sum_{l \in \mathcal{R}} \sum_{j\in \overrightarrow{i}} x_{ij}^{kl} \leq 1,
\qquad \forall i \in \mathcal{V}'$ & The waypoint for an activity $\alpha_i$ is visited only once.\\
$2$ & $\sum_{j \in \overrightarrow{0}} x_{0j}^{kl} - \sum_{j \in \overleftarrow{0}} x_{j0}^{kl} = 0,
\qquad \forall k \in \mathcal{D}, l \in \mathcal{R}$ & A drone trip $l$ starting from the depot must also end there.\\
$3$ & $\sum_{j \in \overrightarrow{0}} x_{0j}^{kl} = 1 \iff \sum_{j \in \overrightarrow{i}} x_{ij}^{kl} = 1,
\qquad \forall i \in \mathcal{V}', k \in \mathcal{D}, l \in \mathcal{R}$ & A drone $k$ must visit at least one waypoint on each trip $l$.\\
$4$ & $\sum_{i \in \overleftarrow{j}} x_{ij}^{kl} - \sum_{i \in \overrightarrow{j}} x_{ji}^{kl} = 0,
\qquad \forall k \in \mathcal{D}, j \in \mathcal{V}', l \in \mathcal{R}$ & A drone $k$ visiting waypoint $j$ must also fly out from there.\\
$5$ & $\left(t_j - \mathcal{F}_{0j}\right) \cdot \sum_{k \in \mathcal{D}}\sum_{l \in \mathcal{R}} x_{0j}^{kl} \ge 0,
\qquad \forall j \in \mathcal{V}'$ & Any drone flying to waypoint $j$ from the depot must reach before its observation start time $t_j$.\\
$6$ & $ (t_j - \bar{t}_i - \mathcal{F}_{ij}) \cdot \sum_{k \in \mathcal{D}}\sum_{l \in \mathcal{R}} x_{ij}^{kl} \ge 0,
\qquad \forall i \in \mathcal{V}', j \in \overrightarrow{i}$ & Any drone flying to waypoint $j$ from $i$ must reach before its observation start time $t_j$.\\
${7}$ & $\bar{\tau}^l_{k_{n+1}} = \sum_{i \in \mathcal{V}'} x_{i0}^{kl} \cdot (\bar{t}_i + \mathcal{F}_{i0}),
\qquad \forall k \in \mathcal{D}, l \in \mathcal{R}$ & Decides the landing time of drone $k$ at the depot after trip $l$.\\
${8}$ & $\bar{\tau}^l_{k_{n+1}} \le \tau_{\max},
\quad \forall k \in \mathcal{D}, l \in \mathcal{R}$ & Depot landing times for all trips are within the maximum time.\\
\hline
$9$ & $t_i + (g + 1) \cdot \beta \le \theta^g_i,
\qquad \forall i \in \mathcal{V}', g \in \mathcal{B}_i$ & Batch $g$ of activity $\alpha_i$ must be observed before it is processed. \\
${10}$ & $\bar{\theta}^g_i < \theta^{g+1}_i,
\qquad \forall i \in \mathcal{V}', g \in \mathcal{B}_i$ & Processing of batch $g$ of activity $\alpha_i$ must precede batch $g+1$. \\
${11}$ & $\sum_{j \in \overrightarrow{i}} x_{ij}^{kl} + \sum_{b \in \overrightarrow{a}} x_{ab}^{kl} - 1 \leq w^{gh}_{ia} + w^{hg}_{ai},
\qquad \forall i,a \in \mathcal{V}', i < a, g \in \mathcal{B}_i, h \in \mathcal{B}_a, k \in \mathcal{D}, l \in \mathcal{R}$ & \multirow{2}{7cm}{Compute time slots of two batches $g$ and $h$ from activities $\alpha_i$ and $\alpha_a$ on the same drone $k$ and trip $l$ should not overlap \cite{manne1960job}.} \\
${12}$ & $\bar{\theta}^g_i - \theta^h_a \leq M \cdot (1 - w^{gh}_{ia}),
\qquad \forall i,a \in \mathcal{V}, i \neq a, g \in \mathcal{B}_i, h \in \mathcal{B}_a $ & \\
${13}$ & $y_{ig}^{kl} = 1 \Rightarrow \bar{\theta}^g_i + M \left(1-\sum_{j \in \overrightarrow{i}} x_{ij}^{kl}\right) \le \delta_i, ~~~~~\forall i \in \mathcal{V}', g \in \mathcal{B}_i, k \in \mathcal{D}, l \in \mathcal{R}$ & Decision variable for batches that complete before their deadline.\\
${14}$ & $z_{ig}^{kl} = 1 \Rightarrow \bar{\theta}^g_i + M\left(1 - \sum_{j \in \overrightarrow{i}} x_{ij}^{kl}\right) \le \bar{\tau}^l_{k_{n+1}},
\forall i \in \mathcal{V}', g \in \mathcal{B}_i, k \in \mathcal{D}, l \in \mathcal{R}$ & Decision variable for batches that complete before landing.\\
\hline
${15}$ & $\sum_{i \in \mathcal{V}} \Big( \sum_{j \in \overrightarrow{i}} \left( x_{ij}^{kl} \cdot \mathcal{F}_{ij} \cdot \epsilon^f \right) ~+~ \allowbreak
\sum_{g \in \mathcal{B}_i} \left(z_{ik}^{lg} \cdot \kappa^g_i \cdot \epsilon^c \right) ~+~ \allowbreak
{\qquad \sum_{j \in \overrightarrow{i}} \left( x_{ij}^{kl} \cdot (\bar{t}_j \!-\! (\bar{t}_i + \mathcal{F}_{ij})) \cdot \epsilon^h \right)} \Big)
\le E,
\quad \forall k \in \mathcal{D}, l \in \mathcal{R}$ & Sum of energy consumed for flying, hovering and computing on trip $l$ of drone $k$ should be within the battery capacity. \\
\bottomrule
\end{tabular}
\end{table*}
\section{Optimal Solution for \prob}
\label{sec:algorithms:opt}
In this section, we prove that \prob is NP-hard, and we define an optimal, but computationally slow, algorithm called \textsc{Optimal Mission Scheduling}\xspace (\algopt) based on MILP.
\subsection{NP-hardness of \prob}
As discussed earlier, the \prob combines elements of the VRP and the JSP in assigning routes and batches to drones, for maximizing the overall utility, subject to energy constraints.
\begin{theorem}
\prob is NP-hard.
\end{theorem}
\begin{proof}
The VRP is NP-hard~\cite{lenstra1981complexity}.
In addition, \prob considers multiple trips, time windows, energy constraints, and utilities.
The VRP variant with multiple-trips (MTVRP), which considers a maximum travel time horizon $T_{h}$, is NP-hard. Any instance of VRP can be reduced in polynomial time to MTVRP by fixing the number of vehicles to the number of waypoints, $m = n$, and setting the time horizon $T_{h} = \sum_{e \in \mathcal{E}} \mathcal{F}(e)$, where $\mathcal{E}$ is the set of edges and $\mathcal{F}(e)$ is the flying time for traversing an edge~\cite{olivera2007adaptive}, and limiting the number of trips to one.
The VRP variant with time-windows (TWVRP), which limits the start and end time for visiting a vertex, $[t_i, \bar{t}_i)$, is NP-hard. Any instance of VRP can be reduced in polynomial time to TWVRP by just setting $t_i = 0$ and $\bar{t}_i = + \infty$~\cite{toth2002vehicle}.
Clearly, a VRP variant with energy-constrained vehicles is still NP-hard, by just relaxing those constraints to match VRP.
In the above VRP variants, the goal is only to minimize the costs. But \prob aims at maximizing the utility while bounding the energy and compute budget.
In literature, the VRP variant with profits (PVRP) is NP-hard~\cite{cattaruzza2016vehicle} since any instance of MTVRP can be reduced in polynomial time to PVRP by just setting all vertices to have the same unit-profit.
Moreover, \prob has to deal with scheduling of batches for maximizing the profit.
The original JSP is NP-hard~\cite{graham1966bounds}. So, any variant which introduces constraints is again NP-hard by a simple reduction, by relaxing those constraints, to JSP.
As \prob is a variant of VRP and JSP, it is NP-hard too.
\end{proof}
\subsection{The \algopt Algorithm}
The \textsc{Optimal Mission Scheduling}\xspace (\algopt) algorithm offers an optimal solution to \prob
by modeling it as a multi-commodity flow problem (MCF), similar to~\cite{zmazek2006multiple, trotta2018uavs}.
We reformulate the \prob definition as an MILP formulation.
The paths in the city are modeled as a \emph{complete graph}, $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, over the vertex set $\mathcal{V} = \{0, 1, \ldots, n\}$, where vertex $0$ is the depot \ensuremath{\widehat{\lambda}}\xspace and vertices $1, \ldots, n$ are the activity waypoints.
Let $\overrightarrow{i}$ and $\overleftarrow{i}$ be the set of \emph{out-edges} and \emph{in-edges} of a vertex $i$, and $\mathcal{V}' = \mathcal{V} \setminus \{ 0 \}$ be the set of all waypoint vertices.
We enumerate the $m$ drones as $\mathcal{D} = \{1, \ldots, m\}$.
Let $\tau_{\max}$ be the maximum time for completing all the missions, and
$r_{\max}$ the maximum number of trips a drone can make. Let $\mathcal{R} = \{1, \ldots, r_{\max}\}$ be the set of possible trips.
Let $x_{ij}^{kl} \in \{0,1\}$ be a decision variable that equals $1$ if the drone $k \in \mathcal{D}$ in its trip $l \in \mathcal{R}$ traverses the edge $(i,j)$, and $0$ otherwise. If $x_{ij}^{kl} = 1$ for $i \in \mathcal{V}'$, then the waypoint for activity $\alpha_i$ was visited by drone $k$ on trip $l$.
Let $\mathcal{B}_i = \{0, \ldots, q_i\}$ be the set of batches of activity $\alpha_i$.
Let $w^{gh}_{ia}$ be a binary decision variable used to linearize the batch computation whose value is $1$ if batch $b^g_i$ is processed before $b^h_a$, and $0$ otherwise~\cite{manne1960job}.
Let $y_{ig}^{kl}$ be a decision variable that equals $1$ if the drone $k \in \mathcal{D}$ in trip $l \in \mathcal{R}$ processes the batch $g$ of activity $\alpha_i$ within its deadline $\delta_i$, and $0$ otherwise; and similarly, $z_{ig}^{kl}$ equals $1$ if the batch is processed before the drone completes the trip and lands, and $0$ otherwise.
Let the per batch utility for on-board completion be $\bar{\Gamma}_i = \frac{\bar{\gamma}_i}{q_i}$, and on-time completion be $\bar{\bar{\Gamma}}_i = \frac{\bar{\bar{\gamma}}_i}{q_i}$, for activity $\alpha_i$.
Finally, let $M$ be a sufficiently large constant.
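Constraints $11$ and $12$ in Table~\ref{tab:constraints} linearize the no-overlap requirement with Manne-style disjunctions~\cite{manne1960job}. A minimal sketch of the big-$M$ mechanism for a single pair of batch slots, using one ordering binary $w$ (a simplification of the paired $w^{gh}_{ia}, w^{hg}_{ai}$ variables):

```python
def no_overlap_bigM(theta_i, dur_i, theta_a, dur_a, w, M=10_000):
    """Big-M disjunction: w = 1 forces batch i to end before batch a starts,
    w = 0 forces the opposite order; a schedule is valid if both hold."""
    before = theta_i + dur_i - theta_a <= M * (1 - w)   # active when w = 1
    after = theta_a + dur_a - theta_i <= M * w          # active when w = 0
    return before and after
```

For genuinely overlapping slots, no choice of $w$ satisfies both inequalities, so the MILP must separate them.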
Using these, the MILP objective is:
{\small
\begin{align}
\max \sum_{k \in \mathcal{D}} \sum_{l \in \mathcal{R}} \sum_{i \in \mathcal{V}'} \bigg( \sum_{j \in \overrightarrow{i}} x_{ij}^{kl} \cdot \gamma_i + \sum_{g \in \mathcal{B}_i} \big( y_{ig}^{kl} \cdot \bar{\Gamma}_i + z_{ig}^{kl} \cdot \bar{\bar{\Gamma}}_i \big) \bigg) \label{eq:main}
\end{align}
}
\noindent subject to the constraints listed in Table~\ref{tab:constraints}.
\section{Heuristic Algorithms for \prob}
\label{sec:algorithms:appox}
Since \prob is NP-hard, \algopt is tractable only for small-sized inputs.
So, time-efficient but sub-optimal algorithms are necessary for larger-sized inputs.
In this section, we propose two heuristic algorithms, called \textsc{Job Scheduling Centric}\xspace (\algjsc) and \textsc{Vehicle Routing Centric}\xspace (\algvrc).
\subsection{The \algjsc Algorithm}
The \textsc{Job Scheduling Centric}\xspace (\algjsc) algorithm aims to find a near-optimal schedule for the batches, while ignoring routing optimizations that conserve energy.
\algjsc is split into two phases: \emph{clustering} and \emph{scheduling}.
\subsubsection{Clustering Phase}
First, we use the ST-DBSCAN algorithm~\cite{birant2007st} to efficiently find spatio-temporal clusters of activities.
It returns a set of clusters $\mathbb{C}$ such that for activities within a cluster $C_i \in \mathbb{C}$, certain spatial and temporal distance thresholds are met.
Drones are then allocated to clusters depending on their availability.
For each cluster $C_i$, let $T_i^U = \max_{\alpha_j \in C_i}{(\bar{t}_j + \mathcal{F}(\lambda_j, \ensuremath{\widehat{\lambda}}\xspace))}$ be the upper bound for the \emph{latest landing time} for a drone servicing activities in $C_i$;
analogously, let $T_i^L = \min_{\alpha_j \in C_i}{(t_j-\mathcal{F}(\ensuremath{\widehat{\lambda}}\xspace, \lambda_j))}$
be the lower bound for the \emph{earliest take-off time}.
Then, all the temporal windows $[T_i^L, T_i^U]$ for each $C_i \in \mathbb{C}$ are sorted with respect to $T_i^L$.
Recalling that there are $m$ drones available at $\widehat{t}=0$, they are proportionally allocated to clusters depending on the current availability, which in turn depends on the temporal window.
So, $c_1 = \frac{m}{n}\cdot |C_1|$ drones are allocated to $C_1$ at time $T_1^L$ and released at time $T_1^U$; $c_2 = \frac{m-c_1}{n} \cdot |C_2|$ are allocated to $C_2$ from $T_2^L$ to $T_2^U$ (assuming the windows overlap, $T_2^L < T_1^U$, so the first $c_1$ drones are still busy), and so on.
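A hedged sketch of this proportional allocation. The integer rounding (floor, with at least one drone per non-empty cluster) is our assumption, since the text leaves rounding unspecified:

```python
def allocate_drones(m, cluster_sizes):
    """Allocate m drones over clusters sorted by window start T^L, assuming
    consecutive windows overlap so already-allocated drones are unavailable:
    cluster i gets (remaining drones * |C_i|) / n, where n = total activities."""
    n = sum(cluster_sizes)
    alloc, remaining = [], m
    for size in cluster_sizes:
        c = min(remaining, max(1, (remaining * size) // n))
        alloc.append(c)
        remaining -= c
    return alloc
```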
\subsubsection{Scheduling Phase}
Here, the activities are assigned to drones.
The \emph{feasibility} of assigning $\alpha_{i}$ to $d_j$ is tested by checking whether the drone has enough energy to fly to and hover at the waypoints of $A_j \cup \{\alpha_i\}$;
here, we ignore the batch processing energy.
If feasible, the drone can update its take-off and landing times accordingly, and then schedule the subset of batches $\widehat{B_i} \subseteq B_i$ within the energy requirements.
Assignments are done in two steps: \emph{default assignment}\xspace and \algtas.
\noindent {\textbf{Default Assignment.}}
For each $b_i^k \in \widehat{B_i}$, let $P_{b^k_i} = [t_i + k \beta, \delta_i)$ be the \emph{preferred interval}; $Q_{b^k_i} \subseteq P_{b^k_i}$ be the \emph{available preferred sub-intervals}, i.e., the set of periods where no other batch is scheduled; and $S_{b^k_i} = [\delta_i, \bar{\tau}_{j_{n+1}})$ be the \emph{schedulable interval}, which exceeds the deadline but completes on-board.
Clearly, $P_{b^k_i} \cap S_{b^k_i} = \varnothing$.
The \emph{default schedule} determines a suitable time-slot for $b_i^k$.
If $Q_{b^k_i} \neq \varnothing$, $b_i^k$ is \emph{first-fit} scheduled within intervals of $Q_{b^k_i}$; else, if $Q_{b^k_i} = \varnothing$, the same first-fit policy is applied over intervals of $S_{b^k_i}$.
If $b_i^k$ cannot be scheduled even in $S_{b^k_i}$, it remains unscheduled.
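The default assignment reduces to a first-fit search over interval lists. A sketch, where intervals are half-open $[start, end)$ pairs and the free lists $Q$ and $S$ are assumed precomputed:

```python
def first_fit(duration, free_intervals):
    """Return the start of the first free interval that fits `duration`,
    or None if no interval is large enough."""
    for start, end in free_intervals:
        if end - start >= duration:
            return start
    return None

def default_assignment(duration, Q, S):
    """Default schedule for one batch: first-fit in the available preferred
    sub-intervals Q, falling back to the post-deadline schedulable set S."""
    slot = first_fit(duration, Q)
    return slot if slot is not None else first_fit(duration, S)
```

A `None` result corresponds to the batch remaining unscheduled.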
\noindent {\textbf{Test and Swap Assignment.}}
If the \emph{default assignment}\xspace has batches that \emph{violate their deadline}, i.e., scheduled in $S$ but not in $P$, we use the \algtas to improve the schedule.
Let $P^+_i = \bigcup_i {P_{b^k_i}}$ be the union of the preferred intervals forming the \emph{total preferred interval} for an activity $\alpha_i$.
Each batch $b^k_i$ is tested for violating its deadline (note that if $b^k_i$ violates, the later batches of the same activity must also violate, since batches execute in sequence).
If it violates, then batches $b^h_j$ from other activities already scheduled in $P^+_i$ are identified and tested if they too violate their deadline.
If so, $b^h_j$ is moved to the next available slot in $S_{b^h_j}$, and its old time slot given to $b^k_i$.
If $b^h_j$ is in its \emph{preferred interval} but has more slots available in this interval, then $b^h_j$ is moved to another free slot in $P_{b^h_j}$ and $b^k_i$ assigned to the slot that is freed.
Otherwise, the current configuration contains no violations except for the current batch $b^k_i$, but all available slots are occupied.
So, the utility of $b^k_i$ is compared with that of another $b^h_j$ in $P^+_i$,
and the batch with the higher utility gets the slot.
\subsubsection{The Core of \algjsc}
The \algjsc algorithm works as follows (Algorithm~\ref{alg:algjsc}).
After the initial \emph{clustering phase}, each activity is tested for feasibility.
If feasible, the \emph{default assignment}\xspace is first evaluated in terms of total utility.
If it causes a deadline violation, the \algtas is performed, and the better schedule of the two is applied.
\begin{algorithm}[htbp]
$\mathbb{C} \gets $ \emph{clustering phase}\;\label{code:2-preproc}
\For {$C_k \in \mathbb{C}$} {
\For {$\alpha_i \in C_k$} {
\For {$d_j$ assigned to $C_k$} {
\If {$\alpha_i \cup A_j$ is feasible} {\label{code:2-feasibility}
apply best scheduling among \emph{default} and \algtas on $\widehat{B_i}$\;\label{code:2-best}
}
}
}
}
\caption{$\algjsc(A, D)$}
\label{alg:algjsc}
\end{algorithm}
\subsubsection{Time Complexity of \algjsc}
ST-DBSCAN's time complexity is $\mathcal{O}(n \log n)$ for $n$ waypoints.
Unlike $k$-means clustering, ST-DBSCAN automatically picks a suitable number of clusters,
$k$, with $\approx \frac{n}{k}$ waypoints each.
For $k$ times, we compute the min-max of sets of size $\frac{n}{k}$, sort the $k$ elements and finally make $\frac{n}{k}$ assignments.
So this drones-to-clusters allocation takes $\mathcal{O}(k \frac{n}{k} + k \log k + \frac{n}{k})$ time.
Hence, this \emph{clustering phase} takes $\mathcal{O}(n \log n)$ time.
For the \algtas, we maintain an interval tree for fast temporal operations.
If $l$ is the maximum number of batches to schedule per activity, building the tree costs $\mathcal{O}(\frac{nl}{k} \log(\frac{nl}{k}))$, while search, insertion and deletion cost $\mathcal{O}(\log(\frac{nl}{k}))$.
Finding free time slots makes a pass over the batches in $\mathcal{O}(\frac{nl}{k})$.
This is repeated for $l$ batches, to give an overall time complexity of $\mathcal{O}(\frac{nl}{k} \log(\frac{nl}{k}) + \frac{n}{k}l^2)$.
Also, the \emph{default assignment}\xspace relies on the same interval tree and has the same complexity as \algtas.
Finally, for each of the $k$ clusters and each activity in a cluster, two schedule assignments are computed for all the drones.
Thus, the time complexity of \algjsc is $\mathcal{O}(n \log n) + \mathcal{O}(k \frac{n}{k} m (\frac{nl}{k} \log(\frac{nl}{k}) + \frac{n}{k}l^2))$.
However, since the clustering can result in a single cluster ($k = 1$) and $m \rightarrow n$, the \emph{overall complexity} of \algjsc is $\mathcal{O}(n^3 l^2)$ in the worst case.
\subsection{The \algvrc Algorithm}
The \textsc{Vehicle Routing Centric}\xspace (\algvrc) algorithm aims to find near-optimal waypoint routing while initially ignoring efficient scheduling of the batch computation.
\algvrc is split into three phases: \emph{routing}, \emph{splitting}, and \emph{scheduling}.
\subsubsection{Routing Phase}
In this phase, \algvrc builds routes while satisfying the \emph{temporal constraint} for activities, i.e., for any two consecutive activities $(\alpha_i, \alpha_{i+1})$ in the route, $\bar{t}_i + \mathcal{F}(\lambda_i, \lambda_{i+1}) \le t_{i+1}$.
This is done using a modified version of the $k$-nearest neighbors ($k$\textsc{-nn}\xspace) algorithm, whose solution is then locally optimized using the \textsc{2-opt*}\xspace heuristic~\cite{potvin1995exchange}.
The modified $k$\textsc{-nn}\xspace works as follows:
Starting from \ensuremath{\widehat{\lambda}}\xspace, a route is iteratively built by selecting, from among the $k$ nearest waypoints that meet the \emph{temporal constraint}, the one, say $\lambda_1$, whose activity has the earliest \emph{observation start time}.
This process resumes from $\lambda_1$ to find $\lambda_2$, and so on until there is no feasible neighbor. \ensuremath{\widehat{\lambda}}\xspace is finally added to conclude the route.
This procedure is repeated to find other routes until all the possible waypoints are chosen.
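The greedy construction of one route can be sketched as follows; the Euclidean flying-time model, the $4 \unit{m/s}$ speed, and the activity tuples are illustrative assumptions:

```python
import math

def modified_knn_route(depot, activities, k=3, speed=4.0):
    """Greedy route builder: among the k nearest unvisited waypoints that
    still satisfy the temporal constraint, pick the one whose activity has
    the earliest observation start time; stop when none is feasible.

    activities: {name: (location, t_start, t_end)} -- names are hypothetical."""
    fly = lambda p, q: math.dist(p, q) / speed
    route, pos, clock, left = [], depot, 0.0, dict(activities)
    while True:
        feasible = [(name, v) for name, v in left.items()
                    if clock + fly(pos, v[0]) <= v[1]]    # reachable before t
        if not feasible:
            break
        feasible.sort(key=lambda nv: fly(pos, nv[1][0]))  # nearest first
        name, (loc, _, t_end) = min(feasible[:k], key=lambda nv: nv[1][1])
        route.append(name)
        pos, clock = loc, t_end                           # depart at t_bar
        del left[name]
    return route
```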
This initial set of routes is optimized to minimize the flying and hovering energy using \textsc{2-opt*}\xspace, which lets us find a local optimal solution from the given one~\cite{toth2002vehicle}.
However, routes found here may be infeasible for a drone to complete within its energy constraints.
\subsubsection{Splitting Phase}
Let $R_{i,j} = (\ensuremath{\widehat{\lambda}}\xspace, \lambda_i, \ldots, \lambda_j, \ensuremath{\widehat{\lambda}}\xspace)$ be an energy-infeasible route from the routing phase, which visits $\lambda_i$ and $\lambda_j$ as the first and last waypoints after leaving \ensuremath{\widehat{\lambda}}\xspace.
The goal is to find a suitable waypoint $\lambda_g$ for $i \le g < j$ such that by splitting $R_{i,j}$ at $\lambda_g$ and $\lambda_{g+1}$, we can find an energy-feasible route while also improving the overall utility and reducing scheduling conflicts for batches.
For each edge $(\lambda_g,\lambda_{g+1})$, we compute a \emph{split score} whose value sums up three components: \emph{energy score}, \emph{utility score}, and \emph{compute score}.
\noindent {\textbf{Energy score.}}
Let $E(a,b)$ be the cumulative flying and hovering energy required for some route $R_{a,b} \subseteq R_{i,j}$.
Here we sequentially partition the route $R_{i,j}$ into multiple \emph{viable trips} $R_{(i,k_{1}-1)}$, $R_{(k_{1},k_{2}-1)}$, \ldots, $R_{(k_{x},j)}$ such that each is a maximal trip and is energy-feasible, i.e., $E(k_{y},k_{y+1}-1) \leq E$ while $E(k_{y},k_{y+1}) > E$.
For each edge $(\lambda_g, \lambda_{g+1}) \in R_{(k_{y},k_{y+1}-1)}$, the \emph{energy score} is the ratio $\frac{E(k_{y},g)}{E} \leq 1$.
A high value indicates that a split at this edge improves the battery utilization.
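The sequential partition and per-edge energy score can be sketched as follows, assuming the per-leg energies (flying plus hovering up to each edge) are precomputed:

```python
def energy_scores(leg_energies, capacity):
    """Greedily pack leg energies into maximal viable trips; the score of
    edge (g, g+1) is E(k_y, g) / E, the fraction of the battery a trip
    would have used by that edge. Scores near 1 mark good split points."""
    scores, acc = [], 0.0
    for e in leg_energies:
        if acc + e > capacity:   # this edge would start a new viable trip
            acc = 0.0
        acc += e
        scores.append(acc / capacity)
    return scores
```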
\noindent \textbf{{Utility score.}}
Say $U(a,b)$ gives the cumulative data capture utility from visiting waypoints in a route $R_{a,b} \subseteq R_{i,j}$. Say edge $(\lambda_g, \lambda_{g+1}) \in R_{(k_{y},k_{y+1}-1)} \subseteq R_{i,j}$ is also part of a viable trip from above. Here, we find the data capture utility of a sub-route of $R_{i,j}$ that starts a new maximal viable trip at $\lambda_{g+1}$ and spans until $\lambda_{l}$, as $U(g,l)$. The utility score of edge $(\lambda_g, \lambda_{g+1})$ is the ratio between this new maximal viable trip and the original viable trip the edge was part of, $\frac{U(g,l)}{U(k_{y},k_{y+1}-1)}$.
A value $>1$ indicates that a split at this edge improves the utility relative to the earlier sequential partitioning of the route.
\noindent \textbf{{Compute score.}}
We first do a soft scheduling of the batches of all waypoints in $R_{i,j}$ using the \emph{first-fit} scheduling policy, mapping them to their \emph{preferred intervals}, which are assumed to be free. Say there are $|R_{i,j}|$ such batches.
Then, for each edge $(\lambda_g, \lambda_{g+1}) \in R_{i,j}$, we find the overlap count $O_{g}$ as the number of batches from $\alpha_g$ whose execution slot overlaps with batches from all other activities.
The overlap score for edge $(\lambda_g, \lambda_{g+1})$ is given as $\frac{O_{g}}{|R_{i,j}|}$.
If this value is higher, splitting the route at this point will avoid batches from having schedule conflicts in their preferred time slot.
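The overlap count behind the compute score can be computed directly on the soft schedule; the slot lists below are hypothetical:

```python
def overlap_counts(slots):
    """slots[i]: list of [start, end) execution slots for activity i's batches
    from the soft first-fit schedule. Returns, per activity, how many of its
    batches overlap a batch of any other activity (the O_g counts)."""
    def overlaps(a, b):
        return a[0] < b[1] and b[0] < a[1]
    counts = []
    for i, mine in enumerate(slots):
        others = [s for j, other in enumerate(slots) if j != i for s in other]
        counts.append(sum(any(overlaps(s, o) for o in others) for s in mine))
    return counts
```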
Once the three scores are assigned, the edge with the highest \emph{split score} is selected as the split-point to divide the route into two sub-routes.
If a sub-route meets the energy constraint, it is selected as a \emph{valid trip}.
If either or both of the sub-routes exceed the energy capacity, the splitting phase is recursively applied to that sub-route till all waypoints in the original route are part of some valid trip.
\subsubsection{Scheduling Phase}
Trips are then sorted in decreasing order of their total utility, and drones are allocated to trips depending on their temporal availability.
Once assigned to a trip, the drone's scheduling is done by comparing the \emph{default assignment}\xspace and the \algtas used in \algjsc.
\subsubsection{The Core of \algvrc}
The \algvrc algorithm works as follows (Algorithm~\ref{alg:algvrc}).
After the initial \emph{routing phase}, energy-infeasible routes are split into feasible ones in the \emph{splitting phase}, and then drones are allocated to them.
Finally, the \emph{scheduling phase} is applied to find the best schedule between the \emph{default assignment}\xspace and the \algtas.
\begin{algorithm}[htbp]
$\mathbb{R} \gets $ \emph{routing phase}\;
\For {$R_{ij} \in \mathbb{R}$} {
\For {$(\lambda_g, \lambda_{g+1}) \in R_{ij}, i \le g < j$} {
$s(g) \gets $ \emph{energy score} + \emph{utility score} + \emph{compute score}\;
}
}
$\mathbb{R}' \gets $ \emph{splitting phase} based on scores $s(i), 1 \le i \le n$\;
\For {$d_j$ assigned to $R_{ij} \in \mathbb{R}'$} {
apply best scheduling among \emph{default assignment}\xspace and \algtas on $R_{ij}$\;
}
\caption{$\algvrc(A, D)$}
\label{alg:algvrc}
\end{algorithm}
\subsubsection{Time Complexity of \algvrc}
In the \emph{routing phase}, the modified $k$\textsc{-nn}\xspace takes $\mathcal{O}(kn)$ time, with $n$ waypoints and $k$ neighbors. The \textsc{2-opt*}\xspace algorithm has time complexity $\mathcal{O}(n^4)$. Hence this phase overall costs $\mathcal{O}(n^4)$.
In the \emph{splitting phase}, calculating the energy score for a route with $n$ edges takes $\mathcal{O}(n)$ time, calculating the utility score takes $\mathcal{O}(n^2)$, and calculating the compute score takes $\mathcal{O}(n)$. With a recursion depth of up to $n-1$, the complexity of this phase is $\mathcal{O}(n^3)$.
Combining \emph{default assignment}\xspace and \algtas, \algvrc's \emph{overall complexity} is $\mathcal{O}(n^4)$ in the worst case.
\section{Performance Evaluation}\label{sec:evaluation}
\subsection{Experimental Setup}
The \algopt solution is implemented using IBM's CPLEX MILP solver v12~\cite{cplex2009v12}. It uses Python to wrap the objective and constraints, and invokes the parallel solver. Our \algjsc and \algvrc heuristics have a sequential implementation using native Python.
By default, these scheduling algorithms on our workloads run on an \emph{AWS c5n.4xlarge VM} with Intel Xeon Platinum 8124M CPU, $16$ cores, $3.0 \unit{GHz}$, and $42 \unit{GB}$ RAM.
\algopt runs on $16$ threads and the heuristics on $1$ thread.
We perform real-world benchmarks on flying, hovering, DNN computing and endurance, for a fleet of custom, commercial-grade drones. The X-wing quad-copter is designed with a top speed of $6 \unit{m/s}$ ($20 \unit{km/h}$), $<120 \unit{m}$ altitude, a $24000 \unit{mAh}$ Li-Ion battery and a payload capacity of $3 \unit{kg}$. It includes dual front and downward HD cameras, GPS and LiDAR Lite, and uses the Pixhawk2 flight controller. It also has an NVIDIA Jetson TX2 compute module with $4$-Core ARM64 CPU, $256$-core Pascal CUDA cores, $8 \unit{GB}$ RAM and $32 \unit{GB}$ eMMC storage.
The maximum flying time is $\approx 30 \unit{min}$ with a range of $3.5 \unit{km}$.
Based on our benchmarks, we use the following drone parameters in our analytical experiments.
\begin{center}
\small
\setlength\tabcolsep{2pt}
\begin{tabular}{c|c|c|c|c}
\toprule
$s$ & $\epsilon^f$ & $\epsilon^h$ & $\epsilon^c$ & $E$ \\%& Max.Flight & Max.Range\\
\midrule
$4 \unit{m/s}$ & $750 \unit{J/s}$ & $700 \unit{J/s}$ & $20 \unit{J/s}$ & $1350 \unit{kJ}$\\% & $\approx 30~mins$ & $3.5~km$ \\
\bottomrule
\end{tabular}
\end{center}
\begin{figure*}[t]
\centering
\subfloat[Utility per drone, RND]{
\includegraphics[width=0.42\textwidth]{figures/W1-util-per-drone}
\label{fig:exp:w1:util}
}
\subfloat[Utility per drone, DFS]{
\includegraphics[width=0.42\textwidth]{figures/W2-util-per-drone}
\label{fig:exp:w2:util}
}
\hfill
\subfloat[Alg. runtime, RND]{
\includegraphics[width=0.42\textwidth]{figures/W1-execution}
\label{fig:exp:w1:exec}
}
\subfloat[Alg. runtime, DFS]{
\includegraphics[width=0.42\textwidth]{figures/W2-execution}
\label{fig:exp:w2:exec}
}
\caption{\emph{Expected utility per drone} and \emph{algorithm runtime} of the three MSP algorithms, for the RND and DFS workloads on MNet. On the X axis, the number of drones (outer) and activities per drone (inner) increase. \algopt is solved on $16\times$ cores while \algjsc and \algvrc run on just $1$. DNF indicates \algopt did not finish.}
\label{fig:exp:util-exec}
\end{figure*}
\subsection{Workloads}
We evaluate the scheduling algorithms for two \emph{application workloads}: Random (RND) and Depth First Search (DFS). Both have a maximum mission time of $4 \unit{h}$ over multiple trips. In the \emph{RND workload}, $n$ waypoints are randomly placed within a $3.5 \unit{km}$ radius from the depot, with a random activity start time within $(0,240] \unit{mins}$. This is an adversarial scenario with no spatio-temporal locality. The \emph{DFS workload} is motivated by realistic traffic monitoring needs. We perform a depth-first traversal over a $3.5 \unit{km}$ radius of our local city's road network, centered at the depot. With probability $\mathcal{P}=\frac{1}{10}$, we pick a visited vertex as an activity waypoint; $\mathcal{P}$ grows by $\frac{1}{10}$ for every vertex that is not selected, until $n$ waypoints are chosen. The start times of these activities grow monotonically.
The table below shows the activity and drone \emph{scenarios} for each workload. These are based on reasonable operational assumptions and schedule feasibility. We vary the data capture time ($\bar{t}-t$); batching interval ($\beta$); batch execution time on $2$ DNNs ($\rho_M$, $\rho_R$)\footnote{We run \emph{SSD Mobilenet v2 DNN} (MNet, $\rho_M$)~\cite{Sandler_2018_CVPR}, popular for analyzing drone footage~\cite{wang2018bandwidth}, and \emph{FCN Resnet18 DNN} (RNet, $\rho_R$)~\cite{long2015fully} on the TX2.}; deadline ($\delta$); utility ($\gamma$); and number of drones ($m$). The \emph{load factor} $x$ decides the count of activities per mission, $n = x \cdot m$. Drones take at most $r_{\max}=\frac{n}{m}$ trips.
\begin{center}
\small
\setlength\tabcolsep{2pt}
\begin{tabular}{c|c|c|c|c|c|c|c|c}
\toprule
$\bar{t}-t$ & $\beta$ & $\rho_M$ & $\rho_R$ & $\delta$ & $\gamma$ & $m$ & $x$ & $n= x \cdot m$ \\
\midrule
$[1,5]$ & $60\unit{s}$ & $11\unit{s}$ & $98\unit{s}$ &$120\unit{s}$ & $[1,5]$ & $5,10,20,50$ & $2, 4, 8$ & $10,\ldots,200$\\
\bottomrule
\end{tabular}
\end{center}
\noindent For brevity, RNet is only run on DFS. $10$ \emph{instances} of each of these $33$ viable workload scenarios are created. We run \algopt, \algjsc and \algvrc on each to return a schedule and its expected utility.
\subsection{Experimental Results}
Figures~\ref{fig:exp:w1:util},~\ref{fig:exp:w2:util} and~\ref{fig:exp:w2r:util} show the \emph{expected utility per drone} for the schedules from the $3$ algorithms, for different drone counts and activity load factors. Similarly, Figures~\ref{fig:exp:w1:exec},~\ref{fig:exp:w2:exec}, and~\ref{fig:exp:w2r:exec} show the \emph{algorithm execution time} (log scale, in seconds) for them. Each bar is averaged over $10$ instances, with standard deviations shown as whiskers. The per-drone utility lets us uniformly compare the schedules for different workload scenarios. The \emph{total utility} -- the MSP objective function -- is the product of the per-drone utility shown and the drone count. \algopt \emph{did not finish} (DNF) within $7 \unit{h}$ for scenarios with $40$ or more activities.
\begin{figure}[t]
\centering
\subfloat[Utility per drone]{
\includegraphics[width=0.42\textwidth]{figures/W2-DNN2-util-per-drone}
\label{fig:exp:w2r:util}
}%
\hfill
\subfloat[Alg. runtime]{
\includegraphics[width=0.42\textwidth]{figures/W2-DNN2-execution}
\label{fig:exp:w2r:exec}
}
\caption{\emph{Expected utility per drone} and \emph{algorithm runtime} for RNet on DFS.}
\label{fig:exp:util-exec:w2r}
\end{figure}
\subsubsection{\algopt offers the highest utility, if it completes executing, followed by \algvrc, and \algjsc}
Specifically, for the $5$-drone scenarios for which \algopt completes, it offers an average of $42\%$ higher expected utility than \algjsc. \algvrc gives $26\%$ more average utility than \algjsc for these scenarios, and $75\%$ more for all scenarios they run for.
This is as expected for \algopt. Since a bulk of the energy is consumed for flying and hovering, \algvrc, which starts with an energy-efficient route, schedules more activities within the time and energy budget, as compared to \algjsc.
This is evidenced by Figure~\ref{fig:exp:act}, which reports, for MNet, the \emph{average fraction of submitted activities} that are successfully scheduled by each algorithm. The remaining activities are not part of any trip. Across all workloads, \algjsc schedules only $60\%$ of activities, \algvrc $90\%$, and \algopt $98\%$. So \algopt and \algvrc are better at packing routes and analytics on the UAVs.
\algopt and \algvrc offer more utility for the DFS workload than for RND since $\geq 96\%$ of DFS activities are scheduled. They exploit the spatial and temporal locality of activities in DFS.
\subsubsection{The average flying time per activity in each trip is higher for \algvrc compared to \algjsc} Interestingly, at $728 \unit{s}$ vs. $688 \unit{s}$ per activity, the route-efficient schedules from \algvrc manage to fly to waypoints farther away from the depot and/or from each other, within the energy constraints, when compared to the schedules from \algjsc.
As a result, it schedules a larger fraction of the activities to gain a higher expected utility.
\begin{figure}[tbp]
\centering
\subfloat[RND]{
\includegraphics[width=0.42\textwidth]{W1-app-sch.pdf}
\label{fig:exp:w1:act}
}
\hfill
\subfloat[DFS]{
\includegraphics[width=0.42\textwidth]{W2-app-sch.pdf}
\label{fig:exp:w2:act}
}
\caption{Fraction (\%) of submitted activities scheduled per mission for MNet.}
\label{fig:exp:act}
\end{figure}
\subsubsection{The execution times for \algvrc and \algjsc match their time complexity}
We fit the execution times of \algjsc over the $300+$ workload instances to a \emph{cubic function} in $n$, the number of activities, to match its time complexity of $\mathcal{O}(n^3 \cdot l^2)$; since in our runs $l \in [1,5]$ and $l \leq n$, we omit the $l$ term from the fit.
Similarly, we fit a \emph{degree-4 polynomial} for \algvrc in $n$.
The \emph{correlation coefficients} for these two fits are high, at $0.86$ and $0.99$ respectively. So, the real-world execution times of our scheduling heuristics match our complexity analysis.
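A minimal version of this sanity check can be written as follows. It is a simplification of the polynomial fit described above: instead of fitting a full degree-$d$ polynomial, it correlates the measured runtimes directly with $n^d$; the helper names are ours.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def runtime_fit_quality(activity_counts, runtimes, degree):
    """Correlate measured runtimes with n**degree: values near 1 indicate
    that the measurements follow the predicted asymptotic order."""
    return pearson([n ** degree for n in activity_counts], runtimes)
```

For example, runtimes that grow exactly cubically in the activity count yield a correlation of $1$, while measurement noise lowers it only slightly.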
\subsubsection{\algopt is the slowest to execute, followed by \algvrc and \algjsc}
Despite \algopt using $16\times$ more cores than \algjsc and \algvrc, its average execution times are $>100 \unit{s}$ for just $20$ activities. The largest scenario to complete in reasonable time is $40$ activities on $5$ drones, which took $7 \unit{h}$ on average. This is consistent with the NP-hard nature of MSP. As our mission window is $4 \unit{h}$, any algorithm slower than that is not useful.
\algjsc is fast, and on average completes within $1 \unit{s}$ for up to $80$ activities. Even for the largest scenario with $50$ drones and $200$ activities, it takes only $90 \unit{s}$ for RND and $112 \unit{s}$ for DFS.
\algvrc is slower but feasible for a larger range of activities than \algopt. It completes within $3 \unit{min}$ for up to $100$ activities. But, it takes $\approx 45 \unit{min}$ to schedule $200$ activities on $50$ drones.
\subsubsection{The choice of a good scheduling algorithm depends on the fleet size and activity count}
From these results, we can conclude that \algopt is well suited for small drone fleets with about $20$ activities scheduled per mission. This completes within minutes and offers about $20\%$ better utility than \algvrc.
\algvrc offers a good trade-off between utility and execution time for medium workloads with $100$ activities and $50$ drones. This too completes within minutes and gives on average about $75\%$ better utility than \algjsc and schedules over $80\%$ of all submitted activities.
For large fleets with $200$ or more activities being scheduled, \algjsc is well suited with fast solutions but has low utility and leaves a majority of activities unscheduled.
\subsubsection{A higher load factor increases the utility, but causes fewer \% of activities to be scheduled}
As $x$ increases, we see that the utility derived increases. This is partly due to adequate energy and time being available for the drones to complete more activities in multiple trips.
E.g., for the 5-drone case, we use load factors of $x=\{2,4,8,16,32\}$ for \algjsc and \algvrc. There is a consistent growth in the total utility, from $109$ to $523$ for \algjsc, and from $121$ to $1080$ for \algvrc. There is also a corresponding growth in the number of trips performed per mission, e.g., from $7.5$ to $43.2$ in total for \algvrc.
However, the fraction of submitted activities that are scheduled falls. For \algjsc, its activity scheduled \% linearly drops with $x$ from $76\%$ to $23\%$. But for \algvrc, the scheduled \% stays at about $80\%$ until $x=8$, at which point the activities saturate the drone fleet's capacity and the scheduled \% falls linearly to $37\%$ for $x=32$.
Interestingly, the utility increases faster than the number of activities scheduled for \algvrc. This is due to the scheduler favoring activities that offer a higher utility, while avoiding those with a lower utility, causing a $20\%$ increase in utility received per activity from $x=8$ to $x=32$.
\subsubsection{Longer-running edge analytics offer lower on-time utility}
We run the same scenarios using RNet and MNet DNNs for the DFS workload. For both \algjsc and \algvrc, the \emph{data capture utility} that accrues from their schedules for the two DNNs is similar. However, since the RNet execution time per batch is much higher than MNet, there is a drop in \emph{on-time utility}, by about $32\%$ for both \algjsc and \algvrc, due to more deadline violations.
As a result, this also causes a drop in total utility for RNet by about $15.9\%$ for \algjsc and $19\%$ for \algvrc, relative to MNet. Even for \algopt we see a similar trend, with a $15.8\%$ drop in total utility. The runtimes of \algjsc and \algvrc do not change significantly between RNet and MNet.
\subsubsection{Effect of real-world factors}
The \textit{expected utilities} reported above are under ideal conditions. Here, we evaluate their practical efficacy by emulating these schedules using real drone traces to get the \textit{effective utility} and \textit{trip completion rate}.
Ideally, each trip generated by \algjsc and \algvrc should complete within a drone's energy capacity.
In practice, factors such as wind or non-linear battery performance can increase or decrease the actual energy consumed. Figure~\ref{fig:exp:real-trips} shows the \% of scheduled trips that do not complete when using the drone trace. With $<80$ activities, all trips complete (not plotted). But with $\geq 80$ activities, some trips in the planned schedule start to fail. At worst, $12\%$ of trips are incomplete in some schedules. So the effect of real-world factors can be significant. Interestingly, for the failed trips, an average of $3.6\%$ and a maximum of $7.9\%$ extra battery capacity would have allowed them to finish the trip.
So by maintaining a buffer battery capacity of $\approx 10\%$ when planning a schedule, we can ensure that the drones can complete a trip and return to the depot.
\begin{figure*}[t]
\centering
\subfloat[MNet, RND]{
\includegraphics[width=0.31\textwidth]{figures/W1-prec-trips-mnet}
\label{fig:exp:w1:util_2}
}
\subfloat[MNet, DFS]{
\includegraphics[width=0.31\textwidth]{figures/W2-prec-trips-mnet}
\label{fig:exp:w2:util_2}
}
\subfloat[RNet, DFS]{
\includegraphics[width=0.31\textwidth]{figures/W2-prec-trips-rnet}
\label{fig:exp:w1:exec_2}
}
\caption{\% of incomplete trips using drone trace. It is $0\%$ for $<80$ activities.}
\label{fig:exp:real-trips}
\end{figure*}
\section{Conclusion and Future Work}
\label{sec:conclusions}
This paper introduces a novel Mission Scheduling Problem (\prob) that co-schedules routes and analytics for drones, maximizing the utility for completing activities.
We proposed an optimal algorithm, \algopt, and two time-efficient heuristics, \algjsc and \algvrc.
Evaluations using two workloads, varying drone counts and load factors, and real traces exhibit different trade-offs between utility and execution time. \algopt is best for $\leq20$ activities and $\leq 5$ drones, \algvrc for $\leq100$ activities and $\leq 50$ drones, and \algjsc for $>100$ activities.
The measured runtimes of the heuristics match their analytical time complexity.
The schedules work well for fast and slow DNNs, though on-time utility drops for the latter.
The MSP proposed here is just one variant of an entire class of fleet co-scheduling problems for drones. Other architectures can be explored considering 4G/5G network coverage to send edge results to the back-end, or even off-load captured data to the cloud if it is infeasible to compute on the drone. This will allow more pathways for data sharing among UAVs and GS, but impose energy, bandwidth and latency costs for communications. Even the routing can be aware of cellular coverage to ensure deterministic off-loading on a trip.
We can use alternate cost models by assigning an operational cost per trip or per visit, and convert the MSP into a profit maximization problem. The activity time-windows may be relaxed rather than be defined as a static window. Drones with heterogeneous capabilities, in their endurance, compute capabilities, and sensors, will also be relevant for performing diverse activities such as picking up a package using an on-board claw and visually verifying it using a DNN.
Finally, we need to deal with dynamics and uncertainties like wind, obstacles and non-linear battery or compute behavior that affect flight paths, energy consumption and utilities. We can use probability distributions and stochastic approaches coupled with real-time information, which can decide and enact on-line rescheduling and rerouting while on a trip.
Such on-the-fly route updates for drones also allow us to accept and schedule activities continuously, rather than accumulating a mission over hours, and to prioritize the profitable activities. These will also need to be validated using more robust real-world experiments and traces.
\let\thefootnote\relax\footnotetext{\noindent \textbf{Acknowledgments.} \textit{This work is supported by AWS Research Grant, Intelligent Systems Center at Missouri S\&T, and NSF grants CCF-1725755 and SCC-1952045. A. Khochare is funded by a Ph.D. fellowship from RBCCPS, IISc, Bangalore.
S. K. Das was partially supported by a Satish Dhawan Visiting Chair Professorship at IISc. We thank RBCCPS for access to the drone, and Vishal, Varun and Srikrishna for helping collect the drone traces.}}
\clearpage
\bibliographystyle{ieeetr}
\section{Introduction}
In this paper we introduce the concept of Wiener--Luxemburg amalgam spaces which are a modification of the more classical Wiener amalgam spaces. The principal idea of both kinds of amalgam spaces is to treat separately the local and global behaviour of a given function, in the sense that said function is required to be locally in one space and globally in a different space. The exact meaning of being locally and globally in a space varies in literature, depending on the desired generality and personal preference.
The classical Wiener amalgams approach this issue in a very general, albeit quite non-trivial, manner. They were, in their general form, first introduced by Feichtinger in \cite{Feichtinger83}, although less general cases were studied earlier, see for example the paper \cite{Holland75} due to Holland, and some special cases date as far back as 1926, when the first example of such a space was introduced by Wiener in \cite{Wiener26}. The different versions of these spaces have seen many applications in the last decades, great surveys of which have been conducted, concerning a somewhat restricted version, by Fournier and Stewart in \cite{FournierSteward85} and, concerning the more general versions, by Feichtinger in \cite{Feichtinger90} and \cite{Feichtinger92}. Probably the most famous example is the Tauberian theorem for the Fourier transform on the real line due to Wiener (see \cite{Wiener32} and~\cite{Wiener33}); other examples include the theory of Fourier multipliers (see \cite{EdwardsHewitt77}), several variations of the sampling theorem (see \cite{FeichtingerGrochenig92}), and the theory of product--convolution operators (see \cite{BusbySmith81}).
One unfortunate property of Wiener amalgams is that, even in the simplest and most natural cases, their construction preserves neither the properties of Banach function spaces nor rearrangement invariance (see Appendix~\ref{Appendix}). This approach is therefore unsuitable when one wishes to work in this context. But this is often the case, since there are many situations when the need naturally arises to prescribe separately the conditions on the local and global behaviour of a function. One such situation is the study of optimal Sobolev type embeddings over the entire Euclidean space in the context of rearrangement-invariant Banach function spaces, as performed by Alberico, Cianchi, Pick and Slav{\' i}kov{\' a} in \cite{AlbericoCianchi2018}. A very natural example is the optimal target space, in the context of rearrangement-invariant Banach function spaces, for the limiting case of the classical Sobolev embeddings over the entire Euclidean space, which has been found by Vyb{\' i}ral in \cite{Vybiral07}. Another such situation arose during the study of generalised Lorentz--Zygmund spaces, which led to the introduction of broken logarithmic functions to allow separate treatment of local and global properties of functions in this context. For further details and a comprehensive study of generalised Lorentz--Zygmund spaces we refer the reader to \cite{OpicPick99}. A further example of an area where this approach has been successfully employed is the theory of interpolation, where descriptions of sums and intersections of spaces as de facto amalgams have proven useful; see \cite{Bathory18} and \cite{BennettRudnick80} for details.
This led us to develop the theory of Wiener--Luxemburg amalgam spaces, which aims to eliminate the above mentioned limitations and to provide a general framework for separate prescription of local and global conditions in the context of rearrangement-invariant Banach function spaces. The starting point is provided by the non-increasing rearrangement, which is the crucial element in the theory of said spaces and which naturally separates the local behaviour of a function from its global behaviour, at least in the sense of size. This allows us to define Wiener--Luxemburg amalgam spaces in a very easy and straightforward manner.
While the main part of this paper focuses on amalgams of rearrangement-invariant Banach function spaces, in the later sections we will also use the recent advances in the theory of quasi-Banach function spaces (see \cite{NekvindaPesa20}) and extend our theory into this context. While of independent interest, this will also allow us to view the theory of quasi-Banach function spaces from a new viewpoint and to provide a negative answer to the important open question of whether the Hardy--Littlewood--P\'{o}lya principle holds for all rearrangement-invariant quasi-Banach function norms.
The paper is structured as follows. In Section~\ref{CHP} we present the basic theoretical background needed in order to build the theory in later sections.
Section~\ref{CHWLAS} is the main part of the paper where the theory of Wiener--Luxemburg amalgam spaces is developed in some detail for the more classical context of rearrangement-invariant Banach function spaces. We show that they too are rearrangement-invariant Banach function spaces, then we provide a characterisation of their associate spaces, a full characterisation of their embeddings, and put them in relation with the concepts of sum and intersection of Banach spaces. Furthermore, we refine the well-known classical result that it holds for all rearrangement-invariant Banach function spaces $A$ that
\begin{equation*}
L^1 \cap L^{\infty} \hookrightarrow A \hookrightarrow L^1 + L^{\infty}
\end{equation*}
by showing that $L^1$ is the locally weakest and globally strongest rearrangement-invariant Banach function space, while $L^{\infty}$ is, in the same setting, the locally strongest and globally weakest space. Needless to say, our definition of Wiener--Luxemburg amalgam spaces is general enough to cover all the spaces appearing in the applications outlined above.
We then in Section~\ref{CHIAS} switch to the more abstract context of rearrangement-invariant quasi-Banach function spaces and introduce an extension of the concept of associate spaces which we call the integrable associate spaces. This concept emerged naturally in the study of associate spaces of Wiener--Luxemburg amalgams in the context of quasi-Banach function spaces, so in this section we lay the groundwork for Section~\ref{CHWLASq}. However, we believe this topic to be interesting in its own right and the treatment we provide goes beyond what is strictly necessary for our later work.
Section~\ref{CHWLASq} then contains the extension of our theory to the context of rearrangement-invariant quasi-Banach function spaces. We show that, in this context, the Wiener--Luxemburg amalgam spaces are rearrangement-invariant quasi-Banach function spaces, we describe their integrable associate spaces (and, in the case when it is meaningful, their associate spaces), and provide a precise treatment of the above mentioned refinement of the result that
\begin{equation*}
L^1 \cap L^{\infty} \hookrightarrow A \hookrightarrow L^1 + L^{\infty}
\end{equation*}
which shows what properties of rearrangement-invariant Banach function norms are relevant for the individual embeddings. Last but not least, we show that the validity of the Hardy--Littlewood--P\'{o}lya principle is sufficient for one of the embeddings in question. Since there are spaces for which this embedding fails, we obtain a negative answer to the important open question whether the Hardy--Littlewood--P\'{o}lya principle holds for all r.i.~quasi-Banach function spaces.
Finally in Appendix~\ref{Appendix} we present some counterexamples which show that our claim that Wiener amalgams are in general neither Banach function spaces nor rearrangement-invariant is justified. This serves three distinct purposes: first, it fills a gap in literature; second, it provides some insight into the thought process behind our definition of Wiener--Luxemburg amalgam spaces; and third, it is an application of our theory, since we use Wiener--Luxemburg amalgams to show that Wiener amalgams are not rearrangement-invariant.
\section{Preliminaries}\label{CHP}
This section serves to establish the basic theoretical background upon which we will build our theory of Wiener--Luxemburg amalgam spaces. The definitions and notation are intended to be as standard as possible. The usual reference for most of this theory is \cite{BennettSharpley88}.
Throughout this paper we will denote by $(R, \mu)$, and occasionally by $(S, \nu)$, some arbitrary (totally) sigma-finite measure space. Given a $\mu$-measurable set $E \subseteq R$ we will denote its characteristic function by $\chi_E$. By $M(R, \mu)$ we will denote the set of all extended complex-valued $\mu$-measurable functions defined on $R$. As is customary, we will identify functions that coincide $\mu$-almost everywhere. We will further denote by $M_0(R, \mu)$ and $M_+(R, \mu)$ the subsets of $M(R, \mu)$ containing, respectively, the functions finite $\mu$-almost everywhere and the non-negative functions.
For brevity, we will abbreviate $\mu$-almost everywhere, $M(R, \mu)$, $M_0(R, \mu)$, and $M_+(R, \mu)$ to $\mu$-a.e., $M$, $M_0$, and $M_+$, respectively, when there is no risk of confusing the reader.
When $X, Y$ are two topological linear spaces, we will denote by $Y \hookrightarrow X$ that $Y \subseteq X$ and that the identity mapping $I : Y \rightarrow X$ is continuous.
As for some special cases, we will denote by $\lambda^n$ the classical $n$-dimensional Lebesgue measure, with the exception of the $1$-dimensional case in which we will simply write $\lambda$. We will further denote by $m$ the counting measure over $\mathbb{N}$. When $p \in (0, \infty]$ we will denote by $L^p$ the classical Lebesgue space (of functions in $M(R, \mu)$) defined by
\begin{equation*}
L^p = \left \{ f \in M(R, \mu); \; \int_R \lvert f \rvert^p \: d\mu < \infty \right \}
\end{equation*}
equipped with the customary (quasi-)norm
\begin{equation*}
\lVert f \rVert_p = \left ( \int_R \lvert f \rvert^p \: d\mu \right )^{\frac{1}{p} },
\end{equation*}
with the usual modifications when $p=\infty$. In the special case when $(R, \mu) = (\mathbb{N}, m)$ we will denote this space by $l^p$.
Note that in this paper we consider $0$ to be an element of $\mathbb{N}$.
\subsection{Non-increasing rearrangement}
We now present the concept of the non-increasing rearrangement of a function and state some of its properties that will be important later in the paper. We proceed in accordance with \cite[Chapter~2]{BennettSharpley88}.
We start by introducing the distribution function.
\begin{definition}
The distribution function $\mu_f$ of a function $f \in M$ is defined for $s \in [0, \infty)$ by
\begin{equation*}
\mu_f(s) = \mu(\{ t \in R; \; \lvert f(t) \rvert > s \}).
\end{equation*}
\end{definition}
The non-increasing rearrangement is then defined as the generalised inverse of the distribution function.
\begin{definition}
The non-increasing rearrangement $f^*$ of a function $f \in M$ is defined for $t \in [0, \infty)$ by
\begin{equation*}
f^*(t) = \inf \{ s \in [0, \infty); \; \mu_f(s) \leq t \}.
\end{equation*}
\end{definition}
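To illustrate the two definitions, consider on $([0, \infty), \lambda)$ the step function $f = 3\chi_{[0,1)} + \chi_{[1,4)}$. A direct computation gives

```latex
\begin{equation*}
\mu_f(s) =
\begin{cases}
4 & \text{for } s \in [0,1),\\
1 & \text{for } s \in [1,3),\\
0 & \text{for } s \geq 3,
\end{cases}
\qquad
f^*(t) =
\begin{cases}
3 & \text{for } t \in [0,1),\\
1 & \text{for } t \in [1,4),\\
0 & \text{for } t \geq 4,
\end{cases}
\end{equation*}
```

that is, the non-increasing rearrangement stacks the values of $f$ in decreasing order starting at the origin, while preserving the measure of each level set.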
For the basic properties of the distribution function and the non-increasing rearrangement, with proofs, see \cite[Chapter~2, Proposition~1.3]{BennettSharpley88} and \cite[Chapter~2, Proposition~1.7]{BennettSharpley88}, respectively. We consider those properties to be classical and well known and we will be using them without further explicit reference.
An important concept used in the paper is that of equimeasurability defined below.
\begin{definition} \label{DEM}
We say that the functions $f \in M(R, \mu)$ and $g \in M(S, \nu)$ are equimeasurable if $\mu_f = \nu_g$.
\end{definition}
It is not hard to show that two functions are equimeasurable if and only if their non-increasing rearrangements coincide.
A very important classical result is the Hardy--Littlewood inequality which we will use extensively in the paper. For proof, see for example \cite[Chapter~2, Theorem~2.2]{BennettSharpley88}.
\begin{theorem} \label{THLI}
It holds for all $f, g \in M$ that
\begin{equation*}
\int_R \lvert fg \rvert \: d\mu \leq \int_0^{\infty} f^*g^* \: d\lambda.
\end{equation*}
\end{theorem}
It follows directly from this result that it holds for every $f,g \in M$ that
\begin{equation*}
\sup_{\substack{\tilde{g} \in M \\ \tilde{g}^* = g^*}} \int_R \lvert f \tilde{g} \rvert \: d\mu \leq \int_0^{\infty} f^*g^* \: d\lambda.
\end{equation*}
This motivates the definition of resonant measure spaces.
\begin{definition}
A sigma-finite measure space $(R, \mu)$ is said to be resonant if it holds for all $f, g \in M(R, \mu)$ that
\begin{equation*}
\sup_{\substack{\tilde{g} \in M \\ \tilde{g}^* = g^*}} \int_R \lvert f \tilde{g} \rvert \: d\mu = \int_0^{\infty} f^* g^* \: d\lambda.
\end{equation*}
\end{definition}
The property of being resonant is an important one. Luckily, there is a straightforward characterisation of resonant measure spaces. For proof and further details see \cite[Chapter~2, Theorem~2.7]{BennettSharpley88}.
\begin{theorem}
A sigma-finite measure space is resonant if and only if it is either non-atomic or completely atomic with all atoms having equal measure.
\end{theorem}
\subsection{Norms and quasinorms} \label{SNQ}
In this subsection, and also in the following one, we provide the definitions for several classes of functionals we will study in the paper. All definitions should be standard or at least straightforward generalisations of standard ones.
The starting point shall be the class of norms.
\begin{definition} \label{DN}
Let $X$ be a complex linear space. A functional $\lVert \cdot \rVert : X \rightarrow [0, \infty)$ will be called a norm if it satisfies the following conditions:
\begin{enumerate}
\item it is positively homogeneous, i.e.~$\forall a \in \mathbb{C} \; \forall x \in X : \lVert ax \rVert = \lvert a \rvert \lVert x \rVert$,
\item it satisfies $\lVert x \rVert = 0 \Leftrightarrow x = 0$ in $X$,
\item it is subadditive, i.e.~$\forall x,y \in X : \lVert x+y \rVert \leq \lVert x \rVert + \lVert y \rVert$.
\end{enumerate}
\end{definition}
Because the definition of a norm is sometimes too restrictive, we will need a class of weaker functionals, namely quasinorms.
\begin{definition} \label{DQ}
Let $X$ be a complex linear space. A functional $\lVert \cdot \rVert : X \rightarrow [0, \infty)$ will be called a quasinorm if it satisfies the following conditions:
\begin{enumerate}
\item it is positively homogeneous, i.e.~$\forall a \in \mathbb{C} \; \forall x \in X : \lVert ax \rVert = \lvert a \rvert \lVert x \rVert$,
\item it satisfies $\lVert x \rVert = 0 \Leftrightarrow x = 0$ in $X$,
\item \label{DQ3} there is a constant $C\geq1$, called the modulus of concavity of $\lVert \cdot \rVert$, such that it is subadditive up to this constant, i.e.~$\forall x,y \in X \: : \: \lVert x+y \rVert \leq C(\lVert x \rVert + \lVert y \rVert)$.
\end{enumerate}
\end{definition}
It is obvious that every norm is also a quasinorm with the modulus of concavity equal to $1$ and that every quasinorm with the modulus of concavity equal to $1$ is also a norm.
It is a well-known fact that every norm defines a metrizable topology on $X$ and that it is continuous with respect to that topology. This is not true for quasinorms, but it can be remedied thanks to the Aoki--Rolewicz theorem, which we state below. Further details can be found for example in \cite{JohnsonLindenstraus03-25} or in \cite[Appendix~H]{BenyaminiLindenstrauss00}.
\begin{theorem}
Let $\lVert \cdot \rVert_X$ be a quasinorm over the linear space $X$. Then there is a quasinorm $\opnorm*{\cdot}_{X}$ over $X$ such that
\begin{enumerate}
\item there is a finite constant $C_0 > 0$ such that it holds for all $x \in X$ that
\begin{equation*}
C_0^{-1} \lVert x \rVert_X \leq \opnorm*{x}_{X} \leq C_0 \lVert x \rVert_X,
\end{equation*}
\item there is an $r \in (0, 1]$ such that it holds for all $x,y \in X$ that
\begin{equation*}
\opnorm*{x+y}_{X}^r \leq \opnorm*{x}_{X}^r + \opnorm*{y}_{X}^r.
\end{equation*}
\end{enumerate}
\end{theorem}
The direct consequence of this result is that every quasinorm defines a metrizable topology on $X$ and that the convergence in said topology is equivalent to the convergence with respect to the original quasinorm, in the sense that $x_n \rightarrow x$ in the induced topology if and only if $\lim_{n \rightarrow \infty} \lVert x_n - x \rVert = 0$.
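A standard example is provided by the Lebesgue functionals with $p \in (0,1)$: $\lVert \cdot \rVert_p$ is then a quasinorm with modulus of concavity $2^{\frac{1}{p}-1}$, and in the Aoki--Rolewicz theorem one may take $\opnorm*{\cdot}_{X} = \lVert \cdot \rVert_p$ and $r = p$ directly, since the pointwise inequality $(a+b)^p \leq a^p + b^p$ for $a, b \geq 0$ yields

```latex
\begin{equation*}
\lVert f + g \rVert_p^p \leq \lVert f \rVert_p^p + \lVert g \rVert_p^p.
\end{equation*}
```

The value $2^{\frac{1}{p}-1}$ of the modulus of concavity then follows from the power mean inequality and is attained when $f$ and $g$ are equimeasurable with disjoint supports.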
A natural question to ask is when different quasinorms define equivalent topologies. It is an easy exercise to show that the answer is the same as in the case of norms, that is, that two quasinorms are topologically equivalent if and only if they are equivalent in the following sense.
\begin{definition}
Let $\lVert \cdot \rVert_X$ and $\opnorm*{\cdot}_{X}$ be quasinorms over the linear space $X$. We say that $\lVert \cdot \rVert_X$ and $\opnorm*{\cdot}_{X}$ are equivalent if there is some $C_0 > 0$ such that it holds for all $x \in X$ that
\begin{equation*}
C_0^{-1} \lVert x \rVert_X \leq \opnorm*{x}_{X} \leq C_0 \lVert x \rVert_X.
\end{equation*}
\end{definition}
To conclude this part, we recall the concepts of sum and intersection of normed spaces.
\begin{definition}
Let $X$ and $Y$ be normed linear spaces equipped with the norms $\lVert \cdot \rVert_X$ and $\lVert \cdot \rVert_Y$ respectively. Suppose that there is a Hausdorff topological linear space $Z$ into which $X$ and $Y$ are continuously embedded. We then define the spaces $X+Y$ and $X \cap Y$ as
\begin{align*}
X+Y &= \left \{z \in Z; \; \exists x \in X, \, \exists y \in Y : z = x +y \right \}, \\
X \cap Y &= \left \{z \in Z; \; z \in X, \, z \in Y \right \},
\end{align*}
equipped with the norms
\begin{align*}
\lVert z \rVert_{X+Y} &= \inf \left \{\lVert x \rVert_X + \lVert y \rVert_Y; \; x \in X, \, y \in Y, \, x+y=z \right \}, \\
\lVert z \rVert_{X \cap Y} &= \max \left \{ \lVert z \rVert_X, \lVert z \rVert_Y \right \},
\end{align*}
respectively.
\end{definition}
The concepts presented above play a crucial role in the theory of interpolation. For further details, we refer the reader to \cite[Chapter~3]{BennettSharpley88}, where one can also find the following result (as \cite[Chapter~3, Theorem~1.3]{BennettSharpley88}).
\begin{theorem}
Let $X$ and $Y$ be as above. Then $X+Y$ and $X \cap Y$, when equipped with their respective norms, are normed linear spaces. Furthermore, if $X$ and $Y$ are Banach spaces, then so are $X+Y$ and $X \cap Y$.
\end{theorem}
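As a concrete example, take $X = L^1$ and $Y = L^{\infty}$ over $([0, \infty), \lambda)$. It is a classical fact (see \cite{BennettSharpley88}) that the two norms can then be expressed via the non-increasing rearrangement as

```latex
\begin{equation*}
\lVert f \rVert_{L^1 + L^{\infty}} = \int_0^1 f^* \: d\lambda,
\qquad
\lVert f \rVert_{L^1 \cap L^{\infty}} = \max \left \{ \lVert f \rVert_1, \lVert f \rVert_{\infty} \right \},
\end{equation*}
```

where the second identity is immediate from the definition, while the first expresses $\lVert f \rVert_{L^1 + L^{\infty}}$ as the value at $t=1$ of the $K$-functional of the couple $(L^1, L^{\infty})$.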
\subsection{Banach function norms and quasinorms}
We now turn our attention to the case in which we are interested the most, that is, the case of norms and quasinorms acting on spaces of functions. The approach taken here is the same as in \cite[Chapter~1, Section~1]{BennettSharpley88}, which means that it differs, at least formally, from that in the previous part.
The major definitions are of course those of a Banach function norm and the corresponding Banach function space.
\begin{definition}
Let $\lVert \cdot \rVert : M(R, \mu) \rightarrow [0, \infty]$ be a mapping satisfying $\lVert \, \lvert f \rvert \, \rVert = \lVert f \rVert$ for all $f \in M$. We say that $\lVert \cdot \rVert$ is a Banach function norm if its restriction to $M_+$ satisfies the following axioms:
\begin{enumerate}[label=\textup{(P\arabic*)}, series=P]
\item \label{P1} it is a norm, in the sense that it satisfies the following three conditions:
\begin{enumerate}[ref=(\theenumii)]
\item \label{P1a} it is positively homogeneous, i.e.\ $\forall a \in \mathbb{C} \; \forall f \in M_+ : \lVert a f \rVert = \lvert a \rvert \lVert f \rVert$,
\item \label{P1b} it satisfies $\lVert f \rVert = 0 \Leftrightarrow f = 0$ $\mu$-a.e.,
\item \label{P1c} it is subadditive, i.e.\ $\forall f,g \in M_+ \: : \: \lVert f+g \rVert \leq \lVert f \rVert + \lVert g \rVert$,
\end{enumerate}
\item \label{P2} it has the lattice property, i.e.\ if some $f, g \in M_+$ satisfy $f \leq g$ $\mu$-a.e., then also $\lVert f \rVert \leq \lVert g \rVert$,
\item \label{P3} it has the Fatou property, i.e.\ if some $f_n, f \in M_+$ satisfy $f_n \uparrow f$ $\mu$-a.e., then also $\lVert f_n \rVert \uparrow \lVert f \rVert $,
\item \label{P4} $\lVert \chi_E \rVert < \infty$ for all $E \subseteq R$ satisfying $\mu(E) < \infty$,
\item \label{P5} for every $E \subseteq R$ satisfying $\mu(E) < \infty$ there exists some finite constant $C_E$, dependent only on $E$, such that the inequality $ \int_E f \: d\mu \leq C_E \lVert f \rVert $ is true for all $f \in M_+$.
\end{enumerate}
\end{definition}
\begin{definition}
Let $\lVert \cdot \rVert_X$ be a Banach function norm. We then define the corresponding Banach function space $X$ as the set
\begin{equation*}
X = \left \{ f \in M; \; \lVert f \rVert_X < \infty \right \}.
\end{equation*}
\end{definition}
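The prototypical examples of Banach function spaces are the Lebesgue spaces.
\begin{example}
For $p \in [1, \infty]$, the functional $\lVert f \rVert_{L^p} = \left( \int_R \lvert f \rvert^p \: d\mu \right)^{\frac{1}{p}}$ (with the usual modification when $p = \infty$) is a Banach function norm: the axioms \ref{P1} and \ref{P2} are classical, \ref{P3} is the monotone convergence theorem, \ref{P4} follows from $\lVert \chi_E \rVert_{L^p} = \mu(E)^{\frac{1}{p}}$, and \ref{P5} follows from the Hölder inequality with $C_E = \mu(E)^{\frac{1}{p'}}$, where $p'$ is the conjugate exponent of $p$.
\end{example}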
It is easy to see that a Banach function norm, when restricted to the space it defines, is indeed a norm in the sense of Definition~\ref{DN} and therefore Banach function spaces, when equipped with their defining norms, are normed linear spaces. A detailed study of these spaces can be found in \cite{BennettSharpley88}.
Just as with general norms, the triangle inequality is sometimes too strong a condition to require. We therefore introduce the notions of quasi-Banach function norms and of the corresponding quasi-Banach function spaces.
\begin{definition}
Let $\lVert \cdot \rVert : M(R, \mu) \rightarrow [0, \infty]$ be a mapping satisfying $\lVert \, \lvert f \rvert \, \rVert = \lVert f \rVert$ for all $f \in M$. We say that $\lVert \cdot \rVert$ is a quasi-Banach function norm if its restriction to $M_+$ satisfies the axioms \ref{P2}, \ref{P3} and \ref{P4} of Banach function norms together with a weaker version of axiom \ref{P1}, namely
\begin{enumerate}[label=\textup{(Q\arabic*)}]
\item \label{Q1} it is a quasinorm, in the sense that it satisfies the following three conditions:
\begin{enumerate}[ref=(\theenumii)]
\item \label{Q1a} it is positively homogeneous, i.e.\ $\forall a \in \mathbb{C} \; \forall f \in M_+ : \lVert af \rVert = \lvert a \rvert \lVert f \rVert$,
\item \label{Q1b} it satisfies $\lVert f \rVert = 0 \Leftrightarrow f = 0$ $\mu$-a.e.,
\item \label{Q1c} there is a constant $C\geq 1$, called the modulus of concavity of $\lVert \cdot \rVert$, such that it is subadditive up to this constant, i.e.
\begin{equation*}
\forall f,g \in M_+ : \lVert f+g \rVert \leq C(\lVert f \rVert + \lVert g \rVert).
\end{equation*}
\end{enumerate}
\end{enumerate}
\end{definition}
\begin{definition}
Let $\lVert \cdot \rVert_X$ be a quasi-Banach function norm. We then define the corresponding quasi-Banach function space $X$ as the set
\begin{equation*}
X = \left \{ f \in M; \; \lVert f \rVert_X < \infty \right \}.
\end{equation*}
\end{definition}
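A standard example showing that the class of quasi-Banach function norms is strictly larger is provided by the Lebesgue functionals with exponents below one.
\begin{example}
For $p \in (0, 1)$, the functional $\lVert f \rVert_{L^p} = \left( \int_R \lvert f \rvert^p \: d\mu \right)^{\frac{1}{p}}$ satisfies \ref{Q1} with the modulus of concavity $C = 2^{\frac{1}{p} - 1}$, as well as \ref{P2}, \ref{P3} and \ref{P4}, and is therefore a quasi-Banach function norm. However, if $(R, \mu)$ is non-atomic, then both the triangle inequality and \ref{P5} fail, so $L^p$ with $p \in (0,1)$ is not a Banach function space.
\end{example}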
As before, it is easy to see that a quasi-Banach function norm restricted to the space it defines is a quasinorm in the sense of Definition~\ref{DQ}. Let us now list here some of their important properties we will need later.
\begin{theorem} \label{TC}
Let $\lVert \cdot \rVert_X$ be a quasi-Banach function norm and let $X$ be the corresponding quasi-Banach function space. Then $X$ is complete.
\end{theorem}
\begin{theorem} \label{TEQBFS}
Let $\lVert \cdot \rVert_X$ and $\lVert \cdot \rVert_Y$ be quasi-Banach function norms and let $X$ and $Y$ be the corresponding quasi-Banach function spaces. If $X \subseteq Y$ then also $X \hookrightarrow Y$.
\end{theorem}
Both of these results have long been known in the context of Banach function spaces, but they were only recently extended to quasi-Banach function spaces. Theorem~\ref{TC} was first obtained by Caetano, Gogatishvili and Opic in \cite{CaetanoGogatishvili16}, while Theorem~\ref{TEQBFS} was proved by Nekvinda and the author in \cite{NekvindaPesa20}. The following result has also been obtained in \cite{NekvindaPesa20}:
\begin{theorem} \label{TP5}
Let $\lVert \cdot \rVert_X$ be a quasi-Banach function norm and let $X$ be the corresponding quasi-Banach function space. Suppose that $E \subseteq R$ is a set such that $\mu(E) < \infty$ and that for every constant $K \in (0, \infty)$ there is a non-negative function $f \in X$ satisfying
\begin{equation*}
\int_E \lvert f \rvert \: d\mu > K \lVert f \rVert_X.
\end{equation*}
Then there is a non-negative function $f_E \in X$ such that
\begin{equation*} \label{TP5.1}
\int_E f_E \: d\mu = \infty.
\end{equation*}
\end{theorem}
The last result we want to list at this point concerns the intersection of two Banach function spaces. The proof is an easy exercise.
\begin{proposition} \label{PIBFS}
Let $X$ and $Y$ be two Banach function spaces. Then $X \cap Y$ is also a Banach function space.
\end{proposition}
Let us now define an important property that a quasi-Banach function norm can have and in which we will take a special interest. Note that the class of quasi-Banach function norms contains that of Banach function norms, so it is not necessary to provide separate definitions.
\begin{definition}
Let $\lVert \cdot \rVert_X$ be a quasi-Banach function norm. We say that $\lVert \cdot \rVert_X$ is rearrangement-invariant, abbreviated r.i., if $\lVert f\rVert_X = \lVert g \rVert_X$ whenever $f, g \in M$ are equimeasurable (in the sense of Definition~\ref{DEM}).
Furthermore, if the above condition holds, the corresponding space $X$ will be called rearrangement-invariant too.
\end{definition}
An important property of r.i.~quasi-Banach function spaces over $([0, \infty), \lambda)$ is that the dilation operator is bounded on those spaces, as stated in the following theorem. This is a classical result in the context of r.i.~Banach function spaces which has been recently extended to r.i.~quasi-Banach function spaces by Nekvinda and the author in \cite{NekvindaPesa20} (for the classical version see for example \cite[Chapter~3, Proposition~5.11]{BennettSharpley88}).
\begin{definition} \label{DDO}
Let $t \in (0, \infty)$. The dilation operator $D_t$ is defined on $M([0, \infty), \lambda)$ by the formula
\begin{equation*}
D_tf(s) = f(ts),
\end{equation*}
where $f \in M([0, \infty), \lambda)$, $s \in (0, \infty)$.
\end{definition}
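To fix ideas, let us record the effect of $D_t$ on a characteristic function and on the Lebesgue norms.
\begin{example}
If $f = \chi_{[0,a]}$ for some $a \in (0, \infty)$, then $D_t f = \chi_{[0, \frac{a}{t}]}$. More generally, the change of variables $s \mapsto \frac{s}{t}$ yields $\lVert D_t f \rVert_{L^p} = t^{-\frac{1}{p}} \lVert f \rVert_{L^p}$ for every $p \in (0, \infty)$ and every $f \in M([0, \infty), \lambda)$, so the operator norm of $D_t$ on $L^p([0, \infty))$ is exactly $t^{-\frac{1}{p}}$.
\end{example}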
\begin{theorem} \label{TDRIS}
Let $X$ be an r.i.~quasi-Banach function space over $([0, \infty), \lambda)$ and let $t \in (0, \infty)$. Then $D_t: X \rightarrow X$ is a bounded operator.
\end{theorem}
Finally, we want to discuss here one property of some r.i.~quasi-Banach function norms that is often important in applications.
\begin{definition} \label{DHLP}
Let $\lVert \cdot \rVert_X$ be an r.i.~quasi-Banach function norm. We say that the Hardy--Littlewood--P\'{o}lya principle holds for $\lVert \cdot \rVert_X$ if the estimate $\lVert f \rVert_X \leq \lVert g \rVert_X$ is true for any pair of functions $f, g \in M$ satisfying
\begin{equation*}
\int_0^{t} f^* \: d\lambda \leq \int_0^{t} g^* \: d\lambda
\end{equation*}
for all $t \in (0, \infty)$.
\end{definition}
To put this property into the proper context we will need the following lemma:
\begin{lemma} \label{LHLP}
Let $\lVert \cdot \rVert_X$ be an r.i.~quasi-Banach function norm and consider the following three statements:
\begin{enumerate}
\item \label{LHLPi}
$\lVert \cdot \rVert_X$ is an r.i.~Banach function norm.
\item \label{LHLPii}
The Hardy--Littlewood--P\'{o}lya principle holds for $\lVert \cdot \rVert_X$.
\item \label{LHLPiii}
$\lVert \cdot \rVert_X$ satisfies \ref{P5}.
\end{enumerate}
Then \ref{LHLPi} implies \ref{LHLPii} which in turn implies \ref{LHLPiii}.
\end{lemma}
\begin{proof}
That \ref{LHLPi} implies \ref{LHLPii}, i.e.~that the Hardy--Littlewood--P\'{o}lya principle holds for all r.i.~Banach function spaces is a well known result, see for example \cite[Chapter~2, Theorem~4.6]{BennettSharpley88}.
The remaining implication will be proved by contradiction, so we assume that $\lVert \cdot \rVert_X$ does not satisfy \ref{P5}. Then it follows from Theorem~\ref{TP5} that there is a set $E \subseteq R$ and a function $f \in M$ such that $\mu(E)<\infty$, $\lVert f \rVert_X < \infty$, and
\begin{equation*}
\int_E \lvert f \rvert \: d\mu = \infty.
\end{equation*}
Hence $\mu(E) > 0$ and the Hardy--Littlewood inequality (Theorem~\ref{THLI}) implies that also
\begin{equation*}
\int_0^{\mu(E)} f^* \: d\lambda = \infty.
\end{equation*}
Since $f^*$ is non-increasing, we finally obtain the equality
\begin{equation*}
\int_0^{t} f^* \: d\lambda = \infty
\end{equation*}
for all $t \in (0, \infty)$.
Because we assume that the Hardy--Littlewood--P\'{o}lya principle holds for $\lVert \cdot \rVert_X$, we conclude that every function $g \in M$ satisfies $\lVert g \rVert_X < \infty$. Since this includes $g = \infty \chi_R$ and since $\mu(R) \geq \mu(E) > 0$, we obtain a contradiction with the property that quasi-Banach function spaces contain only functions that are finite almost everywhere (see \cite[Lemma~2.4]{MizutaNekvinda15} or \cite[Theorem~3.4]{NekvindaPesa20}).
\end{proof}
\begin{remark} \label{RHLP}
None of the implications in Lemma~\ref{LHLP} can be reversed. In the first case, we can show this by considering the functional $\lVert \cdot \rVert_X = \lVert \cdot \rVert_{L^{p,q}}$, where $L^{p,q}$ is the Lorentz space, since for the choice of parameters $p \in (1, \infty)$, $q \in (0,1)$, those functionals are not even equivalent to norms (see \cite[Theorem~2.5.8]{CarroRaposo07} and the references therein) while the Hardy--Littlewood--P\'{o}lya principle still holds for them (this follows from the boundedness of the Hardy--Littlewood maximal operator, see \cite[Theorem~4.1]{CarroPick00} and the references therein). The second case is more interesting and the question whether the reverse implication holds has previously been open. We provide a negative answer in Corollary~\ref{CHLP}.
\end{remark}
\subsection{Associate space}
An important concept in the theory of Banach function spaces and their generalisations is that of an associate space. The detailed study of associate spaces of Banach function spaces can be found in \cite[Chapter~1, Sections~2, 3, and 4]{BennettSharpley88}.
We will approach the issue in a slightly more general way. The very definition of an associate space requires no assumptions on the functional defining the original space.
\begin{definition} \label{DAS}
Let $\lVert \cdot \rVert_X: M \to [0, \infty]$ be some non-negative functional and put
\begin{equation*}
X = \{ f \in M; \; \lVert f \rVert_X < \infty \}.
\end{equation*}
Then the functional $\lVert \cdot \rVert_{X'}$ defined for $f \in M$ by
\begin{equation}
\lVert f \rVert_{X'} = \sup_{g \in X} \frac{1}{\lVert g \rVert_X} \int_R \lvert f g \rvert \: d\mu, \label{DAS1}
\end{equation}
where we interpret $\frac{0}{0} = 0$ and $\frac{a}{0} = \infty$ for any $a>0$, will be called the associate functional of $\lVert \cdot \rVert_X$ while the set
\begin{equation*}
X' = \left \{ f \in M; \; \lVert f \rVert_{X'} < \infty \right \}
\end{equation*}
will be called the associate space of $X$.
\end{definition}
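The simplest instance of the associate construction is the classical duality of Lebesgue spaces.
\begin{example}
Let $(R, \mu)$ be $\sigma$-finite and let $\lVert \cdot \rVert_X = \lVert \cdot \rVert_{L^p}$ for some $p \in [1, \infty]$. Then the sharpness of the Hölder inequality yields $\lVert \cdot \rVert_{X'} = \lVert \cdot \rVert_{L^{p'}}$, where $p'$ is the conjugate exponent of $p$, and consequently $(L^p)' = L^{p'}$. Note that this holds even for $p = \infty$, where $(L^{\infty})' = L^1$, in contrast to the behaviour of the topological dual space.
\end{example}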
As suggested by the notation, we will be interested mainly in the case when $\lVert \cdot \rVert_X$ is at least a quasinorm, but we wanted to emphasize that such an assumption is not necessary for the definition. In fact, it is not even required for the following result, which is the Hölder inequality for associate spaces.
\begin{theorem} \label{THAS}
Let $\lVert \cdot \rVert_X: M \to [0, \infty]$ be some non-negative functional and denote by $\lVert \cdot \rVert_{X'}$ its associate functional. Then it holds for all $f \in M$ that
\begin{equation*}
\int_R \lvert f g \rvert \: d\mu \leq \lVert g \rVert_X \lVert f \rVert_{X'}
\end{equation*}
provided that we interpret $0 \cdot \infty = -\infty \cdot \infty = \infty$ on the right-hand side.
\end{theorem}
The convention at the end of the preceding theorem is necessary because the lack of assumptions on $\lVert \cdot \rVert_X$ means that we allow some pathological cases that need to be taken care of. To be more specific, $0 \cdot \infty = \infty$ is necessary because we allow $\lVert g \rVert_X = 0$ even for non-zero $g$ while $-\infty \cdot \infty = \infty$ is needed because the set $X$ can be empty, in which case $\lVert f \rVert_{X'} = \sup \emptyset = -\infty$.
The last result we will present in this generality is the following proposition concerning embeddings. Although the proof is an easy modification of that in \cite[Chapter~2, Proposition~2.10]{BennettSharpley88} we provide it to show that it truly does not require any assumptions on the original functional.
\begin{proposition} \label{PEASG}
Let $\lVert \cdot \rVert_X: M \to [0, \infty]$ and $\lVert \cdot \rVert_Y: M \to [0, \infty]$ be two non-negative functionals satisfying that there is a constant $C>0$ such that it holds for all $f \in M$ that
\begin{equation*}
\lVert f \rVert_X \leq C \lVert f \rVert_Y.
\end{equation*}
Then the associate functionals $\lVert \cdot \rVert_{X'}$ and $\lVert \cdot \rVert_{Y'}$ satisfy, with the same constant $C$,
\begin{equation*}
\lVert f \rVert_{Y'} \leq C \lVert f \rVert_{X'}
\end{equation*}
for all $f \in M$.
\end{proposition}
\begin{proof}
Our assumptions guarantee that $Y \subseteq X$ and therefore
\begin{equation*}
\begin{split}
\lVert f \rVert_{Y'} &= \sup_{g \in Y} \frac{1}{\lVert g \rVert_Y} \int_R \lvert f g \rvert \: d\mu \\
&\leq \sup_{g \in Y} \frac{C}{\lVert g \rVert_X} \int_R \lvert f g \rvert \: d\mu \\
&\leq \sup_{g \in X} \frac{C}{\lVert g \rVert_X} \int_R \lvert f g \rvert \: d\mu \\
&= C \lVert f \rVert_{X'}.
\end{split}
\end{equation*}
\end{proof}
Let us now turn our attention to the case when $\lVert \cdot \rVert_X$ is a quasi-Banach function norm. Note that in this case the definition of the associate functional does not change when one replaces the supremum in \eqref{DAS1} by one taken only over the unit sphere in $X$.
The following result, due to Gogatishvili and Soudsk{\'y} in \cite{GogatishviliSoudsky14}, shows that the conditions the original functional needs to satisfy in order for the associate functional to be a Banach function norm are quite mild. In particular, they are satisfied by any quasi-Banach function norm that satisfies the axiom \ref{P5}. This special case was observed earlier in \cite[Remark~2.3.(iii)]{EdmundsKerman00}.
\begin{theorem} \label{TFA}
Let $\lVert \cdot \rVert_X : M \to [0, \infty]$ be a functional that satisfies the axioms \ref{P4} and \ref{P5} from the definition of Banach function norms and which also satisfies for all $f \in M$ that $\lVert f \rVert_X = \lVert \, \lvert f \rvert \, \rVert_X$. Then the functional $\lVert \cdot \rVert_{X'}$ is a Banach function norm. In addition, $\lVert \cdot \rVert_X$ is equivalent to a Banach function norm if and only if there is some constant $C \geq 1$ such that it holds for all $f \in M$ that
\begin{equation}
\lVert f \rVert_{X''} \leq \lVert f \rVert_X \leq C \lVert f \rVert_{X''}, \label{TFA1}
\end{equation}
where $\lVert \cdot \rVert_{X''}$ denotes the associate functional of $\lVert \cdot \rVert_{X'}$.
\end{theorem}
Additionally, if $\lVert \cdot \rVert_X$ is a Banach function norm then \eqref{TFA1} holds with $C = 1$. This is a classical result of Lorentz and Luxemburg, a proof of which can be found for example in \cite[Chapter~1, Theorem~2.7]{BennettSharpley88}.
\begin{theorem} \label{TDAS}
Let $\lVert \cdot \rVert_X$ be a Banach function norm, then $\lVert \cdot \rVert_X = \lVert \cdot \rVert_{X''}$ where $\lVert \cdot \rVert_{X''}$ is the associate functional of $\lVert \cdot \rVert_{X'}$.
\end{theorem}
Let us point out that even in the case when $\lVert \cdot \rVert_X$, satisfying the assumptions of Theorem~\ref{TFA}, is not equivalent to any Banach function norm we still have one interesting embedding, as formalised in the following statement. The proof is an easy exercise.
\begin{proposition} \label{PESSAS}
Let $\lVert \cdot \rVert_X$ satisfy the assumptions of Theorem~\ref{TFA}. Then it holds for all $f \in M$ that
\begin{equation*}
\lVert f \rVert_{X''} \leq \lVert f \rVert_X,
\end{equation*}
where $\lVert \cdot \rVert_{X''}$ denotes the associate functional of $\lVert \cdot \rVert_{X'}$.
\end{proposition}
We will also use the following version of Landau's resonance theorem. This result was first obtained in full generality in \cite{NekvindaPesa20}.
\begin{theorem} \label{LT}
Let $\lVert \cdot \rVert_X$ be a quasi-Banach function norm, let $X$ be the corresponding quasi-Banach function space and let $\lVert \cdot \rVert_{X'}$ and $X'$, respectively, be the associate norm of $\lVert \cdot \rVert_X$ and the corresponding associate space. Then a function $f \in M$ belongs to $X'$ if and only if it satisfies
\begin{equation*}
\int_R \lvert f g \rvert \: d\mu < \infty
\end{equation*}
for all $g \in X$.
\end{theorem}
To conclude this section, we observe that, provided the underlying measure space is resonant, the associate functional of an r.i.~quasi-Banach function norm can be expressed in terms of non-increasing rearrangement. The proof is the same as in \cite[Chapter~2, Proposition~4.2]{BennettSharpley88}.
\begin{proposition} \label{PAS}
Let $\lVert \cdot \rVert_X$ be an r.i.~quasi-Banach function norm over a resonant measure space. Then its associate functional $\lVert \cdot \rVert_{X'}$ satisfies
\begin{equation*}
\lVert f \rVert_{X'} = \sup_{g \in X} \frac{1}{\lVert g \rVert_{X}} \int_0^{\infty} f^* g^* \: d\lambda.
\end{equation*}
\end{proposition}
An obvious consequence of Proposition~\ref{PAS} is that an associate space of an r.i.~quasi-Banach function space (over a resonant measure space) is also rearrangement-invariant.
\section{Wiener--Luxemburg amalgam spaces} \label{CHWLAS}
Throughout this section we restrict ourselves to the case when $(R, \mu) = ([0, \infty), \lambda)$. This allows us to make the proofs more elegant and less technical as well as ensures that the underlying measure space is resonant. Note that this comes at no loss of generality, since any r.i.~Banach function space over an arbitrary resonant measure space can be represented by some r.i.~Banach function space over $([0, \infty), \lambda)$, as follows from the classical Luxemburg representation theorem (see for example \cite[Chapter~2, Theorem~4.10]{BennettSharpley88}), and any r.i.~Banach function space over $([0, \infty), \lambda)$ represents an r.i.~Banach function space over any resonant measure space (see for example \cite[Chapter~2, Theorem~4.9]{BennettSharpley88}).
\subsection{Wiener--Luxemburg quasinorms}
\begin{definition} \label{DefWL}
Let $\lVert \cdot \rVert_A$ and $\lVert \cdot \rVert_B$ be r.i.~Banach function norms. We then define the Wiener--Luxemburg quasinorm $\lVert \cdot \rVert_{WL(A, B)}$, for $f \in M$, by
\begin{equation}
\lVert f \rVert_{WL(A, B)} = \lVert f^* \chi_{[0,1]} \rVert_A + \lVert f^* \chi_{(1, \infty)} \rVert_B \label{DefWLN}
\end{equation}
and the corresponding Wiener--Luxemburg amalgam space $WL(A, B)$ as
\begin{equation*}
WL(A, B) = \{f \in M; \; \lVert f \rVert_{WL(A, B)} < \infty \}.
\end{equation*}
Furthermore, we will call the first summand in \eqref{DefWLN} the local component of $\lVert \cdot \rVert_{WL(A, B)}$ while the second summand will be called the global component of $\lVert \cdot \rVert_{WL(A, B)}$.
\end{definition}
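A simple computation with power functions already shows how the two components divide their roles.
\begin{example}
Let $p, q \in [1, \infty)$ and let $f \in M$ satisfy $f^*(t) = t^{-\alpha}$ for $t \in (0, \infty)$ and some $\alpha > 0$. Then $f \in WL(L^p, L^q)$ if and only if $\frac{1}{q} < \alpha < \frac{1}{p}$, which is possible only when $p < q$: the local component restricts the strength of the singularity of $f$ while the global component restricts its decay at infinity. Observe that such a function $f$ belongs to neither $L^p$ nor $L^q$.
\end{example}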
For the sake of brevity we will sometimes write just Wiener--Luxemburg amalgams instead of Wiener--Luxemburg amalgam spaces.
Let us first note that this concept somewhat generalises that of r.i.~Banach function spaces, in the sense that every r.i.~Banach function space is, up to equivalence of the defining functionals, a Wiener--Luxemburg amalgam of itself.
\begin{remark} \label{RAS}
Let $\lVert \cdot \rVert_A$ be an r.i.~Banach function norm. Then
\begin{equation*}
\lVert f \rVert_A \leq \lVert f \rVert_{WL(A, A)} \leq 2 \lVert f \rVert_A
\end{equation*}
for every $f \in M$.
Consequently, it makes good sense to talk about local and global components of arbitrary r.i.~Banach function norms.
\end{remark}
The local component of an arbitrary r.i.~Banach function norm is worth separate attention. As shown in the following proposition, this component is itself an r.i.~Banach function norm, and therefore any unpleasant behaviour of the Wiener--Luxemburg quasinorm must be caused by its global component. The second part of the proposition then illustrates the interesting fact that the global component of $L^{\infty}$ puts no additional condition on the size of a given function, in the sense that the space $WL(A, L^{\infty})$ consists of exactly those functions that are locally in $A$.
\begin{proposition} \label{PLC}
Let $\lVert \cdot \rVert_A$ be an r.i.~Banach function norm. Then the functional
\begin{equation*}
f \mapsto \lVert f^* \chi_{[0,1]} \rVert_A
\end{equation*}
is also an r.i.~Banach function norm.
Furthermore, there is a constant $C>0$ such that it holds for all $f \in M$ that
\begin{equation} \label{PLCe}
\lVert f^* \chi_{[0,1]} \rVert_A \leq \lVert f \rVert_{WL(A, L^{\infty})} \leq C \lVert f^* \chi_{[0,1]} \rVert_A.
\end{equation}
\end{proposition}
\begin{proof}
That the functional in question satisfies the axioms \ref{P2}, \ref{P3} and \ref{P4} as well as parts \ref{P1a} and \ref{P1b} of the axiom \ref{P1} is an easy consequence of the respective properties of $\lVert \cdot \rVert_A$ and the properties of non-increasing rearrangement. Furthermore, the rearrangement invariance is obvious.
As for \ref{P5}, fix some set $E \subseteq [0, \infty)$ of finite measure. We may, without loss of generality, assume that $\lambda(E) > 1$, because otherwise the proof is similar but simpler. Then, by the Hardy--Littlewood inequality (Theorem~\ref{THLI}), it holds for every $f \in M_+$ that
\begin{equation*}
\begin{split}
\int_{E} f \: d\lambda &\leq \int_{0}^{\lambda(E)} f^* \: d\lambda = \int_0^{1} f^* \: d\lambda +
\int_{1}^{\lambda(E)} f^* \: d\lambda \\
&\leq \int_0^{1} f^* \: d\lambda + (\lambda(E) - 1) f^*(1) \leq \lambda(E) \int_0^{1} f^* \: d\lambda \leq
\lambda(E) C_{[0,1]} \lVert f^*\chi_{[0,1]} \rVert_A,
\end{split}
\end{equation*}
where $C_{[0,1]}$ is the constant from the property \ref{P5} of $\lVert \cdot \rVert_A$ for the set $[0,1]$.
For the triangle inequality (part \ref{P1c} of axiom \ref{P1}) we employ the associate definition of~$\lVert \cdot \rVert_A$ (see Theorem~\ref{TDAS} and Proposition~\ref{PAS}) and the fact that $[0, \infty)$ is resonant to get for an arbitrary pair of functions $f, g \in M$ that
\begin{equation*}
\begin{split}
\lVert (f+g)^* \chi_{[0,1]} \rVert_A &= \sup_{\lVert h \rVert_{A'} \leq 1} \int_0^{\infty} (f+g)^* \chi_{[0,1]}h^* \: d\lambda \\
&= \sup_{\lVert h \rVert_{A'} \leq 1} \sup_{\tilde{h}^* = h^* \chi_{[0,1]}} \int_0^{\infty} (f+g) \tilde{h} \: d\lambda \\
&\leq \sup_{\lVert h \rVert_{A'} \leq 1} \sup_{\tilde{h}^* = h^* \chi_{[0,1]}} \int_0^{\infty} f \tilde{h} \: d\lambda + \sup_{\lVert h \rVert_{A'} \leq 1} \sup_{\tilde{h}^* = h^* \chi_{[0,1]}} \int_0^{\infty} g \tilde{h} \: d\lambda \\
&= \sup_{\lVert h \rVert_{A'} \leq 1} \int_0^{\infty} f^* \chi_{[0,1]}h^* \: d\lambda + \sup_{\lVert h \rVert_{A'} \leq 1} \int_0^{\infty} g^* \chi_{[0,1]}h^* \: d\lambda \\
&= \lVert f^* \chi_{[0,1]} \rVert_A + \lVert g^* \chi_{[0,1]} \rVert_A.
\end{split}
\end{equation*}
Thus we have shown that the functional in question is an r.i.~Banach function norm. It remains to show \eqref{PLCe}.
The first inequality in \eqref{PLCe} is trivial. For the second estimate, it suffices to observe that
\begin{equation*}
\lVert f^* \chi_{(1, \infty)} \rVert_{L^{\infty}} = f^*(1) \leq \int_0^{1} f^* \: d\lambda \leq C_{[0,1]} \lVert f^*\chi_{[0,1]} \rVert_A,
\end{equation*}
where $C_{[0,1]}$ is the constant from the property \ref{P5} of $\lVert \cdot \rVert_A$ for the set $[0,1]$.
\end{proof}
While the local component is an r.i.~Banach function norm, the global component is much less well behaved. Indeed, it is fairly easy to see that it cannot have the properties \ref{P1} and \ref{P5} (in \ref{P1} only part \ref{P1a} can possibly hold), because it cannot distinguish from zero any function that is supported on a set of measure less than one. Thus it makes no sense to consider it separately.
The following theorem shows that although a Wiener--Luxemburg quasinorm need not be a norm, it satisfies all the remaining axioms of r.i.~Banach function norms. Note that this result is not redundant, because in order to show that Wiener--Luxemburg amalgams are normable (see Corollary~\ref{CN}) we will use Theorem~\ref{TEQBFS} and Theorem~\ref{TFA}, and it is thus necessary to establish first that Wiener--Luxemburg quasinorms are quasi-Banach function norms that satisfy the axiom \ref{P5}.
\begin{theorem} \label{TQN}
The Wiener--Luxemburg quasinorms, as defined in Definition~\ref{DefWL}, are rearrangement-invariant quasi-Banach function norms and they also satisfy the axiom \ref{P5} from the definition of Banach function norms. Consequently, the corresponding Wiener--Luxemburg amalgam spaces are rearrangement-invariant quasi-Banach function spaces.
\end{theorem}
\begin{proof}
The properties \ref{P2}, \ref{P3} and \ref{P4} as well as those from parts \ref{Q1a} and \ref{Q1b} of the axiom \ref{Q1} are easy consequences of the respective properties of $\lVert \cdot \rVert_A$ and $\lVert \cdot \rVert_B$ and the properties of non-increasing rearrangement. Furthermore, the rearrangement invariance is obvious.
To show \ref{P5}, fix some set $E \subseteq [0, \infty)$ of finite measure. We may, without loss of generality, assume that $\lambda(E) > 1$, since otherwise the proof is similar but simpler. Then, by the Hardy--Littlewood inequality (Theorem~\ref{THLI}), it holds for every $f \in M_+$ that
\begin{equation*}
\begin{split}
\int_{E} f \: d\lambda \leq \int_{0}^{\lambda(E)} f^* \: d\lambda &= \int_0^{1} f^* \: d\lambda + \int_1^{\lambda(E)} f^* \: d\lambda \\
&\leq C_{[0,1]} \lVert f^* \chi_{[0,1]} \rVert_A + C_{(1, \lambda(E))} \lVert f^* \chi_{(1, \infty)} \rVert_B,
\end{split}
\end{equation*}
where $C_{[0,1]}$ is the constant from the property \ref{P5} of $\lVert \cdot \rVert_A$ for the set $[0,1]$ and $C_{(1, \lambda(E))}$ is the constant from the same property of $\lVert \cdot \rVert_B$ for the set $(1, \lambda(E ))$.
Finally, for the triangle inequality up to a multiplicative constant (part \ref{Q1c} of the axiom \ref{Q1}), consider the dilation operator $D_{\frac{1}{2}}$, as defined in Definition~\ref{DDO}, and use at first only the appropriate properties of non-increasing rearrangement and those of $\lVert \cdot \rVert_A$ and $\lVert \cdot \rVert_B$ to calculate
\begin{align*}
\lVert f+g \rVert_{WL(A, B)} &= \lVert (f+g)^* \chi_{[0,1]} \rVert_A + \lVert (f+g)^* \chi_{(1, \infty)} \rVert_B \\
& \; \begin{aligned}
\leq \lVert (D_{\frac{1}{2}} f^* + &D_{\frac{1}{2}} g^*) \chi_{[0,1]} \rVert_A \\
&+ \lVert (D_{\frac{1}{2}} f^* + D_{\frac{1}{2}} g^*) \chi_{(1, \infty)} \rVert_B
\end{aligned} \\
& \; \begin{aligned}
\leq \lVert D_{\frac{1}{2}} f^* &\chi_{[0,1]} \rVert_A + \lVert D_{\frac{1}{2}} g^* \chi_{[0,1]} \rVert_A \\
&+ \lVert D_{\frac{1}{2}} f^* \chi_{(1, \infty)} \rVert_B + \lVert D_{\frac{1}{2}} g^* \chi_{(1, \infty)} \rVert_B,
\end{aligned}
\end{align*}
which shows that it is sufficient to prove that there is some $C \in (0, \infty)$ such that
\begin{equation*}
\lVert D_{\frac{1}{2}} f^* \chi_{[0,1]} \rVert_A + \lVert D_{\frac{1}{2}} f^* \chi_{(1, \infty)} \rVert_B \leq C \lVert f \rVert_{WL(A, B)}
\end{equation*}
for all $f \in M_+$. Actually, it suffices to show
\begin{equation} \label{TQN1}
\lVert D_{\frac{1}{2}} f^* \chi_{(1, \infty)} \rVert_B \leq C \lVert f \rVert_{WL(A, B)},
\end{equation}
because $D_{\frac{1}{2}}$ is bounded on $A$ (by Theorem~\ref{TDRIS}), and thus
\begin{equation*}
\lVert D_{\frac{1}{2}} f^* \chi_{[0,1]} \rVert_A = \lVert D_{\frac{1}{2}}( f^* \chi_{[0,\frac{1}{2}]}) \rVert_A \leq \lVert D_{\frac{1}{2}} \rVert \lVert f^* \chi_{[0,\frac{1}{2}]} \rVert_A \leq \lVert D_{\frac{1}{2}} \rVert \lVert f^* \chi_{[0,1]} \rVert_A.
\end{equation*}
To show \eqref{TQN1}, fix some $f \in M_+$ and calculate
\begin{equation*}
\begin{split}
\lVert D_{\frac{1}{2}} f^* \chi_{(1, \infty)} \rVert_B &= \lVert D_{\frac{1}{2}} (f^* \chi_{(\frac{1}{2}, \infty)}) \rVert_B \\
&\leq \lVert D_{\frac{1}{2}} \rVert \lVert f^* \chi_{(\frac{1}{2}, \infty)} \rVert_B \\
&\leq \lVert D_{\frac{1}{2}} \rVert (\lVert f^* \chi_{(1, \infty)} \rVert_B + \lVert f^* \chi_{(\frac{1}{2}, 1)} \rVert_B) \\
&\leq \lVert D_{\frac{1}{2}} \rVert (\lVert f^* \chi_{(1, \infty)} \rVert_B + f^*(\tfrac{1}{2}) \lVert \chi_{(\frac{1}{2}, 1)} \rVert_B) \\
&\leq \lVert D_{\frac{1}{2}} \rVert (\lVert f^* \chi_{(1, \infty)} \rVert_B + \lVert \chi_{(\frac{1}{2}, 1)} \rVert_B \lVert \chi_{(0, \frac{1}{2})} \rVert_A^{-1} \lVert f^*(\tfrac{1}{2}) \chi_{(0, \frac{1}{2})} \rVert_A) \\
&\leq \lVert D_{\frac{1}{2}} \rVert (\lVert f^* \chi_{(1, \infty)} \rVert_B + \lVert \chi_{(\frac{1}{2}, 1)} \rVert_B \lVert \chi_{(0, \frac{1}{2})} \rVert_A^{-1} \lVert f^* \chi_{[0,1]} \rVert_A) \\
&\leq \lVert D_{\frac{1}{2}} \rVert \max \{1, \lVert \chi_{(\frac{1}{2}, 1)} \rVert_B \lVert \chi_{(0, \frac{1}{2})} \rVert_A^{-1}\} \lVert f \rVert_{WL(A, B)}.
\end{split}
\end{equation*}
This concludes the proof.
\end{proof}
\subsection{Associate spaces of Wiener--Luxemburg amalgams}
Let us now turn our attention to the associate spaces of Wiener--Luxemburg amalgams. The natural intuition here is that an associate space of an amalgam should be an amalgam of the respective associate spaces. This intuition turns out to be correct, as can be observed from the theorem presented below.
Note that while the conclusion of this theorem is natural, its proof is in fact quite involved. The difficulty stems from the fact that for a general function $f \in M$ the restriction of its non-increasing rearrangement $f^* \chi_{(1, \infty)}$ is not necessarily non-increasing and thus the quasinorm $\lVert f^* \chi_{(1, \infty)} \rVert_{WL(A, B)}$ actually depends not only on $\lVert \cdot \rVert_B$ but also on $\lVert \cdot \rVert_A$. This complicates things greatly and an entirely new method had to be developed to resolve the ensuing problems.
\begin{theorem} \label{TAS}
Let $\lVert \cdot \rVert_A$ and $\lVert \cdot \rVert_B$ be r.i.~Banach function norms and let $\lVert \cdot \rVert_{A'}$ and $\lVert \cdot \rVert_{B'}$ be their respective associate norms. Then there is a constant $C>0$ such that the associate norm $\lVert \cdot \rVert_{(WL(A, B))'}$ of $\lVert \cdot \rVert_{WL(A, B)}$ satisfies
\begin{equation}\label{TAS1}
\lVert f \rVert_{(WL(A, B))'} \leq \lVert f \rVert_{WL(A', B')} \leq C \lVert f \rVert_{(WL(A, B))'}
\end{equation}
for every $f \in M$.
Consequently, the corresponding associate space satisfies
\begin{equation*}
(WL(A, B))' = WL(A', B'),
\end{equation*}
up to equivalence of defining functionals.
\end{theorem}
\begin{proof}
We begin by showing the first inequality in \eqref{TAS1}. To this end, fix some $f \in M$ and arbitrary $g \in M$ satisfying $\lVert g \rVert_{WL(A, B)} < \infty$. Then it follows from the Hölder inequality for associate spaces (Theorem~\ref{THAS}) that
\begin{equation*}
\begin{split}
\int_0^{\infty} f^* g^* \: d\lambda &= \int_0^{\infty} f^*\chi_{[0,1]} g^* \: d\lambda + \int_0^{\infty} f^* \chi_{(1, \infty)} g^* \: d\lambda \\
&\leq \lVert f^* \chi_{[0,1]} \rVert_{A'} \lVert g^* \chi_{[0,1]} \rVert_A + \lVert f^* \chi_{(1,\infty)} \rVert_{B'} \lVert g^* \chi_{(1,\infty)} \rVert_B \\
&\leq \max \{\lVert f^* \chi_{[0,1]} \rVert_{A'}, \: \lVert f^* \chi_{(1,\infty)} \rVert_{B'} \} \cdot \lVert g \rVert_{WL(A, B)} \\
&\leq \lVert f \rVert_{WL(A', B')} \lVert g \rVert_{WL(A, B)}.
\end{split}
\end{equation*}
The desired inequality now follows by dividing both sides of the inequality by $\lVert g \rVert_{WL(A, B)}$, taking the supremum over $WL(A,B)$ and using Proposition~\ref{PAS}.
The second inequality in \eqref{TAS1} is more involved. We obtain it indirectly, showing first that $(WL(A,B))' \subseteq WL(A', B')$ and then using Theorem~\ref{TEQBFS}.
Suppose that $f \notin WL(A', B')$. Then at least one of the following holds: $f^* \chi_{[0,1]} \notin A'$ or $f^* \chi_{(1,\infty)} \notin B'$. We treat these two cases separately and show that either is sufficient for $f \notin (WL(A,B))'$.
If $f^* \chi_{[0,1]} \notin A'$ then we get by Theorem~\ref{LT} that there is a non-negative function $g \in A$ such that
\begin{equation*}
\int_0^{\infty} f^* \chi_{[0,1]} g \: d\lambda = \infty.
\end{equation*}
Now, $g^* \chi_{[0,1]} \in WL(A,B)$ because
\begin{equation*}
\lVert g^* \chi_{[0,1]} \rVert_{WL(A,B)} = \lVert g^* \chi_{[0,1]} \rVert_A \leq \lVert g^* \rVert_A = \lVert g \rVert_A < \infty
\end{equation*}
and we have by the Hardy--Littlewood inequality (Theorem~\ref{THLI}) the following estimate:
\begin{equation*}
\infty = \int_0^{\infty} f^* \chi_{[0,1]} g \: d\lambda \leq \int_0^{\infty} f^* g^* \chi_{[0,1]} \: d\lambda.
\end{equation*}
We have thus shown that $f \notin (WL(A,B))'$.
Suppose now that $f^* \chi_{(1,\infty)} \notin B'$. We may assume that $f^*(1) < \infty$, because otherwise $f^* \chi_{[0,1]} = \infty \chi_{[0,1]} \notin A'$ and thus $f \notin (WL(A,B))'$ by the argument above. As in the previous case, we get by Theorem~\ref{LT} that there is some non-negative function $g \in B$ such that
\begin{equation*}
\int_0^{\infty} f^* \chi_{(1,\infty)} g \: d\lambda = \infty.
\end{equation*}
Now, it holds for all $t \in (0, \infty)$ that
\begin{equation*}
(f^* \chi_{(1,\infty)})^*(t) = f^*(t+1),
\end{equation*}
which, when combined with the Hardy--Littlewood inequality (Theorem~\ref{THLI}), yields
\begin{equation*}
\infty = \int_0^{\infty} f^* \chi_{(1,\infty)} g \: d\lambda \leq \int_0^{\infty} f^*(t+1) g^*(t) \: dt = \int_1^{\infty} f^*(t) g^*(t-1) \: dt.
\end{equation*}
If we now put
\begin{equation*}
\tilde{g}(t) = \begin{cases}
0 & \text{for } t \in [0,1], \\
g^*(t-1) & \text{for } t \in (1, \infty),
\end{cases}
\end{equation*}
we immediately see that $\tilde{g}^* = g^*$ and thus we have found a function $\tilde{g} \in B$ that is zero on $[0, 1]$, non-increasing on $(1, \infty)$ and that satisfies
\begin{equation*}
\int_0^{\infty} f^* \chi_{(1,\infty)} \tilde{g} \: d\lambda = \infty.
\end{equation*}
Furthermore, we may estimate
\begin{equation*}
\int_1^{2} f^* \tilde{g} \: d\lambda \leq f^*(1) \int_1^2 \tilde{g} \: d\lambda \leq f^*(1)C_{[1,2]} \lVert \tilde{g} \rVert_{B} < \infty,
\end{equation*}
where $C_{[1,2]}$ is the constant from property \ref{P5} of $\lVert \cdot \rVert_B$. It follows that
\begin{equation*}
\int_2^{\infty} f^* \tilde{g} \: d\lambda = \infty,
\end{equation*}
because
\begin{equation*}
\infty = \int_0^{\infty} f^* \chi_{(1,\infty)} \tilde{g} \: d\lambda = \int_1^{2} f^* \tilde{g} \: d\lambda + \int_2^{\infty} f^* \tilde{g} \: d\lambda.
\end{equation*}
Finally, put $h = \tilde{g}(2) \chi_{[0,1]} + \min\{\tilde{g}, \tilde{g}(2)\}$. Note that $\tilde{g}(2) < \infty$ as follows from $\tilde{g} \in B$ and that $h$ is therefore a finite non-increasing function. Hence, we get that
\begin{equation*}
\lVert h \rVert_{WL(A,B)} = \tilde{g}(2) \lVert \chi_{(0,1)} \rVert_A + \lVert \min\{\tilde{g}, \tilde{g}(2)\} \rVert_{B} \leq \tilde{g}(2) \lVert \chi_{(0,1)} \rVert_A + \lVert \tilde{g} \rVert_{B} < \infty,
\end{equation*}
while by the arguments above we have
\begin{equation*}
\int_0^{\infty} f^* h^* \: d\lambda \geq \int_2^{\infty} f^* h^* \: d\lambda = \int_2^{\infty} f^* \tilde{g} \: d\lambda = \infty.
\end{equation*}
We therefore get that $f \notin (WL(A,B))'$. This covers the last case and establishes the desired inclusion $(WL(A,B))' \subseteq WL(A', B')$.
Because we already know from Theorem~\ref{TQN} that $WL(A', B')$ is a quasi-Banach function space and from Theorem~\ref{TFA} that $(WL(A,B))'$ is a Banach function space, we may use Theorem~\ref{TEQBFS} to obtain $(WL(A,B))' \hookrightarrow WL(A', B')$, i.e.~that there is a constant $C>0$ such that it holds for all $f \in M$ that
\begin{equation*}
\lVert f \rVert_{WL(A', B')} \leq C \lVert f \rVert_{(WL(A, B))'},
\end{equation*}
which concludes the proof.
\end{proof}
As a corollary, we obtain normability of Wiener--Luxemburg amalgam spaces.
\begin{corollary} \label{CN}
Let $\lVert \cdot \rVert_A$ and $\lVert \cdot \rVert_B$ be r.i.~Banach function norms. Then the Wiener--Luxemburg quasinorm $\lVert \cdot \rVert_{WL(A, B)}$ is equivalent to an r.i.~Banach function norm. Consequently, the Wiener--Luxemburg amalgam space $WL(A,B)$ is an r.i.~Banach function space.
\end{corollary}
\begin{proof}
It follows from Theorem~\ref{TAS} that
\begin{equation*}
WL(A, B) = WL(A'', B'') = (WL(A',B'))',
\end{equation*}
where the space on the right-hand side is a Banach function space thanks to Theorem~\ref{TFA} and Theorem~\ref{TQN}.
\end{proof}
\subsection{Embeddings}
We now examine the embeddings of Wiener--Luxemburg amalgams. First we characterise the embeddings between two Wiener--Luxemburg amalgams, then between a Wiener--Luxemburg amalgam of two spaces and the sum or intersection of these spaces and finally we examine the case when the local or the global component is $L^1$ or $L^{\infty}$.
The first theorem provides the characterisation of embeddings among Wiener--Luxemburg amalgams.
\begin{theorem} \label{TEM}
Let $\lVert \cdot \rVert_A$, $\lVert \cdot \rVert_B$, $\lVert \cdot \rVert_C$ and $\lVert \cdot \rVert_D$ be r.i.~Banach function norms. Then the following assertions are true:
\begin{enumerate}
\item The embedding $WL(A, C) \hookrightarrow WL(B,C)$ holds if and only if the local component of $\lVert \cdot \rVert_A$ is stronger than that of $\lVert \cdot \rVert_B$, in the sense that for every $f \in M$ the implication
\begin{equation*}
\lVert f^* \chi_{[0,1]} \rVert_A < \infty \Rightarrow \lVert f^* \chi_{[0,1]} \rVert_B < \infty
\end{equation*} \label{TEMp1}
holds.
\item The embedding $WL(A, B) \hookrightarrow WL(A,C)$ holds if and only if the global component of $\lVert \cdot \rVert_B$ is stronger than that of $\lVert \cdot \rVert_C$, in the sense that for every $f \in M$ such that $f^*(1) < \infty$ the implication
\begin{equation*}
\lVert f^* \chi_{(1, \infty)} \rVert_B < \infty \Rightarrow \lVert f^* \chi_{(1, \infty)} \rVert_C < \infty
\end{equation*}
holds. \label{TEMp2}
\item The embedding $WL(A,B) \hookrightarrow WL(C,D)$ holds if and only if the local component of $\lVert \cdot \rVert_A$ is stronger than that of $\lVert \cdot \rVert_C$ and the global component of $\lVert \cdot \rVert_B$ is stronger than that of $\lVert \cdot \rVert_D$. \label{TEMp3}
\end{enumerate}
\end{theorem}
\begin{proof}
In the first two cases the sufficiency follows directly from Theorem~\ref{TEQBFS} and Definition~\ref{DefWL}; in the second case one only has to realise that all $f \in WL(A, B)$ satisfy $f^*(1) < \infty$. The third case then follows, because we may use what we have already proved to get
\begin{equation*}
WL(A,B) \hookrightarrow WL(A,D) \hookrightarrow WL(C,D).
\end{equation*}
The necessity in the case~\ref{TEMp1} can be shown in the following way. Fix some $f_0 \in M$ such that $\lVert f_0^* \chi_{[0,1]} \rVert_A < \infty$ but $\lVert f_0^* \chi_{[0,1]} \rVert_B = \infty$. Then $f = f_0^* \chi_{[0,1]}$ belongs to $WL(A,C)$, since
\begin{align*}
\lVert f^* \chi_{[0,1]} \rVert_A &= \lVert f_0^* \chi_{[0,1]} \rVert_A < \infty, \\
\lVert f^* \chi_{(1, \infty)} \rVert_C &= \lVert 0 \rVert_C = 0,
\end{align*}
but not to $WL(B,C)$, since
\begin{equation*}
\lVert f^* \chi_{[0,1]} \rVert_B = \lVert f_0^* \chi_{[0,1]} \rVert_B = \infty.
\end{equation*}
As for the case~\ref{TEMp2}, fix some $f_0 \in M$ such that $f_0^*(1) < \infty$ and $\lVert f_0^* \chi_{(1, \infty)} \rVert_B < \infty$ while $\lVert f_0^* \chi_{(1, \infty)} \rVert_C = \infty$. Then $f = f_0^*(1) \chi_{[0,1]} + f_0^* \chi_{(1, \infty)}$ belongs to $WL(A,B)$, since
\begin{align*}
\lVert f^* \chi_{[0,1]} \rVert_A &= \lVert f_0^*(1) \chi_{[0,1]} \rVert_A = f_0^*(1) \lVert \chi_{[0,1]} \rVert_A < \infty, \\
\lVert f^* \chi_{(1, \infty)} \rVert_B &= \lVert f^*_0 \chi_{(1, \infty)} \rVert_B < \infty,
\end{align*}
but not to $WL(A,C)$, since
\begin{equation*}
\lVert f^* \chi_{(1, \infty)} \rVert_C = \lVert f^*_0 \chi_{(1, \infty)} \rVert_C = \infty.
\end{equation*}
Finally, for the case~\ref{TEMp3} one needs to combine the steps presented above. To be precise, if the condition on the local components is violated, one finds a counterexample as in the case~\ref{TEMp1}, while if the condition on the global components is violated, one finds it as in the case~\ref{TEMp2}.
\end{proof}
To provide an example we turn to the classical Lebesgue spaces. It is well known that Lebesgue spaces over $[0,\infty)$ are not ordered, but it is easy to show that their local and global components are, as is formalised in the following remark.
\begin{remark} \label{RELp}
Let $p,q \in (0,\infty]$. Then it holds that
\begin{enumerate}
\item the local component of $\lVert \cdot \rVert_{L^p}$ is stronger than that of $\lVert \cdot \rVert_{L^q}$ if and only if $p \geq q$,
\item the global component of $\lVert \cdot \rVert_{L^p}$ is stronger than that of $\lVert \cdot \rVert_{L^q}$ if and only if $p \leq q$.
\end{enumerate}
\end{remark}
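The remark above can be illustrated numerically. The following sketch is not part of the theory: it merely evaluates, in closed form, the local and global $L^p$-quasinorms of the arbitrarily chosen sample rearrangement $f^*(t) = t^{-1/2}$, for which $f^* \chi_{[0,1]} \in L^p$ exactly when $p < 2$ and $f^* \chi_{(1,\infty)} \in L^p$ exactly when $p > 2$, in agreement with the stated ordering of the components.

```python
import math

def lp_local(alpha: float, p: float) -> float:
    """Integral of t^(-alpha*p) over (0, 1), i.e. the p-th power of the
    local component ||f* chi_[0,1]||_{L^p} for f*(t) = t^(-alpha);
    returns math.inf when the integral diverges."""
    e = alpha * p
    return math.inf if e >= 1 else 1.0 / (1.0 - e)

def lp_global(alpha: float, p: float) -> float:
    """Integral of t^(-alpha*p) over (1, inf), i.e. the p-th power of the
    global component ||f* chi_(1,inf)||_{L^p} for f*(t) = t^(-alpha);
    returns math.inf when the integral diverges."""
    e = alpha * p
    return math.inf if e <= 1 else 1.0 / (e - 1.0)

alpha = 0.5  # sample function f*(t) = t^(-1/2), chosen only for illustration

# Locally, f* lies in L^1 but not in L^3: the local component of L^3
# is stronger than that of L^1 (larger p, stronger local component).
assert lp_local(alpha, 1) < math.inf
assert lp_local(alpha, 3) == math.inf

# Globally the ordering reverses: f* lies in L^3 but not in L^1.
assert lp_global(alpha, 3) < math.inf
assert lp_global(alpha, 1) == math.inf
```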
As a second example we present a similar statement about Lorentz spaces. The proof is easy and uses only standard techniques.
\begin{remark} \label{RELpq}
Let $p_1, p_2, q_1, q_2 \in (0,\infty]$ such that $p_1 \neq p_2$ and let $\lVert \cdot \rVert_{L^{p_1,q_1}}$ and $\lVert \cdot \rVert_{L^{p_2,q_2}}$ be the corresponding Lorentz functionals. Then it holds that
\begin{enumerate}
\item the local component of $\lVert \cdot \rVert_{L^{p_1,q_1}}$ is stronger than that of $\lVert \cdot \rVert_{L^{p_2,q_2}}$ if and only if $p_1 > p_2$,
\item the global component of $\lVert \cdot \rVert_{L^{p_1,q_1}}$ is stronger than that of $\lVert \cdot \rVert_{L^{p_2,q_2}}$ if and only if $p_1 < p_2$.
\end{enumerate}
\end{remark}
A third example may be found among Orlicz spaces. The following remark contains sufficient conditions for the ordering of their respective local and global components. It is again easy to prove, using only the well-known methods originally developed for characterising the embeddings between Orlicz spaces, which can be found for example in \cite[Theorem~4.17.1]{FucikKufner13}. We also refer the reader to \cite[Chapter~4]{FucikKufner13} for an extensive treatment of Orlicz spaces.
\begin{remark}
Let $\Phi_1$ and $\Phi_2$ be two Young functions and let $\lVert \cdot \rVert_{\Phi_1}$ and $\lVert \cdot \rVert_{\Phi_2}$ be the corresponding Orlicz norms. Then
\begin{enumerate}
\item if there are some constants $c, T \in (0, \infty)$ such that
\begin{equation*}
\Phi_2(t) \leq \Phi_1(ct)
\end{equation*}
for all $t \in [T, \infty)$ then the local component of $\lVert \cdot \rVert_{\Phi_1}$ is stronger than that of $\lVert \cdot \rVert_{\Phi_2}$,
\item if there are some constants $c, T \in (0, \infty)$ such that
\begin{equation*}
\Phi_2(t) \leq \Phi_1(ct)
\end{equation*}
for all $t \in [0, T]$ then the global component of $\lVert \cdot \rVert_{\Phi_1}$ is stronger than that of $\lVert \cdot \rVert_{\Phi_2}$.
\end{enumerate}
\end{remark}
We now put $WL(A,B)$ in relation with the sum and intersection of $A$ and $B$. We first show that $WL(A,B)$ is always sandwiched between them.
\begin{theorem}\label{TRS}
Let $\lVert \cdot \rVert_A$ and $\lVert \cdot \rVert_B$ be r.i.~Banach function norms. Then
\begin{equation*}
A \cap B \hookrightarrow WL(A,B) \hookrightarrow A + B.
\end{equation*}
\end{theorem}
\begin{proof}
Fix some $f \in M$. Then
\begin{equation*}
\lVert f \rVert_{WL(A, B)} = \lVert f^* \chi_{[0,1]} \rVert_A + \lVert f^* \chi_{(1, \infty)} \rVert_B \leq \lVert f \rVert_A + \lVert f \rVert_B \leq 2 \lVert f \rVert_{A \cap B}
\end{equation*}
which establishes the first embedding.
As for the second embedding, note that we may consider $f$ to be non-negative, since it is easy to show that it holds for every $f \in M$ that $\lVert f \rVert_{A+B} = \lVert \, \lvert f \rvert \, \rVert_{A+B}$.
Consider now functions $g$ and $h$ defined by
\begin{align*}
g &= \max \{ f - f^*(1), 0\}, \\
h &= \min \{ f, f^*(1)\}.
\end{align*}
Then $f = g + h$ and thus
\begin{equation*}
\begin{split}
\lVert f \rVert_{A+B} &\leq \lVert g \rVert_A + \lVert h \rVert_B = \lVert g^* \rVert_A + \lVert h^* \rVert_B
\end{split}
\end{equation*}
thanks to the rearrangement invariance of both $\lVert \cdot \rVert_A$ and $\lVert \cdot \rVert_B$. Furthermore, thanks to $f$ being non-negative, it is an exercise to verify that
\begin{align*}
g^* &= (f^* - f^*(1)) \chi_{[0,1]}, \\
h^* &= f^*(1) \chi_{[0,1]} + f^* \chi_{(1, \infty)},
\end{align*}
and therefore
\begin{equation*}
\begin{split}
\lVert f \rVert_{A+B} &\leq \lVert f^* \chi_{[0,1]} \rVert_A + \lVert f^*(1) \chi_{[0,1]} \rVert_B + \lVert f^* \chi_{(1, \infty)} \rVert_B \\
&\leq \lVert f \rVert_{WL(A, B)} + \lVert \chi_{[0,1]} \rVert_B \int_0^1 f^* \: d\lambda \\
&\leq (1 + C_{[0,1]} \lVert \chi_{[0,1]} \rVert_B ) \lVert f \rVert_{WL(A, B)},
\end{split}
\end{equation*}
where $C_{[0,1]}$ is the constant from the property \ref{P5} of $\lVert \cdot \rVert_A$ for the set $[0,1]$. This establishes the second embedding.
\end{proof}
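The decomposition $f = g + h$ used in the proof above can be checked numerically. The sketch below takes $f$ non-increasing (so that $f$ coincides with $f^*$) and verifies pointwise on a grid that $g = \max\{f - f^*(1), 0\}$ and $h = \min\{f, f^*(1)\}$ sum to $f$, that $g$ is supported on $(0,1)$, and that $h$ agrees with $f^*(1)$ on $(0,1]$ and with $f^*$ beyond. The sample function is chosen arbitrarily for illustration.

```python
def f_star(t: float) -> float:
    """An arbitrarily chosen non-increasing sample function on (0, inf)."""
    return 1.0 / (1.0 + t)

N, T = 10_000, 5.0                       # midpoints of a grid on (0, 5]
ts = [(k + 0.5) * T / N for k in range(N)]

f1 = f_star(1.0)                         # the value f*(1)

for t in ts:
    f = f_star(t)
    g = max(f - f1, 0.0)                 # the "local" part of f
    h = min(f, f1)                       # the "global" part of f
    # f = g + h pointwise:
    assert abs((g + h) - f) < 1e-12
    # g is supported exactly where f* exceeds f*(1), i.e. on (0, 1):
    assert (g > 0) == (t < 1.0)
    # h equals f*(1) on (0, 1] and equals f* on (1, inf), matching
    # h* = f*(1) chi_[0,1] + f* chi_(1,inf):
    expected_h = f1 if t <= 1.0 else f
    assert abs(h - expected_h) < 1e-12
```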
Moreover, in the case when we have proper relations between the respective components of $A$ and $B$ we can describe their sum and intersection in terms of Wiener--Luxemburg amalgams, at least in the set theoretical sense.
\begin{corollary}\label{CIS}
Let $\lVert \cdot \rVert_A$ and $\lVert \cdot \rVert_B$ be r.i.~Banach function norms. Suppose that the local component of $\lVert \cdot \rVert_A$ is stronger than that of $\lVert \cdot \rVert_B$ while the global component of $\lVert \cdot \rVert_B$ is stronger than that of $\lVert \cdot \rVert_A$. Then
\begin{equation*}
A \cap B = WL(A,B)
\end{equation*}
up to equivalence of quasinorms, while
\begin{equation} \label{CIS1}
A + B = WL(B,A)
\end{equation}
as sets.
\end{corollary}
\begin{proof}
Thanks to Proposition~\ref{PIBFS}, Theorem~\ref{TEQBFS} and Theorem~\ref{TRS} it suffices to prove that $WL(A,B) \subseteq A \cap B$ and $A + B \subseteq WL(B,A)$. But this is provided by Theorem~\ref{TEM} and Remark~\ref{RAS}, which, thanks to our assumptions, yield
\begin{align*}
WL(A,B) &\hookrightarrow A, \\
WL(A,B) &\hookrightarrow B, \\
A &\hookrightarrow WL(B,A), \\
B &\hookrightarrow WL(B,A).
\end{align*}
When combined with the fact that $WL(B,A)$ is a linear set, this is sufficient for the inclusions.
\end{proof}
The reason why the equality \eqref{CIS1} holds only in the set-theoretical sense is, of course, that $A+B$ is not necessarily a Banach function space. This also motivates the following observation.
\begin{corollary}
Let $\lVert \cdot \rVert_A$ and $\lVert \cdot \rVert_B$ be r.i.~Banach function norms. Suppose that the local component of $\lVert \cdot \rVert_A$ is stronger than that of $\lVert \cdot \rVert_B$ while the global component of $\lVert \cdot \rVert_B$ is stronger than that of $\lVert \cdot \rVert_A$. Then there is an r.i.~Banach function norm $\lVert \cdot \rVert_X$ such that there is a constant $C >0$ satisfying
\begin{equation*}
\lVert f \rVert_{A+B} \leq C \lVert f \rVert_X
\end{equation*}
for all $f \in A+B$ and such that the corresponding r.i.~Banach function space $X$ satisfies
\begin{equation*}
X = A + B
\end{equation*}
as a set.
\end{corollary}
In the next theorem, we show that the classical Lebesgue space $L^1$ has the weakest local component, as well as the strongest global component, among all r.i.~Banach function spaces, while $L^{\infty}$ has, in the same context, the strongest local component as well as the weakest global component.
\begin{theorem} \label{TLL}
Let $\lVert \cdot \rVert_A$ and $\lVert \cdot \rVert_B$ be r.i.~Banach function norms and let $A$ and $B$ be the corresponding r.i.~Banach function spaces. Then
\begin{enumerate}
\item $WL(L^{\infty}, B) \hookrightarrow WL(A,B)$, \label{TLLp1}
\item $WL(A, L^1) \hookrightarrow WL(A,B)$, \label{TLLp2}
\item $WL(A, B) \hookrightarrow WL(L^1, B)$, \label{TLLp3}
\item $WL(A, B) \hookrightarrow WL(A, L^{\infty})$. \label{TLLp4}
\end{enumerate}
\end{theorem}
\begin{proof}
Fix $f \in M$. The first embedding follows from the estimate
\begin{equation*}
\lVert f^* \chi_{[0,1]} \rVert_A \leq \lVert f^* \chi_{[0,1]} \rVert_{L^{\infty}} \lVert \chi_{[0,1]} \rVert_A
\end{equation*}
and part~\ref{TEMp1} of Theorem~\ref{TEM}.
The third embedding also uses part~\ref{TEMp1} of Theorem~\ref{TEM} but this time paired with the estimate
\begin{equation*}
\lVert f^* \chi_{[0,1]} \rVert_{L^1} = \int_0^1 f^* \chi_{[0,1]} \: d\lambda \leq C_{[0,1]} \lVert f^* \chi_{[0,1]} \rVert_A,
\end{equation*}
where $C_{[0,1]}$ is the constant from property \ref{P5} of $\lVert \cdot \rVert_A$.
The fourth embedding is a trivial consequence of Proposition~\ref{PLC}, specifically of \eqref{PLCe}.
The second embedding is most involved. Denote by $\lVert \cdot \rVert_{B'}$ the associate norm of $\lVert \cdot \rVert_B$ and by $B'$ the associate space of $B$. Then we know from part~\ref{TLLp4}, which has already been proved, and Remark~\ref{RAS} that $B' \hookrightarrow WL(B',L^{\infty})$. Thus it follows from Theorem~\ref{TAS} and Proposition~\ref{PEASG} that
\begin{equation*}
WL(B, L^1) = (WL(B', L^{\infty}))' \hookrightarrow B'' = B.
\end{equation*}
Hence, it follows from part~\ref{TEMp2} of Theorem~\ref{TEM} that the global component of $\lVert \cdot \rVert_{L^1}$ is stronger than that of $\lVert \cdot \rVert_B$ which, by the same theorem, implies the desired embedding.
\end{proof}
As a corollary, we obtain the following well-known classical result, for which we thus provide an alternative proof.
\begin{corollary}
Let $A$ be an r.i.~Banach function space. Then
\begin{equation*}
L^1 \cap L^{\infty} \hookrightarrow A \hookrightarrow L^1 + L^{\infty}.
\end{equation*}
\end{corollary}
\begin{proof}
The assertion is a direct consequence of Remark~\ref{RELp}, Corollary~\ref{CIS}, Theorem~\ref{TLL}, Theorem~\ref{TEQBFS} and the fact that $L^1 + L^{\infty}$ is an r.i.~Banach function space.
\end{proof}
The final result of this section is a more precise version of Proposition~\ref{PEASG}.
\begin{proposition} \label{PEAS}
Let $\lVert \cdot \rVert_A$ and $\lVert \cdot \rVert_B$ be r.i.~Banach function norms and denote by $\lVert \cdot \rVert_ {A'}$ and $\lVert \cdot \rVert_{B'}$ the respective associate norms. Suppose that the local component of $\lVert \cdot \rVert_A$ is stronger than that of $\lVert \cdot \rVert_B$. Then the local component of $\lVert \cdot \rVert_{B'}$ is stronger than that of $\lVert \cdot \rVert_{A'}$.
Similarly, if the global component of $\lVert \cdot \rVert_A$ is stronger than that of $\lVert \cdot \rVert_B$, then the global component of $\lVert \cdot \rVert_{B'}$ is stronger than that of $\lVert \cdot \rVert_{A'}$.
\end{proposition}
\begin{proof}
By our assumption and part~\ref{TEMp1} of~Theorem~\ref{TEM} we get that
\begin{equation*}
WL(A, L^{\infty}) \hookrightarrow WL(B, L^{\infty}).
\end{equation*}
Consequently, it follows from Theorem~\ref{TAS} and Proposition~\ref{PEASG} that
\begin{equation*}
WL(B', L^1) = (WL(B, L^{\infty}))' \hookrightarrow (WL(A, L^{\infty}))' = WL(A', L^1),
\end{equation*}
that is, the local component of $\lVert \cdot \rVert_{B'}$ is stronger than that of $\lVert \cdot \rVert_ {A'}$.
The second claim can be proved in a similar manner, only using $WL(L^1, A)$ and $WL(L^1, B)$ instead of $WL(A, L^{\infty})$ and $WL(B, L^{\infty})$.
\end{proof}
\section{Integrable associate spaces} \label{CHIAS}
In this section we introduce a generalisation of the concept of associate spaces (see Definition~\ref{DAS}), the need for which arose naturally during the study of associate spaces of Wiener--Luxemburg amalgams of r.i.~quasi-Banach function spaces. We then study some of its properties, mainly those directly needed for our purposes.
Unlike in the other parts of the paper, we will now work in a more abstract setting and assume only that $(R, \mu)$ is a resonant measure space.
Our terminology and notation in this section is inspired by the down associate norm which is derived from the concept of down norms. To those interested in this topic we suggest the papers \cite{EdmundsKerman00} and \cite{Sinnamon01}.
\subsection{Basic properties}
Our definition of integrable associate spaces is rather indirect. We proceed by first introducing a certain subspace of an arbitrary r.i.~quasi-Banach function space and then defining the integrable associate space as an associate space of this subspace.
\begin{definition}
Let $\lVert \cdot \rVert_X$ be an r.i.~quasi-Banach function norm and let $X$ be the corresponding r.i.~quasi-Banach function space. Then the functional $\lVert \cdot \rVert_{X_i}$, defined for $f \in M$ by
\begin{equation*}
\lVert f \rVert_{X_i} = \max \{ \lVert f \rVert_X, \, \lVert f \rVert_{WL(L^1, L^{\infty})} \},
\end{equation*}
will be called the integrable norm of $\lVert \cdot \rVert_X$, while the space
\begin{equation*}
X_i = \left \{ f \in M; \; \lVert f \rVert_{X_i} < \infty \right \}
\end{equation*}
will be called the integrable subspace of $X$.
\end{definition}
\begin{theorem} \label{TISS}
Let $\lVert \cdot \rVert_X$ be an r.i.~quasi-Banach function norm and let $X$ be the corresponding r.i.~quasi-Banach function space. Denote by $C$ the modulus of concavity of $\lVert \cdot \rVert_X$. Then the functional $\lVert \cdot \rVert_{X_i}$ is an r.i.~quasi-Banach function norm that has the property \ref{P5}. Furthermore, if $C = 1$, then $\lVert \cdot \rVert_{X_i}$ is an r.i.~Banach function norm.
Moreover, the space $X_i$ satisfies
\begin{equation} \label{TISS1}
X_i = \left \{ f \in X; \; \int_0^{1} f^* \: d\lambda < \infty \right \}
\end{equation}
and thus $X_i \hookrightarrow X$. On the other hand, if $\lVert \cdot \rVert_X$ has the property \ref{P5} then $X_i = X$ up to equivalence of quasinorms.
\end{theorem}
\begin{proof}
Given that we know from Proposition~\ref{PLC} that $\lVert \cdot \rVert_{WL(L^1, L^{\infty})}$ is an r.i.~Banach function norm, it is an easy exercise to show that the first part of the theorem holds, i.e. that $\lVert \cdot \rVert_{X_i}$ has the asserted properties. The characterisation of $X_i$ also follows from Proposition~\ref{PLC} while the embedding $X_i \hookrightarrow X$ can then be obtained via Theorem~\ref{TEQBFS} (or directly from the definition of $\lVert \cdot \rVert_{X_i}$).
The last part is slightly more involved. Note that we cannot simply use the Luxemburg representation theorem to show that every $f \in X$ belongs to $X_i$, since $\lVert \cdot \rVert_X$ need not be a norm. We thus proceed as follows.
We will assume that $\mu(R) \geq 1$; the remaining case is easier. If $\lVert \cdot \rVert_X$ has the property \ref{P5}, then its associate norm $\lVert \cdot \rVert_{X'}$ is an r.i.~Banach function norm and we can use it, as well as our assumption that the underlying measure space is resonant, to obtain an estimate on
\begin{equation*}
\int_0^{1} f^* \: d\lambda.
\end{equation*}
To this end, we use the $\sigma$-finiteness of $(R, \mu)$ to find some $a \in [1, \infty)$ such that there is a set $E$ with $\mu(E) = a$. Then it follows from the Hölder inequality for associate spaces (Theorem~\ref{THAS}) that
\begin{equation} \label{TISS2}
\int_0^{1} f^* \: d\lambda \leq \int_0^{a} f^* \: d\lambda = \sup_{\substack{E \subseteq R \\ \mu(E) = a}} \int_E \lvert f \rvert \: d\mu \leq \sup_{\substack{E \subseteq R \\ \mu(E) = a}} \lVert \chi_E \rVert_{X'} \lVert f \rVert_X = C_a \lVert f \rVert_X
\end{equation}
where $C_a$ is the norm $\lVert \chi_E \rVert_{X'}$ for any set $E \subseteq R$ such that $\mu(E) = a$. Hence, it follows from \eqref{TISS1} that the sets $X$ and $X_i$ coincide. The equivalence of quasinorms then follows from Theorem~\ref{TEQBFS} (or directly from \eqref{TISS2}).
\end{proof}
\begin{definition}
Let $\lVert \cdot \rVert_X$ be an r.i.~quasi-Banach function norm, let $X$ be the corresponding r.i.~quasi-Banach function space and let $\lVert \cdot \rVert_{X_i}$ and $X_i$, respectively, be the corresponding integrable norm and integrable subspace. Then the associate norm $\lVert \cdot \rVert_{X_i'}$ of $\lVert \cdot \rVert_{X_i}$ will also be called the integrable associate norm of $\lVert \cdot \rVert_X$, while the associate space $X_i'$ of $X_i$ will also be called the integrable associate space of $X$.
\end{definition}
The next two results describe the properties of the integrable associate spaces and their relation to the associate spaces.
\begin{corollary} \label{CIAS}
Let $\lVert \cdot \rVert_X$ be an r.i.~quasi-Banach function norm, let $X$ be the corresponding r.i.~quasi-Banach function space and let $\lVert \cdot \rVert_{X_i'}$ and $X_i'$, respectively, be the corresponding integrable associate norm and integrable associate space. Then $\lVert \cdot \rVert_{X_i'}$ is an r.i.~Banach function norm and $X_i'$ is an r.i.~Banach function space.
\end{corollary}
\begin{proof}
This result follows from Theorem~\ref{TISS} and Theorem~\ref{TFA}.
\end{proof}
\begin{corollary} \label{CIASAS}
Let $\lVert \cdot \rVert_X$ be an r.i.~quasi-Banach function norm and let $X$ be the corresponding r.i.~quasi-Banach function space. If $\lVert \cdot \rVert_X$ has the property \ref{P5} then $X_i' = X'$ up to equivalence of norms.
\end{corollary}
\begin{proof}
This result follows directly from Theorem~\ref{TISS}.
\end{proof}
Next we obtain an analogue to Proposition~\ref{PEASG}.
\begin{corollary} \label{CEIAS}
Let $\lVert \cdot \rVert_X$ and $\lVert \cdot \rVert_Y$ be a pair of r.i.~quasi-Banach function norms and let $X$ and $Y$, respectively, be the corresponding r.i.~quasi-Banach function spaces. Suppose that $X \hookrightarrow Y$. Then the respective integrable associate spaces $X_i'$ and $Y_i'$ satisfy $X_i' \hookleftarrow Y_i'$.
\end{corollary}
\begin{proof}
Thanks to our assumptions the respective integrable subspaces $X_i$ and $Y_i$ of $X$ and $Y$ satisfy $X_i \hookrightarrow Y_i$ and thus the result follows from Proposition~\ref{PEASG}.
\end{proof}
Finally, we now formulate the appropriate versions of Hölder inequality and Landau's resonance theorem. Note that while the latter can be formulated only in terms of the original quasinorm, the Hölder inequality requires us to work with the integrable norm which is rather unfortunate.
\begin{corollary}\label{THIAS}
Let $\lVert \cdot \rVert_X$ be an r.i.~quasi-Banach function norm and let $\lVert \cdot \rVert_{X_i}$ and $\lVert \cdot \rVert_{X_i'}$, respectively, be the corresponding integrable norm and integrable associate norm. Then it holds for every pair of functions $f, g \in M$ that
\begin{equation*}
\int_R \lvert fg \rvert \: d\mu \leq \lVert f \rVert_{X_i'} \lVert g \rVert_{X_i}.
\end{equation*}
\end{corollary}
\begin{corollary} \label{GLT}
Let $\lVert \cdot \rVert_X$ be an r.i.~quasi-Banach function norm, let $X$ be the corresponding r.i.~quasi-Banach function space and let $\lVert \cdot \rVert_{X_i'}$ and $X_i'$, respectively, be the corresponding integrable associate norm and integrable associate space. Then an arbitrary function $f \in M$ belongs to $X_i'$ if and only if it satisfies
\begin{equation} \label{GLT1}
\int_R \lvert f g \rvert \: d\mu < \infty
\end{equation}
for all $g \in X$ such that
\begin{equation} \label{GLT2}
\int_0^{1} g^* \: d\lambda < \infty.
\end{equation}
\end{corollary}
\begin{proof}
This result is a consequence of Theorem~\ref{LT} and \eqref{TISS1}.
\end{proof}
\subsection{The second integrable associate space}
We now answer the natural question of how the second integrable associate space relates to the original r.i.~quasi-Banach function space. It turns out that the answer is much more interesting than in the case of associate spaces.
\begin{definition} \label{DSIAN}
Let $\lVert \cdot \rVert_X$ be an r.i.~quasi-Banach function norm, let $X$ be the corresponding r.i.~quasi-Banach function space and let $\lVert \cdot \rVert_{X_i'}$ and $X_i'$, respectively, be the corresponding integrable associate norm and integrable associate space. We then define the second integrable associate norm $\lVert \cdot \rVert_{X_i''}$ as the integrable associate norm of $\lVert \cdot \rVert_{X_i'}$ and the second integrable associate space $X_i''$ as the integrable associate space of $X_i'$.
\end{definition}
It follows directly from Corollary~\ref{CIAS} and Corollary~\ref{CIASAS} that the second integrable associate norm is equivalent to the associate norm of $\lVert \cdot \rVert_{X_i'}$. An analogous claim holds for the second integrable associate space.
\begin{theorem} \label{TSIAS}
Let $\lVert \cdot \rVert_X$ be an r.i.~quasi-Banach function norm and let $X$ be the corresponding r.i.~quasi-Banach function space. Then:
\begin{enumerate}
\item \label{TSIAS1}
if $\lVert \cdot \rVert_X$ is a Banach function norm, then $X = X_i''$ with equivalent norms;
\item \label{TSIAS2}
if $\lVert \cdot \rVert_X$ has the property \ref{P5} but it is not equivalent to a Banach function norm, then $X \hookrightarrow X_i''$ and $X_i'' \not \hookrightarrow X$;
\item \label{TSIAS3}
if $\lVert \cdot \rVert_X$ has the property \ref{P1} but not the property \ref{P5}, then $X_i'' \hookrightarrow X$ and $X \not \hookrightarrow X_i''$;
\item \label{TSIAS4}
if $\lVert \cdot \rVert_X$ has neither the property \ref{P1} nor the property \ref{P5}, then the spaces $X$ and $X_i''$ cannot, in general, be compared.
\end{enumerate}
\end{theorem}
\begin{proof}
The assertion \ref{TSIAS1} follows immediately from Corollary~\ref{CIASAS} and Theorem~\ref{TDAS}.
The positive part of assertion \ref{TSIAS2} follows from Corollary~\ref{CIASAS} and Proposition~\ref{PESSAS} while the negative part follows from the fact that $X_i''$ is a Banach function space by Corollary~\ref{CIAS}.
In the case \ref{TSIAS3} we know from Theorem~\ref{TISS} that the integrable subspace $X_i$ is a Banach function space, hence it follows from Corollary~\ref{CIASAS} and Theorem~\ref{TDAS} that
\begin{equation*}
X_i'' = (X_i)'' = X_i \hookrightarrow X.
\end{equation*}
On the other hand, since $X_i''$ is a Banach function space by Corollary~\ref{CIAS}, we get from Theorem~\ref{TP5} that there is a function in $X$ that does not belong to $X_i''$.
Finally, to observe that there need not be any embedding in the case \ref{TSIAS4}, one only has to consider the Lebesgue space $L^p$ with $p<1$. Indeed, in this case the space $(L^p)_i''$ is a Banach function space and therefore we have from Theorem~\ref{TLL} that
\begin{equation*}
WL(L^{\infty}, L^1) \hookrightarrow (L^p)_i'' \hookrightarrow WL(L^1, L^{\infty}),
\end{equation*}
while there are some functions in $WL(L^{\infty}, L^1)$ that do not belong to $L^p$ and at the same time there are some functions in $L^p$ that do not belong to $WL(L^1, L^{\infty})$.
\end{proof}
\section{Wiener--Luxemburg amalgams of quasi-Banach function spaces} \label{CHWLASq}
In this section we extend the theory of Wiener--Luxemburg amalgams to the context of r.i.~quasi-Banach function spaces. This is possible thanks to the recent advances of the general theory of quasi-Banach function spaces developed in \cite{NekvindaPesa20}. We focus on those areas where the generalisation leads to new results and insights.
Throughout this section we restrict ourselves to the case when $(R, \mu) = ([0, \infty), \lambda)$, which ensures that the underlying measure space is resonant. This restriction is necessary because we need to work with the non-increasing rearrangement and, in contrast to the situation in Section~\ref{CHWLAS}, the representation theory is not available for r.i.~quasi-Banach function spaces.
\subsection{Wiener--Luxemburg quasinorms for quasi-Banach function spaces}
The definition and the basic properties of Wiener--Luxemburg quasinorms are subject only to the most natural and expected changes.
\begin{definition} \label{DefWLq}
Let $\lVert \cdot \rVert_A$ and $\lVert \cdot \rVert_B$ be r.i.~quasi-Banach function norms. We then define the Wiener--Luxemburg quasinorm $\lVert \cdot \rVert_{WL(A, B)}$, for $f \in M$, by
\begin{equation}
\lVert f \rVert_{WL(A, B)} = \lVert f^* \chi_{[0,1]} \rVert_A + \lVert f^* \chi_{(1, \infty)} \rVert_B \label{DefWLNq}
\end{equation}
and the corresponding Wiener--Luxemburg amalgam space $WL(A, B)$ as
\begin{equation*}
WL(A, B) = \{f \in M; \; \lVert f \rVert_{WL(A, B)} < \infty \}.
\end{equation*}
Furthermore, we will call the first summand in \eqref{DefWLNq} the local component of $\lVert \cdot \rVert_{WL(A, B)}$ while the second summand will be called the global component of $\lVert \cdot \rVert_{WL(A, B)}$.
\end{definition}
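To fix ideas, here is a small worked computation (our illustration, not part of the surrounding development): take $f(t) = \min\{1, 1/t\}$ on $[0, \infty)$, which is non-negative and non-increasing, so $f^* = f$. Then

```latex
\lVert f \rVert_{WL(L^\infty, L^2)}
  = \lVert f^* \chi_{[0,1]} \rVert_{L^\infty}
  + \lVert f^* \chi_{(1,\infty)} \rVert_{L^2}
  = 1 + \left( \int_1^\infty t^{-2} \, dt \right)^{\frac{1}{2}}
  = 2,
```

while $\lVert f \rVert_{WL(L^\infty, L^1)} = \infty$ because $\int_1^\infty t^{-1} \, dt$ diverges. Hence $f \in WL(L^\infty, L^2) \setminus WL(L^\infty, L^1)$, showing that the choice of the global component genuinely matters.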
\begin{remark}
Let $\lVert \cdot \rVert_A$ be an r.i.~quasi-Banach function norm and denote by $C$ its modulus of concavity. Then
\begin{equation*}
\lVert f \rVert_A \leq \lVert f \rVert_{WL(A, A)} \leq 2C \lVert f \rVert_A
\end{equation*}
for every $f \in M$.
Consequently, it makes good sense to talk about local and global components of arbitrary r.i.~quasi-Banach function norms.
\end{remark}
\begin{theorem} \label{TQNq}
Let $\lVert \cdot \rVert_A$ and $\lVert \cdot \rVert_B$ be r.i.~quasi-Banach function norms. Then the Wiener--Luxemburg quasinorm $\lVert \cdot \rVert_{WL(A, B)}$ is an r.i.~quasi-Banach function norm. Consequently, the corresponding Wiener--Luxemburg amalgam $WL(A,B)$ is a rearrangement-invariant quasi-Banach function space.
Moreover, $\lVert \cdot \rVert_{WL(A, B)}$ has the property $\ref{P5}$ if and only if $\lVert \cdot \rVert_A$ does.
\end{theorem}
\begin{proof}
The proof of the first part, that is that $\lVert \cdot \rVert_{WL(A, B)}$ has all the properties of an r.i.~quasi-Banach function norm, is analogous to that of Theorem~\ref{TQN}.
The proof that if $\lVert \cdot \rVert_A$ has the property $\ref{P5}$ then the same is true for $\lVert \cdot \rVert_{WL(A, B)}$ is analogous to the appropriate part of the proof of Proposition~\ref{PLC}.
Consider now the case when $\lVert \cdot \rVert_A$ does not have the property $\ref{P5}$ and fix some $E \subseteq [0, \infty)$ that serves as an appropriate counterexample. It follows from Theorem~\ref{TP5} that there is a non-negative function $f_E \in A$ such that
\begin{equation*}
\infty = \int_E f_E \: d\lambda \leq \int_0^{\lambda(E)} f_E^* \: d\lambda,
\end{equation*}
where the last estimate follows from the Hardy--Littlewood inequality (Theorem~\ref{THLI}). The function $f = f_E^* \chi_{[0,1]}$ then satisfies $\lVert f \rVert_{WL(A,B)} < \infty$ while
\begin{equation*}
\int_0^{1} f \: d\lambda = \infty.
\end{equation*}
\end{proof}
\subsection{Integrable associate spaces of Wiener--Luxemburg amalgams}
An interesting question is how to describe the associate space of a Wiener--Luxemburg amalgam $WL(A,B)$ in the case when $A$ has the property \ref{P5} but $B$ does not. It follows from Theorems~\ref{TFA} and \ref{TQNq} that it will be a Banach function space, but it cannot be described as $WL(A', B')$ since in this case $B' = \{0\}$. Trying to answer this question is what motivates the introduction of integrable associate spaces in the previous section.
The answer to this question will follow as a corollary to the following general theorem, in which we describe the integrable associate spaces of Wiener--Luxemburg amalgams.
\begin{theorem} \label{TASq}
Let $\lVert \cdot \rVert_A$ and $\lVert \cdot \rVert_B$ be r.i.~quasi-Banach function norms and let $\lVert \cdot \rVert_{A_i'}$ and $\lVert \cdot \rVert_{B_i'}$ be their respective integrable associate norms. Then there is a constant $C>0$ such that the integrable associate norm $\lVert \cdot \rVert_{(WL(A, B))_i'}$ of $\lVert \cdot \rVert_{WL(A, B)}$ satisfies
\begin{equation}\label{TASq1}
\lVert f \rVert_{(WL(A, B))_i'} \leq \lVert f \rVert_{WL(A_i', B_i')} \leq C \lVert f \rVert_{(WL(A, B))_i'}
\end{equation}
for every $f \in M$.
Consequently, the corresponding integrable associate space satisfies
\begin{equation*}
(WL(A, B))_i' = WL(A_i', B_i'),
\end{equation*}
up to equivalence of defining functionals.
\end{theorem}
\begin{proof}
We begin by showing the first inequality in \eqref{TASq1}. To this end, fix some $f \in M$ and arbitrary $g \in M$ satisfying $\lVert g \rVert_{WL(A, B)_i} < \infty$. It then follows from Corollary~\ref{THIAS} that
\begin{equation*}
\begin{split}
\int_0^{\infty} f^* g^* \: d\lambda &= \int_0^{\infty} f^*\chi_{[0,1]} g^* \: d\lambda + \int_0^{\infty} f^* \chi_{(1, \infty)} g^* \: d\lambda \\
&\leq \lVert f^* \chi_{[0,1]} \rVert_{A_i'} \lVert g^* \chi_{[0,1]} \rVert_{A_i} + \lVert f^* \chi_{(1,\infty)} \rVert_{B_i'} \lVert g^* \chi_{(1,\infty)} \rVert_{B_i} \\
&\leq \lVert f \rVert_{WL(A_i', B_i')} \cdot \max \{\lVert g^* \chi_{[0,1]} \rVert_{A_i}, \, \lVert g^* \chi_{(1,\infty)} \rVert_{B_i} \} \\
&\leq \lVert f \rVert_{WL(A_i', B_i')} \lVert g \rVert_{WL(A, B)_i}.
\end{split}
\end{equation*}
The desired inequality now follows by dividing both sides by $\lVert g \rVert_{WL(A, B)_i}$, taking the appropriate supremum and using Proposition~\ref{PAS}.
The second inequality in \eqref{TASq1} is more involved. We obtain it indirectly, showing first that $(WL(A,B))_i' \subseteq WL(A_i', B_i')$ and then using Theorem~\ref{TEQBFS}.
Suppose that $f \notin WL(A_i', B_i')$. Then $f^* \chi_{[0,1]} \notin A_i'$ or $f^* \chi_{(1,\infty)} \notin B_i'$. We treat these two cases separately.
If $f^* \chi_{[0,1]} \notin A_i'$ then we get by Corollary~\ref{GLT} that there is a non-negative function $g \in A$ such that
\begin{align*}
\int_0^{1} g^* \: d\lambda &< \infty, \\
\int_0^{\infty} f^* \chi_{[0,1]} g \: d\lambda &= \infty.
\end{align*}
Now, $g^* \chi_{[0,1]} \in WL(A,B)$ because
\begin{equation*}
\lVert g^* \chi_{[0,1]} \rVert_{WL(A,B)} = \lVert g^* \chi_{[0,1]} \rVert_A \leq \lVert g^* \rVert_A = \lVert g \rVert_A < \infty.
\end{equation*}
Moreover, it obviously holds that
\begin{equation*}
\int_0^{1} (g^*\chi_{[0,1]})^* \: d\lambda = \int_0^{1} g^* \: d\lambda < \infty
\end{equation*}
and we get, by the Hardy--Littlewood inequality (Theorem~\ref{THLI}), the following estimate:
\begin{equation*}
\infty = \int_0^{\infty} f^* \chi_{[0,1]} g \: d\lambda \leq \int_0^{\infty} f^* g^* \chi_{[0,1]} \: d\lambda.
\end{equation*}
It thus follows from Corollary~\ref{GLT} that $f \notin (WL(A,B))_i'$.
Suppose now that $f^* \chi_{(1,\infty)} \notin B_i'$. We may assume that $f^*(1) < \infty$, because otherwise $f^* \chi_{[0,1]} = \infty \chi_{[0,1]} \notin A_i'$ (see \cite[Lemma~2.4]{MizutaNekvinda15} or \cite[Theorem~3.4]{NekvindaPesa20}) and thus $f \notin (WL(A,B))_i'$ by the argument above. As in the previous case, we get by Corollary~\ref{GLT} that there is some non-negative function $g \in B$ such that
\begin{align}
\int_0^{1} g^* \: d\lambda &< \infty, \label{TASq2} \\
\int_0^{\infty} f^* \chi_{(1,\infty)} g \: d\lambda &= \infty. \nonumber
\end{align}
Now, it holds for all $t \in (0, \infty)$ that
\begin{equation*}
(f^* \chi_{(1,\infty)})^*(t) = f^*(t+1),
\end{equation*}
which, when combined with the Hardy--Littlewood inequality (Theorem~\ref{THLI}), yields
\begin{equation*}
\infty = \int_0^{\infty} f^* \chi_{(1,\infty)} g \: d\lambda \leq \int_0^{\infty} f^*(t+1) g^*(t) \: dt = \int_1^{\infty} f^*(t) g^*(t-1) \: dt.
\end{equation*}
If we now put
\begin{equation*}
\tilde{g}(t) = \begin{cases}
0 & \text{for } t \in [0,1], \\
g^*(t-1) & \text{for } t \in (1, \infty),
\end{cases}
\end{equation*}
we immediately see that $\tilde{g}^* = g^*$ and thus we have found a function $\tilde{g} \in B$ that is zero on $[0, 1]$, non-increasing on $(1, \infty)$ and that satisfies
\begin{equation*}
\int_0^{\infty} f^* \chi_{(1,\infty)} \tilde{g} \: d\lambda = \infty.
\end{equation*}
Furthermore, we may estimate by \eqref{TASq2}
\begin{equation*}
\int_1^{2} f^* \tilde{g} \: d\lambda \leq f^*(1) \int_1^2 \tilde{g} \: d\lambda = f^*(1) \int_0^{1} g^* \: d\lambda < \infty.
\end{equation*}
It follows that
\begin{equation*}
\int_2^{\infty} f^* \tilde{g} \: d\lambda = \infty,
\end{equation*}
because
\begin{equation*}
\infty = \int_0^{\infty} f^* \chi_{(1,\infty)} \tilde{g} \: d\lambda = \int_1^{2} f^* \tilde{g} \: d\lambda + \int_2^{\infty} f^* \tilde{g} \: d\lambda.
\end{equation*}
Finally, put $h = \tilde{g}(2) \chi_{[0,1]} + \min\{\tilde{g}, \tilde{g}(2)\}$. Note that $\tilde{g}(2) < \infty$ as follows from $\tilde{g} \in B$ (see again \cite[Lemma~2.4]{MizutaNekvinda15} or \cite[Theorem~3.4]{NekvindaPesa20}) and that $h$ is therefore a finite non-increasing function. Hence, we get that
\begin{gather*}
\lVert h \rVert_{WL(A,B)} = \tilde{g}(2) \lVert \chi_{(0,1)} \rVert_A + \lVert \min\{\tilde{g}, \tilde{g}(2)\} \rVert_{B} \leq \tilde{g}(2) \lVert \chi_{(0,1)} \rVert_A + \lVert \tilde{g} \rVert_{B} < \infty, \\
\int_0^{1} h^* \: d\lambda = \tilde{g}(2) < \infty,
\end{gather*}
while by the arguments above we have
\begin{equation*}
\int_0^{\infty} f^* h^* \: d\lambda \geq \int_2^{\infty} f^* h^* \: d\lambda = \int_2^{\infty} f^* \tilde{g} \: d\lambda = \infty.
\end{equation*}
We therefore get from Corollary~\ref{GLT} that $f \notin (WL(A,B))_i'$. This covers the last case and establishes the desired inclusion $(WL(A,B))_i' \subseteq WL(A_i', B_i')$.
Because we already know from Theorem~\ref{TQNq} that $WL(A_i', B_i')$ is a quasi-Banach function space and from Theorem~\ref{CIAS} that $(WL(A,B))_i'$ is a Banach function space, we may use Theorem~\ref{TEQBFS} to obtain $(WL(A,B))_i' \hookrightarrow WL(A_i', B_i')$, i.e.~that there is a constant $C_2>0$ such that it holds for all $f \in M$ that
\begin{equation*}
\lVert f \rVert_{WL(A_i', B_i')} \leq C_2 \lVert f \rVert_{(WL(A, B))_i'},
\end{equation*}
which concludes the proof.
\end{proof}
\begin{corollary}
Let $\lVert \cdot \rVert_A$ be an r.i.~quasi-Banach function norm that has the property \ref{P5}, let $\lVert \cdot \rVert_B$ be an r.i.~quasi-Banach function norm, let $\lVert \cdot \rVert_{A'}$ be the associate norm of $\lVert \cdot \rVert_A$ and let $\lVert \cdot \rVert_{B_i'}$ be the integrable associate norm of $\lVert \cdot \rVert_B$. Then there is a constant $C>0$ such that the associate norm $\lVert \cdot \rVert_{(WL(A, B))'}$ of $\lVert \cdot \rVert_{WL(A, B)}$ satisfies
\begin{equation*}
\lVert f \rVert_{(WL(A, B))'} \leq \lVert f \rVert_{WL(A', B_i')} \leq C \lVert f \rVert_{(WL(A, B))'}
\end{equation*}
for every $f \in M$.
Consequently, the corresponding associate space satisfies
\begin{equation*}
(WL(A, B))' = WL(A', B_i'),
\end{equation*}
up to equivalence of defining functionals.
\end{corollary}
\begin{proof}
The result follows by combining Theorems~\ref{TQNq} and \ref{TASq} and Corollary~\ref{CIASAS}.
\end{proof}
\subsection{Embeddings}
The characterisation of embeddings remains the same, as does its proof (which we thus omit). We state it here only because we will use it later.
\begin{theorem} \label{TEMq}
Let $\lVert \cdot \rVert_A$, $\lVert \cdot \rVert_B$, $\lVert \cdot \rVert_C$ and $\lVert \cdot \rVert_D$ be r.i.~quasi-Banach function norms. Then the following assertions are true:
\begin{enumerate}
\item The embedding $WL(A, C) \hookrightarrow WL(B,C)$ holds if and only if the local component of $\lVert \cdot \rVert_A$ is stronger than that of $\lVert \cdot \rVert_B$, in the sense that for every $f \in M$ the implication
\begin{equation*}
\lVert f^* \chi_{[0,1]} \rVert_A < \infty \Rightarrow \lVert f^* \chi_{[0,1]} \rVert_B < \infty
\end{equation*} \label{TEMqp1}
holds.
\item The embedding $WL(A, B) \hookrightarrow WL(A,C)$ holds if and only if the global component of $\lVert \cdot \rVert_B$ is stronger than that of $\lVert \cdot \rVert_C$, in the sense that for every $f \in M$ such that $f^*(1) < \infty$ the implication
\begin{equation*}
\lVert f^* \chi_{(1, \infty)} \rVert_B < \infty \Rightarrow \lVert f^* \chi_{(1, \infty)} \rVert_C < \infty
\end{equation*}
holds. \label{TEMqp2}
\item The embedding $WL(A,B) \hookrightarrow WL(C,D)$ holds if and only if the local component of $\lVert \cdot \rVert_A$ is stronger than that of $\lVert \cdot \rVert_C$ and the global component of $\lVert \cdot \rVert_B$ is stronger than that of $\lVert \cdot \rVert_D$. \label{TEMqp3}
\end{enumerate}
\end{theorem}
The following theorem generalises Theorem~\ref{TLL} and also provides better insight into the relationships between the individual embeddings and the specific properties a quasi-Banach function norm can possess. We formulate it in a simpler way than Theorem~\ref{TLL}, but this comes at no loss of generality thanks to Theorem~\ref{TEMq}.
\begin{theorem} \label{TLLq}
Let $\lVert \cdot \rVert_A$ be an r.i.~quasi-Banach function norm. Then
\begin{enumerate}
\item $WL(L^{\infty}, A) \hookrightarrow A$, \label{TLLqp1}
\item if $\lVert \cdot \rVert_A$ has the property \ref{P1} then $WL(A, L^1) \hookrightarrow A$, \label{TLLqp2}
\item $A \hookrightarrow WL(L^1, A)$ if and only if $\lVert \cdot \rVert_A$ has the property \ref{P5}, \label{TLLqp3}
\item $A \hookrightarrow WL(A, L^{\infty})$. \label{TLLqp4}
\end{enumerate}
\end{theorem}
\begin{proof}
The first and last embeddings are proved in the same way as in Theorem~\ref{TLL}.
The sufficiency in part \ref{TLLqp3} follows exactly as in the part \ref{TLLp3} of Theorem~\ref{TLL}. For the necessity, assume that $A \hookrightarrow WL(L^1, A)$. It then holds for any $E \subseteq [0, \infty)$ of finite measure and any $f \in A$ that
\begin{equation*}
\int_E \lvert f \rvert \: d\lambda \leq \int_0^{\lambda(E)} f^* \: d\lambda \leq \max\{1, \, \lambda(E)\} \int_0^{1} f^* \: d\lambda < \infty,
\end{equation*}
where the first estimate is due to the Hardy--Littlewood inequality (Theorem~\ref{THLI}), the second estimate is due to $f^*$ being non-increasing, and the last estimate is due to part \ref{TEMqp1} of Theorem~\ref{TEMq}. We now obtain from Theorem~\ref{TP5} that $\lVert \cdot \rVert_A$ must have the property \ref{P5}.
As for the part \ref{TLLqp2}, we know from Theorem~\ref{TISS} that if $\lVert \cdot \rVert_A$ has the property \ref{P1} then its integrable subspace $A_i$ is an r.i.~Banach function space. Hence, we obtain from parts \ref{TLLp1} and \ref{TLLp2} of Theorem~\ref{TLL} that
\begin{equation*}
WL(L^{\infty}, L^1) \hookrightarrow A_i \hookrightarrow A.
\end{equation*}
The desired conclusion now follows by combining parts \ref{TEMqp3} and \ref{TEMqp2} of Theorem~\ref{TEMq}.
\end{proof}
\begin{remark}
Unlike in part \ref{TLLqp3} of the preceding theorem, there is no equivalence in part \ref{TLLqp2}. This can be observed by considering $A = L^{p,q}$, where $L^{p,q}$ is a Lorentz space: if we choose $p \in (1, \infty)$ and $q \in (0, 1)$, then $L^{p,q}$ satisfies the embedding (see Remark~\ref{RELpq}) but is not normable (see \cite[Theorem~2.5.8]{CarroRaposo07} and the references therein).
\end{remark}
An alternative sufficient condition for the embedding $WL(A, L^1) \hookrightarrow A$ is provided in the following theorem. The relevant term is defined in Definition~\ref{DHLP} and put into context in Lemma~\ref{LHLP} and Remark~\ref{RHLP}.
\begin{theorem} \label{THLP}
Let $\lVert \cdot \rVert_A$ be an r.i.~quasi-Banach function norm and assume that the Hardy--Littlewood--P\'{o}lya principle holds for $\lVert \cdot \rVert_A$. Then the global component of $\lVert \cdot \rVert_{L^1}$ is stronger than that of $\lVert \cdot \rVert_A$.
\end{theorem}
\begin{proof}
Fix some $f \in M$ such that $f^*(1) < \infty$ and $\lVert f^* \chi_{(1, \infty)} \rVert_{L^1} < \infty$. We want to show that $\lVert f^* \chi_{(1, \infty)} \rVert_{A} < \infty$. To this end, consider the non-increasing function $f_0 = f^*(1) \chi_{[0,1]} + f^* \chi_{(1, \infty)}$ which belongs to $L^1$ and the norm of which satisfies
\begin{equation*}
f^*(1) \leq \left \lVert f_0 \right \rVert_{L^1} < \infty.
\end{equation*}
Hence, we may define a function $h = \left \lVert f_0 \right \rVert_{L^1} \chi_{[0,1]}$ and observe that $h \in A$, $h$ is non-increasing, and
\begin{equation*}
\int_0^{t} f_0^* \: d\lambda \leq \int_0^{t} h^* \: d\lambda
\end{equation*}
for all $t \in (0, \infty)$. It thus follows from our assumption on $\lVert \cdot \rVert_A$ that $f_0 \in A$ and consequently $\lVert f^* \chi_{(1, \infty)} \rVert_{A} < \infty$, as desired.
\end{proof}
As a corollary to this theorem we obtain a negative answer to the previously open question of whether the Hardy--Littlewood--P\'{o}lya principle holds for every r.i.~quasi-Banach function norm. This also serves as an example of an application of Wiener--Luxemburg amalgams, since they appear only in the proof of the corollary, not in its statement.
\begin{corollary} \label{CHLP}
There is an r.i.~quasi-Banach function norm over $M((0, \infty), \lambda)$ which has the property \ref{P5} and for which the Hardy--Littlewood--P\'{o}lya principle does not hold.
\end{corollary}
\begin{proof}
Let $p \in (0, 1)$. It follows from Theorem~\ref{TQNq}, Theorem~\ref{TEMq}, Remark~\ref{RELp}, and Theorem~\ref{THLP} that the spaces $WL(L^1, L^p)$ have the desired properties.
\end{proof}
Finally, we present a result that generalises Proposition~\ref{PEAS} and which is useful when one wants to find an integrable associate space to a given r.i.~quasi-Banach function space.
\begin{proposition} \label{PEIAS}
Let $\lVert \cdot \rVert_A$ and $\lVert \cdot \rVert_B$ be r.i.~quasi-Banach function norms and denote by $\lVert \cdot \rVert_ {A_i'}$ and $\lVert \cdot \rVert_{B_i'}$ the respective integrable associate norms. Suppose that the local component of $\lVert \cdot \rVert_A$ is stronger than that of $\lVert \cdot \rVert_B$. Then the local component of $\lVert \cdot \rVert_{B_i'}$ is stronger than that of $\lVert \cdot \rVert_{A_i'}$.
Similarly, if the global component of $\lVert \cdot \rVert_A$ is stronger than that of $\lVert \cdot \rVert_B$, then the global component of $\lVert \cdot \rVert_{B_i'}$ is stronger than that of $\lVert \cdot \rVert_{A_i'}$.
\end{proposition}
\begin{proof}
By our assumption and part~\ref{TEMqp1} of~Theorem~\ref{TEMq} we get that
\begin{equation*}
WL(A, L^{\infty}) \hookrightarrow WL(B, L^{\infty}).
\end{equation*}
Consequently, it follows from Theorem~\ref{TASq}, Corollary~\ref{CIASAS} and Corollary~\ref{CEIAS} that
\begin{equation*}
WL(B_i', L^1) = (WL(B, L^{\infty}))_i' \hookrightarrow (WL(A, L^{\infty}))_i' = WL(A_i', L^1),
\end{equation*}
that is, the local component of $\lVert \cdot \rVert_{B_i'}$ is stronger than that of $\lVert \cdot \rVert_ {A_i'}$.
The second claim can be proved in a similar manner, using $WL(L^1, A)$ and $WL(L^1, B)$ instead of $WL(A, L^{\infty})$ and $WL(B, L^{\infty})$.
\end{proof}
\section{Introduction}\label{sec:intro}
Classical learning theory \citep{Wu1999,Hastie2009,Mitchell1997} and its applied machine learning methods have been popularized in the geosciences after various technological advances, leading initiatives in open-source software \citep{Pedregosa2011,Abadi2016,Paszke2017,Innes2018}, and intense marketing from a diverse portfolio of industries. In spite of its popularity, learning theory cannot be applied straightforwardly to solve problems in the geosciences as the characteristics of these problems violate fundamental assumptions used to derive the theory and related methods (e.g. i.i.d. samples).
Among these methods derived under classical assumptions (more on this later), those for estimating the generalization (or prediction) error of learned models on unseen samples are crucial in practice \citep{Hastie2009}. In fact, estimates of generalization error are widely used for selecting the best performing model for a problem out of a collection of available models \citep{Arlot2010}. If estimates of error are inaccurate because of violated assumptions, then there is a great chance that models will be selected inappropriately \citep{Ferraciolli2019}. The issue is aggravated when models of great expressiveness (i.e. many learning parameters) are considered in the collection, since they are quite capable of overfitting the available data \citep{Jiang2019,Zhang2019}. In the following paragraphs, we consider statistical learning broadly as minimization of generalization error.
The literature on generalization error estimation methods is vast \citep{Arlot2010,Vehtari2012}, and we do not intend to review it extensively here. Nevertheless, some methods have gained popularity since their introduction in the mid-1970s because of their generality, ease of use, and availability in open-source software:
\paragraph{\textbf{Leave-one-out (1974)}} The leave-one-out method for assessing and selecting learning models was based on the idea that to estimate the prediction error on an unseen sample one only needs to hide a seen sample from a dataset and learn the model. Because the hidden sample has a known label, the method can compare the model prediction with the true label for the sample. By repeating the process over the entire dataset, one gets an estimate of the expected generalization error \citep{Stone1974}. Leave-one-out has been investigated in parallel by many statisticians, including Nicholson (1960) and Stone (1974), and is also known as ordinary cross-validation.
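The procedure described above can be sketched in a few lines; this is our illustrative sketch, not the authors' implementation, and `fit` and `loss` are hypothetical placeholders for an arbitrary learning procedure and loss function.

```python
def loo_error(xs, ys, fit, loss):
    """Hide each sample in turn, learn on the rest, and average the losses."""
    errors = []
    for i in range(len(xs)):
        # hide sample i and learn the model on the remaining samples
        model = fit(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        # the hidden sample has a known label, so we can score the prediction
        errors.append(loss(model(xs[i]), ys[i]))
    return sum(errors) / len(errors)

# toy usage: a constant (mean) predictor under squared loss
mean_fit = lambda xs, ys: (lambda x, m=sum(ys) / len(ys): m)
sq_loss = lambda yhat, y: (yhat - y) ** 2
err = loo_error([1, 2, 3, 4], [1.0, 2.0, 3.0, 4.0], mean_fit, sq_loss)
```

Averaging the held-out losses over the entire dataset yields the estimate of the expected generalization error.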
\paragraph{\textbf{k-fold cross-validation (1975)}} The term k-fold cross-validation refers to a family of error estimation methods that split a dataset into non-overlapping ``folds'' for model evaluation. Similar to leave-one-out, each fold is hidden while the model is learned using the remaining folds. It can be thought of as a generalization of leave-one-out where folds may have more than a single sample \citep{Geisser1975,Burman1989}. Cross-validation is less computationally expensive than leave-one-out depending on the size and number of folds, but can introduce bias in the error estimates if the number of samples in the folds used for learning is much smaller than the original number of samples in the dataset.
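A minimal sketch of the fold-based variant follows (again our illustration with hypothetical helper names); with $k$ equal to the number of samples it reduces to leave-one-out.

```python
def kfold_error(xs, ys, fit, loss, k):
    """Hide each of the k non-overlapping folds in turn; average the losses."""
    n = len(xs)
    errors = []
    for j in range(k):
        test = [i for i in range(n) if i % k == j]   # fold j is hidden
        train = [i for i in range(n) if i % k != j]  # remaining folds learn
        model = fit([xs[i] for i in train], [ys[i] for i in train])
        errors.extend(loss(model(xs[i]), ys[i]) for i in test)
    return sum(errors) / n

mean_fit = lambda xs, ys: (lambda x, m=sum(ys) / len(ys): m)
sq_loss = lambda yhat, y: (yhat - y) ** 2
err = kfold_error([1, 2, 3, 4], [1.0, 2.0, 3.0, 4.0], mean_fit, sq_loss, 2)
```

The model is re-learned only $k$ times instead of $n$ times, which is the source of the computational savings mentioned above.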
Major assumptions are involved in the derivation of the estimation methods listed above. The first of them is the assumption that samples come from independent and identically distributed (i.i.d.) random variables. It is well-known that spatial samples are not i.i.d., and that spatial correlation needs to be modeled explicitly with geostatistical theory. Even though the sample mean of the empirical error used in those methods is an unbiased estimator of the prediction error regardless of the i.i.d. assumption, the precision of the estimator can be degraded considerably with non-i.i.d. samples.
Motivated by the necessity to leverage non-i.i.d. samples in practical applications, and by evidence that a model's performance is affected by spatial correlation \citep{Cracknell2014,Baglaeva2020}, the statistical community devised new error estimation methods using the spatial coordinates of the samples:
\paragraph{\textbf{h-block leave-one-out (1995)}} Developed for time-series data (i.e. data showing temporal dependency), the h-block leave-one-out method is based on the principle that stationary processes achieve a correlation length (the ``h'') after which the samples are not correlated. The time-series data is then split such that samples used for error evaluation are at least ``h steps'' distant from the samples used to learn the model \citep{Burman1994}. Burman (1994) showed how the method outperformed traditional leave-one-out in time-series prediction by selecting the hyperparameter ``h'' as a fraction of the data, and correcting the error estimates accordingly to avoid bias.
\paragraph{\textbf{Spatial leave-one-out (2014)}} Spatial leave-one-out is a generalization of h-block leave-one-out from time-series to spatial data \citep{LeRest2014}. The principle is the same, except that the blocks have multiple dimensions (e.g. norm-balls).
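The norm-ball variant can be sketched as follows (our illustration with hypothetical names): besides the evaluated sample, every sample inside a ball of radius $h$ around it is also hidden from learning.

```python
def euclidean(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def spatial_loo_error(coords, xs, ys, fit, loss, h):
    """Leave-one-out with a norm-ball 'dead zone' of radius h."""
    n = len(xs)
    errors = []
    for i in range(n):
        # keep only samples farther than h from the evaluation point
        train = [j for j in range(n)
                 if j != i and euclidean(coords[i], coords[j]) > h]
        model = fit([xs[j] for j in train], [ys[j] for j in train])
        errors.append(loss(model(xs[i]), ys[i]))
    return sum(errors) / n

mean_fit = lambda xs, ys: (lambda x, m=sum(ys) / len(ys): m)
sq_loss = lambda yhat, y: (yhat - y) ** 2
coords = [(0.0,), (1.0,), (2.0,), (3.0,)]
err = spatial_loo_error(coords, [1, 2, 3, 4], [1.0, 2.0, 3.0, 4.0],
                        mean_fit, sq_loss, 1.5)
```

With $h = 0$ the method recovers ordinary leave-one-out; larger $h$ excludes more spatially correlated neighbours from learning.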
\paragraph{\textbf{Block cross-validation (2016)}} Similarly to k-fold cross-validation for non-spatial data, block cross-validation was proposed as a faster alternative to spatial leave-one-out. The method creates folds using blocks of size equal to the spatial correlation length, and separates samples for error evaluation from samples used to learn the model. The method introduces the concept of ``dead zones'', which are regions near the evaluation block that are discarded to avoid over-optimistic error estimates \citep{Roberts2017,Pohjankukka2017}.
Unlike the estimation methods proposed in the 1970s, which use random splits of the data, these methods split the data based on spatial coordinates and what the authors called ``dead zones''. This set of heuristics for creating data splits avoids configurations in which the model is evaluated on samples that are too near ($<$ spatial correlation length) other samples used for learning the model. Consequently, these estimation methods tend to produce error estimates that are higher on average than their non-spatial counterparts, which are known to be over-optimistic in the presence of spatial correlation. However, systematic splits of the data introduce a bias of their own, which has not been emphasized enough in the literature.
All methods for estimating generalization error in classical learning theory, including the methods listed above, rely on a second major assumption: that the distribution of unseen samples to which the model will be applied is equal to the distribution of samples over which the model was trained. This assumption is very unrealistic for various applications in the geosciences, which involve quite heterogeneous (i.e. variable) and heteroscedastic (i.e. with different variability) processes \citep{Chiles2012}.
Very recently, an alternative to classical learning theory, known as transfer learning theory, has been proposed to deal with the more difficult problem of learning under shifts in distributions and in learning tasks \citep{Pan2010,Weiss2016,Silver2008}. The theory introduces methods that are more amenable to geoscientific work \citep{Zadrozny2003,Zadrozny2004,Fan2005}, yet these same methods were not derived for geospatial data (e.g. climate data, Earth observation data, field measurements).
Of particular interest in this work, the covariate shift problem is a type of transfer learning problem where the samples on which the model is applied have a distribution of covariates that differs from the distribution of covariates over which the model was trained \citep{Joaquin2008}. It is relevant in geoscientific applications in which a list of explanatory features is known to predict a response via a set of physical laws that hold everywhere. Under covariate shift, a generalization error estimation method has been proposed:
\paragraph{\textbf{Importance-weighted cross-validation (2007)}} Under covariate shift, and assuming that learning models may be misspecified, classical cross-validation is not unbiased. Importance weights can be considered for each sample to recover the unbiasedness property of the method, and this is the core idea of importance-weighted cross-validation \citep{Sugiyama2006,Sugiyama2007}. The method is unbiased under covariate shift for the two most common supervised learning tasks: regression and classification.
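The core idea admits a compact sketch (our illustration with hypothetical names): each held-out loss is multiplied by the importance weight $w(x) = p_{\text{target}}(x) / p_{\text{source}}(x)$ evaluated at the hidden sample.

```python
def iwcv_error(xs, ys, weights, fit, loss):
    """Leave-one-out where each held-out loss carries an importance weight."""
    n = len(xs)
    errors = []
    for i in range(n):
        model = fit(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        # weight the loss by the density ratio at the hidden sample
        errors.append(weights[i] * loss(model(xs[i]), ys[i]))
    return sum(errors) / n

mean_fit = lambda xs, ys: (lambda x, m=sum(ys) / len(ys): m)
sq_loss = lambda yhat, y: (yhat - y) ** 2
# uniform weights recover ordinary leave-one-out; non-uniform weights
# re-balance the estimate toward the target distribution of covariates
err = iwcv_error([1, 2, 3, 4], [1.0, 2.0, 3.0, 4.0], [2.0, 0.0, 0.0, 2.0],
                 mean_fit, sq_loss)
```

In practice the weights are not known and must themselves be estimated, which is the density ratio estimation problem discussed next.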
The importance weights used in importance-weighted cross-validation are ratios between the target (or test) probability density and the source (or train) probability density of covariates. Density ratios are useful in a much broader set of applications including two-sample tests, outlier detection, and distribution comparison. For that reason, the problem of density ratio estimation has become a general statistical problem \citep{Sugiyama2012}. Various density ratio estimators have been proposed with increasing performance \citep{Huang2007,Sugiyama2009,Kanamori2009,Kanamori2009a}, yet an investigation is missing that contemplates importance-weighted cross-validation and other existing error estimation methods in geospatial settings.
In this work, we introduce \emph{geostatistical (transfer) learning}, and discuss how most prior work in spatial statistics fits in a specific type of learning from geospatial data that we term \emph{pointwise learning}. In order to illustrate the challenges of learning from geospatial data, we assess existing estimators of generalization error from the literature using synthetic Gaussian process data and real data from geophysical well logs in New Zealand that we made publicly available \citep{W.S.RdeCarvalho2020}.
The paper is organized as follows. In \autoref{sec:problem}, we introduce \emph{geostatistical (transfer) learning}, which contains all the elements involved in learning from geospatial data. We define covariate shift in the geospatial setting and briefly review the concept of spatial correlation. In \autoref{sec:error}, we define generalization error in geostatistical learning, discuss how it generalizes the classical definition of error in non-spatial settings, and review estimators of generalization error from the literature devised for \emph{pointwise learning}. In \autoref{sec:exps}, we present our experiments with geospatial data, and discuss the results of different error estimation methods. In \autoref{sec:concls}, we conclude the work and share a few remarks regarding the choice of error estimation methods in practice.
\section{Geostatistical learning}\label{sec:problem}
In this section, we define the elements of statistical learning in geospatial settings. We discuss the covariate shift and spatial correlation properties of the problem, and illustrate how they affect the involved feature spaces.
Consider a sample space $\Omega$, a source spatial domain $\mathfrak{D}_s \subset \mathbb{R}^{d_s}$, and a target spatial domain $\mathfrak{D}_t \subset \mathbb{R}^{d_t}$ on which stochastic processes (i.e. spatial random variables) are defined:
\begin{equation}
\begin{aligned}
Z_{s_j}&\colon\,\mathfrak{D}_s \times \Omega \to \mathbb{R},\ j=1,2,\ldots,n_s\ \text{on source domain}\ \mathfrak{D}_s\\
Z_{t_j}&\colon\,\mathfrak{D}_t \times \Omega \to \mathbb{R},\ j=1,2,\ldots,n_t\ \text{on target domain}\ \mathfrak{D}_t
\end{aligned}
\end{equation}
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.18]
\tikzset{thin_line/.style={ thin, solid, color=black}}
\tikzset{dash_line/.style={ thin, dashed, color=darkgray}}
\tikzset{vect_line/.style={very thick, ->, >=latex, solid, color=black}}
\draw [fill=gray!30] plot [mark=none, sharp cycle] coordinates {(7,3) (21,3) (16,16) (3,16)};
\draw (7,3) node[above right] {\large $\mathfrak{D}_s \subset \mathbb{R}^2$};
\coordinate (a) at ( 40, 3);
\coordinate (am) at (39.5, 0);
\coordinate (an) at ( 44.5, 7);
\coordinate (b) at ( 50, 11);
\coordinate (bm) at ( 49.5, 8);
\coordinate (c) at ( 40, 19);
\coordinate (cm) at (39.5, 16);
\coordinate (d) at (29, 12);
\coordinate (dm) at (28.5, 9);
\draw[thin_line] (a) to[out= 25, in=200] (b);
\draw[thin_line] (b) to[out=115, in=-30] (c);
\draw[thin_line] (c) to[out=185, in= 60] (d);
\draw[thin_line] (d) to[out=-25, in=110] (a);
\draw[fill=gray!30] (a) to[out= 25, in=200]
(b) to[out=115, in=-30]
(c) to[out=185, in= 60]
(d) to[out=-25, in=110] (a);
\draw[thin_line] (am) to[out= 25, in=200] (bm);
\draw[dash_line] (bm) to[out=115, in=-30] (cm);
\draw[dash_line] (cm) to[out=185, in= 60] (dm);
\draw[thin_line] (dm) to[out=-25, in=110] (am);
\draw[thin_line] (am) -- (a);
\draw[thin_line] (bm) -- (b);
\draw[dash_line] (cm) -- (c);
\draw[thin_line] (dm) -- (d);
\draw[fill=gray!90] (a) to[out= 25, in=200]
(b) --
(bm) to[out=200, in=25]
(am) -- (a);
\draw[fill=gray!90] (am) --
(a) to[out=110, in=-25]
(d) --
(dm) to[out=-25, in=110] (am);
\draw (a)++(0,6) node[above] {\large $\mathfrak{D}_t \subset \mathbb{R}^3$};
\end{tikzpicture}
\caption{Examples of source and target spatial domains. (a) Source domain $\mathfrak{D}_s \subset \mathbb{R}^2$, which may represent a satellite view of an area of interest. (b) Target domain $\mathfrak{D}_t \subset \mathbb{R}^3$, which may represent a body of rock in the subsurface of the Earth.}
\label{fig:domains}
\end{figure}
For example, $(Z_{s_j})_{j=1,2,\ldots,n_s}$ may represent a collection of processes observed remotely from satellite on a 2D surface $\mathfrak{D}_s \subset \mathbb{R}^2$, whereas $(Z_{t_j})_{j=1,2,\ldots,n_t}$ may represent processes that occur within the 3D subsurface of the Earth $\mathfrak{D}_t \subset \mathbb{R}^3$ (see \autoref{fig:domains}). Any process $Z$ in these collections can be viewed in two distinct ways:
\paragraph{\textbf{Geostatistical theory}} From the viewpoint of geostatistical theory, samples $z(\cdot,\omega)$ of the process $Z(\mathbf{u},\omega)$ are obtained by fixing $\omega \in \Omega$. These samples are spatial maps that assign a real number to each location $\mathbf{u} \in \mathfrak{D}$.
\paragraph{\textbf{Learning theory}} From the viewpoint of statistical learning theory, scalar samples $z(\mathbf{u},\cdot)$ are obtained by fixing $\mathbf{u} \in \mathfrak{D}$. These scalar samples are ordered into a ``feature vector'' $\mathbf{x}_\mathbf{u} = (z_1, z_2,\ldots, z_n)$ for a collection of processes $(Z_j)_{j=1,2,\ldots,n}$, and for a specific location $\mathbf{u} \in \mathfrak{D}$. In this case, $\mathbf{X}_\mathbf{u}\colon\, \Omega \to \mathbb{R}^n$ denotes the corresponding random vector of features such that $\mathbf{x}_\mathbf{u} \sim \mathbf{X}_\mathbf{u}$.
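The two viewpoints can be contrasted in a few lines of code (a minimal NumPy sketch; the grid size and array names are illustrative, not part of the formal setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n = 3 processes sampled on a 10x10 discretization of D.
# Each z[j] is one spatial sample z_j(., omega) of the j-th process.
n, nx, ny = 3, 10, 10
z = rng.normal(size=(n, nx, ny))

# Geostatistical view: fixing omega yields a spatial map per process.
spatial_map = z[0]            # z_1(., omega), a map over the 10x10 grid

# Learning view: fixing a location u yields one feature vector x_u in R^n.
u = (4, 7)
x_u = z[:, u[0], u[1]]        # (z_1(u), z_2(u), z_3(u))
```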
In order to define the geostatistical learning problem, we need to understand the joint probability distribution of features for all locations in a spatial domain $\Pr(\{\mathbf{X}_\mathbf{u}\}_{\mathbf{u}\in\mathfrak{D}})$. This distribution is very complex in general because feature vectors $\mathbf{X}_\mathbf{u}$ and $\mathbf{X}_{\mathbf{v}}$ for two different locations $\mathbf{u} \ne \mathbf{v}$ are not independent. The closer two locations $\mathbf{u}, \mathbf{v} \in \mathfrak{D}$ are in the spatial domain, the more similar their features $\mathbf{x}_\mathbf{u},\,\mathbf{x}_\mathbf{v} \in \mathbb{R}^n$ are in the feature space. Moreover, given that only one realization $z^{obs} = z(\cdot,\omega) \sim Z$ of the process is available at any given time, one must introduce stationarity assumptions inside $\mathfrak{D}$ to pool together scalar samples $z(\mathbf{u},\cdot)$ from different locations $\mathbf{u} \in \mathfrak{D}$ in the spatial domain, and be able to estimate the distribution.
Regardless of the stationarity assumptions involved in modeling these processes, we can assume that inside $\mathfrak{D}$ the probability $\Pr_\mathfrak{D}(\mathbf{X}) = \Pr(\{\mathbf{X}_\mathbf{u}\}_{\mathbf{u}\in\mathfrak{D}})$ is well-defined. For example, most prior art in statistical learning with geospatial data assumes that the pointwise probability of features $\Pr_\mathbf{u}(\mathbf{X}) = \Pr(\mathbf{X}_\mathbf{u})$ is not a function of location, that is $\Pr_\mathbf{u}(\mathbf{X}) = \Pr(\mathbf{X}),\, \forall \mathbf{u} \in \mathfrak{D}$. Under this assumption, samples from everywhere in $\mathfrak{D}$ are used to estimate $\Pr(\mathbf{X}) = \Pr(Z_1,Z_2,\ldots,Z_n)$. With the additional assumption that the feature vectors $\mathbf{X}_\mathbf{u}$ and $\mathbf{X}_\mathbf{v}$ are independent, the joint distribution of features for all locations can be written as $\Pr_\mathfrak{D}(\mathbf{X}) = \prod_{\mathbf{u}\in\mathfrak{D}} \Pr_\mathbf{u}(\mathbf{X})$.
Whereas the pointwise stationarity assumption may be reasonable inside a given spatial domain, the assumption of spatial independence of features is rarely defensible in practice. Additionally, pointwise stationarity often does not transfer from a source domain $\mathfrak{D}_s$ where the model is learned to a target domain $\mathfrak{D}_t$ where the model is applied, and consequently the joint distributions of features differ: $\Pr_{\mathfrak{D}_s} \ne \Pr_{\mathfrak{D}_t}$. Before we can illustrate these two issues in more detail, we need to complete the definition of geostatistical learning problems by introducing the notion of spatial learning tasks.
We have introduced the notion of spatial domain $\mathfrak{D}$, and the notion of joint probability of features $\Pr_\mathfrak{D}(\mathbf{X})$ for all locations in the domain. Now we introduce the notion of spatial learning tasks, which are similar to classical learning tasks, but with the main difference that they can leverage properties of the underlying spatial domain. Classically, a learning task describes an action in terms of available features to produce new data. For example, ``predict feature $Z_{j_0}$ from features $(Z_{j_1},Z_{j_2})$'', or ``cluster the samples using features $(Z_{j_1},Z_{j_2},Z_{j_3})$'' are classical learning tasks. In contrast, a spatial learning task $T$ involves the spatial domain $\mathfrak{D}$ in addition to the features, and is therefore more complex. Practical examples from the industry include:
\begin{itemize}
\item \textbf{Mining}: The task of segmenting a mineral deposit from borehole samples using a set of features is a spatial learning task. It assumes the segmentation result to be a \emph{contiguous volume} of rock, which is an additional constraint in terms of spatial coordinates.
\item \textbf{Agriculture}: The task of identifying crops from satellite images is a spatial learning task. Locations that have the same crop type \emph{appear together} in the images despite possible noise in image layers (e.g. presence of clouds, animals).
\item \textbf{Petroleum}: The task of segmenting formations from seismic data is a spatial learning task because these formations are large-scale \emph{near-horizontal} layers of stacked rock.
\end{itemize}
Many more examples of spatial learning tasks exist, and others are yet to be proposed. Given the concepts introduced above, we are now ready for the main definition of this section:
\paragraph{\textbf{Definition (Geostatistical Learning)}} Let $\mathfrak{D}_s$ be a source spatial domain, and $\mathfrak{D}_t$ be a target spatial domain. Let $\Pr_{\mathfrak{D}_s}(\mathbf{X}_s)$ and $\Pr_{\mathfrak{D}_t}(\mathbf{X}_t)$ be the joint distributions of features for all locations in these domains, and let $T_s$ and $T_t$ be two spatial learning tasks. Geostatistical (transfer) learning consists of learning $T_t$ over $\mathfrak{D}_t$ using the knowledge acquired while learning $T_s$ over $\mathfrak{D}_s$, and assuming that the observed spatial data in $\mathfrak{D}_s$ and $\mathfrak{D}_t$ are each a single spatial sample of $\Pr_{\mathfrak{D}_s}(\mathbf{X}_s)$ and $\Pr_{\mathfrak{D}_t}(\mathbf{X}_t)$, respectively.
There are considerable differences between the classical definition of transfer learning \citep{Pan2010,Weiss2016}, and the proposed definition above. First, the distribution we have denoted by $\Pr_\mathfrak{D}(\mathbf{X})$ is spatial and involves all the locations $\mathbf{u}\in\mathfrak{D}$, whereas the distribution in classical transfer learning is the marginal for any specific location, obtained from the assumption of pointwise stationarity $\Pr(\mathbf{X}_\mathbf{u}) = \Pr(\mathbf{X})$. Second, we use the term domain to refer to spatial domains $\mathfrak{D}$, whereas the non-spatial literature uses the same term for the pair $\left(\mathbf{X}_\mathbf{u}, \Pr(\mathbf{X}_\mathbf{u})\right) = \left(\mathbf{X}, \Pr(\mathbf{X})\right)$. Third, the spatial learning task $T$ we have introduced may be described in terms of properties of the spatial domain, which are not available in generic transfer learning problems.
Having understood the main differences between classical and geostatistical learning, we now focus our attention on a specific type of geostatistical transfer learning problem, and illustrate some of the unique challenges caused by spatial dependence.
\subsection{Covariate shift}\label{sec:shift}
Assume that the two spatial domains are different $\mathfrak{D}_s \ne \mathfrak{D}_t$, but that they share a set of processes $(Z_1,Z_2,\ldots,Z_n)$. Additionally, assume that pointwise stationarity holds. Let $Z_o = f(Z_1,Z_2,\ldots,Z_n)$ be a new process obtained as a function of the shared processes, and assume that it has only been observed in $\mathfrak{D}_s$ via a measuring device and/or manual labeling. That is, $z_o^{obs}(\cdot, \omega) \sim Z_o$ is a spatial sample of the process $Z_o$ over $\mathfrak{D}_s$. Under these assumptions, we can introduce the shared vector of features $\mathbf{X}_s = \mathbf{X}_t = \mathbf{X} = (Z_1,Z_2,\ldots,Z_n,Z_o)$, and the supervised learning task $T_s = T_t = T$ of predicting the process $Z_o$ regardless of location $\mathbf{u} \in \mathfrak{D}_s \cup \mathfrak{D}_t$.
Let $\mathcal{X} = \mathbf{X}_{1:n}$ be the explanatory features, and $\mathcal{Y} = \mathbf{X}_{n+1}$ be the response feature. For any $\mathbf{u}\in\mathfrak{D}_s$, we can write $\Pr_\mathbf{u}(\mathcal{X},\mathcal{Y}) = \Pr_\mathbf{u}(\mathcal{Y} | \mathcal{X}) \Pr_\mathbf{u}(\mathcal{X})$. Likewise, for any $\mathbf{v}\in\mathfrak{D}_t$ we can write $\Pr_\mathbf{v}(\mathcal{X},\mathcal{Y}) = \Pr_\mathbf{v}(\mathcal{Y} | \mathcal{X}) \Pr_\mathbf{v}(\mathcal{X})$. The covariate shift property is defined as follows:
\paragraph{\textbf{Definition (Covariate Shift)}} A geostatistical learning problem has the covariate shift property when, for any $\mathbf{u}\in\mathfrak{D}_s$ and any $\mathbf{v}\in\mathfrak{D}_t$, the distributions $\Pr_\mathbf{u}(\mathcal{X},\mathcal{Y})$ and $\Pr_\mathbf{v}(\mathcal{X},\mathcal{Y})$ differ in that $\Pr_\mathbf{u}(\mathcal{X}) \ne \Pr_\mathbf{v}(\mathcal{X})$ while $\Pr_\mathbf{u}(\mathcal{Y} | \mathcal{X}) = \Pr_\mathbf{v}(\mathcal{Y} | \mathcal{X})$.
The property is based on the idea that the underlying true function $f$ that created the process $\mathcal{Y} = f(\mathcal{X})$ is the same for all $\mathbf{u}\in\mathfrak{D}_s$ and all $\mathbf{v}\in\mathfrak{D}_t$. In this case, the function is approximated by the conditional distribution $\Pr_\mathbf{u}(\mathcal{Y} | \mathcal{X}) = \Pr_\mathbf{v}(\mathcal{Y} | \mathcal{X})$ for each and every location (see \autoref{fig:shift}).
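A minimal simulation of this property, assuming a one-dimensional explanatory feature and a hypothetical shared labeling function f(x) = sin(x), would look as follows (all names and distribution parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def f(x):
    # Shared labeling function: the conditional Pr(Y|X) is identical on
    # source and target domains by construction.
    return np.sin(x)

# Covariate shift: only the distributions of the explanatory feature differ.
x_source = rng.normal(loc=-1.0, scale=0.5, size=1000)   # ~ Pr_u(X)
x_target = rng.normal(loc=+1.0, scale=0.5, size=1000)   # ~ Pr_v(X)
y_source, y_target = f(x_source), f(x_target)

# The covariates are shifted while the labeling function is unchanged.
mean_shift = x_target.mean() - x_source.mean()
```

A model learned on `(x_source, y_source)` alone would mostly see inputs near $-1$, even though it will be applied near $+1$.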
\tikzset{
error band/.style={fill=red},
error band style/.style={
error band/.append style=#1
}
}
\newcommand{\addplotwitherrorband}[4][]{
\addplot [#1, draw=none, stack plots=y, forget plot] {#2-(#3)};
\addplot +[#1, draw=none, stack plots=y, error band] {(#3)+(#4)} \closedcycle;
\addplot [#1, draw=none, stack plots=y, forget plot] {-(#2)-(#3)};
\addplot [#1, forget plot] {#2};
}
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.75,
declare function={
f(\x)=rad(\x)-sin(\x);
s(\x)=1.5+0.5*cos(\x);
}]
\begin{axis}[domain=0:360,
ticks=none,enlargelimits=false,
xlabel={$\mathcal{X}$},ylabel={$\mathcal{Y}$},
extra description/.code={
\node[rotate=30] at (0.7,0.5) {$\Pr_\mathbf{u}(\mathcal{Y} | \mathcal{X}) = \Pr_\mathbf{v}(\mathcal{Y} | \mathcal{X})$};
\node at (0.2,0.15) {$\Pr_\mathbf{u}(\mathcal{X})$};
\node at (0.8,0.2) {$\Pr_\mathbf{v}(\mathcal{X})$};
\draw[-Latex] (0.2,0.8) node[inner sep=0,label=above:$f$] {}
to[out=-90,in=135] (0.5,0.61);
},
cycle list={
error band style=gray!20\\
error band style=gray!40\\
error band style=gray!60\\
error band style=gray!80\\
error band style=gray!100\\
}]
\foreach \dh in {1,0.5,0.25,0.125,0.0625} {
\addplotwitherrorband[thick,black] {f(x)}{\dh*s(x)}{\dh*s(x)}
}
\addplot[smooth,fill=gray!80,fill opacity=0.5] coordinates {(0,-5) (180,-3) (270,-5)} --cycle;
\addplot[smooth,fill=purple!80,fill opacity=0.5] coordinates {(180,-5) (270,-3) (360,-5)} --cycle;
\end{axis}
\end{tikzpicture}
\caption{Covariate shift. The true underlying function $\mathcal{Y} = f(\mathcal{X})$ is approximated by the same conditional distribution $\Pr_\mathbf{u}(\mathcal{Y} | \mathcal{X}) = \Pr_\mathbf{v}(\mathcal{Y} | \mathcal{X}),\ \forall\mathbf{u}\in\mathfrak{D}_s,\forall\mathbf{v}\in\mathfrak{D}_t$. The only difference between source and target spatial domains lies in the distributions of explanatory variables $\Pr_\mathbf{u}(\mathcal{X}) \ne \Pr_\mathbf{v}(\mathcal{X})$.}
\label{fig:shift}
\end{figure}
In the geosciences, it is very common to encounter problems with covariate shift due to the great variability of natural processes. Whenever a model is (1) learned using labels provided by experts on a spatial domain $\mathfrak{D}_s$, is (2) validated with classical train-validation-test methodologies (meaning that it satisfies some performance threshold), and yet (3) performs poorly on a new spatial domain $\mathfrak{D}_t$ where the labeling function is expected to be the same, we can conclude that there are shifts in distribution. In \autoref{sec:exps} we illustrate covariate shifts in real data that we prepared in-house from geophysical surveys in New Zealand.
\subsection{Spatial correlation}\label{sec:corr}
Another important issue with geospatial data that is often ignored is spatial dependence, which we illustrate next. As mentioned earlier, the closer two locations $\mathbf{u},\mathbf{v}\in\mathfrak{D}$ are in a spatial domain, the more similar their features $\mathbf{x}_\mathbf{u},\,\mathbf{x}_\mathbf{v} \in \mathbb{R}^n$ are in the feature space. Different statistics are available to quantify this spatial dependence in a collection of samples, and a popular choice from geostatistics is the variogram $\gamma(h)$, which estimates for each spatial lag $h=||\mathbf{u}-\mathbf{v}||\in\mathbb{R}^+_0$ a correlation $\sigma^2 - \gamma(h)$, where $\sigma^2$ is the total sill in the samples \citep{Chiles2012}. Parallel algorithms for efficient variogram estimation exist in the literature \citep{Hoffimann2019}, and can be useful tools for fast diagnosis of the spatial correlation property:
\paragraph{\textbf{Definition (Spatial Correlation)}} A geostatistical learning problem has the spatial correlation property when the variogram of any of the stochastic processes $(Z_{s_j})_{j=1,2,\ldots,n_s}$ and $(Z_{t_j})_{j=1,2,\ldots,n_t}$ defined over $\mathfrak{D}_s$ and $\mathfrak{D}_t$ has a non-negligible positive range (or correlation length).
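As a concrete diagnostic, the classical Matheron estimator of the empirical variogram can be sketched as below (a serial illustration, not the parallel algorithm of \citep{Hoffimann2019}; function and parameter names are ours):

```python
import numpy as np

def empirical_variogram(coords, values, nbins=10, maxlag=None):
    """Matheron estimator: gamma(h) = average of 0.5*(z(u) - z(v))^2 over
    all pairs of locations whose lag ||u - v|| falls in each bin."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    diff = coords[:, None, :] - coords[None, :, :]
    lags = np.sqrt((diff ** 2).sum(axis=-1))
    incr = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)        # unique pairs only
    lags, incr = lags[iu], incr[iu]
    if maxlag is None:
        maxlag = lags.max()
    mask = lags <= maxlag
    lags, incr = lags[mask], incr[mask]
    edges = np.linspace(0.0, maxlag, nbins + 1)
    idx = np.clip(np.digitize(lags, edges) - 1, 0, nbins - 1)
    gamma = np.array([incr[idx == b].mean() if np.any(idx == b) else np.nan
                      for b in range(nbins)])
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, gamma
```

On spatially uncorrelated samples the estimate is flat at the sill; a positive range shows up as an initial rise towards the sill.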
Besides serving as a tool for diagnosing spatial correlation in geostatistical learning problems, variograms can also be used to simulate spatial processes with theoretical correlation structure. In \autoref{fig:feat-corr}, we illustrate the impact of spatial correlation in the feature space of two independent spatial processes $Z_1$ and $Z_2$ simulated with direct (a.k.a. LU) Gaussian simulation \citep{Alabert1987}. As we increase the variogram range $r$ in a spatial domain $\mathfrak{D}$ with $100\times 100$ pixels, we observe that the distribution of features $\Pr(\mathcal{X}) = \Pr(Z_1, Z_2)$ is gradually deformed from a standard Gaussian ($r=0$) to a ``boomerang''-shaped distribution ($r=80$).
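The direct (LU) simulation method can be sketched as follows, assuming for numerical robustness an exponential covariance model on a small grid (the covariance model, grid size, and function names are illustrative choices, not the exact setup of the experiments):

```python
import numpy as np

def lu_gaussian_simulation(nx, ny, r, rng):
    """Direct (LU) Gaussian simulation: draw z = L @ eps, where C = L L^T
    is the Cholesky factorization of the covariance matrix."""
    xs, ys = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    coords = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    if r == 0:
        C = np.eye(len(coords))            # pure nugget: no correlation
    else:
        C = np.exp(-3.0 * d / r)           # exponential covariance, range r
        C += 1e-9 * np.eye(len(coords))    # jitter for numerical stability
    L = np.linalg.cholesky(C)
    return (L @ rng.normal(size=len(coords))).reshape(nx, ny)

rng = np.random.default_rng(1)
z1 = lu_gaussian_simulation(20, 20, r=8, rng=rng)
z2 = lu_gaussian_simulation(20, 20, r=8, rng=rng)

# Feature vectors (z1(u), z2(u)) per pixel: spatial correlation deforms the
# point cloud in feature space away from the standard bivariate Gaussian.
features = np.column_stack([z1.ravel(), z2.ravel()])
```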
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figs/feat-corr.png}
\caption{Impact of spatial correlation in feature space. Two Gaussian processes $Z_1$ and $Z_2$ are simulated over a domain $\mathfrak{D}$ with $100\times 100$ pixels. As the variogram range $r$ increases from $r=0$ to $r=80$ pixels, the distribution of features $\Pr(\mathcal{X}) = \Pr(Z_1,Z_2)$ is gradually deformed away from the standard Gaussian $\mathcal{N}(0,I)$. Marker colors in scatter plots represent the pixel number in column-major order.}
\label{fig:feat-corr}
\end{figure}
Similar deformations are observed when the two processes $Z_1$ and $Z_2$ are correlated. In \autoref{fig:feat-corr2}, we illustrate the impact of spatial correlation for an inter-process correlation of $\rho(Z_1,Z_2) = 0.9$.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figs/feat-corr2.png}
\caption{Impact of spatial correlation in feature space with correlated processes. Similar to \autoref{fig:feat-corr} with the difference that the processes $Z_1$ and $Z_2$ are strongly correlated with correlation $\rho(Z_1,Z_2) = 0.9$.}
\label{fig:feat-corr2}
\end{figure}
Spatial correlation may have a different impact on the source and target domains $\mathfrak{D}_s$ and $\mathfrak{D}_t$, and can certainly affect the generalization error of learning models. In our experiments of \autoref{sec:exps}, we assume that the variogram ranges of the source and target processes are equal (i.e. $r_s = r_t = r$) to facilitate the analysis of the results. In practice, source and target processes may also have different spatial correlation, which is a type of shift that is not considered in classical transfer learning problems.
\section{Generalization error of learning models}\label{sec:error}
Having defined geostatistical learning problems, and their covariate shift and spatial correlation properties, we now turn to a general definition of the generalization error of learning models in geospatial settings. We review an importance-weighted approximation of a related generalization error based on pointwise stationarity assumptions, and the use of an efficient importance-weighted cross-validation method for error estimation.
Consider a geostatistical learning problem $\mathcal{P} = \left\{(\mathfrak{D}_s,\Pr_{\mathfrak{D}_s},T_s), (\mathfrak{D}_t,\Pr_{\mathfrak{D}_t},T_t)\right\}$ with a single supervised spatial learning task $T_s = T_t = T$ (e.g. regression), and assume that a set of response features $\mathcal{Y}_\mathbf{u}$ is created by a function $f$, based on a set of explanatory features $\mathcal{X}_\mathbf{u}$ for each and every location $\mathbf{u}\in \mathfrak{D}_s \cup \mathfrak{D}_t$. Our goal is to learn a model $\{\mathcal{Y}_\mathbf{u}\}_{\mathbf{u}\in\mathfrak{D}_t} \approx \hat{f}\Big(\{\mathcal{X}_\mathbf{u}\}_{\mathbf{u}\in\mathfrak{D}_t}\Big)$ over the target domain $\mathfrak{D}_t$ that approximates $f$ in terms of expected risk for some spatial supervised loss function $\mathcal{L}$:
\begin{equation}\label{eq:risk}
\hat{f} = \argmin_g \mathbb{E}_{\Pr_{\mathfrak{D}_t}}\left[\mathcal{L}\Bigg(\{\mathcal{Y}_\mathbf{u}\}_{\mathbf{u}\in\mathfrak{D}_t},\ g\Big(\{\mathcal{X}_\mathbf{u}\}_{\mathbf{u}\in\mathfrak{D}_t}\Big)\Bigg)\right]
\end{equation}
In the expected value of \autoref{eq:risk}, spatial samples of the processes are drawn from $\Pr_{\mathfrak{D}_t}$ and rearranged into feature vectors $\mathcal{X}_\mathbf{u}$ and $\mathcal{Y}_\mathbf{u}$ for every location $\mathbf{u} \in \mathfrak{D}_t$. The spatial loss function $\mathcal{L}$ compares the spatial map of features from the sample $\{\mathcal{Y}_\mathbf{u}\}_{\mathbf{u}\in\mathfrak{D}_t}$ with the approximated map from the model $g\Big(\{\mathcal{X}_\mathbf{u}\}_{\mathbf{u}\in\mathfrak{D}_t}\Big)$. The model $\hat{f}$ is the one that minimizes the expected loss (or risk) over the target domain $\mathfrak{D}_t$.
\paragraph{\textbf{Definition (Generalization Error)}} The generalization error of a learning model $\hat{f}$ in a geostatistical learning problem $\mathcal{P}$ is the expected risk attained by the model when spatial samples are drawn from $\Pr_{\mathfrak{D}_t}$ over the target domain $\mathfrak{D}_t$ (see \autoref{eq:risk}).
Unlike the classical definition of generalization error, the definition above for geostatistical learning problems relies on a spatial loss function $\mathcal{L}$, and on spatial samples like those produced via geostatistical simulation \citep{Hoffimann2017,Mariethoz2010}. For truly spatial learning models $\hat{f}$ that use multiple locations in the spatial domain to make predictions, this generalization error is more appropriate. In the present work, however, we do not target spatial learning models, and only consider pointwise learning:
\paragraph{\textbf{Definition (Pointwise Learning)}} Given a family of classical (non-spatial) learning models $\{\hat{f}_\mathbf{u}\}_{\mathbf{u}\in\mathfrak{D}}$, pointwise learning consists of learning the model $\hat{f}\Big(\{\mathcal{X}_\mathbf{u}\}_{\mathbf{u}\in\mathfrak{D}}\Big) = \{\hat{f}_\mathbf{u}(\mathcal{X}_\mathbf{u})\}_{\mathbf{u}\in\mathfrak{D}}$ that assigns to each location $\mathbf{u}\in\mathfrak{D}$ the value $\hat{f}_\mathbf{u}(\mathcal{X}_\mathbf{u})$ independently of the explanatory features at other locations.
More specifically, we consider pointwise learning with families that are made of a single learning model $\{\hat{f}_\mathbf{u}\}_{\mathbf{u}\in\mathfrak{D}} = \{\dot{f}\}$. In this case, the model $\dot{f}$ is often learned based on pointwise stationarity assumptions, for some pointwise loss $\dot{\mathcal{L}}$:
\begin{equation}\label{eq:pointrisk}
\dot{f} = \argmin_{\dot{g}} \mathbb{E}_{\Pr}\left[\dot{\mathcal{L}}\Big(\mathcal{Y}, \dot{g}(\mathcal{X})\Big)\right]
\end{equation}
Although pointwise learning with a single model is a very simple type of geostatistical learning, it is by far the most widely used approach in the geospatial literature. We acknowledge this fact, and consider an empirical approximation of the pointwise expected risk in \autoref{eq:pointrisk} as opposed to the spatial expected risk in \autoref{eq:risk}.
An empirical approximation of the pointwise expected risk of a model $\dot{g}$ can be obtained via discretization of the target spatial domain $\mathfrak{D}_t$:
\begin{equation}\label{eq:emprisk}
\mathcal{R}_t(\dot{g}) = \mathbb{E}_{\Pr_{\mathbf{u}\in\mathfrak{D}_t}}\left[\dot{\mathcal{L}}\Big(\mathcal{Y}, \dot{g}(\mathcal{X})\Big)\right] \approx \frac{1}{|\mathfrak{D}_t|} \sum_{\mathbf{u}\in\mathfrak{D}_t} \dot{\mathcal{L}}\Big(\mathcal{Y}_\mathbf{u}, \dot{g}(\mathcal{X}_\mathbf{u})\Big)
\end{equation}
with $|\mathfrak{D}_t|$ the number of locations in the discretization. The problem with this empirical approximation is that the response features $\mathcal{Y}_\mathbf{u}$ are not available in the target domain where the model will be applied. However, it is easy to show that the pointwise expected risk in \autoref{eq:emprisk} can be rewritten with importance weights $\dot{w}(\mathcal{X},\mathcal{Y})$ when samples from $\mathfrak{D}_s$ are drawn instead \citep{Sugiyama2006}:
\begin{equation}
\mathcal{R}_t(\dot{g}) = \mathbb{E}_{\Pr_{\mathbf{v}\in\mathfrak{D}_t}}\left[\dot{\mathcal{L}}\Big(\mathcal{Y}, \dot{g}(\mathcal{X})\Big)\right] = \mathbb{E}_{\Pr_{\mathbf{u}\in\mathfrak{D}_s}}\left[\dot{w}(\mathcal{X},\mathcal{Y})\dot{\mathcal{L}}\Big(\mathcal{Y}, \dot{g}(\mathcal{X})\Big)\right]
\end{equation}
with $\dot{w}(\mathcal{X},\mathcal{Y}) = \frac{\Pr_{\mathbf{v}\in\mathfrak{D}_t}(\mathcal{X},\mathcal{Y})} {\Pr_{\mathbf{u}\in\mathfrak{D}_s}(\mathcal{X},\mathcal{Y})}$. Under covariate shift, the importance weights only depend on the distribution of explanatory features $\dot{w}(\mathcal{X}) = \frac{\Pr_{\mathbf{v}\in\mathfrak{D}_t}(\mathcal{X})} {\Pr_{\mathbf{u}\in\mathfrak{D}_s}(\mathcal{X})}$, and we can write a simple importance-weighted empirical approximation:
\begin{equation}\label{eq:iwer}
\mathcal{R}_t(\dot{g}) = \mathbb{E}_{\Pr_{\mathbf{u}\in\mathfrak{D}_s}}\left[\dot{w}(\mathcal{X})\dot{\mathcal{L}}\Big(\mathcal{Y}, \dot{g}(\mathcal{X})\Big)\right] \approx \frac{1}{|\mathfrak{D}_s|} \sum_{\mathbf{u}\in\mathfrak{D}_s} \dot{w}(\mathcal{X}_\mathbf{u})\dot{\mathcal{L}}\Big(\mathcal{Y}_\mathbf{u}, \dot{g}(\mathcal{X}_\mathbf{u})\Big)
\end{equation}
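The importance-weighted empirical approximation above can be sketched in a few lines, assuming a squared pointwise loss and weights that have already been computed (weight estimation is the subject of the next subsection; all names are illustrative):

```python
import numpy as np

def weighted_empirical_risk(y_true, y_pred, weights):
    """Importance-weighted empirical risk over the source locations:
    (1/|D_s|) * sum over u of w(x_u) * L(y_u, g(x_u)), with squared loss."""
    loss = (np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)) ** 2
    return float(np.mean(np.asarray(weights, dtype=float) * loss))
```

With unit weights this reduces to the plain (unweighted) empirical risk on the source domain.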
Our goal is to find the pointwise model that minimizes the empirical risk approximation $\hat{\mathcal{R}}_t(\dot{g})$ introduced in \autoref{eq:iwer}:
\begin{equation}
\dot{f} = \argmin_{\dot{g}} \hat{\mathcal{R}}_t(\dot{g})
\end{equation}
Alternatively, our goal is to rank a collection of models $\{\dot{g}_i\}_{i=1,2,\ldots,k}$ based on their empirical risk $\{\hat{\mathcal{R}}_t(\dot{g}_i)\}_{i=1,2,\ldots,k}$ in a geostatistical learning problem to aid model selection.
In order to achieve the stated goals, we need to (1) estimate the importance weights in the empirical risk approximation, and (2) remove the dependence of the approximation on a specific dataset. These two issues are addressed in the following sections.
\subsection{Density ratio estimation}\label{sec:dre}
The empirical approximation of the risk $\hat{\mathcal{R}}_t(\dot{g})$ depends on estimates of the weights $\dot{w}(\mathcal{X}_\mathbf{u})$, which are ratios of probabilities in the target and source domains. The following problem can be posed \citep{Sugiyama2012}:
\paragraph{\textbf{Definition (Density Ratio Estimation)}} Given two collections of samples $\{\mathcal{X}_\mathbf{u}\}_{\mathbf{u}\in\mathfrak{D}_s}$ and $\{\mathcal{X}_\mathbf{v}\}_{\mathbf{v}\in\mathfrak{D}_t}$ from source and target domains, estimate the density ratio $\frac{\Pr_{\mathbf{v}\in\mathfrak{D}_t}(\mathcal{X})}{\Pr_{\mathbf{u}\in\mathfrak{D}_s}(\mathcal{X})}$ at any new sample $\mathcal{X}$. In particular, estimate the ratio at all samples $\{\mathcal{X}_\mathbf{u}\}_{\mathbf{u}\in\mathfrak{D}_s}$ from the source.
Efficient methods for density ratio estimation that perform well with high-dimensional features have been proposed in the literature. In this work we consider a fast method named Least Squares Importance Fitting (LSIF) \citep{Kanamori2009,Kanamori2009a}. The LSIF method assumes that the weights are a linear combination of basis functions $\dot{w}(\mathcal{X}_\mathbf{u}) = \bm{\alpha}^\top \bm{\varphi}(\mathcal{X}_\mathbf{u})$ with coefficients to be learned $\bm{\alpha} = (\alpha_1,\alpha_2,\ldots,\alpha_b)$ and fixed basis $\bm{\varphi}(\mathcal{X}_\mathbf{u}) = (\varphi_1(\mathcal{X}_\mathbf{u}),\varphi_2(\mathcal{X}_\mathbf{u}),\ldots,\varphi_b(\mathcal{X}_\mathbf{u}))$. The LSIF estimator is derived by minimizing the squared error with the true density ratio:
\begin{equation}
\begin{aligned}
\minimize_{\bm{\alpha}\in\mathbb{R}^b} \quad & \frac{1}{2}\int\left(\dot{w}(\mathcal{X}_\mathbf{u}) - \frac{\Pr_{\mathbf{v}\in\mathfrak{D}_t}(\mathcal{X}_\mathbf{u})}{\Pr_{\mathbf{u}\in\mathfrak{D}_s}(\mathcal{X}_\mathbf{u})} \right)^2\Pr_{\mathbf{u}\in\mathfrak{D}_s}(\mathcal{X}_\mathbf{u}) d\mathcal{X}_\mathbf{u}\\
\textrm{s.t.} \quad & \bm{\alpha} \succeq \mathbf{0}
\end{aligned}
\end{equation}
under the constraint that densities are always positive. By choosing $b$ center features randomly from the target domain $\{\mathcal{X}_i\}_{i=1,2,\ldots,b}$, the method introduces a Gaussian kernel basis $\varphi_i(\mathcal{X}_\mathbf{u}) = k(\mathcal{X}_\mathbf{u},\mathcal{X}_i)$ that simplifies the objective function to a matrix form:
\begin{equation}
\begin{aligned}
\minimize_{\bm{\alpha}\in\mathbb{R}^b} \quad & \frac{1}{2} \bm{\alpha}^\top \mathbf{H} \bm{\alpha} - \mathbf{h}^\top \bm{\alpha} + \lambda \mathbf{1}^\top \bm{\alpha}\\
\textrm{s.t.} \quad & \bm{\alpha} \succeq \mathbf{0}
\end{aligned}
\end{equation}
with $\mathbf{H} = \int \bm{\varphi}(\mathcal{X}_\mathbf{u}) \bm{\varphi}(\mathcal{X}_\mathbf{u})^\top \Pr_{\mathbf{u}\in\mathfrak{D}_s}(\mathcal{X}_\mathbf{u}) d\mathcal{X}_\mathbf{u}$ and $\mathbf{h} = \int \bm{\varphi}(\mathcal{X}_\mathbf{v}) \Pr_{\mathbf{v}\in\mathfrak{D}_t}(\mathcal{X}_\mathbf{v})d\mathcal{X}_\mathbf{v}$. The regularization parameter $\lambda \ge 0$ on the coefficients $\bm{\alpha}$ avoids overfitting, and empirical estimates of both $\mathbf{H}$ and $\mathbf{h}$ are easily obtained with sample averages:
\begin{equation}
\begin{aligned}
\hat{\mathbf{H}} &= \frac{1}{|\mathfrak{D}_s|}\sum_{\mathbf{u}\in\mathfrak{D}_s} \bm{\varphi}(\mathcal{X}_\mathbf{u})\bm{\varphi}(\mathcal{X}_\mathbf{u})^\top\\
\hat{\mathbf{h}} &= \frac{1}{|\mathfrak{D}_t|}\sum_{\mathbf{v}\in\mathfrak{D}_t} \bm{\varphi}(\mathcal{X}_\mathbf{v})
\end{aligned}
\end{equation}
This quadratic optimization problem with linear inequality constraints can be solved very efficiently with modern optimization software \citep{KMogensen2018,Dunning2017}. In the end, the optimal coefficients $\bm{\alpha}^\star$ are plugged back into the basis expansion for optimal estimates of the weights on new samples $\dot{w}(\mathcal{X}) = {\bm{\alpha}^\star}^\top \bm{\varphi}(\mathcal{X})$.
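For illustration, the closely related unconstrained variant (uLSIF), which replaces the constrained quadratic program by a linear solve followed by clipping of negative coefficients, can be sketched as follows (kernel bandwidth, regularization value, and function names are illustrative choices):

```python
import numpy as np

def ulsif_weights(X_source, X_target, sigma=1.0, lam=0.1, b=50, seed=0):
    """Density-ratio estimation in the spirit of LSIF, via the simpler
    unconstrained variant (uLSIF): solve (H + lam*I) alpha = h, clip at 0."""
    rng = np.random.default_rng(seed)
    centers = X_target[rng.choice(len(X_target), size=min(b, len(X_target)),
                                  replace=False)]

    def phi(X):
        # Gaussian kernel basis centered at samples from the target domain.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    Phi_s, Phi_t = phi(X_source), phi(X_target)
    H = Phi_s.T @ Phi_s / len(X_source)    # empirical H-hat
    h = Phi_t.mean(axis=0)                 # empirical h-hat
    alpha = np.linalg.solve(H + lam * np.eye(len(h)), h)
    alpha = np.maximum(alpha, 0.0)         # enforce a non-negative ratio
    return Phi_s @ alpha                   # w(x_u) at the source samples
```

On a shifted univariate example, the estimated weights are larger on source samples that fall in high-density regions of the target distribution, as expected from the true ratio.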
\subsection{Weighted cross-validation}\label{sec:iwcv}
In order to remove the dependence of the empirical risk approximation on the dataset, we use importance-weighted cross-validation (IWCV) \citep{Sugiyama2006,Sugiyama2007}. As with the traditional cross-validation procedure, the source domain is split into $k$ folds $\mathfrak{D}_s = \bigcup_{j=1}^k \mathfrak{D}_s^{(j)}$, and each fold $\mathfrak{D}_s^{(j)}$ is hidden for error evaluation while the model $\dot{g}^{(j)}$ is learned on the remaining folds:
\begin{equation}
\hat{\mathcal{R}}_t^{IWCV}(\dot{g}) = \frac{1}{k} \sum_{j=1}^k \frac{1}{|\mathfrak{D}_s^{(j)}|} \sum_{\mathbf{u}\in\mathfrak{D}_s^{(j)}} (\dot{w}(\mathcal{X}_\mathbf{u}))^l \dot{\mathcal{L}}\Big(\mathcal{Y}_\mathbf{u}, \dot{g}^{(j)}(\mathcal{X}_\mathbf{u})\Big)
\end{equation}
The main difference in the IWCV procedure is the importance weight that multiplies each sample. The regularization exponent $l \in [0,1]$ can be set to zero to recover the traditional estimator, or to a positive value to account for covariate shift. An optimal value for $l$ can be found via hyperparameter search by considering another layer of cross-validation. In this work, we simply set default values for $l$ such as $l=1$ or $l=0.5$.
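The IWCV estimator can be sketched as follows, assuming a squared pointwise loss, precomputed weights, and user-supplied fit/predict callables (all names are illustrative):

```python
import numpy as np

def iwcv_risk(X, y, w, fit, predict, k=5, l=1.0, seed=0):
    """Importance-weighted cross-validation: average the weighted loss
    over k held-out folds, with regularization exponent l in [0, 1]."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    risks = []
    for j in range(k):
        test = folds[j]
        train = np.concatenate([folds[i] for i in range(k) if i != j])
        model = fit(X[train], y[train])             # learn on remaining folds
        loss = (y[test] - predict(model, X[test])) ** 2
        risks.append(np.mean((w[test] ** l) * loss))  # weighted held-out loss
    return float(np.mean(risks))
```

With $l=0$ (or unit weights) the estimator coincides with ordinary $k$-fold cross-validation.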
In the rest of the paper, we combine IWCV with LSIF into a method for estimating generalization error that we term \emph{Density Ratio Validation}. Although IWCV is known to outperform classical cross-validation methods in non-spatial settings, little is known about its performance with geospatial data. Moreover, like all prior art, IWCV approximates the pointwise generalization error of \autoref{eq:pointrisk} as opposed to the geostatistical generalization error of \autoref{eq:risk}, and therefore is limited by design to non-spatial learning models.
\section{Experiments}\label{sec:exps}
In this section, we perform experiments to assess estimators of generalization error under varying covariate shifts and spatial correlation lengths. We consider Cross-Validation (CV), Block Cross-Validation (BCV) and Density Ratio Validation (DRV), which all rely on the same cross-validatory mechanism of splitting data into folds.
First, we use synthetic Gaussian process data and simple labeling functions to construct geostatistical learning problems for which learning models have a known (via geostatistical simulation) generalization error. In this case, we assess the estimators in terms of how well they estimate the actual error under various spatial distributions. Second, we demonstrate how the estimators are used for model selection in a real application with well logs from New Zealand, which can be considered a moderately sized dataset in this field.
\subsection{Gaussian processes}\label{sec:gauss}
Let $Z_{s_1},Z_{s_2}$ be two Gaussian processes with constant mean $\mu_s$ and variogram $\gamma_s$ defined over $\mathfrak{D}_s$, and likewise let $Z_{t_1},Z_{t_2}$ be two Gaussian processes with constant mean $\mu_t$ and variogram $\gamma_t$ defined over $\mathfrak{D}_t$. Denote by $r_s$ the variogram range (or correlation length) and by $\sigma_s^2$ the variogram sill (or total variance) of the processes in the source domain. Likewise, denote by $r_t$ and $\sigma_t^2$ the range and sill of the variogram in the target domain. It is clear that pointwise stationarity holds inside each of these domains. The feature vector $\mathcal{X}_\mathbf{u} \in \mathbb{R}^2$ for any location $\mathbf{u}\in\mathfrak{D}_s$ in the source domain has a bivariate Gaussian distribution $\mathcal{N}(\mu_s \mathbf{1}, \sigma_s^2 \mathbf{I})$, whereas the feature vector $\mathcal{X}_\mathbf{v} \in \mathbb{R}^2$ for any location $\mathbf{v}\in\mathfrak{D}_t$ in the target domain has a bivariate Gaussian distribution $\mathcal{N}(\mu_t \mathbf{1}, \sigma_t^2 \mathbf{I})$. By constraining the variogram ranges to be equal in source and target domains, that is $r_s = r_t = r$, and by requiring that both variograms pass through the origin (i.e. no nugget effect), we can investigate two types of covariate shift for various ranges $r$:
\paragraph{\textbf{Mean shift}} Define the shift in the mean as $\delta = c\, \abs{\mu_t - \mu_s} \in [0,\infty)$ for some normalization constant $c > 0$. In this experiment, we set $c = \frac{1}{3\sqrt{2}\sigma_s}$ for convenience, so that $\delta=1$ is equivalent to $\abs{\mu_t - \mu_s} = 3\sqrt{2}\sigma_s$, which in turn is equivalent to two circles (i.e. bivariate Gaussians) of radii $3\sigma_s$ touching each other along the identity line, see \autoref{fig:config}.
\paragraph{\textbf{Variance shift}} Define the shift in the variance as $\tau = \sigma_t / \sigma_s \in (0,\infty)$. Here, $\tau=1$ means absence of variance shift, $\tau < 1$ means that the variance where the model is applied is smaller than the variance where the model was trained, and $\tau > 1$ means that it is larger.
Geostatistical learning problems with $\tau>1$ are very challenging to solve, and usually require additional extrapolation models beyond the pointwise learning models discussed in this work. Therefore, we only consider cases with $\tau \le 1$ in this experiment. More specifically, we consider all combinations of shift in the mean and variance of Gaussian features by varying $(\delta,\tau)$ in the half-open unit square $\mathcal{B} = [0,1]\times(0,1]$.
Given a shift parameterized by $(\delta,\tau) \in \mathcal{B}$, we can classify it into one of three possible configurations depending on how the source and target distributions of features overlap:
\begin{equation}\label{eq:config}
\config(\delta,\tau) =
\begin{cases}
\text{inside}, & 2\delta \le 1 - \tau\\
\text{outside}, & 2\delta \ge 1 + \tau\\
\text{partial}, & \text{otherwise}
\end{cases}
\end{equation}
The first configuration in \autoref{eq:config} refers to the case in which the target distribution $\mathcal{N}(\mu_t \mathbf{1}, \sigma_t^2 \mathbf{I})$ is ``inside'' the source distribution $\mathcal{N}(\mu_s \mathbf{1}, \sigma_s^2 \mathbf{I})$, meaning that the circle of radius $3\sigma_t$ centered at $\mu_t\mathbf{1}$ is contained in the circle of radius $3\sigma_s$ centered at $\mu_s\mathbf{1}$. Similarly, the second configuration refers to the case in which the target distribution is ``outside'' the source distribution. Finally, the third configuration refers to a ``partial'' overlap when the two distributions share a common set of samples but are not entirely one inside of the other. We note, however, that the illustration with circles provided in \autoref{fig:config} is only representative in the absence of spatial correlation (i.e. $r = 0$), see \autoref{fig:feat-corr}.
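The classification in \autoref{eq:config} maps directly to code. A minimal Python sketch (the function name is ours, not part of any reference implementation):

```python
def shift_config(delta, tau):
    """Classify a covariate shift (delta, tau) into one of the three
    overlap configurations of the source and target distributions."""
    if 2 * delta <= 1 - tau:
        return "inside"    # target contained in source
    if 2 * delta >= 1 + tau:
        return "outside"   # no overlap
    return "partial"       # partial overlap
```

For example, $(\delta,\tau)=(0.5,0.5)$ falls in the \emph{partial} configuration.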
\begin{figure}[h]
\centering%
\begin{subfigure}[b]{.3\textwidth}
\begin{tikzpicture}
\draw[dashed] (-1.2,-1.2) -- (1.3,1.3) {};
\draw[fill=gray!80,fill opacity=0.5] (0,0) circle (1);
\draw[fill=purple!80,fill opacity=0.5] (0.15,0.15) circle (0.7);
\node at (0,-1.5) {inside};
\draw[-{Circle[length=2pt]}] (-1.0,0.8) node[inner sep=0,label=above:$\mathbf{x}_\mathbf{u}$] {} to[out=-90,in=120] (-0.7,-0.2);
\draw[-{Circle[length=2pt]}] (0.5,1.2) node[inner sep=0,label=above:$\mathbf{x}_\mathbf{v}$] {} to[out=-90,in=60] (0.4,0.0);
\node at (0,2.2) {For $\mathbf{u}\in\mathfrak{D}_s$ and $\mathbf{v}\in\mathfrak{D}_t$:};
\end{tikzpicture}
\end{subfigure}%
\begin{subfigure}[b]{.3\textwidth}
\begin{tikzpicture}
\draw[dashed] (-1.2,-1.2) -- (1.3,1.3) {};
\draw[fill=gray!80,fill opacity=0.5] (0,0) circle (1);
\draw[fill=purple!80,fill opacity=0.5] (0.6,0.6) circle (0.7);
\node at (0,-1.5) {partial};
\node[rotate=30] at (-0.7,1.0) {$\Pr_\mathbf{u}(\mathcal{X})$};
\node[rotate=-30] at (1.2,1.3) {$\Pr_\mathbf{v}(\mathcal{X})$};
\end{tikzpicture}
\end{subfigure}%
\begin{subfigure}[b]{.3\textwidth}
\begin{tikzpicture}
\draw[dashed] (-1.2,-1.2) -- (1.3,1.3) {};
\draw[fill=gray!80,fill opacity=0.5] (0,0) circle (1);
\draw[fill=purple!80,fill opacity=0.5] (1.3,1.3) circle (0.7);
\node at (0,-1.5) {outside};
\draw[-Latex] (0,0) -- (1,0) node[midway,above] {$3\sigma_s$};
\draw[-Latex] (1.3,1.3) -- (1.3+0.7,1.3) node[midway,above] {$3\sigma_t$};
\draw[dotted] (0+1.2,0-1) -- (1.3+1.2,1.3-1) node[midway,above,rotate=45] {$\abs{\mu_t - \mu_s}\sqrt{2}$} -- (1.3+1.2,0-1) node[midway,right] {$\abs{\mu_t - \mu_s}$} -- cycle node[midway,below] {$\abs{\mu_t - \mu_s}$};
\end{tikzpicture}
\end{subfigure}%
\caption{Three possible shift configurations. The target distribution is ``inside'' the source distribution (left), ``outside'' (right), or they show ``partial'' overlap (middle). The processes $Z_{s_1}$ and $Z_{s_2}$ are observed at any location $\mathbf{u}\in\mathfrak{D}_s$ forming a feature vector $\mathbf{x}_\mathbf{u}\in\mathbb{R}^2$. Similarly, the processes $Z_{t_1}$ and $Z_{t_2}$ (which are shifted versions of the previous two) are observed at any location $\mathbf{v}\in\mathfrak{D}_t$ forming a feature vector $\mathbf{x}_\mathbf{v}\in\mathbb{R}^2$. In this case the features $\mathbf{x}_\mathbf{u} \sim \Pr_\mathbf{u}(\mathcal{X})$ and $\mathbf{x}_\mathbf{v} \sim \Pr_\mathbf{v}(\mathcal{X})$ are samples of bivariate Gaussians illustrated with circles of radii $3\sigma_s$ and $3\sigma_t$ centered at $\mu_s\mathbf{1}$ and $\mu_t\mathbf{1}$.}
\label{fig:config}
\end{figure}
To efficiently simulate multiple spatial samples of the processes over a regular grid domain with $100\times 100$ locations (or pixels), we use spectral Gaussian simulation \citep{Gutjahr1997}. We fix the parameters of the source distribution at $\mu_s = 0$ and $\sigma_s = 1$ without loss of generality, and assume no inter-process correlation (i.e. $\rho = 0$) like we did in \autoref{fig:feat-corr}. Under these modeling assumptions, we are able to investigate the spatial distribution of features as a function of shift parameters $(\delta,\tau) \in \mathcal{B}$ and variogram ranges $r \in \mathcal{C} = \{0,10,20\}$.
To fully specify the geostatistical learning problem, we need to specify a learning task. The task consists of predicting a binary variable $\mathcal{Y}_\mathbf{v}$ at locations $\mathbf{v}$ in the target grid $\mathfrak{D}_t$ based on observations $y_\mathbf{u}$ of the variable at locations $\mathbf{u}$ in the source grid $\mathfrak{D}_s$. These observations (or labels) are synthesized using known labeling functions such as $y_\mathbf{u} = \sgn(\sin(w\norm{\mathbf{x}_\mathbf{u}}_p))$, where $\norm{\cdot}_p$ is the $p$-norm, $w$ is the angular frequency, and $\sgn$ is the modified sign function that assigns $+1$ to $x\ge0$ and $-1$ otherwise. The observations produced by these functions form alternating patterns in the feature space, which are not trivial to predict with simple learning models, see \autoref{fig:label}. In this experiment, we fix $p=1$ and $w = 4$ to save computational time. Other norms and angular frequencies produce similar results.
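The labeling function above can be sketched as follows (a minimal Python sketch with the modified sign function made explicit):

```python
import math

def sgn(x):
    # modified sign function: +1 for x >= 0, -1 otherwise
    return 1 if x >= 0 else -1

def label(x, p=1, w=4):
    """Synthetic labeling function y = sgn(sin(w * ||x||_p))."""
    norm = sum(abs(xi) ** p for xi in x) ** (1 / p)
    return sgn(math.sin(w * norm))
```

Sweeping the norm of the feature vector produces the alternating $\pm 1$ patterns shown in \autoref{fig:label}.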
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{groupplot}[group style={group size=3 by 1, horizontal sep=2cm},
height=4cm,width=4cm,
xlabel=$Z_1$,ylabel=$Z_2$,
ylabel near ticks,
enlargelimits=false]
\nextgroupplot[title={$p=1,w=2$}]
\addplot graphics[xmin=-4,ymin=-4,xmax=4,ymax=4] {figs/labels_p=1_w=2.png};
\nextgroupplot[title={$p=1,w=4$}]
\addplot graphics[xmin=-4,ymin=-4,xmax=4,ymax=4] {figs/labels_p=1_w=4.png};
\nextgroupplot[title={$p=2,w=4$}]
\addplot graphics[xmin=-4,ymin=-4,xmax=4,ymax=4] {figs/labels_p=2_w=4.png};
\end{groupplot}
\end{tikzpicture}
\caption{Labeling function $f_{p,w}(\mathbf{x}) = \sgn(\sin(w\norm{\mathbf{x}}_p))$. Labels form alternating patterns in the feature space for different $p$-norm and angular frequencies $w$.}
\label{fig:label}
\end{figure}
Having defined the problem, we proceed and specify learning models in order to investigate the different estimators of generalization error. We choose two models that are based on different prediction mechanisms \citep{Hastie2009}:
\paragraph{\textbf{Decision tree}} A pointwise decision tree model $\dot{f}_T$ makes predictions solely based on the features of the sample, without exploiting nearby features in the feature space.
\paragraph{\textbf{K-nearest neighbors}} A pointwise k-nearest neighbors model $\dot{f}_N$ makes predictions based on nearby features, and is sometimes called a ``spatial model''.
These two models $\dot{f}_T$ and $\dot{f}_N$ are simply representative models from the ``non-spatial'' and ``spatial'' families of models. We emphasize, however, that the term ``spatial model'' can be misleading in the spatial statistics literature. It is important to distinguish ``spatial models'' such as k-nearest neighbors that exploit the notion of proximity of features in the \emph{feature space} from ``geospatial models'' that also exploit the proximity of samples in the \emph{physical space} (or spatial domain as we have been calling it) besides their features.
The experiment proceeds as follows. For each shift $(\delta,\tau) \in \mathcal{B}$, each correlation length $r \in \mathcal{C}$, and each pointwise learning model $\dot{f} \in \{\dot{f}_T, \dot{f}_N\}$, we sample a problem $\mathcal{P}_{\delta,\tau,r}$ and estimate the generalization error of the model $\dot{f}$ on the problem with CV, BCV and DRV. We set the hyperparameters of the CV and BCV estimators based on the fact that the correlation length never exceeds $20$. For instance, we set the block side in BCV to $s=20$, and use the equivalent number of folds in CV, i.e. $k=(100/20)^2=25$ for a domain with $100\times 100$ pixels. We set the kernel width of the LSIF estimator in DRV to $\sigma=2$ based on the synthetic Gaussian distributions, and use $10$ kernels in the basis expansion. Additionally, we approximate the true generalization error of the models with Monte Carlo simulation over the target domain (e.g. 100 spatial samples).
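The correspondence between the BCV block side and the equivalent number of CV folds can be sketched as follows (a sketch, assuming a square grid whose side is a multiple of the block side):

```python
def block_fold(i, j, block_side=20, grid_side=100):
    """Fold index of pixel (i, j) when the grid is partitioned into
    square blocks of the given side, as in block cross-validation."""
    blocks_per_side = grid_side // block_side
    return (i // block_side) * blocks_per_side + (j // block_side)

# number of distinct folds on a 100 x 100 grid with blocks of side 20
k = len({block_fold(i, j) for i in range(100) for j in range(100)})
```

This yields $k = (100/20)^2 = 25$ folds, matching the number of folds used for CV above.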
To facilitate the visualization of the results, we introduce shift functions $\mathcal{S}\colon \mathcal{B} \to [0,\infty)$ that map the shift parameters $(\delta,\tau)$ to a single covariate shift value, which can be interpreted loosely as the ``difficulty'' of the problem:
\paragraph{\textbf{Kullback-Leibler divergence}} The divergence or relative entropy between two distributions $p$ and $q$ is defined as $\mathcal{S}_{KL}(p||q) = \int p(x) \log\frac{p(x)}{q(x)}dx$, and can be derived analytically for two (2D) Gaussian distributions $p = \mathcal{N}(\mu_t \mathbf{1}, \sigma_t^2 \mathbf{I})$ and $q = \mathcal{N}(\mu_s \mathbf{1}, \sigma_s^2 \mathbf{I})$. We derive a formula in terms of $\delta$ and $\tau$ by fixing $\sigma_s = 1$:
\begin{equation}
\mathcal{S}_{KL}(\delta,\tau) = \delta^2 + \tau^2 - \log(\tau^4) - 1
\end{equation}
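The formula vanishes in the absence of shift ($\delta = 0$, $\tau = 1$) and grows with either kind of shift; a direct evaluation in Python (function name ours):

```python
import math

def kl_shift(delta, tau):
    """Kullback-Leibler shift function S_KL(delta, tau) as defined above."""
    return delta**2 + tau**2 - math.log(tau**4) - 1
```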
\paragraph{\textbf{Jaccard distance}} The Jaccard index between two sets $A$ and $B$ is defined as $J(A,B) = \frac{|A\cap B|}{|A\cup B|}$, and the corresponding distance as $\mathcal{S}_J(A,B) = 1 - J(A,B)$. For two (2D) Gaussian distributions, we consider $A$ and $B$ to be circles of radii $3\sigma_s$ and $3\sigma_t$ centered at $\mu_s\mathbf{1}$ and $\mu_t\mathbf{1}$. The distance is then expressed in terms of areas, which can be derived analytically in terms of $\delta$ and $\tau$ by fixing $\sigma_s = 1$:
\begin{equation}\label{eq:jaccard}
\begin{aligned}
|A| = 9\pi,\quad |B| = 9\pi\tau^2\\
C_1 = 9\arccos\left(\frac{2\delta^2 + 9(1-\tau^2)}{6\sqrt{2}\delta}\right)\\
C_2 = 9\tau^2\arccos\left(\frac{2\delta^2 + 9(\tau^2 - 1)}{6\sqrt{2}\delta\tau}\right)\\
C_3 = \frac{1}{2}\sqrt{\left(9(1+\tau)^2 - 2\delta^2\right)\left(2\delta^2 - 9(1-\tau)^2\right)}\\
|A \cap B| = C_1 + C_2 - C_3\\
|A \cup B| = |A| + |B| - |A \cap B|
\end{aligned}
\end{equation}
\paragraph{\textbf{Novelty factor}} We propose a new shift function termed the \emph{novelty factor} inspired by the geometric view of Jaccard. First, we define the novelty of $B$ with respect to $A$ as $N(B/A) = \frac{|B- A\cap B| - |A\cap B|}{|B|}$, and notice that it is the fraction of $B$ that is outside of $A$ minus the fraction of $B$ that is inside of $A$. Second, we restrict the definition to cases with $|B| \le |A|$ (e.g. Gaussian case with $\tau \le 1$), and notice that the novelty $N(B/A)$ lies in the interval $[-1,1]$. Finally, we define the novelty factor $\mathcal{S}_N(A,B) = \frac{N(B/A) + 1}{2}$ in the interval $[0,1]$, which can be easily computed for the Gaussian case using the formulas derived in \autoref{eq:jaccard}.
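The novelty factor can be computed from the circle areas. A sketch with our own function names, where the general circle-overlap formula reduces to the $C_1,C_2,C_3$ terms of \autoref{eq:jaccard} for radii $3\sigma_s$ and $3\sigma_t$, and where the degenerate \emph{inside}/\emph{outside} cases (for which the arccos terms fall outside their domain) are handled explicitly:

```python
import math

def circle_overlap(d, r1, r2):
    """Area of intersection of two circles with radii r1, r2 whose
    centers are a distance d apart."""
    if d >= r1 + r2:                # disjoint circles: no overlap
        return 0.0
    if d <= abs(r1 - r2):           # smaller circle fully contained
        return math.pi * min(r1, r2) ** 2
    c1 = r1**2 * math.acos((d**2 + r1**2 - r2**2) / (2 * d * r1))
    c2 = r2**2 * math.acos((d**2 + r2**2 - r1**2) / (2 * d * r2))
    c3 = 0.5 * math.sqrt(((r1 + r2)**2 - d**2) * (d**2 - (r1 - r2)**2))
    return c1 + c2 - c3

def novelty_factor(d, r1, r2):
    """S_N = (N(B/A) + 1) / 2, where N(B/A) is the fraction of B outside
    of A minus the fraction of B inside of A (assumes r2 <= r1)."""
    inter = circle_overlap(d, r1, r2)
    area_b = math.pi * r2**2
    novelty = ((area_b - inter) - inter) / area_b
    return (novelty + 1) / 2
```

As expected, $\mathcal{S}_N$ is $0$ when $B$ is entirely inside $A$ and $1$ when the two circles are disjoint.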
We plot the true generalization error of the models as a function of the different covariate shifts in \autoref{fig:gaussian-plot1}, and color the points according to their shift configuration (see \autoref{eq:config}). In this plot, the horizontal dashed line intercepting the vertical axis at $0.5$ represents a model that assigns positive and negative labels to samples at random with equal weight (i.e. Bernoulli variable).
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figs/gaussian-plot1}
\caption{Generalization error of learning models versus covariate shift functions. Among all shift functions, the novelty factor is the only function that groups shift configurations along the horizontal axis. Models behave similarly in terms of generalization error for the given dataset size ($100\times 100$ pixels), and difficult shift configurations lead to errors that approach the theoretical Bernoulli limit of $0.5$.}
\label{fig:gaussian-plot1}
\end{figure}
Among the three shift functions, the novelty factor is the only function that groups shift configurations along the horizontal axis. In this case, configurations deemed easy (i.e. where the target distribution is \emph{inside} the source distribution) appear first, then configurations of moderate difficulty (i.e. \emph{partial} overlap) appear next, and finally difficult configurations (i.e. target is \emph{outside} the source) appear near the theoretical Bernoulli limit. The Kullback-Leibler divergence and the Jaccard distance fail to summarize the shift parameters into a one-dimensional visualization, and are therefore omitted in the next plots.
The two models behave similarly in terms of generalization error for the given dataset size (i.e. $100\times 100$ pixels), and can therefore be aggregated into a single scatter plot to increase the confidence in the observed trends.
We plot the CV, BCV and DRV estimates of generalization error versus covariate shift (i.e. novelty factor) in the top row of \autoref{fig:gaussian-plot2}, and color the points according to their correlation length. We omit a few DRV estimates that suffered from numerical instability in challenging \emph{partial} or \emph{outside} configurations, and show the box plot of error estimates in the bottom row of the figure.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figs/gaussian-plot2}
\caption{Estimates of generalization error for various shifts (i.e. novelty factor) and various correlation lengths. The box plots for the \emph{inside} configuration illustrate how the estimators behave differently for increasing correlation lengths.}
\label{fig:gaussian-plot2}
\end{figure}
First, we emphasize that the CV and BCV estimates remain constant as a function of covariate shift. This is expected given that these estimators do not make use of the target distribution. The DRV estimates increase with covariate shift as expected, but do not follow the same rate of increase of the true (or actual) generalization error obtained with Monte Carlo simulation. Second, we emphasize in the box plots for the \emph{inside} configuration that the correlation length affects the estimators differently. The CV estimator becomes more optimistic with increasing correlation length, whereas the BCV estimator becomes less optimistic, a result that is also expected from prior art. Additionally, the interquartile range of the BCV estimator increases with correlation length. It is not clear from the box plots that a trend exists for the DRV estimator. The actual generalization error behaves erratically in the presence of large correlation lengths as indicated by the scatter and box plots.
In order to better visualize the trends in the estimates, we smooth the scatter plots with locally weighted regression per correlation length in the top row of \autoref{fig:gaussian-plot3}, and show in the bottom row of the figure the Q-Q plots of the different estimates against the actual generalization error for the \emph{inside} configuration where all estimators are supposed to perform well.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figs/gaussian-plot3}
\caption{Trends of generalization error for different estimators (top) and Q-Q plots of estimated versus actual error for \emph{inside} configuration (bottom).}
\label{fig:gaussian-plot3}
\end{figure}
From the figure, there exists a gap between the DRV estimates and the actual generalization error of the models for all covariate shifts. This gap is expected given that the target distribution may be very different from the source distribution, particularly in \emph{partial} or \emph{outside} shift configurations. On the other hand, the gap also seems to be affected by the correlation length, and is largest with $20$ pixels of correlation. Additionally, we emphasize in the Q-Q plots that the BCV estimates are biased due to the systematic selection of folds. The BCV estimates are less optimistic than the CV estimates, which is a desired property in practice; however, there is no guarantee that the former will approximate the actual generalization error of the models well.
\subsection{New Zealand dataset}\label{sec:newzealand}
Unlike the previous experiment with synthetic Gaussian process data and known generalization error, this experiment consists of applying the CV, BCV and DRV estimators to a real dataset of well logs prepared in-house \citep{W.S.RdeCarvalho2020}. We quickly describe the dataset, introduce the related geostatistical learning problems, and use error estimates to rank learning models. Finally, we compare these ranks with an ideal rank obtained with additional label information that is not available during the learning process.
The dataset consists of $407$ wells in the Taranaki basin, including the main geophysical logs and reported geological formations. The basin comprises an area of about $330{,}000\,\mathrm{km}^2$, located broadly onshore and offshore the New Zealand west coast (see \autoref{fig:newzealand}). Well trajectories are georeferenced in UTM coordinates (X and Y) and true vertical depth (Z).
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{figs/newzealand-map}
\caption{Curated dataset with $407$ wells in the Taranaki basin, New Zealand. The basin comprises an area of about $330{,}000\,\mathrm{km}^2$, located broadly onshore and offshore the New Zealand west coast.}
\label{fig:newzealand}
\end{figure}
We split the wells into onshore and offshore locations in order to introduce a geostatistical learning problem with covariate shift. The problem consists of predicting the rock formation from well logs offshore after learning a model with well logs and reported (i.e. manually labeled) formations onshore. The well logs considered are gamma ray (GR), spontaneous potential (SP), density (DENS), compressional sonic (DTC) and neutron porosity (NEUT). We eliminate locations with missing values for these logs and investigate a balanced dataset with the two most frequent formations---Urenui and Manganui. We normalize the logs and illustrate the covariate shift property by comparing the scatter plots of onshore and offshore locations in \autoref{fig:scatter}. Additionally, we define a second geostatistical learning problem without covariate shift. In this case, we join all locations filtered in the previous problem and sample two new sets of locations with sizes respecting the same source-to-target proportion (e.g. $300000 : 50000$).
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figs/newzealand-shift}
\caption{Distribution of main geophysical logs onshore (gray) and offshore (purple) centered by the mean and divided by the standard deviation. Visible covariate shift in the scatter and contour plots.}
\label{fig:scatter}
\end{figure}
We set the hyperparameters of the error estimators based on variography and according to available computational resources. In particular, we set blocks for the BCV estimators with sides $10000\times 10000\times 500$ that are much greater than the vertical and horizontal correlation lengths estimated from empirical variograms. We obtain the corresponding number of folds $k=99$ for the CV estimator by partitioning the bounding box of onshore wells into blocks with the given sides. Similarly to the previous experiment with synthetic Gaussian process data, we set the kernel width in DRV to $\sigma=2$ given that the well logs were normalized to have unit variance. Finally, we select a list of learning models to rank including Ridge classification (Ridge), logistic regression (Logistic), k-nearest neighbors (KNeighbors), naive Bayes (GaussianNB), linear discriminant analysis (BayesianLDA), perceptron (Perceptron), decision tree (DecisionTree), and a dummy model that reproduces the marginal distribution of formations in the source domain (Dummy).
In \autoref{tab:onoff}, we report the results for the onshore-to-offshore problem. In the upper part of the table we compare side-by-side the error estimates of the different methods. We highlight the closest estimates to the target error in {\color{NavyBlue} \textbf{blue}} color, and the most distant in {\color{Maroon} \textbf{red}} color. We emphasize that the target error is the error of the model in one single realization of the process, and is \emph{not} the generalization error averaged over multiple spatial realizations. In spite of this important distinction, we still think it is valuable to compare it with the estimates of generalization error given by CV, BCV and DRV since these methods were all derived under pointwise learning assumptions, and are therefore smooth averages over multiple points exactly like the error estimated from the single realization of the target. In the bottom part of the table, we report the model ranks derived from the error estimates as well as the ideal rank derived from the target error.
\begin{table}[h]
\centering
\input{tables/newzealand-onoff.tex}
\input{tables/newzealand-rank-onoff.tex}
\caption{Estimates of generalization error with different estimators for the onshore-to-offshore problem. The CV estimator produces estimates that are the most distant to the actual target error due to covariate shift and spatial correlation. None of the estimators is capable of ranking the models correctly. They all select complex models with low generalization ability.}
\label{tab:onoff}
\end{table}
Among the three estimators of generalization error, the CV estimator produces estimates that are the most distant from the target error, with a tendency to underestimate the error. The BCV estimator produces estimates that are higher than the CV estimates, and consequently closer to the target error in this case. The DRV estimator produces the closest estimates for most models; however, like the CV estimator, it fails to approximate the error for models like KNeighbors and DecisionTree that are over-fitted to the source distribution. The three estimators fail to rank the models under covariate shift and spatial correlation. Over-fitted models with low generalization ability are incorrectly ranked at the top of the list, and the best models, which are simple ``linear'' models, appear at the bottom. We compare these results with the results obtained for the problem without covariate shift in \autoref{tab:noshift}.
\begin{table}[h]
\centering
\input{tables/newzealand-noshift.tex}
\input{tables/newzealand-rank-noshift.tex}
\caption{Estimates of generalization error with different estimators for the problem without covariate shift. The BCV estimator produces estimates that are the most distant to the actual target error due to bias from its systematic selection of folds. All estimators are capable of ranking the models in the absence of covariate shift.}
\label{tab:noshift}
\end{table}
From \autoref{tab:noshift}, the CV estimator produces estimates that are the closest to the target error. The BCV estimator produces estimates that are higher than the CV estimates as before, which this time means that the BCV estimates are the most distant from the target error. The DRV estimator produces estimates that are neither the closest nor the most distant from the target error. The three estimators successfully rank the models from simple linear models at the bottom of the list to more complex learning models at the top. Unlike the previous problem with covariate shift, this time complex models like KNeighbors and DecisionTree show high generalization ability.
\section{Conclusions}\label{sec:concls}
In this work, we introduce \emph{geostatistical (transfer) learning}, and demonstrate how most prior art in statistical learning with geospatial data fits into a category we term \emph{pointwise learning}. We define geostatistical generalization error and demonstrate how existing estimators from the spatial statistics literature such as block cross-validation are derived for that specific category of learning, and are therefore unable to account for general spatial errors.
We propose experiments with spatial data to compare estimators of generalization error, and illustrate how these estimators fail to rank models under covariate shift and spatial correlation. Based on the results of these experiments, we share a few remarks related to the choice of estimators in practice:
\begin{itemize}
\item The apparent quality of the BCV estimator is contradicted by the Q-Q plots of \autoref{fig:gaussian-plot3} and by \autoref{tab:noshift}. The systematic bias produced by the blocking mechanism only guarantees that the error estimates are higher than the CV estimates. When the CV estimates are good (i.e. no covariate shift), the BCV estimates are unnecessarily pessimistic.
\item The CV estimator is not adequate for geostatistical learning problems that show various forms of covariate shift. Situations without covariate shift are rare in geoscientific settings, and since the DRV estimator works reasonably well for both situations (i.e. with and without shift), it is recommended instead.
\item Nevertheless, both the CV and DRV estimators suffer from a serious issue with over-fitted models in which case they largely underestimate the generalization error. For risk-averse applications where one needs to be careful about the generalization error of the model, the BCV estimator can provide more conservative results.
\item None of the three estimators were capable of ranking models correctly under covariate shift and spatial correlation. This is an indication that one needs to be skeptical about interpreting similar rankings available in the literature.
\end{itemize}
Finally, we believe that this work can motivate methodological advances in learning from geospatial data, including research on new estimators of geostatistical generalization error as opposed to pointwise generalization error, and more explicit treatments of spatial coordinates of samples in learning models.
\section*{Computer code availability}
All concepts and methods developed in this paper are made available in the GeoStats.jl project \citep{Hoffimann2018}. The project is hosted on GitHub under the ISC\footnote{\url{https://opensource.org/licenses/ISC}} open-source license: \url{https://github.com/JuliaEarth/GeoStats.jl}.
Experiments of this specific work can be reproduced with the following scripts: \url{https://github.com/IBM/geostats-gen-error}.
\section*{References}
\section{Introduction}
\label{intro}
Predicting lexical complexity can enable systems to better guide a user to an appropriate text, or tailor it to their needs. The task of automatically identifying which words are likely to be considered complex by a given target population is known as Complex Word Identification (CWI) and it constitutes an important step in many lexical simplification pipelines \cite{paetzold2017survey}.
The topic has gained significant attention in the last few years, particularly for English. A number of studies have been published on predicting complexity of both single words and multi-word expressions (MWEs) including two recent competitions organized on the topic, CWI 2016 and CWI 2018, discussed in detail in Section \ref{sec:survey}. The first shared task on CWI was organized at SemEval in 2016 \cite{paetzold-specia:2016:SemEval1} providing participants with an English dataset in which words in context were annotated as non-complex (0) or complex (1) by a pool of human annotators. The goal was to predict this binary value for the target words in the test set. A post-competition analysis of the CWI 2016 results \cite{zampieri-EtAl:2017:NLPTEA} examined the performance of the participating systems and evidenced how challenging CWI 2016 was with respect to the distribution (more testing than training instances) and annotation (binary and aggregated) of its dataset.
The second edition of the CWI shared task was organized in 2018 at the BEA workshop \cite{yimam2018report}. CWI 2018 featured a multilingual (English, Spanish, German, and French) and multi-domain dataset \cite{yimam-EtAl:2017:RANLP}. Unlike in CWI 2016, predictions were evaluated not only in a binary classification setting but also in terms of probabilistic classification in which systems were asked to assign the probability of the given target word in its particular context being complex. Although CWI 2018 provided an element of regression, the continuous complexity value of each word was calculated as the proportion of annotators that found a word complex. For example, if 5 out of 10 annotators labeled a word as complex then the word was given a score of 0.5. This measure relies on an aggregation of absolute binary judgments of complexity to give a continuous value.
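This aggregation of binary judgments into a probability is simply an average over annotators; a minimal sketch:

```python
def probabilistic_complexity(judgments):
    """Proportion of annotators that judged a word complex (1 = complex,
    0 = non-complex), as used in the CWI 2018 probabilistic track."""
    return sum(judgments) / len(judgments)
```

So, as in the example above, 5 complex judgments out of 10 yield a score of $0.5$.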
The obvious limitation of annotating lexical complexity using binary judgments has been recently addressed by the CompLex dataset \cite{shardlow-etal-2020-complex}, which is discussed in depth in Section \ref{sec:spec_cwi}. CompLex is a multi-domain English dataset annotated with a 5-point Likert scale (1-5) corresponding to the annotators' comprehension of and familiarity with the words, in which 1 represents {\em very easy} and 5 represents {\em very difficult}. The CompLex dataset is being used as the official dataset of the ongoing SemEval-2021 Task 1: Lexical Complexity Prediction (LCP). The goal of LCP 2021 is to predict this complexity score for each target word in context in the test set.
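A continuous score in $[0,1]$ can then be obtained from the Likert annotations by averaging and rescaling (a sketch under the assumption of a linear mapping; the shared task description defines the exact aggregation protocol):

```python
def likert_to_score(annotations):
    """Map 5-point Likert annotations (1 = very easy, 5 = very difficult)
    of a target word to a complexity score in [0, 1].
    Assumed linear mapping, for illustration only."""
    mean = sum(annotations) / len(annotations)
    return (mean - 1) / 4
```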
In this paper we investigate properties of multiple annotated English lexical complexity datasets such as the aforementioned CWI datasets and others from the literature \cite{maddela-xu-2018-word}. We investigate the types of features that make words complex. We analyse the shortcomings of the previous CWI datasets and use this to motivate a specification for a new type of CWI dataset, focussing not on complex-word identification (CWI), but instead on lexical complexity prediction (LCP), that is CWI in a continuous-label setting. We further develop a dataset by adding additional annotations to the existing CompLex 1.0, creating our new dataset, CompLex 2.0, and use it to provide experiments into the nature of lexical complexity.
The main contributions of this paper are the following:
\begin{itemize}
\item A concise yet comprehensive survey on the two editions of the CWI shared tasks organized in 2016 and 2018;
\item An investigation into the types of features that correlate with lexical complexity.
\item A detailed analysis of the CWI--2016 \cite{paetzold-specia:2016:SemEval1}, CWI--2018 \cite{yimam2018report} and Madella--2018 \cite{maddela-xu-2018-word} datasets, highlighting issues with the annotation protocols that have been used;
\item A specification for a new annotation protocol for the CWI task;
\item An implementation of our specification, describing the annotation of a new dataset for CWI (CompLex 1.0 and 2.0);
\item Experiments comparing the features affecting lexical complexity in our dataset with those of other datasets;
\item Experiments using our dataset, demonstrating the effects of genre on CWI.
\end{itemize}
\noindent The remainder of this paper is organized as follows. Section \ref{sec:survey} provides an overview of the previous CWI shared tasks. Section \ref{sec:analysis_of_features} provides a preliminary investigation into the types of features that correlate with complexity labels in previous CWI datasets. Section \ref{sec:spec_cwi} firstly discusses the datasets that have previously been used for CWI, highlighting issues in their annotation protocols in Section \ref{sec:building_on}, and then proposes a new protocol for constructing CWI datasets in Section \ref{sec:specification}. Section \ref{sec:complex2} reports on the construction of a new dataset following the specification previously laid out. Section \ref{sec:analysis_of_existing} compares the annotations in our new dataset to those of previous datasets by developing a categorical annotation scheme. Section \ref{sec:complex_experiments} shows further experiments demonstrating how our new corpus can be used to investigate the nature of lexical complexity. Finally, a discussion of our main thesis and conclusions of our work are presented in Sections \ref{sec:discussion} and \ref{sec:conclusion} respectively.
\section{An Overview of the CWI 2016 and 2018 Shared Tasks}
\label{sec:survey}
There have been various studies which have both created datasets and explored computational models for CWI, particularly focussing on English texts \cite{shardlow13,shardlow2013comparison,gooding-kochmar-2019-complex,finnimore-etal-2019-strong}. These studies have addressed CWI as a stand-alone task or as part of lexical simplification pipelines. The increased interest of the research community in this topic was the primary motivation for the organization of the two editions of the aforementioned CWI shared task.
In this section we provide an overview of the two editions of the CWI shared task, namely CWI--2016 organized at SemEval 2016 \cite{paetzold-specia:2016:SemEval1} and CWI--2018 organized at the BEA workshop in 2018 \cite{yimam2018report}. We describe the task setup, present the datasets, and briefly discuss the approaches submitted by participants of the two editions of the competition. We also present the approaches and the features used by each system. Finally, we analyze the results obtained by the participants and the main challenges of each edition of the CWI Shared Task.
\subsection{CWI--2016}
\label{sec:CWI2016}
The first shared task on CWI was organized as Task 11 at the International Workshop on Semantic Evaluation (SemEval) in 2016.\footnote{\url{http://alt.qcri.org/semeval2016/task11/}} CWI--2016 provided participants with a manually annotated dataset in which words in context were labeled as complex or non-complex, where complexity is interpreted as whether a word was understood or not by a pool of 400 non-native speakers of English. CWI--2016 was therefore modelled as a binary text classification task at the word level. Participants were required to build systems to predict lexical complexity in sentences of the unlabeled test set and assign label 0 to non-complex words and 1 to complex ones. Two examples from the CWI--2016 dataset are shown below:
\enumsentence{A {\bf frenulum} is a small fold of tissue that secures or {\bf restricts} the {\bf motion} of a mobile organ in the body.}
\enumsentence{The name `kangaroo mouse' refers to the species’ {\bf extraordinary} jumping ability, as well as its habit of {\bf bipedal} {\bf locomotion}.}
\noindent The words in bold: {\em frenulum, restricts,} and {\em motion} in Example 1, and {\em extraordinary, bipedal,} and {\em locomotion} in Example 2 were annotated by at least one of the annotators as complex and thus they were labeled as such in the training set. Adjacent words like {\em bipedal locomotion} do not represent multi-word expressions (MWEs) as they were annotated in isolation because the task set-up of CWI--2016 only considered single word annotations. Whilst MWEs were not considered in CWI--2016, they were studied in CWI--2018 (see Section \ref{sec:CWI2018}).
The dataset provided by the organizers of CWI--2016 contained a training set of 2,237 target words in 200 sentences. The training set was annotated by 20 annotators and a word was considered complex in the dataset if at least one of the 20 annotators labeled it as such. The test set included 88,221 target words in 9,000 sentences and each word was annotated by only one annotator. Therefore, the ground truth label for each word in the test set was assigned based on a single complexity judgement. According to the organisers of CWI--2016, this setup was devised to imitate a realistic scenario where the goal was to predict the individual needs of a speaker based on the needs of the target group \cite{paetzold-specia:2016:SemEval1}. Finally, the data included in the CWI--2016 dataset comes from various sources such as the CW Corpus \cite{shardlow2013comparison}, LexMTurk Corpus \cite{horn14}, and Simple Wikipedia \cite{kauchak13}.
CWI--2016 attracted a large number of participants. A total of 21 teams submitted 42 systems to the competition. A wide range of features such as word embeddings, word and character n-grams, word frequency, Zipfian frequency distribution, word length, morphological, syntactic, semantic, and psycholinguistic features were used by participants. A number of different approaches to classification were tested, ranging from traditional machine learning classifiers such as support vector machines (SVM), decision trees, random forest, and maximum entropy classifiers to deep learning classifiers, such as recurrent neural networks. In Table \ref{tab:approaches}, we list the approaches submitted to CWI--2016 by the 19 teams who wrote system description papers presented at SemEval.
\begin{table*}[!ht]
\centering
\scalebox{0.95}{
\begin{tabular}{lp{3.2cm}p{4.8cm}c}
\hline
\bf Team & \bf Classifiers & \bf Features & \bf Paper \\ \hline
\bf AI-KU & SVM & word embeddings of the target and surrounding words & \cite{kuru:2016:SemEval} \\
\bf Amrita-CEN & SVM & word embeddings and various semantic and morphological features & \cite{sp-kumar-kp:2016:SemEval} \\
\bf BHASHA & SVM, Decision Tree & lexical and morphological features & \cite{choubey-pateria:2016:SemEval} \\
\bf ClacEDLK & Random Forests & semantic, morphological, and psycholinguistic features & \cite{davoodi-kosseim:2016:SemEval}\\
\bf CoastalCPH & Neural Network, Logistic Regression & word frequencies and word embeddings & \cite{bingel-schluter-martinezalonso:2016:SemEval} \\
\bf HMC & Decision Tree & lexical, semantic, syntactic and psycholinguistic features & \cite{quijada-medero:2016:SemEval} \\
\bf IIIT & Nearest Centroid & semantic and morphological features & \cite{palakurthi-mamidi:2016:SemEval} \\
\bf JUNLP & Random Forest, Naive Bayes & semantic, lexicon-based, morphological and syntactic features & \cite{mukherjee-EtAl:2016:SemEval} \\
\bf LTG & Decision Tree & n-grams and word length & \cite{malmasi-dras-zampieri:2016:SemEval} \\
\bf MACSAAR & Random Forest, SVM & Zipfian frequency distribution, word length & \cite{zampieri-tan-vangenabith:2016:SemEval} \\
\bf MAZA & Meta-classifier & n-grams, word probability, word length & \cite{malmasi-zampieri:2016:SemEval} \\
\bf Melbourne & Weighted Random Forests & lexical and semantic features & \cite{brooke-uitdenbogerd-baldwin:2016:SemEval} \\
\bf PLUJAGH & Threshold-based methods & features extracted from Simple Wikipedia & \cite{wrobel:2016:SemEval} \\
\bf Pomona & Threshold-based methods & word frequencies & \cite{kauchak:2016:SemEval} \\
\bf Sensible & Ensemble Recurrent Neural Networks & word embeddings & \cite{nat:2016:SemEval} \\
\bf SV000gg & System voting with threshold & morphological, lexical, and semantic features & \cite{paetzold-specia:2016:SemEval2} \\
\bf TALN & Random Forest & lexical, morphological, semantic, and syntactic features & \cite{ronzano-EtAl:2016:SemEval} \\
\bf USAAR & Bayesian Ridge classifiers & hand-crafted word sense entropy metric and language model perplexity & \cite{martinezmartinez-tan:2016:SemEval} \\
\bf UWB & Maximum Entropy & word occurrence counts on Wikipedia documents & \cite{konkol:2016:SemEval} \\
\hline
\end{tabular}
}
\caption{Systems submitted to the CWI--2016 in alphabetical order. We include team names and a brief description of each system including features and classifiers used. A reference to each system description paper is provided for more information.}
\label{tab:approaches}
\end{table*}
In terms of performance the top-3 systems were team PLUJAGH \cite{wrobel:2016:SemEval}, LTG \cite{malmasi-dras-zampieri:2016:SemEval}, and MAZA \cite{malmasi-zampieri:2016:SemEval} which obtained 0.353, 0.312, and 0.308 F1-score respectively. The three teams used rather simple probabilistic models trained on features such as n-grams, word frequency, word length, and the presence of words in vocabulary lists extracted from Simple Wikipedia, an approach introduced by PLUJAGH. The relatively low performance obtained by all teams, including the top-3 systems, demonstrates how challenging the CWI--2016 shared task was. Both the data annotation protocol and the training/test split, where 40 times more testing data than training data is available, contributed to making CWI--2016 a difficult task.
A post-competition analysis was carried out using the output of all 42 systems submitted to CWI--2016 \cite{zampieri-EtAl:2017:NLPTEA}. The output of each system for each instance was used as a vote to build two ensemble models. The ensemble models built were plurality voting, which assigns the label with the highest number of votes to a given instance, and oracle, which assigns the correct label to an instance if at least one of the systems has predicted the ground truth label for that given instance. The plurality vote serves to better understand the performance of the systems using the same dataset, while the oracle is used to quantify the theoretical upper limit performance on the dataset \cite{kuncheva2001decision}. The study showed that the potential upper limit for the CWI--2016 dataset considering the output of the participating systems is 0.60 F1 score for the complex word class. This outcome confirms that the low performance of the systems is related to the way the data has been annotated. Finally, this study also confirmed the relationship between word length and lexical complexity annotation in this dataset, a feature used by many of the teams participating in CWI--2016 as well as in our present work.
\subsection{CWI--2018}
\label{sec:CWI2018}
Following the success of CWI--2016, the second edition, CWI--2018, was organized at the Workshop on Innovative Use of NLP for Building Educational Applications (BEA) in 2018.\footnote{\url{https://sites.google.com/view/cwisharedtask2018/}} Unlike CWI--2016 which focused only on English, CWI--2018 featured English, French, German, and Spanish datasets opening new perspectives in research in this area.
A total of four tracks were available at CWI--2018: English, German, and Spanish monolingual, that is, training and testing data was available for the same language, and French multilingual. The organizers released a French test set only, without a training set in the same language, with the goal of using the English, Spanish, and German datasets for training and projecting predictions onto the French test set. CWI--2018 featured two sub-tasks: (i) a binary classification task similar to CWI--2016 where participants were asked to label the given target word in a particular context as complex or simple; (ii) a probabilistic classification task where participants were asked to give the probability of the given target word in a particular context being complex.
In terms of data, CWI--2018 used the \emph{CWIG3G2} dataset \cite{yimam-EtAl:2017:RANLP} in English, German, and Spanish. The English dataset contains texts from three domains, \emph{News}, \emph{WikiNews}, and \emph{Wikipedia} articles, and the evaluation was carried out per domain. To allow cross-lingual learning, a dataset for French was collected using the same methodology as the one used for the CWIG3G2 corpus. Another important difference between CWI--2016 and CWI--2018 is that \emph{CWIG3G2} featured annotation of both single words and MWEs while the dataset used in CWI--2016 only considered single words.
In terms of participation, CWI--2018 attracted 12 teams in different task/track combinations. In Table \ref{tab:approaches2018}, we list the approaches submitted to the English binary classification single word track by the 10 teams who wrote system description papers presented at BEA. Most teams tried multiple approaches and here we describe the one that achieved the best performance according to their system description paper.
\begin{table*}[!ht]
\centering
\scalebox{0.95}{
\begin{tabular}{lp{3cm}p{4.5cm}c}
\hline
\bf Team & \bf Classifiers & \bf Features & \bf Paper \\ \hline
\bf Camb & Adaboost & N-grams, WordNet features, POS tags, dependency parsing relations, psycholinguistic features. & \cite{syspaper7} \\
\bf CFILT\_IITB & Voting ensemble & Word length, syllable counts, vowel counts, WordNet-based features. & \cite{syspaper10} \\
\bf hu-berlin & Naive Bayes & Character n-grams & \cite{syspaper3} \\
\bf ITEC & LSTM & Word length, word and character embeddings, frequency count, psycholinguistic features. & \cite{syspaper6}\\
\bf LaSTUS/TALN & SVM, Random Forest & Word length, word embeddings, semantic and contextual features. & \cite{syspaper9} \\
\bf NILC & XGBoost & N-grams, word length, number of syllables, WordNet-based features. & \cite{syspaper4} \\
\bf NLP-CIC & Tree Ensembles and CNNs & Word frequency, syntactic and lexical features, psycholinguistic features, and word embeddings. & \cite{syspaper11} \\
\bf SB@GU & Extra Trees & Word length, number of syllables, n-grams, frequency distribution. & \cite{syspaper2} \\
\bf TMU & Random Forest & Word length, word frequency, probability features derived from corpora. &\cite{syspaper5} \\
\bf UnibucKernel & Kernel-based learning with SVMs. & Character n-grams, semantic features, and word embeddings. & \cite{syspaper1} \\
\hline
\end{tabular}
}
\caption{Systems submitted to the CWI--2018 English binary classification single word track. We include team names and a brief description of each system including features and classifiers used. A reference to each system description paper is provided for more information.}
\label{tab:approaches2018}
\end{table*}
For the English binary classification single word track, the organizers reported the performance of all teams per domain. Team CAMB obtained the best performance for the three domains: 0.8736 F1-score on News, 0.8400 F1-score on WikiNews, and 0.8115 F1-score on Wikipedia. We observed that for all teams the performance on the News domain was generally substantially higher than that obtained on the two other domains. Several teams used the opportunity to compare multiple approaches for this task and many of them reported that traditional machine learning classifiers obtained higher performance than deep neural networks \cite{syspaper4,syspaper2}.
\section{Analysis of Features of Complex Words}\label{sec:analysis_of_features}
Upon analysing the systems and datasets used in CWI--2016 and CWI--2018, we noticed several intuitive explanations as to why a word may or may not be judged as complex:
\begin{itemize}
\item The word is archaic.
\item The word is a borrowing from another language or refers to a concept that is atypical in the culture of the reader.
\item The word is uncommon and many people are not commonly exposed to it.
\item The word refers to a very specialised concept.
\item Although the word is common, it is being used with an uncommon meaning in the given context.
\end{itemize}
\noindent These possible characteristics motivated us to represent input words as sets of indicative linguistic features for the purpose of lexical complexity prediction. We used 378 features to represent words in our data set. These include psycholinguistic features derived from the MRC database \cite{wilson-1988}, word embeddings, and several other features with the potential to capture our intuitions about lexical complexity.
The psycholinguistic features of words were evaluated using the API to the MRC database. Many of the resources included in the database were built before 1998, were derived through rigorous psycholinguistic testing, and as a result are of restricted size (offering relatively poor coverage of current English vocabulary). For this reason, in addition to specifying the values of these features directly from the database, we included additional binary features to indicate whether or not the word occurs in the MRC database.
We used information about whether or not the Wikipedia entry for the word includes an infobox element to indicate the degree of specialisation of the word. We observed that entries for specialised vocabulary (e.g. \emph{Gharial}) frequently contain infobox elements of various types (e.g. \emph{biota}). We extracted features encoding information about the occurrence and type of infobox element as an indicator of the level of specialisation of the word.
The full feature set is displayed in Tables \ref{table:wordFeaturesAJ} and \ref{table:wordFeaturesKT}. Given that it encodes well-motivated psycholinguistic information and includes features which capture our intuitions about lexical complexity, we consider this feature set to be suitable for use in the derivation of prediction models. We processed the human-annotated CWI--2016 and CWI--2018 datasets to represent words as feature vectors using the features in these tables.
\begin{table}[ht]
\begin{small}
\begin{centering}
\begin{tabular}{llll}
\hline
\bf ID & \textbf{Feature} & \textbf{Type} & \textbf{Definition} \\ \hline
A & \parbox{2cm}{Frequent} & Binary & \parbox[l]{6cm}{One of the $10\,000$ most frequent words listed in Wiktionary} \\
& & & \parbox[l]{6cm}{} \\
B & Archaic & Binary & \parbox[l]{6cm}{Listed in an archaic word list.\tablefootnote{Available at \url{https://archive.org/stream/dictionaryofarch028421mbp/dictionaryofarch028421mbp_djvu.txt}. Last accessed 26th February 2019.}} \\
& & & \parbox[l]{6cm}{} \\
C & \parbox{2cm}{Length (normalised)} & Numerical & \parbox[l]{6cm}{Length of the word divided by 50.\tablefootnote{Longest word in English being 45 characters (pneumonoultramicroscopicsilicovolcanoconiosis).}} \\
& & & \parbox[l]{6cm}{} \\
D & Plurality & Binary & \parbox[l]{6cm}{5 features indicating whether the word is plural, has no plural form, is a singular form, is both singular and plural form, or is plural but acts singular.} \\
& & & \parbox[l]{6cm}{} \\
E & Familiarity & \parbox[r]{1.5cm}{Numerical (100-700)} & \parbox[l]{6cm}{Familiarity score, derived by merging three sets of norms: Paivio (unpublished; these are an expansion of the norms of Paivio, Yuille, and Madigan \cite{paivio-1968}), Toglia and Battig \cite{toglia-1978}, and Gilhooly and Logie \cite{gilhooly-1980}). See Wilson \cite{wilson-1988} for more details on these metrics.} \\
& & & \parbox[l]{6cm}{} \\
F & Concreteness & \parbox[r]{1.5cm}{Numerical (100-700)} & \parbox[l]{6cm}{Concreteness score, listed in the MRC Database} \\
& & & \parbox[l]{6cm}{} \\
G & Imageability & \parbox[r]{1.5cm}{Numerical (100-700)} & \parbox[l]{6cm}{Imageability score of the word, listed in the MRC Database} \\
& & & \parbox[l]{6cm}{} \\
H & Brown & Numerical & \parbox[l]{6cm}{Frequency count of the word in the London-Lund Corpus of English Conversation \cite{svartvik-1980}} \\
& & & \parbox[l]{6cm}{} \\
I & KF$_{FREQ}$ & Numerical & \parbox[l]{6cm}{Frequency count of the word in the Ku\v{c}era and Francis \cite{kucera-1967} frequency list, derived from the Brown corpus.} \\
& & & \parbox[l]{6cm}{} \\
J & TL$_{FREQ}$ & Numerical & \parbox[l]{6cm}{Frequency listed in Thorndike and Lorge’s \cite{thorndike-1944} L count, which combines the counts of morphological variants of the word in a reference corpus.} \\
\hline
\end{tabular}
\end{centering}
\end{small}
\caption{Features (A-J) used to represent words.} \label{table:wordFeaturesAJ}
\end{table}
\begin{table}[ht]
\begin{small}
\begin{centering}
\begin{tabular}{lcll}
\hline
\bf ID & \textbf{Feature} & \textbf{Type} & \textbf{Definition} \\ \hline
& & & \parbox[l]{6cm}{} \\
K & MEANC & \parbox[r]{1.5cm}{Numerical (100-700)} & \parbox[l]{6cm}{Meaningfulness rating of the word as provided by the Colorado norms of Toglia and Battig \cite{toglia-1978}} \\
& & & \parbox[l]{6cm}{} \\
L & MEANP & \parbox[r]{1.5cm}{Numerical (100-700)} & \parbox[l]{6cm}{Meaningfulness rating of the word as provided by the norms of Paivio (unpublished)} \\
& & & \parbox[l]{6cm}{} \\
M & AOA & \parbox[r]{1.5cm}{Numerical (100-700)} & \parbox[l]{6cm}{Age of acquisition, as provided by the norms of Gilhooly and Logie \cite{gilhooly-1980}.} \\
& & & \parbox[l]{6cm}{} \\
N & TQ2$_Q$ & Binary & \parbox[l]{6cm}{Morphological variant of another word in the dictionary.} \\
& & & \parbox[l]{6cm}{} \\
O & TQ2$_2$ & Binary & \parbox[l]{6cm}{Ends in the letter R and this R is not pronounced except when the next word begins with a vowel.} \\
& & & \parbox[l]{6cm}{} \\
P & WTYPE & Binary & \parbox[l]{6cm}{9 features indicating the word type as listed in the Shorter Oxford English Dictionary or Webster’s New International Dictionary. Word types are: adverb, conjunction, interjection, adjective, noun, past participle, pronoun, verb, or other.} \\
& & & \parbox[l]{6cm}{} \\
Q & STATUS & Binary & \parbox[l]{6cm}{7 features indicating the word status as listed in the Dolby database \cite{dolby-1963}. Word statuses are: archaic, alien, obsolete, colloquial, rare, and standard} \\
& & & \parbox[l]{6cm}{} \\
R & STRESS & Binary & \parbox[l]{6cm}{14 features indicating the stress pattern of the word when pronounced. Where 2 is a strongly stressed syllable, 1 is a medium stressed syllable, and 0 is an unstressed syllable, the 14 stress patterns are: 0, 01020, 010200, 02, 020, 0200, 10020, 102, 1020, 10200, 20, 200, 2000, and 22.} \\
& & & \parbox[l]{6cm}{} \\
S & INFOBOX & Binary & \parbox[l]{6cm}{13 features indicating the type of infobox present in the English Wikipedia page for the word. Infobox types are: AMBIGUOUS, BIOGRAPHY\_VCARD, BIOTA, BORDERED, COLLAPSIBLE\_ AUTOCOLLAPSE, DEFAULT, GEOGRAPHY\_VCARD, HPRODUCT, NONE, VCARD, VCARD\_PLAINLIST, VEVENT, and VEVENT\_HAUDIO.} \\
& & & \parbox[l]{6cm}{} \\
T & Word Embeddings & Numerical & \parbox[l]{6cm}{300 features are the vector representation of the word derived using GloVe \cite{pennington2014glove}} \\
\hline
\end{tabular}
\end{centering}
\end{small}
\caption{Features (K-T) used to represent words.} \label{table:wordFeaturesKT}
\end{table}
Considered individually, the great majority of features/feature sets listed in Table \ref{table:wordFeaturesAJ} have no linear relationship with the averaged human judgement of word complexity in the CWI 2016 and CWI 2018 datasets. The only exceptions are word length (feature group C) and the word's frequency count in the London-Lund corpus (feature group H). As the distributions of these two features are non-normal, we measured correlation with the averaged complexity ratings of words using Spearman's rho. We found that normalised word length has a low positive correlation ($\rho=0.4208$) while the frequency of the word in the London-Lund corpus has a low negative correlation with word complexity ($\rho=-0.3640$).
There is no linear relationship between the values of features/feature sets listed in rows K-S of Table \ref{table:wordFeaturesKT} and the averaged values of word complexity assigned by the annotators. In our experiments, we did not investigate the strength of correlations between individual word embedding features and average complexity ratings.
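The correlation measure used above can be illustrated with a minimal pure-Python Spearman's rho (valid only when each vector has distinct values, so ranks are untied); the feature and complexity vectors below are hypothetical toy values, not drawn from the datasets.

```python
# Spearman's rho for tie-free rankings: rho = 1 - 6 * sum(d^2) / (n*(n^2-1)),
# where d is the difference between the ranks of paired observations.
def spearman_rho(xs, ys):
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0] * len(vs)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

lengths = [4, 7, 11, 5, 9]               # hypothetical word lengths
complexity = [0.1, 0.4, 0.9, 0.2, 0.6]   # hypothetical averaged ratings
rho = spearman_rho(lengths, complexity)  # monotone pairing gives rho = 1.0
```

In practice a library routine such as `scipy.stats.spearmanr` handles ties and significance testing; the sketch above only shows the rank-based idea behind the reported coefficients.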
This is a surprising result: we would expect more of our features to correlate with complexity, especially as they are known indicators of how difficult a word may be. The lack of correlation is likely a consequence of the annotation protocols used in the datasets we analysed. We include this analysis here to show the lack of correlation between sensible features and those datasets. In the next section we discuss the deficiencies of these datasets and propose our specification for an improved CWI dataset.
\section{Specification for CWI Data Protocol}\label{sec:spec_cwi}
In the previous section, we analysed the features of complexity that are appropriate for existing CWI datasets. In this section, we first highlight some of the design decisions that were taken in the creation of prior CWI datasets. We then propose a specification, based on our prior analysis, for a new CWI dataset that improves on prior work. Our specification is designed to enable CWI research in areas that have not previously been undertaken. As well as providing a specification, we also provide a list of features for future datasets to implement in Table \ref{tab:cwi_spec}.
\subsection{Building on Previous Datasets}\label{sec:building_on}
The previous datasets for CWI have interesting assets that make them useful for the CWI task. A quick overview of these datasets is presented in Table \ref{tab:CWI_dataset_comparison}, where they are compared according to some of their basic features.
\begin{table}[ht]
\centering
\begin{tabular}{cccccc}
\hline
Dataset & Binary & Probabilistic & Continuous & Context & Multi-Genre\\\hline
CWI ST 2016 & $\times$ & & & $\times$ & $\times$\\
CWI ST 2018 & $\times$ & $\times$ & & $\times$ & $\times$\\
\cite{maddela-xu-2018-word} & & & $\times$ & &\\
\hline
\end{tabular}
\caption{CWI Datasets compared according to their features.}
\label{tab:CWI_dataset_comparison}
\end{table}
The first dataset we have considered is the CWI--2016 dataset, which provides binary annotations on words in context. 9,200 sentences were selected and the annotation was performed as follows \cite{paetzold-specia:2016:SemEval1}:
\begin{quote}
\textit{
``Volunteers were instructed to annotate all words that they could not understand\dots A subset of 200 sentences was split into 20 sub-sets of 10 sentences, and each subset was annotated by a total of 20 volunteers. The remaining 9,000 sentences were split into 300 subsets of 30 sentences, each of which was annotated by a single volunteer.''
}
\end{quote}
The annotators were asked to identify any words for which they did not know the meaning. Each annotator has a different proficiency level and will therefore find different words more or less complex, giving rise to a varied dataset with different portions of the data reflecting differing complexity levels. Further, each instance in the training set was annotated by 20 annotators, whereas each instance in the test set was annotated by a single annotator. For the training set, any word which was annotated as complex by at least one annotator was marked complex (even if the other 19 annotators disagreed). This is problematic as the make-up of the training and test sets do not reflect each other, making it hard for systems to do well on this task. Binary annotation of complexity imposes an internal mental thresholding task on the annotator's judgment of complexity. An annotator's background, education, etc.\ may affect where this threshold between complex and simple terms is set. Further, it is likely that one annotator may find words difficult that another finds simple and vice-versa. Factors such as the annotator's native language, educational background, region, etc.\ all affect the type of words they are familiar with. In the case of the training data, where 20 annotators all annotated the same instance and any instance with at least one annotation is considered complex, it may be taken that the annotations represent some form of maximum complexity, i.e., that the word is above the lowest possible threshold of complexity. However, in the case of the test set, where each word is annotated by a single annotator, the annotations are harder to interpret. Each instance is personal, reflecting only a single annotator's judgment.
Moving on from the CWI--2016 dataset, the CWI--2018 dataset also provides binary annotations, which were aggregated to give a `probabilistic' measure of complexity. CWI--2018 invited participants to submit results on both the binary complexity annotation setting and the probabilistic annotation setting. To collect their data, the organisers of CWI--2018 followed a similar principle as in CWI--2016. Sentences were presented to annotators and the annotators were asked to select any words or phrases that they found to be complex. Again, this retains the same issues as before: each annotator uses their subjective judgment to determine which words are complex and which are simple, and annotators may make judgments which are accurate for themselves, but not consistent with other annotators. In the probabilistic setting, at least 20 annotations were collected from native and non-native speakers and each word was given a score indicating what proportion of annotators found that word to be complex (i.e., if 10 out of 20 annotators marked the word, then it would be given a score of 0.5). A useful property of this style of annotation is that words are seen on a continuous scale of complexity. However, the aggregation of binary annotations to give continuous annotations does not necessarily tell us about the complexity of the word itself. Instead it tells us about the annotators, and how many of them will consider a word complex. So, for example, a score of 0.5 should not be interpreted as half-complexity (or some sort of neutrality between simple and complex), but instead should be interpreted as 50\% of people considering this word complex.
The final dataset we have covered was published in 2018 by Maddela and Xu \cite{maddela-xu-2018-word}. We refer to this as Maddela--2018 for brevity.
In this dataset, 11 non-native annotators were employed to annotate a portion of 15,000 words on a 6-point Likert scale with 5--7 annotations being collected for each vocabulary item.
Words were presented without context, with the annotators inferring the sense of the word at annotation time. Two annotators may therefore have considered the word in a different sense or context.
Almost all words are polysemous and the different senses of a word are likely to have different levels of complexity, particularly in a coarse-grained sense setting (e.g., \textit{mean} average vs.\ a \textit{mean} person).
The main effect here is that the varied complexities of the multiple senses and usages of a word are conflated into a single annotation.
There is no information as to which word sense the annotators were giving the annotations for, and as such the annotations may be unreliable in cases where a word is used in an uncommon sense.
In the Likert-scale type annotation, variation in annotators' opinions is less of an issue than in the binary setting used in CWI--2016 and CWI--2018, as each annotator's judgment is aggregated on a common continuous scale.
This means that the final averaged annotation is reflective of the average complexity that a word might have in a general setting.
This makes the assumption that the annotations are normally distributed and that a mean average is valid in this case. A normality test could be used to quantify whether instances are likely to follow normal distributions; however, with only 5--7 annotations per instance, such a test may not be reliable.
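A cheap sanity check along these lines compares the mean and median of a word's annotations; with only a handful of skewed judgements the two diverge, a symptom that the mean may misrepresent the instance. The Likert values below are hypothetical, chosen only to illustrate the effect.

```python
from statistics import mean, median

# Hypothetical 5-point Likert annotations for one word: a single high
# rating skews the small sample, pulling the mean above the median.
annotations = [1, 1, 1, 2, 5]

# Flag the instance when mean and median disagree by a sizeable margin
# (the 0.5 threshold is an arbitrary illustrative choice).
skewed = abs(mean(annotations) - median(annotations)) > 0.5
```

This is only a heuristic: a formal test such as Shapiro--Wilk would be preferable, but as noted above, with 5--7 annotations per instance neither approach is statistically reliable.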
In the text above, we have mainly considered complex words. However, the complexity of multi-word expressions is a valuable addition to the CWI literature. MWEs can be considered as compositional or non-compositional. Compositional MWEs (e.g., christmas tree, notice board, golf cart, etc.) take their meaning from the constituent words in the MWE, whereas non-compositional MWEs do not (e.g., hot dog, red herring, reverse ferret). It is reasonable to assume that complexity will follow a similar pattern to semantics: the complexity of a compositional MWE will depend on its constituent words, whereas the complexity of a non-compositional MWE will be independent of its constituent words. Among the previous datasets, only the CWI--2018 dataset asked annotators to highlight phrases as well as single words, imposing a limit of 50 characters to prevent overreaching. Participants in the task were asked to also give complexity annotations for the highlighted phrases. However, this was a difficult task and the winning system reported that they found it easier to always consider MWEs as complex in the binary setting \cite{syspaper7}. The work of \cite{maddela-xu-2018-word} also considers MWEs, although they do not annotate for these, instead choosing to average embedding features for phrases and treat them in the same way as single words. As described previously, this assumes compositionality, which will not always be the case.
Little treatment has been given to the variation in complexity between different parts of speech. None of the previous datasets annotate specifically for part of speech, except for the 2016 ST data, which explicitly asks annotators to only highlight content words in the target sentences. Again, this is an important consideration, as nouns, verbs, adjectives and adverbs play different roles in a sentence, and treating them as different entities during annotation will help to better structure corpora. It is important to note that the authors of the existing corpora that span POS tags all suggest the use of POS as a feature for classification --- demonstrating its importance in CWI.
All of the corpora recognise the importance of a variety of language abilities in their construction. Native speakers of English may not notice words that are innate to their vocabulary, or may falsely assume that they ``find all words easy''. All three of the corpora that we have studied make use of non-native speakers in their annotations. The CWI--2016 dataset used crowdsourcing to get annotations from 400 non-native speakers, and the CWI--2018 dataset used native and non-native speakers (collecting at least 10 annotations from each group for every instance). The Maddela--2018 dataset used 11 non-native speakers. The use of non-native speakers for CWI annotation may lead to models trained on these datasets being useful for identifying words which are complex to non-native speakers, but such models may not be applicable to other groups.
All of the datasets are heavily biased towards unedited, informal text. The CWI--2016 dataset compiles a number of sources taken from Wikipedia and Simple Wikipedia; the CWI--2018 dataset uses Wikipedia, WikiNews and one formal set of news text sources. The Maddela--2018 dataset uses the Google Web1T (taken from a large web-crawl) to identify the most frequent 15,000 words in English and re-annotates each for complexity. Except for the news texts in the 2018 data, all of these sources are written for informal purposes and will contain spelling mistakes, idioms, etc. There has been little prior work exploring cross-genre learning for CWI; however, it is unlikely that models trained on such informal text will be appropriate for identifying complexity in formal texts.
\subsection{Specification}\label{sec:specification}
In the previous section, we gave a critical analysis of the existing datasets for CWI. It is evident from our experiments in Section \ref{sec:analysis_of_existing} that these datasets do not provide labels which typically correlate with features that we would expect to represent complexity. In the remainder of this section we describe the qualities of an ideal dataset for CWI. Our recommendations are summarised in Table \ref{tab:cwi_spec}. This specification is intended to give general-purpose recommendations for anyone seeking to develop a new CWI dataset.
The key issue with the shared task datasets was their treatment of complexity as a binary notion. When multiple annotators are asked to ``mark any complex word'', they will each draw on their subjective definition of complexity, and each will choose a different subset of words to annotate as complex. The resulting annotations reflect many opinions, and the data is varied. Even in aggregation, this does not tell us about the nature of the words, but instead about the annotators' perception of those words. Any future dataset should concentrate on providing continuous complexity values for the words and terms it covers. These could be given by asking annotators to mark words on a Likert scale, as done by Maddela--2018, or by looking at external measurements of the ability of people to read the words, such as lexical access time, eye tracking, etc. There are two factors to be considered when measuring word complexity. One is the perceived complexity of a word (how difficult an annotator estimates a word to be) and the other is the actual complexity of a word (how much difficulty that word presents to the reader) \cite{leroy2013user}. Clearly these are both important factors in estimating a word's complexity, and although we may expect them to be well correlated, there is no guarantee they will be aligned. Whereas perceived complexity affects how a user may prejudge a text, actual complexity determines the degree to which a reader is likely to struggle.
The only previous dataset to present continuous annotations (Maddela--2018) did so in the absence of context. Context is key to determining the usage and meaning of a word, and the same word used in different contexts can vary greatly in both semantics and complexity. Indeed, a familiar word in an unfamiliar context may be just as jarring as a rare word for a reader, who is forced to quickly update their mental lexicon with the new sense of the word they have encountered. Datasets should include context for any words that annotations are provided for. This will help systems to identify how contextual factors affect the complexity of a given instance. When presenting context, datasets may wish to either ask annotators to mark every word in a sentence according to some complexity judgment (dense annotation), or pick a target word in a context and ask only for a judgment of the complexity of that word (sparse annotation). In the dense annotation setting it is likely to be possible to get a much higher throughput of complexity annotations, as the reader need only read a sentence once to give multiple annotations; however, they are likely to be deeply influenced by the meaning of the sentence and may struggle to disassociate this from their annotation of the complex words themselves. In the sparse annotation setting, more contexts are required to give a comparable number of instances; however, the annotation given is more likely to be a direct result of the token itself, rather than the sentence. Any such sparse annotation task should be set up to ensure that an annotator gives judgments based on the word in its context (i.e., that they read and understand the context), rather than just giving a judgment based on the word, as if no context were presented.
Given that we recommend presenting the data in context, there is a strong argument for presenting multiple instances of each word. If only one instance of a word were presented in context, then it may be the case that this word had a specific usage that was not representative of its general usage. Words are polysemous \cite{Fellbaum2010}, and this is true both at the coarse-grained (tennis \textit{bat} vs. fruit \textit{bat}) and fine-grained levels (I \textit{love} you vs. I \textit{love} London). The coarse-grained level represents different meanings or etymologies, whereas the fine-grained level may represent a similar meaning but a different intensity (as in our example). The provision of multiple instances of a word allows both of these factors to be taken into account. This consideration should be held in balance with the need for a diversity of tokens. If a dataset has $N$ instances, constituting $P$ occurrences each of $R$ distinct words, then we suggest that $R \gg P$; i.e., the number of distinct words should be much larger than the number of occurrences of each word. There is more to be gained in a dataset by having a diversity of tokens than by having many annotations on each token. An interesting separate task would be to annotate many instances of one wordform for complexity and analyse how the context affects this. However, this is secondary to the task we present here of assessing a word's complexity.
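The $R \gg P$ criterion is straightforward to check programmatically. The following is a minimal sketch (the function name and toy token list are ours, purely for illustration), assuming a dataset represented as one target token per annotated instance:

```python
from collections import Counter

def diversity_stats(instances):
    """Given a list of target tokens (one per annotated instance),
    return (N, R, max_P): total instances, distinct tokens, and the
    largest number of occurrences of any single token."""
    counts = Counter(instances)
    return len(instances), len(counts), max(counts.values())

# Illustrative toy data: 6 instances, 4 distinct tokens.
tokens = ["bat", "bat", "love", "love", "cubit", "peat"]
n, r, max_p = diversity_stats(tokens)
print(n, r, max_p)  # 6 4 2
```

A dataset satisfying the recommendation would have `r` much larger than `max_p`.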
Each instance in a new CWI dataset should be viewed and annotated by multiple people, ideally from a spectrum of ability levels. Multiple annotations have been a common theme of the previous CWI datasets we have discussed, with some using as many as 20 annotators per instance. All subsets (train, dev, test) of a dataset should be annotated by the same number of annotators, or at the very least by annotators drawn from the same distribution. This ensures that all subsets of the data are comparable.
More annotations allow us to capture a wider array of viewpoints from annotators of varying ability levels. If the annotators are carefully selected to ensure they represent a mixture of ability levels, then this will lead to annotations that are representative. Consider the case where all annotators are of low ability, or of high ability: the resulting annotations may lead to all words being assigned to the most or least complex categories respectively. This may be desirable in user- or genre-specific settings, but is not desirable for general-purpose LCP. There are two potential approaches to selecting a pool of annotators and distributing annotations between them. Firstly, a researcher may choose to use a fixed number of annotators, such that each annotator views every data instance once. In this setting, each data instance receives $N$ annotations, where $N$ is the number of annotators chosen. Secondly, the annotations may be distributed across a wider pool of annotators, where given $N$ annotators, each sees a randomised subset of the data. In this setting a researcher may choose to control how many instances each annotator sees, ensuring an even distribution of annotators across the data instances. The second approach is more appropriate in a crowd-sourcing setting, where a researcher has diminished ability to control who takes on which job.
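As a sketch of the second approach, the following fragment distributes a fixed number of annotations per instance across a wider pool while keeping annotator workloads even. This is one possible balancing strategy, not a description of any platform's actual assignment mechanism; all names are illustrative:

```python
import random

def assign_annotations(num_instances, annotators, per_instance, seed=0):
    """Assign `per_instance` distinct annotators to each instance,
    always drawing from the least-loaded annotators so that the
    workload stays evenly distributed across the pool."""
    rng = random.Random(seed)
    load = {a: 0 for a in annotators}
    jobs = []  # (instance, annotator) pairs
    for inst in range(num_instances):
        # least-loaded first, shuffled to break ties randomly
        pool = sorted(annotators, key=lambda a: (load[a], rng.random()))
        for a in pool[:per_instance]:
            load[a] += 1
            jobs.append((inst, a))
    return jobs, load

jobs, load = assign_annotations(100, [f"w{i}" for i in range(20)], 10)
# 100 instances x 10 annotations / 20 annotators = 50 jobs each
```

With a pool twice the size of the per-instance requirement, the least-loaded rule keeps every annotator's job count within one of the others at all times.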
Previous CWI datasets for English have given a strong focus to non-native speakers, as discussed above. Non-native speakers have learnt English as a foreign language, and the assumption in using them for CWI research is that they will have only learnt a simple subset of English that allows them to get by in daily tasks. However, a non-native speaker may range from a new immigrant who has recently arrived in an English-speaking country to someone who has lived there for decades. We would suggest that, whilst non-native speakers should not be excluded from the CWI annotation process, they should not be relied upon either. Instead, the pool of annotators should be selected for their general ability in English, not for their mother tongue. Indeed, when selecting non-native speakers it may be worth considering a variety of mother tongues, as different languages or language families will have cognates and near-cognates with English, making it easier for non-native speakers of certain backgrounds to understand English words with roots in their mother tongue.
A valuable feature of any dataset, CWI included, is coverage of multiple genres in the source texts. Allowing for multiple genres gives more diversity in the type of text studied and allows systems that are trained on it to generalise better to unseen texts. This prevents overfitting to one text type, leading to results that are more reliable and hence more interpretable, and ultimately leads to the creation of useful models that can be applied across genres. CWI resources should name the source genres that their texts are taken from and comply with licences placed on those genres. Whilst informal, or amateur, text is in abundance (e.g., Twitter or Wikipedia), formal texts should also be considered for CWI, such as professionally written news, scientific articles, parliament proceedings, legal texts or any other such texts that are written for a professional audience. These texts provide well-structured language, which is typically targeted at a specific audience and can be difficult for those outside that audience. These texts contain a higher density of complex words and as such are useful examples of the types of text that might need interventions to improve their readability for a lay reader.
As discussed previously, MWEs are an important element of complexity, as previous studies have shown that MWEs are generally considered more complex by a user than individual words \cite{syspaper7}. Any new CWI dataset should consider incorporating MWEs, as they will certainly be useful for future CWI research. When we consider that MWEs can range from simple collocations (White House) to verbal phrases (pick up), and may span 2 or more words across parts of speech --- including phrasal MWEs (it's raining cats and dogs) --- it is clear that the number of potential MWEs to consider is much larger than the number of single tokens. How do we select appropriate MWEs to cover? There is no particular advantage to CWI in selecting one category of MWE over another, but we suggest that any dataset covering MWEs explicitly names the types of MWE that it has covered. By incorporating MWEs, a dataset may be used to investigate both the nature of complexity in those MWEs and in their constituent tokens. Strategies for identifying MWEs, as well as the different types of MWEs, are beyond the scope of this work, and we would direct the reader to the MWE literature \cite{sag02,schneider-etal-2014-comprehensive} for a more comprehensive treatment of this problem.
\begin{table}[ht]
\centering
\begin{tabular}{ccp{6cm}}
\hline
\bf ID & \bf Feature & \bf Description \\\hline
1 & Continuous annotations & Complexity labels should be on a continuous scale ranging from most to least difficult. \\
2 & Context & Tokens should be presented in their original contexts of usage.\\
3 & Multiple token instances & Each token should be included several times in a dataset.\\
4 & Multiple token annotations & Each token should receive many annotations from different annotators.\\
5 & Diverse annotators & The fluency and background of annotators should be as diverse as possible.\\
6 & Multiple genres & The text sources used to select contexts should cover diverse genres.\\
7 & Multi-word expressions & These should be considered alongside single word tokens as part of an annotation scheme.\\
\hline
\end{tabular}
\caption{A list of recommended features for future CWI dataset development.}
\label{tab:cwi_spec}
\end{table}
\section{CompLex 2.0}\label{sec:complex2}
In this section we describe a new dataset for complex word identification that we have collected. Our new dataset, dubbed `CompLex 2.0', builds on prior work (CompLex 1.0 \cite{shardlow-etal-2020-complex}), in which we collected and annotated data for complexity levels. We first describe the data collection process for CompLex 1.0, and then the annotation process that we undertook to extend this data to CompLex 2.0. CompLex 2.0 covers more instances than CompLex 1.0 and, crucially, has more annotations per instance. We present statistics on our new dataset and describe how it fits the recommendations made in our specification for new CWI datasets above. CompLex 2.0 was used as the dataset for the SemEval 2021 Shared Task on Lexical Complexity Prediction.
\subsection{Data Collection}
The first challenge in dataset creation is the collection of appropriate source texts. We followed our specification above and selected three sources that give a sufficient level of complexity. We aimed to select sources that were sufficiently different from one another to prevent trained models from overfitting to any one source text. The sources that we used are described below.
\begin{itemize}
\item \textbf{Bible:} We selected the World English Bible translation \cite{Christodouloupoulos2015}. This is a modern translation, so does not contain archaic words (thee, thou, etc.), but still contains religious language that may be complex. The inclusion of this text gives language that combines narrative and poetic text that uses language typically familiar for a reader, yet interspersed with unfamiliar named entities and terms with specific religious meanings (propitiation, atonement, etc.).
\item \textbf{Europarl:} We used the English portion of the European Parliament proceedings from Europarl \cite{koehn2005europarl}. This is a very varied corpus covering all manner of matters related to European policy. As it is speech transcription, it is often dialogical in nature, in contrast to our other two corpora. Again, the style of text is generally familiar, as it consists of transcriptions of debates. However, technical terminology relating to the topics of discussion is present, raising the difficulty level of this text for a reader.
\item \textbf{Biomedical:} We selected articles from the CRAFT corpus \cite{bada2012concept}, which are all in the biomedical domain. These present a very specialised type of language that will be unfamiliar to non-domain experts. Academic articles present a classic challenge in understanding for a reader and are typically written for a very narrow audience. We expect these texts to be particularly dense with complex words.
\end{itemize}
In addition to single words, we also selected targets containing two tokens. We used syntactic patterns to identify these MWEs, selecting for adjective-noun or noun-noun patterns. We discounted any syntactic pattern that was followed by a further noun to avoid splitting complex noun phrases (e.g., noun-noun-noun, or adjective-noun-noun). We used the StanfordCoreNLP tagger \cite{manning2014stanford} to get part-of-speech tags for each sentence and then applied our syntactic patterns to identify candidate MWEs.
Clearly this approach does not capture the full variation of MWEs: it limits each to 2 tokens and only identifies compound or described nouns. Some examples of the types of MWE that we identify with this scheme are given in Table \ref{tab:mwes}. Whilst this limits the scope of MWEs present in our corpus, it allows us to make a focused investigation of these types of MWEs. Notably, the types of MWE that we have identified are those that are the most common (compound nouns, described nouns, compositional, non-compositional and named entities). The investigation of other types of MWEs may be addressed by other, more targeted studies following our recommendations for CWI annotation.
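The two-token pattern filter described above can be sketched as follows. This is a simplified stand-in for the actual pipeline, operating on Penn-Treebank-style tags; the function name and example sentences are ours:

```python
def extract_mwes(tagged):
    """Find two-token MWE candidates in a POS-tagged sentence:
    noun-noun or adjective-noun bigrams that are NOT followed by a
    further noun (avoiding splits of longer noun phrases)."""
    def is_noun(tag):
        return tag.startswith("NN")
    def is_adj(tag):
        return tag.startswith("JJ")
    mwes = []
    for i in range(len(tagged) - 1):
        (w1, t1), (w2, t2) = tagged[i], tagged[i + 1]
        if (is_noun(t1) or is_adj(t1)) and is_noun(t2):
            # discount any candidate followed by another noun
            if i + 2 < len(tagged) and is_noun(tagged[i + 2][1]):
                continue
            mwes.append(f"{w1} {w2}")
    return mwes

sent = [("the", "DT"), ("electric", "JJ"), ("vehicle", "NN"), ("charged", "VBD")]
print(extract_mwes(sent))  # ['electric vehicle']
```

In practice the tagged input would come from the StanfordCoreNLP tagger rather than being hand-written as here.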
\begin{table}[]
\centering
\begin{tabular}{ccc}
\hline
\bf Pattern & \bf MWE & \bf Type \\\hline
NN & storage box & Compound Noun \\
JN & ready meal & Described Noun \\
JN & electric vehicle & Compositional\\
NN & hot dog & Non-compositional\\
JN & European Union & Named Entity\\
\hline
\end{tabular}
\caption{The varied types of MWEs that can be captured by our syntactic pattern matching. NN indicates a Noun-Noun pattern, whereas JN indicates an Adjective-Noun pattern.}
\label{tab:mwes}
\end{table}
For each corpus we selected words using frequency bands, ensuring that words in our corpus were distributed across the range of low to high frequency.
We selected the following eight frequency bands according to the SUBTLEX frequencies in order of least to most frequent (i.e., most to least complex): 2--4, 5--10, 11--50, 51--250, 251--500, 501--1400, 1401--3100, 3101--10000.
We excluded the rarest words (those with a frequency of only 1) as well as the most frequent (those above 10,000) in order to ensure that our instances were representative content words.
As frequency is correlated with complexity, this ensures that our final corpus has a range of high- and low-complexity targets. We chose to select 3,000 single words and 600 MWEs from each corpus to give a total of 10,800 instances. We selected a representative number of instances from each frequency band to give the desired total number of instances in each corpus. We automatically annotated each sentence with POS tags and only selected nouns as our targets, in keeping with our MWE selection strategy. We allowed a maximum of 5 instances of a token to be selected in each genre (ensuring that the contexts were different). This maximises the total number of examples of each instance, whilst still allowing some variation in the selection of tokens. There is a theoretical minimum of 600 distinct single words and 120 distinct MWEs that could occur in our corpus (each with 5 occurrences in each of the three genres). Table \ref{tab:complex2.0} shows that in practice the number of repeated instances is much lower. This is a consequence of the random selection procedure that we employed. We have included examples of the contexts and target words in Table \ref{tab:examples}.
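The band-based filtering can be sketched as a simple lookup over the band edges listed above (the helper function and its name are ours, for illustration only):

```python
# SUBTLEX frequency bands from least to most frequent, as listed above.
BANDS = [(2, 4), (5, 10), (11, 50), (51, 250),
         (251, 500), (501, 1400), (1401, 3100), (3101, 10000)]

def band_of(freq):
    """Return the index of the frequency band a word falls into, or
    None if the word is excluded (frequency 1, or above 10,000)."""
    for i, (lo, hi) in enumerate(BANDS):
        if lo <= freq <= hi:
            return i
    return None

print(band_of(1), band_of(3), band_of(10001))  # None 0 None
```

Sampling a representative number of instances from each band then yields targets spread across the frequency (and hence complexity) range.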
\begin{table*}[ht]
\centering
\begin{tabular}{lp{8cm}c}
\hline
\multicolumn{1}{c}{\textbf{Corpus}} & \multicolumn{1}{c}{\textbf{Context}} & \textbf{Complexity} \\ \hline
Bible & This was the \textbf{length} of Sarah's life. & Low \\
Biomed & [...] cell \textbf{growth} rates were reported to be 50\% lower [...] & Low \\
Europarl & Could you tell me under which rule they were enabled to extend this item to have four rather than three \textbf{debates}? & Low \\
Europarl & These agencies have gradually become very important in the \textbf{financial world}, for a variety of reasons. & Medium\\
Biomed & [...] leads to the \textbf{hallmark loss} of striatal neurons [...] & Medium \\
Bible & The \textbf{idols} of Egypt will tremble at his presence [...] & Medium \\
Bible & This is the law of the \textbf{trespass offering}. & High \\
Europarl & They do hold elections, but candidates have to be endorsed by the conservative clergy, so \textbf{dissenters} are by definition excluded.& High \\
Biomed & [..] due to a reduction in \textbf{adipose} tissue. & High \\
\hline
\end{tabular}
\caption{Examples from our corpus; the target word is highlighted in bold text. The field {\em Complexity} refers to perceived complexity.}
\label{tab:examples}
\end{table*}
\subsection{Data Labelling}
As previously mentioned, prior datasets have focused on either (a) binary complexity or (b) probabilistic complexity, neither of which gives a true representation of the complexity of a word. In our annotation, we chose to annotate each word on a 5-point Likert scale, where each point was given the following descriptor:
\begin{description}
\item[1. Very Easy:] Words which were very familiar to an annotator.
\item[2. Easy:] Words whose meaning an annotator was aware of.
\item[3. Neutral: ] A word which was neither difficult nor easy.
\item[4. Difficult:] Words whose meaning was unclear to an annotator, but which they may have been able to infer from the sentence.
\item[5. Very Difficult:] Words that an annotator had never seen before, or were very unclear.
\end{description}
We used the following key to transform the numerical labels to a 0-1 range when aggregating the annotations: $1 \rightarrow 0$, $2 \rightarrow 0.25$, $3 \rightarrow 0.5$, $4 \rightarrow 0.75$, $5 \rightarrow 1$. This allowed us to ensure that our complexity labels were normalised in the range 0--1.
We initially employed crowd workers through the Figure Eight platform (formerly CrowdFlower), requesting 20 annotations per data instance and paying 3 cents per annotation. We selected annotators from English speaking countries (UK, USA and Australia). In addition, we used the annotation platform's in-built quality control metrics to filter out annotators who failed pre-set test questions, or who answered a set of questions too quickly.
After we had collected these results, we further analysed the data to detect instances where annotators had not fully participated in the task. We specifically analysed cases where an annotator had given the exact same annotation for all instances (usually these were all ``Neutral'') and discarded these from our data. We retained any data instance that had at least 4 valid annotations in our final dataset.
This led to the version of the dataset we describe as CompLex 1.0. Whilst this dataset confirmed the trends we expected to see, the conclusions we were able to draw from it were weaker than we hoped \cite{shardlow-etal-2020-complex}. The median number of annotators was 7 per instance, and we identified this as an area for improvement. More annotators means more opinions and better averaged judgments.
For the second round of annotations we used the Amazon Mechanical Turk platform. We used exactly the same data as in the original annotation of CompLex 1.0 and requested a further 10 annotations per instance, paying at a rate of 3 cents per annotation. We gave the same instructions to annotators regarding the Likert-scale points. As there is no in-built quality control in Mechanical Turk, we opted to release the data in batches (1,200 instances at a time). We reviewed the annotators' work between batches, rejecting accounts which submitted annotations too quickly, or without correlation to the other annotators' judgments. We also measured the correlation with lexical frequency to ensure that the annotations we were receiving were in the range we expected.
This allowed us to gather a further 108,000 annotations on the CompLex data. These new judgments were aggregated with those from CompLex 1.0 to give a new dataset --- CompLex 2.0. We used this data to run a shared task on Lexical Complexity Prediction at SemEval 2021.
\subsection{Corpus Statistics}
The first round of annotations led to an initial version of the corpus (CompLex 1.0), for which the statistics are reported in Table \ref{tab:complex1.0}. Due to the quality control that we employed for this round of annotation, we discarded a large portion of our original judgments and only kept instances with four or more annotations. This is evident in the fact that only 9,476 instances out of our original 10,800 are present in this iteration of the corpus. Additionally, the median number of annotators was 7 across our corpus (with the range being from 4 to 20). Retaining only the annotations whose quality we could be certain of was a difficult choice, as it reduced the amount of data available. However, the mean complexities of the sub-corpora were in line with our expectations, with biomedical text being on average more complex than the other two genres.
\begin{table}[ht]
\centering
\begin{tabular}{lccc}
\hline
\bf Genre & \bf Contexts & \bf Unique Words & \bf Complexity \\\hline
All & 9,476 & 5,166 & 0.394 \\
Europarl & 3,496 & 2,194 & 0.390 \\
Biomed & 2,960 & 1,670 & 0.407 \\
Bible & 3,020 & 1,705 & 0.385 \\
\hline
\end{tabular}
\caption{The statistics for CompLex 1.0.}
\label{tab:complex1.0}
\end{table}
This led us to undertake our second round of annotation in order to develop CompLex 2.0, ready for the SemEval shared task. We have included statistics on the annotations aggregated from both rounds in Table \ref{tab:amt_stats}. 513 separate annotators viewed our data, with each annotator seeing on average 542 instances across all rounds of annotation (around 5\% of our corpus). We gathered a total of 278,093 annotations, paying 3 cents per annotation. The average time spent per annotation was 21.61 seconds, which means that we paid our workers at an average rate of 5 US dollars per hour. The task received reviews indicating that annotators found it to be well paid in comparison to other tasks on the platform. We gathered an average of 25.75 annotations per instance; this is an increase over CompLex 1.0, which only had on average 7 annotations per instance. We expect that having more annotations per instance gives more reliable representations of the complexity of each word.
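The headline figures in Table \ref{tab:amt_stats} can be cross-checked with a few lines of arithmetic (a sketch; the variable names are ours):

```python
annotations = 278_093
instances = 10_800
annotators = 513
seconds_per_annotation = 21.61
cents_per_annotation = 3

per_instance = annotations / instances    # annotations per instance
per_annotator = annotations / annotators  # instances seen per annotator
dollars_per_hour = 3600 / seconds_per_annotation * cents_per_annotation / 100

print(round(per_instance, 2), round(per_annotator, 2), round(dollars_per_hour, 2))
# 25.75 542.09 5.0
```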
\begin{table}[ht]
\centering
\begin{tabular}{cc}
\hline
Number of Annotators & 513 \\
Number of Instances & 10,800 \\
Number of Annotations & 278,093 \\
Annotations per Instance & 25.75 \\
Instances per Annotator & 542.09\\
Time per Annotation & 21.61 (s)\\
\hline
\end{tabular}
\caption{Statistics on the annotations aggregated across both rounds of annotation for CompLex 2.0.}
\label{tab:amt_stats}
\end{table}
We report detailed statistics on our new dataset, CompLex 2.0, in Table \ref{tab:complex2.0}. In total, 5,617 unique tokens covering single words and multi-word expressions are distributed across 10,800 contexts. Whilst the contexts are split evenly between the genres (3,600 each), the number of repeated words is higher in the Biomed and Bible corpora, with more distinct words occurring in the Europarl corpus. The average complexity annotation is low at 0.321 for the entire corpus, indicating that the average complexity of words lies between point 2 (0.25 --- a word whose meaning the annotator was aware of) and point 3 (0.5 --- a word which was neither difficult nor easy) on our Likert scale. This indicates that annotators generally understood the words in our dataset. The annotations did use the full range of our Likert scale, and the dataset contains words of all complexities. We can see from the data that the Biomedical genre was on average more difficult to understand (0.353) than the other genres (0.303 for Europarl and 0.307 for Bible respectively). Multi-word expressions are markedly more complex (0.419) than single words (0.302), with the same genre distinctions as in the full data.
\begin{table}[ht]
\centering
\begin{tabular}{llccc}
\hline
\bf Subset & \bf Genre & \bf Contexts & \bf Unique Words & \bf Complexity \\\hline
\multirow{4}{*}{All} &
\bf Total & \bf 10,800 & \bf 5,617 & \bf 0.321 \\
& Europarl & 3,600 & 2,227 & 0.303 \\
& Biomed & 3,600 & 1,904 & 0.353 \\
& Bible & 3,600 & 1,934 & 0.307 \\\hline
\multirow{4}{*}{Single} &
\bf Total & \bf 9,000 & \bf 4,129 & \bf 0.302 \\
& Europarl & 3,000 & 1,725 & 0.286 \\
& Biomed & 3,000 & 1,388 & 0.325 \\
& Bible & 3,000 & 1,462 & 0.293 \\\hline
\multirow{4}{*}{MWE} &
\bf Total & \bf 1,800 & \bf 1,488 & \bf 0.419 \\
& Europarl & 600 & 502 & 0.388 \\
& Biomed & 600 & 516 & 0.491 \\
& Bible & 600 & 472 & 0.377 \\
\hline
\end{tabular}
\caption{The statistics for CompLex 2.0.}
\label{tab:complex2.0}
\end{table}
\subsection{Inter-annotator Agreement}
Testing whether annotators agree is more difficult in a crowd-sourcing setting as there is little time to train, test or survey annotators. We provided some controls as outlined above to ensure that annotators were fully participating in the task and that their annotations aligned with those of other annotators on the task. In our setting, we do not necessarily expect annotators to agree in every case as one may legitimately consider a word to be complex, whilst another considers it to be simple. A reasonable expectation is that annotators will provide annotations near to each other, and that the annotations will mostly fall into one category. We can generalise this to say that we typically expect the annotations for one instance to be normally distributed. We have already made this assumption, as we take the mean to give the average complexity.
To test this, we used a Shapiro-Wilk test \cite{shapiro_wilk}, which produces a test statistic in the range 0--1 indicating how closely a given sample follows the normal distribution. For each of our instances, we performed the test on the annotations for that instance. A higher value indicates that the instance has annotations which are more likely to be normally distributed, whereas a low value indicates a non-Gaussian distribution, such as a multi-modal distribution. We created a histogram of this data, which is displayed in Figure \ref{fig:Shapiro_Histogram}. This shows that the majority of our data receives a score between 0.7 and 0.9 according to the Shapiro-Wilk test, with a peak around 0.85. This indicates that our data is generally normally distributed, and hence that annotators generally gave annotations centered around a mean value.
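The per-instance test can be reproduced with scipy.stats.shapiro. Below is a minimal sketch (the function name is ours, and the example annotations mirror the `heaven' instance from Table \ref{tab:Shapiro_examples}: 24 Very Easy, 1 Easy, 3 Neutral on the 0--1 scale):

```python
from scipy.stats import shapiro

def normality_score(annotations):
    """Shapiro-Wilk W statistic for one instance's annotations; values
    near 1 suggest a roughly normal distribution, whereas low values
    suggest e.g. a multi-modal or heavily skewed one."""
    return shapiro(annotations).statistic

heaven = [0.0] * 24 + [0.25] + [0.5] * 3
w = normality_score(heaven)  # low W: heavily skewed annotations
```

Applying this to every instance and plotting a histogram of the W values yields the kind of figure described above.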
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{shapiro_hist.png}
\caption{A histogram of Shapiro-Wilk's test statistics, demonstrating the likelihood for each instance that the annotations are normally distributed.}
\label{fig:Shapiro_Histogram}
\end{figure}
In Table \ref{tab:Shapiro_examples} we show a number of examples from our corpus that do not follow the distribution that we might have expected. These were infrequent in our corpus, but are displayed here to help the reader understand where annotators may have disagreed. In example 1 the simple word `heaven' was given to annotators, most of whom assigned it to the \textit{Very Easy} category. However, 3 annotators disagreed, assigning it to the \textit{Neutral} category. Possibly, the annotators found the word easy, but the metaphorical usage harder to grasp. Example 2 shows a similar disagreement, albeit around a more difficult word. `Election' is a word that most people living in a democracy will have encountered, yet 5 people felt it was neither easy nor difficult --- placing it in the \textit{Neutral} category. Our third example, taken from the Biomedical genre, demonstrates a word (`granules') which is considered \textit{Easy} by 14 annotators, yet \textit{Difficult} by 4 annotators. Whilst `granules' is not a particularly rare word, it may be considered complex by some in this instance due to its contextual usage in the biomedical literature. Example 4 shows a word which is specific to biblical language (`cubit'). Although the annotations gave a reasonably Gaussian distribution (0.848 according to the Shapiro-Wilk statistic), they were split over all 5 potential categories. This is an example of the effect of annotators' previous familiarity with the text: those who know a cubit is an ancient measure of length will score it on the easier side of the Likert scale, whereas those who have not seen the word before will score it as more difficult. The remaining three examples (5--7) all score similarly highly on the Shapiro-Wilk test, yet have a wide spread of annotations. Again, this is likely due to the familiarity of the annotators with each word.
\begin{table}[ht]
\centering
\scalebox{0.98}{
\begin{tabular}{ccp{4.2cm}p{3.2cm}c}
\hline
\bf ID & \bf Corpus & \bf Context & \bf Annotations & \bf S-W \\\hline
1 & Bible & You will have treasure in \textbf{heaven}. & Very Easy = 24, Easy = 1, Neutral = 3 & 0.423 \\\hline
2 & Europarl & \textbf{Election} of Vice-Presidents (first, second and third ballots) & Very Easy = 19, Easy = 1, Neutral = 5 & 0.544 \\\hline
3 & Biomed & Annexin A7 was isolated as the agent that mediated aggregation of chromaffin \textbf{granules} and fusion of \ldots & Easy = 14, Neutral = 2, Difficult = 4 & 0.612 \\\hline
4 & Bible & Ehud made him a sword which had two edges, a \textbf{cubit} in length; and he wore it under his clothing on his right thigh. & Very Easy = 2, Easy = 3, Neutral = 4, Difficult = 12, Very Difficult = 8 & 0.848 \\\hline
5 & Europarl & I therefore wanted to tell you that I inadvertently voted ``yes'' in the vote on the Cornelissen report on the first part of \textbf{recital} 0, when I intended to vote ``no''. & Very Easy = 1, Easy = 11, Neutral = 3, Difficult = 9, Very Difficult = 2 & 0.848 \\\hline
6 & Europarl & The Rospuda valley is the last \textbf{peat} bog system of its kind in Europe. & Very Easy = 2, Easy = 11, Neutral = 3, Difficult = 4, Very Difficult = 4 & 0.848 \\ \hline
7 & Biomed & Amyloid burden worsens significantly with age, and by 9 mo, the \textbf{hippocampus} and cortex of untreated mice are largely filled with aggregated peptide. & Very Easy = 1, Easy = 8, Neutral = 7, Difficult = 9, Very Difficult = 2 & 0.901 \\\hline
\end{tabular}
}
\caption{Examples of annotations with interesting distributions indicating disagreement among annotators. The target word in the context is highlighted in bold. S-W stands for Shapiro-Wilk.}
\label{tab:Shapiro_examples}
\end{table}
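The per-instance Shapiro-Wilk statistics discussed above can be reproduced from the raw Likert annotations. A minimal sketch using SciPy follows; the annotation counts are taken from Example 1 in the table, encoded on the 5-point scale (1 = Very Easy .. 5 = Very Difficult), though the exact statistic may differ slightly between implementations:

```python
from scipy.stats import shapiro

# Example 1 ('heaven'): Very Easy = 24, Easy = 1, Neutral = 3,
# encoded on the 5-point Likert scale (1 = Very Easy .. 5 = Very Difficult)
annotations = [1] * 24 + [2] * 1 + [3] * 3

# A low W statistic (and small p-value) signals departure from normality
stat, p_value = shapiro(annotations)
```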
\subsection{Corpus Features} \label{sec:corpus_ftrs}
We have presented a corpus that has been developed according to the recommendations that we have set out earlier in this work (see Section \ref{sec:building_on}). Whilst we have made every effort to follow these, practical concerns have led us to pragmatic design decisions that made the development of our corpus feasible. In the following list, we itemise the design decisions that were made during the construction of our corpus and show how these link to the recommendations from Table \ref{tab:cwi_spec}.
\begin{enumerate}
\item \textbf{Continuous Annotations:} We have implemented this using a Likert Scale as described above. Unlike Maddela-18 who used a 4-point Likert scale, we chose a 5-point Likert scale to allow annotators to give a neutral judgment. To give final complexity values we took the mean average of these annotations, transforming the complexity labels into the range 0--1.
\item \textbf{Context:} We presented annotations in context to the annotators and explicitly asked annotators to judge a word based on its contextual usage (but not on the context itself). There are clear variations in the complexity of a common token across multiple instances. For example, the word `table' receives a higher complexity rating in the less common sense of `table a motion' than in the more frequent sense of something being `on the table'.
\item \textbf{Multiple Token Instances:} We presented a maximum of 5 instances per token, per genre. This led to 5,617 tokens across 10,800 contexts giving an average density of 1.92 contexts per token. Although some tokens do appear in multiple contexts, 3,423 words appear with only a single context. 671 tokens feature 5 or more contexts. This is a compromise between our desire to have a wide variety of tokens covered in the dataset and to have multiple instances of each token. A dataset featuring a more rigorous treatment of contexts may reveal the role of context in complexity estimation in a way that our data is not able to.
\item \textbf{Multiple Token Annotations:} We have described our process of gathering an average of 25.75 annotations per instance. We could have chosen to do fewer annotations in favour of further instances, however we prioritised having a large number of judgments per instance to give a more consistent and representative averaged annotation.
\item \textbf{Diverse Annotators:} We did not place many restrictions, or record demographic information regarding our annotators. Doing so may have helped to better understand the makeup of our annotations and identify potential biases. We did not record this information due to the crowd-sourcing setting that we used. This is something for future LCP annotation efforts to consider.
\item \textbf{Multiple Genres:} We have selected three diverse genres with a potential for complex language. We deliberately avoided the use of Wiki text as this has been studied widely already in CWI.
\item \textbf{Multi-word Expressions:} We have included these in a limited form as part of our corpus. The MWEs make up 16.66\% of our corpus. We have included these as an interesting area to study and we hope that their inclusion will shed light on the complexity of MWEs. Further studies could focus on specific MWE types, extending our research.
\end{enumerate}
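The aggregation in item 1 can be sketched as follows. Note that the exact linear mapping $(\bar{x}-1)/4$ from the 1--5 Likert scale to the 0--1 range is our assumption for illustration; the text above only states that the final labels are mean averages lying in 0--1:

```python
def complexity_score(annotations):
    """Mean-average 5-point Likert annotations (1 = Very Easy .. 5 = Very
    Difficult) and rescale to the 0--1 range. The (mean - 1) / 4 mapping is
    an assumed linearisation, not taken verbatim from the paper."""
    return (sum(annotations) / len(annotations) - 1) / 4
```

For instance, the 'heaven' annotations (24 Very Easy, 1 Easy, 3 Neutral) yield a complexity score near zero, matching the intuition that the word is easy.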
\noindent
The CompLex 2.0 corpus fulfils the recommendations we have set out, however it is intended as a starting point for future LCP research. Using the same methodology that we have described, further datasets for the annotation of complex words can be developed, focussing on the remaining open research problems in lexical complexity prediction.
\section{Predicting Categorical Complexity} \label{sec:analysis_of_existing}
We represented words and multiword units in the CWI 2016 \cite{paetzold-specia:2016:SemEval1}, CWI 2018 \cite{yimam2018report}, CompLex 2.0 single word, and multiword datasets as vectors of features which, on the basis of previous work in lexical simplification \cite{paetzold-2016}, text readability \cite{yaneva-2017,deutsch-2020}, psycholinguistics/neuroscience \cite{yonelinas-2005}, and our inspection of the annotated data, we consider likely to be predictive of their complexity (Section \ref{sec:analysis_of_features}).
We used the \texttt{trees.RandomForest} method distributed with Weka \cite{Hall2009} to build baseline lexical complexity prediction models exploiting the features presented in Section \ref{sec:analysis_of_features}. In the experiments described in the current Section, we framed the prediction as a classification task with continuous complexity scores mapped to a 5-point scale. The points on this scale denote the proportions of annotators who consider the word complex (c): few ($0 \leq c < 0.2$), some ($0.2 \leq c < 0.4$), half ($0.4 \leq c < 0.6$), most ($0.6 \leq c < 0.8$), and all ($0.8 \leq c \leq 1$).
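The mapping from continuous scores to the five classification bands follows directly from the interval definitions above:

```python
def complexity_band(c):
    """Map a continuous complexity score c in [0, 1] to the five bands
    defined in the text: few [0, 0.2), some [0.2, 0.4), half [0.4, 0.6),
    most [0.6, 0.8), all [0.8, 1]."""
    if c < 0.2:
        return "few"
    if c < 0.4:
        return "some"
    if c < 0.6:
        return "half"
    if c < 0.8:
        return "most"
    return "all"
```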
Table \ref{table:evaluationBaselineRandomForests} displays weighted average F-scores and mean absolute error scores obtained by the models in the ten-fold cross validation setting. This table includes statistics on the number of instances to be classified in each dataset.
\begin{table}[ht]
\begin{centering}
\begin{tabular}{cccc}
\hline
& \bf F-score & & \\
\bf Dataset & \bf (weighted average) & \bf MAE & \bf Instances \\ \hline
CWI 2016 & 0.915 & 0.04 & $2\,237$ \\
CWI 2018 & 0.843 & 0.0681 & $11\,949$ \\
CompLex 2.0 (single) & 0.607 & 0.1782 & $7\,233$ \\
CompLex 2.0 (MWE) & 0.568 & 0.2137 & $1\,465$ \\
\hline
\end{tabular}
\caption{Evaluation results of the baseline \texttt{trees.RandomForest} classifier. \label{table:evaluationBaselineRandomForests}}
\end{centering}
\end{table}
Table \ref{table:featureAblation} displays the results of an ablation study performed in order to assess the contribution of various groups of features to the word complexity prediction task applied in the four datasets: CWI--2016, CWI--2018, CompLex (single words), and CompLex (multiple words). The feature sets refer to those studied previously in this work in Section \ref{sec:analysis_of_features}. In the table, negative values of $\Delta$MAE indicate that the features are helpful, reducing the mean absolute error of the classifier. The reverse is true of positive values.
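The ablation protocol can be mimicked in a scikit-learn analogue of the Weka setup described above. This is a sketch only: the feature-group column indices, the data, and the forest hyperparameters are placeholders, not the paper's exact pipeline. It follows the sign convention of the table, where a negative $\Delta$MAE means the ablated group was helpful:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def cv_mae(X, y, cols):
    """Ten-fold CV mean absolute error of a random forest restricted to
    the feature columns in `cols`."""
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    pred = cross_val_predict(clf, X[:, cols], y, cv=10)
    return float(np.mean(np.abs(pred - y)))

def ablation_delta_mae(X, y, groups):
    """Delta-MAE per named feature group: MAE(all features) minus
    MAE(without the group), so negative values mean the group helped."""
    all_cols = list(range(X.shape[1]))
    base = cv_mae(X, y, all_cols)
    return {name: base - cv_mae(X, y, [c for c in all_cols if c not in cols])
            for name, cols in groups.items()}
```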
\begin{table}[ht]
\begin{small}
\begin{centering}
\begin{tabular}{cllll}
\hline
& \multicolumn{4}{c}{\bf $\Delta$MAE} \\
\bf Ablated & & & \bf CompLex & \bf CompLex \\
\bf feature group & \bf CWI--2016 & \bf CWI--2018 & \bf (single) & \bf (multi) \\ \hline
A & +1E-04 & 0 & +0.0002 & -0.0002 \\
B & +0.0002 & +0.0002 & -1E-04 & -0.0006 \\
C & -0.0001 & +0.0004 & +1E-04 & -0.0004 \\
D & -0.0001 & +0.0001 & -1E-04 & -0.0002 \\
E, F, G & 0 & +0.0001 & +0.0003 & +0.0006 \\
H, I, J & +0.0002 & +0.0004 & +0.0007 & +0.0012 \\
M & -0.0001 & +0.0001 & 0 & -0.001 \\
N & 0 & 0 & +1E-04 & +0.0005 \\
P & -0.0002 & +0.0001 & +0.0002 & -0.0007 \\
Q & 0 & +0.0001 & +1E-04 & -0.0002 \\
R & +1E-04 & 0 & -1E-04 & -0.0009 \\
S & +1E-04 & +0.0001 & 0 & -1E-04 \\
Linguistic features (A-S) & 0 & +0.0009 & +0.0018 & +0.0027 \\
T & -0.0029 & +0.002 & +0.001 & -0.0065 \\
All but C & +0.0093 & -0.0681 & +0.0469 & +0.0396 \\
\hline
\end{tabular}
\end{centering}
\caption{Results of feature ablation.} \label{table:featureAblation}
\end{small}
\end{table}
Our results indicate that for prediction of lexical complexity in the CWI--2016 dataset, five of the ablated feature groups are useful. Features encoding information about word length and the regularity of the singular/plural forms of nouns, the typical age of acquisition of the words, and the broad syntactic categories of the words improve the accuracy of the classifier, as do word embeddings.
For words in the CWI--2018 dataset, no feature group was found to be particularly useful for prediction of lexical complexity, though a simple model based only on word length information outperformed the default baseline exploiting all features.
When predicting the lexical complexity of individual words in the CompLex data, features encoding information about whether or not the word was archaic, about the regularity of the singular/plural forms of nouns, and about the stress patterns of the words were all found to be useful. When considering multiword units (bigrams), a far larger proportion of the feature groups was observed to be useful for lexical complexity prediction. In our ablation study of bigrams, we assigned the bigram the average value for numeric features and the value assigned to the second word for symbolic/string features. We found that features encoding information about word frequency, whether or not the words were archaic, word length, regularity of singular/plural forms, standard age of acquisition, broad syntactic category, the word's status as either archaic, alien, obsolete, colloquial, rare, or standard, the stress pattern of the word, and the occurrence of an INFOBOX element in the Wikipedia entry for the word were all useful predictors of lexical complexity. Averaged word embeddings also improved the accuracy of predictions made by the \texttt{trees.RandomForest} classifier in the CompLex (multi) dataset.
In the CWI--2016 and CWI--2018 datasets, we used Weka's \texttt{PrincipalComponents} attribute (feature) selection method to evaluate the 378 numerical features described previously in Tables \ref{table:wordFeaturesAJ} and \ref{table:wordFeaturesKT}. Table \ref{table:featureSelection} displays the ten top ranked features for the four datasets. The main observation to be drawn from the feature selection study is the usefulness of information related to word familiarity, concreteness, and imageability in all datasets, as well as information from the vector representations of words derived using GloVe \cite{pennington2014glove}.
\begin{table}[ht]
\begin{small}
\centering
\begin{tabular}{ccccc}
\hline
& & & \bf CompLex & \bf CompLex \\
\bf Rank & \bf CWI--2016 & \bf CWI--2018 & \bf (single) & \bf (multi) \\ \hline
1 & E, F, G, K & E, F, G & E, F, G & T (subset) \\
2 & T (subset) & E, F, G & T (subset) & T (subset) \\
3 & H, I, J , T (subset) & & T (subset) & E, F, G \\
4 & T (subset) & T (subset) & T (subset) & T (subset) \\
5 & T (subset) & T (subset) & D, N, A & T (subset) \\
6 & T (subset) & D, T (subset) & T (subset) & T (subset) \\
7 & T (subset) & T (subset) & T (subset) & T (subset) \\
8 & T (subset) & T (subset) & T (subset) & T (subset) \\
9 & T (subset) & T (subset) & T (subset) & T (subset) \\
10 & T (subset) & P, T (subset) & T (subset) & T (subset) \\
\hline
\end{tabular}
\caption{Results of feature selection (\texttt{PrincipalComponents}).} \label{table:featureSelection}
\end{small}
\end{table}
These results demonstrate that by using our new data from CompLex 2.0, the features that we expect to correlate well with complexity judgments are more likely to be effective features for classification than when annotations are done in a binary setting as in the CWI--2016 and CWI--2018 datasets.
\section{Predicting Continuous Complexity}
\label{sec:complex_experiments}
In this section, we use the data we have collected to discuss the nature of complex words from a different perspective than in Section \ref{sec:analysis_of_existing}. Whereas in the previous Section we converted all labels into a categorical format to allow comparison, in this Section we use the labels assigned to CompLex 2.0 to discuss factors affecting the nature of lexical complexity and its prediction. We first look at the effects of genre on complex word identification. We then continue our exploration by studying the distribution of annotations, to determine how and when annotators agree on the complexity of a word.
\subsection{Prediction of Complexity Across Genres}
One feature of our corpus is the presence of multiple genres. We have included several genres as we wish to ensure that models resulting from our data are transferable to domains outside of those in our corpus. This is important for CWI as previous corpora have focused on a narrow range of genres (Wikipedia and news text), which may limit their applicability outside of these domains.
We used a linear regression with the features described previously in Section~\ref{sec:analysis_of_features}. We used the single words in the corpus and split the data into training and test portions, with 90\% of the data in the training portion and 10\% of the data in the test portion. We first created our linear regression using all the available training data and evaluated it using Pearson's correlation. We used the labels given to the data during the annotation round we undertook to create CompLex 2.0. We achieved a score of 0.771, indicating a reasonably high level of correlation between our model's predictions and the labels of the test set.
This result is recorded in Table~\ref{tab:predicting}, where we also show the results for each genre. In each case, we have selected only data from a given genre and followed the same procedure as above, splitting into train and test and evaluating using Pearson's correlation. There is a modest drop in performance for Europarl (0.724) and the Bible data (0.735), which is expected given the reduction in size of the training data. It is surprising to see that the Biomedical data had a higher performance than any other subset (0.784). This may indicate that there is a sharper difference between simple and complex words in this corpus, which can be learnt from a more focussed training set.
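The evaluation described above amounts to a 90/10 split, a linear regression, and Pearson's correlation on the held-out portion. A hedged sketch follows; the feature matrix and labels are placeholders for the feature set described earlier, not the paper's exact data:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

def genre_correlation(X, y, seed=0):
    """90/10 train/test split, fit a linear regression, and report
    Pearson's r between predictions and gold labels on the test split."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.1, random_state=seed)
    model = LinearRegression().fit(X_tr, y_tr)
    r, _ = pearsonr(model.predict(X_te), y_te)
    return r
```

Restricting `X` and `y` to instances from a single genre before calling the function reproduces the per-genre rows of the results table.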
\begin{table}[ht]
\centering
\begin{tabular}{cc}
\hline
\bf Subset & \bf Correlation \\\hline
All & 0.771 \\
Europarl & 0.724 \\
Biomed & 0.784\\
Bible & 0.735 \\
\hline
\end{tabular}
\caption{Results of training a linear regression on all the data, and on each genre.}
\label{tab:predicting}
\end{table}
To further determine the effects of genre on lexical complexity prediction, we constructed a new linear regression that was trained and tested using specific genres selected from our corpus. We trained on single genres and tested on each of the other 2 genres, as well as training on a combined subset of 2 genres and testing on the remaining genre. The results for this experiment are shown in Table \ref{tab:genre}. We were able to build a reliable predictive model for cross-genre complexity prediction in each case.
Our results show that there is a dip in performance when training on out-of-domain data, compared to training on in-domain data. This is true across all genres, where a reduction of between 0.119 and 0.297 can be observed in Pearson's correlation. In each genre, the scores improve when training on the other two genres, rather than just on one. This may be due to the effect of multiple genres helping the linear regression to generalise to global complexity effects, rather than overfitting to specific complexity features in one genre. If we were to test our results on a further genre, we may hope to see that training on three genres (as are present in our corpus) would yield even more generalised results.
\begin{table}[ht]
\centering
\begin{tabular}{ccc}
\hline
\bf Train & \bf Test & \bf Correlation \\\hline
Biomed & Europarl & 0.542 \\
Bible & Europarl & 0.484 \\
Biomed + Bible & Europarl & 0.651 \\\hline
Bible & Biomed & 0.487 \\
Europarl & Biomed & 0.630 \\
Bible + Europarl & Biomed & 0.723 \\\hline
Biomed & Bible & 0.605 \\
Europarl & Bible & 0.616 \\
Biomed + Europarl & Bible & 0.692 \\
\hline
\end{tabular}
\caption{Results of training a linear regression on one genre, or pair of genres and testing on a different genre.}
\label{tab:genre}
\end{table}
\subsection{Subjectivity}
We previously used a Shapiro-Wilk test to demonstrate that our annotations are generally normally distributed. We have taken the mean of each annotation distribution to give a complexity score for each instance in our dataset. An interesting question to ask is how representative these means are of the true complexity of a word. One word may be considered easy by one annotator, yet difficult by another. Factors such as age, education and background may well affect which words a reader is familiar with. We can use the normally distributed annotations to understand this phenomenon by investigating the standard deviations of the annotations for each instance.
We have provided examples from our corpus in Table \ref{tab:stdev_examples} with both the mean complexity and the standard deviation ($\sigma$) of the annotations. The top three rows show examples of high standard deviation, whereas the bottom three rows show examples of low standard deviations. It is clear from the table that annotators generally agree more on less complex words, with disagreements tending to happen around the more difficult words. An analysis of the mean complexity and standard deviation of the complexity yields a Pearson's correlation of 0.621, indicating that these are indeed linked.
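The correlation between mean complexity and annotation spread reported above (Pearson's r of 0.621 on CompLex 2.0) can be computed per instance as follows; the annotation lists in the test are illustrative only, not corpus data:

```python
import numpy as np
from scipy.stats import pearsonr

def subjectivity_correlation(annotation_sets):
    """Pearson's r between each instance's mean complexity and the
    standard deviation of its annotations; a high r indicates that
    complex instances attract more disagreement."""
    means = [float(np.mean(a)) for a in annotation_sets]
    stds = [float(np.std(a)) for a in annotation_sets]
    return pearsonr(means, stds)[0]
```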
\begin{table}[ht]
\centering
\begin{tabular}{cp{5cm}cc}
\hline
\bf Corpus & \bf Context & \bf Complexity & \bf $\sigma$ \\\hline
Biomed & The first step requires generating a floxed allele in ES cells that will serve as the \textbf{substrate} for subsequent exchanges (RMCE-ready ES cell, Figure 1). & 0.556 & 0.433 \\
Bible & The second came, saying, 'Your mina, Lord, has made five \textbf{minas}. & 0.433 & 0.423 \\
Europarl & 'Budget support' refers to the transfer of financial resources from a funding agency outside the partner country's treasury, under the \textbf{proviso} that the country abide by the agreed conditions governing payments. & 0.567 & 0.382 \\
Biomed & Similarly, changes in \textbf{synaptic plasticity} due to Ca2+-permeable AMPARs [51,52,60], e.g., in piriform cortex, might alter odor memorization processes. & 0.975 & 0.077 \\
Bible & Or were you baptized into the \textbf{name} of Paul? & 0.000 & 0.000 \\
Europarl & Therefore, I would like to ask, in accordance with the Rules of Procedure, for the \textbf{matter} to be referred to the competent body. & 0.175 & 0.118 \\
\hline
\end{tabular}
\caption{Examples of instances with subjective (wide standard deviation) and certain (narrow standard deviation) annotations.}
\label{tab:stdev_examples}
\end{table}
\section{Discussion}\label{sec:discussion}
Our work has sought to introduce a new definition of lexical complexity to the research community. Whereas previous treatments of lexical complexity have considered it a binary affair in the Complex Word Identification (CWI) setting, we have extended this definition to lexical complexity prediction (LCP), considering complexity as a continuous value associated with a word. This new definition asks the question of `how complex is a word' rather than `is this word complex or not?'. This question allows us to give each token a complexity rating on a continuous scale, rather than a binary judgment. If binary judgments were required, it would be easy to create them using our dataset by imposing a threshold at some point in the data. By imposing thresholds at different points, binary labels can be obtained to suit different subjective definitions of complexity. Further, by implementing multiple thresholds, multiple categorical labels can be recovered from the data.
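Recovering binary or multi-category CWI-style labels from the continuous scores, as described above, is a matter of thresholding. The threshold values below are illustrative; the text deliberately leaves the choice open as a subjective definition of complexity:

```python
def binarize(scores, threshold=0.5):
    """Recover binary CWI-style labels from continuous LCP scores;
    any threshold yields a valid binary labelling (0.5 is illustrative)."""
    return [int(s >= threshold) for s in scores]

def categorize(scores, thresholds=(0.2, 0.4, 0.6, 0.8)):
    """Multiple thresholds recover multiple ordinal categories:
    the label is the number of thresholds the score meets or exceeds."""
    return [sum(s >= t for t in thresholds) for s in scores]
```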
In Section \ref{sec:analysis_of_features} we showed that the types of features we would typically expect to correlate with word complexity did not show any correlation with the CWI-2016 and CWI-2018 datasets. This motivated our analysis of the protocol underlying the annotation of these datasets and our development of a new protocol for CWI annotation. In Section \ref{sec:analysis_of_existing}, we were able to show through the use of feature ablation experiments that more of the feature sets that we used were relevant to the classification of CompLex 2.0 than were relevant to the annotation of CWI--2016 or CWI--2018. This implies that the annotations in our new dataset are more reflective of traditional measures of complexity.
We discussed the existing CWI datasets at length, culminating in our new specification for LCP datasets in Section~\ref{sec:analysis_of_existing}.
Whilst we have gone on to develop our own dataset (CompLex 2.0), we also hope to see future work developing new CWI datasets following the principles that we have laid out.
Future datasets could focus on multilinguality, multi-word expressions, further genres, or simply extending the analysis we have done to further tokens and contexts.
Certainly, we do not see the production of CompLex 2.0 as an end point in LCP research, but rather a starting point for other researchers to build from.
This is why we have included our protocol in such detail --- in order to enable the replicability of our work for future research.
We implemented our specification for a new LCP dataset, following the recommendations in Section \ref{sec:analysis_of_features}. This led to the creation of CompLex 2.0. In Section \ref{sec:corpus_ftrs} we have explicitly compared our dataset to the recommendations we made in Table~\ref{tab:cwi_spec}, and we would encourage the creators of future LCP datasets to do the same, ensuring that datasets can be easily evaluated and compared at a feature level. The CompLex 2.0 dataset is available via GitHub\footnote{\url{https://github.com/MMU-TDMLab/CompLex}}. We have made this data available under a CC-BY licence, maximising its reuse and reproducibility outside of our work.
Our new LCP dataset is the first to provide continuous complexity annotations for words in context. The role of context in lexical complexity has not been widely studied and we hope that this dataset will go some way towards allowing researchers to work on this topic. Indeed, the evidence from our annotations shows that for a single token in multiple contexts, the complexity annotation of that token does vary. Further work is needed to determine whether the variation is an effect of the contextual occurrence or a difference in sense, and not due to the stochastic nature of annotations resulting from crowdsourcing.
Although we gave annotators in our task a 5-point scale ranging from Very Easy to Very Difficult, we chose to aggregate the annotations to give a mean-average for each instance. This makes a fundamental assumption that the distance in continuous complexity space between each of our points on the Likert scale is constant. Obviously, there is no guarantee that such an assumption is true. The danger of this is that annotations may be falsely biased towards one end of the scale. For instance, if the distance between Very Easy and Easy is shorter than the distance between Easy and Neutral, then considering these as the same distance will falsely inflate complexity ratings. Another strategy could have been to take the median or mode of the complexity annotations to give a final value. The disadvantage of that approach would be that every instance would have an ordinal categorical label instead of a continuous label as we have advocated for. This would be a different problem to the one we have explored, and is left to future research.
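The difference between the aggregation strategies discussed above can be seen on a small hypothetical annotation set (the values are invented for illustration):

```python
import statistics

# Hypothetical 5-point Likert annotations for one instance
ann = [1, 1, 2, 4, 5]

mean_label = statistics.mean(ann)      # 2.6: a continuous label
median_label = statistics.median(ann)  # 2: an ordinal categorical label
mode_label = statistics.mode(ann)      # 1: also ordinal, and more extreme
```

Only the mean produces the continuous labels the paper advocates for; the median and mode collapse back onto the Likert points.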
We were able to use our data to show that complexity can be predicted across genres. This is encouraging as our dataset contains three diverse genres, and we can expect that the complexity annotations we have identified will generalise well to other genres. A model trained on all three genres will learn features of complexity that are common to all genres, rather than to any one specific genre.
We also demonstrated that our instances vary in subjectivity of complexity, with those rated as more complex typically being more subjective. Investigating what factors make a word subjectively complex would be an interesting line of study, but is left for future research.
\section{Conclusion}\label{sec:conclusion}
We have demonstrated that previous datasets are insufficient for the task of Complex Word Identification. In fact, the very definition of the task --- identifying complex words in a binary setting --- is at fault. We have advocated for a generalisation of this task to Lexical Complexity Prediction and we have provided recommendations for datasets approaching this task. Further to this, we have provided a new dataset, CompLex 2.0, which is the first dataset to provide continuous complexity annotations on words in context. We have scratched the surface of experiments that can be performed using CompLex and we release the data in full to allow future researchers to join us in the task of lexical complexity prediction.
\bibliographystyle{spmpsci}
\section{Introduction}
Recently, privacy preserving machine learning studies have aimed at protecting sensitive information during training and/or testing of a model in scenarios where data is distributed between different sources and cannot be shared in plaintext \citep{mohassel2017secureml,wagh2018securenn,juvekar2018gazelle,mohassel2018aby3,unal2019framework,damgaard2019new,byali2020flash,PatraS20}. However, privacy protection in the computation of the area under curve (AUC), which is one of the most preferred methods to compare different machine learning models with binary outcomes, has not been addressed sufficiently. Even though there are no studies in the literature enabling such a computation for the precision-recall (PR) curve, there are several differential privacy based approaches for the receiver operating characteristic (ROC) curve \citep{chaudhuri2013stability,boyd2015differential,chen2016differentially}. Briefly, they protect the privacy of the data by introducing noise into the computation so that one cannot obtain the original data employed in the computation. However, due to the nature of differential privacy, the resulting AUC differs from the one that could be obtained by using non-perturbed prediction confidence values (PCVs). For private computation of the exact AUC, there exists no approach in the literature to the best of our knowledge.
In this paper, we propose the \textbf{p}rivacy \textbf{p}reserving \textbf{a}rea \textbf{u}nder \textbf{r}eceiver \textbf{o}perating characteristic and precision-\textbf{r}ec\textbf{a}ll curves ({ppAURORA}{}), based on a secure 3-party computation framework, to address the necessity of an efficient, private and secure computation of the exact AUC. We compute the area under the PR curve (AUPR) and the ROC curve (AUROC) with {ppAURORA}{}. We address two different cases of the ROC curve in {ppAURORA}{} by two different versions of the AUROC computation. The first one is designed for the computation of the exact AUC by using PCVs with no tie. In case of a tie between PCVs of samples from different classes, it only approximates the metric based on the order of the samples, which becomes problematic when the values of both axes change at the same time. In order to compute the exact AUC even in case of a tie, we introduce the second version, with a slightly higher communication cost than the first approach. Along with the AUC, both versions hide the number of samples belonging to each class from all participants of the computation; this information could otherwise be used to obtain the order of the labels of the PCVs \citep{whitehill2019does}. Furthermore, since we do not provide the data sources with the ROC curve, they cannot regenerate the underlying true data. Therefore, both versions are secure against such attacks \citep{matthews2013examination}. We utilized the with-tie version of the AUROC computation to compute the AUPR, since the values of both axes can change at the same time even if there is no tie.
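As a plaintext reference for the quantity the with-tie version targets, the exact AUROC with tied PCVs equals the Mann-Whitney statistic computed with midranks. The sketch below is purely illustrative: ppAURORA computes the same value under secret sharing, which this code does not attempt:

```python
def exact_auroc(labels, scores):
    """Exact area under the ROC curve via the Mann-Whitney formulation
    with midranks, which handles tied scores correctly (equivalent to
    the trapezoidal rule on the ROC curve)."""
    n = len(scores)
    order = sorted(range(n), key=lambda i: scores[i])
    ranks = [0.0] * n
    i = 0
    while i < n:
        # Find the block of tied scores and assign its average (mid)rank
        j = i
        while j + 1 < n and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        midrank = (i + j) / 2 + 1  # 1-based average rank of the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = midrank
        i = j + 1
    pos_ranks = [r for r, y in zip(ranks, labels) if y == 1]
    P = len(pos_ranks)
    N = n - P
    return (sum(pos_ranks) - P * (P + 1) / 2) / (P * N)
```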
We introduce a novel 3-party computation framework to achieve privacy preserving AUC computation with {ppAURORA}{}. The framework consists of privacy preserving sorting and novel, efficient versions of four operations: select share ($\mathsf{MUX}$), modulus conversion ($\mathsf{MC}$), compare ($\mathsf{CMP}$) and division ($\mathsf{DIV}$). $\mathsf{MUX}$ selects one of two secret shares based on a secret shared bit value. $\mathsf{MC}$ privately converts the ring of a secret shared value from $2^{\ell-1}$ to $2^{\ell}$. $\mathsf{CMP}$ compares two secret shared values, determining whether the first argument is larger than the second without revealing the values, and outputs the result in a secret shared form. $\mathsf{DIV}$ performs the division of two secret shared values without sacrificing the privacy of the values. Note that our new $\mathsf{DIV}$ is specifically customized for efficient and secure AUC computation.
\section{Scenarios}
In this section, we describe the scenarios at which {ppAURORA}{} is applicable. Note that it is not limited to these scenarios.
\textbf{End-to-end MPC-based Collaborative Learning:} Recently, researchers proposed multi-party computation (MPC) based training and testing of several machine learning algorithms \citep{mohassel2017secureml,wagh2018securenn,damgaard2019new}. Their approaches can train the model privately and collaboratively, and make predictions on the test data of the data sources involved in the computation. However, the privacy preserving collaborative evaluation of the model is lacking. To fulfill such a gap, one can integrate {ppAURORA}{} at the end of the process once the PCVs are secret shared among two computing parties. One possible scenario would be that one wants to build a model for predicting hospitalization times for COVID-19 patients. Usually, the personal data cannot be shared easily, due to the private nature of the data, but even sharing the hospitalization times might be problematic in case the hospitals do not want to allow competitors to learn about this piece of information. Nevertheless, one can assume that a model built on data from many hospitals will perform much better than models built on individual datasets and that the global AUC will allow for a better model selection than an average of locally computed AUCs. This privacy preserving global AUC computation can be performed with our {ppAURORA}{}.
\textbf{Evaluation of Models Trained by Federated Learning:} In some cases, the data is not allowed to be shared at all. For such cases, federated learning has been widely utilized to train a collaborative machine learning model without gathering the data in one place. Each data source updates the model by its own data in an online learning process and passes the model to the next data source until the model converges or the iteration limit is reached. Once the data sources obtain the trained model, they make the predictions of their own test data. In order to collaboratively evaluate the performance of the model, they can utilize {ppAURORA}{}.
\section{Preliminaries}
\textbf{Security Model:} In this study, we prove the full security of our solution (i.e., privacy and correctness) in the presence of semi-honest adversaries that follow the protocol specification but try to learn information from the execution of the protocol. We consider a scenario where a semi-honest adversary corrupts a single server and an arbitrary number of data owners in the simulation paradigm \citep{Lindell17,Canetti01}, in which two worlds are defined: the real world, where parties run the protocol without any trusted party, and the ideal world, where parties make the computation through a trusted party. Security requires that the view generated by a simulator $\mathcal{S}$ in the ideal world be indistinguishable from the view of an adversary $\mathcal{A}$ in the real world. The universal composability framework \citep{Canetti01} introduces an adversarial entity called the environment $\mathcal{Z}$, which gives inputs to all parties and reads outputs from them. The environment is used to model the security of end-to-end protocols in which several secure protocols are composed arbitrarily. Security here is modeled as \textit{no environment can distinguish if it interacts with the real world and the adversary $\mathcal{A}$ or the ideal world and the simulator $\mathcal{S}$}. We also provide privacy in the presence of a malicious adversary corrupting any single server, as formalized by \citet{araki2016high}: a malicious party, which arbitrarily deviates from the protocol description, cannot learn anything about the inputs and outputs of the honest parties.
\textbf{Notations:} In our secure protocols, we use additive secret sharing over three different rings $\mathbb{Z}_{L}$, $\mathbb{Z}_{K}$ and $\mathbb{Z}_{P}$ where $L = 2^{\ell}$, $K = 2^{\ell-1}$, $P=67$ and $\ell=64$. We denote two shares of $x$ over $\mathbb{Z}_{L}$, $\mathbb{Z}_{K}$ and $\mathbb{Z}_{P}$ with ($\langle x\rangle_0$, $\langle x\rangle_1$), ($\langle x\rangle_0^K$, $\langle x\rangle_1^K$) and ($\langle x\rangle_0^P$, $\langle x\rangle_1^P$), respectively. If a value $x$ is shared over the ring $\mathbb{Z}_P$, each bit of $x$ is additively shared in $\mathbb{Z}_P$. This means $x$ is shared as a vector of $64$ shares where each share takes a value between $0$ and $66$. We also use boolean sharing of a single bit which is denoted with ($\langle x\rangle_0^B$, $\langle x\rangle_1^B$).
\subsection{Secure Multi-party Computation}
Secure multi-party computation was proposed in the 1980s \citep{Yao:1986:GES:1382439.1382944,Goldreich:1987:PAM:28395.28420}. These studies showed that multiple parties can jointly compute any function without learning anything about the inputs of the other parties. Let us assume that there are $n$ parties $I_1,\ldots,I_n$ and $I_i$ has a private input $x_i$ for $i \in \{1,\ldots,n\}$. All parties want to compute an arbitrary function $(y_1,\ldots,y_n) = f(x_1,\ldots,x_n)$ such that $I_i$ obtains the result $y_i$. MPC allows the parties to compute the function through an interactive protocol in which $I_i$ learns only $y_i$.
We first explain 2-out-of-2 additive secret sharing and how addition (ADD) and multiplication (MUL) are computed. In additive secret sharing, an $\ell$-bit value $x$ is shared additively in the ring $\mathbb{Z}_{L}$ as the sum of two values. For $\ell$-bit secret sharing of $x$, we have $\langle x\rangle_{0} +\langle x\rangle_{1} \equiv x\mod L$ where $I_i$ knows only $\langle x\rangle_{i}$ for $i \in \{0,1\}$. All arithmetic operations are performed in the ring $\mathbb{Z}_{L}$. For additive secret sharing, we use protocols based on Beaver's multiplication triples \citep{DBLP:conf/crypto/Beaver91a}.
\textbf{Addition:} $\langle z\rangle = \langle x\rangle +\langle y\rangle$. $I_i$ locally computes $\langle z\rangle_{i}=\langle x\rangle_{i} +\langle y\rangle_{i}$. In order to compute the addition of a shared value $\langle x\rangle $ and a constant $c$, $I_i$ locally computes $\langle z\rangle_i = \langle x\rangle_i + c$ and $I_{1-i}$ locally computes $\langle z\rangle_{1-i} = \langle x\rangle_{1-i}$.
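The sharing and local-addition rules above can be illustrated with a short single-machine sketch. This is a plaintext toy, not a networked implementation, and the helper names \texttt{share} and \texttt{reconstruct} are ours:

```python
# Toy single-machine sketch of 2-out-of-2 additive sharing over Z_L.
# In the real protocol the two shares live on separate parties.
import random

ELL = 64
L = 1 << ELL

def share(v):
    """Split v into two additive shares over Z_L."""
    s0 = random.randrange(L)
    return s0, (v - s0) % L

def reconstruct(s0, s1):
    return (s0 + s1) % L

x0, x1 = share(20)
y0, y1 = share(22)
# ADD: each party adds its shares locally, without any communication
z0, z1 = (x0 + y0) % L, (x1 + y1) % L
assert reconstruct(z0, z1) == 42
# adding a public constant: only one party adds it to its share
assert reconstruct((z0 + 8) % L, z1) == 50
```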
\textbf{Multiplication:} $\langle z\rangle = \langle x\rangle \cdot \langle y\rangle $. Multiplication is performed using a pre-computed multiplication triple $(\langle a\rangle, \langle b\rangle, \langle c\rangle)$ with $c = a \cdot b$ \citep{DBLP:conf/crypto/Beaver91a}. $I_i$ computes $\langle e\rangle_i = \langle x\rangle_i - \langle a\rangle_i $ and $\langle f\rangle_i = \langle y\rangle_i - \langle b\rangle_i $, and sends $\langle e\rangle_i$ and $\langle f\rangle_i$ to $I_{1-i}$. Both parties reconstruct $e$ and $f$, and then $I_i$ computes $\langle z\rangle_i = i\cdot e \cdot f + f \cdot \langle a\rangle_i + e \cdot \langle b\rangle_i + \langle c\rangle_i$. The multiplication triple is generated via homomorphic encryption or oblivious transfer; $I_i$ cannot perform multiplication locally.
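The triple-based multiplication can be sanity-checked in the clear. The sketch below plays all roles (dealer, $I_0$, $I_1$) on one machine, which of course offers no privacy; it only verifies the arithmetic of the reconstruction formula:

```python
# Single-machine sanity check of Beaver-triple multiplication over Z_L.
import random

L = 1 << 64

def share(v):
    s0 = random.randrange(L)
    return s0, (v - s0) % L

# dealer: a triple (a, b, c) with c = a*b, shared between the parties
a, b = random.randrange(L), random.randrange(L)
a0, a1 = share(a); b0, b1 = share(b); c0, c1 = share((a * b) % L)

x, y = 1234, 5678
x0, x1 = share(x); y0, y1 = share(y)

# both parties open the blinded values e = x - a and f = y - b
e = (x0 - a0 + x1 - a1) % L
f = (y0 - b0 + y1 - b1) % L

# z_i = i*e*f + f*a_i + e*b_i + c_i
z0 = (f * a0 + e * b0 + c0) % L
z1 = (e * f + f * a1 + e * b1 + c1) % L
assert (z0 + z1) % L == (x * y) % L
```

Expanding the shares shows why this works: $ef + fa + eb + ab = (x-a)(y-b) + (y-b)a + (x-a)b + ab = xy$.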
\subsection{Area Under Curve}
One of the most common ways to summarize a plot-based model evaluation metric is the area under the curve (AUC), and it is applicable to various evaluation metrics. In this study, we employ the AUC to measure the area under the ROC curve and the PR curve.
\subsubsection{Area Under ROC Curve (AUROC)}
In machine learning problems with a binary outcome, the ROC curve is very effective at taking the sensitivity and the specificity of the classifier into account by plotting the false positive rate (FPR) on the x-axis and the true positive rate (TPR) on the y-axis. The AUC summarizes this plot by measuring the area between the curve and the x-axis, which is the area under the ROC curve (AUROC). Let us assume that $M$ is the number of test samples, $V \in [0,1]^M$ contains the sorted PCVs of the test samples in descending order, $T \in [0,1]^M$ and $F \in [0,1]^M$ contain the corresponding TPR and FPR values, respectively, where the threshold for entry $i$ is set to $V[i]$, and $T[0] = F[0] = 0$. In case there is no tie in $V$, the privacy-friendly AUROC computation is as follows:
\begin{equation} \label{eq:auc_wo_tie}
\small
AUROC = \sum_{i = 1}^{M} \Big( T[i] \cdot (F[i] - F[i-1]) \Big)
\end{equation}
This formula only approximates the exact AUROC when there is a tie in $V$, and the result depends on the order of the samples. As an extreme example, let $V$ have $10$ samples with the same PCV, where the first $5$ samples have label $1$ and the remaining $5$ samples have label $0$. Such a setting outputs $AUROC=1$. Conversely, if the samples with label $0$ come first and the samples with label $1$ later, we obtain $AUROC=0$. In order to define an accurate formula for the AUROC under such a tie condition, let $\xi$ be the vector of indices, in ascending order, at which the PCV of the sample differs from the PCV of the preceding sample, with $0 \leq |\xi| \leq M$ where $|\xi|$ denotes the size of the vector. Assuming that $\xi[0] = 0$, the computation of the AUROC can be done as follows:
\begin{equation} \label{eq:auc_with_tie}
\small
\begin{aligned}
AUROC = \sum_{i = 1}^{|\xi|}& \Big( T[\xi[i-1]] \cdot (F[\xi[i]] - F[\xi[i-1]]) + \dfrac{(T[\xi[i]] - T[\xi[i-1]]) \cdot (F[\xi[i]] - F[\xi[i-1]])}{2} \Big)
\end{aligned}
\end{equation}
As Equation \ref{eq:auc_with_tie} indicates, one only needs the TPR and FPR values at the points where the PCV changes to obtain the exact AUROC. We will benefit from this observation in the privacy preserving AUROC computation.
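A plaintext reference implementation of Equation \ref{eq:auc_with_tie} may make the change-point bookkeeping easier to follow. This is the clear-text computation only (the function name is ours), and it assumes both classes occur at least once:

```python
# Plaintext trapezoidal AUROC per the tie-aware formula: TPR/FPR are
# only consumed at indices where the sorted PCV changes. No privacy
# here; this documents the arithmetic the secure protocol reproduces.
def auroc_with_ties(scores, labels):
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    P = sum(labels)
    N = len(labels) - P            # assumes P > 0 and N > 0
    tp = fp = prev_tp = prev_fp = 0
    area = 0.0
    for k, i in enumerate(order):
        tp += labels[i]
        fp += 1 - labels[i]
        # a change point: the next PCV differs (or we are at the end)
        if k == len(order) - 1 or scores[order[k + 1]] != scores[i]:
            area += (prev_tp * (fp - prev_fp)
                     + (tp - prev_tp) * (fp - prev_fp) / 2)
            prev_tp, prev_fp = tp, fp
    return area / (P * N)

# the all-tied example from the text now yields 0.5 instead of 0 or 1
assert auroc_with_ties([0.5] * 10, [1] * 5 + [0] * 5) == 0.5
```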
\subsubsection{Area Under PR Curve (AUPR)}
Another model evaluation metric for machine learning problems with a binary outcome is the PR curve, which plots recall on the x-axis and precision on the y-axis, and it is generally preferred over AUROC for scenarios with class imbalance. The AUC summarizes this plot by measuring the area under the PR curve (AUPR). Since both precision and recall can change at the same time even without a tie, we measure the area by using Equation \ref{eq:auc_with_tie} with $T$ holding the precision and $F$ the recall values.
\section{Framework} \label{sec:framework}
In this section, we give the definitions of the basic operations of the framework that we use in {ppAURORA}{}. Due to the page limit, we include only the novel operations; one can refer to the Supplement for the others.
\textbf{Selecting One of Two Secret Shared Values:} Algorithm \ref{alg:ss} performs 3-party computation to select one of two secret shared values based on a secret shared bit (functionality $\mathcal{F}_{\mathsf{MUX}}$). At the beginning of the protocol, $S_0$ and $S_1$ hold ($\langle x\rangle_0$, $\langle y\rangle_0$, $\langle b\rangle_0$) and ($\langle x\rangle_1$, $\langle y\rangle_1$, $\langle b\rangle_1$), respectively. At the end of the secure computation, $S_0$ and $S_1$ hold fresh shares of $z=x -b(x-y)$. We utilize the randomized encoding of multiplication \citep{Applebaum17}. As shown in Equation \ref{eq:sscom}, we need to multiply two values owned by different parties in the computation of $\langle b\rangle_0(\langle x\rangle_1 - \langle y\rangle_1)$ and $\langle b\rangle_1(\langle x\rangle_0 - \langle y\rangle_0)$. We assume that $S_2$ is the computation party and performs these multiplications via the randomized encoding.
\begin{equation}
\small
\begin{split}
z ={}& x-b(x-y) \\
={}& \langle x\rangle_0 + \langle x\rangle_1 - \langle b\rangle_0(\langle x\rangle_0 - \langle y\rangle_0) - \langle b\rangle_1(\langle x\rangle_1 - \langle y\rangle_1) - \langle b\rangle_0(\langle x\rangle_1 - \langle y\rangle_1)- \langle b\rangle_1(\langle x\rangle_0 - \langle y\rangle_0)
\end{split}
\label{eq:sscom}
\end{equation}
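Equation \ref{eq:sscom} is a purely algebraic identity over $\mathbb{Z}_L$ and can be sanity-checked with random shares. The following is plaintext only; in the protocol, the cross terms are what $S_2$ evaluates on randomized encodings:

```python
# Sanity check of the MUX identity: expanding z = x - b(x - y) into
# locally computable terms and cross terms over additive shares in Z_L.
import random

L = 1 << 64
x, y, b = random.randrange(L), random.randrange(L), random.randrange(2)
x0 = random.randrange(L); x1 = (x - x0) % L
y0 = random.randrange(L); y1 = (y - y0) % L
b0 = random.randrange(L); b1 = (b - b0) % L

z = (x0 + x1
     - b0 * (x0 - y0) - b1 * (x1 - y1)   # locally computable terms
     - b0 * (x1 - y1) - b1 * (x0 - y0)   # cross terms, handled by S_2
     ) % L
assert z == (x - b * (x - y)) % L
```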
\textbf{Modulus Conversion:} Algorithm \ref{alg:mc} describes our 3-party protocol realizing the functionality $\mathcal{F}_{\mathsf{MC}}$ that converts shares over $\mathbb{Z}_{K}$ to fresh shares over $\mathbb{Z}_{L}$ where $L=2K$. Assuming that $S_0$ and $S_1$ have the shares $\langle x\rangle_0^K$ and $\langle x\rangle_1^K$, respectively, the first step for $S_0$ and $S_1$ is to mask their shares using the shares of a random value $r \in \mathbb{Z}_K$ sent by $S_2$. Afterwards, they reconstruct $(x+r) \in \mathbb{Z}_K$ by first computing $\langle y \rangle_i^K = \langle x \rangle_i^K + \langle r \rangle_i^K$ for $i \in \{0,1\}$ and then exchanging these values. Along with the shares of $r \in \mathbb{Z}_K$, $S_2$ also sends boolean shares indicating whether the summation of the shares of $r$ wraps, so that $S_0$ and $S_1$ can convert $r$ from the ring $\mathbb{Z}_K$ to the ring $\mathbb{Z}_L$. Once they reconstruct $y \in \mathbb{Z}_K$, $S_0$ and $S_1$ can change the ring of $y$ to $\mathbb{Z}_L$ by adding $K$ to one of the shares of $y$ if $\langle y \rangle_0^K + \langle y \rangle_1^K$ wraps. After the conversion, the value of $y \in \mathbb{Z}_L$ must still be corrected. In case $(x+r) \in \mathbb{Z}_K$ wraps, which we detect using $\mathsf{PC}$, $S_0$ or $S_1$ or both add $K$ to their shares, depending on the boolean shares of the outcome of $\mathsf{PC}$. If both add $K$, the additions cancel modulo $L$ and the value of $y \in \mathbb{Z}_L$ is unchanged. At the end, $S_i$ subtracts $r_i \in \mathbb{Z}_L$ from $y_i \in \mathbb{Z}_L$ and obtains $x_i \in \mathbb{Z}_L$ for $i \in \{0,1\}$.
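To make the wrap bookkeeping concrete, the following single-machine simulation reproduces the arithmetic of Algorithm \ref{alg:mc}. It uses a toy ring size, and the outcome of $\mathsf{PC}$ is computed in plaintext rather than obliviously:

```python
# Single-machine simulation of the modulus-conversion arithmetic:
# shares of x over Z_K become shares of x over Z_L with L = 2K.
import random

ELL = 8                 # toy ring for readability; the paper uses ell = 64
K = 1 << (ELL - 1)
L = 1 << ELL

def is_wrap(a, b, mod):
    """1 if a + b overflows Z_mod, else 0."""
    return int(a + b >= mod)

def modulus_conversion(x0, x1):
    # S_2: sample r, share it over Z_K, record whether the sharing wraps
    r = random.randrange(K)
    r0 = random.randrange(K); r1 = (r - r0) % K
    w = is_wrap(r0, r1, K)
    # S_0, S_1: mask x and reconstruct y = (x + r) mod K
    y0 = (x0 + r0) % K; y1 = (x1 + r1) % K
    # lift y to Z_L: add K to one share if the share addition wraps
    y0L = (y0 + is_wrap(y0, y1, K) * K) % L
    y1L = y1
    # n = 1 iff x + r wrapped around K; the protocol obtains boolean
    # shares of n via PC on the bits of r -- simulated here in the clear
    n = is_wrap((x0 + x1) % K, r, K)
    c = w ^ n               # the wrap-correction bit c = w XOR n
    # unmask: subtract r and charge the correction c*K to one share
    return (y0L - r0 - c * K) % L, (y1L - r1) % L

x = random.randrange(K)
x0 = random.randrange(K); x1 = (x - x0) % K
z0, z1 = modulus_conversion(x0, x1)
assert (z0 + z1) % L == x
```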
\begin{table*}[t]
\tiny
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabularx}{\linewidth}{CRC|CRC}
\toprule
\multicolumn{3}{c|}{SecureNN \cite{wagh2018securenn}} & \multicolumn{3}{c}{Our Framework} \\ \cmidrule(lr){1-3} \cmidrule(lr){4-6}
Protocol & Rounds & Communication & Protocol & Rounds & Communication\\
\midrule
$\mathsf{SelectShare}$ & $2$ & $5\ell$ & $\mathsf{MUX}$ & $2$ & $6\ell$ \\
$\mathsf{ShareConvert}$ & 4 & $4\ell\log P + 6\ell$ & $\mathsf{ModulusConversion}$ & 3 & $4\ell\log P + 6\ell$\\
$\mathsf{ComputeMSB}$ & 5 & $4\ell\log P + 13\ell$ & $\mathsf{Compare}$ & 5 & $4\ell\log P + 11\ell$ \\
$\mathsf{DIV}$ & $10\ell_D$ & $(8\ell\log P + 24\ell)\ell_D$ & $\mathsf{DIV}$ & 2 & $6\ell$ \\
\bottomrule
\end{tabularx}
\caption{Complexity comparison of distinguishing protocols of {ppAURORA}{} with SecureNN \cite{wagh2018securenn}}
\label{tab:complexity}
\end{table*}
\textbf{Comparison of Two Secret Shared Values:} Algorithm \ref{alg:cmp} gives the definition of the 3-party protocol for the functionality $\mathcal{F}_{\mathsf{CMP}}$ that compares two secret shared values $x$ and $y$, and outputs zero if $x \geq y$ and one otherwise. In Algorithm \ref{alg:cmp}, we find the value of $(x-y)\wedge2^{\ell-1}$, which is the most significant bit (MSB) of $(x-y)$. First, $S_i$ where $i \in \{0,1\}$ computes $\langle d\rangle_i^K = (\langle x\rangle_i^L-\langle y\rangle_i^L) \mod K$. $S_i$ converts the ring of $d$ from $\mathbb{Z}_K$ to $\mathbb{Z}_L$ by calling $\mathsf{MC}$. Finally, $S_0$ and $S_1$ subtract the shares of $d$ over $\mathbb{Z}_L$ from the shares of $x-y$ and map the result to $1$ if it equals $K$, and to $0$ otherwise.
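The identity behind $\mathsf{CMP}$, stripped of the secret sharing, is that reducing modulo $K$ clears the most significant bit, so $(x-y) - ((x-y) \bmod K)$ over $\mathbb{Z}_L$ is either $0$ or $K$. A plaintext sketch with a toy ring size (function names ours):

```python
# Plaintext core of CMP: extract the MSB of (x - y) over Z_L via mod K.
ELL = 8
K = 1 << (ELL - 1)
L = 1 << ELL

def msb(v):
    d = v % K              # v with its most significant bit cleared
    z = (v - d) % L        # 0 if MSB(v) = 0, K otherwise
    return z // K

def cmp_ge(x, y):
    """0 if x >= y, 1 otherwise (inputs assumed below K so x - y fits)."""
    return msb((x - y) % L)

assert cmp_ge(5, 3) == 0
assert cmp_ge(4, 4) == 0
assert cmp_ge(3, 5) == 1
```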
\textbf{Division:} Algorithm \ref{alg:div} gives the definition of the 3-party protocol realizing the functionality $\mathcal{F}_{\mathsf{DIV}}$ that computes $x/y$. $S_0$ and $S_1$ hold shares of $x$ and $y$, and $\mathsf{DIV}$ outputs fresh shares of $x/y$. $\mathsf{DIV}$ computes $x/y$ correctly only when $S_0$ and $S_1$ know an upper bound for $x$ and $y$; thus, it is not a general purpose method over $\mathbb{Z}_L$. We designed it specifically for {ppAURORA}{}, since $S_0$ and $S_1$ know the upper bound of the inputs to $\mathsf{DIV}$ in the computation of the AUC.
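The blinding behind $\mathsf{DIV}$ is easy to verify in the clear: $S_2$ only sees $a = r_1 x + r_0 y$ and $b = r_1 y$, yet $c = aF/b = xF/y + r_0F/r_1$, so subtracting the offset $r_0F/r_1$, which the proxies can compute themselves, recovers the scaled quotient. A plaintext sketch with toy values (with integer division, the result may deviate slightly in general):

```python
# Plaintext check of the DIV blinding: S_2 divides blinded values, the
# proxies remove the known offset. F is the fixed-point scaling factor.
F = 10**6
x, y = 350, 500          # toy inputs with a known upper bound U
r0, r1 = 7, 13           # common random blinding values of the proxies

a = r1 * x + r0 * y      # what S_2 receives (reconstructed from shares)
b = r1 * y
c = a * F // b           # computed by S_2 on blinded values only

z = c - r0 * F // r1     # proxies subtract the offset they both know
assert z == x * F // y   # exact here; rounding may differ slightly in general
```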
\subsection{Complexities of Our Protocols}
Several frameworks in the literature also perform 3-party secure computation. The recently published SecureNN \cite{wagh2018securenn} has shown strong performance in private CNN training. We compared our secure protocols with the corresponding SecureNN building blocks that one would need for computing the AUC; Table \ref{tab:complexity} summarizes the comparison. Our protocols have lower costs, so we expect our framework to outperform SecureNN in AUC computation. In particular, our novel protocols ($\mathsf{MUX}$, $\mathsf{MC}$, $\mathsf{CMP}$, $\mathsf{DIV}$) require at most 5 communication rounds, and the number of rounds is typically the dominant factor in the performance of secure computation protocols.
\begin{minipage}{0.49\textwidth}
\IncMargin{1em}
\begin{algorithm}[H]
\tiny
\DontPrintSemicolon
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\SetKwFunction{algo}{$\mathsf{MUX}$}
\SetKwProg{myalg}{Algorithm}{}{}
\myalg{\algo{}}{
\Input{$S_0$ and $S_1$ hold ($\langle x\rangle_0$, $\langle y\rangle_0$, $\langle b\rangle_0$) and ($\langle x\rangle_1$, $\langle y\rangle_1$, $\langle b\rangle_1$), respectively.}
\Output{$S_0$ and $S_1$ get $\langle z\rangle_0$ and $\langle z\rangle_1$, respectively, where $z=x-b(x-y)$.}
$S_0$ and $S_1$ hold four common random values $r_i$ where $i \in \{0,1,2,3\}$\;
$S_0$ computes $M_1=\langle x\rangle_0-\langle b\rangle_0(\langle x\rangle_0-\langle y\rangle_0)+r_1\langle b\rangle_0 + r_2(\langle x\rangle_0-\langle y\rangle_0)+r_2r_3$, $M_2=\langle b\rangle_0+r_0$, $M_3=\langle x\rangle_0-\langle y\rangle_0+r_3$\;
$S_0$ sends $M_2$ and $M_3$ to $S_2$\;
$S_1$ computes $M_4=\langle x\rangle_1-\langle b\rangle_1(\langle x\rangle_1-\langle y\rangle_1) + r_0(\langle x\rangle_1-\langle y\rangle_1)+r_0r_1 + r_3\langle b\rangle_1$, $M_5=(\langle x\rangle_1-\langle y\rangle_1)+r_1$, $M_6=\langle b\rangle_1+r_2$\;
$S_1$ sends $M_5$ and $M_6$ to $S_2$\;
$S_2$ computes $z=M_2M_5+M_3M_6$\;
$S_2$ divides $z$ into two shares such that $z=\langle z\rangle_0+\langle z\rangle_1$ and sends $\langle z\rangle_0$ and $\langle z\rangle_1$ to $S_0$ and $S_1$, respectively\;
$S_0$ computes $\langle z\rangle_0=M_1 - \langle z\rangle_0$\;
$S_1$ computes $\langle z\rangle_1=M_4-\langle z\rangle_1$\;
}
\caption{Select Share ($\mathsf{MUX}$)}
\label{alg:ss}
\end{algorithm}\DecMargin{1em}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\IncMargin{1em}
\begin{algorithm}[H]
\tiny
\DontPrintSemicolon
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\SetKwFunction{algo}{$\mathsf{MC}$}
\SetKwProg{myalg}{Algorithm}{}{}
\myalg{\algo{}}{
\Input{$S_0$ and $S_1$ hold $\langle x\rangle_0^K$ and $\langle x\rangle_1^K$, respectively}
\Output{$S_0$ and $S_1$ get $\langle x\rangle_0$ and $\langle x\rangle_1$, respectively}
$S_0$ and $S_1$ hold a common random bit $n^\prime$\;
$S_2$ picks a random number $r \in \mathbb{Z}_{K}$ and generates $\langle r\rangle_0^K$, $\langle r\rangle_1^K$,
$\{\langle r[j]\rangle_{0}^{p}\}_{j \in[\ell]}$ and $\{\langle r[j]\rangle_{1}^{p}\}_{j \in[\ell]}$.\;
$S_2$ computes $w = \text{isWrap}(\langle r\rangle_0^K, \langle r\rangle_1^K,K)$ and divides $w$ into two boolean shares $w_0^B$ and $w_1^B$\;
$S_2$ sends $\langle r\rangle_i^K$, $\{\langle r[j]\rangle_{i}^{p}\}_{j \in[\ell]}$ and $w_i^B$ to $S_i$, for each $i \in \{0, 1\}$\;
For each $i \in \{0, 1\}$, $S_i$ executes Steps $7$-$8$\;
$\langle y\rangle_i^K = \langle x\rangle_i^K+\langle r\rangle_i^K$\;
$S_i$ reconstructs $y$ by exchanging shares with $S_{1-i}$\;
$n_i^B = \mathsf{PC}(\{\langle r[j]\rangle_{i}^{p}\}_{j \in[\ell]},y,n^\prime)$\;
$S_0$ computes $n_0^B = n_0^B \oplus n^\prime$\;
For each $i \in \{0, 1\}$, $S_i$ computes $c_i^B = w_i^B\oplus n_i^B$\;
$S_0$ computes $\langle y\rangle_0 = \langle y\rangle_0^K + \text{isWrap}(\langle y\rangle_0^K, \langle y\rangle_1^K, K) \cdot K$\;
$S_1$ sets $\langle y\rangle_1 = \langle y\rangle_1^K$\;
For each $i \in \{0, 1\}$, $S_i$ computes
$\langle x\rangle_i = \langle y\rangle_i - (\langle r\rangle_i^K + c_i^B \cdot K)$\;
}
\caption{Modulus Conversion ($\mathsf{MC}$)}
\label{alg:mc}
\end{algorithm}\DecMargin{1em}
\end{minipage}
\section{{ppAURORA}{} Computation}
In this section, we give the description of our protocol for {ppAURORA}{} computation. In {ppAURORA}{}, we have data owners that outsource their PCVs and the ground truth labels in secret shared form and three non-colluding servers that perform 3-party computation on secret shared PCVs to compute AUC. The protocol starts with outsourcing by the data sources. Afterward, the servers perform the desired calculation privately. Finally, they send the shares of the result back to the data sources. The communication between all parties is performed over a secure channel (e.g., TLS).
\begin{minipage}{0.49\textwidth}
\IncMargin{1em}
\begin{algorithm}[H]
\tiny
\DontPrintSemicolon
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\SetKwFunction{algo}{$\mathsf{CMP}$}
\SetKwProg{myalg}{Algorithm}{}{}
\myalg{\algo{}}{
\Input{$S_0$ and $S_1$ hold ($\langle x\rangle_0$, $\langle y\rangle_0$) and ($\langle x\rangle_1$, $\langle y\rangle_1$), respectively}
\Output{$S_0$ and $S_1$ get $\langle z\rangle_0$ and $\langle z\rangle_1$, respectively, where $z$ is equal to zero if $x \geq y$ and $K$ otherwise}
$S_0$ and $S_1$ hold a common random bit $f$\;
For each $i \in \{0, 1\}$, $S_i$ executes Steps $4$-$9$.\;
$\langle d\rangle_i^K = (\langle x\rangle_i-\langle y\rangle_i) \mod K$\;
$\langle d\rangle_i=\mathsf{MC}(\langle d\rangle_i^K)$.\;
$\langle z\rangle_i = \langle x\rangle_i - \langle y\rangle_i - \langle d\rangle_i$.\;
$\langle a[0]\rangle_i = i \cdot f \cdot K-\langle z\rangle_i$\;
$\langle a[1]\rangle_i = i(1-f)K-\langle z\rangle_i$\;
$S_i$ sends $\langle a\rangle_i$ to $S_2$\;
$S_2$ reconstructs $a[j]$ where $j \in \{0, 1\}$ and computes $a[j]=a[j]/K$\;
$S_2$ creates two fresh shares of $a[j]$ where $j \in \{0, 1\}$ and sends them to $S_0$ and $S_1$\;
For each $i \in \{0, 1\}$, $S_i$ executes Step $13$\;
$\langle z\rangle_i = \langle a[f]\rangle_i$\;
}
\captionsetup{width=\linewidth}
\caption{Comparison of two secret shared values ($\mathsf{CMP}$)}
\label{alg:cmp}
\end{algorithm}\DecMargin{1em}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\IncMargin{1em}
\begin{algorithm}[H]
\tiny
\DontPrintSemicolon
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\SetKwFunction{algo}{$\mathsf{DIV}$}
\SetKwProg{myalg}{Algorithm}{}{}
\myalg{\algo{}}{
\Input{$S_0$ and $S_1$ hold ($\langle x\rangle_0$, $\langle y\rangle_0$) and ($\langle x\rangle_1$, $\langle y\rangle_1$), respectively.}
\Output{$S_0$ and $S_1$ get $\langle z\rangle_0$ and $\langle z\rangle_1$, respectively, where $z=x/y$.}
$S_0$, $S_1$ and $S_2$ know the common scaling factor $F$\;
$S_0$ and $S_1$ know the upper limit of $x$ and $y$, which is denoted by $U$\;
$S_0$ and $S_1$ hold two common random values $r_0$ and $r_1$, where $r_0 < \lfloor L/2U \rfloor$ and $r_1 < \lfloor L/2U \rfloor$\;
For each $i \in \{0, 1\}$, $S_i$ executes Steps $4$-$5$.\;
$S_i$ computes $\langle a\rangle_i=r_1\langle x\rangle_i + r_0\langle y\rangle_i$ and $\langle b\rangle_i=r_1\langle y\rangle_i$\;
$S_i$ sends $\langle a\rangle_i$ and $\langle b\rangle_i$ to $S_2$\;
$S_2$ reconstructs $a$ and $b$ and computes $c=aF/b$\;
$S_2$ creates two shares of $c$ denoted by $\langle c\rangle_0$ and $\langle c\rangle_1$, and sends them to $S_0$ and $S_1$, respectively\;
$S_0$ computes $\langle z\rangle_0=\langle c\rangle_0$\;
$S_1$ computes $\langle z\rangle_1=\langle c\rangle_1-r_0F/r_1$\;
}
\caption{Division ($\mathsf{DIV}$)}
\label{alg:div}
\end{algorithm}\DecMargin{1em}
\end{minipage}
\textbf{Outsourcing:} At the start of {ppAURORA}{}, each data owner $H_i$ has a list of PCVs and corresponding ground truth labels for $i \in \{1,\ldots,n\}$. Then, each data owner $H_i$ sorts its whole list $T_i$ according to PCVs in descending order and divides it into two additive shares $T_{i_0}$ and $T_{i_1}$, and sends $T_{i_0}$ and $T_{i_1}$ to $S_0$ and $S_1$, respectively. We refer to $S_0$ and $S_1$ as \textit{proxies}.
\iffalse
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.6\linewidth]{img/ppauc_arch.pdf}
\end{center}
\vspace*{-0.3cm}
\caption{General system architecture of our solution. Hospitals $H_1,\ldots,H_N$, as an example of data owners, send their test data to $S_0$ and $S_1$. $S_0$, $S_1$ and $S_2$ compute the AUC on all test data via secure 3-party computation and send the result back to the hospitals.}
\label{fig:sysover}
\vspace*{-0.3cm}
\end{figure}
\fi
\textbf{Sorting:} After the outsourcing phase, $S_0$ and $S_1$ obtain the shares of the individually sorted lists of PCVs of the data owners. Afterwards, the proxies need to perform a merging operation on each pair of individually sorted lists and continue with the merged lists until they obtain the global sorted list of PCVs. This can be considered as the leaves of a binary tree merging into the root node, which is, in our case, the global sorted list. Due to the high complexity of privacy preserving sorting, we decided to make the sorting parametric to adjust the trade-off between privacy and practicality. Let $\delta = 2a + 1$ be the parameter that determines the number of PCVs added to the global sorted list in each iteration for $a \in \mathbb{N}$, and let $T_{i_k}$ and $T_{j_k}$ be the shares of two individually sorted lists of PCVs held by $S_k$ for $k \in \{0,1\}$, with $|T_{i}| \geq |T_{j}|$ where $|\cdot|$ denotes the list size. At the beginning, the proxies privately compare the lists elementwise. They utilize the results of the comparison in $\mathsf{MUX}$s to privately exchange the shares of PCVs in each pair if the PCV in $T_j$ is larger than the PCV in $T_i$. In the first $\mathsf{MUX}$, they input the share in $T_{i_k}$ first and then the share in $T_{j_k}$, along with the share of the result of the comparison, to select the larger of the PCVs and move it to $T_{i_k}$. In the second $\mathsf{MUX}$, they reverse the order to select the smaller of the PCVs and move it to $T_{j_k}$. We call this stage \textit{shuffling}. Then, they move the top PCV of $T_{i_k}$ to the merged list of PCVs. If $\delta \neq 1$, they continue comparing the top PCVs of the lists and moving the larger of them to the merged list. Once they have moved $\delta$ PCVs to the merged list, they shuffle the lists again. The proxies repeat this shuffle-and-move cycle until the PCVs in $T_{i_k}$ are exhausted.
The purpose of the shuffling is to increase the number of candidates for a specific position and, naturally, to lower the chance of matching a PCV in the individually sorted lists to a PCV in the merged list. The highest possible chance of such a matching is $50\%$, which results in a very low chance of guessing the matching of all PCVs in the list. Regarding the effect of $\delta$ on the privacy, it is important to note that $\delta$ needs to be an odd number to make sure that shuffling always increases the number of candidates; an even value of $\delta$ may cause ineffective shuffling during the sorting. Furthermore, $\delta = 1$ provides the utmost privacy, meaning that the chance of guessing the matching of all PCVs is $1$ over the number of all possible mergings of the two individually sorted lists. However, the execution time of sorting with $\delta = 1$ can be relatively high. For $\delta \neq 1$, the execution time can be lower, but the number of possible matchings of PCVs in the individually sorted lists to the merged list decreases as $\delta$ increases. As a guideline, one can choose $\delta$ based on how much privacy loss any matching could cause for the specific task. In case $\delta \neq 1$ and $|T_{j_k}| = 1$ at some point in the sorting, the sorting continues as if it had just started with $\delta = 1$, to secure the worst case scenario for guessing the matching. More details of the sorting phase are in the Supplement.
\subsection{Secure Computation of AUROC}
Once $S_0$ and $S_1$ obtain the global sorted list of PCVs, they calculate the AUROC based on this list by employing one of the two versions of the AUROC computation, depending on whether ties exist in the list.
\begin{minipage}{0.49\textwidth}
\IncMargin{1em}
\begin{algorithm}[H]
\tiny
\DontPrintSemicolon
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{$\langle T\rangle_i = (\{\langle con_1\rangle_i,\langle label_1\rangle_i\},\ldots,\{\langle con_M\rangle_i,\langle label_M\rangle_i\})$, $\langle T\rangle_i$ is a share of the global sorted list of PCVs, and labels}
For each $i \in \{0, 1\}$, $S_i$ executes Steps $2$-$11$\;
$\langle TP\rangle_i \gets 0$, $\langle P\rangle_i \gets 0$, $\langle pFP\rangle_i \gets 0$, $\langle N\rangle_i \gets 0$\;
\ForEach{item $\langle t\rangle_i \in \langle T\rangle_i$}{%
$\langle TP\rangle_i \gets \langle TP\rangle_i + \langle t.label\rangle_i$\;
$\langle P\rangle_i \gets \langle P\rangle_i + i$\;
$\langle FP\rangle_i \gets \langle P\rangle_i - \langle TP\rangle_i$\;
$\langle A\rangle_i \gets \mathsf{MUL}(\langle TP\rangle_i,\langle FP\rangle_i-\langle pFP\rangle_i)$\;
$\langle N\rangle_i \gets \langle N\rangle_i+\langle A\rangle_i$\;
$\langle pFP\rangle_i \gets \langle FP\rangle_i$\;
}
$\langle D\rangle_i \gets \mathsf{MUL}(\langle TP\rangle_i,\langle FP\rangle_i)$\;
$\langle ROC\rangle_i \gets \mathsf{DIV}(\langle N\rangle_i,\langle D\rangle_i)$\;
\captionsetup{width=\linewidth}
\caption{Secure AUROC computation without ties}
\label{alg:auc1}
\end{algorithm}\DecMargin{1em}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\IncMargin{1em}
\begin{algorithm}[H]
\tiny
\DontPrintSemicolon
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{$\langle T\rangle_i = (\{\langle con_1\rangle_i,\langle label_1\rangle_i\},\ldots,\{\langle con_M\rangle_i,\langle label_M\rangle_i\})$, $\langle T\rangle_i$ is a share of the global sorted list of PCVs, and labels}
For each $i \in \{0, 1\}$, $S_i$ executes Steps $2$-$18$\;
$\langle TP\rangle_i \gets 0$, $\langle P\rangle_i \gets 0$, $\langle pFP\rangle_i \gets 0$, $\langle pTP\rangle_i \gets 0$, $\langle N_1\rangle_i \gets 0$, $\langle N_2\rangle_i \gets 0$\;
\ForEach{item $\langle t\rangle_i \in \langle T\rangle_i$}{%
$\langle TP\rangle_i \gets \langle TP\rangle_i + \langle t.label\rangle_i$\;
$\langle P\rangle_i \gets \langle P\rangle_i + i$\;
$\langle FP\rangle_i \gets \langle P\rangle_i - \langle TP\rangle_i$\;
$\langle A\rangle_i \gets \mathsf{MUL}(\langle pTP\rangle_i,\langle FP\rangle_i-\langle pFP\rangle_i)$\;
$\langle A\rangle_i \gets \mathsf{MUL}(\langle A\rangle_i,\langle t.con\rangle_i)$\;
$\langle N_1\rangle_i \gets \langle N_1\rangle_i+\langle A\rangle_i$\;
$\langle A\rangle_i \gets \mathsf{MUL}(\langle TP\rangle_i - \langle pTP\rangle_i,\langle FP\rangle_i-\langle pFP\rangle_i)$\;
$\langle A\rangle_i \gets \mathsf{MUL}(\langle A\rangle_i,\langle t.con\rangle_i)$\;
$\langle N_2\rangle_i \gets \langle N_2\rangle_i+\langle A\rangle_i$\;
$\langle pFP\rangle_i \gets \mathsf{MUX}(\langle pFP\rangle_i, \langle FP\rangle_i, \langle t.con\rangle_i)$\;
$\langle pTP\rangle_i \gets \mathsf{MUX}(\langle pTP\rangle_i, \langle TP\rangle_i, \langle t.con\rangle_i)$\;
}
$\langle N\rangle_i \gets 2\cdot\langle N_1\rangle_i+\langle N_2\rangle_i$\;
$\langle D\rangle_i \gets 2\cdot\mathsf{MUL}(\langle TP\rangle_i,\langle FP\rangle_i)$\;
$\langle ROC\rangle_i \gets \mathsf{DIV}(\langle N\rangle_i,\langle D\rangle_i)$\;
\caption{Secure AUROC computation with tie}
\label{alg:auc2}
\end{algorithm}\DecMargin{1em}
\end{minipage}
\subsubsection{Secure AUROC Computation without Ties}
\label{sec:aucwotie}
In Algorithm \ref{alg:auc1}, we compute the AUROC as shown in Equation \ref{eq:auc_wo_tie} because we assume that there is no tie in the sorted list of PCVs. At the end of the secure computation, the shares of the numerator $N$ and the denominator $D$ are computed, where $AUROC=N/D$. $S_i$ for $i \in \{0,1\}$ knows the number of test samples $M$. Thus $S_i$ can determine the upper bound for $N$ and $D$, and $\mathsf{DIV}$ can be used to calculate $AUROC=N/D$. Thanks to the high numeric precision of the predictions, most machine learning algorithms yield distinct PCVs for the samples. Therefore, this approach to computing the AUROC is applicable to most machine learning tasks. However, in case of a tie between samples from the two classes in the PCVs, it is not guaranteed to give the exact AUROC; depending on the order of the samples, it approximates the score. To obtain a more accurate AUROC, we propose another version with a slightly higher communication cost in the next section.
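The integer bookkeeping of Algorithm \ref{alg:auc1} can be mirrored in the clear: the servers never form TPR/FPR fractions, but accumulate the numerator $N = \sum TP \cdot (FP - pFP)$ and the denominator $D = TP \cdot FP$, dividing once at the end. A plaintext sketch (function name ours):

```python
# Plaintext mirror of the no-tie AUROC accumulation: one pass over the
# 0/1 labels ordered by descending PCV, integer arithmetic throughout.
def auroc_no_ties(labels_sorted):
    tp = fp = pfp = num = 0
    for lab in labels_sorted:
        tp += lab                    # running true positives
        fp += 1 - lab                # running false positives
        num += tp * (fp - pfp)       # N += TP * (FP - pFP)
        pfp = fp
    den = tp * fp                    # D = TP_total * FP_total
    return num / den                 # the secure protocol calls DIV here

assert auroc_no_ties([1, 0, 1, 0]) == 0.75
assert auroc_no_ties([1, 1, 0, 0]) == 1.0
```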
\subsubsection{Secure AUROC Computation with Ties}
\label{sec:auctie}
To detect ties in the list of PCVs, $S_0$ and $S_1$ compute the difference between each PCV and its following PCV. $S_0$ computes the modular additive inverse of its shares. The proxies apply a common random permutation to the bits of each share in the list to prevent $S_2$ from learning the non-zero relative differences. They also permute the list of shares using a common random permutation to shuffle the order of the real test samples. Then, they send the list of shares to $S_2$. $S_2$ XORs the two shares and maps the result to one if it is greater than zero and to zero otherwise. Thus, the proxies privately map PCVs to zero if they equal the previous PCV and to one otherwise. This phase is depicted in Algorithm \ref{alg:ties}. In Algorithm \ref{alg:auc2}, $S_0$ and $S_1$ use these mappings to take only the PCVs which differ from their subsequent PCV into account in the computation of the AUROC based on Equation \ref{eq:auc_with_tie}. In Algorithm \ref{alg:auc2}, the $\mathsf{DIV}$ method can be used because the upper limit for the numerator and the denominator is known, as in the AUROC computation described in Section \ref{sec:aucwotie}.
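The equality-to-zero test at the heart of Algorithm \ref{alg:ties} rests on the fact that the difference $d$ is zero exactly when $S_1$'s share equals the additive inverse of $S_0$'s share, and that a common XOR mask and a common bit permutation preserve this while scrambling everything else $S_2$ sees. A plaintext sketch with a toy ring size (helper names ours):

```python
# Plaintext core of the tie test: S_2 learns only whether the XOR of the
# two blinded messages is zero, i.e., whether the PCV difference d is 0.
import random

ELL = 8
L = 1 << ELL

def permute_bits(v, perm):
    """Apply a bit permutation: bit perm[k] of v moves to position k."""
    return sum(((v >> src) & 1) << dst for dst, src in enumerate(perm))

def blinded_msgs(d, mask, perm):
    d0 = random.randrange(L); d1 = (d - d0) % L     # shares of d over Z_L
    m0 = permute_bits(((L - d0) % L) ^ mask, perm)  # S_0 negates its share
    m1 = permute_bits(d1 ^ mask, perm)
    return m0, m1

mask = random.randrange(L)                     # common random value R[j]
perm = list(range(ELL)); random.shuffle(perm)  # common permutation sigma_j

m0, m1 = blinded_msgs(0, mask, perm)           # a tie: difference is zero
assert m0 ^ m1 == 0
m0, m1 = blinded_msgs(37, mask, perm)          # no tie: XOR is non-zero
assert m0 ^ m1 != 0
```

Since a bitwise permutation distributes over XOR, $\sigma(a) \oplus \sigma(b) = \sigma(a \oplus b)$, so the mask and permutation do not disturb the zero test.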
\subsection{Secure AUPR Computation}
As in the AUROC computation described in Section \ref{sec:auctie}, $S_0$ and $S_1$ map a PCV in the global sorted list to zero if it equals the previous PCV and to one otherwise by running Algorithm \ref{alg:ties}. Then, we use Equation \ref{eq:auc_with_tie} to calculate the AUPR as shown in Algorithm \ref{alg:prc}. In the AUPR calculation, the denominator of each precision value is different; therefore, we need to perform a division in each iteration. Since the proxy servers can determine the upper bound for the numerators and denominators, we can use the $\mathsf{DIV}$ operation to perform these divisions.
\begin{minipage}{0.49\textwidth}
\IncMargin{1em}
\begin{algorithm}[H]
\tiny
\DontPrintSemicolon
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{$\langle C\rangle_i = (\langle con_1\rangle_i,...,$ $\langle con_M\rangle_i)$, $\langle C\rangle_i$ is a share of the global sorted list of PCVs, $M$ is the number of PCVs}
$S_0$ and $S_1$ hold a common random permutation $\pi$ for $M$ items\;
$S_0$ and $S_1$ hold a list of common random values $R$\;
$S_0$ and $S_1$ hold a list $\sigma$ of common random permutations for $\ell$ items\;
For each $i \in \{0, 1\}$, $S_i$ executes Steps $5$-$13$\;
\For{$j\leftarrow 1$ \KwTo $M-1$}{
$\langle C[j]\rangle_{i} \gets (\langle C[j]\rangle_{i} - \langle C[j+1]\rangle_{i})$\;
\uIf{$i = 0$}{
$\langle C[j]\rangle_{i} = L - \langle C[j]\rangle_{i}$\;
}
$\langle C[j]\rangle_{i} = \langle C[j]\rangle_{i} \oplus R[j]$\;
$\langle C[j]\rangle_{i} = \sigma_j(\langle C[j]\rangle_{i})$\;
}
$\langle D\rangle_{i}=\pi(\langle C\rangle_{i})$\;
Insert an arbitrary number of dummy zero and non-zero values at randomly chosen locations in $\langle D\rangle_{i}$\;
$S_i$ sends $\langle D\rangle_{i}$ to $S_{2}$\;
$S_2$ reconstructs $D$ by computing $\langle D\rangle_{0}\oplus\langle D\rangle_{1}$\;
\ForEach{item $d \in D$}{%
\If{$d>0$}{
$d \gets 1$\;
}
}
$S_2$ creates new shares of $D$, denoted by $\langle D\rangle_0$ and $\langle D\rangle_1$, and sends them to $S_0$ and $S_1$, respectively.\;
For each $i \in \{0, 1\}$, $S_i$ executes Steps $18$-$21$\;
Remove dummy zero and non-zero values from $\langle D\rangle_i$\;
$\langle C\rangle_i=\pi^{-1}(\langle D\rangle_i)$\;
\For{$j\leftarrow 1$ \KwTo $M-1$}{
$\langle T[j].con\rangle_i \gets \langle C[j]\rangle_i$\;
}
$\langle T[M].con\rangle_i \gets i$\;
\caption{Secure detection of ties}
\label{alg:ties}
\end{algorithm}\DecMargin{1em}
\end{minipage}
\hfill
\IncMargin{1em}
\begin{minipage}{0.49\linewidth}
\begin{algorithm}[H]
\tiny
\DontPrintSemicolon
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{$\langle T\rangle_i = (\{\langle con_1\rangle_i,\langle label_1\rangle_i\},...,$ $\{\langle con_M\rangle_i,\langle label_M\rangle_i\} )$, $\langle T\rangle_i$ is a share of the global sorted list of PCVs, and labels}
$S_0$ and $S_1$ hold a common random permutation $\pi$ for $M$ items\;
For each $i \in \{0, 1\}$, $S_i$ executes Steps $3$-$24$\;
$\langle TP[0]\rangle_i \gets 0$, $\langle RC[0]\rangle_i \gets 0$, $\langle pPC\rangle_i \gets i$, $\langle pRC\rangle_i \gets 0$, $\langle N_1\rangle_i \gets 0$, $\langle N_2\rangle_i \gets 0$\;
\For{$j\leftarrow 1$ \KwTo $M$}{
$\langle TP[j]\rangle_i \gets \langle TP[j-1]\rangle_i + \langle T[j].label\rangle_i$\;
$\langle RC[j]\rangle_i \gets \langle RC[j-1]\rangle_i + i$\;
}
$\langle T\_TP\rangle_{i}=\pi(\langle TP\rangle_{i})$\;
$\langle T\_RC\rangle_{i}=\pi(\langle RC\rangle_{i})$\;
\For{$j\leftarrow 1$ \KwTo $M$}{
$\langle T\_PC[j]\rangle_i \gets \mathsf{DIV}(\langle T\_TP[j]\rangle_i, \langle T\_RC[j]\rangle_i)$\;
}
$\langle PC\rangle_{i}=\pi^{-1}(\langle T\_PC\rangle_{i})$\;
\For{$j\leftarrow 1$ \KwTo $M$}{
$\langle A\rangle_i \gets \mathsf{MUL}(\langle pPC\rangle_i,\langle RC[j]\rangle_i-\langle pRC\rangle_i)$\;
$\langle A\rangle_i \gets \mathsf{MUL}(\langle A\rangle_i,\langle T[j].con\rangle_i)$\;
$\langle N_1\rangle_i \gets \langle N_1\rangle_i+\langle A\rangle_i$\;
$\langle A\rangle_i \gets \mathsf{MUL}(\langle RC[j]\rangle_i - \langle pRC\rangle_i,\langle PC[j]\rangle_i-\langle pPC\rangle_i)$\;
$\langle A\rangle_i \gets \mathsf{MUL}(\langle A\rangle_i,\langle T[j].con\rangle_i)$\;
$\langle N_2\rangle_i \gets \langle N_2\rangle_i+\langle A\rangle_i$\;
$\langle pPC\rangle_i \gets \mathsf{MUX}(\langle pPC\rangle_i, \langle PC[j]\rangle_i, \langle T[j].con\rangle_i)$\;
$\langle pRC\rangle_i \gets \mathsf{MUX}(\langle pRC\rangle_i, \langle RC[j]\rangle_i, \langle T[j].con\rangle_i)$\;
}
$\langle N\rangle_i \gets 2\cdot\langle N_1\rangle_i+\langle N_2\rangle_i$\;
$\langle D\rangle_i \gets 2\cdot \langle TP[M]\rangle_i$\;
$\langle PRC\rangle_i \gets \mathsf{DIV}(\langle N\rangle_i,\langle D\rangle_i)$\;
\caption{Secure AUPR computation with tie}
\label{alg:prc}
\end{algorithm}\DecMargin{1em}
\end{minipage}
\section{Security Analysis}
We provide all semi-honest simulation-based security proofs for the $\mathsf{MUX}$, $\mathsf{MC}$, $\mathsf{CMP}$ and $\mathsf{DIV}$ functions defined in our framework, as well as for the computations of {ppAURORA}{}, in the Supplement.
\section{Dataset}
\textbf{DREAM Challenge Dataset:} To demonstrate the correctness of {ppAURORA}{} and its applicability to a real-life problem, we utilized data from the first subchallenge of the Acute Myeloid Leukemia (AML) outcome prediction challenge from the DREAM Challenges \citep{noren2016crowdsourcing}. We chose the submission of the team with the lowest score on the leaderboard whose files were accessible, which is the team {\it Snail}. The training dataset has $191$ samples, among which $136$ patients have complete remission. The ground truth labels of the training samples are available at \href{https://www.synapse.org/#!Synapse:syn2501858}{https://www.synapse.org/\#!Synapse:syn2501858} and the submission of team Snail can be found at \href{https://www.synapse.org/#!Synapse:syn2700200}{https://www.synapse.org/\#!Synapse:syn2700200}.
\textbf{Synthetic Dataset:} Since we aimed to analyze the scalability of {ppAURORA}{} at different settings, we generated a synthetic dataset with no restriction other than having the PCVs between $0$ and $1$.
\begin{figure*}[h!t]
\centering
\captionsetup[subfigure]{justification=centering}
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=\linewidth]{img/neurips_original_num_samples.pdf}
\caption{}
\label{fig:n_samples}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=\linewidth]{img/neurips_original_num_data_sources.pdf}
\caption{}
\label{fig:n_parties}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=\linewidth]{img/neurips_original_delta.pdf}
\caption{}
\label{fig:delta}
\end{subfigure}
\caption{\textbf{(a)} The execution time of various settings to evaluate the scalability of {ppAURORA}{} with respect to the number of samples for a fixed number of parties and \textbf{(b)} with respect to the number of parties for a fixed number of samples in each party. \textbf{(c)} The effect of $\delta$ on the execution time.}
\label{fig:exe_time}
\end{figure*}
\section{Results} \label{sec:results}
\textbf{Experimental Setup:} We conducted our experiments on Amazon EC2 t2.xlarge instances. For the LAN setting, we utilized only instances in the Frankfurt region. For the WAN setting, we additionally selected one instance from each of London and Paris.
\textbf{Experiments on DREAM Challenge Dataset:} We conducted these experiments in the LAN setting. We assume that the result needs to be precise in the first three decimal places; thus, we set the precision of the division operation to cover up to the fourth decimal place. To assess correctness, we computed the AUROC with {ppAURORA}{} and compared it to the result obtained without privacy. Both gave $AUROC = 0.693$. To check the correctness of the no-tie version of {ppAURORA}{}, we randomly kept one of the tied samples in each tie and thus generated a subset of the samples with no ties. On this subset, the no-tie version of {ppAURORA}{} yielded the same AUROC as the non-private computation. Additionally, we computed the AUPR with {ppAURORA}{} and verified that the private and non-private computations both gave $AUPR=0.844$. These results indicate that, without sacrificing the privacy of the data, {ppAURORA}{} computes exactly the same AUC as one could obtain on the pooled test samples.
To justify the collaborative evaluation, we performed $1000$ repetitions of the non-private AUROC computation with $N \in \{5,10,20,40,80,160\}$ samples randomly drawn from the whole dataset. The AUROC becomes increasingly stable as the number of samples grows.
\textbf{Experiments on Synthetic Dataset:} We evaluated the no-tie and with-tie versions of AUROC as well as AUPR of {ppAURORA}{} with $\delta = 1$ in settings where the number of data sources is $16$ and the number of samples per source is $N \in \{64, 125, 250, 500, 1000\}$. The results showed that {ppAURORA}{} scales almost quadratically in terms of both the communication cost among all parties and the execution time; Figure \ref{fig:n_samples} displays the results. We also analyzed the performance of all computations of {ppAURORA}{} on a varying number of data sources. We fixed $\delta = 1$ and the number of samples in each data source to $1000$, and experimented with $D$ data sources where $D \in \{2, 4, 8, 16\}$. Similar to the previous analysis, {ppAURORA}{} scales approximately quadratically with the number of data sources; Figure \ref{fig:n_parties} summarizes the results. We further analyzed the effect of $\delta \in \{1,3,5,11,25,51,101\}$ by fixing the number of data sources to $8$ and the number of samples in each data source to $1000$. As shown in Figure \ref{fig:delta}, the execution time decreases roughly logarithmically as $\delta$ increases. In all analyses, the execution times of the different computations are close to each other because sorting is the dominating factor. Additionally, our analysis showed that the LAN setting is $12$ to $14$ times faster than WAN on average due to the high round trip time of WAN, which is approximately $13.2$ ms. Even with such a scaling factor, however, {ppAURORA}{} can be deployed in real-life scenarios if the alternative is a more time-consuming approval process required to gather all data in one place, while it additionally protects the privacy of the data. We provide the detailed results as tables in the Supplement.
\section{Conclusion}
In this work, we presented a novel secure 3-party computation framework and its application, {ppAURORA}{}, to privately compute the AUC of the ROC and PR curves even when there are ties in the PCVs. We proposed four novel protocols in the framework: $\mathsf{MUX}$ to select one of two secret shared values, $\mathsf{MC}$ to convert the ring of a secret shared value from $2^{\ell-1}$ to $2^{\ell}$, $\mathsf{CMP}$ to compare two secret shared values, and $\mathsf{DIV}$ to divide two secret shared values. They have low round and communication complexities. The framework and its application {ppAURORA}{} are secure against passive adversaries in the honest majority setting. We implemented {ppAURORA}{} in C++ and demonstrated that it computes both correctly and privately, and that it scales quadratically with the number of parties and the number of samples. To the best of our knowledge, {ppAURORA}{} is the first method that enables computing the exact AUC (AUROC and AUPR) privately and securely.
\section{Supplementary Material}
\subsection{Framework}
In this section, we give the definitions of basic operations that we utilized in addition to the novel operations given in the main paper.
\subsubsection{Multiplication}
In the two-party setting, multiplication is performed using a multiplication triple \citep{DBLP:conf/crypto/Beaver91a}, which is generated via homomorphic encryption or oblivious transfer. In our 3-party setting, $S_2$ generates the multiplication triple and sends its shares to $S_0$ and $S_1$. $S_0$ and $S_1$ hold $(\langle x\rangle_0,\langle y\rangle_0)$ and $(\langle x\rangle_1,\langle y\rangle_1)$, respectively, and secure multiplication outputs fresh shares of $xy$ (functionality $\mathcal{F}_{\mathsf{MUL}}$).
\IncMargin{1em}
\begin{algorithm}[h!tb]
\DontPrintSemicolon
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\SetKwFunction{algo}{$\mathsf{MUL}$}
\SetKwProg{myalg}{Algorithm}{}{}
\myalg{\algo{}}{
\Input{$S_0$ and $S_1$ hold ($\langle x\rangle_0$, $\langle y\rangle_0$) and ($\langle x\rangle_1$, $\langle y\rangle_1$), respectively.}
\Output{$S_0$ and $S_1$ get $\langle z\rangle_0$ and $\langle z\rangle_1$, respectively, where $z=x\cdot y$.}
$S_2$ picks three random numbers $a$, $b$ and $c$ where $c = a\cdot b$\; $S_2$ generates $\langle a\rangle_i$, $\langle b\rangle_i$ and $\langle c\rangle_i$, and sends them to $S_i$ for $i \in \{0,1\}$\;
For each $i \in \{0, 1\}$, $S_i$ executes Steps $5$-$7$\;
$S_i$ computes $\langle e\rangle_i = \langle x\rangle_i - \langle a\rangle_i$ and $\langle f\rangle_i = \langle y\rangle_i - \langle b\rangle_i$\;
$S_i$ reconstructs $e$ and $f$ by exchanging the shares with $S_{1-i}$\;
$S_i$ computes $\langle z\rangle_i = -i\cdot e \cdot f + f \cdot \langle x\rangle_i + e \cdot \langle y\rangle_i + \langle c\rangle_i$\;
}
\caption{Multiplication}
\label{alg:mul}
\end{algorithm}\DecMargin{1em}
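The triple-based multiplication above can be emulated in the clear to check the reconstruction identity $f x + e y + c - e f = xy$ from the correctness proof; the ring size and helper names below are illustrative:

```python
import random

L = 2 ** 32  # illustrative ring size

def share(x):
    """Additive 2-out-of-2 sharing over Z_L."""
    r = random.randrange(L)
    return r, (x - r) % L

def beaver_mul(x0, x1, y0, y1):
    """Emulates MUL: S_2's triple (a, b, c) with c = a*b, the masked
    exchange of e = x - a and f = y - b, then local share computation."""
    a, b = random.randrange(L), random.randrange(L)
    a0, a1 = share(a)
    b0, b1 = share(b)
    c0, c1 = share((a * b) % L)
    e = (x0 - a0 + x1 - a1) % L   # reconstructed by both proxies
    f = (y0 - b0 + y1 - b1) % L
    z0 = (f * x0 + e * y0 + c0) % L           # i = 0
    z1 = (-e * f + f * x1 + e * y1 + c1) % L  # i = 1 subtracts e*f
    return z0, z1

x0, x1 = share(12345)
y0, y1 = share(678)
z0, z1 = beaver_mul(x0, x1, y0, y1)
assert (z0 + z1) % L == 12345 * 678
```

Since $e$ and $f$ are masked by the uniformly random $a$ and $b$, revealing them leaks nothing about $x$ and $y$, matching the simulation argument in Lemma \ref{lemma:mul}.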
\subsubsection{Comparison of a Secret Shared Value and a Plain Value}
In this function, a value $r$ in the ring $\mathbb{Z}_{K}$, whose bits are secret shared between $S_0$ and $S_1$ over $\mathbb{Z}_p$ with $p=67$, is compared with a common value $y$. At the end of the secure computation, $S_2$ learns a bit $n^\prime=n \oplus (r>y)$. We take the definition of this functionality $\mathcal{F}_{\mathsf{PC}}$ from \cite{wagh2018securenn}, where it is called Private Compare ($\mathsf{PC}$). The only change we made is that $S_2$ sends fresh boolean shares of $n^\prime$ to $S_0$ and $S_1$. $\mathsf{PC}$ is described in Algorithm \ref{alg:pc} in the Supplement.
\IncMargin{1em}
\begin{algorithm}[h!tb]
\DontPrintSemicolon
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\SetKwFunction{algo}{$\mathsf{PC}$}
\SetKwProg{myalg}{Algorithm}{}{}
\myalg{\algo{}}{
\Input{$S_{0}, S_{1}$ hold $\left\{\langle r[j]\rangle_{0}^{p}\right\}_{j \in[\ell]}$ and $\{\langle r[j]\rangle_{1}^{p}\}_{j \in[\ell]}$, respectively, a common input $y$ and a common random bit $n$.}
\Output{$S_0$ and $S_1$ get $n^\prime=n \oplus(r>y)$, $\langle n^\prime\rangle_0^B$ and $\langle n^\prime\rangle_1^B$, respectively.}
$S_{0}, S_{1}$ hold $\ell$ common random values $s_{j} \in \mathbb{Z}_{p}^{*}$ for all $j \in[\ell]$ and a random permutation $\pi$ for $\ell$ elements. $S_{0}$ and $S_{1}$ additionally hold $\ell$ common random values $u_{j} \in \mathbb{Z}_{p}^{*}$.
Let $t=y+1 \bmod 2^{\ell}$\;
$S_{i}$ executes Steps $5-17$:\;
\For{$j = \ell;\ j > 0;\ j = j - 1$}{
\uIf{$n=0$}{
$\left\langle w_{j}\right\rangle_{i}^{p}=\langle r[j]\rangle_{i}^{p}+i y[j]-2 y[j]\langle r[j]\rangle_{i}^{p}$\;
$\left\langle c_{j}\right\rangle_{i}^{p}=i y[j]-\langle r[j]\rangle_{i}^{p}+i+\sum_{k=j+1}^{\ell}\left\langle w_{k}\right\rangle_{i}^{p}$\;
}
\uElseIf{$n=1$ AND $r \neq 2^{\ell}-1$}{
$\left\langle w_{j}\right\rangle_{i}^{p}=\langle r[j]\rangle_{i}^{p}+i t[j]-2 t[j]\langle r[j]\rangle_{i}^{p}$\;
$\left\langle c_{j}\right\rangle_{i}^{p}=-i t[j]+\langle r[j]\rangle_{i}^{p}+i+\sum_{k=j+1}^{\ell}\left\langle w_{k}\right\rangle_{i}^{p}$;\
}
\Else{
\uIf{$i \neq 1$}{
$\left\langle c_{j}\right\rangle_{i}^{p}=(1-i)\left(u_{j}+1\right)-i u_{j}$\;
}
\Else{
$\left\langle c_{j}\right\rangle_{i}^{p}=(-1)^{j} \cdot u_{j}$\;
}
}
}
Send $\left\{\left\langle d_{j}\right\rangle_{i}^{p}\right\}_{j}=\pi\left(\left\{s_{j}\left\langle c_{j}\right\rangle_{i}^{p}\right\}_{j}\right)$ to $S_{2}$\;
For all $j \in {[\ell]}$, $S_{2}$ computes $d_{j} = \mathsf{Reconst}(\langle d_{j}\rangle_{0}^{p},\langle d_{j}\rangle_{1}^{p})$ and sets $n^{\prime}=1$ iff $\exists j \in[\ell]$ such that $d_{j}=0$.\;
$S_2$ sends $\langle n^{\prime}\rangle_i^B$ to $S_i$ for $i \in \{0,1\}$
}
\caption{Private Compare \cite{wagh2018securenn}}
\label{alg:pc}
\end{algorithm}\DecMargin{1em}
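The zero-test that $\mathsf{PC}$ performs can be checked in plaintext: in the $n=0$ branch, $c_j$ reconstructs to $y[j]-r[j]+1+\sum_{k>j} (r[k]\oplus y[k])$, which hits zero exactly at the most significant bit where $r$ is $1$ and $y$ is $0$ while all higher bits agree. A cleartext sketch, without the sharing over $\mathbb{Z}_p$, the masking by $s_j$, or the permutation $\pi$:

```python
def pc_plain(r, y, ell):
    """Returns 1 iff r > y, via the zero-test PC performs on the c_j.

    c_j = y[j] - r[j] + 1 + sum_{k>j} (r[k] XOR y[k]) is zero exactly
    when r[j] = 1, y[j] = 0 and all more significant bits agree.
    """
    for j in range(ell):
        w_sum = sum(((r >> k) & 1) ^ ((y >> k) & 1) for k in range(j + 1, ell))
        c_j = ((y >> j) & 1) - ((r >> j) & 1) + 1 + w_sum
        if c_j == 0:
            return 1
    return 0
```

In the protocol, $S_2$ only sees the permuted, multiplicatively masked $c_j$ values, so it learns whether some $c_j$ is zero but not at which bit position.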
\subsection{Result Tables}
In this section, we provide the detailed results of the experiments with {ppAURORA}{} to compute AUROC and AUPR in Tables \ref{tab:syn_auroc_result_summary} and \ref{tab:syn_auprc_result_summary}, respectively.
\begin{table*}[h!tb]
\small
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabularx}{\textwidth}{Vgmmmmf}
\toprule
& & \multicolumn{4}{>{\columncolor{gray!20!white}}c}{Communication Costs (MB)} & \\
\hhline{*1{>{\arrayrulecolor{gray!5!white}}-}*1{>{\arrayrulecolor{gray!10!white}}-}*4{>{\arrayrulecolor{black}}-}*1{>{\arrayrulecolor{gray!30!white}}-}}{\arrayrulecolor{black}}
M $\times$ N & $\delta$ & $P_1$ & $P_2$ & Helper & Total & Time (sec)\\
\midrule
$16 \times 64$ & $1$ & $73.51 / 73.76$ & $73.48 / 73.73$ & $139.36 / 139.64$ & $286.36 / 287.13$ & $10.1 / 14.55$ \\
$16 \times 125$ & $1$ & $279.71 / 280.20$ & $279.65 / 280.12$ & $530.35 / 530.90$ & $1089.71 / 1091.22$ & $29.18 / 38.43$ \\
$16 \times 250$ & $1$ & $1117.32 / 1118.30$ & $1117.20 / 1118.15$ & $2116.65 / 2120.23$ & $4351.17 / 4356.68$ & $144.24 / 157.01$ \\
$16 \times 500$ & $1$ & $4466.23 / 4468.19$ & $4466.00 / 4467.90$ & $8469.41 / 8470.51$ & $17401.65 / 17406.60$ & $627.19 / 655.78$ \\
$16 \times 10^3$ & $1$ & $17858.86 / 17862.78$ & $17858.4 / 17862.18$ & $33863.45 / 33876.00$ & $69580.71 / 69600.96$ & $2578.46 / 2556.93$ \\
\midrule
$2 \times 10^3$ & $1$ & $149.04 / 149.53$ & $149.03 / 149.50$ & $282.17 / 282.98$ & $580.24 / 582.02$ & $13.68 / 23.07$ \\
$4 \times 10^3$ & $1$ & $893.51 / 894.49$ & $893.45 / 894.39$ & $1693.99 / 1694.75$ & $3480.94 / 3483.63$ & $116.24 / 134.42$ \\
$8 \times 10^3$ & $1$ & $4168.03 / 4169.99$ & $4167.86 / 4169.75$ & $7903.60 / 7907.26$ & $16239.49 / 16247.01$ & $597.8 / 626.66$ \\
\midrule
$8 \times 10^3$ & $3$ & $2068.78 / 2070.74$ & $2068.66 / 2070.55$ & $3921.74 / 3923.83$ & $8059.18 / 8065.11$ & $307.49 / 334.96$ \\
$8 \times 10^3$ & $5$ & $1383.59 / 1385.56$ & $1383.49 / 1385.38$ & $2622.98 / 2625.59$ & $5390.06 / 5396.53$ & $210.13 / 248.72$ \\
$8 \times 10^3$ & $11$ & $693.62 / 695.58$ & $693.53 / 695.42$ & $1313.98 / 1316.86$ & $2701.13 / 2707.87$ & $114.93 / 151.21$ \\
$8 \times 10^3$ & $25$ & $322.52 / 324.49$ & $322.45 / 324.34$ & $610.66 / 612.96$ & $1255.63 / 1261.79$ & $64.16 / 99.35$ \\
$8 \times 10^3$ & $51$ & $162.59 / 164.56$ & $162.52 / 164.41$ & $306.88 / 309.43$ & $631.99 / 638.40$ & $43.98 / 78.24$ \\
$8 \times 10^3$ & $101$ & $85.58 / 87.54$ & $85.50 / 87.39$ & $161.12 / 163.50$ & $332.21 / 338.43$ & $34.56 / 70.19$ \\
\midrule
$8 \times 250$ & $1$ & $260.95 / 261.44$ & $260.91 / 261.38$ & $494.57 / 495.27$ & $1016.43 / 1018.08$ & $26.35 / 34.43$ \\
$8 \times UNB$ & $1$ & $331.49 / 331.98$ & $331.42 / 331.9$ & $628.7 / 629.15$ & $1291.61 / 1293.03$ & $34.4 / 43.81$ \\
\bottomrule
\end{tabularx}
\caption{The summary of the results of the experiments with {ppAURORA}{} to compute the AUROC with and without ties on synthetic data. The left side of ``/'' shows the \textit{without-tie} results and the right side the \textit{with-tie} results. $M$ represents the number of data sources and $N$ the number of samples in one data source. $UNB$ represents the unbalanced sample distribution $\{12,18,32,58,107,258,507,1008\}$.}
\label{tab:syn_auroc_result_summary}
\end{table*}
\begin{table*}[h!tb]
\small
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabularx}{\textwidth}{Vgmmmmf}
\toprule
& & \multicolumn{4}{>{\columncolor{gray!20!white}}c}{Communication Costs (MB)} & \\
\hhline{*1{>{\arrayrulecolor{gray!5!white}}-}*1{>{\arrayrulecolor{gray!10!white}}-}*4{>{\arrayrulecolor{black}}-}*1{>{\arrayrulecolor{gray!30!white}}-}}{\arrayrulecolor{black}}
M $\times$ N & $\delta$ & $P_1$ & $P_2$ & Helper & Total & Time (sec)\\
\midrule{{\arrayrulecolor{black}}}
$16 \times 64$ & $1$ & $73.79$ & $73.75$ & $139.70$ & $287.24$ & $15.49$ \\
$16 \times 125$ & $1$ & $280.25$ & $280.17$ & $530.90$ & $1091.32$ & $39.50$ \\
$16 \times 250$ & $1$ & $1118.39$ & $1118.24$ & $2120.17$ & $4356.80$ & $160.25$ \\
$16 \times 500$ & $1$ & $4468.38$ & $4468.08$ & $8470.66$ & $17407.12$ & $666.28$ \\
$16 \times 10^3$ & $1$ & $17863.16$ & $17862.55$ & $33874.20$ & $69599.90$ & $2639.34$ \\
\midrule
$2 \times 10^3$ & $1$ & $149.58$ & $149.55$ & $283.02$ & $582.14$ & $23.09$ \\
$4 \times 10^3$ & $1$ & $894.58$ & $894.49$ & $1695.60$ & $3484.67$ & $137.44$ \\
$8 \times 10^3$ & $1$ & $4170.19$ & $4169.94$ & $7905.92$ & $16246.05$ & $635.72$ \\
\midrule
$8 \times 10^3$ & $3$ & $2070.93$ & $2070.73$ & $3924.48$ & $8066.14$ & $346.11$ \\
$8 \times 10^3$ & $5$ & $1385.75$ & $1385.57$ & $2624.46$ & $5395.77$ & $249.01$ \\
$8 \times 10^3$ & $11$ & $695.77$ & $695.61$ & $1316.72$ & $2708.10$ & $152.96$ \\
$8 \times 10^3$ & $25$ & $324.68$ & $324.52$ & $613.25$ & $1262.45$ & $104.86$ \\
$8 \times 10^3$ & $51$ & $164.75$ & $164.59$ & $309.92$ & $639.25$ & $85.14$ \\
$8 \times 10^3$ & $101$ & $87.73$ & $87.58$ & $163.67$ & $338.98$ & $75.98$ \\
\midrule
$8 \times 250$ & $1$ & $261.49$ & $261.43$ & $495.38$ & $1018.29$ & $35.95$ \\
$8 \times UNB$ & $1$ & $332.03$ & $331.94$ & $629.20$ & $1293.17$ & $46.00$ \\
\bottomrule
\end{tabularx}
\caption{The summary of the results of the experiments of the AUPR computation with {ppAURORA}{} on synthetic data. $M$ represents the number of data sources and $N$ the number of samples in one data source. $UNB$ represents the unbalanced sample distribution $\{12,18,32,58,107,258,507,1008\}$.}
\label{tab:syn_auprc_result_summary}
\end{table*}
\subsection{AUC Stability Analysis}
We analyzed the stability of the AUROC with respect to the number of test samples to justify the collaborative evaluation. We experimented with $N \in \{5,10,20,40,80,160\}$ test samples randomly chosen from the DREAM Challenge dataset. To have a fair evaluation, we repeated these experiments $1000$ times. The experiments showed that the AUROC becomes more reliable and stable as the number of test samples increases. In case a data source has no additional test samples, collaborative evaluation can be the best option. Figure \ref{fig:auc_stabilization} summarizes the results of the analysis.
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\linewidth]{img/partial_auc.pdf}
\caption{AUROC for varying number of test samples randomly chosen from the whole dataset}
\label{fig:auc_stabilization}
\end{figure}
\subsection{Privacy Preserving Merging}
We include some merging examples to demonstrate the process. In Figure \ref{fig:merging_normal}, we show the merging of two lists of PCVs of the same size. In this example, $\delta = 1$. With this setting, we do not decrease the number of possible mergings of two individually sorted lists, which can be computed as:
\begin{equation}
\sum_{i=0}^{|L_2| - 1} {|L_1| + 1 \choose i+1} {|L_2| - 1 \choose i}
\end{equation}
where $L_1$ and $L_2$ are the individually sorted lists and $|\cdot|$ denotes the size of a list.
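The summation can be verified numerically; by Vandermonde's identity it collapses to the single binomial coefficient ${|L_1|+|L_2| \choose |L_2|}$, the number of ways to interleave two sorted lists. A quick check (function name illustrative):

```python
from math import comb

def n_mergings(n1, n2):
    """The summation from the text for list sizes n1 = |L_1|, n2 = |L_2|."""
    return sum(comb(n1 + 1, i + 1) * comb(n2 - 1, i) for i in range(n2))

# Vandermonde's identity collapses the sum to one binomial coefficient.
assert n_mergings(5, 3) == comb(8, 3) == 56
assert all(n_mergings(a, b) == comb(a + b, b)
           for a in range(1, 8) for b in range(1, 8))
```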
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{img/merging_normal_case.pdf}
\caption{The merging of two same-size lists with $\delta = 1$. The red arrows represent \textit{shuffling}; the black ones denote \textit{moving} the larger element of the first list to the global sorted list. The progress of the global sorted list is shown by the grey arrows. Each color in the boxes represents a different number of candidate PCVs; the color coding is shown on the rightmost side of the figure. For each red arrow, i.e., for each \textit{shuffling}, we utilize the $\mathsf{PC}$ operation on the PCVs from the two lists at the same index to determine the larger of them, and employ the result of this comparison along with $\mathsf{MUX}$s to put the larger one into $L_1$ and the other into $L_2$. Afterward, we \textit{move} the top of the first list to the global sorted list. Since $\delta = 1$, we perform \textit{shuffling} after we move each top element.}
\label{fig:merging_normal}
\end{figure}
Since the cost of fully private merging is high and such privacy may not be necessary for some applications, we offer an intermediate solution: by setting $\delta$ to a higher odd value, we can speed up the merging process. Figures \ref{fig:merging_delta_3_121} and \ref{fig:merging_delta_3_111} demonstrate examples of such a merging with $\delta = 3$; the total number of \textit{shufflings} is $5$ and $3$, respectively.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{img/merging_delta_3_121.pdf}
\caption{The merging of two same size lists with $\delta = 3$. We again start with \textit{shuffling} and then move the top PCV of the first list into the global sorted list. Afterward, we compare the second element of the first list and the top element of the second list via $\mathsf{PC}$. The proxies reconstruct the result of this comparison and move the share of the larger of the compared PCVs to the global sorted list. They continue until they move $\delta$ PCVs to the global sorted list. Then, they \textit{shuffle} and repeat the same procedure until there is no PCV in the first list.}
\label{fig:merging_delta_3_121}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{img/merging_delta_3.pdf}
\caption{The merging of two same size lists with $\delta = 3$. This example shows how the merging happens in case all the PCVs are taken from the first list.}
\label{fig:merging_delta_3_111}
\end{figure}
In Figure \ref{fig:merging_edge}, we show the merging when the number of PCVs in the second list is $1$, to justify why we set $\delta = 1$ regardless of the initial $\delta$ for privacy reasons. By setting $\delta = 1$, we ensure that the number of possible PCVs at each position of the global sorted list is the same as one would observe in such a merging without privacy. The beginning and the end of the global sorted list have two possible matchings; however, due to the nature of the individual sorting, the number of possible PCVs for the other positions is only $3$, which is lower than in Figure \ref{fig:merging_normal}.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{img/merging_edge_case.pdf}
\caption{The merging of two lists with $\delta = 1$, where the second list has only one PCV. The red arrows represent \textit{shuffling}; the black ones denote \textit{moving} the larger element of the first list to the global sorted list. The progress of the global sorted list is shown by the grey arrows. Each color in the boxes represents a different number of candidate PCVs; the color coding is shown on the rightmost side of the figure.}
\label{fig:merging_edge}
\end{figure}
\subsection{Security Analysis}
Here, we provide semi-honest simulation-based security proofs for the $\mathsf{MUX}$, $\mathsf{MC}$, $\mathsf{CMP}$ and $\mathsf{DIV}$ functions we have defined in our framework. Since the protocols we propose for the AUC calculation use $\mathsf{MUX}$, $\mathsf{MC}$, $\mathsf{CMP}$, $\mathsf{DIV}$ and previously defined functions, we prove the security of the main protocols in the $\mathcal{F}$-hybrid model by proving the security of each function we call.
\begin{lemma}
\label{lemma:mul}
The protocol $\mathsf{MUL}$ in Algorithm \ref{alg:mul} in the Supplement securely realizes the functionality $\mathcal{F}_{\mathsf{MUL}}$.
\end{lemma}
\begin{proof}
To prove the correctness of our protocol, we show that $\langle z\rangle_0+\langle z\rangle_1 = xy$:
\begin{equation}
\begin{split}
\langle z\rangle_0 + \langle z\rangle_1 ={}& f \cdot \langle x\rangle_0 + e \cdot \langle y\rangle_0 + \langle c\rangle_0\\
& +f \cdot \langle x\rangle_1 + e \cdot \langle y\rangle_1 + \langle c\rangle_1 - e \cdot f\\
={}& (fx+ey+c-ef)\\
={}& ((y-b)x+(x-a)y+c-(x-a)(y-b))\\
={}& xy-xb+xy-ya+c-xy+xb+ya-ab\\
={}& xy
\end{split}
\label{eq:mul}
\end{equation}
Next, we prove the security of our protocol. During the protocol execution, $S_2$ sends a multiplication triple to $S_0$ and $S_1$ and does not receive any values. Thus, the view of $S_2$ is empty, and security is straightforward to prove in case $S_2$ is corrupted. $S_i$, for $i \in \{0,1\}$, sees $\langle a\rangle_i,\langle b\rangle_i,\langle c\rangle_i,\langle e\rangle_i$ and $\langle f\rangle_i$. These values are uniformly distributed random values and hence can be perfectly simulated.
\end{proof}
\begin{lemma}
\label{lemma:mux}
The protocol $\mathsf{MUX}$ in Algorithm \ref{alg:ss} in the main paper securely realizes the functionality $\mathcal{F}_{\mathsf{MUX}}$.
\end{lemma}
\begin{proof}
We first prove the correctness of our protocol. $\langle z\rangle_i$ is the output of $S_i$ where $i \in \{0,1\}$. We need to prove that $\mathsf{Reconstruct}(\langle z\rangle_i) = (1-b)x+by$:
\begin{equation}
\begin{split}
\langle z\rangle_0 + \langle z\rangle_1 ={}& \langle x\rangle_0+s-\langle b\rangle_0(\langle x\rangle_0-\langle y\rangle_0)+r_1\langle b\rangle_0\\
& +r_2(\langle x\rangle_0-\langle y\rangle_0)+r_2r_3 + \langle x\rangle_1+t\\
& -\langle b\rangle_1(\langle x\rangle_1-\langle y\rangle_1) + r_0(\langle x\rangle_1-\langle y\rangle_1)\\
& +r_0r_1 + r_3\langle b\rangle_1-\langle b\rangle_0\langle x\rangle_1+\langle b\rangle_0\langle y\rangle_1\\
& -\langle b\rangle_0r_1 - r_0\langle x\rangle_1+r_0\langle y\rangle_1-r_0r_1\\
& -\langle x\rangle_0\langle b\rangle_1-\langle x\rangle_0r_2+\langle y\rangle_0\langle b\rangle_1+\langle y\rangle_0r_2\\
& -r_3\langle b\rangle_1-r_3r_2-s-t\\
={}& (1-\langle b\rangle_0-\langle b\rangle_1)(\langle x\rangle_0+\langle x\rangle_1)\\
&+ (\langle b\rangle_0+\langle b\rangle_1)(\langle y\rangle_0+\langle y\rangle_1)\\
={}& (1-b)x+by
\end{split}
\label{eq:ss}
\end{equation}
Next, we prove the security of our protocol. $S_2$ receives $M_2,M_3,M_5$ and $M_6$. All of these values are uniformly random because they are generated using the uniformly random values $r_0,r_1,r_2,r_3$. $S_2$ computes $M_2M_5+M_3M_6$, which is still uniformly random because it contains the uniformly random values $r_0,r_1,r_2,r_3$. As a result, any value learned by $S_2$ can be perfectly simulated. For each $i \in \{0,1\}$, $S_i$ learns a fresh share of the output. Thus, $S_i$ cannot associate the share of the output with the shares of the inputs, and any value learned by $S_i$ is perfectly simulatable.
\end{proof}
\begin{lemma}
\label{lemma:pc}
The protocol $\mathsf{PC}$ in Algorithm \ref{alg:pc} in the Supplement securely realizes the functionality $\mathcal{F}_{PC}$.
\end{lemma}
\begin{proof}
The proof of $\mathsf{PC}$ is given in \cite{wagh2018securenn}.
\end{proof}
\begin{lemma}
\label{lemma:mc}
The protocol $\mathsf{MC}$ in Algorithm \ref{alg:mc} in the main paper securely realizes the functionality $\mathcal{F}_{MC}$ in $\mathcal{F}_{\mathsf{PC}}$ hybrid model.
\end{lemma}
\begin{proof}
First, we prove the correctness of our protocol by showing that $(\langle x\rangle_0^K + \langle x\rangle_1^K) \bmod K = (\langle x\rangle_0 + \langle x\rangle_1) \bmod L$. In the protocol, $y = (x + r) \bmod K$ and $\mathsf{isWrap}(x,r,K) = r \overset{?}{>} y$, that is, $\mathsf{isWrap}(x,r,K)=1$ if $r > y$ and $0$ otherwise. At the beginning, $S_0$, $S_1$ and $S_2$ call $\mathcal{F}_{PC}$ to compute $c=r \overset{?}{>} y$, and $S_0$ and $S_1$ obtain the boolean shares $c_0$ and $c_1$, respectively. Besides, $S_2$ also sends the boolean shares $w_0$ and $w_1$ of $w=\mathsf{isWrap}(\langle r\rangle_0,\langle r\rangle_1,K)$ to $S_0$ and $S_1$, respectively. If $\mathsf{isWrap}(\langle y\rangle_0,\langle y\rangle_1,K)$ is $1$, then $S_0$ adds $K$ to $\langle y\rangle_0$ to change the ring of $y$ from $K$ to $L$. To convert $r$ from ring $K$ to ring $L$, $S_0$ and $S_1$ add $K$ to their shares of $r$ based on their boolean shares $w_0$ and $w_1$, respectively; if $w_0 = 1$, then $S_0$ adds $K$ to its share $\langle r\rangle_0$, and $S_1$ proceeds analogously with its share. Next, we need to fix the sum of $x$ and $r$, that is, the value $y$. In case $x+r \geq K$, we cannot fix the sum $y$ in ring $L$ by simply converting it from ring $K$ to ring $L$: this sum should be $x+r$ in ring $L$ rather than $(x+r) \bmod K$. To handle this problem, $S_0$ and $S_1$ add $K$ to their shares of $y$ based on their shares $c_0$ and $c_1$. As a result, we convert the values $y$ and $r$ to ring $L$ and fix the value of $y$ if necessary. The final step, for party $S_i$ to obtain $\langle x\rangle_i$, is to subtract $\langle r\rangle_i$ from $\langle y\rangle_i$ where $i \in \{0,1\}$.
Next, we prove the security of our protocol. $S_2$ is involved in this protocol only in the execution of $\mathcal{F}_{PC}$, whose proof we give above. At the end of the execution of $\mathcal{F}_{PC}$, $S_2$ learns $n^\prime$. However, $n^\prime = n \oplus (x>r)$ and $S_2$ does not know $n$. Thus, $n^\prime$ is uniformly distributed and can be perfectly simulated with randomly generated values. $S_i$ where $i \in \{0,1\}$ sees fresh shares of $\langle r\rangle_i^K$, $\{\langle r[j]\rangle_{i}^{p}\}_{j \in[\ell]}$, $w_i^B$ and $n_i^B$. These values can be perfectly simulated with randomly generated values.
\end{proof}
\begin{lemma}
\label{lemma:cmp}
The protocol $\mathsf{CMP}$ in Algorithm \ref{alg:cmp} in the main paper securely realizes the functionality $\mathcal{F}_{CMP}$ in the $\mathcal{F}_{\mathsf{MC}}$-hybrid model.
\end{lemma}
\begin{proof}
First, we prove the correctness of our protocol. Assume that we have an $\ell$-bit number $u$. Then $v = u - (u \bmod 2^{\ell-1})$ is either $0$ or $2^{\ell-1}$, i.e., $v$ is determined solely by the most significant bit (MSB) of $u$. In our protocol, $\langle z\rangle_i$ is the output of $S_i$, where $i \in \{0,1\}$. We need to prove that $\mathsf{Reconstruct}(\langle z\rangle_i)$ is $1$ if $x<y$ and $0$ otherwise. $S_i$, where $i \in \{0,1\}$, computes $\langle d\rangle_i^K=(\langle x\rangle_i-\langle y\rangle_i) \bmod K$, which is a share of $d$ over $K$. $S_i$ then computes $\langle d\rangle_i$, a share of $d$ over $L$, by invoking $\mathsf{MC}$. Note that $z= x-y-\mathsf{Reconstruct}(\langle d\rangle_i)$ and all bits of $z$ are $0$ except the MSB of $z$, which is equal to the MSB of $(x-y)$. Now we need to map $z$ to $1$ if it is equal to $K$ and to $0$ if it is equal to $0$. $S_0$ sends $\langle z\rangle_0$ and $\langle z\rangle_0+K$ in random order to $S_2$, and $S_1$ sends $\langle z\rangle_1$ to $S_2$. $S_2$ reconstructs two different values, divides these values by $K$, creates two additive shares of each of them, and sends these shares to $S_0$ and $S_1$. Since $S_0$ and $S_1$ know the position of the real MSB value in the ordering, they correctly select the shares of its mapped value.
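The MSB-extraction argument can be sketched in the clear as follows (illustrative; in the protocol $d$ is computed on shares over $K$ and converted to shares over $L$ via $\mathsf{MC}$):

```python
ELL = 64
L = 2**ELL           # full ring of ell-bit values
K = 2**(ELL - 1)     # ring of the low ell-1 bits

def msb(u):
    """Return the MSB of the ell-bit value u as 0 or 1."""
    d = u % K        # low ell-1 bits of u, i.e. u mod 2^(ell-1)
    z = (u - d) % L  # v = u - (u mod 2^(ell-1)): either 0 or K
    return z // K    # map {0, K} -> {0, 1}

def cmp_lt(x, y):
    """1 if x < y else 0, assuming x, y < L/2 so the sign lands in the MSB."""
    return msb((x - y) % L)

assert cmp_lt(3, 7) == 1
assert cmp_lt(7, 3) == 0
assert cmp_lt(5, 5) == 0
```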
Second, we prove the security of our protocol. $S_i$, where $i \in \{0,1\}$, sees $\langle d\rangle_i$, which is a fresh share of $d$, as well as $\langle a[0]\rangle_i$ and $\langle a[1]\rangle_i$, one of which is a fresh share of the MSB of $x-y$ and the other a fresh share of its complement. Thus the view of $S_i$ can be perfectly simulated with randomly generated values.
\end{proof}
\begin{lemma}
\label{lemma:div}
The protocol $\mathsf{DIV}$ in Algorithm \ref{alg:div} in the main paper securely realizes the functionality $\mathcal{F}_{\mathsf{DIV}}$.
\end{lemma}
\begin{proof}
We first prove the correctness of our protocol. $\langle z\rangle_i$ is the output of $S_i$, where $i \in \{0,1\}$. We prove that $\mathsf{Reconstruct}(\langle z\rangle_i) = \frac{xF}{y}$:
\begin{equation}
\begin{split}
\langle z\rangle_0 + \langle z\rangle_1 ={}& \frac{(r_1\langle x\rangle_0+r_0\langle y\rangle_0+r_1\langle x\rangle_1+r_0\langle y\rangle_1)F}{r_1\langle y\rangle_0+r_1\langle y\rangle_1} - \frac{r_0F}{r_1}\\
={}& \frac{(r_1x+r_0y)F}{r_1y} - \frac{r_0F}{r_1}\\
={}& \frac{xF}{y} + \frac{r_0F}{r_1} - \frac{r_0F}{r_1} \\
={}& \frac{xF}{y}\\
\end{split}
\label{eq:div}
\end{equation}
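This identity can be checked numerically with exact rationals (a sketch; in the fixed-point protocol the divisions are integer divisions, so the cancellation is only approximate):

```python
from fractions import Fraction as Fr
import random

F = 2**16  # fixed-point scaling factor

def blinded_div(x0, x1, y0, y1, r0, r1):
    """Evaluate z_0 + z_1 exactly as in the equation above."""
    masked = Fr((r1*x0 + r0*y0 + r1*x1 + r0*y1) * F, r1*y0 + r1*y1)
    return masked - Fr(r0 * F, r1)   # subtract the correction term r0*F/r1

x0, x1, y0, y1 = 12, 30, 5, 2                      # x = 42, y = 7
r0, r1 = random.randrange(1, 100), random.randrange(1, 100)
assert blinded_div(x0, x1, y0, y1, r0, r1) == Fr((x0 + x1) * F, y0 + y1)
```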
The $\mathsf{DIV}$ method produces correct results only when $S_0$ and $S_1$ know the upper limits of the $x$ and $y$ values. In this case, the values of $r_0$ and $r_1$ are chosen in such a way that the values $r_1\langle x\rangle_0+r_0\langle y\rangle_0+r_1\langle x\rangle_1+r_0\langle y\rangle_1$ and $r_1\langle y\rangle_0+r_1\langle y\rangle_1$ do not wrap around $\mathbb{Z}_L$. If wrapping around $\mathbb{Z}_L$ occurs, the $\mathsf{DIV}$ method produces a wrong result.
Next we prove the security of our protocol. $S_2$ gets $a_0,a_1,b_0$ and $b_1$. All these values are uniformly random values because they are generated using uniformly random values $r_0$ and $r_1$. As a result, any value learned by $S_2$ is perfectly simulated. For each $i \in \{0,1\}$, $S_i$ learns a fresh share of the output. Thus $S_i$ cannot associate the share of the output with the shares of the inputs and any value learned by $S_i$ is perfectly simulatable.
\end{proof}
\begin{lemma}
\label{lemma:auc}
The protocol in Algorithm \ref{alg:auc1} in the main paper securely computes the AUROC in the $(\mathcal{F}_{\mathsf{MUL}},\mathcal{F}_{\mathsf{DIV}})$-hybrid model.
\end{lemma}
\begin{proof}
In the protocol, we separately calculate the numerator $N$ and the denominator $D$ of the AUROC, which can be expressed as $AUROC=\frac{N}{D}$. Let us first focus on the computation of $D$. It is equal to the number of samples with label $1$ multiplied by the number of samples with label $0$. In the end, we have the number of samples with label $1$ in $TP$ and calculate the number of samples with label $0$ as $P - TP$. The computation of $D$ is then simply the multiplication of these two values. In order to compute $N$, we employ Equation \ref{eq:auc_wo_tie} in the main paper. We have already covered its denominator. For the numerator, we need to multiply the current $TP$ by the change in $FP$ and sum up these multiplication results. $\langle A\rangle \gets \mathsf{MUL}(\langle TP\rangle,\langle FP\rangle-\langle pFP\rangle)$ computes the contribution of the current sample to the numerator, and we accumulate all the contributions in $N$, which is the numerator of Equation \ref{eq:auc_wo_tie} in the main paper. Therefore, we can conclude that we correctly compute the AUROC.
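In the clear, the accumulation performed by the protocol reduces to the following computation (a sketch; the labels are assumed to be pre-sorted by descending prediction confidence with no ties, matching the setting of Equation \ref{eq:auc_wo_tie}):

```python
def auroc(labels):
    """labels: 0/1 labels sorted by descending prediction confidence."""
    P = sum(labels)             # number of samples with label 1
    N = len(labels) - P         # number of samples with label 0
    tp = fp = pfp = 0
    num = 0
    for lbl in labels:
        tp += lbl
        fp += 1 - lbl
        num += tp * (fp - pfp)  # A = TP * (FP - pFP), accumulated into N
        pfp = fp
    return num / (P * N)        # D = (#label 1) * (#label 0)

assert auroc([1, 0]) == 1.0     # perfect ranking
assert auroc([0, 1]) == 0.0     # inverted ranking
assert auroc([1, 0, 1, 0]) == 0.75
```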
Next, we prove the security of our protocol. $S_i$ where $i \in \{0,1\}$ sees $\{\langle RL\rangle\}_{j\in M}$, $\{\langle A\rangle\}_{j\in M}$, $\langle D\rangle$ and $\langle ROC\rangle$ which are fresh shares of these values. Thus the view of $S_i$ is perfectly simulatable with uniformly random values.
\end{proof}
\begin{lemma}
\label{lemma:aucdetec}
The protocol in Algorithm \ref{alg:ties} in the main paper securely marks the location of ties in the list of prediction confidences.
\end{lemma}
\begin{proof}
For the correctness of our protocol, we need to prove that for each index $j$ in $T$, $t[j].con=0$ if $(C[j]-C[j+1])=0$ and $t[j].con=1$ otherwise. We first calculate the differences of successive items in $C$. Let us assume that we have two additive shares $(\langle a\rangle_0,\langle a\rangle_1)$ of $a$ over the ring $\mathbb{Z}_L$. If $a=0$, then $(L-\langle a\rangle_0) \oplus \langle a\rangle_1 = 0$, and if $a\neq0$, then $(L-\langle a\rangle_0) \oplus \langle a\rangle_1 \neq 0$, where $L-\langle a\rangle_0$ is the additive inverse of $\langle a\rangle_0$ modulo $L$. We use this fact in our protocol. $S_0$ computes the additive inverse of each item $\langle c\rangle_0$ in $\langle C\rangle_0$, denoted by $\langle c\rangle_0'$, XORs $\langle c\rangle_0'$ with a common random number in $R$, which yields $\langle c\rangle_0''$, and permutes the bits of $\langle c\rangle_0''$ with a common permutation $\sigma$, which yields $\langle c\rangle_0'''$. $S_1$ XORs each item $\langle c\rangle_1$ in $\langle C\rangle_1$ with a common random number in $R$, which yields $\langle c\rangle_1''$, and permutes the bits of $\langle c\rangle_1''$ with the common permutation $\sigma$, which yields $\langle c\rangle_1'''$. $S_i$, $i \in \{0,1\}$, then permutes the values in $\langle C\rangle_i'''$ by a common random permutation $\pi$, which yields $\langle D\rangle_i$. After receiving $\langle D\rangle_0$ and $\langle D\rangle_1$, $S_2$ maps each item $d$ of $D$ to $0$ if $\langle d\rangle_0^\prime \oplus \langle d\rangle_1 = 0$, which means $\langle d\rangle_0 + \langle d\rangle_1 = 0$, and to $1$ if $\langle d\rangle_0^\prime \oplus \langle d\rangle_1 \neq 0$, which means $\langle d\rangle_0 + \langle d\rangle_1 \neq 0$. After receiving a new share of $D$ from $S_2$, $S_i$, $i \in \{0,1\}$, removes the dummy values and permutes the remaining values by $\pi'$. Therefore, our protocol correctly maps the items of $C$ to $0$ or $1$.
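The zero test at the heart of this argument is easy to verify in the clear (a sketch; the protocol additionally masks with $R$, permutes bits with $\sigma$, permutes positions with $\pi$, and inserts dummies before anything reaches $S_2$):

```python
import random

LBITS = 64
L = 2**LBITS

def looks_zero(a0, a1):
    """For additive shares (a0, a1) of a over Z_L: a == 0 iff (L - a0) XOR a1 == 0."""
    return ((L - a0) % L) ^ a1 == 0   # reduce mod L so that -0 maps to 0

a0 = random.randrange(L)
assert looks_zero(a0, (-a0) % L)          # shares of zero
assert not looks_zero(a0, (1 - a0) % L)   # shares of a nonzero value
```

The XOR of two $\ell$-bit values is zero exactly when they are equal, and $(L - \langle a\rangle_0) \bmod L = \langle a\rangle_1$ holds exactly when $a = 0$, so the test has no false positives or negatives.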
We next prove the security of our protocol. $S_i$, where $i \in \{0,1\}$, calculates the differences of successive prediction values. The view of $S_2$ is $D$, which includes real and dummy zero values. $S_i$ XORs each item of $\langle C\rangle_i$ with fresh boolean shares of zero, applies a random permutation to the bits of each item of $\langle C\rangle_i$, applies a random permutation $\pi$ to $\langle C\rangle_i$, and adds dummy zero and non-zero values. Thus the differences and the indices $j$ where $D[j]=0$ or $D[j] \neq 0$ are uniformly random. The numbers of zero and non-zero values are not known to $S_2$ due to the dummy values. With the common random permutations $\sigma_{j\in M}$ and common random values $R[j]$, $j\in M$, each item in $C$ is hidden. Thus $S_2$ cannot infer anything about the real values in $C$. Furthermore, the number of repeating predictions is not known to $S_2$ due to the random permutation $\pi$.
\end{proof}
\begin{lemma}
\label{lemma:auctie}
The protocol in Algorithm \ref{alg:auc2} in the main paper securely computes the AUROC in the ($\mathcal{F}_{\mathsf{MUL}}$,$\mathcal{F}_{\mathsf{MUX}}$,$\mathcal{F}_{\mathsf{DIV}}$)-hybrid model.
\end{lemma}
\begin{proof}
In order to compute the AUROC in the case of ties, we utilize Equation \ref{eq:auc_with_tie} in the main paper, of which we calculate the numerator and the denominator separately. The calculation of the denominator $D$ is the same as in Lemma \ref{lemma:auc}. The computation of the numerator $N$ has two different components, $N_1$ and $N_2$. $N_1$, more precisely the numerator of $T[i-1] * (F[i] - F[i-1])$, is similar to the no-tie version of privacy-preserving AUROC computation. This part corresponds to the rectangular areas under the ROC curve. The decision to add such an area $A$ to the cumulative area $N_1$ is made based on the result of the multiplication of $A$ by $t.con$, where $t.con=1$ indicates that the sample is one of the points where the prediction confidence changes, and $t.con=0$ otherwise. If it is $0$, then $A$ becomes $0$ and there is no contribution to $N_1$. If it is $1$, then we add $A$ to $N_1$. On the other hand, $N_2$, which is the numerator of $(T[i] - T[i-1]) * (F[i] - F[i-1])$, accumulates the triangular areas. We compute the possible contribution of the current sample to $N_2$. In case this sample is not one of the points where the prediction confidence changes, which is determined by $t.con$, the value of $A$ is set to $0$; otherwise, $A$ remains the same. Finally, $A$ is added to $N_2$. Since there is a division by $2$ in the second part of Equation \ref{eq:auc_with_tie} in the main paper, we multiply $N_1$ by $2$ so that the two parts have a common denominator. Afterwards, we sum $N_1$ and $N_2$ to obtain $N$. In order to have the factor $2$ in the common denominator, we also multiply $D$ by $2$. As a result, we correctly compute the denominator and the numerator of the AUROC.
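In the clear, this accumulation is the trapezoidal rule over the ROC steps; a sketch (with the change indicator $t.con$ recomputed directly from the sorted confidences rather than obtained from the tie-marking protocol):

```python
def auroc_with_ties(samples):
    """samples: (confidence, 0/1 label) pairs; tie-aware AUROC."""
    samples = sorted(samples, key=lambda s: -s[0])
    P = sum(lbl for _, lbl in samples)
    N = len(samples) - P
    tp = fp = ptp = pfp = 0
    n1 = n2 = 0                                # rectangle / triangle numerators
    for i, (conf, lbl) in enumerate(samples):
        tp += lbl
        fp += 1 - lbl
        last = i + 1 == len(samples)
        if last or conf != samples[i + 1][0]:  # t.con = 1: confidence changes
            n1 += ptp * (fp - pfp)             # rectangles: T[i-1]*(F[i]-F[i-1])
            n2 += (tp - ptp) * (fp - pfp)      # triangles, division by 2 deferred
            ptp, pfp = tp, fp
    return (2 * n1 + n2) / (2 * P * N)         # common denominator 2*P*N

assert auroc_with_ties([(0.9, 1), (0.9, 0)]) == 0.5  # a tied pair counts half
assert auroc_with_ties([(0.9, 1), (0.8, 0)]) == 1.0
```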
Next, we prove the security of our protocol. $S_i$ where $i \in \{0,1\}$ sees $\{\langle RL\rangle\}_{j\in M}$, $\{\langle A\rangle\}_{j\in M}$, $\{\langle pFP\rangle\}_{j\in M}$, $\{\langle pTP\rangle\}_{j\in M}$, $\langle D\rangle$ and $\langle ROC\rangle$ which are fresh shares of these values. Thus the view of $S_i$ is perfectly simulatable with uniformly random values.
\end{proof}
\begin{lemma}
\label{lemma:prc}
The protocol in Algorithm \ref{alg:prc} in the main paper securely computes the AUPR in the ($\mathcal{F}_{\mathsf{MUL}}$,$\mathcal{F}_{\mathsf{MUX}}$,$\mathcal{F}_{\mathsf{DIV}}$)-hybrid model.
\end{lemma}
\begin{proof}
In order to compute the AUPR, we utilize Equation \ref{eq:auc_with_tie} in the main paper, of which we calculate the numerator and the denominator separately. We perform nearly the same computation as in the tie-aware AUROC computation. The main difference is that we need to perform a division to calculate each precision value, because the denominator of each precision value is different. The rest of the computation is the same as in Algorithm \ref{alg:auc2} in the main paper. The reader can follow the proof of Lemma \ref{lemma:auctie}.
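A plaintext sketch of the difference highlighted above, with one exact division per precision value (illustrative; it assumes one common convention, precision $1$ at recall $0$, for the first trapezoid):

```python
from fractions import Fraction as Fr

def aupr(samples):
    """samples: (confidence, 0/1 label) pairs; trapezoidal AUPR."""
    samples = sorted(samples, key=lambda s: -s[0])
    P = sum(lbl for _, lbl in samples)
    tp = fp = 0
    prev_rc, prev_pc = Fr(0), Fr(1)
    area = Fr(0)
    for i, (conf, lbl) in enumerate(samples):
        tp += lbl
        fp += 1 - lbl
        if i + 1 == len(samples) or conf != samples[i + 1][0]:
            pc = Fr(tp, tp + fp)   # precision: a fresh division per point
            rc = Fr(tp, P)         # recall: the denominator P is fixed
            area += (rc - prev_rc) * (pc + prev_pc) / 2
            prev_rc, prev_pc = rc, pc
    return area

assert aupr([(0.9, 1), (0.8, 0)]) == 1        # perfect ranking
assert aupr([(0.9, 0), (0.8, 1)]) == Fr(1, 4)
```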
Next, we prove the security of our protocol. $S_i$ where $i \in \{0,1\}$ sees $\{\langle RL\rangle\}_{j\in M}$, $\{\langle T\_PC\rangle\}_{j\in M}$, $\{\langle A\rangle\}_{j\in M}$, $\{\langle pPC\rangle\}_{j\in M}$, $\{\langle pRC\rangle\}_{j\in M}$, $\langle D\rangle$ and $\langle ROC\rangle$ which are fresh shares of these values. Thus the view of $S_i$ is perfectly simulatable with uniformly random values.
\end{proof}
\begin{lemma}
\label{lemma:sorting}
The sorting protocol in Section 5 in the main paper securely merges two sorted lists in the ($\mathcal{F}_{\mathsf{CMP}}$,$\mathcal{F}_{\mathsf{MUX}}$)-hybrid model.
\end{lemma}
\begin{proof}
First, we prove the correctness of our merge sort algorithm. $L_1$ and $L_2$ are two sorted lists. In the merging of $L_1$ and $L_2$, the corresponding values are first compared using the secure $\mathsf{CMP}$ operation. After the secure $\mathsf{MUX}$ operation is called twice, the larger values are placed in $L_1$ and the smaller values are placed in $L_2$. This process is called \textit{shuffling} because it shuffles the corresponding values in the two lists. After the shuffling process, we know that the largest element of the two lists is the top element of $L_1$. Therefore, it is removed and added to the global sorted list $L_3$. In the next step, the top elements of $L_1$ and $L_2$ are compared with the $\mathsf{CMP}$ method. The comparison result is reconstructed by $S_0$ and $S_1$, and the top element of $L_1$ or $L_2$ is removed based on the result of $\mathsf{CMP}$ and added to $L_3$. This selection operation also yields the largest remaining element of $L_1$ and $L_2$, because $L_1$ and $L_2$ are sorted and the selection operation selects the larger of their top elements. We have shown that the shuffling and selection operations yield the largest element of two sorted lists. This ensures that our merge sort algorithm, which only uses these operations, correctly merges two sorted lists in an ordered manner.
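The shuffling and selection steps can be sketched in the clear as follows (illustrative; in the protocol the swap is realized by one $\mathsf{CMP}$ and two $\mathsf{MUX}$ calls on shares, only the selection comparisons are reconstructed, and the rules for choosing $\delta$ are omitted here):

```python
def shuffle(L1, L2):
    """Move the larger of each corresponding pair into L1 (pairwise max/min)."""
    for i in range(min(len(L1), len(L2))):
        if L1[i] < L2[i]:                # CMP on shares
            L1[i], L2[i] = L2[i], L1[i]  # two MUX calls on shares

def merge(L1, L2, delta=3):
    """Merge two descending-sorted lists; delta-1 selections per shuffle."""
    L3 = []
    while L1:
        shuffle(L1, L2)
        L3.append(L1.pop(0))             # after shuffling, L1's head is the max
        for _ in range(delta - 1):       # selection steps
            if not L1 or not L2:
                break
            src = L1 if L1[0] >= L2[0] else L2   # reconstructed CMP result
            L3.append(src.pop(0))
    L3.extend(L2)                        # the leftover tail is already sorted
    return L3

assert merge([9, 7, 5], [8, 6, 4, 2]) == [9, 8, 7, 6, 5, 4, 2]
```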
Next, we prove the security of our merge sort algorithm. In the shuffling operation, $\mathsf{CMP}$ and $\mathsf{MUX}$ operations are called.
$\mathsf{CMP}$ outputs fresh shares of the comparison of corresponding values in $L_1$ and $L_2$. Shares of these comparison results are used in the $\mathsf{MUX}$ operations, and each $\mathsf{MUX}$ operation generates fresh shares of the corresponding values. Therefore, $S_0$ and $S_1$ cannot precisely map these values to the values in $L_1$ and $L_2$. In the selection operation, $\mathsf{CMP}$ is called and the selection is performed based on the reconstructed output of $\mathsf{CMP}$. $S_0$ and $S_1$ are still unable to precisely map the values added to $L_3$ to the values in $L_1$ and $L_2$, because at least one shuffling operation took place before these repeated selection operations. One shuffling and $\delta-1$ selection operations are performed repeatedly until $L_1$ is empty. After each shuffling operation, fresh shares of the larger corresponding values in $L_1$ and fresh shares of the smaller corresponding values in $L_2$ are stored. The view of $S_0$ and $S_1$ is perfectly simulatable with random values due to the shuffling process performed at regular intervals.
It is possible in some cases to use unshuffled values in selection operations. To prevent this, the following rules are followed in the execution of the merge protocol. If the two lists do not have the same length, the longer list is chosen as $L_1$. If $\delta$ is greater than the length of $L_2$, it is set to the largest odd value smaller than or equal to the length of $L_2$, so that any unshuffled values that $L_1$ may have are not used in selection steps. If the length of $L_2$ is reduced to $1$ at some point in the sorting, $\delta$ is set to $1$. Thus $L_2$ will have $1$ element until the end of the merge, and shuffling is done before each selection.
\end{proof}
\subsubsection{Privacy against Malicious Adversaries}
\citet{araki2016high} defined the notion of privacy against malicious adversaries in the client-server setting. In this setting, the servers performing secure computation on the shares of the inputs to produce the shares of the outputs do not see the plain inputs and outputs of the clients. This notion of privacy says that a malicious party cannot break the privacy of the inputs and outputs of the honest parties. This setting is very similar to ours. In our framework, two parties exchange a seed which is used to generate common random values between them. The two parties randomize their shares using these random values, which are not known to the third party. It is very easy to add fresh shares of zero to the outputs of the two parties using the common random values shared between them. For brevity, we do not explicitly state this randomization of the outputs with fresh shares of zero in our algorithms.
Thus, our framework provides privacy against a malicious party by relying on the security of a seed shared between two honest parties.
\bibliographystyle{unsrtnat}
\section*{Introduction}
\IEEEPARstart{B}{luetooth} Low Energy (BLE) beacons, commonly known as beacons, are devices growing in popularity and research potential. Their deployment is projected to reach 400 million devices globally by the year 2020~\cite{deployed}. They can help with the next step from IoT devices to smart and social objects that interact with the users~\cite{atzori, zanella}. Beacons broadcast signals at certain intervals and within their transmission range. An analogy for a beacon's operation is the operation of a lighthouse, which represents a known location that can be uniquely identified by its light. Every ship that sees the light knows of the lighthouse's existence. However, the lighthouse neither communicates with the ships nor knows how many ships see its light. Similarly, a beacon broadcasts a radio signal to advertise its presence in the area to BLE-enabled devices. It can neither communicate with the devices nor identify how many devices are receiving its signal.
Beacon operation is shown in Fig.~\ref{beac}. Several beacons in an area broadcast their signals. BLE-enabled devices, such as smartphones, smartwatches, and single-board computers like Raspberry Pis, can listen to these signals and, through applications, trigger some actions. These applications run on the hosting device, while the beacons are aware neither of the applications nor of the number of nearby listening devices.
\begin{figure}[t!]
\centering%
\includegraphics[width=\columnwidth]{beaconCITY.pdf}%
\caption{BLE beacons deployed in a city and broadcasting their signals to nearby devices. A device can listen to signals from several beacons and take some actions in response.}%
\label{beac}%
\end{figure}
Beacons are wireless devices based on BLE, a technology developed for low power consumption in applications that require minimal data throughput. Their small size, low cost, and relatively long battery life increase their popularity. They broadcast packets that identify the particular beacon, along with possible telemetry data collected from sensors that the beacon manufacturer may have included.
Beacons are attractive solutions for several Internet of Things (IoT) applications~\cite{jeon}. Their small size and low cost provide a means of increased scalability and their BLE functionality establishes simple integration with smartphone devices, making them a highly versatile device. They can be used in a plethora of smart city applications, from advertisement and Location Based Services (LBS) to indoor localization~\cite{spachos}, positioning, and tracking~\cite{he}.
\subsection*{Wireless Technologies for Smart City Applications}
For wireless data transmission, smart city applications and IoT devices can take advantage of traditional wireless infrastructure, to minimize the additional deployment cost, or use some of the wireless technologies that are designed specifically for IoT devices and smart cities. The unique characteristics of each application should always be considered before the selection of the proper technology~\cite{morin}.
Among the technologies that have been used successfully in the past for general-purpose devices and remain popular for IoT systems is the IEEE 802.11 standard, commonly known as Wi-Fi, a popular technology due to the wide distribution of access points and signal availability in different environments. Zigbee is another popular communication protocol, based on the IEEE 802.15.4 standard and known for its low-power and secure networking; it is intended to be simpler and less expensive than general wireless networking. LoRaWAN is a long-range, low-power-consumption wireless technology, while Near Field Communication (NFC) allows wireless data transfer between two portable devices in close proximity. Radio Frequency Identification (RFID) was primarily designed for transferring and storing data; it can be passive (tags), where the electromagnetic field of the reader powers the device, or active (readers), where the RFID device has its own power source. Another popular technology is cellular IoT, which connects IoT devices using existing cellular networks. Technologies such as NB-IoT and LTE-M will be a key part of 5G, which is a promising solution for future IoT applications with ultra-low latency and wide-range services. There are also technologies, such as IEEE 802.11ah (Wi-Fi HaLow) and Bluetooth Low Energy (BLE), that were designed specifically to support the concept of IoT and smart cities.
A popular wireless technology for short-range communication is Bluetooth~\cite{bluetooth}. The standard is managed by the Bluetooth Special Interest Group (SIG) and can be found in several devices, from mobile phones to robotic systems and laptops. Usually, it is used in symmetric connections between two devices. Bluetooth 4.0 aimed at novel applications in healthcare, fitness, and beacons; Bluetooth Low Energy (BLE), which is part of Bluetooth 4.0, is the popular beacon technology. In comparison with traditional Wi-Fi, it has low energy requirements and extended range. BLE was designed specifically for IoT and smart city applications~\cite{ble}. It has low power requirements and good data transfer rates; BLE 4.0 can reach 25~Mbit/s at a distance of 60~meters. As a competitor of Wi-Fi HaLow among IoT devices, Bluetooth 5.0 was introduced recently. This latest version is claimed to have four times the transmission range and twice the data rate of the previous version, while broadcasting eight times more data per advertisement.
\section*{BLE Beacons Characteristics}
BLE beacons have received a lot of attention due to their unique characteristics, which make them ideal for several applications. They are wireless devices whose main goal is to bring attention to their location. Beacons are very small, very low-power, and especially low-cost devices that broadcast a wireless signal to all nearby devices. There is a wide variety of BLE beacon devices available from many vendors, with different hardware, firmware, and protocols.
\subsection*{Hardware}
Beacon hardware is compact and simple. The hardware components dictate important factors, such as the cost, the power consumption, the performance, the on-chip memory, and the size. Similar to other wireless device hardware, beacon hardware has three components: the radio chip, the microcontroller, and the power source. Additionally, there are beacons that have some sensors and general peripherals.
\begin{table}[t!]
\normalsize
\centering
\begin{tabular}{|l|c|c|c|}\hline
\multirow{2}{*}{Manufacturer} & \multirow{2}{*}{SoC} & Integrated & Current \\
 & & Processor & Cons. (RX/TX) \\ \hline \hline
Texas & CC2541 & 8051 & 18.2 mA \\ \cline{2-4}
Instruments & CC256x & External & - \\ \cline{2-4}
 & CC26xx & Cortex-M3 & 5.9 mA \\ \hline \hline
Nordic & nRF51822 & Cortex-M0 & 9.7/8 mA \\ \cline{2-4}
Semiconductors & nRF8001 & External & 14.6/12.7 mA \\ \hline \hline
Dialog & \multirow{2}{*}{DA14580} & \multirow{2}{*}{Cortex-M0} & \multirow{2}{*}{3.6 mA} \\
Semiconductor & & & \\ \hline \hline
Cypress & \multirow{2}{*}{PRoC} & \multirow{2}{*}{Cortex-M0} & \multirow{2}{*}{15.6/16.4 mA} \\
Semiconductor & & & \\ \hline
\end{tabular}
\caption{Chipset and their characteristics.}
\label{blechipset}
\end{table}
Texas Instruments, Nordic Semiconductors, Dialog Semiconductor, and Cypress are leading the beacon radio chip development process. Table~\ref{blechipset} shows some representative popular BLE chipsets. The availability of an integrated processor, the flash and RAM capacity, and the current consumption are important factors for the proper chip selection. The processor in some of these chipsets is on-chip, while some of them come without a processor. The 8051 and the ARM Cortex-M0 and Cortex-M3 are popular choices. For many smart city applications this should be sufficient; however, when more performance is required, standalone devices that can work with an external microcontroller should be selected. The flash capacity starts from 32~kB and goes up to 256~kB, and the RAM capacity is between 8~kB and 64~kB. It is important to note that all the chipsets support BLE v4.1 or v4.2, which is the most common today.
The power source is another critical internal component. Coin cell batteries are popular among beacons. Depending on the size of the hardware, the battery size varies. There are coin cell batteries of 240~mAh that allow the beacon device to have very small dimensions, at the cost of reduced battery life. On the other hand, standard AA~batteries of 2,000~mAh can be used to drastically improve battery life, at the cost of far larger dimensions. There are also beacons with a built-in Li-ion battery, as well as solar-powered beacons. When external power is required, a power outlet or a USB port are popular choices.
\subsection*{Firmware}
Each beacon has specific firmware that makes use of the available hardware. A critical characteristic of beacon-based applications is the lifespan of the beacon nodes. The firmware controls several characteristics that impact the total power consumption. Two main configuration parameters greatly affect the power consumption: the transmission power and the advertising interval.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{power.pdf}%
\caption{Average power consumption under different advertising intervals and transmission power levels.}%
\label{powerRelation}%
\end{figure}
Transmission power is the strength of the signal being broadcast, often represented in decibels with reference to 1~mW, i.e., dBm. As the signal travels through the air, its received signal strength decreases. This is a trade-off: high transmission power levels can achieve long distances at the cost of higher power consumption, while low transmission power covers only small ranges but requires less power. At the same time, a long transmission range can increase the interference between beacons, while short ranges might not fit the application needs.
The second configuration parameter is the advertising interval. This is the period between packet transmissions, often expressed in milliseconds. This is another trade-off. A short advertising interval of 100~ms (i.e., 10 transmissions per second) will lead to a faster battery drain, but the receiver gets more signals and can perform tasks that need high accuracy, such as micro-localization~\cite{spachos}. On the other hand, a long advertising interval of 1,000~ms (i.e., 1 transmission per second) will lead to an extended lifespan of the beacon, but it should be preferred in applications that can cope with this latency, such as proximity-based applications. Fig.~\ref{powerRelation} depicts how the energy consumption changes over three transmission power levels and three advertising intervals for a BLE beacon. It is clear that the power consumption is proportional to the transmission power and inversely proportional to the advertising interval.
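This trade-off can be made concrete with a back-of-the-envelope lifetime estimate (the charge per advertising event and the sleep current below are assumptions for illustration, not measurements):

```python
def battery_life_days(capacity_mah, interval_ms,
                      event_charge_uc=15.0, sleep_current_ua=1.0):
    """Estimate beacon lifetime from the advertising interval.

    event_charge_uc: assumed charge per advertising event (microcoulombs);
    sleep_current_ua: assumed quiescent current (microamps).
    """
    events_per_s = 1000.0 / interval_ms
    avg_current_ua = events_per_s * event_charge_uc + sleep_current_ua
    return capacity_mah * 1000.0 / avg_current_ua / 24.0  # mAh -> uAh -> days

# A 240 mAh coin cell, advertising every 100 ms vs every 1,000 ms:
fast = battery_life_days(240, 100)     # ~10 transmissions per second
slow = battery_life_days(240, 1000)    # ~1 transmission per second
assert slow > fast                     # longer interval, longer lifespan
```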
\subsection*{Protocol structure}
BLE has 40 physical channels in the 2.4~GHz ISM band, each separated by 2~MHz. BLE defines two types of transmissions: advertising and data transmission. Out of the 40 channels, three channels, 37, 38 and 39, are used for advertising and the rest for data transmission~\cite{bluetooth}. The three advertising channels were selected to avoid conflict with Wi-Fi traffic in the area. It is important to note that beacons are connectionless devices, hence no device pairing is required. BLE defines a packet format for transmission. This format has four components: preamble, access address, Protocol Data Unit (PDU), and Cyclic Redundancy Check (CRC).
Beacons need a protocol that facilitates the integration of manufacturing, programming, transmission, and general functionality. Bluetooth SIG has not defined an official beaconing standard; however, among the most commonly used protocols are the iBeacon by Apple~\cite{ibeaconpacket}, the Eddystone by Google~\cite{eddystone}, the AltBeacon by Radius Networks~\cite{altbeacons}, and the GeoBeacon by Tecno-World~\cite{geobeacon}. The structure of each protocol is shown in Fig.~\ref{protocols}. Among the different fields, the beacon ID is crucial; the Major and Minor fields can be used to identify different areas and applications, while the RSSI and the coordinates provide useful information for positioning and tracking.
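As an illustration of these fields, the manufacturer-specific data of an iBeacon advertisement can be decoded as follows (a sketch based on the publicly documented iBeacon layout: a 16-byte proximity UUID, a 2-byte Major, a 2-byte Minor, and a 1-byte calibrated TX power; the UUID below is an arbitrary example):

```python
import struct
import uuid

def parse_ibeacon(mfg_data: bytes):
    """Parse the manufacturer-specific AD payload of an iBeacon frame."""
    company, subtype, length = struct.unpack_from("<HBB", mfg_data, 0)
    if company != 0x004C or subtype != 0x02 or length != 0x15:
        return None                    # not an iBeacon frame
    beacon_uuid = uuid.UUID(bytes=mfg_data[4:20])
    major, minor, tx_power = struct.unpack_from(">HHb", mfg_data, 20)
    return beacon_uuid, major, minor, tx_power

frame = (bytes.fromhex("4c000215")     # Apple company ID, iBeacon type, len 21
         + uuid.UUID("e2c56db5-dffb-48d2-b060-d0f5a71096e0").bytes
         + struct.pack(">HHb", 1, 42, -59))   # Major=1, Minor=42, -59 dBm
u, major, minor, txp = parse_ibeacon(frame)
assert (major, minor, txp) == (1, 42, -59)
```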
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{protocols.pdf}%
\caption{Structure of four BLE protocols.}%
\label{protocols}%
\end{figure}
Each approach has advantages, and for every smart city application the protocol should be carefully selected. For instance, iBeacon and AltBeacon offer more space to forward information. While iBeacon uses a specific manufacturer ID for every chipset, in AltBeacon, which is an open-source protocol, the manufacturer ID can be defined by the user. On the other hand, Eddystone broadcasts three different types of packets, UID, URL, and TLM, which can help to transfer different data, while GeoBeacon has very compact data storage and can provide high-resolution coordinates, especially for location-based applications.
\subsection*{Additional Sensing Capabilities (Peripherals)}\label{sensing}
BLE beacons often include a variety of additional low-power sensors, such as temperature and humidity sensors, a luxometer, a barometer, an accelerometer, a gyroscope, a microphone, etc. These sensors enable the beacons to provide useful information about the environmental conditions in which they are deployed. At the same time, they can be used for many unique applications, such as micro-climate data collection and acoustic noise level prediction.
\section*{Smart City Applications}
Several smart city applications can take advantage of the unique characteristics of beacons. They can be placed in many environments, both indoor and outdoor, and convert them into smart areas by providing interaction with the users. Most of the applications are context-aware services, and they fall into two general categories: Proximity-Based Services (PBS) and Location-Based Services (LBS). In both cases, beacons can be placed in a static position or attached to moving objects.
\begin{figure}[t!]
\centering
\captionsetup[subfloat]{farskip=0pt}%
\subfloat[Beacon beside an exhibit.]{\includegraphics[width=0.45\columnwidth]{museum.pdf}
\label{museum}}
\subfloat[Beacon inside a luggage.]{\includegraphics[width=0.45\columnwidth]{luggage.pdf}
\label{luggage}}
\caption{Proximity-Based Services: Beacon (a) beside a Point of Interest and (b) attached to a moving object.}
\label{labraw}
\end{figure}
\subsection*{Proximity-Based Services}
PBS delivers information according to the proximity of the receiver node from the transmitter node.
\subsubsection*{Point of Interest}
When beacons are static, they can be used as Point of Interest (PoI) solutions and enhance interactivity in a smart museum or for proximity marketing. In a smart museum~\cite{museum}, they can be placed beside the exhibits, as shown in Fig.~\ref{museum}, and when visitors are close to them, they can forward useful information about the exhibit to the visitor's smartphone. In a shopping mall, offers can be provided to the users when they are about to enter a store or a restaurant, as long as they have their smartphone's Bluetooth active and use the proper mobile application. Beacons can offer many more opportunities to deliver context and enhance interactivity at the right time and place. However, the proper placement of the beacons and their advertising interval are crucial. The users might otherwise end up getting notifications from every beacon in the area, which can lead to too many notifications and information that might not be useful.
\subsubsection*{Moving objects}
Beacons can be attached to moving objects such as luggage, bicycles or even cars. Mobile applications can collect beacon signals and notify users when they are close to the tagged object. For instance, a beacon can be placed inside a piece of luggage, as shown in Fig.~\ref{luggage}; when the luggage comes close to its owner in a crowded airport, the owner's smartphone can raise a notification. The advantages of using beacons in such applications are the low cost and the ease of deployment. At the same time, the broadcasting nature of beacon signals poses security concerns: the location of valuable assets can be revealed to eavesdroppers through the beacon signals.
\begin{figure}[t!]
\centering
\captionsetup[subfloat]{farskip=0pt}%
\subfloat[Indoor Positioning System.]{\includegraphics[width=0.5\columnwidth]{ips.pdf}
\label{ips}}
\subfloat[Real Time Location System.]{\includegraphics[width=0.5\columnwidth]{rtls.pdf}
\label{rtls}}
\caption{Location-Based Services: Beacon used (a) at an IPS and (b) at a RTLS.}
\label{lbsfig}
\end{figure}
\subsection*{Location-Based Services}
LBS delivers information according to the location of the user.
\subsubsection*{Indoor Positioning Systems}
In an Indoor Positioning System (IPS), beacons can be deployed at static positions, as shown in Fig.~\ref{ips}, in a complex indoor environment such as an office or a university. A user can determine their position in the area by performing localization on a BLE-enabled device, such as a smartphone, based on the received signals. The beacons keep broadcasting while the users collect these signals with their smartphones and navigate the area. However, several factors, such as the number of transmitting beacons and their transmission power, can greatly affect the accuracy of the system; these factors should always be selected after extensive experimentation in the area. The placement of the beacons is also a challenge, because beacon performance decreases when interference increases; hence, beacons cannot be placed too close to each other. At the same time, techniques such as fingerprinting, which records the signal strength from several beacons in range and stores this information in a database along with the known coordinates of each reference point, can be used to improve the overall system accuracy.
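A minimal sketch of the fingerprinting idea described above, assuming a hypothetical database of three reference points and three beacons. Real systems average many scans per point and use more robust matching than plain 1-nearest-neighbour.

```python
import math

# Hypothetical sketch of RSSI fingerprinting: each reference point stores
# the signal strengths (dBm) observed from several beacons; a live scan is
# matched to the nearest stored fingerprint in signal space (1-NN).
fingerprints = {
    (0.0, 0.0): [-55, -70, -80],   # known coordinates -> RSSI per beacon
    (5.0, 0.0): [-70, -55, -75],
    (0.0, 5.0): [-80, -75, -55],
}

def locate(scan):
    """Return the coordinates of the fingerprint closest to the live scan."""
    return min(fingerprints,
               key=lambda pos: math.dist(fingerprints[pos], scan))
```

The database lookup replaces an explicit propagation model, which is why fingerprinting copes better with walls and furniture than pure distance-based methods.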
\subsubsection*{Real Time Locating Systems}
In a Real Time Locating System (RTLS), beacons can be attached to valuable moving assets, as shown in Fig.~\ref{rtls}, in a hospital or a warehouse. An RTLS works in the opposite way of an IPS: the moving beacon transmits its signal to edge devices, which perform the localization. A hospital can integrate beacon technology to maximize information exchange. Beacons placed on critical medical assets and devices can report asset locations in real time to a general management system. Each beacon transmits its signal to nearby collecting devices, such as a Raspberry Pi, and the beacon location is then estimated. Many techniques can be used to find the location of the beacon, such as trilateration from the Received Signal Strength of the beacons~\cite{spachos}. As the beacon moves through the area, its location can be tracked from the information it transmits to nearby devices, so that the exact location of each device in a large hospital can be found when needed. In such applications, the advertising interval of the beacon should be carefully selected to match the expected moving speed of the object. At the same time, factors such as the number of collecting devices and their placement also affect the performance of the system, and advanced filtering approaches should be implemented to improve the localization accuracy. With further processing, beacons on nearby devices can also navigate the user to the required device.
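The RSSI-based trilateration mentioned above can be sketched as follows. The path-loss parameters and collector coordinates are illustrative assumptions, and a real deployment would feed the noisy distance estimates through the filtering discussed above.

```python
import numpy as np

# Hypothetical sketch: estimating a moving beacon's position by
# trilateration from RSSI-derived distances to fixed collectors.
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Log-distance path-loss model (assumed calibration at 1 m)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def trilaterate(anchors, distances):
    """Linearized least-squares trilateration.

    anchors: (k, 2) collector coordinates; distances: k distance estimates.
    Subtracting the first circle equation from the others yields a linear
    system 2*(a_i - a_0) . p = d_0^2 - d_i^2 + |a_i|^2 - |a_0|^2.
    """
    anchors = np.asarray(anchors, float)
    d = np.asarray(distances, float)
    A = 2 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With three or more collectors the system is (over)determined, and least squares absorbs small inconsistencies between the distance estimates.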
\section*{Security and Privacy Challenges}
The high deployment of beacons in smart city applications has also raised several security issues and privacy concerns~\cite{shao}. Some of these concerns are more challenging to cope with than others, depending on the application and the nature of the information.
\subsection*{Security issues}
There are several security issues regarding beacons, some more challenging to cope with than others.
\begin{itemize}
\item \textbf{Cracking:} Due to their small size, beacons can be placed in many locations. Usually, beacons are ``hidden" in different spots to cover an area. An important security attack on beacons is their physical removal. An attacker can remove the beacon from a wall, open it up and have straight access to its hardware and any stored information.
A real time monitoring of the beacon status can alleviate this problem along with the decrease of the information that is stored inside the beacon memory. Information such as user passwords or user preferences should not be stored locally in the beacon. At the same time, when an interruption of the communication between the control system and the beacon happens, an alert should be sent to the system administrator. However, an increase in the communication between the beacon and the control system may affect the lifespan of the beacons dramatically. Therefore, there must be a balance between monitoring frequency and energy management.
\item \textbf{Spoofing:} Spoofing is when an attacker detects and clones a beacon. Beacons do not come with advanced encryption mechanisms. Hence, most of the time they broadcast their ID. An attacker who wants to attack the beacon and consequently, the users using this beacon, can use the same ID at another area and create a clone beacon. With the use of the clone beacon, the attacker can forward false information to the user. For instance, in a smart museum with beacons in the building, one beacon can send welcome messages to the visitors when they use the museum application and enter the building. An attacker can copy the beacon ID and replay the welcome message in another location, far away from the museum, leading the visitors to remove the application.
A technique to minimize spoofing is to change the beacon ID dynamically: beacons can generate new random IDs periodically, so that a cloned ID quickly becomes stale. However, the user has to accept a connection to the new beacon ID every time.
\item \textbf{Piggybacking:} Piggybacking is when an attacker listens to a beacon, captures its UUIDs, Majors, and Minors, and adds them to another application without consent. The attacker can then even clone the first application. For instance, in a shopping mall, Store A can offer a BLE-based mobile application that sends promotion codes to nearby customers. Store B can clone the beacon and the mobile application for its own customers. In this way, when customers who have the application of Store B enter Store A, they will receive promotions for Store B.
\item \textbf{Hijacking:} The communication with beacons does not use encryption techniques. Passwords or other important information broadcast by a beacon can be hijacked by an eavesdropper. Advanced encryption mechanisms can be applied to alleviate some of these security issues, but they may adversely affect the lifespan of the beacon.
\end{itemize}
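The periodic ID rotation suggested above as a spoofing countermeasure can be sketched roughly as follows, in the spirit of schemes such as Eddystone-EID: beacon and backend share a secret key and derive the current ID from a coarse time counter, so a cloned static ID becomes useless after one rotation window. The key, the 15-minute window and the truncated ID length are illustrative assumptions.

```python
import hashlib
import hmac

# Hypothetical sketch of periodically rotating beacon IDs.
ROTATION_SECONDS = 900  # assumed 15-minute rotation window

def current_id(secret_key: bytes, now: float) -> str:
    """Derive the beacon ID valid at time `now` (seconds since epoch).

    Both the beacon and the verifying backend compute the same HMAC over
    the current window counter, so only they can predict future IDs.
    """
    counter = int(now // ROTATION_SECONDS)
    mac = hmac.new(secret_key, counter.to_bytes(8, "big"), hashlib.sha256)
    return mac.hexdigest()[:16]  # truncate to a beacon-sized identifier
```

An attacker who records today's broadcast can replay it only until the window rolls over, which bounds the usefulness of a cloned beacon.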
\subsection*{Privacy concerns}\label{privacy}
Privacy is important, especially when it comes to interaction with everyday objects and can reveal private patterns and habits. The following privacy concerns should be considered for BLE beacon applications.
\begin{itemize}
\item \textbf{Static ID.} Most BLE beacons have a static ID, which is broadcast so that everyone in the transmission area can receive it. Hence, an attacker can mimic a trusted beacon by using the same trusted ID and gain access to private information.
Many vendors have started research on dynamic ID assignment. This requires extra energy, which may have varying effects on the lifespan of the beacon.
\item \textbf{Risk of unlawful surveillance.} Another important privacy concern comes from unlawful surveillance. Most beacon applications are based on localization, and by using the location services offered by beacons, users share their location.
Any attack on the beacons can reveal behavioural patterns of the user. Hence, a user can be placed under surveillance without permission, or their behavioural and location patterns can be shared with unauthorized personnel through the beacons.
\item \textbf{Risk of undesired advertisement.} Beacons are used a lot for advertisement. For more targeted advertisements, information about the user can be shared with the beacon applications. This information can be used by third parties for further undesired advertisements.
\end{itemize}
\section*{Future Directions}
Although some of the challenges remain, their unique characteristics will increase the deployment of BLE beacons in smart city applications. Since beacons can transmit a lot of information within seconds, providing a large amount of data, machine learning techniques can be used to improve their usage. Smart data processing is essential for smart city applications. For instance, deep learning approaches can be applied to improve the localization accuracy of the deployed systems. The availability and the low cost of BLE signals can help towards this direction. At the same time, general machine learning approaches can be applied to alleviate some of the current security and privacy challenges. The integration of machine learning approaches that enhance security and privacy into context-aware location-based services can open a new and promising research area. Similarly, social learning using BLE beacons can be used to promote wellness.
The new BLE v.5.0 can improve the accuracy of current navigation applications and lead to the development of several more. The addition of the direction finding capability can be used along with techniques such as Angle of Arrival (AoA) and Angle of Departure (AoD) to improve the performance of the BLE-based systems.
\section*{Conclusion}
BLE beacons are a promising, low-cost and energy-efficient IoT solution, mainly for location-based applications. The selection of the proper beacon and the optimal configuration of the beacon parameters are important factors for successful application deployment. Experiments should be conducted to examine the performance of different beacons in different areas, while security and privacy should always be a concern. At the same time, the unique characteristics of BLE beacons make them an attractive solution for several smart city applications.
\bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:intro}
Modern university courses often use formative and summative e-assessment systems. Especially when courses have a large number of participants, such tools are useful to give students individual feedback. Courses with quantitative content, such as statistics and introductory mathematics, are particularly suitable for e-assessment. This is because fill-in exercises, which require students to submit a numeric answer, make it convenient to assess whether students can solve a task. The e-assessment system JACK is a framework for delivering and grading complex exercises of various kinds. It was originally created to check programming exercises in Java \cite{Striewe2009}, but has been extended to other exercise types such as multiple-choice and fill-in exercises \cite{Striewe2015a,Striewe2016,Schwinning2017}. JACK offers parameterizable content, meaning that exercises can contain different values each time they are practiced. Not only do different students get different parameterizations, but each student also gets different numbers each time he or she tackles an exercise. Hence, the task remains challenging until he or she understands the exercise's underlying concept.
In addition to fill-in exercises, JACK allows designing exercises with dynamic programming content. For instance, JACK offers exercises in R, the standard statistical programming language, see \cite{R}. Programming exercises do not only prepare students for modern statistical work, but have also been shown to be highly beneficial for fostering their understanding of statistics, see \cite{Otto2017,Massing2018b}.
In recent studies JACK data was analyzed to understand students' learning behavior more deeply in an introductory mathematical statistics course. The high correlation between learning effort in the semester and the final grades is well documented, see \cite{Massing2018a} and Section \ref{sec:related} for more examples.
We predict, ahead of the final exam of the module, probabilities of passing the final exam for each student, based on exercise points gained from the JACK exercises of the course, using a model trained on the previous cohort. Subsequently, we send warning emails to students of the current cohort with a low probability of passing the exam to motivate them to study more intensively and thus allow a higher share of students to pass the exam. We note that these emails are sent alongside several other interventions implemented to motivate students to learn. Summative assessment in JACK and quizzes during the lecture on the game-based learning platform Kahoot!\footnote{\href{https://kahoot.it}{Kahoot!} is an application where a lecturer can conduct multiple-choice quizzes, \href{https://kahoot.it}{https://kahoot.it}.} give students the chance to get bonus points for the final exam.
In this paper, we aim to investigate the effectiveness of the warning emails we sent to the students during the semester. It turns out that sending warning emails has a positive but insignificant effect on performance in the final exam.
The remainder of this paper is organized as follows: Section \ref{sec:related} provides a brief overview of related work. Section \ref{sec:method} introduces the statistics course analyzed here. Section \ref{subsec:data} presents the available data and the models used. Section \ref{sec:analysis} discusses the empirical results. Section \ref{sec:conclusion} concludes.
\section{Related work}\label{sec:related}
The overall engagement of students is one of the main covariates of academic success. In the case of mathematical statistics, \cite{Sosa2011} show in a meta-study that the simultaneous use of traditional classroom lectures and e-assessment has a positive effect on students' success. \cite{Massing2018a} substantiate the previous result by analyzing the learning activity on the e-assessment platform JACK. The study reveals that learning effort and success, measured by the total number of (correct) submissions on JACK over the course, positively affects the final grade in the exam. \cite{Otto2017} add additional R-programming exercises to the JACK framework and show that the newly introduced exercise type helps to improve the general understanding of fundamental statistical concepts and thus ultimately yields better results in the final exam.
Due to the empirically observed positive effect of a multitude of variables on academic performance, prediction of the latter has become possible. Here various statistical learning methods are applied to educational data in order to predict student learning outcomes. Often, this outcome is measured with a binary response of pass/fail in order to be able to provide an early-warning to students. \cite{James2013,Hastie2009,Meier2016} give a comprehensive overview of popular statistical learning methods used in the literature. For an overview of how to implement an early-warning-system see, e.g., \cite{binMat2013}.
The literature has identified a number of important predictors. \cite{Gray2014} find evidence for the importance of socioeconomic and psychometric variables as well as pre-university grades, although \cite{Oskouei2014} show that, especially among the socioeconomic variables, the predictive capability can vary across countries. \cite{Baars2017} additionally identify post-admission variables like obtained credits, degree of exam participation and exam success rate in previous courses to have an influence on students' success. \cite{Macfadyen2010,Wolff2013,Elbadrawy2015} analyze the learning activity on learning management systems and are able to accurately predict students' performance with appropriate variables. In a more assessment-based fashion \cite{Huang2013,Burgos2018,Massing2018b} use activity in e-learning frameworks as well as the results of mid-term exams to predict students' success in the final exam. \cite{Asif2017} identify the performance in selected courses as a predictor for the academic achievement at the end of the program. For a literature review on educational data mining, see \cite{Romero2010}.
Several studies show the possibility of prediction of academic performance early after the start of the course.
\cite{akccapinar2019,akccapinar2019b,lu2018,baneres2020,chen2020, chung2019} investigate developing early prediction systems in different contexts of e-learning systems. They all show a high accuracy in predicting students' success early in the semester. \cite{conijn2016} analyze data from the Moodle Learning Management System (LMS) to predict students' performance. They show that for the purpose of early intervention, or conditional on in-between assessment grades, LMS data are of little value.
E-learning data allow the use of early warning systems to motivate students with a low probability of passing the course. There are several e-learning approaches which use early warning systems.
Purdue University (West Lafayette), Indiana, developed a similar early intervention system called ``Course Signals'', see \cite{Arnold2010}. Students receive an email notification as well as signal lights (red, yellow, and green) on a traffic signal to inform them about their learning status. \cite{Arnold2012} analyze the retention and performance outcomes realized since the implementation of Course Signals. The quantitative data indicate a strong impact on students' grades and retention behavior. However, they did not measure the effect of sending warning emails only. Instructors and students have provided information via surveys and interviews, which emphasize the usage of the system. \cite{csahin2019} designed an intervention engine, the Intelligent Intervention System (In2S), based on learning analytics. Students see signal lights for each assessment task as an instructional intervention. The system uses elements of gamification, such as a leader board, badges, and notifications, as motivational interventions. Learners using In2S indicate the usefulness of the system and want to use it in the context of other courses. \cite{Iver2019} built a Ninth-Grade Early Warning System and investigated the impact of a ninth-grade intervention on student attendance and course passing. Analyses based on the pre-specified student outcomes of attendance rate and percentage of ninth-grade course credits earned indicated no statistically significant impact of the intervention. On secondary outcome variables, results indicated that students in treatment schools were significantly less likely than control school students to be chronically absent.
The evaluations of these online-courses via interviews and surveys stress the possibility to detect at-risk students early in the semester.
There are further studies not only on warnings during the semester but also on additional incentives created to motivate learners. \cite{Edmunds2002, stanfield2008} tried to determine the effects of various incentives on the reading motivation of third- and fourth-grade learners. The findings of these studies indicated that there were no significant differences in reading motivation between students who received incentives and those who did not.\\
\cite{Parkin2012} aimed to evaluate how a range of technical interventions might encourage students to engage with feedback and formulate actions to improve future learning. They found that the online publication of grades and feedback and the adaptive release of grades were found to significantly enhance students’ engagement with their feedback.
A suitable tool to quantify the effect of early interventions on students’ performance is the \textit{regression discontinuity design} (RDD). In this design there are two groups of individuals, in which one group gets a specific treatment. The value of a covariate lying on either side of a fixed threshold determines the assignment to the two groups. The comparison of individuals with values of the covariate just below the threshold to those just above can be used to estimate the effect of the treatment on a specific outcome.
\cite{McEwan2008} use regression discontinuity approaches to estimate the effect of delayed school enrollment on student outcomes.
\cite{Angrist1999} use the regression discontinuity approach to estimate the effect of class size on test scores. \cite{Jacob2004} studied the effect of remedial education on student achievement using a regression discontinuity design.
\cite{gamse2008} examine the impact of the Reading First program, a federal educational program in the United States to ensure that all children learn to read well by the end of third grade.
\section{Course Structure}\label{sec:method}
This section outlines the structure of the analyzed course.
The e-assessment system JACK was used for a lecture and exercise course in mathematical statistics at the University of Duisburg-Essen, Germany.
The course started with 802 undergraduate first-year students and is obligatory for several business and economics programs as well as for teachers' training.
Out of these 802 students, only 337 took an exam at the end of the course, while the others dropped the course in this term (see Table \ref{tab:nostudents}). In addition to classical fill-in and multiple-choice exercises, the course also introduces the statistics software R by offering programming exercises in the e-assessment system JACK where the correctness of students' code is assessed.
\begin{table*}
\center
\begin{tabular}{lccc}
\toprule
Students who\ldots & counts & $\#$homework submissions & average per student\\
\midrule
took course& 802 & 175,480 & 283 \\
participated at an exam & 337 & 151,179 & 455\\
passed an exam & 127 & 73,641 & 580\\
\bottomrule
\end{tabular}
\caption{Overview of the number of students registered to the course and the number of submissions on JACK.}
\label{tab:nostudents}
\end{table*}
The course consisted of a weekly two-hour lecture, which introduced concepts, and a two-hour exercise class, which presented explanatory exercises and problems. Both classes were held classically in front of the auditorium. Due to the large number of students, these classes are limited in addressing students' different learning speeds and individual questions. To overcome this, to encourage self-reliant learning, and to support students who had difficulties attending classes, all homework was offered on JACK.
There were 175 different exercises on JACK, of which 43 were designed as R-programming exercises and the remainder as fill-in or multiple-choice exercises.
The individual learning success is supported by offering specific automated feedback and optional hints. In case of additional questions, which were neither covered by hints nor feedback, the students were able to ask questions in a Moodle help-forum.
In order to further encourage students to study continuously during the semester, and not only in the weeks prior to the exams, we offered five online tests using JACK. These tests lasted 40 minutes and took place at fixed time slots. All of the online tests contained fill-in or multiple-choice exercises as well as one R exercise. Participation only required a device with internet access, but no compulsory attendance on campus. These summative assessments allowed students to assess their individual state of knowledge during the lecture period. It was not compulsory for students to participate in the online tests in order to take the final exam at the end of the course.
Instead, we offered up to 10 bonus points to encourage participation. The bonus points were only added to final exam points if
students passed the exam without bonus. Students may earn up to 60 points in the exam.
Before the last online test of the previous cohort the points students reached in JACK exercises as well as in the previous online tests were analyzed in order to predict individual probabilities of passing the final exams. In the current cohort we split students into three groups, students with a high probability of passing the exam, one group with a moderate probability and the last group with a low probability of passing the exam. The students in the last two groups received a warning email, which was formulated more strictly for the students belonging to the group with a low probability of passing the course.
The final exams (2 in total) were also held electronically. While online tests during the semester could be solved at home with open books, the final exams were offered exclusively at university PC pools and proctored by academic staff. The exam consisted of R exercises ($\sim15\%$), short handwritten proofs ($\sim15\%$) and the remainder of fill-in exercises. Students can only retake an exam if they failed or did not take the previous
one (so that students can pass at most once), but can fail several times.\footnote{Students obtain 6 “malus points” for each failed exam of which they may collect at most 180 during their whole bachelor program.} The maximum points a student achieved in an exam (over both exams per semester) imply the final grade. The corresponding exam will be denoted as the final exam.
\section{Data and Model}\label{subsec:data}
\begin{table}[ht]
\centering
\begin{tabular}{lr}
\toprule
grade & counts \\
\midrule
1 & 3 \\
2 & 48 \\
3 & 67 \\
4 & 9 \\
5 & 210 \\
$\sum$ & 337 \\
\midrule
failure rate & 0.623 \\
\bottomrule
\end{tabular}
\caption{Overview of the distribution of grades}
\label{tab:freq_grades}
\end{table}
This section presents the data and model used for the analysis. The raw data are collected from three different sources. First, we collected each student's homework submissions on JACK, where we monitor the exercise ID, student ID, the number of points (on a scale from 0 to 100) and the time stamp with minute-level precision. The second data source comprises the online tests and Kahoot!~results, whereby students may earn extra points for their final grade, see Section \ref{sec:method}. Until the treatment was assigned, four out of five online tests had been conducted, and every online test was graded on a scale between 0 and 400 points.
Lastly, the response variable is given by the students' final exam results. For the final grade, which consists of the final exam result and earned bonus points, the following grading scheme was applied: very good (``1''), good (``2''), satisfactory (``3''), sufficient to pass (``4''), and failed (``5''). We assigned ``6'' to the 465 students who participated in the course but did not take any final exam. This reflects our view that students who did not take any exam were even less prepared than students who failed the exams. Table \ref{tab:freq_grades} displays the distribution of the final grades.
Over the whole course JACK registered $175,480$ submissions of homework exercises from students (see Table \ref{tab:nostudents}). We compile the following information for each student $i$ from the raw data:
\begin{itemize}
\item the number of submissions over the whole course
\item the number of submissions in a given period (e.g., between the first and the second online test)
\item The score, defined as, letting $t$ be a day during the semester,
\[ score_{it} := \sum_{j = 1}^{n} \zeta_{ijt} \]
where $\zeta_{ijt}$ is the number of points of the latest submission up to time $t$ of student $i$ in exercise $j, \ j = 1, \ldots , n$. Put differently, the score is the sum of points of the last submissions to every exercise until day $t$ and may be interpreted as the learning progress of student $i$ at time $t$.
\end{itemize}
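The score defined above can be computed directly from the raw submission records; a minimal sketch, assuming submissions are given as hypothetical (exercise, day, points) tuples:

```python
# Sketch of the score at day t: for each exercise, keep the latest
# submission up to day t and sum its points over all exercises.
def score(submissions, t):
    """submissions: iterable of (exercise_id, day, points) tuples."""
    latest = {}  # exercise_id -> (day, points) of latest submission <= t
    for exercise, day, points in submissions:
        if day <= t and (exercise not in latest or day >= latest[exercise][0]):
            latest[exercise] = (day, points)
    return sum(points for _, points in latest.values())
```

Because only the latest submission per exercise counts, repeated unsuccessful attempts do not inflate the score; it tracks the current state of knowledge rather than raw activity.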
To determine who should receive a treatment (warning mail), we mainly considered the results from the first four online tests, which had been conducted until then. We used a Logit model to predict the probability that a student with these online test results would pass the exam. The model was trained with the data obtained from the same course given two years earlier, see \cite{Massing2018a,Massing2018b}. The predicted probability will serve as our running variable $W$ in the \textit{RDD} (equation \eqref{eq:rdd_mod}). These predictions were transformed into a binary indicator: students with a predicted probability of passing larger than 0.4 received no message (0), while students with a probability below 0.4 received a warning message (1).
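The assignment rule can be sketched as follows; the Logit coefficients below are purely illustrative placeholders, not the ones estimated from the earlier cohort:

```python
import math

# Hypothetical sketch of the treatment assignment: a pre-trained Logit
# model maps total online-test points to a passing probability, which is
# then thresholded at 0.4.
BETA0, BETA1 = -3.0, 0.004  # assumed intercept / slope per test point

def pass_probability(total_test_points):
    """Logistic model P(pass | points) with illustrative coefficients."""
    return 1 / (1 + math.exp(-(BETA0 + BETA1 * total_test_points)))

def warning_flag(total_test_points, cutoff=0.4):
    """1 = send warning email, 0 = no message."""
    return 1 if pass_probability(total_test_points) < cutoff else 0
```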
However, as the online tests were not mandatory, we further considered the students' general collaboration during the course and thus modified whether or not an email was sent for a minority of the students. By considering the collaboration during the course, we wanted to eliminate two disadvantages that could arise if we based the decision solely on the online tests.
First, the online tests were not mandatory and not all students took the online test -- despite the high incentive to earn extra points for their final grade. There are several possible explanations for that, e.g., perhaps some students could not participate since the online tests were at a fixed date and time.
Second, the students were allowed to cooperate during the online tests, although all students were graded individually. This could potentially lead to the problem that a student performs well in the online tests although they did not actually comprehend the course content.
\begin{table}[]
\centering
\begin{tabular}{@{}llcc@{}}
\toprule
& \multicolumn{1}{l|}{} & \multicolumn{2}{l}{attendance} \\
& \multicolumn{1}{l|}{} & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{0} \\ \midrule
\multirow{2}{*}{warning} & \multicolumn{1}{l|}{1} & 183 & 425 \\
& \multicolumn{1}{l|}{0} & 151 & 40 \\
\bottomrule
\end{tabular}
\caption{Distribution between warnings and attendance in at least one exam}
\label{tab:attendance}
\end{table}
Given our data and treatment design, which were not distributed randomly, but rather based on the probability to pass the exam, we use the \textit{RDD} to analyze the effectiveness of our intervention.\footnote{We also investigated alternative modeling approaches like propensity score matching. However, the results were similar and RDD seems to be most suitable given the problem at hand.}
The method allows us to compare students around the cutoff point and hence derive a possible treatment effect. Our identifying assumption is that the participants around the cutoff are similar with respect to other (important) properties -- often referred to as quasi-random. Additionally, more advanced approaches allow to control for other influences. To distinguish between the different RD designs first consider
\begin{align}
\label{eq:rdd_mod}
Y_i = \beta_0 + \alpha T_i + \beta W_i + u_i
\end{align}
and let
\begin{align*}
T_i = \begin{cases}
1 , & W_i \leq c,\\
0, & W_i > c,
\end{cases}
\end{align*}
where $T_i$ indicates if a student received an email, which is determined by the threshold $c$, in our case 0.4 of the predicted probability to pass the exam $W_i$. $Y_i$ is the sum of points of student $i$ in his/her (latest) final exam and $u_i$ is the error term. Only students who attended the final exam at least once were included in the analysis ($n = 337$). This design assigns the treatment deterministically, meaning that a student receives the treatment if and only if $W_i \leq c$. The treatment effect is represented by $\alpha$.
The approach above is a \textit{sharp RDD} since the two groups (treatment, no treatment) are perfectly separated by the cutoff, which is a crucial assumption in this method to identify a potential treatment effect. Unfortunately, this is not the case in our design, as we also want to reflect the individual engagement with the course in our decision. Thus, the groups are no longer perfectly separated and we cannot apply this method to our data.
We therefore refer to an extension of this design, called \textit{fuzzy RDD}. In this case only the probability of receiving the treatment needs to increase considerably at the cutoff and not from 0 to 1 as in the sharp design. This non-parametric approach estimates a \textit{local average treatment effect (LATE)} $\alpha$ in equation \eqref{eq:2_stage} through an instrumental variable (IV) setting \cite{angristidentification1996}.
Consider the following model
\begin{align}
\label{eq:2_stage}
Y_i & = \beta_0+\alpha \ \widehat{T_i} + \delta_1 W_i + X_i^{\mathrm{T}} \ \pmb{\beta} + u_i \\
T_i & = \gamma_0 + \gamma_1 \ Z_i + \gamma_2 \ W_i + \nu_i, \label{eq:1_stage}
\end{align}
where equation \eqref{eq:1_stage} represents the first stage of the IV estimation with $T_i$ denoting if a student received the treatment, the instrument $Z_i = 1\left[ W_i \leq c \right]$ indicating if a student is below or above the cutoff of $c = 0.4$ (as in the sharp RDD), $W_i$ remains the predicted probability to pass the exam from the Logit model, while $\nu_i$ represents the error term. The fitted values $\widehat{T_i}$ of $T_i$ are inserted into equation \eqref{eq:2_stage} where $Y_i$ again represents the sum of points of student $i$ in his/her (latest) final exam. $u_i$ represents the error term, $X_i$ a covariate -- here the sum of points of online tests -- and $\alpha$ the (possible) treatment effect.
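As an illustration, the two-stage estimation of equations \eqref{eq:1_stage} and \eqref{eq:2_stage} can be sketched as follows (a minimal numpy sketch, not the exact estimator used in the analysis below; the function name and simulated inputs are ours):

```python
import numpy as np

def fuzzy_rdd_late(Y, T, W, Z, X=None):
    """Two-stage least squares for the fuzzy RDD.

    First stage:  T ~ 1 + Z + W          (cf. the first-stage equation)
    Second stage: Y ~ 1 + T_hat + W [+ X]
    Returns the coefficient on T_hat, i.e. the LATE alpha.
    """
    n = len(Y)
    # First stage: regress T on [1, Z, W] and take fitted values.
    F = np.column_stack([np.ones(n), Z, W])
    gamma, *_ = np.linalg.lstsq(F, T, rcond=None)
    T_hat = F @ gamma
    # Second stage: regress Y on [1, T_hat, W] plus optional covariates.
    cols = [np.ones(n), T_hat, W]
    if X is not None:
        cols.append(X)
    S = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(S, Y, rcond=None)
    return beta[1]  # alpha
```

On simulated data with a known treatment effect and a probability jump at the cutoff, this sketch recovers the effect up to sampling noise.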
\FloatBarrier
To determine a possible treatment effect the following model assumptions must be met: (i) the running variable $W$ needs to be continuous around the cutoff, see \cite{mccrarymanipulation2008}; otherwise, participants might be able to manipulate the treatment. (ii) The instrument $Z$ only appears in equation \eqref{eq:1_stage} for $T$ and not in equation \eqref{eq:2_stage} for $Y$, and the general assumptions for the IV estimation must hold \cite[pp. 883-885]{Cameron2005}.
For an IV estimation the two main requirements are that the instrument covaries with the variable $T$ and that the instrument does not covary with the error term $u_i$ (exogeneity). In an RDD these assumptions are met by construction since the instrument is a nonlinear (step) transformation of the running variable \cite{Imbens2008,Lee2010}.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{test_cont_big.png}
\caption{The McCrary sorting test for the running variable $W$, the predicted probability to pass the exam ($x$-axis). There is no jump in the density around the cutoff point of $0.4$, i.e., the densities left and right of the cutoff do not differ substantially.}
\label{fig:mccrary}
\end{figure}
\section{Empirical results} \label{sec:analysis}
Table \ref{tab:group_overview} shows that, as expected, the treatment (warning = 1) and control (warning = 0) groups differ substantially. Both the performances in the online tests and the JACK score (Testate$\_$1-4 and Score$\_$Jun25) as well as the -- observable -- effort (Begin$\_$2506)\footnote{Note that students may also or entirely learn outside of the JACK framework. However, since the final exam was taken via JACK, students have a strong incentive to also learn on the platform to get used to the framework.} are much lower in the treatment group. These numbers suggest that a warning mail could have an impact.
We first check that the assumption (i) of a continuous running variable with no jump at the cutoff is met. For this, we perform the \cite{mccrarymanipulation2008} sorting test which tests continuity of the density of our running variable $W$ -- the predicted probability to pass the exam -- around the cutoff. In order to estimate the effect $\alpha$ correctly there must not be a jump in the density at the cutoff. Otherwise some participants could have manipulated the treatment and the results would no longer be reliable.
Figure~\ref{fig:mccrary} displays the McCrary sorting test. Inspection suggests no major changes around the cutoff. The test confirms this with a $p$-value of 0.509. Since there is no jump around the cutoff and the students were not informed beforehand about the warning email, we can be relatively confident that the students were not able to manipulate the treatment. Apart from that, the incentive to worsen one's own performance in order to receive the treatment seems rather small, as there is no direct benefit from receiving the warning. The idea and a possible effect of the email lie in a change of the effort students invest from that time on.
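As a crude illustration of the idea behind the test (not the actual McCrary estimator, which fits local linear densities on each side of the cutoff; the helper name is ours):

```python
import numpy as np

def density_jump(w, cutoff, bandwidth):
    """Crude check for a density jump at the cutoff: compare the
    fraction of observations in a window just below vs. just above.
    (Only an illustrative simplification of the McCrary test.)"""
    w = np.asarray(w, dtype=float)
    below = np.mean((w >= cutoff - bandwidth) & (w < cutoff))
    above = np.mean((w >= cutoff) & (w < cutoff + bandwidth))
    return above - below
```

For a running variable that is continuous at the cutoff, this difference should be close to zero.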
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{model_plot_big.png}
\caption{Graphical illustration of the RDD with the probability to pass the exam $W$ on the $x$-axis and the exam points $Y$ on the $y$-axis. At the cutoff point of $c = 0.4$ we cannot see any (major) decrease in earned exam points.}
\label{fig:mod}
\end{figure}
\begin{table*}[t]
\begin{tabular}{@{}lccrrrrrrr@{}}
\toprule
variable & warning & count & min & Q0.25 & median & mean & Q0.75 & max & sd \\ \midrule
exam points & 0 & 151 & 3.40 & 17.50 & 25.30 & 23.80 & 30.00 & 47.00 & 8.20 \\
& 1 & 183 & 0.00 & 8.55 & 15.60 & 16.10 & 23.10 & 39.20 & 9.52\\
Testate$\_$1-4 & 0 & 191 & 341.00 & 579.00 & 755.00 & 761.00 & 900.00 & 1425.00 & 221.00 \\
& 1 & 607 & 0.00 & 0.00 & 0.00 & 98.50 & 167.00 & 700.00 & 133.00\\
Score$\_$Jun25 & 0 & 191 & 0.00 & 2216.00 & 3520.00 & 3799.00 & 5174.00 & 11580.00 & 2206.00 \\
& 1 & 425 & 0.00 & 200.00 & 983.00 & 1347.00 & 2033.00 & 8385.00 & 1363.00\\
Begin$\_$2506 & 0 & 191 & 0.00 & 74.50 & 125.00 & 128.00 & 204.00 & 677.00 & 125.00\\
& 1 & 425 & 0.00 & 5.00 & 25.00 & 47.80 & 60.00 & 499.00 & 67.40\\ \bottomrule
\end{tabular}
\caption{Overview of empirical quartiles, mean and standard deviation for the response variable and considered covariates}
\label{tab:group_overview}
\end{table*}
\begin{table*}[t]
\begin{tabular}{@{}lllllll@{}}
\toprule
& bandwidth & Observations & Estimate & Std. Error & $z$-value & $p$-value \\ \midrule
LATE & 0.255 & 126 & 0.146 & 4.852 & 0.030 & 0.976 \\
Half-BW & 0.127 & 54 & -3.151 & 10.759 & -0.293 & 0.770 \\
Double-BW & 0.507 & 306 & 5.295 & 3.436 & 1.541 & 0.123 \\ \midrule
& & & & & & \\
$F$-statistics & & & & & & \\
& F & Num. DoF & Denom. DoF & $p$-value & & \\ \midrule
LATE & 0.257 & 4 & 121 & 0.905 & & \\
Half-BW & 0.076 & 4 & 49 & 0.989 & & \\
Double-BW & 8.902 & 4 & 301 & 8.313e-07 & & \\ \bottomrule
\end{tabular}
\caption{Summary of the regression discontinuity model \textit{with} covariates. At the top: The estimate of the treatment effect of the warning. At the bottom: An $F$-test for the different bandwidths.}
\label{tab:mod}
\end{table*}
\begin{table*}[t]
\begin{tabular}{@{}lllllll@{}}
\toprule
& bandwidth & Observations & Estimate & Std. Error & $z$-value & $p$-value \\ \midrule
LATE & 0.255 & 126 & 0.193 & 4.889 & 0.040 & 0.968 \\
Half-BW & 0.127 & 54 & -2.867 & 10.075 & -0.285 & 0.776 \\
Double-BW & 0.510 & 306 & 6.662 & 3.274 & 2.035 & 0.041 \\ \midrule
& & & & & & \\
$F$-statistics & & & & & & \\
& F & Num. DoF & Denom. DoF & $p$-value & & \\ \midrule
LATE & 0.324 & 3 & 122 & 0.808 & & \\
Half-BW & 0.088 & 3 & 50 & 0.966 & & \\
Double-BW & 11.013 & 3 & 302 & 6.977e-07 & & \\ \bottomrule
\end{tabular}
\caption{Summary of the regression discontinuity model \textit{without} covariates. At the top: The estimate of the treatment effect of the warning. At the bottom: An $F$-test for the different bandwidths.}
\label{tab:mod_without_cov}
\end{table*}
Figure~\ref{fig:mod} and Tables~\ref{tab:mod} (with covariates) and \ref{tab:mod_without_cov} (without covariates) report our estimation results. The LATE is not significant, with $p$-values of $0.976$ (with covariates) and $0.968$ (without covariates). The benefit of comparing the mean outcomes on the left and right side of the cutoff rather than using polynomial regression is that this approach is more efficient, since we need to estimate fewer parameters \cite{Lee2010}.
We use a bandwidth of $0.255$, which was determined with the data-driven approach of \cite{imbensoptimal2009}. At the bottom of Tables~\ref{tab:mod} and \ref{tab:mod_without_cov}, the $F$-tests for the different bandwidth choices are displayed. The $F$-test checks whether the bandwidth is too wide, since a too-wide bandwidth would lead to a biased estimate of the effect and would violate the assumption that the participants around the cutoff only differ in receiving the treatment. Since the $F$-test is not rejected for the LATE and half bandwidth (Half-BW), but is rejected for the double bandwidth (Double-BW), we conclude that the bandwidth of $0.255$ yields the most efficient estimation of the LATE $\alpha$ without bias through the bandwidth choice. Therefore, only students with a probability inside the limits of $0.4$ (cutoff) $\pm \ 0.255$ (bandwidth) are included in the analysis.
\section{Conclusions and Discussion}\label{sec:conclusion}
In this paper we analyzed whether students who perform rather poorly in a current course can be positively influenced by a warning mail. The results of our RDD do not provide any evidence that the warning mail has a significant effect on the results (or behavior) of the students. This might have several reasons. For instance, many of the participants who received a warning did not take part in any final exam (see Table~\ref{tab:attendance}), which likely compromises the detection of an effect. There are several possible explanations for that. On the one hand, the warning might make students more likely to postpone participation to a later semester: the email could give students the impression that the chances of getting a good grade are already relatively low, so they might prefer to repeat the course a year later. On the other hand, the rate of students who did not take the exam is, in our experience, similar to previous editions of the course. In a sense, this is also a positive outcome, as we then at least prevent students from collecting malus points, cf.~footnote 2.
Another important aspect in this analysis is that students can earn extra points through the online tests and the Kahoot!~game. From the perspective of the students this is probably an even bigger incentive than the warning mail. These incentives during the semester could be taken into account in further research. Further incentives as well as further feedback, in combination with the warning emails during the semester, might have a greater effect on student performance than the warning emails alone, which remains to be studied in the future.
To conclude, we were not able to detect a significant effect of the warning mails in our design. This is still noteworthy because successful motivation of weak and modest students remains challenging for instructors. We will keep track of the warning mail design in future editions of our course.
\section*{Acknowledgments}
We thank all colleagues who contributed to the course ``Induktive
Statistik'' in the summer term 2019.
Part of the work on this project was funded by the German Federal
Ministry of Education and Research under grant numbers 01PL16075 and 01JA1910 and by the Foundation for Innovation in University Teaching under grant number FBM2020-EA-1190-00081.
\bibliographystyle{abbrv}
\section{Introduction}\label{sec:intrroduction}
The $\theta$ parameter of the 4d Yang-Mills theory controls relative
weights of different topological sectors in the path integral.
Despite its long history, it remains a challenging problem to
identify the effect of the $\theta$ parameter on the non-perturbative
dynamics of the theory.
For the special value $\theta=\pi$ the Lagrangian has CP symmetry, and
we can ask whether or not the CP symmetry is spontaneously broken.
In the large $N$ limit~\cite{tHooft:1973alw} spontaneous CP violation at
$\theta=\pi$ was demonstrated in
Refs.~\cite{Witten:1980sp,tHooft:1981bkw,Witten:1998uka}.
For finite $N$ a mixed anomaly between the CP symmetry and the
$\mathbb{Z}_N$ center symmetry shows that the CP symmetry in the
confining phase has to be broken~\cite{Gaiotto:2017yup}.
A similar conclusion was derived by studying restoration of the
equivalence of local observables in $\mathrm{SU}$($N$) and $\mathrm{SU}$($N$)/$\mathbb{Z}_N$
gauge theories in the infinite volume limit~\cite{Kitano:2017jng}.
(See also Refs.~\cite{Azcoiti:2003ai,Yamazaki:2017dra,Wan:2018zql}.)
While these theoretical developments have narrowed down possible
scenarios, an explicit nonperturbative calculation is necessary to
unambiguously settle the fate of the CP symmetry at $\theta=\pi$.
Any direct numerical simulation at $\theta=\pi$, however, has been
difficult due to the notorious sign problem~\footnote{For recent related
efforts towards direct simulations, see, for example,
Refs.~\cite{Hirasawa:2020bnl,Gattringer:2020mbf}.}.
In Ref.~\cite{Kitano:2020mfk} three of the authors of the present paper
studied the vacuum energy density of the 4d $\mathrm{SU}$(2) Yang-Mills theory
around $\theta=0$ by lattice numerical simulations. The case of $\mathrm{SU}$(2)
gauge group is of particular interest since $N=2$ is farthest away from
the large $N$ limit: there is a well-known parallel between 2d CP$^1$
and 4d $\mathrm{SU}$(2) model~\footnote{See Refs.~\cite{Yamazaki:2017ulc,
Yamazaki:2017dra} for more precise connections between the two.}, and
the known vacuum at $\th=\pi$ in the former \cite{CP1} alludes to the
appearance of gapless theory in the latter.
By observing that the first two numerical coefficients in the $\theta$
expansion obey the large $N$ scaling~\footnote{See
Refs.~\cite{Lucini:2001ej,DelDebbio:2002xa,Bonati:2016tvi} for large $N$
scaling of the first two coefficients in the $\theta$ expansion in the
SU($N$) gauge theory.
For SU(2) theory the first coefficient in the $\theta$ expansion, the
topological susceptibility $\chi$, was estimated in
Refs.~\cite{deForcrand:1997esx,Alles:1997qe,DeGrand:1997gu,Lucini:2001ej,Berg:2017tqu}.},
it was inferred in Ref.~\cite{Kitano:2020mfk} that the 4d $\mathrm{SU}$(2)
Yang-Mills theory at $\theta=\pi$ has spontaneous CP breaking, contrary
to the naive expectation from the 2d CP$^1$ model.
In this work, we develop a subvolume method to explore the $\theta$
dependence of the free energy \emph{without any series expansion in
$\theta$} and apply it to the 4d $\mathrm{SU}$(2) theory.
We find that it indeed works and show evidence of spontaneous CP
violation at $\theta=\pi$ at zero temperature, consistent with the
results of Ref.~\cite{Kitano:2020mfk}.
The subvolume method here is inspired by Ref.~\cite{Luscher:1978rn} and
similar to that introduced in Ref.~\cite{KeithHynes:2008rw} to study 2d
CP$^{N\!-\!1}$\ model.
\section{Subvolume Method and Lattice Set Up}
\label{sec:method}
In what follows, the subvolume method is described.
After generating a number of gauge configurations at $\th=0$ and
implementing the $n_{\rm APE}$ steps of APE
smearing~\cite{Albanese:1987ds}, the topological charge density $q(x)$
is calculated with the five-loop improved topological charge
operator~\cite{deForcrand:1997esx} on the lattice.
Defining the subvolume topological charge $Q_{\rm sub}$,
\begin{align}
& Q_{\rm sub} = \displaystyle\sum_{x\in V_{\rm sub}} q(x)\ ,
\end{align}
we calculate the free energy density for the subvolume
as~\cite{Luscher:1978rn,KeithHynes:2008rw},
\begin{align}
& f_{\rm sub}(\th)
= \frac{-1}{V_{\rm sub}}\ln \frac{Z(\th)}{Z(0)}
= \frac{-1}{V_{\rm sub}}\ln \,\langle\, \cos\left(\theta\,Q_{\rm sub}\right)\,\rangle
\ ,
\label{eq:fL}\\
& Z(\th)=\int\!\!{\cal D}U\,e^{-S_g+i\th\,Q_{\rm sub}}
\ ,
\end{align}
where $U$ denotes the link variable and $S_g$ the gauge action.
Note that the expectation value $\langle\cdots\rangle$ is estimated on
the $\th=0$ configurations.
The free energy density is then obtained as the infinite volume
limit of $f_{\rm sub}(\th)$:
\begin{align}
& f(\th)
= \lim_{V_{\rm sub}\to\infty} f_{\rm sub}(\th)\ .
\label{eq:fsub}
\end{align}
The goal of this study is to answer whether spontaneous CP
violation occurs at $\theta=\pi$.
The order parameter is therefore useful and is calculated through
\begin{align}
& \frac{d\,f(\th)}{d\,\th}
= \lim_{V_{\rm sub}\rightarrow\infty}
\frac{d\,f_{\rm sub}(\th)}{d\,\th}\ ,
\label{eq:dfdth}\\
& \frac{d\,f_{\rm sub}(\th)}{d\,\th}
= \frac{1}{V_{\rm sub}}
= \frac{\left\langle Q_{\rm sub}\, \sin(\th\,Q_{\rm sub}) \right\rangle}
{\langle\,\cos(\th Q_{\rm sub})\,\rangle}\, .
\label{eq:dfdthL}
\end{align}
$f$ and $df/d\th$ are evaluated separately and later used to make a
consistency check.
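The estimators \eqref{eq:fL} and \eqref{eq:dfdthL} can be sketched as follows (a minimal numpy sketch assuming the samples of $Q_{\rm sub}$ have already been measured on the $\th=0$ ensemble; the function name is ours):

```python
import numpy as np

def subvolume_free_energy(Q_sub, theta, V_sub):
    """Estimate f_sub(theta) and df_sub/dtheta from samples of the
    subvolume topological charge measured on theta=0 configurations.

    f_sub  = -(1/V_sub) ln < cos(theta Q_sub) >
    df/dth =  (1/V_sub) < Q_sub sin(theta Q_sub) > / < cos(theta Q_sub) >
    """
    Q = np.asarray(Q_sub, dtype=float)
    cos_avg = np.mean(np.cos(theta * Q))
    if cos_avg <= 0.0:
        # The sign problem of the method: the estimator breaks down
        # when <cos(theta Q_sub)> fluctuates across zero.
        raise ValueError("sign problem: <cos(theta Q_sub)> <= 0")
    f = -np.log(cos_avg) / V_sub
    df = np.mean(Q * np.sin(theta * Q)) / cos_avg / V_sub
    return f, df
```

For Gaussian-distributed $Q_{\rm sub}$ with variance $\chi V_{\rm sub}$ this reproduces $f(\th)=\chi\th^2/2$ and $df/d\th=\chi\th$, which serves as a quick sanity check.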
Some cautions are in order.
First, the size of the subvolume $V_{\rm sub}$ cannot be arbitrary.
Obviously it has to be large enough to cover the typical correlation
length of the system, which is considered to be around
$1/(aT_c)= 9.50$~\cite{Giudice:2017dor} in the lattice unit.
We thus restrict the size of the subvolume to $V_{\rm sub} \ge 10^4$.
Suppose that $V_{\rm sub}$ is large enough but $V_{\rm sub}\ll V_{\rm full}$, where
$V_{\rm full}$ denotes the full volume.
Then, the resulting $f(\theta)$ is expected to be independent of
$V_{\rm full}$, and $f_{\rm sub}$ would show a scaling behavior as a function
of $V_{\rm sub}$.
As $V_{\rm sub}$ grows and approaches $V_{\rm full}$, $Q_{\rm sub}$ becomes close to
the integer global charge $Q$, and $f_{\rm sub}$ starts to converge to the
full-volume result, in which we are not interested, as will become clear later.
The critical goal in this method is to identify the scaling behavior
from which the infinite volume limit is extracted.
Secondly, the observables \eqref{eq:fL} and \eqref{eq:dfdthL} contain
the expectation value $\langle\cos(\th Q_{\rm sub})\rangle$.
Either observable becomes incalculable when
$\langle\cos(\th Q_{\rm sub})\rangle$ fluctuates across zero.
This is the sign problem in this method and sets the $\th$ dependent
upper limit on the size of $V_{\rm sub}$.
Thus, another crucial point is whether the scaling behavior is
realized before $V_{\rm sub}$ reaches the upper limit.
We will return to this point in the discussion section.
The free energy density of 4d $\mathrm{SU}$(2) Yang-Mills theory is calculated at
high and zero temperatures below.
We employ the configurations generated in our previous work at
$T=0$~\cite{Kitano:2020mfk}, while the ensemble corresponding to
$T=1.2\, T_c$ is newly generated at the lattice coupling $\beta=1.975$,
the same as the $T=0$ case.
The simulation parameters are summarized in Tab.~\ref{tab:lat-sim}.
\begin{table}[t]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
$N_S^3 \times N_T$ & $T/T_c$ &
[${n_{\rm APE}}_{\rm min},\ {n_{\rm APE}}_{\rm max}$]
& statistics & $a^4\chi \times 10^5$\\
\hline
$24^3\times 8$ & 1.21 & [25,\ 45] & 10,000 & 1.35(5) \\
$24^3\times 48$ & 0 & [20,\ 40] & 68,000 & 2.54(3) \\
\hline
\end{tabular}
\caption{
The lattice parameters of the ensembles.
The tree-level Symanzik improved action~\cite{Weisz:1982zw} is adopted
with the periodic boundary conditions in all four directions.
The value $1/(aT_c)=9.50$~\cite{Giudice:2017dor} gives the value of
$T/T_c$ in the table, where $T_c$ is the critical temperature at
$\th=0$.
}
\label{tab:lat-sim}
\end{center}
\end{table}
The topological charge density measured at each smearing step is
uniformly shifted as $q(x)\to q(x)+\epsilon$ on every configuration so
that the global topological charge $Q=\sum_{x\in V_{\rm full}}q(x)$
takes the integer closest to the original value.
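The uniform shift can be sketched as follows (a hypothetical helper for illustration; the actual code used in the simulations is not shown in the paper):

```python
import numpy as np

def shift_charge_density(q):
    """Uniformly shift the charge density q(x) so that the global
    charge Q = sum_x q(x) becomes the nearest integer."""
    Q = q.sum()
    eps = (np.rint(Q) - Q) / q.size
    return q + eps
```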
The calculation of $Q_{\rm sub}$ is carried out every five smearing steps.
For the APE smearing, we take $\alpha=0.6$ in the notation of
Ref.~\cite{Alexandrou:2017hqw}.
Topological observables on the lattice can be distorted by topological
lumps originating from lattice artifacts.
One can take away those by the smearing procedure, but at the same time
the smearing may deform physical topological excitations, too.
We studied this point in detail in Ref.~\cite{Kitano:2020mfk}, and
developed the procedure to restore the physical information.
The procedure consists of the extrapolation of the observables to the
zero smearing limit by fitting over a suitable interval of the smearing
steps.
The fit range is fixed in advance by examining the response of the
global topological charge to the smearing as done in
Ref.~\cite{Kitano:2020mfk}.
The resulting fit ranges and the topological susceptibilities
$\chi=\langle\, Q^2 \,\rangle/V_{\rm full}$ thus determined using the global
topological charge $Q$ are shown in Tab.~\ref{tab:lat-sim} for later
use.
The $\theta$-dependence is explored in the range of $\th=k\,\pi/10$ with
$k\in [1,\,20]$.
Each configuration is separated by ten Hybrid Monte Carlo (HMC) trajectories.
In the following analysis, statistical errors are estimated by the
single-elimination jackknife method with bin sizes of 500 and 100
configurations for zero and high temperatures, respectively.
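A minimal sketch of the binned single-elimination jackknife for the error of a mean (the helper name and interface are ours):

```python
import numpy as np

def binned_jackknife(samples, bin_size):
    """Single-elimination jackknife error of the mean with binning.

    Consecutive samples are averaged into bins of `bin_size` (to tame
    autocorrelation); each jackknife sample leaves out one bin.
    """
    x = np.asarray(samples, dtype=float)
    n_bins = len(x) // bin_size
    bins = x[: n_bins * bin_size].reshape(n_bins, bin_size).mean(axis=1)
    total = bins.sum()
    jk = (total - bins) / (n_bins - 1)  # leave-one-out means
    mean = bins.mean()
    err = np.sqrt((n_bins - 1) * np.mean((jk - mean) ** 2))
    return mean, err
```

For uncorrelated data this reduces to the usual standard error of the mean.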
We mainly show the analysis of $f(\theta)$, but $df(\theta)/d\theta$ is
analyzed in parallel with similar quality.
\section{Testing the Method at High Temperature}
\label{sec:finiteT}
We first apply the subvolume method to the calculation of the free
energy density above $T_c$, where the instanton prediction,
$f(\th)\sim\chi(1-\cos\th)$~\cite{'tHooft:1976fv,Callan:1977gz,Gross:1980br},
is believed to be valid and numerically supported for $\mathrm{SU}$($N$) with
$N\ge 3$~\cite{Bonati:2013tt,Frison:2016vuc} as well.
Using translational invariance, a single-sized subvolume
$V_{\rm sub}/a^4=l^3\times N_T$ with $l=10,\ 12,\ \cdots,\ 24$ is taken from 64
places per configuration, and the results are averaged.
The $l$ dependence of $f_{\rm sub}$ is shown in Fig.~\ref{fig:v-vs-f-24x8}.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.5 \textwidth]
{v-vs-f-24x8-p.eps}
\end{center}
\caption{
The subvolume dependence of $f_{\rm sub}(\th)$ and the extrapolation to
the infinite volume limit.
}
\label{fig:v-vs-f-24x8}
\end{figure}
It is found that in general the measured $l$ dependence is not constant
and a leading correction linear in $1/l$ is unavoidable.
The linear dependence is seen for $\th=\pi/2$ for $l\le 22$, whereas it
ends around $l=20$ for $\th=\pi$ and it is hard to determine the linear
region for $\th=3\pi/2$.
Thus, it turns out to be difficult to identify the scaling region
unambiguously especially when $\th$ is large, and hence we give up the
precise determination.
Instead, we choose three fit ranges in each extrapolation and try to
estimate the potential size of the systematic uncertainty due to the
ambiguity of the scaling region.
Three fit ranges, $l\in [12,16]$, $[14,18]$ and $[16,20]$, are examined
when fitting to the expected scaling behavior
\begin{align}
f_{\rm sub}(\th) = f(\th) + \frac{a^{-1}\,s(\th)}{l}\, ,
\label{eq:v-fit-form}
\end{align}
where $s(\th)$ denotes the surface tension of the nonzero $\th$ domain
and $a$ the lattice spacing.
All fits performed in this analysis yield $\chi^2/\mbox{dof} < 3$.
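The extrapolation with Eq.~\eqref{eq:v-fit-form} amounts to a weighted linear fit in $1/l$; a minimal sketch (the function name is ours, and the actual fits also track $\chi^2/\mbox{dof}$):

```python
import numpy as np

def extrapolate_subvolume(l_vals, f_vals, f_errs):
    """Weighted linear fit f_sub(l) = f + s'/l; returns the intercept f
    (the infinite-volume estimate) and the slope s'."""
    x = 1.0 / np.asarray(l_vals, dtype=float)
    y = np.asarray(f_vals, dtype=float)
    w = 1.0 / np.asarray(f_errs, dtype=float) ** 2
    # Weighted normal equations for y = f + s*x.
    A = np.column_stack([np.ones_like(x), x])
    Aw = A * w[:, None]
    coef = np.linalg.solve(A.T @ Aw, Aw.T @ y)
    return coef[0], coef[1]
```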
It is interesting to see that the relative ordering
$f_{\rm sub}(3\pi/2)>f_{\rm sub}(\pi)$ at small $l$ flips toward the large
$l$ limit, and $f(\th)$ ends up being a non-monotonic function.
Since the data for $f_{\rm sub}$ smoothly (but sometimes rapidly) approach the
full-volume ($24^3\times 8$) results from above, an
extrapolation fitting the data near the full volume tends to be smaller
than one fitting the data far from the full volume.
As a result, the discrepancy, {\it i.e.} the potential size of the
systematic uncertainty, turns out to be larger at larger $\th$.
The results thus obtained are then extrapolated to $n_{\rm APE}=0$ at each
value of $\th$ with the fit range shown in Tab.~\ref{tab:lat-sim}.
In the extrapolation, the linear fit goes well with
$\chi^2/\mbox{dof}<3$.
The stability against small shifts of the fit range is seen in
Fig.~\ref{fig:nape-vs-f-24x8}.
\begin{figure}[tb]
\centering
\includegraphics[width=0.5 \textwidth]
{nape-vs-f-24x8-p.eps}
\caption{
The linear extrapolation of $f(\th)$ to $n_{\rm APE}=0$, where $f(\th)$ is
obtained by the fit with $l\in[14,18]$.
}
\label{fig:nape-vs-f-24x8}
\end{figure}
Finally, the free energy densities obtained with the three fit ranges are
shown in Fig.~\ref{fig:th-vs-f-24x8} together with the full-volume
result (filled squares), where $f(\th)$ is normalized by the topological
susceptibility in Tab.~\ref{tab:lat-sim}.
\begin{figure}[tb]
\centering
\includegraphics[width=0.5 \textwidth]
{th-vs-f-24x8-p.eps}
\caption{
The $\theta$ dependence of $f(\th)$ at $T=1.2\,T_c$.
The results obtained with different fit ranges in
Fig.~\ref{fig:v-vs-f-24x8}, $l\in [12,16]$, $[14,18]$ and $[16,20]$,
are shown in triangle-up, circle and triangle-down, respectively.
}
\label{fig:th-vs-f-24x8}
\end{figure}
The prediction from the dilute instanton gas approximation,
$1-\cos(\th)$, is shown by the dashed curve.
The function, $\th^2/2$, is also shown as the solid curve for
comparison.
Taking into account the uncertainty arising from the ambiguity of the
scaling region, the numerical results are consistent with the
instanton prediction.
Note that non-monotonic behavior of $f(\th)$ seems robust at high
temperature but is far from obvious before the extrapolations, as the
surface tension term in Eq.~\eqref{eq:v-fit-form} is monotonic.
$f(\th)$ can also be obtained from the numerical integration of
$df(\th)/d\th$ as shown by the dotted curves.
The agreement with those curves supports that the two nontrivial
extrapolations included in the whole analysis do not pick up accidental
fluctuations and are stable.
The full-volume result is found to agree well with the instanton
prediction.
One may think that this is the simplest way to obtain $f(\th)$.
However, we will see that it does not work at $T=0$.
From the test, assuming that the instanton prediction is valid at high
temperature, we learn that the scaling behavior of $f_{\rm sub}$ would be
linear and the region showing such a behavior starts around the
dynamical length scale ($\sim 1/(aT_c)$).
\section{Applying to Zero Temperature}
\label{sec:zeroT}
Next we apply the subvolume method to calculate the vacuum energy
density.
This time the subvolume is defined by $V_{\rm sub}/a^4=l^4$ with
$l=10, 12,\ \cdots,\ 24$ and taken from 512 places per configuration.
The $l$ dependence of $f_{\rm sub}(\th)$ is shown in
Fig.~\ref{fig:v-vs-f-24x48} as before.
\begin{figure}[tb]
\centering
\includegraphics[width=0.5 \textwidth]
{v-vs-f-24x48-p.eps}
\caption{
The linear extrapolation of $f_{\rm sub}(\th)$
to the infinite volume limit.
}
\label{fig:v-vs-f-24x48}
\end{figure}
Due to the sign problem in this method, some results at large $\th$ and
large $l$ could not be calculated.
But the available data show linear behavior.
Following the previous analysis, three fit ranges of $l\in [10,14]$,
$[12,16]$ and $[14,18]$ are taken in the fit to \eqref{eq:v-fit-form} to
estimate the systematic uncertainty.
Contrary to the high temperature case, $f(\th)$ turns out to be stable
against the variation of the fit range and does not show any sign of
the flip, indicating a monotonic behavior of $f(\th)$ as a function of
$\th$.
The linear extrapolation to $n_{\rm APE}=0$ is carried out with the fit range
shown in Tab.~\ref{tab:lat-sim}, and the fit is found to work well
with $\chi^2/$dof $<3$ as shown in Fig.~\ref{fig:nape-vs-f-24x48}.
\begin{figure}[bt]
\centering
\includegraphics[width=0.5 \textwidth]
{nape-vs-f-24x48-p.eps}
\caption{
The linear extrapolation of $a^4f(\th)$
to $n_{\rm APE}=0$ for the data obtained with $l\in[12,16]$.
}
\label{fig:nape-vs-f-24x48}
\end{figure}
The stability against shift of the fit range is also confirmed.
Finally, the resulting $f(\th)$ and $df(\th)/d\th$ are shown in
Fig.~\ref{fig:th-vs-f-24x48-f-df} together with the predictions from the
large $N$ ($\th^2/2$) and the instanton calculus ($1-\cos\th$).
\begin{figure}[tb]
\centering
\includegraphics[width=0.5 \textwidth]
{th-vs-f-24x48-p.eps} \\
\includegraphics[width=0.5 \textwidth]
{th-vs-df-24x48-p.eps}
\caption{
$f(\th)$ (top) and $df(\th)/d\th$ (bottom).
The results obtained with different fit ranges in
Fig.~\ref{fig:v-vs-f-24x48}, $l\in [10,14]$, $[12,16]$ and $[14,18]$,
are shown in triangle-up, circle and triangle-down, respectively.
}
\label{fig:th-vs-f-24x48-f-df}
\end{figure}
The stability of the two extrapolations during the analysis is confirmed
as $f(\th)$ and $df(\th)/d\th$ well agree with the dotted curves.
While the full volume calculation works only in the vicinity of $\th=0$,
the subvolume method succeeds in calculating, at least, up to $\th\sim \pi$.
There are crucial differences from the high temperature case.
First, the different choices of the fit range in $l$ yield consistent
results, and hence the potential systematic error from the ambiguity of
the scaling region seems to be under control.
Second, $f(\th)$ is a monotonically increasing function, at least, to
$\th\sim\pi$, and the direct calculation of $df(\th)/d\th$ clearly shows
$d\,f(\th)/d\th|_{\th=\pi} \ne 0$.
Since $d\,f(\th)/d\th = -i \langle q(x) \rangle$ is CP odd, we conclude
that CP is spontaneously broken at $\theta = \pi$ in the vacuum of the
4d $\mathrm{SU}$(2) Yang-Mills theory~\footnote{See also Refs.~\cite{Unsal:2012zj}
and \cite{Unsal:2020yeh} for analytic discussions.} and that there is a
phase transition to recover the CP symmetry at some finite temperature.
In other words, it is found that the 4d SU(2) Yang-Mills theory is in the
large-$N$ class unlike the 2d CP$^1$ model~\footnote{There is a logical
possibility that there is a phase transition at some $\theta$ below
$\pi$, which the subvolume method could not detect.
In that case, the CP symmetry may be left unbroken at $\theta = \pi$.
In any case, the fact that the free energy does not show the $2\pi$
periodicity indicates that there are multiple branches in the vacuum
structure as in the case of the large $N$ limit.
We thank Yuya Tanizaki for discussion on this point.}.
\section{Discussion}
\label{sec:discussion}
The symmetry of $\mathrm{SU}$($N$) gauge theories indicates $f(\th)=f(-\th)$ and
$f(\th)=f(\th+2\pi)$.
In the subvolume method, $f(\th)=f(-\th)$ is automatic from
\eqref{eq:fL} but the $2\pi$-periodicity is not seen in $f(\th)$ shown
in Fig.~\ref{fig:th-vs-f-24x48-f-df}.
The subvolume method is equivalent to modifying the value of $\th$
inside the subvolume.
If the difference of $\th$ is a multiple of $2 \pi$ and the calculation
respects the $2\pi$-periodicity, the free energy would scale as the
surface area of the subvolume when the subvolume is large enough.
The lack of $2\pi$ periodicity in the free energy density should thus be
interpreted as the presence of a meta-stable vacuum for a fixed value of
$\theta$ (except for $\theta = \pi$ where two vacua interchanged by CP
are degenerate and stable).
Thus, we expect that the meta-stable vacuum should eventually decay into
the stable one by the creation of a dynamical domain wall that attaches
to the interface.
The absence of such a decay of the domain into a domain wall in the
lattice calculation has an analog in the calculation of the static
potential~\footnote{Similar reasoning is found for the 2d CP$^{N\!-\!1}$\ model in
\cite{KeithHynes:2008rw}.}.
The static potential is calculated by inserting a Wilson loop, and
should show the string breaking for configurations with light dynamical
quarks when the two test charges are distant enough.
But it does not occur, at least, within naive methods, and the resulting
potential sticks to the original branch even after passing the
transition point.
The probable reason is that the overlap between the original state with two
static charges and another lower-energy state with two mesons is
extremely small.
We infer that the same happens in the calculation of $f(\th)$ for
$\th>\pi$, so that the first-order phase transition is missed~\footnote{We
expect that the subvolume method can capture second-order transitions in
principle because meta-stable states do not exist.}.
It is clearly interesting to directly see the formation of the domain
wall on the lattice though it would not be straightforward.
We have mentioned the $\th$-dependent upper limit on the size of $V_{\rm sub}$
in sec.~\ref{sec:method}.
We examine the relation between the limit and
$\th\langle|Q_{\rm sub}|\rangle$ at $T=0$.
Figure~\ref{fig:allowed-range} shows $\th\langle|Q_{\rm sub}|\rangle$ as a
function of $1/l$, where we have used the approximate relation
$\langle|Q_{\rm sub}|\rangle=(l^4/V_{\rm full})^{1/2}\langle|Q|\rangle$ and
the measured value of $\langle|Q|\rangle$ at $T=0$ by ignoring the
corrections to the relation of $O(1/l)$ and $O(1/(\chi V_{\rm sub}))$.
\begin{figure}[tb]
\centering
\includegraphics[width=0.5 \textwidth]
{allowed-range-24x48-p.eps}
\caption{
Subvolume dependence of $\th\langle\, |Q_{\rm sub}| \,\rangle$.
The open symbols represent the calculations which succeeded while the
filled ones represent those which failed.
}
\label{fig:allowed-range}
\end{figure}
In the figure, the filled symbols represent the points where the
calculations failed due to the sign problem.
It is seen that the upper limit indeed decreases with $\th$.
Numerical investigation suggests that the upper limit on the subvolume
scales as $1/\th$.
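As a rough numerical illustration of this estimate (all values here are hypothetical and chosen by us for illustration, not taken from the simulation), the approximate relation $\langle|Q_{\rm sub}|\rangle=(l^4/V_{\rm full})^{1/2}\langle|Q|\rangle$ can be evaluated directly:

```python
import math

def theta_q_sub(theta, l, v_full, q_abs_mean):
    """theta * <|Q_sub|>, using the approximate relation
    <|Q_sub|> = (l^4 / V_full)^(1/2) * <|Q|>, with the O(1/l)
    and O(1/(chi V_sub)) corrections ignored."""
    return theta * math.sqrt(l**4 / v_full) * q_abs_mean

# Hypothetical numbers: a 24^3 x 48 lattice and <|Q|> = 2.0.
V_FULL = 24**3 * 48
Q_ABS = 2.0
for theta in (math.pi / 2, math.pi):
    for l in (8, 12, 16):
        print(f"theta={theta:.2f}  l={l:2d}  "
              f"theta<|Q_sub|>={theta_q_sub(theta, l, V_FULL, Q_ABS):.3f}")
```

The sketch makes the qualitative point of the figure explicit: $\th\langle|Q_{\rm sub}|\rangle$ grows both with the subvolume size $l$ and with $\th$, so the subvolume that keeps this quantity below any fixed threshold shrinks as $\th$ increases.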
\section{Summary}
\label{sec:summary}
We developed the subvolume method, which enables us to extract the $\th$
dependence of the free energy density in 4d Yang-Mills theory not
restricted to $\theta\sim 0$.
At high temperature, the method yields $\th$ dependence consistent with
the instanton prediction, as expected, within the large uncertainty due
to the ambiguity of the scaling region.
To fix this ambiguity, it is necessary to go to larger lattices.
On the other hand, at zero temperature the sign problem arises instead,
but still $f(\th)$ could be calculated to $\th\sim\pi$ with the
systematic uncertainty under control.
Combining the numerical result with the theoretical requirement leads to
the conclusion that the vacuum of 4d $\mathrm{SU}$(2) Yang-Mills theory undergoes
spontaneous CP violation at $\th=\pi$, as the large $N$ theory does.
Although the overlap problem prohibits the domain-wall from being
formed, it is interesting to learn that such an object actually exists in
the Yang-Mills theory~\cite{Luscher:1978rn}.
We have tested the stability of the results by exchanging the order of
the extrapolations and obtained consistent results with enlarged
uncertainties.
In order to promote this study to the quantitative level, it is
necessary to perform lattice simulations with larger volumes and
finer lattice spacings.
Further studies will be presented in the forthcoming
paper~\cite{InProgress}.
It is a fascinating question whether our method is applicable to
other problems with sign problems, such as gauge theories at finite
chemical potential.
While numerical results are not accurate past $\th \sim 3 \pi/2$,
there are indications that the derivative $df(\theta)/d\theta$ decreases
past $\theta=\pi$, and becomes smaller near $\theta \sim 2\pi$.
This is consistent with the expectation \cite{Yamazaki:2017ulc} that
there are two meta-stable branches of the $\mathrm{SU}$(2) theory, each of which
has $4\pi$ periodicity.
\section*{Acknowledgments}
This work is based in part on the Bridge++ code~\cite{Ueda:2014rya}
and is supported in part by JSPS KAKENHI Grant-in-Aid
for Scientific Research (Nos.~19H00689 [RK, NY, MY], 18K03662 [NY],
19K03820, 20H05850, 20H05860 [MY]) and MEXT KAKENHI Grant-in-Aid for Scientific Research on
Innovative Areas (No.~18H05542 [RK]).
Numerical computation in this work was carried out in part on the
Oakforest-PACS and Cygnus under Multidisciplinary Cooperative Research
Program (No.~17a15) in Center for Computational Sciences, University of
Tsukuba; Fujitsu PRIMERGY CX600M1/CX1640M1 (Oakforest-PACS) in the
Information Technology Center, the University of Tokyo.
\section{Introduction: Projectivity and statistical relational artificial
intelligence}
Statistical relational artificial intelligence
has emerged over the last 25 years as a means to specify statistical
models for relational data. Since then, many different frameworks
have been developed under this heading, which can broadly be classified
into those that extend logic programming to incorporate probabilistic
information (probabilistic logic programming under the distribution
semantics) and those that specify an abstract template for probabilistic
graphical models (sometimes known as knowledge-based model construction).
Both classes share the distinction between a general model (a template
or a probabilistic logic program with variables) and a specific domain
used to ground the model. Ideally, the model would be specified abstractly
and independently of a specific domain, even though a specific domain
may well have been involved in learning the model from data.
However, a significant hurdle is the generally hard to predict or
undesirable behaviour of the model when
applied to domains of different sizes. This extrapolation problem has received
much attention in recent years \cite{PooleBKKN14,JaegerS20}.
Recently Jaeger
and Schulte \shortcite{JaegerS18,JaegerS20}
have identified \emph{projectivity} as a strong form of good scaling
behaviour: in a projective model, the probability of a given property
holding for a given object in the domain is completely independent
of the domain size. However, the examples of Poole et al. \shortcite{PooleBKKN14} show
that projectivity cannot be hoped for in general statistical relational
models, and Jaeger and Schulte \shortcite{JaegerS18} identify very restrictive fragments
of common statistical relational frameworks as projective.
The question remains, however, whether those fragments completely capture the projective families of distributions expressible by a statistical relational representation.
We will show in this contribution that in the case of probabilistic
logic programming under the distribution semantics, this is true, as every projective
probabilistic logic program is equivalent to a determinate acyclic probabilistic
logic program.
Our method will show that, moreover, every probabilistic
logic program is asymptotically equivalent to an acyclic determinate probabilistic
logic program. This result is of some independent interest, as it shows that the probabilities of queries expressed by a logic program converge as domain size increases. Moreover, the asymptotic equivalence provides an explicit representation using which the asymptotic query probabilities can be computed.
This will be an application of an asymptotic quantifier elimination
result for probabilistic logic programming derived from classical
finite model theory, namely from the study of the asymptotic theory
of first-order and least fixed point logic in the 1980s (particularly 0-1 laws, applied in the form of Blass et al. \shortcite{BlassGK85}).
This application is also methodologically interesting as it opens
another way in which classical logic can contribute to
cutting-edge problems in learning and reasoning. That the
theory developed around 0-1 laws would be a natural candidate for
such investigations may not surprise, as it is highly developed and
is itself in the spirit of ``finite probabilistic model theory''
\cite[Section 7]{CozmanM19}, and one might hope for more
cross-fertilisation between the two fields in the future.
\subsection{Outline of the paper}
We will first formally introduce the framework of families of distributions and the notion of projectivity that we will refer to throughout.
In the following section, we present the abstract distribution semantics, which bridges the gap between the tools from finite model theory and the semantics of probabilistic logic programming.
We also discuss asymptotic quantifier elimination and introduce the main classical results from finite model theory.
We introduce least fixed point logic, an adequate representation for (probabilistic) logic programs.
We then give the necessary background on the asymptotic behaviour of least fixed point logic.
We harness the relationship between probabilistic logic programming and least fixed point distributions to show that every probabilistic logic program is asymptotically equivalent to an acyclic determinate probabilistic logic program.
In the following section, we will apply this analysis to study the projective families of distributions expressible in probabilistic logic programming. We see that every projective
logic program is actually everywhere equivalent to an acyclic determinate logic program, and we derive some properties for the projective distributions expressible in this way.
For the case of a unary vocabulary, we show that only very few projective families of distributions are expressible in probabilistic logic programming, and we give a concrete example to highlight that point.
Finally, we conclude the paper with a brief discussion of the complexity of asymptotic quantifier elimination and some impulses for further research.
Proofs to all the statements made here can be found in Appendix A in the supplementary material.
\subsection{Notation}
An introduction to the terminology of first-order logic used in this paper can be found in Appendix B.1, in the supplementary material.
We just point out here that we use $\mathfrak{P}(A)$ to
indicate the power set of a set $A$ and $\vec{x}$ as a shorthand for a finite tuple $x_1, \dots, x_n$ of arbitrary length.
\subsection{Projectivity}
We will introduce projective families of distributions in accordance
with Jaeger and Schulte \shortcite{JaegerS18,JaegerS20}, where one can find a much more detailed
exposition of the terms and their background. As we are interested
in statistical relational representations as a means of abstracting
away from a given ground model, we will refer to families of distributions
with varying domain sizes.
\begin{defn}
A \emph{family of distributions} for a relational vocabulary $\mathcal{S}$
is a sequence $\left(Q^{(n)}\right)_{n\in\mathbb{N}}$ of probability
distributions on the sets $\Omega_{n}$ of all $\mathcal{S}$-structures
with domain $\left\{ 1,\ldots,n\right\} \subseteq\mathbb{N}$.
\end{defn}
\begin{defn}
A family of distributions is called \emph{exchangeable }if every $Q^{(n)}$ is invariant
under $\mathcal{S}$-isomorphism.
It is called \emph{projective} if, in addition, for all $m<n\in\mathbb{N}$ and
all $\omega\in\Omega_{m}$ the following holds:
\[
Q^{(m)}(\{\omega\})=Q^{(n)}\left(\left\{ \omega'\in\Omega_{n}|\textrm{\ensuremath{\omega} is the substructure of }\omega'\textrm{ with domain \ensuremath{\left\{ 1,\ldots,m\right\} } }\right\} \right)
\]
\end{defn}
Projectivity encapsulates a strong form of domain size independence.
Consider, for instance, the query $R(x)$, where $R$ is a relation symbol in $\mathcal{S}$.
Then in an exchangeable family of distributions, the unconditional probability of $R(x)$ holding in a world is independent of the precise interpretation of $x$, and depends only on the domain size.
If the family of distributions is projective, then the probability of $R(x)$ is independent even of the domain size.
As an immediate consequence, this implies that the computational complexity of quantifier-free queries is constant with domain size, since queries can always be evaluated in a domain consisting just of the terms mentioned in the query itself.
Projectivity also has important consequences for the statistical consistency of learning from randomly sampled subsets \cite{JaegerS18}.
An important class of examples of projective families of distributions are those in which $R(a)$ is independent of $P(b)$ for all $R,P,a,b$.
For instance, consider a vocabulary $\mathcal{S}$ with unary relations $P$ and $R$, and a family of distributions in which for every domain element $a$, $P(a)$ and $R(a)$ are determined independently with probabilities $p$ and $r$ respectively.
Then the probability that a subset $A$ of a domain $D$ has $\mathcal{S}$-structure $M$ is given by
\[
p^{\left|\{ a \in A \,|\, M \models P(a) \}\right|} \cdot (1-p)^{\left|\{ a \in A \,|\, M \models \neg P(a) \}\right|} \cdot r^{\left|\{ a \in A \,|\, M \models R(a) \}\right|} \cdot (1-r)^{\left|\{ a \in A \,|\, M \models \neg R(a) \}\right|}
\]
regardless of the size of $D$.
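This domain-size independence can be checked by brute force. The following sketch (our own illustration; the encoding of structures as dictionaries is an assumption of the example) computes the probability of a fixed structure on $\{1,2\}$ and verifies that marginalising over an extra domain element leaves it unchanged:

```python
from itertools import product

def structure_probability(struct, p, r):
    """Probability that the listed elements carry exactly the
    indicated P/R facts, each fact being an independent coin flip.
    `struct` maps each element to a pair (has_P, has_R)."""
    prob = 1.0
    for has_p, has_r in struct.values():
        prob *= p if has_p else (1 - p)
        prob *= r if has_r else (1 - r)
    return prob

def marginal_over_extension(struct, extra, p, r):
    """Sum the probabilities of all extensions of `struct` by the
    elements in `extra` -- brute-force marginalisation."""
    total = 0.0
    for flags in product([True, False], repeat=2 * len(extra)):
        ext = dict(struct)
        for i, e in enumerate(extra):
            ext[e] = (flags[2 * i], flags[2 * i + 1])
        total += structure_probability(ext, p, r)
    return total

m = {1: (True, False), 2: (False, True)}
# Marginalising over element 3 gives the same probability:
print(structure_probability(m, 0.3, 0.6),
      marginal_over_extension(m, [3], 0.3, 0.6))
```

The two printed numbers agree because the factors contributed by the extra element sum to $1$, which is exactly why such independent-fact families are projective.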
The work of Jaeger and Schulte \shortcite{JaegerS20} provides a complete characterisation of projective families of distributions in terms of exchangeable arrays (AHK representations).
However, it is not clear how this representation translates to the statistical relational formalisms currently in use, such as probabilistic logic programming.
We will see below that there are indeed projective families of distributions that are not expressible by a probabilistic logic program.
Furthermore, Jaeger and Schulte \shortcite{JaegerS20} claimed in Proposition 7.1 of their paper an independence property for the subclass of AHK$^{-}$ distributions.
While this proposition proved to be incorrect for the class of AHK$^{-}$ distributions \cite{JaegerS20a}, we will see here that for a projective family of distributions induced by a probabilistic logic program, the independence property holds.
In the remainder of this paper, we will investigate the interplay
between the asymptotic behaviour of logical theories as they have
been studied in finite model theory and the families of distributions
that are induced by them. We therefore introduce a notion of asymptotic
equivalence of families of distributions.
\section{\label{sec:Asymptotic-Quantifier-Elimination}Abstract distribution semantics}
As a bridge between classical notions from finite model theory and probabilistic logic programming, we introduce the abstract distribution semantics.
It builds on the \emph{relational Bayesian network specifications} of Cozman and Maua \shortcite{CozmanM19}, which combine random and independent root predicates with non-root predicates that are defined by first-order formulas.
Here we streamline and generalise this idea to a unified framework that we call the \emph{abstract distribution semantics}.
In particular, we will generalise away from first-order logic (FOL) to a general \emph{logical language:}
\begin{defn}
Let $\mathcal{R}$ be a vocabulary. Then a \emph{logical language
}$L(\mathcal{R})$ consists of a collection of
functions $\varphi$, each of which takes an $\mathcal{R}$-structure
$M$ and returns a subset of $M^{n}$ for some $n\in\mathbb{N}$ (called
the \emph{arity} of $\varphi$).
In analogy to the formulas of first-order logic, we refer to those functions as \emph{$L(\mathcal{R})$-formulas} and write $M\models\varphi(\vec{a})$ whenever $\vec{a} \in \varphi(M)$.
\end{defn}
The archetype of a logical language is the first-order predicate calculus, where an $\mathcal{R}$-formula $\varphi$ of arity $n$ defines the function $\varphi(M) := \{\vec{a} \in M^{n} \mid M \models \varphi(\vec{a}) \}$ and $\models$ is used in the sense of ordinary first-order logic.
The concept as defined here is sufficiently general to accommodate
many other choices, however, and we will later apply it to least fixed point logic in particular.
\begin{defn}
Let $\mathcal{S}$ be a relational vocabulary,
$\mathcal{R}\subseteq\mathcal{S}$, and let $L(\mathcal{R})$
be a logical language over $\mathcal{R}$. Then an \emph{abstract
$L$-distribution over $\mathcal{R}$ (with vocabulary $\mathcal{S}$)}
consists of the following data:
For every $R\in\mathcal{R}$ a number $q_{R}\in\mathbb{Q}\cap[0,1]$.
For every $R\in\mathcal{S}\backslash\mathcal{R}$, an $L(\mathcal{R})$-formula
$\phi_{R}$ of the same arity as $R$.
\end{defn}
In the following we will assume that all vocabularies are finite.
The semantics of an abstract distribution is only defined relative
to a domain $D$, which we will also assume to be finite. The formal
definition is as follows:
\begin{defn}
Let $L(\mathcal{R})$ be a logical language over $\mathcal{R}$ and
let $D$ be a finite set. Let $T$ be an abstract $L$-distribution
over $\mathcal{R}$. Let $\Omega_{D}$ be the set of all $\mathcal{R}$-structures
with domain $D$.
Then the \emph{probability distribution on $\Omega_{D}$ induced by
$T$, written $Q_{T}^{(D)}$, }is defined as follows:
For all $\omega\in\Omega_{D}$: if $\exists_{\vec{a}\in\vec{D}}\exists_{R\in\mathcal{S} \setminus \mathcal{R}}:R(\vec{a})\nLeftrightarrow\phi_{R}(\vec{a})$,
then $Q_{T}^{(D)}(\{\omega\})\coloneqq0$.
Otherwise, $Q_{T}^{(D)}(\{\omega\})\coloneqq\underset{R\in\mathcal{R}}{\prod}(q_{R}^{|\{\vec{a}\in\vec{D}|R(\vec{a})\}|})\times\underset{R\in\mathcal{R}}{\prod}(1-q_{R})^{|\{\vec{a}\in\vec{D}|\neg R(\vec{a})\}|}$.
\end{defn}
In other words, all the relations in $\mathcal{R}$ are independent
with probability $q_{R}$ and the relations in $\mathcal{S} \setminus \mathcal{R}$
are defined deterministically by the $L(\mathcal{R})$-formulas $\phi_{R}$.
We will illustrate that with an example.
\begin{exam}
Let $\mathcal{R}=\{R,P\}$ and $\mathcal{S} = \{R,P,S\}$, where $R$ is a unary relation, $P$ a binary relation, and $S$ a unary relation.
Then an abstract distribution over $\mathcal{R}$ has numbers $q_{R}$ and $q_{P}$ which encode probabilities. Consider the FOL-distribution $T$ with $\varphi_S = \exists_y \left( R(x) \wedge P(x,y) \right)$.
For any domain $D$, $Q_{T}^{(D)}$ is obtained by making an independent choice of $R(a)$ or $\neg R(a)$ for every $a \in D$, with a $q_{R}$ probability of $R(a)$. Similarly, an independent choice of $P(a,b)$ or $\neg P(a,b)$ is made for every pair $(a,b)$ from $D^2$, with a $q_{P}$ probability of $P(a,b)$.
Then, for any possible $\mathcal{R}$-structure, the interpretation of $S$ is determined by $\forall_x S(x) \leftrightarrow \varphi_S(x)$.
The resulting family of distributions is not projective, since the probability of $S(a)$ increases with the size of the domain as more possible candidates $b$ for $P(a,b)$ are added.
\end{exam}
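The growth of the probability of $S(a)$ with the domain size can be made concrete with a small Monte-Carlo sketch (our own illustration; the function name and parameter values are hypothetical). Exactly, $\Pr[S(1)] = q_R\,(1-(1-q_P)^n)$ on a domain of size $n$; the sampler below approaches this value:

```python
import random

def sample_S_of_a(n, q_R, q_P, trials=20000, seed=0):
    """Monte-Carlo estimate of Pr[S(1)] for the example distribution:
    R(x) holds with probability q_R, P(x,y) with probability q_P,
    and S(x) <-> exists y (R(x) and P(x,y)) on domain {1,...,n}."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        if rng.random() < q_R:                              # draw R(1)
            if any(rng.random() < q_P for _ in range(n)):   # some P(1,y)?
                hits += 1
    return hits / trials

for n in (1, 5, 25):
    print(n, sample_S_of_a(n, q_R=0.5, q_P=0.3))
```

With $q_R = 0.5$ and $q_P = 0.3$, the estimates climb from about $0.15$ at $n=1$ towards $0.5$ for large $n$, visibly violating projectivity.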
As this example has shown, abstract FOL distributions do not necessarily give rise to projective families.
If the $\phi_{R}$ are all given by quantifier-free formulas, however, then the induced families of distributions are indeed projective. We call such abstract $L(\mathcal{R})$-distributions, in which $L(\mathcal{R})$ is the class of quantifier-free FOL-formulas over $\mathcal{R}$, \emph{quantifier-free distributions}.
\begin{prop}\label{prop:QF_implies_projective}
Every abstract quantifier-free distribution induces a projective family of distributions.
\end{prop}
Quantifier-free distributions also hold a special role in model-theoretic analysis.
In particular, \emph {asymptotic quantifier elimination} has been shown for various logics of interest to artificial intelligence.
\subsection{Asymptotic quantifier elimination}
We introduce our notion of asymptotic equivalence for families of distributions:
\begin{defn}
Two families of distributions $(Q^{(n)})$ and $(Q'^{(n)})$ are \emph{asymptotically
equivalent} if $\underset{n\rightarrow\infty}{\lim}\,\underset{A\subseteq\Omega_{n}}{\sup}|Q^{(n)}(A)-Q'^{(n)}(A)|=0$.
\end{defn}
\begin{rem*}
In measure theoretic terms, the families of distributions $(Q^{(n)})$
and $(Q'^{(n)})$ are asymptotically equivalent if and only if the
limit of the total variation distance between them is $0$.
\end{rem*}
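On a finite sample space the total variation distance has the closed form $\frac{1}{2}\sum_{\omega}|Q(\{\omega\})-Q'(\{\omega\})|$, and the supremum in the definition is attained at $A = \{\omega \,|\, Q(\{\omega\}) > Q'(\{\omega\})\}$. A short sketch (our own illustration; worlds are encoded as arbitrary hashable keys) computes it directly:

```python
def total_variation(q1, q2):
    """Total variation distance between two distributions given as
    dicts mapping worlds to probabilities; equals
    sup_A |Q(A) - Q'(A)| = (1/2) * sum_w |Q({w}) - Q'({w})|."""
    worlds = set(q1) | set(q2)
    return 0.5 * sum(abs(q1.get(w, 0.0) - q2.get(w, 0.0)) for w in worlds)

q = {"w1": 0.5, "w2": 0.5}
qp = {"w1": 0.6, "w2": 0.4}
print(total_variation(q, qp))  # ~0.1
```

Asymptotic equivalence of $(Q^{(n)})$ and $(Q'^{(n)})$ then amounts to this quantity, computed level by level, tending to $0$ as $n \rightarrow \infty$.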
We extend the notion to abstract distributions by calling abstract distributions asymptotically equivalent if they induce asymptotically equivalent families of distributions.
This gives us the following setting for asymptotic quantifier elimination:
\begin{defn}
Let $L(\mathcal{R})$ be an extension of the class of quantifier-free $\mathcal{R}$-formulas.
Then \emph{$L(\mathcal{R})$ has asymptotic quantifier elimination} if every abstract $L(\mathcal{R})$ distribution is asymptotically equivalent to a quantifier-free distribution over $L(\mathcal{R})$.
\end{defn}
It is well-known that first-order logic has asymptotic quantifier elimination.
Indeed, the asymptotic theory of relational first-order logic can be summarised as follows \cite[Chapter 4]{EbbinghausF06}:
\begin{defn}
\label{fact:Random} Let $\mathcal{R}$ be a relational vocabulary.
Then the first order theory\emph{ }$\mathrm{RANDOM}(\mathcal{R})$
is given by all axioms of the following form, called \emph{extension
axioms over $\mathcal{R}$}:
\[
\forall_{v_{1},\ldots,v_{r}}\left(\underset{1\leq i<j\leq r}{\bigwedge}v_{i}\neq v_{j}\rightarrow\exists_{v_{r+1}}\left(\underset{1\leq i\leq r}{\bigwedge}v_{i}\neq v_{r+1}\wedge\underset{\varphi\in\Phi}{\bigwedge}\varphi\wedge\underset{\varphi\in\Delta_{r+1}\backslash\Phi}{\bigwedge}\neg\varphi\right)\right)
\]
where $r\in\mathbb{N}$ and $\Phi$ is a subset of
\[
\Delta_{r+1}\coloneqq\left\{ R(\vec{x})|R\in\mathcal{R},\textrm{ \ensuremath{\vec{x}} a tuple from \ensuremath{\{v_{1},\ldots,v_{r+1}\}} containing \ensuremath{v_{r+1}}}\right\} .
\]
\end{defn}
\begin{fact}
\label{fact:FOL_QE}$\mathrm{\ensuremath{RANDOM}(\mathcal{R})}$ eliminates
quantifiers, i.e.\ for each formula $\varphi(\vec{x})$ there is a
quantifier-free formula $\varphi'(\vec{x})$ such that $\mathrm{\ensuremath{RANDOM}(\mathcal{R})}\vdash\forall_{\vec{x}}(\varphi(\vec{x})\leftrightarrow\varphi'(\vec{x}))$.
\end{fact}
It is sometimes helpful to characterise this quantifier-free formula
somewhat more explicitly:
\begin{prop}
\label{prop:form_QE}Let $\varphi(\vec{x})$ be a formula of first-order
logic. Then:
\begin{enumerate}
\item $\varphi'(\vec{x})$ as in Fact \ref{fact:FOL_QE} can be chosen such
that only those relation symbols occur in $\varphi'$ that occur in
$\varphi$.
\item If every atomic subformula of $\varphi$ contains at least one free
variable not in $\vec{x}$, and no relation symbol occurs with different
variables in different literals, then either $\mathrm{\ensuremath{RANDOM}(\mathcal{R})}\vdash\forall_{\vec{x}}\varphi(\vec{x})$
or $\mathrm{\ensuremath{RANDOM}(\mathcal{R})}\vdash\forall_{\vec{x}}\neg\varphi(\vec{x})$.
\end{enumerate}
\end{prop}
The importance of $\mathrm{RANDOM}(\mathcal{R})$ comes from its role
as the asymptotic limit of the class of all $\mathcal{R}$-structures.
In fact, it axiomatises the limit theory of $\mathcal{R}$-structures
even when the individual probabilities of relational atoms are given
by $q_{R}$ rather than $\frac{1}{2}$:
\begin{fact}\label{fact:asymptotic_theory_FOL}
$\underset{n\rightarrow\infty}{\lim}Q_{T}^{(n)}(\varphi)=1$
for all abstract distributions $T$ over $\mathcal{R}$ and all extension
axioms $\varphi$ over $\mathcal{R}$.
\end{fact}
\begin{cor}
First-order logic has asymptotic quantifier elimination.
\end{cor}
\section{Probabilistic logic programs as least fixed point distributions}\label{sec:LFP}
We will now proceed briefly to discuss fixed point logics. Our presentation follows the book by Ebbinghaus and Flum (2006, Chapter 8), to which we refer the reader for a more detailed exposition.
We begin by introducing the syntax.
As atomic second-order formulas occur as subformulas of least fixed point formulas, we first introduce second-order variables.
\begin{defn}
Assume an infinite set of second-order variables, indicated customarily by upper-case letters from the end of the alphabet, each annotated with a natural number arity.
Then an \emph{atomic second-order formula} $\varphi$ is either a (first-order) atomic formula, or an expression of the form $X(t_1, \dots, t_n)$, where $X$ is a second-order variable of arity $n$ and $t_1, \dots, t_n$ are constants or (first-order) variables.
\end{defn}
We now proceed to least fixed point formulas.
\begin{defn}
\label{def:LFP}A formula $\varphi$ is called\emph{ positive in a
variable $x$} if $x$ is in the scope of an even number of negation
symbols in $\varphi$.
A \emph{formula in least fixed point logic }or \emph{LFP formula} over a vocabulary
$\mathcal{R}$\emph{ }is defined inductively as follows:
\begin{enumerate}
\item Any atomic second-order formula is an LFP formula.
\item If $\varphi$ is an LFP formula, then so is $\neg\varphi$.
\item If $\varphi$ and $\psi$ are LFP formulas, then so is $\varphi\vee\psi$
\item If $\varphi$ is an LFP formula, then so is $\exists x\varphi$ for
a first-order variable $x$.
\item If $\varphi$ is an LFP formula, then so is $[\mathrm{LFP}_{\vec{x},X}\varphi]\vec{t}$,
where $\text{\ensuremath{\varphi}}$ is positive in the second-order
variable $X$ and the lengths of the string of first-order variables
$\vec{x}$ and the string of terms $\vec{t}$ coincide with the arity
of $X$.
\end{enumerate}
An occurrence of a second-order variable $X$ is \emph{bound} if it is in the scope of an LFP quantifier $\mathrm{LFP}_{\vec{x},X}$ and \emph{free} otherwise.
\end{defn}
Fixed point semantics have been used extensively in (logic) programming
theory \cite{Fitting02}, and we will exploit this
when relating the model theory of LFP to probabilistic logic programming
below.
We first associate an operator with each LFP formula $\varphi$:
\begin{defn}
\label{def:F_=00005Cvarphi}Let $\varphi(\vec{x},\vec{u},X,\vec{Y})$
be an LFP formula, with the length of $\vec{x}$ equal to the arity
of $X$, and let $\omega$ be an $\mathcal{R}$-structure with domain
$D$. Let $\vec{b}$ and $\vec{S}$ be an interpretation of $\vec{u}$ and
$\vec{Y}$ respectively. Then we define the operator $F^{\varphi}:\mathfrak{P}(D^{k})\rightarrow\mathfrak{P}(D^{k})$
as follows:
\[
F^{\varphi}(R)\coloneqq\left\{ \vec{a}\in D^{k}|\omega\models\varphi(\vec{a},\vec{b},R,\vec{S})\right\} .
\]
\end{defn}
Since we have restricted Rule 5 in Definition \ref{def:LFP} to positive
formulas, $F^{\varphi}$ is monotone for all $\varphi$ (i.e. $R\subseteq R'$
implies $F^{\varphi}(R)\subseteq F^{\varphi}(R')$ for all $R,R'\subseteq D^{k}$). Therefore we have:
\begin{fact}
\label{fact:LFP_exists}For every $\mathrm{LFP}$ formula $\varphi(\vec{x},\vec{u},X,\vec{Y})$
and every $\mathcal{R}$-structure on a domain $D$ and interpretation
of variables as in Definition \ref{def:F_=00005Cvarphi}, there is
a relation $R\subseteq D^{k}$ such that $R=F^{\varphi}(R)$ and that
for all $R'$ with $R'=F^{\varphi}(R')$ we have $R\subseteq R'$.
\end{fact}
\begin{defn}
We call the $R$ from Fact \ref{fact:LFP_exists} the \emph{least
fixed point} of $\varphi(\vec{x},\vec{u},X,\vec{Y})$
\end{defn}
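On a finite domain, the least fixed point of a monotone operator can be computed by Kleene iteration: start from the empty relation and apply the operator until it stabilises. The sketch below (our own illustration, not code from the literature) does this for the transitive-closure operator, a standard LFP example:

```python
from itertools import product

def least_fixed_point(operator, domain_size, arity):
    """Least fixed point of a monotone operator on subsets of
    D^arity, computed by Kleene iteration from the empty set."""
    universe = set(product(range(domain_size), repeat=arity))
    current = frozenset()
    while True:
        nxt = frozenset(t for t in universe if operator(t, current))
        if nxt == current:
            return current
        current = nxt

# Transitive closure as an LFP:
# TC(x,y) <-> E(x,y) or exists z (E(x,z) and TC(z,y))
E = {(0, 1), (1, 2), (3, 3)}
def phi(t, X):
    x, y = t
    return (x, y) in E or any((x, z) in E and (z, y) in X for z in range(4))

print(sorted(least_fixed_point(phi, 4, arity=2)))
# [(0, 1), (0, 2), (1, 2), (3, 3)]
```

Monotonicity guarantees that the iterates form an increasing chain, so on a finite universe the loop terminates at the least fixed point of Fact \ref{fact:LFP_exists}.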
Now we are ready to define the semantics of least fixed point logic:
\begin{defn}
By induction on the definition of an LFP formula, we define when
an LFP formula $\varphi(\vec{X},\vec{x})$ is said to \emph{hold}
in an $\mathcal{R}$-structure $\omega$ for a tuple $\vec{a}$ from
the domain of $\omega$ and relations $\vec{A}$ of the correct arity:
The first-order connectives and quantifiers $\neg$, $\vee$ and $\exists$
as well as $\wedge$ and $\forall$ defined from them in the usual
way are given the usual semantics.
An atomic second-order formula $X(\vec{x},\vec{c})$ holds if and
only if $(\vec{a},\vec{c}_{\omega})\in A$, where $A$ is the relation interpreting $X$ and $\vec{c}_{\omega}$ is the interpretation of $\vec{c}$ in $\omega$.
$[\mathrm{LFP}_{\vec{x},X}\varphi]\vec{t}$ holds if and only if the interpretation of $\vec{t}$
is in the least fixed point of $F^{\varphi(\vec{x},X)}$.
\end{defn}
\subsection{Probabilistic logic programming}
Our discussion on probabilistic logic programming employs the simplification
proposed by Riguzzi and Swift \shortcite{RiguzziS18} and considers a probabilistic logic program as a stratified Datalog program over probabilistic facts.
This \emph{distribution semantics} covers several different equally expressive formalisms \cite{RiguzziS18,deRaedtK15}.
Note that in particular, probabilistic logic programs as used here do not involve function symbols, unstratified negation or higher-order constructs.
See Appendix B.2 in the supplementary material or the book by Ebbinghaus and Flum (2006, Chapter 9) for an introduction to the syntax and semantics of stratified Datalog programs in line with this paper.
We will use the notation $(\Pi,P)\vec{t}$ for an intensional symbol
$P$ of a stratified logic program $\Pi$ to mean that ``the program
$\Pi$ proves $P\vec{t}$''.
\begin{defn}
A \emph{probabilistic logic program }consists of probabilistic facts
and deterministic rules, where the deterministic part is a stratified
Datalog program. We will consider it in our framework of abstract
distribution semantics as follows:
$\mathcal{R}$ is given by relation symbols $R'$ for every probabilistic
fact $p_{R}::R(\vec{x})$, with $q_{R'}\coloneqq p_{R}$. Their arity
is just the arity of $R$.
$\mathcal{S}$ is given by the vocabulary of the probabilistic logic
program and additionally the $R'$ in $\mathcal{R}$.
Let $\Pi$ be the stratified Datalog program obtained by prefixing
the program $\{R'(\vec{x})\leftarrow R(\vec{x})|R'\in\mathcal{R}\}$
to the deterministic rules of the probabilistic logic program.
Then $\phi_{P}$ for a $P\in\mathcal{S}\backslash\mathcal{R}$ is
given by $(\Pi,P)\vec{x}.$
\end{defn}
The distribution semantics for probabilistic logic programming is related
to the LFP distribution semantics introduced above through the following
fact \cite[Theorem 9.1.1]{EbbinghausF06}:
\begin{fact}
\label{fact:S-Datalog_to_LFP}For every stratifiable Datalog formula
$(\Pi,P)\vec{x}$ as above, there exists an $\mathrm{LFP}$ formula
$\varphi(\vec{x})$ over the extensional vocabulary $\mathcal{R}$
of $\Pi$ such that for every $\mathcal{R}$-structure $\omega$ and every
tuple $\vec{a}$ of elements of $\omega$ of the same length as $\vec{x}$, $\omega \models\varphi(\vec{a})$ if
and only if $\omega\models(\Pi,P)\vec{a}$.
\end{fact}
\begin{rem*}
In fact, it suffices to consider formulas in the so-called bounded
fixed point logic, whose expressiveness lies between first order logic
and least fixed point logic \cite{EbbinghausF06}.
\end{rem*}
\begin{notation*}
Although we have allowed second-order variables in the inductive definitions
above, we will assume from now on unless mentioned otherwise that
LFP formulas do not have free second-order variables.
\end{notation*}
\subsection{Asymptotic quantifier elimination for probabilistic logic programming}
We discuss the asymptotic reduction
of LFP to FOL by Blass et al. \shortcite{BlassGK85} and conclude that abstract LFP distributions and therefore probabilistic logic programs have asymptotic quantifier elimination.
The main theorem of Blass et al. \shortcite{BlassGK85} shows that $\mathrm{RANDOM}(\mathcal{R})$
not only eliminates classical quantifiers, but also least fixed point
quantifiers:
\begin{fact}
\label{fact:LFP_FOL}Let $\varphi(\vec{x})$ be an \emph{LFP} formula
over $\mathcal{R}$. Then there is a finite subset $G$ of $\mathrm{RANDOM}(\mathcal{R})$
and a quantifier-free formula $\varphi'(\vec{x})$ such that $G\vdash\forall_{\vec{x}}\varphi(\vec{x})\leftrightarrow\varphi'(\vec{x})$.
\end{fact}
Putting this together, we can derive the following:
\begin{thm}
\label{thm:QE_LFP_distribution} Least fixed point logic has asymptotic quantifier elimination.
\end{thm}
To obtain a characterisation within probabilistic logic programming,
however, we need to translate quantifier-free first order formulas
back to stratifiable Datalog.
In fact, they can be mapped to a subset of stratified Datalog that
is well-known from logic programming:
\begin{defn}
A Datalog program, Datalog formula or probabilistic logic program
is called \emph{determinate }if every variable occurring in the
body of a clause also occurs in the head of that clause.
\end{defn}
\begin{exam}
Examples of determinate clauses in this sense are $R(x) \leftarrow P(x)$ or $Q(x,y) \leftarrow R(x)$.
Indeterminate clauses include $R(x) \leftarrow P(y)$ or $R(x) \leftarrow Q(x,y)$.
\end{exam}
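Determinacy is a purely syntactic condition and easy to check mechanically. A minimal sketch (our own illustration; the encoding of clauses as pairs of variable-name sets is an assumption of the example):

```python
def is_determinate(clauses):
    """Check the determinacy condition: every variable occurring in
    the body of a clause also occurs in the head of that clause.
    Clauses are encoded as (head_variables, body_variables) pairs."""
    return all(body <= head for head, body in clauses)

# R(x) <- P(x)  and  Q(x,y) <- R(x): determinate
assert is_determinate([({"x"}, {"x"}), ({"x", "y"}, {"x"})])
# R(x) <- Q(x,y): indeterminate, since y occurs only in the body
assert not is_determinate([({"x"}, {"x", "y"})])
```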
Determinacy corresponds exactly to the fragment of probabilistic
logic programs identified as projective by Jaeger and Schulte (2018, Proposition 4.3).
Indeed, Ebbinghaus and Flum's \shortcite{EbbinghausF06} proof of their Theorem 9.1.1 shows:
\begin{fact}
\label{fact:QF_to_dS-Datalog}Every quantifier-free first order formula
is equivalent to an acyclic determinate stratified Datalog formula.
\end{fact}
Therefore, we can conclude from Proposition \ref{prop:QF_implies_projective}:
\begin{prop}
\label{prop:determinate->projective}Every determinate probabilistic
logic program is projective.
\end{prop}
We now turn to the main result of this
subsection.
\begin{thm}
\label{thm:Probabilistic_Logic_Programs}Every probabilistic logic
program is asymptotically equivalent to an acyclic determinate probabilistic
logic program.
\end{thm}
\section{\label{sec:Discussion} Projective probabilistic logic programs}
As an application of our results, we investigate the projective families of distributions that are expressible by probabilistic logic programs.
The key is the following observation:
\begin{prop}
\label{prop:projective_and_AE_->_Equ}Two projective families of distributions
are asymptotically equivalent if and only if they are equal.
\end{prop}
As modelling in the distribution semantics often involves introducing auxiliary predicates, the family of distributions we want to model will usually be defined on a smaller vocabulary than the abstract distribution (or probabilistic logic program) itself.
We therefore note here that asymptotic equivalence
is preserved under reduct.
First we clarify how we build reducts
of distributions in the first place:
\begin{defn}
Let $Q^{(n)}$ be a distribution over a vocabulary
$\mathcal{S}$. Then its \emph{reduct} $Q_{\mathcal{S'}}^{(n)}$ \emph{to
a subvocabulary} $\mathcal{S'}\subseteq \mathcal{S}$ is defined such that for
any world $\omega\in\Omega_{n}^{\mathcal{S'}}$, $Q_{\mathcal{S'}}^{(n)}(\omega)\coloneqq Q^{(n)}(\{\omega'\in\Omega_{n}^{\mathcal{S}}|\omega'_{\mathcal{S'}}=\omega\})$.
\end{defn}
\begin{rem*}
$Q_{\mathcal{S'}}^{(n)}$ is the pushforward measure of $Q^{(n)}$
under the reduct projection $\Omega_{n}^{\mathcal{S}}\rightarrow\Omega_{n}^{\mathcal{S'}}$.
\end{rem*}
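The definition can be illustrated concretely. In the following sketch (the representation of worlds as sets of true ground atoms is our own), the reduct is computed exactly as the pushforward under restriction of worlds:

```python
from itertools import product
from fractions import Fraction

# Reduct of a distribution over S-worlds (frozensets of true ground atoms)
# to a subvocabulary S': sum the mass of all S-worlds restricting to each S'-world.

def reduct(Q, S_prime):
    Q_red = {}
    for world, prob in Q.items():
        restricted = frozenset(a for a in world if a[0] in S_prime)
        Q_red[restricted] = Q_red.get(restricted, Fraction(0)) + prob
    return Q_red

# Example: two independent ground atoms R(1), P(1), each true with prob 1/2.
atoms = [("R", 1), ("P", 1)]
Q = {frozenset(a for a, keep in zip(atoms, bits) if keep): Fraction(1, 4)
     for bits in product([True, False], repeat=2)}
Q_P = reduct(Q, {"P"})
print(Q_P[frozenset([("P", 1)])])  # 1/2: mass of both S-worlds where P(1) holds
```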
We can now formulate preservation of asymptotic equivalence under
reducts:
\begin{prop}
\label{prop:AE-reducts}The reducts of asymptotically equivalent families
of distributions are themselves asymptotically equivalent.
\end{prop}
In combination, we obtain:
\begin{thm}\label{thm:Proj_implies_QF}
Let $L$ be a logical language with asymptotic quantifier elimination that extends quantifier-free first-order logic. Let $\mathcal{R} \subseteq \mathcal{S}$ be vocabularies, and let $\mathcal{S'} \subseteq \mathcal{S}$. Furthermore let $T$ be an $L$-distribution over $\mathcal{R}$ with vocabulary $\mathcal{S}$. Lastly, let $(Q^{(n)})$ be the family of distributions induced by $T$.
Then the following holds: If $Q_{\mathcal{S'}}^{(n)}$ is projective, then there is a quantifier-free distribution $T_q$ over $\mathcal{R}$ with vocabulary $\mathcal{S}$ such that $Q_{\mathcal{S'}}^{(n)}$ is the reduct of the family of distributions induced by $T_q$ to $\mathcal{S'}$.
\end{thm}
In particular, a projective family of distributions that can be expressed in probabilistic logic programming at all can in fact be expressed already by a determinate probabilistic logic program.
\section{Implications and discussion}
The results have immediate consequences for the expressiveness of
probabilistic logic programming.
We first discuss a particularly striking observation:
\subsection{Asymptotic loss of information}
A particularly insightful case is that of a probabilistic \emph{rule}, i.e. a
clausal formula annotated with a probability. Because of its intuitive
appeal, this is a widely used syntactic element of probabilistic logic
programming languages such as Problog, and its semantics is defined
by introducing a new probabilistic fact to model the uncertainty of
the rule. More precisely:
\[
p::R(\vec{x}) \coloneq Q_{1}(\vec{x}_{1},\vec{y}_{1}),\ldots,Q_{n}(\vec{x}_{n},\vec{y}_{n})
\]
(where $\vec{x}$ are the variables appearing in $R$, $\vec{x_{i}}\subseteq\vec{x}$)
is interpreted as
\[
p::I(\vec{x},\vec{y});R(\vec{x}) \coloneq Q_{1}(\vec{x}_{1},\vec{y}_{1}),\ldots,Q_{n}(\vec{x}_{n},\vec{y}_{n}),I(\vec{x},\vec{y})
\]
(where $\vec{y}\coloneqq\bigcup\vec{y}_{i}$).
It is now easy to see from Proposition \ref{prop:form_QE} that in
the asymptotic quantifier-free representation of this probabilistic
rule, $I$ will no longer occur, since it originally occurred implicitly
quantified in the body of the clause. However, $I$ was the only connection
between the probability annotation of the rule and its semantics!
Therefore, the asymptotic probability of $R(\vec{x})$ is independent
of the probability assigned to any non-determinate rule with
$R(\vec{x})$ as its head.
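This loss of information can be made concrete with a small calculation. Consider the toy program (our own example, assuming the standard distribution semantics with independent ground switches) consisting of a probabilistic fact $q::Q(y)$ and the rule $p::R(x) \coloneq Q(y)$; on a domain of size $n$, $R(a)$ holds iff some $y$ satisfies both $Q(y)$ and the switch $I(a,y)$:

```python
# For the toy program  q::Q(y)  and  p::R(x) :- Q(y),  each domain element y
# independently satisfies Q(y) and I(a, y) with probability q * p, so
# P(R(a)) = 1 - (1 - q * p) ** n  on a domain of size n.

def prob_R(p, q, n):
    return 1 - (1 - q * p) ** n

for p in (0.1, 0.5, 0.9):
    print([round(prob_R(p, 0.5, n), 4) for n in (1, 10, 100, 1000)])
# Each row tends to 1 as n grows, whatever the annotation p of the rule is.
```

For any $p, q > 0$ the probability tends to $1$, so the asymptotic probability of $R(a)$ indeed carries no information about $p$.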
\subsection{Expressing projective families of distributions}
Our results also show how few of the projective families of distributions
can be expressed in those formalisms. This confirms the suspicion
voiced by Jaeger and Schulte \shortcite{JaegerS20} that despite the ostensible similarities
between languages such as independent choice logic, which are based
on the distribution semantics, and the array representation introduced by Jaeger and Schulte \shortcite{JaegerS20}, a direct application of techniques from probabilistic logic programming to general projective families of distributions might prove challenging.
We start by displaying some properties shared by the projective distributions induced by a probabilistic logic program.
\begin{defn}
A projective family of distributions has the \emph{Independence Property} or \emph{IP} if for all $\mathcal{S}$-formulas $\varphi(x_1, \dots, x_n)$ and $\psi(x_1, \dots, x_m)$ the events $\{1, \dots, n\} \models \varphi$ and $\{n+1, \dots, n+m\} \models \psi$ are independent under $Q^{(n+m)}$.
A projective family of distributions $(Q^{(n)})$ of $\mathcal{S}$-structures has the \emph{Conditional Independence Property} or \emph{CIP} if for all $n$ and all quantifier-free $\mathcal{S}$-formulas $\varphi(x_1, \dots, x_n)$ and every $\mathcal{S}$-structure $\omega$ on a domain with $n-1$ elements, the events $\{1, \dots, n\} \models \varphi$ and $\{1, \dots, n+1\} \setminus \{n\} \models \varphi$ are conditionally independent over $\{1, \dots, n-1\} \models \omega$ under $Q^{(n+1)}$.
\end{defn}
IP has been studied extensively in the field of pure inductive logic \cite{ParisV15}, while CIP is a generalisation of the property that Jaeger and Schulte \shortcite{JaegerS20} claimed in their Proposition 7.1 for AHK-distributions, to arbitrary quantifier-free formulas rather than worlds.
\begin{exam}\label{exam:CIP-IP}
Consider the quantifier-free abstract distribution with a probabilistic fact $R(x)$ with associated probability $p$ and a binary predicate $P(x,y)$ with definition $\phi_P$ = $x = y \vee R(x)$. Then its induced family of distributions satisfies CIP and IP.
However, the reduct to the vocabulary $\{P\}$ does not satisfy CIP; indeed, consider the domain with elements $\{1, 2, 3\}$. Then there is just one $\{P\}$-structure $\omega$ with domain $\{1\}$ that has probability 1, namely the world where $P(1,1)$ is true. Consider the events $P(1,3)$ and $P(1,2)$. They are not independent, since in fact $P(1,2)$ iff $R(1)$ iff $P(1,3)$. Since there is just one possible $\{P\}$-structure $\omega$ on $\{1\}$, conditioning on $\omega$ does not alter the probabilities.
\end{exam}
\begin{prop} \label{prop:CIP-IP}
Let $(Q^{(n)})$ be a projective family of distributions induced by a quantifier-free abstract distribution. Then $(Q^{(n)})$ satisfies CIP. If it does not have any nullary relation symbols, it also satisfies IP.
\end{prop}
As mentioned above, one often expands the vocabulary of interest when modelling in the distribution semantics. It is worth noting, therefore, that IP is trivially transferred to reducts, while CIP is not (see Example \ref{exam:CIP-IP} above).
We can view our results as positive or negative, depending on our viewpoint. We will begin with the positive formulation:
\begin{cor} \label{cor:Proj_CIP}
If a projective family of distributions is induced by a probabilistic logic program, it satisfies CIP.
\end{cor}
As CIP is a generalisation of the property claimed by Jaeger and Schulte \shortcite{JaegerS20} in their Proposition 7.1, this shows that while the class of AHK-representations does not satisfy this property (see the discussion in the appendix to Jaeger and Schulte's corrected version \shortcite{JaegerS20a}), every projective family of distributions induced by a probabilistic logic program does.
Since CIP does not transfer to reducts, however, we look towards IP for a property that all projective families of distributions expressible in probabilistic logic programming satisfy.
\begin{cor}\label{cor:Proj_IP}
Let $\mathcal{S'} \subseteq \mathcal{S}$ be relational vocabularies without nullary relation symbols. Then for every probabilistic
logic program with vocabulary $\mathcal{S}$, if the reduct $(Q^{(n)}_{\mathcal{S'}})$ is projective, $(Q^{(n)}_{\mathcal{S'}})$ satisfies IP.
\end{cor}
If we allow nullary relations in $\mathcal{S}$, we obtain finite sums of distributions with IP instead.
\begin{prop}\label{prop:finite_sums_of_IP}
Let $\mathcal{S'} \subseteq \mathcal{S}$ be relational vocabularies, possibly with nullary relation symbols. Then for every probabilistic
logic program with vocabulary $\mathcal{S}$, if the reduct $(Q^{(n)}_{\mathcal{S'}})$ is projective, $(Q^{(n)}_{\mathcal{S'}})$ is a finite sum of distributions satisfying IP.
\end{prop}
It is natural to ask how strong the condition imposed by the previous results is, bringing us to the negative part of our results.
As a special case, we consider a unary vocabulary $\mathcal{S'}$, i.e. one containing only unary relation symbols, since the projective families of distributions are very well understood for such vocabularies.
Here, \emph{de Finetti's Representation Theorem} \cite[Chapter 9]{ParisV15} says that the projective families of distributions in a unary vocabulary are precisely the \emph{potentially infinite} combinations of those that satisfy IP, while those projective families of distributions expressible in probabilistic logic programs are merely the \emph{finite} combinations of those satisfying IP; so, in some sense ``almost all'' projective families of distributions in unary vocabularies cannot be expressed in probabilistic logic programming.
As a concrete example, we show that already in the very limited vocabulary of a
single unary relation symbol $R$, there is no probabilistic logic
program that induces the distribution that is uniform on isomorphism
classes of structures:
\begin{defn}
\label{def:Carnap's function}Let $\mathcal{S}\coloneqq\{R\}$ consist
of one unary predicate, and let $\mathfrak{m}^{*}$ be the family
of distributions on $\mathcal{S}$-structures defined by
$\mathfrak{m}^{*}(\{\omega\})\coloneqq\frac{1}{(|D|+1)\cdot N_{\omega}}$ for
a world $\omega\in\Omega_{D}$, where $N_{\omega}\coloneqq\left|\left\{ \omega'\in\Omega_{D}|\omega\cong\omega'\right\} \right|$.
This gives each isomorphism type of structures equal weight, and then
within each isomorphism type every world is given equal weight too.
\end{defn}
$\mathfrak{m}^{*}$ is an important probability measure for two reasons:
it plays a special role in finite model theory since the so-called
unlabeled 0-1 laws are introduced with respect to this measure. Furthermore,
it was introduced explicitly by Carnap \shortcite{Carnap50,Carnap52} as
a candidate measure for formalising inductive reasoning, as part of
the so-called \emph{continuum of inductive methods}.
Paris and Vencovsk\'{a} \shortcite{ParisV15} provide a modern exposition of Carnap's theory.
$\mathfrak{m}^{*}$ is easily seen to be exchangeable; it is also projective, and in
fact an elementary calculation shows that for any domain $D$ and
any $\left\{ a_{1},\ldots,a_{n+1}\right\} \subseteq D$,
\begin{equation}
\mathfrak{m}^{*}\left(R(a_{n+1})|\left\{ R(a_{i})\right\} _{i\in I\subseteq\{1,\ldots,n\}}\cup\left\{ \neg R(a_{i})\right\} _{i\in\{1,\ldots,n\}\backslash I}\right)=\frac{\left|I\right|+1}{n+2}\label{eq:m*}
\end{equation}
(see any of the sources above for a derivation).
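Equation \eqref{eq:m*} can be checked from first principles by enumerating worlds according to Definition \ref{def:Carnap's function}; the following sketch (representation our own) does so with exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

# On a domain of size N with one unary predicate R, there are N + 1
# isomorphism types (by the number k of elements satisfying R), so a world
# with k positives has weight 1 / ((N + 1) * C(N, k)).

def conditional(N, n, I):
    """m*(R(a_{n+1}) | R(a_i) for i in I, not R(a_i) for other i <= n)."""
    evid = num = Fraction(0)
    for rest in range(2 ** (N - n)):             # extensions to a_{n+1}, ..., a_N
        extra = [(rest >> j) & 1 for j in range(N - n)]
        k = len(I) + sum(extra)                  # total number of positives
        w = Fraction(1, (N + 1) * comb(N, k))
        evid += w
        if extra[0]:                             # a_{n+1} satisfies R
            num += w
    return num / evid

print(conditional(N=4, n=3, I={1, 2}))  # 3/5, i.e. (|I| + 1)/(n + 2) here
```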
\begin{prop}\label{prop:Carnap_function}
Let $\mathcal{S}'$ be a finite vocabulary extending $\mathcal{S}$
from Definition \ref{def:Carnap's function}. Then there is no probabilistic
logic program with vocabulary $\mathcal{S}'$ such that the reduct
of the induced family of distributions to $\mathcal{S}$ is equal
to $\mathfrak{m}^{*}$.
\end{prop}
\subsection{Complexity results}
Since the theory of random structures is decidable, the asymptotic quantifier elimination results in this paper provide us with an algorithmic procedure for determining an asymptotically equivalent acyclic determinate program for any given probabilistic logic program, and by extension a procedure for determining the asymptotic probabilities of quantifier-free queries.
What can we say about the complexity of this procedure?
Since the operation takes a non-ground probabilistic logic
program as input and computes another probabilistic logic program,
the notion of data complexity does not make sense in this context.
Instead, program complexity is an appropriate measure.
In our context, the input program could be measured in different ways.
Since our analysis is based on the setting of abstract distributions,
we will be considering as our input abstract distributions obtained
from (stratified) probabilistic logic programs. We will furthermore
fix our vocabularies $\mathcal{R}$ and $\mathcal{S}$. Since the transformation
acts on each $\phi_{R}$ in turn and independently, it suffices to
consider the individual $\phi_{R}$ as input. It is natural to ask
about complexity in the \emph{length} of $\phi_{R}$.
In fact, one can extract upper and lower bounds from the work of Blass et al. \shortcite{BlassGK85}, who build on the work of Grandjean \shortcite{Grandjean83} for analysing the complexity of their asymptotic results.
The task of determining whether the probability
of a first-order sentence converges to 0 or 1 with increasing domain
size, which is a special case of our transformation, is complete in
PSPACE \cite[Theorem 1.4]{BlassGK85}. Therefore the program transformation
is certainly PSPACE-hard. On the other hand, asymptotic elimination
of quantifiers in least fixed point logic is complete in EXPTIME \cite[Theorems 4.1 and 4.3]{BlassGK85}, so the program transformation is certainly
in EXPTIME.
In order to specify further, we note that for abstract first-order
distributions, which correspond to acyclic probabilistic logic programs,
the transformation can be performed in PSPACE:
Let $R$ be of arity $n$. Then enumerate the (finitely many) quantifier-free
$n$-types $\left(\varphi_{i}\right)$ in $\mathcal{R}$. Now for
any $\phi_{R}$ of arity $n$ we can check successively in polynomial
space in the length of $\phi_{R}$, whether the probability of $\varphi_{i}\rightarrow\phi_{R}$ converges
to 0 or 1. Then $\phi_{R}$ is asymptotically equivalent to the conjunction
of those quantifier-free $n$-types for which 1 is returned.
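The enumeration of quantifier-free $n$-types underlying this procedure is elementary; the following sketch (representation our own) lists the atoms over $x_1,\ldots,x_n$ and the $2^{\#\mathrm{atoms}}$ sign patterns:

```python
from itertools import product

# A quantifier-free n-type over a relational vocabulary is a choice of sign
# for every atom built from x_1, ..., x_n; variables are encoded as 1..n.

def atoms(vocab, n):
    """vocab: dict mapping relation symbol -> arity."""
    return [(R, args)
            for R, k in sorted(vocab.items())
            for args in product(range(1, n + 1), repeat=k)]

def qf_types(vocab, n):
    ats = atoms(vocab, n)
    for signs in product([True, False], repeat=len(ats)):
        yield dict(zip(ats, signs))

vocab = {"R": 1, "E": 2}                   # one unary, one binary relation
print(len(atoms(vocab, 2)))                # 2 + 4 = 6 atoms over x_1, x_2
print(sum(1 for _ in qf_types(vocab, 2)))  # 2**6 = 64 types
```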
In the general case of least fixed point logic, Blass et al. \shortcite{BlassGK85}
show that the problem of finding an asymptotically equivalent first-order
sentence is EXPTIME complete. However, to represent stratified Datalog,
only the fragment known as \emph{bounded }or \emph{stratified }least
fixed point logic is required \cite[Sections 8.7 and 9.1]{EbbinghausF06}.
Therefore, the complexity class of the program transformation of stratified
probabilistic logic programs corresponds to the complexity of the
asymptotic theory of \emph{bounded} fixed point logic, which to the
best of our knowledge is still open.
\section{Conclusion and Further Work}
By introducing the formalism of abstract distributions, we have related the asymptotic analysis of finite model theory to the distribution semantics underlying probabilistic logic programming.
Thereby, we have shown that every probabilistic logic program is asymptotically equivalent to an acyclic determinate logic program.
In particular, this representation provides us with an algorithm to evaluate the asymptotic probabilities of quantifier-free queries with respect to a probabilistic logic program.
We have also seen that the asymptotic representation of a probabilistic logic program invoking probabilistic rules is in fact independent of the probability with which the rule is annotated.
We applied our asymptotic results to study the projective families of distributions that can be expressed in probabilistic logic programming.
We saw that they have certain independence properties, and that in particular the families of distributions induced on the entire vocabulary satisfy the conditional independence property.
We also saw that, at least in the case of a unary vocabulary, only a minority of projective families of distributions can be represented, excluding important examples such as Carnap's family of distributions $\mathfrak{m}^*$.
\subsection{Further work}
The analysis presented here suggests several strands of further research.
While some widely used directed frameworks can be subsumed under the
probabilistic logic programming paradigm, undirected models such as Markov
logic networks (MLNs) seem to require a different approach.
Indeed, the projective fragment of MLNs isolated by Jaeger and Schulte \shortcite{JaegerS18} is particularly
restrictive, since it only allows formulas in which every literal
has the same variables. Those are precisely the $\sigma$-determinate formulas discussed by Domingos and Singla \shortcite{DomingosS07};
cf. also the parametric classes of finite model theory \cite[Section 4.2]{EbbinghausF06}. It might therefore be expected
that if an analogous result to Theorem \ref{thm:Probabilistic_Logic_Programs}
holds for MLNs, they could express even fewer projective families
of distributions than probabilistic logic programs.
Beyond the FOL or LFP expressions used in current probabilistic logic
programming, another direction is to explore languages with more expressive
power. Candidates for this are for instance Keisler's \shortcite{Keisler85} logic with probability
quantifiers or Koponen's \shortcite{Koponen20} conditional probability
logic. Appropriate asymptotic quantifier elimination
results have been shown in both settings \cite{Koponen20,KeislerL09},
allowing an immediate application of our results there.
The asymptotic quantifier elimination presented here excludes
higher-order programming constructs from our probabilistic logic programs.
Investigating the asymptotic theory of impredicative programs under a formalised semantics
such as that presented by Bry \shortcite{Bry20} could have direct consequences for the expressiveness of such more general probabilistic logic programs.
Finally, the failure of the classical paradigm under investigation
to express general projective families of distributions suggests one
may have to look beyond the current methods and statistical relational frameworks
to address the challenge of learning and inference for general projective
families of distributions issued by Jaeger and Schulte \shortcite{JaegerS20}.
\bibliographystyle{acmtrans}
\section{Introduction}
Very recently, the LHCb Collaboration has announced the finding of a
new hidden-charm pentaquark state with strangeness in the analysis of
$\Xi_b^-\to J/\psi\Lambda K^-$
decays~\cite{Aaij:2020gdg}. This hidden-charm pentaquark baryon with
strangeness is christened $P_{cs}^0(4459)$. The mass and width of the
$P_{cs}$ are determined to be $4458.8\pm
2.9_{-1.1}^{+4.7}$ MeV and $17.3_{-5.7}^{+8.0}$ MeV, respectively. While the quark
content of $P_{cs}^0(4459)$ can be given as $udsc\bar{c}$, its
spin-parity quantum number is not yet known because of the lack of
data. This finding broadens our understanding of how quarks form
multi-quark hadrons in addition to the heavy pentaquark baryons
$P_c$~\cite{Aaij:2015tga, Aaij:2016phn, Aaij:2019vzc} and
many charmonium-like tetraquark mesons~\cite{Choi:2003ue,
Aubert:2003fg} (see recent experimental and theoretical
reviews~\cite{Chen:2016qju, Esposito:2016noz, Dong:2017gaw,
Olsen:2017bmm, Guo:2017jvc}). The structure of $P_c$ and $P_{cs}$
has been theoretically studied in various works~\cite{Maiani:2015vwa,
Li:2015gta, Ghosh:2017fwg, Cheng:2015cca,
Anisovich:2015zqa, Wang:2015wsa, Chen:2015sxa, Feijoo:2015kts,
Lu:2016roh, Chen:2016ryt, Xiao:2019gjd, Wang:2019nvm, Chen:2020uif,
Peng:2020hql, Chen:2021tip,
Wu:2010jy, Wu:2010vk, Yuan:2012wz, Santopinto:2016pkp, Takeuchi:2016ejt,
Yamaguchi:2016ote, Yamaguchi:2017zmn, Yamaguchi:2019seo, He:2019ify}.
The internal structure of the hidden-charm
pentaquark states is still under debate. Since the mass of
$P_{cs}^0(4459)$ is about 19 MeV below the $\bar{D}^* \Xi_c^0$
threshold, it is arguably considered to be a hadronic molecular
state~\cite{Chen:2015sxa,Chen:2016ryt, Xiao:2019gjd, Wang:2019nvm,
Chen:2020uif, Peng:2020hql, Chen:2021tip}. On the other
hand, the hidden-charm pentaquark states are interpreted as compact
pentaquark states consisting of two diquarks and an
antiquark~\cite{Maiani:2015vwa, Li:2015gta, Wang:2015wsa, Ghosh:2017fwg,
Wang:2020eep}, as hadrocharmonium states~\cite{Eides:2019tgv,Anwar:2018bpu,
Ferretti:2020ewe}, as dynamically generated states in a coupled-channel
unitary approach with the local hidden gauge
formalism~\cite{Wu:2010jy, Wu:2010vk}, as five-quark
states~\cite{Yuan:2012wz, Santopinto:2016pkp, Takeuchi:2016ejt},
as meson-baryon molecules with coupled channels~\cite{Yamaguchi:2016ote},
as meson-baryon molecules coupled to five-quark
states~\cite{Yamaguchi:2017zmn, Yamaguchi:2019seo}, and as hadronic molecular
states in a quasi-potential Bethe-Salpeter equation
approach~\cite{He:2019ify}.
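As a rough numerical cross-check of the molecular interpretation (a back-of-envelope sketch of our own; PDG mass values assumed, not taken from the references):

```python
# Position of Pcs(4459) relative to the D*0bar Xi_c0 threshold (MeV).
m_Dstar0 = 2006.85   # D*(2007)^0 mass, PDG value assumed
m_Xic0   = 2470.90   # Xi_c^0 mass, PDG value assumed
m_Pcs    = 4458.8    # LHCb central value for Pcs(4459)

threshold = m_Dstar0 + m_Xic0
print(threshold - m_Pcs)   # about 19 MeV below threshold, as quoted above
```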
Theoretically, the spin-parity quantum number of the $P_{cs}^0(4459)$
is proposed to be $1/2^-$ or $3/2^-$. Reference~\cite{Peng:2020hql} argues that $J^P=3/2^-$
is preferable over $J^P=1/2^-$ based on the hadronic molecular picture
of $P_{cs}^0(4459)$, though it should be determined by experiments.
In principle, the hidden-charm pentaquark states can be produced by
meson beams such as the pion and kaon. Since several experimental
programs to measure charmed hadrons have been planned at the Japan
Proton Accelerator Research Complex (J-PARC)~\cite{Noumi, Shirotori,
Kim:2014qha, Kim:2015ita, Kim:2016imp}, it is also of great
importance to investigate the production mechanism of
the hidden-charm pentaquark states. In Ref.~\cite{Lu:2015fva},
the production of the $P_c^0(4380)$ and $P_c^0(4450)$ was studied
in the $\pi^- p\to J/\psi n$ reaction, based
on the effective Lagrangian approach. This approach provides a simple but
clear understanding of how the $P_c$'s can be created at the level of
the Born approximation. The transition amplitude includes the $P_{c}$'s
as the resonance baryons in the $s$ channel explicitly together
with $\pi$ and $\rho$ exchanges in the $t$ channel and the $P_c$'s
exchange in the $u$ channel. They found that the contributions of the
$P_c^0(4380)$ and $P_c^0(4450)$ produce clear peak structures
of order $1\,\mu\mathrm{b}$ at the energies corresponding to the
masses of $P_c$'s. On the other hand, Ref.~\cite{Kim:2016cxr} examined
the $\pi^- p\to J/\psi n$ reaction, using the Regge approach. The
$t$ channel for the hidden charm reaction is distinguished from that
for the open charm reaction, since the hidden charm processes are
suppressed by the Okubo-Zweig-Iizuka (OZI) rule. This indicates that
it is difficult to determine the coupling constant for $P_c$ by using
some model calculations. Thus, one needs to make a reasonable
assumption for the branching ratios of $P_c$. A similar situation is
expected also for the $K^- p\to J/\psi \Lambda$ reaction.
In the present work, we investigate the production of $P_{cs}^0(4459)$
in the $K^-p\to J/\psi \Lambda$ reaction, based on two different
theoretical models, i.e. the effective Lagrangian method and Regge
approach. In particular, since the energy of the initial kaon should
be enough to create the charmonium $J/\psi$ and $\Lambda$, it is
worthwhile to consider also the Regge approach. In
Refs.~\cite{Kim:2014qha, Kim:2015ita}, both the effective Lagrangian
method and Regge approach were used for the study of the open-charm
process $\pi^- p\to D^{*-}\Lambda_c^+$. It turns out that the Regge
approach describes the experimental data very well over the whole
energy region. However, while the Regge approach describes the general
behavior of the cross sections at very high energies, it has certain
difficulties to describe experimental data quantitatively. One
effective way of improving this Regge approach is that one can replace
the Feynman propagators in the transition amplitudes derived based on
the effective Lagrangian by the Reggeized propagator. This method is
often called the hybridized Regge approach.
Actually, the Regge approach was used for the description of the
$\pi^- p\to J/\psi n $ reaction~\cite{Kodaira:1979sf} in which the
total cross section for the reaction was estimated to be around $1$ pb
at the momentum $p=50\,\mathrm{GeV}/c$. Moreover,
the hybridized Regge approach was developed and successfully applied
to photoproduction of mesons~\cite{Guidal:1997hy}.
In the present work, we take the same strategy: we will
employ both the effective Lagrangian and Regge approaches and compare
their results, since the two approaches are complementary
to each other. Since the
spin-parity quantum number
of $P_{cs}^0(4459)$ is experimentally unknown, we will consider six
different cases, i.e. $J^{P}=1/2^{\pm}$, $J^{P}=3/2^{\pm}$, and
$J^{P}=5/2^{\pm}$, emphasizing the cases of $J^P=1/2^-$ and $3/2^-$.
Then, we scrutinize the differences among the contributions of
$P_{cs}$ to the $K^- p \to J/\psi \Lambda$ with the different
spin-parity quantum number assigned.
The present work will provide helpful guidance on possible
future experiments at the J-PARC and on determining the spin-parity
quantum number of $P_{cs}$.
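As a back-of-envelope orientation for such experiments (a sketch of our own; PDG masses assumed), the kaon laboratory momentum required to reach the $J/\psi\Lambda$ threshold and the $P_{cs}^0(4459)$ position follows from two-body kinematics on a fixed proton target:

```python
from math import sqrt

# Kaon beam momentum for a given c.m. energy in K- p collisions (GeV units).
m_K, m_p = 0.493677, 0.938272        # PDG masses assumed
m_psi, m_Lam = 3.096900, 1.115683

def p_lab(sqrt_s):
    """Beam momentum on a fixed proton target from s = m_K^2 + m_p^2 + 2 m_p E_K."""
    s = sqrt_s ** 2
    E_K = (s - m_K ** 2 - m_p ** 2) / (2 * m_p)
    return sqrt(E_K ** 2 - m_K ** 2)

print(p_lab(m_psi + m_Lam))   # J/psi Lambda threshold: ~8.84 GeV/c
print(p_lab(4.4588))          # Pcs(4459) position:     ~9.98 GeV/c
```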
We sketch the present work as follows: In Section II, we explain the
general formalism for the effective Lagrangian and Regge
approaches. Since the coupling constants at the vertices including
$P_{cs}$ are not known, we first estimate them by imposing reasonable
assumptions on the branching ratios of the $P_{cs}$ decays.
In Section III, we present the results for the total and differential
cross sections, emphasizing the differences arising from different
spin-parity quantum numbers. In the final section, we summarize the
present work and draw conclusions.
\section{General formalism}
We start with the effective Lagrangian approach and then formulate
the transition amplitude for the $K^-p\to J/\psi
\Lambda$ reaction in the Regge approach.
\subsection{Effective Lagrangian method}
\begin{figure}[htp]
\includegraphics[scale=1.2]{fig1.pdf}
\caption{The tree-level Feynman diagrams for the $K^- + p\to J/\psi +
\Lambda$ reaction. In the left panel the $s$-channel is drawn,
whereas in the center and right panels, the $t$-channel and
$u$-channel diagrams are depicted. $p_i$ stand for the four-momenta
of hadrons involved in the reaction.}
\label{fig:feydiag}
\end{figure}
In the effective Lagrangian approach for the $K^-p\to J/\psi
\Lambda$ reaction, we can consider three different Feynman diagrams
that are drawn in Fig.~\ref{fig:feydiag}. In the $s$ channel, we can only
include $P_{cs}^0(4459)$ with the experimental data on its mass and
decay width taken into account~\cite{Aaij:2020gdg}. Though we can
include other hyperons with strangeness $S=-1$, we will neglect them,
because we do not have any information on the coupling constant for
the vertices such as $Y\Lambda J/\psi$ and furthermore their
contributions will be negligible, since they are far from
on-mass-shell. The $t$ channel contains $K$ and $K^*$ exchange. In the
$u$ channel, we can introduce the nucleon. Note that it is not
possible to include $P_{cs}$ in the $u$ channel, which is very
different from the case of the $\pi^- p \to J/\psi n$ reaction.
Since the spin-parity quantum number of $P_{cs}^0(4459)$ is unknown,
we assume six different cases: $J^{P}=1/2^\pm,\,3/2^\pm,\,5/2^\pm$.
Taking into account these different quantum numbers, we can express the
effective Lagrangians for $P_{cs}$ as follows~\cite{Lu:2015fva,
Kim:2016cxr,Mart:2015jof,Kim:2011rm,Wang:2015jsa}
\begin{align}
\mathcal{L}_{P\Lambda J/\psi}^{1/2\pm}=&\, -g_{P\Lambda J/\psi}
\bar{P}\Gamma^{\mp}_\mu\Lambda\psi^{\mu}
+ \frac{f_{P\Lambda J/\psi}}{2 m_\Lambda}
\bar{P}\sigma_{\mu\nu}\Gamma^{\pm}\Lambda\psi^{\mu\nu}
+ \mathrm{h.c.},\cr
\mathcal{L}_{P\Lambda J/\psi}^{3/2\pm}=&\,-\frac{g_{P\Lambda
J/\psi}}{2 m_\Lambda} \bar{P}_{\mu}\Gamma^{\pm}_\nu\Lambda\psi^{\mu\nu}
-\frac{f_{P\Lambda J/\psi}}{4 m_\Lambda^2}
\bar{P}_{\mu}\Gamma^{\mp}\partial_{\nu}
\Lambda\psi^{\mu\nu} -\frac{h_{P\Lambda J/\psi}}{4 m_\Lambda^2}
\bar{P}_{\mu}\Gamma^{\mp}\Lambda\partial_{\nu}\psi^{\mu\nu}+
\mathrm{h.c.},\cr
\mathcal{L}_{P\Lambda J/\psi}^{5/2\pm}=&\,
-\frac{g_{P\Lambda J/\psi}}{2 m_\Lambda^2}
\bar{P}_{\mu\alpha}\Gamma^{\mp}_\nu\Lambda\partial^\alpha
\psi^{\mu\nu}-\frac{f_{P\Lambda J/\psi}}{4 m_\Lambda^3} \bar{P}_{\mu\alpha}
\Gamma^{\pm}\partial_{\nu}\Lambda\partial^\alpha\psi^{\mu\nu}
-\frac{h_{P\Lambda J/\psi}}{4 m_\Lambda^3} \bar{P}_{\mu\alpha}\Gamma^{\pm}
\Lambda\partial^\alpha\partial_{\nu}\psi^{\mu\nu} +\mathrm{h.c.},
\label{Eq:pljcoup}
\end{align}
where $P$, $\Lambda$, $\psi^\mu$ denote the fields corresponding
respectively to $P_{cs}^0(4459)$, $\Lambda^0$, and
$J/\psi$. $\psi_{\mu\nu}$ is defined as $\partial_\mu \psi_\nu
-\partial_\nu \psi_\mu$. $m_\Lambda$ stands for the mass of the
$\Lambda$ hyperon. $\Gamma_\mu$ and $\Gamma$ are given respectively by
\begin{align}
\Gamma^{\pm}_\mu =
\begin{pmatrix}
\gamma_\mu\gamma_5 \\ \gamma_\mu
\end{pmatrix}
\mbox{ and }
\Gamma^{\pm} = \begin{pmatrix} 1 \\ i\gamma_5
\end{pmatrix},
\end{align}
with different parities considered. Since we consider the production
of $P_{cs}$ in the vicinity of the $J/\psi \Lambda$ threshold, we will
keep only the first term of each effective Lagrangian, i.e. the
term with $g_{P\Lambda J/\psi}$, assuming that the terms with $f_{P\Lambda J/\psi}$
and $h_{P\Lambda J/\psi}$ are rather small near the threshold.
The effective Lagrangians for the $NP_{cs}K$ vertex are written as
\begin{align}
\mathcal{L}_{P N K}^{1/2\pm}=&\,- g_{PNK} \bar{P}
\Gamma^{\mp}N K + \mathrm{h.c.},\cr
\mathcal{L}_{P N K}^{3/2\pm}=&\,- \frac{g_{PNK}}{M_{P_{cs}}\, m_N}
\varepsilon^{\mu\nu\alpha\beta}\partial_\mu\bar{P}_\nu
\Gamma^{\pm}_\alpha N \partial_\beta K + \mathrm{h.c.},\cr
\mathcal{L}_{P N K}^{5/2\pm}=&\,\, - \frac{g_{PNK}}{M_{P_{cs}} \,m_N^2}
\varepsilon^{\mu\nu\alpha\beta}\partial_\mu\bar{P}_{\nu\rho}
\Gamma^{\mp}_\alpha N \partial^\rho\partial_\beta K
+ \mathrm{h.c.},
\end{align}
where $M_{P_{cs}}$ and $m_N$ represent the masses of $P_{cs}$ and the
nucleon respectively.
Since there is no information on the coupling constants for the
$P_{cs}J/\psi \Lambda$ and $P_{cs}KN$ vertices experimentally,
it is very difficult to determine them. As will be discussed
soon, one possible way is to resort to some guessworks based
on theoretical works~\cite{Kim:2016cxr,Paryev:2018fyv,Wang:2019dsi}
and recent experimental data on $\pi N$ and $\bar{K}N$
scattering~\cite{Jenkins:1977xb,Chiang:1986gn,Zyla:2020zbs}.
Note that we have used the $\pi N$ and $\bar{K}N$ scattering
data to extrapolate the $P_{cs}J/\psi\Lambda$ and $P_{cs}KN$ coupling
constants. This assumption is justified by the fact that the energy
of the $P_{cs}$ production is high enough that the effects of
explicit SU(3) symmetry breaking are suppressed, since the ratio
of the strange current quark mass $m_s$ to the kinetic energy of
the $\Lambda$ baryon is rather small. The coupling constants for $P_{cs}$ are
extracted by using the partial-wave decay width given by
\begin{align}
\Gamma (P_{cs}\to M B) =& \frac{|\mathbf{k}|}{8\pi
M_{P_{cs}}^2}\,\frac{1}{2J+1} \sum_{\lambda_1 = -J}^{J}
\sum_{\lambda_2,\lambda_3} |A(P_{cs}\to M B)|^2 ,
\label{eq:decaywidth}
\end{align}
where $M$ and $B$ denote the produced meson and baryon in the final
state, respectively, $|\mathbf{k}|$ is the momentum of the meson in
the final state, and $J$ represents the spin of the decaying
$P_{cs}$. The $\lambda_i$ are the spin projections of the particles
involved. The decay amplitudes $A(P_{cs}\to MB)$ for $P_{cs}\to
J/\psi\Lambda$ are obtained from the effective Lagrangians, with
the spin-parity quantum numbers of $P_{cs}$ given as
\begin{align}
A^{1/2\pm}_{P \Lambda J/\psi} =&\, -g_{P\Lambda J/\psi}\,
\bar{u}_{P}\,\Gamma^{\mp}_\mu\, \epsilon^{\mu}\, u_\Lambda, \cr
A^{3/2\pm}_{P \Lambda J/\psi} =&\, i\frac{g_{P\Lambda J/\psi}}{2m_\Lambda}\,
\bar{u}_{P\mu}\,\Gamma^{\pm}_\nu\, (q_\psi^\mu\epsilon^\nu -
q_\psi^\nu\epsilon^\mu)\,u_\Lambda , \cr
A^{5/2\pm}_{P \Lambda J/\psi} =&\, \frac{g_{P\Lambda J/\psi}}{2m_\Lambda^2}\,
\bar{u}_{P\mu\alpha}\,\Gamma^{\mp}_\nu\,
(q_\psi^\mu\epsilon^\nu
-q_\psi^\nu\epsilon^\mu)\,q_\psi^\alpha\,u_\Lambda,
\label{eq:5-7}
\end{align}
whereas those for $P_{cs}\to KN$ are expressed as
\begin{align}
A^{1/2\pm}_{P N K} =& \,- g_{PNK}\, \bar{u}_{P} \,\Gamma^{\mp}\,
u_{N},\cr
A^{3/2\pm}_{P N K} =& \,- \frac{g_{PNK}}{M_{P_{cs}}\, m_N}
\varepsilon_{\mu\nu\alpha\beta} \,\bar{u}_P^\nu
\,q_P^\mu\,\Gamma_{\pm}^\alpha\, q_{K}^\beta\, u_{N},\cr
A^{5/2\pm}_{P N K} =& \, i\frac{g_{PNK}}{M_{P_{cs}}\, m_N^2}
\varepsilon_{\mu\nu\alpha\beta}\,\bar{u}_P^{\nu\rho}\,q_P^\mu\,
\Gamma_{\mp}^\alpha\, q_{K}^\beta \,q_{K \rho}\, u_{N}.
\end{align}
$\epsilon_\mu$ in Eq.~\eqref{eq:5-7} stands for the polarization vector of
$J/\psi$. $q_i^\mu$ ($i=P,K,\psi$) denote respectively the momenta of
$P_{cs}$, $K$ and $J/\psi$ in the center-of-mass (CM) frame. Note that
$P_{cs}$ is at rest before it decays. The Rarita-Schwinger spinor for
$P_{cs}$ with higher spins ($s\geq 3/2$) is given by
the following recursive equation~\cite{Rarita:1941mf}
\begin{align}
u^{n+1/2}_{\mu_1\cdots\mu_{n-1}\mu}(p,s) \equiv \sum_{r,m}
\left(n+1/2,s|1,r;n-1/2,m\right)
u^{n-1/2}_{\mu_1\cdots\mu_{n-1}}(p,m) \varepsilon_{\mu}^{r}(p),
\end{align}
where $s, m$ and $r$ designate the projections of spin-$(n+1/2)$,
spin-$(n-1/2)$, and the polarization of a massive spin-$1$ particle
respectively.
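The final-state momentum $|\mathbf{k}|$ entering Eq.~\eqref{eq:decaywidth} is fixed by two-body kinematics. As a minimal numerical sketch (the helper name and the rounded masses are ours, not from the text), it follows from the K\"all\'en triangle function:

```python
import math

def two_body_momentum(M, m1, m2):
    """CM momentum of either daughter in the decay M -> m1 + m2 (GeV)."""
    if M < m1 + m2:
        raise ValueError("decay is kinematically forbidden")
    # Kallen (triangle) function lambda(M^2, m1^2, m2^2)
    lam = (M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)
    return math.sqrt(lam) / (2.0 * M)

# Illustrative masses (GeV), rounded:
M_Pcs, m_psi, m_Lam = 4.459, 3.097, 1.116
k = two_body_momentum(M_Pcs, m_psi, m_Lam)
```

For $P_{cs}(4459)\to J/\psi\Lambda$ this gives $|\mathbf{k}|\approx 0.65$ GeV, confirming that the decay proceeds close to threshold.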
To determine the coupling constants for the $P_{cs}J/\psi \Lambda$ and
$P_{cs} KN$ vertices, one should know the experimental data on their
branching ratios. Unfortunately, they are not known at all.
Even in the case of the $P_c$, its branching ratios are
unknown experimentally. This means that we have to make reasonable
assumptions of the branching ratios of $P_{cs}\to J/\psi\Lambda$ and
$P_{cs}\to KN$. A previous investigation on photoproduction of the
hidden-charm pentaquark $P_c(4450)$~\cite{Paryev:2018fyv} proposed
that if the branching ratio of $P_c(4450)\to J/\psi p$ is $1\%$ or
less, then one can explain the threshold enhancement of the $J/\psi$
production due to $P_{c}$ and the modification of the $J/\psi$ mass in
nuclear medium. However, this is still a very crude estimate for the
branching ratio of $P_c \to J/\psi p$. Note that even the $\pi^- p\to
J/\psi n$ reaction was not much studied experimentally and only the
upper limit of its total cross section is
known~\cite{Jenkins:1977xb, Chiang:1986gn}. Nevertheless, in
Refs.~\cite{Kim:2016cxr, Wang:2019dsi}, the upper limit of the total
cross section for the $\pi^- p\to J/\psi n$ reaction was cautiously
investigated with $P_c$ resonances taken into account, in which the
constraint on the branching ratio of $P_c$ was discussed, especially
in the region near the threshold. The branching ratio of $P_c \to
J/\psi n$ decay was estimated to be about a few percent, whereas the $P_c
\to \pi^- p$ decay was estimated to be of order $10^{-4}$, since it is
an OZI-suppressed process. These estimates are in agreement with
recent findings from the GlueX Collaboration~\cite{Ali:2019lzf}.
When it comes to the $P_{cs}$ decays, the situation is even
worse than in the $P_c$ case. Since there is no experimental information
on the decay of $P_{cs}$ at all, it is very difficult to determine the
coupling constants for the $P_{cs}J/\psi\Lambda$ and $P_{cs} K N$
vertices. Nevertheless, it is
worthwhile to estimate the branching ratio of the $P_{cs}\to J/\psi
\Lambda$ decay. Since the threshold energy of the $P_{cs}$ production is
rather high, the effects of the explicit SU(3) symmetry breaking are
also suppressed, considering the fact that the ratio between the
strange current quark mass $m_{\mathrm{s}}$ and the kinetic energy of
$\Lambda$, $T_K(\Lambda)$, is rather small
($m_{\mathrm{s}}/T_K(\Lambda)\ll 1$). This assumption is a
reasonable one, since the magnitude of the total cross section of $K^-
p$ scattering is similar to that of $\pi^- p$
scattering~\cite{Zyla:2020zbs}. Based on this assumption, we are
able to estimate the upper limit of the total cross section for the
$K^- p\to J/\psi \Lambda$ reaction near threshold to be around 1
nb. This implies that the branching ratios of the $P_{cs} \to
J/\psi\Lambda$ and $K^- p$ decays are about $1\%$ and $0.01\%$
respectively. If the branching ratio of $P_{cs} \to J/\psi\Lambda$
were larger than $10~\%$, then one would already have found evidence for
the existence of $P_{cs}$ in the old data for $K^- p$
scattering, which we will discuss later. Moreover, note
that this $1~\%$ branching ratio of the $P_{cs} \to J/\psi\Lambda$
decay is in line with recent investigations on the structure of
$P_{cs}$ with the molecular picture taken into
account~\cite{Chen:2016qju, Xiao:2021rgp}.
Using this estimate of the branching ratio, we can obtain the coupling
constant for the $P_{cs}KN$ vertex. The results for the coupling constants
for $P_{cs}$ are listed in Table~\ref{tab:1}. Note that we take the positive values
for the coupling constants.
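Because the partial width in Eq.~\eqref{eq:decaywidth} is quadratic in the coupling, changing the assumed branching ratio simply rescales the extracted coupling by the square root of the ratio, at fixed total width and phase space. A hedged bookkeeping sketch (the helper name is ours):

```python
import math

def rescale_coupling(g_old, br_old, br_new):
    """Rescale a coupling extracted from a partial width Gamma ~ g^2:
    changing the assumed branching ratio br_old -> br_new multiplies
    g by sqrt(br_new / br_old)."""
    return g_old * math.sqrt(br_new / br_old)

# e.g. a coupling extracted at Br = 1% re-evaluated at an assumed Br = 4%
g_1pct = 4.41e-2          # g_{Pcs J/psi Lambda} for J^P = 1/2^- (Table 1)
g_4pct = rescale_coupling(g_1pct, 0.01, 0.04)
```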
\setlength{\tabcolsep}{5pt}
\renewcommand{\arraystretch}{1.5}
\begin{table}[htp]
\caption{Numerical results for the coupling constants
$g_{P_{cs}J/\psi \Lambda}$ and $g_{P_{cs} KN}$. The branching ratios
of $P_{cs}\to J/\psi\Lambda$ and $P_{cs} \to pK$ decays are assumed to be
$1\,\%$ and $0.01\,\%$, respectively. Note that we choose the
positive values for the coupling constants.}
\label{tab:1}
\begin{tabular}{l c c c c c c}
\hline
\hline
$g_{P_{cs}MB}(J^P)$ & $1/2^+$ & $1/2^-$ & $3/2^+$ & $3/2^-$ & $5/2^+$
& $5/2^-$ \\
\hline
$P_{cs}\,J/\psi\,\Lambda$ & $1.26\times 10^{-1}$ &
$4.41\times 10^{-2}$ & $1.48\times 10^{-1}$ & $5.46\times 10^{-2}$
& $1.33\times 10^{-1}$ & $3.83\times 10^{-1}$ \\
$P_{cs}\,K\,p$ & $5.82\times 10^{-3}$
& $3.77\times 10^{-3}$ & $2.06\times 10^{-3}$
& $3.18\times 10^{-3}$ & $1.84\times 10^{-3}$
& $1.19\times 10^{-3}$ \\
\hline
\hline
\end{tabular}%
\end{table}
Once the values of the coupling constants are given, it is
straightforward to express the transition amplitudes in the
$s$ channel as
\begin{align}
\mathcal{M}_{1/2^\pm} &= i g_{P\Lambda J/\psi} g_{PNK}\,
\bar{u}(p_4,\lambda_4)
\,\Gamma^\mu_\mp \,\epsilon_\mu^*(p_3,\lambda_3)\,
\frac{\slashed{q} + M_{P_{cs}}}{s - M_{P_{cs}}^2} \,
\Gamma_\mp \,u(p_2,\lambda_2),
\label{eq:8} \\
\mathcal{M}_{3/2^\pm} &= -\frac{g_{P\Lambda J/\psi}
g_{PNK}}{2M_{P_{cs}}
m_N m_\Lambda} \bar{u}(p_4,\lambda_4)\, \Gamma_\nu^\pm \,(p_3^\mu
\epsilon^{*\nu}(p_3,\lambda_3) - \epsilon^{*\mu}(p_3,\lambda_3)
p_3^\nu)
\frac{\Delta_{\mu\sigma}}{s - M_{P_{cs}}^2}\, \varepsilon^{\rho \sigma
\alpha \beta}
q_\rho \,\Gamma_\alpha^\pm \, p_{1\beta}\, u(p_2,\lambda_2),
\label{eq:9}\\
\mathcal{M}_{5/2^\pm} &= -\frac{g_{P\Lambda J/\psi}
g_{PNK}}{2M_{P_{cs}}
m_N m_\Lambda^2} \bar{u}(p_4,\lambda_4)\, \Gamma_\nu^\mp p_3^{\lambda}
\,(p_3^\mu
\epsilon^{*\nu}(p_3,\lambda_3) - \epsilon^{*\mu}(p_3,\lambda_3)
p_3^\nu)
\frac{\Delta_{\mu \lambda \sigma \delta}}{s - M_{P_{cs}}^2}
\,\varepsilon^{\rho \sigma
\alpha \beta} q_\rho \,\Gamma_\alpha^\mp\, p_{1\beta}\, p_1^\delta \,
u(p_2,\lambda_2),
\label{eq:10}
\end{align}
where $\epsilon^*_\mu$ denotes the polarization vector for $J/\psi$
and $q$ stands for the momentum of $P_{cs}$ given by
$q=p_1+p_2=p_3+p_4$. Taking into account the decay width of $P_{cs}$,
we change the $P_{cs}$ mass $M_{P_{cs}}$ in the propagator to be
$(M_{P_{cs}} -i\Gamma_{P_{cs}}/2)$. The spin projection operators for
$P_{cs}$ with spin 3/2 and 5/2 are defined respectively
as~\cite{Kim:2012pz}
\begin{align} \label{}
\Delta_{\mu \sigma} &= (\slashed{q} + M_{P_{cs}})
\left[-g_{\mu \sigma} + \frac{1}{3} \gamma_\mu \gamma_\sigma
+ \frac{1}{3M_{P_{cs}}}(\gamma_\mu q_\sigma - \gamma_\sigma q_\mu)
+ \frac{2}{3M_{P_{cs}}^2} q_\mu q_\sigma \right], \cr
\Delta_{\mu \lambda \sigma \delta} &= (\slashed{q} + M_{P_{cs}})
\left[ \frac{1}{2} (\bar{g}_{\mu \sigma} \bar{g}_{\lambda \delta} +
\bar{g}_{\mu \delta} \bar{g}_{\lambda \sigma})
- \frac{1}{5} \bar{g}_{\mu \lambda} \bar{g}_{\sigma \delta}
- \frac{1}{10} (\bar{\gamma}_\mu \bar{\gamma}_\sigma \bar{g}_{\lambda \delta}
+ \bar{\gamma}_\mu \bar{\gamma}_\delta \bar{g}_{\lambda \sigma}
+ \bar{\gamma}_\lambda \bar{\gamma}_\sigma \bar{g}_{\mu \delta}
+ \bar{\gamma}_\lambda \bar{\gamma}_\delta \bar{g}_{\mu \sigma}) \right],
\end{align}
where
\begin{align} \label{}
\bar{g}_{\mu \nu} = g_{\mu \nu} - \frac{q_\mu q_\nu}{M_{P_{cs}}^2}, \;\;\;
\bar{\gamma}_\mu = \gamma_\mu - \frac{q_\mu}{M_{P_{cs}}^2} \slashed{q}.
\end{align}
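The spin-5/2 projector above is built from the transverse tensor $\bar{g}_{\mu\nu}$, which satisfies $\bar{g}_{\mu\nu}q^\nu=0$ by construction. A quick numerical check of this transversality in the $P_{cs}$ rest frame (plain-Python four-vectors; helper names ours):

```python
# Minkowski metric, signature (+,-,-,-)
g = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]
M = 4.459                      # P_cs mass (GeV)
q = [M, 0.0, 0.0, 0.0]         # P_cs momentum in its rest frame

def lower(v):
    """Lower a contravariant index with the metric."""
    return [sum(g[m][n] * v[n] for n in range(4)) for m in range(4)]

q_lo = lower(q)
# bar{g}_{mu nu} = g_{mu nu} - q_mu q_nu / M^2
bar_g = [[g[m][n] - q_lo[m] * q_lo[n] / M**2 for n in range(4)]
         for m in range(4)]
# transversality: bar{g}_{mu nu} q^nu should vanish component by component
contraction = [sum(bar_g[m][n] * q[n] for n in range(4)) for m in range(4)]
```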
In the $t$-channel, we consider the exchange of the $K$ and $K^*$
mesons. The effective Lagrangians for the $J/\psi KK$ and $J/\psi
KK^*$ vertices are given as
\begin{align}
\mathcal{L}_{J/\psi K K} =&\,\, -i g_{J/\psi KK}\, \psi^\mu
\left(K^+\partial_\mu K^- -
K^-\partial_\mu K^+\right),\cr
\mathcal{L}_{J/\psi K K^*} =&\,\, -\frac{g_{J/\psi KK^*}}{m_{\psi}}
\varepsilon^{\mu\nu\alpha\beta}\partial_\mu\psi_\nu
K \partial_\alpha K^*_\beta,
\label{eq:elback}
\end{align}
where $m_{\psi}$ denotes the mass of $J/\psi$. The coupling constants will
be determined by a method similar to that used in the $s$-channel case. The
decay amplitudes for the corresponding decays in Eq.~\eqref{eq:elback}
are obtained to be
\begin{align}
A_{J/\psi K K} =& \,- g_{J/\psi KK}\,(q_K-q'_K)_\mu \epsilon^\mu ,\cr
A_{J/\psi K K^*} =&\, -\frac{g_{J/\psi KK^*}}{m_{\psi}}
\varepsilon^{\mu\nu\alpha\beta}q_{\psi\mu}
\,q_{K^*\alpha}\,\epsilon_\nu\,
\epsilon^*_{K^*\beta} ,
\end{align}
where $q'_K$ stands for the momentum of the kaon that moves in the
direction opposite to $q_K$. The polarization vector of $K^*$ is
expressed by $\epsilon^\mu_{K^*}$. Since the decay widths of
$J/\psi$ to the $K$ and $K^*$ mesons are experimentally known
as~\cite{Zyla:2020zbs}
\begin{align}
\Gamma_{J/\psi\to KK} =&\, 2.66\times 10^{-2}\,\mathrm{keV},\;\;\;
\Gamma_{J/\psi\to KK^*} = 5.57\times
10^{-1}\,\mathrm{keV},
\end{align}
we can directly obtain the coupling constants $g_{J/\psi KK}$ and
$g_{J/\psi KK^*}$, respectively, as follows
\begin{align}
g_{J/\psi KK} =&\, 7.12\times 10^{-4}, \;\;\; g_{J/\psi KK^*} =
8.82\times 10^{-3}.
\label{eq:16}
\end{align}
Those for the $\Lambda N K$ and $\Lambda N K^*$ vertices are rather
well known. The semi-phenomenological nucleon-hyperon interaction such
as the Nijmegen extended-soft-core model
(ESC08a)~\cite{Rijken:2010zzb} provides us with their values. Then,
the effective Lagrangians for the $\Lambda N K$ and $\Lambda N K^*$
vertices are expressed as
\begin{align}
\mathcal{L}_{\Lambda N K} =&\,\, -\frac{f_{\Lambda N
K}}{m_\pi}\bar{\Lambda} \gamma_\mu
\gamma_5 N \partial^\mu K +
\mathrm{h.c.},\\
\mathcal{L}_{\Lambda N K^*} =&\,\, -g_{\Lambda N K^*}
\bar{\Lambda}\gamma^{\mu} N K^*_\mu -
\frac{f_{\Lambda N K^*}}{4
m_N}\bar{\Lambda}\sigma^{\mu\nu} N
\left(\partial_\mu K^*_\nu
- \partial_\nu K^*_\mu\right) +
\mathrm{h.c.},
\end{align}
with the coupling constants given by
\begin{align}
f_{\Lambda N K} = -0.2643,\;\;\; g_{\Lambda N K^*} = -1.1983,\;\;\;
f_{\Lambda N K^*} = -4.2386\,.
\end{align}
Thus, the resulting transition amplitudes for $K$ and $K^*$ exchanges
are respectively given as
\begin{align}
\mathcal{M}_{K} =& \, \frac{g_{J/\psi KK}f_{\Lambda N
K}}{m_\pi}\,\bar{u}(p_4,\lambda_4)\gamma_5 \frac{(2
p_1 -
p_3)\cdot\epsilon^{*}(p_3,\lambda_3)}{t-m_K^2}
\slashed{q}_t u(p_2,\lambda_2),\\
\mathcal{M}_{K^*} =& \, i \frac{g_{J/\psi KK^*}g_{\Lambda N
K^*}}{m_\psi}\,\bar{u}(p_4,\lambda_4)
\frac{\varepsilon_{\mu\nu\alpha\beta}
p_3^\mu\epsilon^{*\nu}(p_3,\lambda_3)
q_t^\alpha}{t-m_{K^*}^2}
\left(-g^{\beta\sigma}+\frac{q_t^\beta
q_t^\sigma}{m_{K^*}^2}\right) \left(
\gamma_\sigma +
i\frac{\kappa_{K^*}}{2m_N}
\sigma_{\gamma\sigma}q_t^\gamma\right)
u(p_2,\lambda_2),
\end{align}
where $q_t = p_3-p_1$ and $\kappa_{K^*}=f_{\Lambda N
K^*}/g_{\Lambda N K^*}$.
As for the $u$-channel contribution, we consider only the $N$
exchange. The effective Lagrangian for the $NNJ/\psi$ vertex is
similar to that for the spin-$1/2^+$ $P_{cs}$ in Eq.~\eqref{Eq:pljcoup}:
\begin{align}
\mathcal{L}_{J/\psi N N}=&\, -g_{J/\psi NN}\bar{N}\gamma_\mu
\psi^{\mu}N - \frac{f_{J/\psi NN}}{2M_N}
\bar{N} \sigma_{\mu\nu}\psi^{\mu\nu} N
+ \mathrm{h.c.}.
\end{align}
Since the $J/\psi$ vector meson has a nature similar to the $\phi$
vector meson, we ignore the second term with the tensor coupling
constant: its value is related to the charmed magnetic moment of
the nucleon, which can be neglected. It is also difficult to determine
the vector coupling constant $g_{J/\psi NN}$. We take its value from
Ref.~\cite{Barnes:2006ck}:
$g_{J/\psi NN}=g_{J/\psi N\bar{N}}=1.62\times10^{-3}$. This small
value indicates already that the $u$-channel contribution will be very
tiny. The corresponding $u$-channel amplitude is obtained as
\begin{align}
\mathcal{M}_N = -\frac{g_{J/\psi NN}f_{\Lambda N
K}}{m_\pi}\,\bar{u}(p_4,\lambda_4)\gamma_5
\slashed{p}_1\,\frac{\slashed{q}_u + m_N}{u - m_N^2}
\, \slashed{\epsilon}^*(p_3,\lambda_3)
u(p_2,\lambda_2),
\end{align}
where $q_u = p_4-p_1$.
Since hadrons have finite sizes and structures, it is essential to
consider a form factor at each vertex.
Actually, there is no firm theoretical ground as to
how one can determine the values of the cutoff masses. In
practice, the values of the cutoff masses are usually
fitted to the experimental data. Unfortunately, we do not
have enough experimental data to determine them
in the present case. Nevertheless, there is one
theoretical guideline. As discussed in Ref.~\cite{Kim:2018nqf},
heavier baryons are considered to be more compact than lighter ones,
which was found by examining the electromagnetic form
factors of singly heavy baryons. By ``\textit{more compact}''
we mean that the intrinsic size of heavier baryons (or
hadrons) should be smaller than that of lighter ones, which leads
in general to larger values of the cutoff masses. Guided
by this, we have chosen the cutoff masses $\Lambda$
in such a way that $\Lambda - m \simeq 600\text{--}700$ MeV.
In the present work, we take the
form factors most commonly used in reaction calculations. So,
we introduce the form factors in the $s$-, $t$-, and
$u$-channels, respectively, as follows:
\begin{align}
F_s (q^2) &= \frac{\Lambda^4}{\Lambda^4+(s-m^2)^2},\cr
F_t (q_t^2) &= \frac{\Lambda^2-m^2}{\Lambda^2-t},\cr
F_u (q_u^2) &= \frac{\Lambda^2-m^2}{\Lambda^2-u} ,
\end{align}
with the values of the cutoff masses taken to be
\begin{align}
\Lambda_{P_{cs}} = 5.0\,\mathrm{GeV}, \;\;\;
\Lambda_K = 1.0\,\mathrm{GeV}, \;\;\;
\Lambda_{K^*} = 1.4\,\mathrm{GeV}, \;\;\;
\Lambda_N = 1.5\,\mathrm{GeV}.
\end{align}
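Each form factor above is normalized to unity at the on-shell point of the exchanged particle and falls off away from it. A minimal sketch of the three profiles with the quoted cutoffs (function names ours):

```python
def F_s(s, m, L):
    """s-channel form factor Lambda^4 / (Lambda^4 + (s - m^2)^2)."""
    return L**4 / (L**4 + (s - m**2)**2)

def F_t(t, m, L):
    """t-channel form factor (Lambda^2 - m^2) / (Lambda^2 - t)."""
    return (L**2 - m**2) / (L**2 - t)

def F_u(u, m, L):
    """u-channel form factor (Lambda^2 - m^2) / (Lambda^2 - u)."""
    return (L**2 - m**2) / (L**2 - u)

# Cutoff from the text (GeV) and an illustrative exchanged mass (GeV)
L_K, m_K = 1.0, 0.494
```

Note that each factor equals $1$ at $s=m^2$, $t=m^2$, or $u=m^2$, respectively, and suppresses the amplitude away from the pole.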
Note that these values of the cutoff masses have been used in
various different reactions.
\vspace{0.5cm}
\subsection{Regge Approach}
The effective Lagrangian method is known to describe well hadronic
production in the low-energy region, in particular, in the vicinity of
the threshold energy. However, since this method is based on the Born
approximation, i.e., a tree-level calculation, it is not suitable to
explain the exclusive or diffractive hadronic processes at higher
energies. On the other hand, the Regge approach explains the general
high-energy behaviors of the hadronic reactions but only
qualitatively. To overcome this disadvantage, a hybridized Regge
approach was phenomenologically proposed~\cite{Guidal:1997hy} in an
attempt to improve the Regge approach quantitatively. This approach is
characterized by replacing the Feynman propagator derived from the
effective Lagrangian method by the Regge one
\begin{align} \label{}
\frac{1}{t-m_X^2} \longrightarrow \mathcal{P}_{\mathrm{Regge}}^\pm =
-\Gamma\left(-\alpha_X
(t)\right) \xi_X^\pm \alpha_X' \left(\frac{s}{s_0}\right)^{\alpha_X(t)}.
\end{align}
This method was successfully applied to hadronic reactions
throughout broad energy regions including even the resonance
regions, $\sqrt{s}\sim 3~\mathrm{GeV}$~\cite{Kim:2014qha,Kim:2015ita}.
\subsubsection{$K$ and $K^*$ Reggeon exchange}
\begin{figure}[htp]
\includegraphics[scale=1]{fig2.pdf}
\caption{The $t$-channel schematic diagram for the hidden-charm $K p\to
J/\psi \Lambda$ reaction.}
\label{fig:2}
\end{figure}
In Fig.~\ref{fig:2}, we depict schematically the $t$-channel diagram
in terms of the quark lines~\cite{Kim:2016cxr}. As shown in
Fig.~\ref{fig:2}, hadronic $J/\psi$ productions by the photon, $\pi$
or $K$ beams are all OZI suppressed, being similar to the $\phi$-meson
production. So, we consider the light-Reggeon exchanges in the
$t$-channel, i.e. the $K$ and $K^*$ Reggeons. We employ here a
hybridized Regge method, in which the Feynman propagators in the
transition amplitudes obtained in the previous subsection are replaced
by the Regge
propagator~\cite{Don:2002,Guidal:1997hy,Kim:2016imp,Kim:2017nxg}.
Thus, we can express the transition amplitudes with the $K$- and
$K^*$-Reggeon exchanges, respectively, as
\begin{align}
\mathcal{M}_{K}^R(s,t) &= -\mathcal{M}_{K}(s,t)
\left\{\begin{array}{c}
1 \\
e^{-i\pi\alpha_K(t)}
\end{array}\right\}
\Gamma(-\alpha_K
(t))\alpha_K'(m_K^2)\left(\frac{s}{s_0}\right)^{\alpha_K(t)}
\left(t-m_K^2\right), \\
\mathcal{M}_{K^*}^R(s,t) &= -\mathcal{M}_{K^*}(s,t)
\left\{\begin{array}{c}
1 \\
e^{-i\pi\alpha_{K^*}(t)}
\end{array}\right\}\Gamma(1-\alpha_{K^*}
(t))\alpha_{K^*}'(m_{K^*}^2)\left(\frac{s}{s_0}\right)^{\alpha_{K^*}(t)-1}
\left(t-m_{K^*}^2\right),
\label{Eq:regge}
\end{align}
where $\alpha_K$ and $\alpha_{K^*}$ denote the Regge trajectories for
the $K$ and $K^*$ mesons, respectively. $\alpha'(t)$ represents the
derivative of $\alpha$ with respect to $t$: $\alpha'(t)=\partial
\alpha/ \partial t$. The scale parameter $s_0$ is a free
parameter. Though this can be fitted to the data, if they exist, its
value is widely taken to be $s_0 = 1~\mathrm{GeV}^2$, which
corresponds to a typical hadronic scale. This can be also estimated
theoretically. If the $t$-channel diagram as shown in
Fig.~\ref{fig:2} were a planar diagram, the energy-scale parameter
$s_0$ could have been calculated by using the planar diagram
decomposition~\cite{Titov:2008yf,Kim:2017hhm}. However, the
$t$-channel diagram for the $K^- p\to J/\psi \Lambda$ reaction is not
a planar one. So, there is no clear way to determine the value
of $s_0$. In the present work, we will utilize the result of Model I
as a guideline to determine $s_0$. Since the Regge amplitude has
to be consistent with that of Model I at the Regge pole position, we
extract the value of $s_0$ by comparing the results for the $d\sigma/
dt$ from Model I with those for Model II near the pole. The reasonable
values of $s_0$ turn out to be $s_0= 5~ \mathrm{GeV}^2$ for $K^{*}$-
and $s_0 = 2~\mathrm{GeV}^2$ for
$K$-Reggeon exchange.
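The matching used here exploits the fact that, near the pole $t\to m_K^2$ (where $\alpha_K(t)\to 0$ for a spin-0 trajectory), the Regge propagator reduces to the Feynman one, $1/(t-m_K^2)$. A small numerical sketch of this limit, assuming for simplicity a linear trajectory with $\alpha_K(m_K^2)=0$ and the constant signature factor (names ours):

```python
import math

m_K, s, s0 = 0.494, 25.0, 2.0           # GeV, GeV^2, GeV^2
alpha0 = -0.151                          # intercept alpha_K(0)
alpha_p = -alpha0 / m_K**2               # slope chosen so alpha_K(m_K^2) = 0

def alpha_K(t):
    """Linear kaon trajectory passing through the kaon pole."""
    return alpha0 + alpha_p * t

def regge_propagator(t):
    """-Gamma(-alpha(t)) * alpha' * (s/s0)^alpha(t), constant signature."""
    return -math.gamma(-alpha_K(t)) * alpha_p * (s / s0) ** alpha_K(t)

t = m_K**2 - 1e-6                        # just below the kaon pole
ratio = regge_propagator(t) * (t - m_K**2)   # should approach 1 at the pole
```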
\begin{figure}[htp]
\includegraphics[scale=1]{fig3a.pdf}
\includegraphics[scale=1]{fig3b.pdf}
\caption{Regge trajectories for the $K$, the $K^*$, and the nucleon.}
\label{fig:3}
\end{figure}
Though the linear Regge trajectories are given, we adopt the
nonlinear Regge trajectories~\cite{Brisudova:1999ut}, since they
describe the trajectories more realistically, as shown in
Fig.~\ref{fig:3}. Thus, $\alpha_K$ and $\alpha_{K^*}$ are parametrized
as
\begin{align}
\alpha_{K(K^*)}(t) = \alpha_{K(K^*)}(0) + \gamma \left(\sqrt{T_{K(K^*)}} -
\sqrt{T_{K(K^*)}-t}\right),
\end{align}
where $\gamma$ governs the slope of the trajectories and $T_{K(K^*)}$
denote their terminal points. The parameters for the $K$ and $K^*$
trajectories are fixed to be
\begin{align} \label{Eq:parameters}
&\gamma = 3.65~\mathrm{GeV}^{-1}, \;\;\;
\alpha_K(0) = -0.151,\;\;\; \alpha_{K^*}(0) = 0.414, \cr
&\sqrt{T_{K}} = 2.96~\mathrm{GeV}, \;\;\; \sqrt{T_{K^*}} =
2.58~\mathrm{GeV}.
\end{align}
Note that in the limit $t \to 0$, this square-root trajectory reduces
to the linear function
\begin{align} \label{eq:linear_regge}
\alpha(t) \approx \alpha(0) + \frac{\gamma}{2\sqrt{T}}t
= \alpha(0) + \alpha'(0) t.
\end{align}
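As a quick numerical cross-check of Eq.~\eqref{eq:linear_regge}, a finite-difference slope of the square-root trajectory at $t=0$ should reproduce the analytic value $\gamma/2\sqrt{T}$ (sketch with the kaon parameters; variable names ours):

```python
import math

gamma_slope = 3.65      # GeV^-1
alpha0_K    = -0.151
sqrtT_K     = 2.96      # GeV

def alpha_K(t):
    """Nonlinear (square-root) kaon trajectory."""
    return alpha0_K + gamma_slope * (sqrtT_K - math.sqrt(sqrtT_K**2 - t))

analytic = gamma_slope / (2.0 * sqrtT_K)           # alpha'(0)
h = 1e-6
numeric = (alpha_K(h) - alpha_K(-h)) / (2.0 * h)   # central difference
```

With the quoted parameters, $\alpha_K'(0)\approx 0.62\ \mathrm{GeV}^{-2}$.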
Before we carry out the numerical calculation, it is of great interest
to examine the asymptotic behavior of the differential cross section
$d\sigma/dt$. It is known that in the large $s$ limit the asymptotic
behavior of $d\sigma/dt$ is given as
\begin{align} \label{eq:asymp_s}
\frac{d\sigma}{dt}(s \to \infty, t \to 0) \propto s^{2\alpha(0)-2}.
\end{align}
We find that the squared transition amplitude behaves with respect to
$s$ and $t$ as
\begin{align} \label{eq:aymp_feynamp}
\lim_{s\to\infty} \sum_{\lambda_i, \lambda_f}
\left|\mathcal{M}_{K^*}\left(t-m_{K^*}^2\right)\right|^2 \propto
s^2t
\end{align}
and the differential cross section
\begin{align}
\frac{d\sigma}{dt} = \frac{1}{64\pi s} \frac{1}{|p_\mathrm{cm}|^2}
\sum_{\lambda_i,\lambda_f} \left|\mathcal{M}_{K^*}^R\right|^2
\propto \sum_{\lambda_i,\lambda_f} \left|\mathcal{M}_{K^*}
\left(t-m_{K^*}^2\right)\right|^2
s^{2\alpha(t) - 4} \underset{s\to\infty}{\propto} s^{2\alpha(t)-2},
\label{eq:dcs}
\end{align}
which reproduces correctly the asymptotic behavior given in
Eq.~\eqref{eq:asymp_s}. Here, $p_\mathrm{cm}$ stands for the initial
momentum in the CM frame, which is proportional to $\sqrt{s}$ in the
large $s$ limit. The numerical results for $d\sigma/dt$ with $K$ and
$K^*$ considered only are depicted in Fig.~\ref{fig:4}. As one can
see already in Eq.~\eqref{eq:aymp_feynamp}, the contribution of
$K^*$ exchange to $d\sigma/dt$ decreases rapidly at very
forward scattering $t \to 0$, as in the $\gamma N \to
K\Lambda$~\cite{Guidal:1997hy} and $\pi N \to
K^*\Lambda$~\cite{Kim:2015ita} reactions.
As $-t$ increases, $d\sigma/dt$ falls off linearly for $K$
exchange, whereas that for $K^*$ exchange grows very fast in
the forward direction and then decreases almost linearly.
\begin{figure}[htp]
\includegraphics[scale=0.4]{fig4a.pdf}
\includegraphics[scale=0.4]{fig4b.pdf}
\caption{$d\sigma/dt$ as a function of $-t$ for the $K$ and $K^*$
contributions from $W=10$ GeV to $25$ GeV.}
\label{fig:4}
\end{figure}
As shown in the left panel of Fig.~\ref{fig:3}, the even and odd
signatured $K$ ($K^*$) poles are lying on the same trajectory, which
means that the $K$ $(K^*)$ Regge trajectory is \textit{degenerate}.
When the total transition amplitudes are derived, the even and odd Regge
propagators can be added or
subtracted~\cite{Guidal:1997hy,Corthals:2005ce}.
The Regge propagator for $K$ ($K^*$) thus contains either $1$
(constant phase) or $e^{-i\pi\alpha (t)}$ (rotating phase). However,
we find that the results for the total and differential cross
sections are not much changed by the signature factor, so we choose the
constant signature factor. On the other hand, note that the asymmetry,
which will not be computed in the present work, would be quite
sensitive to this factor.
\subsubsection{$N$ Reggeon exchange}
We will follow the same method for the nucleon Reggeon in the $u$-channel.
Replacing the Feynman propagator by the Regge propagator, we obtain
the transition amplitudes for the $u$-channel as follows
\begin{align}
\mathcal{M}_{R}(s,u) = -\mathcal{M}_N(s,u) \xi^+_N
\Gamma(0.5-\alpha_N(u))\alpha_N'\left(\frac{s}{s_0}\right)^{\alpha_N(u)-0.5}
\left(u-m_N^2\right).
\label{Eq:regge-u}
\end{align}
We take the linear trajectory as in Ref.~\cite{Storrow:1983ct}. Based on the
nucleon trajectory drawn in the right panel of Fig.~\ref{fig:3}, we find the
Regge trajectory for the even signatured nucleon~\cite{Zyla:2020zbs} as
\begin{align}
\alpha_N(u) = \alpha_N(0) + \alpha_N' u\,; \;\;\;
\alpha_N(0) = -0.384, \;\; \alpha_N' = 0.996\,.
\end{align}
Since one can distinguish the even and odd trajectories for the
nucleon, the signature factor for the nucleon Regge trajectory can
be taken to be
\begin{align} \label{eq:nuc_sig}
\xi^+_N = \frac{1 + e^{-i\pi \alpha_N(u)}}{2}.
\end{align}
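The even signature factor in Eq.~\eqref{eq:nuc_sig} interpolates smoothly with modulus $|\cos(\pi\alpha_N/2)|$: it equals $1$ when $\alpha_N$ is an even integer and vanishes when $\alpha_N$ is odd. A brief sketch with the nucleon trajectory quoted below (helper names ours):

```python
import cmath
import math

alpha_N0, alpha_Np = -0.384, 0.996     # intercept and slope (GeV^-2)

def alpha_N(u):
    """Linear even-signature nucleon trajectory."""
    return alpha_N0 + alpha_Np * u

def xi_plus(u):
    """Even signature factor (1 + exp(-i*pi*alpha_N(u))) / 2."""
    return (1.0 + cmath.exp(-1j * math.pi * alpha_N(u))) / 2.0

u0 = -alpha_N0 / alpha_Np              # point where alpha_N(u0) = 0
```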
The energy-scale parameter $s_0$ cannot be obtained in a way
similar to the $t$-channel case, for the following reason. It
is related to the asymptotic behavior of the $u$-channel Regge
propagator. At very high energy and in the very forward direction, which
correspond to $s\to\infty$ and $t\to 0$, respectively, we get
$u\approx -s$, which leads to $\alpha(u)\approx -\alpha' s$. Moreover, using
the asymptotic behavior of the $\Gamma$ function as $z\to \infty$,
we find the approximate relation
\begin{align}
\Gamma(z)\approx \sqrt{2\pi(z-1)}\left(\frac{z-1}{e}\right)^{z-1}.
\end{align}
Thus, Eq.~\eqref{Eq:regge-u} is reduced to
\begin{align}
\mathcal{M}_{R}(s\to \infty,u\approx -s) \approx \mathcal{M}_{F}(s,u)
C s^\beta \left(\alpha_N' s_0/e \right)^{\alpha_N' s}.
\label{eq:37}
\end{align}
The last factor in Eq.~\eqref{eq:37} gives a hint on $s_0$: if $\alpha'
s_0 > e$, the amplitude given above diverges as $s$
grows. Since $\alpha_N'$ is less than $1$, we are able to fix the
energy-scale parameter to be $s_0 = 2 \,\mathrm{GeV}^2$ such that the
amplitude remains convergent.
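Both numerical ingredients of this convergence argument are easy to verify: the Stirling-type approximation for $\Gamma(z)$ quoted above, and the bound $\alpha_N' s_0 < e$ that keeps $(\alpha_N' s_0/e)^{\alpha_N' s}$ from blowing up as $s$ grows. A short sketch (names ours):

```python
import math

def stirling(z):
    """Gamma(z) ~ sqrt(2*pi*(z-1)) * ((z-1)/e)**(z-1) for large z."""
    return math.sqrt(2.0 * math.pi * (z - 1.0)) * ((z - 1.0) / math.e) ** (z - 1.0)

# sub-percent accuracy already at z = 10
rel_err = abs(stirling(10.0) / math.gamma(10.0) - 1.0)

# Convergence condition for the u-channel Regge amplitude
alpha_N_prime, s0 = 0.996, 2.0      # GeV^-2, GeV^2
converges = alpha_N_prime * s0 < math.e
```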
\vspace{0.5cm}
\section{Results and discussion}
\begin{figure}[htp]
\includegraphics[scale=0.7]{fig5.pdf}
\caption{Numerical results for the total cross section as a function
of the total CM energy ($W$) from Model I. We consider two different
cases of spin-parity quantum number for $P_{cs}$, i.e. $J^P=1/2^-$
and $J^P=3/2^-$. The $s$-channel contribution is drawn in the solid
and dashed curves in the case of $J^P=1/2^-$ and
$J^P=3/2^-$, respectively. The dot-dashed curve depicts the
contribution from $K^*$ exchange in the $t$ channel, whereas the
dotted one illustrates that from $K$ exchange. The two-dot-dashed one
draws the contribution from $N$ exchange in the $u$ channel.}
\label{fig:5}
\end{figure}
We first examine each contribution to the total cross section. In
Fig.~\ref{fig:5}, we show the results for each contribution to the
total cross section for the $K^- p\to J/\psi \Lambda$ reaction. We
consider here the hidden-charm pentaquark $P_{cs}$ with $J^P=1/2^-$
and $J^P = 3/2^-$ in the $s$ channel. The resonance peak reaches the
magnitude of nb order, i.e. $\sigma \sim 1$ nb at $W\approx 4.46$
GeV. The contribution from $K^*$ exchange in the $t$ channel is the
most dominant one apart from the resonance region. Those from $K$ and
$N$ exchanges are negligibly small, since they are approximately 100
times smaller than the contribution from $K^*$ exchange. The reason
can be found from the difference in the values of the coupling
constants, given in Eq.~\eqref{eq:16}. The coupling constant for the
$J/\psi KK^*$ vertex is at least ten times larger than that for the
$J/\psi KK$ vertex. Thus, the contribution from $K^*$ exchange to the
total cross section is much larger than those from both $K$ and $N$
exchanges.
\begin{figure}[htp]
\includegraphics[scale=0.7]{fig6.pdf}
\caption{Numerical results for the total cross section as a function
of the total CM energy ($W$) from Model II. Notations are the same
as in Fig.~\ref{fig:5}.
}
\label{fig:6}
\end{figure}
In Fig.~\ref{fig:6}, we draw the results for each contribution, which
are obtained from Model II, i.e., from the Regge approach. Since the
$s$-channel diagram is simply the same as that from Model I, we
discuss the contributions from $K^*$, $K$, and $N$ exchanges. As
mentioned previously, the value of the energy-scale parameter $s_0$ is
important for the size of the transition amplitudes. Since we use the
results from Model I as a guiding principle for determining $s_0$, we
expect that the magnitudes of the $K^*$- and $K$-Reggeon contributions
should be comparable to those from Model I. However, $s_0$ in the
Regge transition amplitude for $N$-Reggeon exchange in the $u$ channel
is constrained by the convergence condition. This means that the effect
of $N$ exchange is extremely small, so that we can even ignore it.
Comparing the results from Model II with those from Model I, we find
that the $K^*$ contributions exhibit a different dependence on $W$. It is
known from the asymptotic behavior of the differential cross sections
shown in Eq.~\eqref{eq:dcs} that the contributions of $K$- and
$K^*$-Reggeon exchanges should fall off slowly as $W$ increases. As
depicted in Fig.~\ref{fig:6}, $K^*$-Reggeon contribution indeed
decreases as $W$ increases. On the other hand, the results for $K^*$
exchange in Model I slowly increase as $W$ increases. This implies
that the effective Lagrangian method is limited in describing hadronic
processes at higher energies, though it is a very effective
method in the vicinity of the threshold.
The contribution of $K$-Reggeon exchange seems to rise as $W$
increases. However, if one further increases $W$, the contribution of
$K$-Reggeon exchange starts to fall off.
\begin{figure}[htp]
\includegraphics[scale=0.53]{fig7a.pdf}\hspace{0.7 cm}
\includegraphics[scale=0.53]{fig7b.pdf}
\caption{Numerical results for the total cross section as a function
of the total CM energy ($W$) from Model I (left panel) and Model II
(right panel) with the branching
ratio $B(P_{cs}\to J/\psi \Lambda)$ varied in the range of
$(1-50)\,\%$.}
\label{fig:6s}
\end{figure}
In Fig.~\ref{fig:6s}, we examine the dependence of the results
for the total cross section on the values of the branching ratio
of the $P_{cs}\to J/\psi \Lambda$ decay. As expected, if the value of
the branching ratio increases, the peak corresponding to $P_{cs}$
is clearly enhanced. Interestingly, the size of the peak reaches
approximately 10 nb when $\mathrm{Br}(P_{cs}\to J/\psi \Lambda)
=10~\%$ is used. When $\mathrm{Br}(P_{cs}\to J/\psi \Lambda)
=50~\%$ is employed, $\sigma$ is obtained to be almost 100 nb in the
vicinity of the resonance. This implies that if $\mathrm{Br}(P_{cs}\to
J/\psi \Lambda)$ were larger than $10~\%$, then $P_{cs}$ would already
have been found in the data for $K^-p$ scattering. Thus, the $1~\%$
branching ratio is a quite reasonable one, which is in agreement with
that from Refs.~\cite{Chen:2016qju, Xiao:2021rgp}.
\begin{figure}[htp]
\includegraphics[scale=0.7]{fig8.pdf}
\caption{Numerical results for the total cross section as a function
of the total CM energy ($W$) from Model I with possible $J^P$ quantum
numbers employed. Six different combinations of the spin and parity
for the hidden-charm strange pentaquark $P_{cs}$ are taken into
account.
}
\label{fig:7}
\end{figure}
\begin{figure}[htp]
\includegraphics[scale=0.7]{fig9.pdf}
\caption{Numerical results for the total cross section as a function
of the total CM energy ($W$) from Model II with possible $J^P$ quantum
numbers employed. Six different combinations of the spin and parity
for the hidden-charm strange pentaquark $P_{cs}$ are taken into
account.
}
\label{fig:8}
\end{figure}
The spin-parity quantum number for $P_{cs}^0(4459)$ is not yet known
experimentally. While either $J^P=1/2^-$ or
$J^P=3/2^-$ may be favored, it is of great interest to see whether the
total cross sections and other observables for the $K^-p\to J/\psi
\Lambda$ reaction can provide a hint on the spin-parity quantum number
for $P_{cs}$. If the final state consisting of
$J/\psi$ and $\Lambda$ is in the $S$ wave, the spin-parity quantum
numbers $J^P=1/2^-$ and $J^P=3/2^-$ of $P_{cs}$ will be
favored. However, there is no reason to reject other states with
higher values of the orbital angular momentum. Thus, we consider six
different combinations of the spin and parity for $P_{cs}$, i.e.,
$J^P=1/2^-$, $1/2^+$, $3/2^-$, $3/2^+$, $5/2^-$, and $5/2^+$. In
Figs.~\ref{fig:7} and~\ref{fig:8}, we draw the results for the total
cross sections by considering the six different combinations of the
spin-parity quantum numbers for $P_{cs}$, using Model I and Model II,
respectively. We find that, except for the case of $J^P=5/2^-$, all the
results are very similar to one another. While the result for
$J^P=5/2^-$ from Model I shows a similar behavior in the resonance
region, it increases monotonically and faster than all the other cases as
$W$ increases. On the other hand, the results from Model II decrease
monotonically as $W$ increases, again except for the $J^P=5/2^-$
case. Even the total cross section for $P_{cs}(J^P=5/2^-)$ will
decrease if $W$ increases further, though we do not show it in
Fig.~\ref{fig:8}.
\begin{figure}[htp]
\includegraphics[scale=0.59]{fig10a.pdf}
\includegraphics[scale=0.59]{fig10b.pdf}
\includegraphics[scale=0.59]{fig10c.pdf}
\includegraphics[scale=0.59]{fig10d.pdf}
\caption{Results for the differential cross sections
($d\sigma/d\cos\theta$) as functions of $\cos{\theta}$ for a given total
energy ($W$) from Model I. The notation of the curves is the same
as in Fig.~\ref{fig:7}.
}
\label{fig:9}
\end{figure}
\begin{figure}[htp]
\includegraphics[scale=0.59]{fig11a.pdf}
\includegraphics[scale=0.59]{fig11b.pdf}
\includegraphics[scale=0.59]{fig11c.pdf}
\includegraphics[scale=0.59]{fig11d.pdf}
\caption{Results for the differential cross sections
($d\sigma/d\cos\theta$) as functions of $\cos{\theta}$ for a given total
energy ($W$) from Model II. The notation of the curves is the same
as in Fig.~\ref{fig:7}.
}
\label{fig:10}
\end{figure}
Figures~\ref{fig:9} and~\ref{fig:10} depict the numerical results for
the differential cross sections $d\sigma/d\cos\theta$ as functions of
$\cos\theta$ with four different values of $W$ given. The results near
the threshold ($W=4.259$ GeV) clearly show that the magnitudes of the
differential cross sections are largest in the forward direction and
decrease monotonically as $\cos\theta$ goes from $+1$ to
$-1$; the differential cross sections are thus strongly suppressed
in the backward direction. While the results from Model II
at $W=4.259$ GeV exhibit behaviors similar to those from Model I, their
detailed dependences on $\cos\theta$ differ.
When it comes to the resonance region at $W=4.459$ GeV, the
results are noticeably distinguished for different assignments of
$J^P$ to $P_{cs}$. Scrutinizing first the results for the cases of
$J^P=1/2^-$ and $J^P=3/2^-$, we find that their $\cos\theta$ dependences
are rather different. The result for $P_{cs}^0(J^P=1/2^-)$
is suppressed in the forward direction, whereas that for
$P_{cs}^0(J^P=3/2^-)$ is enhanced as $\cos\theta$ increases. This
implies that the resonance and the $K^*$-exchange contributions
interfere differently with each other: when $J^P=1/2^-$ is assumed, the
two terms interfere destructively, while they interfere constructively
when $J^P=3/2^-$ is assumed. When one takes $J^P=3/2^+$ for $P_{cs}$, the
corresponding differential cross section becomes maximum at
$\cos\theta=0$, i.e., $\theta=90^\circ$. On the other hand, if
$J^P=5/2^+$ is assumed, the differential cross section
reaches its minimum at $\theta=90^\circ$. When $J^P=5/2^-$ is imposed,
the behavior of $d\sigma/d\cos\theta$ becomes more complicated.
Thus, the measurement of the differential cross
sections near the resonance region may provide one way of determining
the spin-parity quantum numbers for $P_{cs}$ in the $K^-p\to J/\psi
\Lambda^0$ reaction.
\begin{figure}[htp]
\includegraphics[scale=0.59]{fig12a.pdf}
\includegraphics[scale=0.59]{fig12b.pdf}
\includegraphics[scale=0.59]{fig12c.pdf}
\includegraphics[scale=0.59]{fig12d.pdf}
\caption{Results for the differential cross sections
($d\sigma/d\cos\theta$) as functions of $s$ for a given angle
($\cos\theta$) from Model I. The notation of the curves is the same
as in Fig.~\ref{fig:7}.
}
\label{fig:11}
\end{figure}
\begin{figure}[htp]
\includegraphics[scale=0.59]{fig13a.pdf}
\includegraphics[scale=0.59]{fig13b.pdf}
\includegraphics[scale=0.59]{fig13c.pdf}
\includegraphics[scale=0.59]{fig13d.pdf}
\caption{Results for the differential cross sections
($d\sigma/d\cos\theta$) as functions of $s$ for a given angle
($\cos\theta$) from Model II. The notation of the curves is the same
as in Fig.~\ref{fig:7}.
}
\label{fig:12}
\end{figure}
In Figs.~\ref{fig:11} and~\ref{fig:12}, we depict the results for the
differential cross sections $d\sigma/dt$ as functions of $W$, which
are obtained from Models I and II, respectively, varying the scattering
angle from $\cos\theta =0.9$ to $\cos\theta=-0.9$. In
the forward direction, the results for $d\sigma/dt$ look similar to
those for the total cross sections. However, the results at
$\cos\theta =0.1$ and $\cos\theta=-0.1$ enable us to distinguish among
those with different $J^P$. While the shapes of the resonances
corresponding to $P_{cs}$ all look similar, one can distinguish them
from one another as $s$ increases. Away from the resonance region, the
results decrease as $s$ increases except for the case of $J^P=5/2^-$,
in particular when one uses Model I. In fact,
we already found this behavior in the results for the total
cross sections. However, this particular
behavior is seen more prominently in $d\sigma/dt$ as $s$
increases. The reason is clear. As shown in
Eqs.~\eqref{eq:8},~\eqref{eq:9}, and~\eqref{eq:10}, the
transition amplitudes contain a strong momentum dependence when a higher
spin of $P_{cs}$ is assumed. As $s$ increases further, the results for
$J^P=5/2^+$ also start to increase slowly. This comes from the fact
that the difference in parity also affects the interference
effects. Moreover, this peculiar dependence of $d\sigma/dt$ on $s$ for
$J^P=5/2^-$ implies that the effective Lagrangian method may
no longer be valid at very high energies. On the other hand, the
Regge approach nicely reproduces the asymptotic behavior of $d\sigma/dt$
as $s$ increases. Even the result for $d\sigma/dt$ with $J^P=5/2^-$
assigned starts to fall off when $s$ increases further, though we do
not show this explicitly in Fig.~\ref{fig:12}.
\section{Summary and conclusion}
In the present work, we aimed at investigating the production of
$P_{cs}^0(4459)$ in the $K^- p\to J/\psi \Lambda^0$ reaction,
employing two different theoretical frameworks, i.e., the effective
Lagrangian method and the Regge approach. We refer to these two
approaches as Model I and Model II, respectively. We first determined
the coupling constants for all the relevant hadronic vertices. Since
there is a lack of experimental data on them, we made various reasonable
assumptions. To determine the coupling constant for the $P_{cs} J/\psi
\Lambda$ vertex, we assumed that the branching ratio of the $P_{cs}\to
J/\psi \Lambda$ decay is about $1~\%$. That of the
$P_c\to J/\psi N$ decay was also proposed to be about $0.01~\%$. Thus, the
coupling constant $g_{P_{cs} J/\psi \Lambda}$ is of order $0.1$. When
one considers the hidden-charm pentaquark with higher spins ($J^P\ge
3/2^{\pm}$), the tensor couplings are naturally introduced. However,
since $J/\psi$ is an isosinglet, the tensor coupling constants can be
neglected as in the case of the $\omega$ meson. Moreover, since we are
mainly interested in the resonance $P_{cs}$ region, which is not far
from the threshold of $J/\psi$ and $\Lambda$, the contributions from
the tensor couplings can be taken to be very small. Since the
Okubo-Zweig-Iizuka rule indicates that the coupling between a
nucleon and a $\phi$ meson ($s\bar{s}$) should be very small, the
same applies to the coupling between a hyperon and a charmonium
($c\bar{c}$). Thus, we also took the value of the coupling
constant for the $KP_{cs}N$ vertex to be very small. By
estimating the branching ratio of $P_{cs}\to K^- p$, we found that the
value of the $KP_{cs}N$ coupling constant is of order
$10^{-3}$.
Since the decay widths of $J/\psi$ to the $K$ and $K^*$ mesons
are known, we were able to determine the corresponding
coupling constants directly from experimental data.
Our results are obtained by setting the cut-off mass for the
off-shell $P_{cs}$ to $5$ GeV, which is a plausible choice,
although we observe that our predictions are very sensitive to
the value of the cut-off: even a small change in its value
alters the results considerably.
Moreover, since there are no experimental data with which to determine
the values of the cut-off masses, certain uncertainties caused by them
are inevitable. As for the form factors for $K$ and $K^*$, we fixed
the values of the cutoff masses to be $\Lambda_K=1$ GeV and
$\Lambda_{K^*}=1.4$ GeV. In the case of the Regge approach, we
considered a nonlinear form of the $K$ and $K^*$ Regge trajectories,
which fit the experimental data much better than the linear ones.
We first scrutinized the results for the total cross sections as
functions of the CM total energy $W$, with different spin-parity
quantum numbers $J^P$ taken into account. While the shape of the
resonance does not much depend on the given value of $J^P$, the
dependence on $W$ is different. In particular, the result with
$J^P=5/2^-$ increases faster than the other ones as $W$ increases. We
found a similar feature in the case of the Regge approach. However, as
$W$ increases further, all the results for the total cross sections
decrease. Thus, the Regge approach produces
more consistent results than the effective Lagrangian
method. Secondly, we examined the results for the differential cross
sections as functions of the scattering angle with several different
values of the CM total energy. The results in the resonance region
are clearly distinguished when different sets of the spin-parity
quantum numbers are used. This implies that the measurement of the
differential cross sections for the $K^-p\to J/\psi \Lambda$ reaction
may give a clue on the spin-parity quantum number of $P_{cs}$. We also
studied the differential cross sections $d\sigma/dt$ as functions of
the squared CM total energy $s$. When the scattering angle is near
$\theta=90^\circ$, the $s$ dependences of the differential cross
sections prominently reveal the differences among the results with
different sets of $J^P$.
The present results may be used as a theoretical guide for possible
future experiments searching for the hidden-charm pentaquarks with
strangeness. Similar studies for other $P_{cs}$ states are also under way.
\begin{acknowledgments}
The present work was supported by Basic Science Research Program
through the National Research Foundation of Korea funded by the
Ministry of Education, Science and Technology
(Grant-No. 2018R1A5A1025563).
\end{acknowledgments}
\section{Introduction}
\label{sec:intro}
Born in supernova explosions, neutron (or compact) stars (NSs) are the densest cosmic bodies in the modern Universe. They provide a unique density domain in which to study novel states of matter. Indeed, matter in compact stars is compressed by gravity to densities a few times the nuclear saturation density $n_0$~\cite{1996cost.book.....G,Lattimer2016PhR, 2007PrPNP..59...94W,Sedrakian2007PrPNP}.
During the last decade electromagnetic as well as gravitational-wave observations placed a number of constraints on the global properties of compact stars (masses, radii, deformabilities, etc.) which significantly narrow the range of admissible equation of state (EoS) models of dense matter. We briefly list below the most important observational results. The masses of massive $M \sim 2 M_{\odot}$ compact stars (millisecond pulsars) in binaries with white dwarfs were determined for J$1614-2230$ ($M=1.908\pm 0.016\, M_\odot$) \cite{2010Natur.467.1081D,Arzoumanian_2018}, J$0348+0432$ ($M=2.01\pm 0.04\, M_\odot$) \cite{2013Sci...340..448A} and J$0740+6620$ ($M=2.14^{+0.20}_{-0.18}\, M_\odot$ with 95$\%$ credibility) \cite{2020NatAs...4...72C}. The radius of a canonical $1.4\, M_{\odot}$ compact star was inferred from low-mass X-ray binaries in globular clusters to lie in the range $10\le R\le 14$~km \cite{2018MNRAS.476..421S}. The mass-radius measurements of PSR J$0030+0451$ by the NICER experiment determined $M=1.44^{+0.15}_{-0.14} M_\odot$, $R=13.02^{+1.24}_{-1.06}$ km \cite{2019ApJ...887L..24M} and $M=1.34^{+0.15}_{-0.16}M_\odot$, $R=12.71^{+1.14}_{-1.19}$ km
(with $68.3\%$ credibility) \cite{2019ApJ...887L..21R}. The first multimessenger gravitational-wave event GW170817 observed by the LIGO-Virgo collaboration (LVC)~\citep{LIGO_Virgo2017b,LIGO_Virgo2017c,LIGO_Virgo2017a} set constraints on the tidal deformabilities of the involved stars, which, through a tight correlation with the radii, predict a radius $12\le R_{1.4}\le 13$~km for a canonical-mass star $M=1.4M_{\odot}$. The LVC observation of the GW190425 event in gravitational waves determined the component masses to lie in the range $1.46-1.87 M_{\odot}$ \cite{2020ApJ...892L...3A}. Another event, GW190814, suggests a binary with a light component with a mass $2.59^{+0.08}_{-0.09} M_\odot$ \cite{2020ApJ...896L..44A}, which falls in the ``mass gap'' ($2.5 M_\odot\leq M \leq5 M_\odot$). The nature of the lighter companion is still not resolved \cite{2020MNRAS.499L..82M,2020arXiv200706057T,Bombaci2020,Fattoyev2020,Zhang2020}, but the neutron-star interpretation appears to be in tension with the formation of heavy baryons (hyperons, $\Delta$-resonances) in compact stars~\cite{2020PhRvD.102d1301S,LI2020135812,Dexheimer2020}.
Due to large densities reached in the core region of compact stars, new hadronic degrees of freedom are expected to nucleate in addition to the nucleons. One such possibility is the onset of hyperons, as initially suggested in Ref.~\cite{1960SvA.....4..187A}. This occurs in the inner core of compact stars at about $(2-3)n_0$. Even though the presence of hyperons in compact stars may seem to be unavoidable, it leads to an incompatibility of the theory with the observations of massive pulsars mentioned above, as is evidenced by many studies
which used either phenomenological \cite{1985ApJ...293..470G, Weber:1989uq, 1995PhRvC..52.3470K, 1997NuPhA.625..435B, 2008PhRvC..78e4306Z} or microscopic \cite{1995PhLB..355...21S, 1998PhRvC..58.3688B, 2000PhRvC..61b5802V, 2009PhRvC..79c4301S,2010PhRvC..81c5803D} approaches.
Specifically, hyperons lead to a softening of the EoS and imply a low value of the maximum mass of compact stars, below those observed. This problem is known as the ``hyperon puzzle''. In contrast to the studies prior to the discovery of massive pulsars,
the work during the last decade focused mainly on models which provide sufficient repulsion in the hadronic interactions, which guarantees a stiffer EoS and larger maximum masses of hypernuclear stars; these have been carried out mostly within the covariant density functional theory~\citep{Weissenborn2012a,Bonanno2012A&A,Colucci_PRC_2013,Dalen2014,Oertel2015, Chatterjee2015,Fortin_PRC_2016,Chen_PRC_2007,Drago_PRC_2014,Cai_PRC_2015, Zhu_PRC_2016,Sahoo_PRC_2018,Kolomeitsev_NPA_2017,Li_PLB_2018,Li2019ApJ, Ribes_2019,Li2020PhRvD}.
Microscopic models have also been employed~\cite{Yamamoto2016EPJA,Shahrbaf2020PhRvC}.
Another fascinating possibility for the onset of non-nucleonic degrees of freedom is the appearance of stable $\Delta$-resonances in matter. Whether $\Delta$-resonances play any role in NSs is still a matter of debate \cite{ Li_PLB_2018,2020PhLB..80235266M}. Early work \cite{1985ApJ...293..470G,1991PhRvL..67.2414G} indicated that the threshold density for the appearance of $\Delta$-resonances could be as high as $(9-10)~n_0$. More recent works \cite{ Li_PLB_2018,2014PhRvD..89d3014D,PhysRevC.90.065809,2015PhRvC..92a5802C} have shown that these non-strange baryons may in fact appear in nuclear matter at densities in the range $(1-2)~n_0$. In particular, recent work which included both hyperons and $\Delta$-resonances \cite{ Li_PLB_2018,Li2019ApJL} showed that the inclusion of $\Delta$-resonances in the NS matter composition reduces the radius of a canonical $1.4 M_{\odot}$ compact star, whereas the maximum mass implied by the EoS does not change significantly. The onset of $\Delta$-resonances also shifts the onset of hyperons to higher densities~\cite{2014PhRvD..89d3014D, Li_PLB_2018,Li2019ApJL}.
Yet another possible non-nucleonic degree of freedom at high densities is the appearance of various meson (pion, kaon, $\rho$-meson) condensates. Initially, pion condensation and its implications for neutron star physics were studied~\cite{Migdal_1972,1982ApJ...258..306H,particles2030025}. Later, the focus shifted towards the condensate of strangeness-carrying (anti)kaons ($\bar K$), initially suggested within a chiral perturbative model in Refs.~\cite{1988NuPhA.479..273K,1987PhLB..192..193N}; for further models and developments see~\cite{1994NuPhA.567..937B,1994PhLB..326...14L,1995PhRvC..52.3470K}. It was then realized that the repulsive optical potential developed by the $K^+$ mesons in nuclear matter disfavors the presence of kaons in neutron star matter. Several authors \cite{1999PhRvC..60b5803G, 2001PhRvC..63c5802B,1997PhR...280....1P,1996PhRvC..53.1416S,2001PhRvC..64e5805B,Malik:2020jlb,PhysRevD.102.123007} have studied (anti)kaon condensation in nuclear as well as hypernuclear matter. The onset of (anti)kaons in compact-star matter is very sensitive to the $K^-$ optical potential as well as to the presence of hyperons. In the latter case, it is observed that the threshold density of (anti)kaons is shifted to even higher matter densities \cite{2014PhRvC..90a5801C}.
A generic feature of the onset of the condensates is the softening of the EoS and the reduction of the maximum masses of compact stars, which could become potentially incompatible with the observations of massive pulsars. The onset of (anti)kaon condensation affects many properties of compact stars beyond the equation of state, such as superfluidity~\cite{2018MNRAS.474.3576X}, neutrino emission via direct Urca processes~\cite{2009A&A...506L..13D,2020AdAst2020E..13X}, and bulk viscosity~\cite{Chatterjee_2008}. This is a direct consequence of the changes in the single-particle spectrum of fermions, e.g., the Fermi momenta, effective masses, etc.
In the present work, we explore the possibility of (anti)kaon condensation in $\beta$-equilibrated $\Delta$-resonance admixed hypernuclear matter in the core region of compact stars within the framework of covariant density functional (CDF) model. To construct the EoS, we implement the DD-ME2 parametrization of density functional with density-dependent couplings~\cite{2005PhRvC..71b4312L}. This model has been extended previously to the $\Delta$-resonance admixed hypernuclear matter without (anti)kaon condensation~\cite{ Li_PLB_2018,Li2019ApJL}, showing that the resulting EoS is broadly compatible with the available astrophysical constraints. It has been also extended to include the effect of strong magnetic fields~\cite{particles3040043}. This work, therefore, will focus on the novel aspects that are introduced by the (anti)kaon condensation.
The paper is arranged as follows. In Sec.~\ref{sec:formalism} we briefly describe the density-dependent CDF formalism and its extension to (anti)kaons condensation in $\Delta$-resonances admixed hypernuclear matter. Sec.~\ref{sec:results} is devoted to numerical results and their discussions. The conclusions and future perspectives are given in Sec.~\ref{sec:conclusions}. We use natural units $\hbar=c=1$ throughout.
\section{Formalism} \label{sec:formalism}
\subsection{Density Dependent CDF Model}
In this work, we consider the density dependent CDF model to study the transition of matter from hadronic to (anti)kaon condensed phase in $\beta$-equilibrated $\Delta$-resonance admixed hypernuclear matter. The matter composition is considered to be of the baryon octet ($b\equiv N,\Lambda,\Sigma,\Xi$), $\Delta$-resonances ($d\equiv \Delta^{++},\Delta^+,\Delta^0,\Delta^-$), (anti)kaons ($\bar{K}\equiv K^-, \bar{K}^0$) alongside leptons ($l$) such as electrons and muons. The strong interactions between the baryons as well as the (anti)kaons are mediated by the isoscalar-scalar $\sigma$, $\sigma^*$, isoscalar-vector $\omega^\mu$, $\phi^{\mu}$ and isovector-vector $\boldsymbol{\rho}^{\mu}$ meson fields. The additional hidden strangeness mesons ($\sigma^*, \phi^\mu$) are considered to mediate the hyperon-hyperon as well as (anti)kaon-hyperon interactions. The total Lagrangian density consisting of the baryonic, leptonic and kaonic parts is given by \cite{1999PhRvC..60b5803G,2000NuPhA.674..553P,2001PhRvC..63c5802B,Li_PLB_2018}
\begin{equation} \label{eqn.1}
\begin{aligned}
\mathcal{L} & = \sum_{b} \bar{\psi}_b(i\gamma_{\mu} D^{\mu}_{(b)} - m^{*}_b) \psi_b + \sum_{d} \bar{\psi}_{d\nu}(i\gamma_{\mu} D^{\mu}_{(d)} - m^{*}_d) \psi^{\nu}_{d} \\ & + \sum_{l} \bar{\psi}_l (i\gamma_{\mu} \partial^{\mu} - m_l)\psi_l + D^{(\bar{K})*}_\mu \bar{K} D^\mu_{(\bar{K})} K - m^{*^2}_K \bar{K} K \\
& + \frac{1}{2}(\partial_{\mu}\sigma\partial^{\mu}\sigma - m_{\sigma}^2 \sigma^2) + \frac{1}{2}(\partial_{\mu}\sigma^*\partial^{\mu}\sigma^* - m_{\sigma^*}^2 \sigma^{*2}) \\
& - \frac{1}{4}\omega_{\mu\nu}\omega^{\mu\nu} + \frac{1}{2}m_{\omega}^2\omega_{\mu}\omega^{\mu} - \frac{1}{4}\boldsymbol{\rho}_{\mu\nu} \cdot \boldsymbol{\rho}^{\mu\nu} + \frac{1}{2}m_{\rho}^2\boldsymbol{\rho}_{\mu} \cdot \boldsymbol{\rho}^{\mu} \\
& - \frac{1}{4}\phi_{\mu\nu}\phi^{\mu\nu} + \frac{1}{2}m_{\phi}^2\phi_{\mu}\phi^{\mu}
\end{aligned} \end{equation}
where the fields $\psi_b$, $\psi^{\nu}_d$, $\psi_l$ correspond to the baryon octet, $\Delta$-baryon and lepton fields, respectively. $m_b$, $m_d$, $m_K$ and $m_l$ denote the bare masses of the members of the baryon octet, the $\Delta$-quartet, the (anti)kaon isospin doublet and the leptons, respectively. The covariant derivative in Eq.~\eqref{eqn.1} is
\begin{equation}\label{eqn.2}
D_{\mu (j)} = \partial_\mu + ig_{\omega j} \omega_\mu + ig_{\rho j} \boldsymbol{\tau}_j \cdot \boldsymbol{\rho}_{\mu} + ig_{\phi j} \phi_\mu
\end{equation}
with $j$ denoting the baryons ($b,d$) and (anti)kaons ($\bar{K}$). The density-dependent coupling constants are denoted by $g_{pj}$, where the index `$p$' labels the mesons.
The isospin operator for the isovector-vector meson fields is denoted by $\boldsymbol{\tau}_j$.
The gauge field strength tensors for the vector meson fields are given by
\begin{equation} \label{eqn.3}
\begin{aligned}
\omega_{\mu \nu} & = \partial_{\nu}\omega_{\mu} - \partial_{\mu}\omega_{\nu} ,\\
\boldsymbol{\rho}_{\mu \nu} & = \partial_{\nu}
\boldsymbol{\rho}_{\mu} - \partial_{\mu}\boldsymbol{\rho}_{\nu}, \\
\phi_{\mu \nu} & = \partial_{\nu}\phi_{\mu} - \partial_{\mu}\phi_{\nu} .
\end{aligned}
\end{equation}
The Dirac effective baryon and (anti)kaon masses in Eq.\eqref{eqn.1} are given by
\begin{equation} \label{eqn.4}
\begin{aligned}
m_{b}^* & = m_b - g_{\sigma b}\sigma - g_{\sigma^* b}\sigma^*,\quad
m_{d}^* = m_d - g_{\sigma d}\sigma, \\
m_{K}^* & = m_K - g_{\sigma K}\sigma - g_{\sigma^* K}\sigma^*.
\end{aligned}
\end{equation}
In the relativistic mean-field approximation, the meson fields obtain
expectation values which are given by
\begin{widetext}
\begin{equation} \label{eqn.5}
\begin{aligned}
\sigma & = \sum_{b} \frac{1}{m_{\sigma}^2} g_{\sigma b}n_{b}^s + \sum_{d} \frac{1}{m_{\sigma}^2} g_{\sigma d}n_{d}^s + \sum_{\bar{K}} \frac{1}{m_{\sigma}^2} g_{\sigma K}n_{\bar{K}}^s, \quad
\sigma^* = \sum_{b} \frac{1}{m_{\sigma^*}^2} g_{\sigma^* b}n_{b}^s + \sum_{\bar{K}} \frac{1}{m_{\sigma^*}^2} g_{\sigma^* K}n_{\bar{K}}^s,\\
\omega_{0} & = \sum_{b} \frac{1}{m_{\omega}^2} g_{\omega b}n_{b} + \sum_{d} \frac{1}{m_{\omega}^2} g_{\omega d}n_{d} - \sum_{\bar{K}} \frac{1}{m_{\omega}^2} g_{\omega K}n_{\bar{K}}, \quad
\phi_{0} = \sum_{b} \frac{1}{m_{\phi}^2} g_{\phi b}n_{b} - \sum_{\bar{K}} \frac{1}{m_{\phi}^2} g_{\phi K}n_{\bar{K}} \\
\rho_{03} & = \sum_{b} \frac{1}{m_{\rho}^2} g_{\rho b}
\boldsymbol{\tau}_{b3}n_{b} + \sum_{d} \frac{1}{m_{\rho}^2} g_{\rho d}
\boldsymbol{\tau}_{d3}n_{d}
+ \sum_{\bar{K}} \frac{1}{m_{\rho}^2} g_{\rho K} \boldsymbol{\tau}_{\bar{K}3}n_{\bar{K}}
\end{aligned}
\end{equation}
\end{widetext}
where $n^s= \langle\bar{\psi} \psi \rangle$ and $n=\langle\bar{\psi} \gamma^0 \psi \rangle$ denote the scalar and vector (number) densities respectively. The explicit form of scalar and vector density of baryons in the $T=0$ limit is
\begin{equation} \label{eqn.6}
\begin{aligned}
n^{s}_j & = \frac{2J_j + 1}{4 \pi^2} m^{*}_{j} \left[ p_{{F}_{j}} E_{F_j} - m_{j}^{*^2} \ln \left( \frac{p_{{F}_j} + E_{F_j}}{m_{j}^{*}} \right) \right], \\
n_j & = \frac{2J_j + 1}{6 \pi^2}p_{{F}_{j}}^{3}
\end{aligned}
\end{equation}
respectively with $J_j$, $p_{{F}_{j}}$ and $E_{F_j}$ being the spin, Fermi momentum and Fermi energy of the $j$-th baryon. For the case of $s$-wave (anti)kaons, the number density is given as
\begin{equation} \label{eqn.7}
\begin{aligned}
n_{K^-, \bar{K}^0} & = 2 \left( \omega_{\bar{K}} + g_{\omega K} \omega_0 + g_{\phi K} \phi_0 \pm \frac{1}{2} g_{\rho K} \rho_{03} \right) \bar{K} K \\
& = 2 m^*_K \bar{K} K.
\end{aligned}
\end{equation}
Here, $\omega_{\bar{K}}$ denotes the in-medium energies of the (anti)kaons, which are given by (taking the isospin projections as $\mp 1/2$ for $K^-, \bar{K}^0$)
\begin{equation} \label{eqn.8}
\omega_{K^{-} , \bar{K}^0} = m^*_K - g_{\omega K} \omega_0 - g_{\phi K} \phi_0 \mp \frac{1}{2} g_{\rho K} \rho_{03}.
\end{equation}
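As a minimal numerical sketch of the $T=0$ densities in Eq.~\eqref{eqn.6} (the Fermi momentum and effective mass below are illustrative inputs, not self-consistent model output):

```python
import numpy as np

# T=0 scalar and vector densities of a spin-J baryon, Eq. (6),
# in natural units (momenta and masses in fm^-1, densities in fm^-3).
def vector_density(pF, J=0.5):
    return (2 * J + 1) / (6 * np.pi**2) * pF**3

def scalar_density(pF, m_eff, J=0.5):
    EF = np.sqrt(pF**2 + m_eff**2)
    return (2 * J + 1) / (4 * np.pi**2) * m_eff * (
        pF * EF - m_eff**2 * np.log((pF + EF) / m_eff))

pF, m_eff = 1.7, 0.57 * 4.76   # m_N ~ 939 MeV ~ 4.76 fm^-1 (illustrative)
n, ns = vector_density(pF), scalar_density(pF, m_eff)
print(n, ns)   # the scalar density is always smaller than n
```

In the non-relativistic limit $m^*_j \gg p_{F_j}$ the two densities coincide, which provides a quick consistency check of the implementation.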
In the case of leptons ($l$), the number density is given by $n_l = p_{{F}_{l}}^{3}/3 \pi^2$. The chemical potential of the $j$-th baryon is
\begin{equation}\label{eqn.9}
\begin{aligned}
& \mu_{j} = \sqrt{p_{F_j}^2 + m_{j}^{*2}} + \Sigma_{B},
\end{aligned}
\end{equation}
where $\Sigma_B = \Sigma^{0} + \Sigma^{r}$ denotes the vector self-energy with
\begin{equation}\label{eqn.10}
\begin{aligned}
\Sigma^{0} & = g_{\omega j}\omega_{0} + g_{\phi j}\phi_{0} + g_{\rho j} \boldsymbol{\tau}_{j3} \rho_{03},
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}\label{eqn.rear}
\Sigma^{r} & = \sum_{b} \left[ \frac{\partial g_{\omega b}}{\partial n}\omega_{0}n_{b} - \frac{\partial g_{\sigma b}}{\partial n} \sigma n_{b}^s + \frac{\partial g_{\rho b}}{\partial n} \rho_{03} \boldsymbol{\tau}_{b3} n_{b} \right. \\
& \left. + \frac{\partial g_{\phi b}}{\partial n}\phi_{0}n_{b} \right] + \sum_{d} (\psi_b \longrightarrow \psi_{d}^{\nu}).
\end{aligned}
\end{equation}
Eq.~\eqref{eqn.rear} is the rearrangement term, which is required in density-dependent meson-baryon coupling models to maintain thermodynamic consistency \cite{2001PhRvC..64b5804H}. Here, $n=\sum_{j} n_j$ denotes the total baryon number density.
The threshold condition for the onset of the $j$-th baryon in nuclear matter is given by \cite{2001PhRvC..63c5802B}
\begin{equation} \label{eqn.11}
\mu_j = \mu_n - q_j \mu_e
\end{equation}
where $q_j$ is the charge of the $j$-th baryon and $\mu_e= \mu_n -\mu_p$ is the chemical potential of the electron, with $\mu_n$ and $\mu_p$ denoting those of the neutron and proton. With increasing density, the Fermi energy of the electrons increases, and once it reaches the muon rest mass, i.e., $\mu_e = m_{\mu}$, muons start to appear in the matter.
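A schematic check of the threshold condition, Eq.~\eqref{eqn.11}, can be written as follows; all numerical inputs are hypothetical placeholders rather than solutions of the model equations.

```python
# Schematic check of the baryon threshold condition, Eq. (11): species j
# appears once mu_n - q_j*mu_e reaches its minimal chemical potential,
# i.e. the in-medium energy of Eq. (9) at vanishing Fermi momentum.
# All inputs (in MeV) are hypothetical placeholders, not model output.
def baryon_appears(mu_n, mu_e, q_j, m_eff_j, Sigma_B_j):
    mu_j_min = m_eff_j + Sigma_B_j   # Eq. (9) with p_F = 0
    return mu_n - q_j * mu_e >= mu_j_min

# e.g. a negatively charged baryon benefits from a large mu_e:
print(baryon_appears(mu_n=1250.0, mu_e=180.0, q_j=-1,
                     m_eff_j=1100.0, Sigma_B_j=300.0))   # -> True
```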
In the case of (anti)kaons, the threshold conditions are governed by strangeness-changing processes such as $N \rightleftharpoons N + \bar{K}$ and $e^- \rightleftharpoons K^-$ \cite{1997PhR...280....1P,1996cost.book.....G} and are given by
\begin{equation} \label{eqn.12}
\begin{aligned}
\mu_n - \mu_p = \omega_{K^-} = \mu_e, \quad \omega_{\bar{K}^0} = 0.
\end{aligned}
\end{equation}
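The $K^-$ onset condition can be sketched numerically as below; the mean-field values and couplings are purely illustrative placeholders, not DD-ME2 output.

```python
# Sketch of the K^- onset condition, Eq. (12): condensation sets in once
# the in-medium energy omega_{K^-} of Eq. (8) drops to mu_e.  The mean
# fields and couplings below (MeV) are purely illustrative placeholders.
def omega_K_minus(m_eff_K, g_wK, w0, g_pK, p0, g_rK, r03):
    # isospin projection -1/2 for K^-
    return m_eff_K - g_wK * w0 - g_pK * p0 - 0.5 * g_rK * r03

w_K = omega_K_minus(m_eff_K=420.0, g_wK=3.0, w0=60.0,
                    g_pK=2.0, p0=10.0, g_rK=3.0, r03=-20.0)
mu_e = 230.0
print(w_K, w_K <= mu_e)   # onset only once omega_{K^-} <= mu_e
```

In a self-consistent calculation both $\omega_{K^-}$ and $\mu_e$ evolve with density, and the condensation threshold is the density at which the two curves cross.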
The total energy density due to the fermionic part is given by
\begin{widetext}
\begin{eqnarray} \label{eqn.13}
\begin{aligned}
\varepsilon_f & = \frac{1}{2}m_{\sigma}^2 \sigma^{2} + \frac{1}{2} m_{\omega}^2 \omega_{0}^2 + \frac{1}{2}m_{\rho}^2 \rho_{03}^2 + \sum_{j\equiv b,d} \frac{2J_j + 1}{2 \pi^2} \left[ p_{{F}_j} E^3_{F_j} - \frac{m_{j}^{*2}}{8} \left( p_{{F}_j} E_{F_j} + m_{j}^{*2} \ln \left( \frac{p_{{F}_j} + E_{F_j}}{m_{j}^{*}} \right) \right) \right] \\
& + \frac{1}{2}m_{\sigma^*}^2 \sigma^{*2} + \frac{1}{2} m_{\phi}^2 \phi_{0}^2 + \frac{1}{\pi^2}\sum_l \left[ p_{{F}_l} E^3_{F_l} - \frac{m_{l}^{2}}{8} \left( p_{{F}_l} E_{F_l} + m_{l}^{2} \ln \left( \frac{p_{{F}_l} + E_{F_l}}{m_{l}} \right) \right) \right].
\end{aligned}
\end{eqnarray}
\end{widetext}
The energy density contribution from the kaonic matter is
\begin{equation} \label{eqn.14}
\varepsilon_{\bar{K}} = m^*_K (n_{K^-} + n_{\bar{K}^0})
\end{equation}
giving the total energy density $\varepsilon = \varepsilon_{\bar{K}}+\varepsilon_f$. Since the (anti)kaons, being bosons, are in the condensed phase at $T=0$, the matter pressure is provided only by the baryons and leptons and is given by the Gibbs-Duhem relation
\begin{equation}\label{eqn.15}
p_m = \sum_{j\equiv b,d} \mu_j n_j + \sum_{l} \mu_l n_l - \varepsilon_f.
\end{equation}
The rearrangement term in Eq.~\eqref{eqn.rear} contributes explicitly to the matter pressure term only through the vector self-energy term.
Two additional constraints -- the charge neutrality and global baryon number conservation -- should be taken into account to calculate the equation of state self-consistently.
The charge neutrality condition is given by
\begin{eqnarray} \label{eqn.16}
\sum_b q_b n_b + \sum_d q_d n_d - n_{K^-} - n_e - n_\mu = 0.
\end{eqnarray}
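The charge-neutrality constraint can be expressed as a residual that a self-consistent EoS solver drives to zero; the number densities below are illustrative placeholders.

```python
# Sketch of the charge-neutrality constraint, Eq. (16), written as a
# residual that a self-consistent EoS solver drives to zero.  The number
# densities below (fm^-3) are illustrative placeholders.
def charge_residual(n_b, q_b, n_d, q_d, n_Kminus, n_e, n_mu):
    baryonic = sum(q * n for q, n in zip(q_b, n_b))
    deltas = sum(q * n for q, n in zip(q_d, n_d))
    return baryonic + deltas - n_Kminus - n_e - n_mu

res = charge_residual(n_b=[0.30, 0.05], q_b=[0, +1],   # n, p
                      n_d=[0.01], q_d=[-1],            # Delta^-
                      n_Kminus=0.005, n_e=0.025, n_mu=0.010)
print(abs(res) < 1e-12)   # -> True: this configuration is charge neutral
```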
\subsection{Coupling parameters}
In the density-dependent CDF model implemented in this work, the DD-ME2 \cite{2005PhRvC..71b4312L} coupling parametrization is employed. The density dependence of the scalar $\sigma$- and vector $\omega$-meson couplings is given by
\begin{equation}\label{eqn.17}
g_{i N}(n)= g_{i N}(n_{0}) f_i(x), \quad \quad \text{for }i=\sigma,\omega,
\end{equation}
where $x=n/n_0$, with $n$ and $n_0$ being the total baryon number density and the nuclear saturation density, respectively, and
\begin{equation}\label{eqn.18}
f_i(x)= a_i \frac{1+b_i (x+d_i)^2}{1+c_i (x+d_i)^2}.
\end{equation}
For the $\rho$-meson, the density dependence of the coupling is defined as
\begin{equation}\label{eqn.19}
g_{\rho N}(n)= g_{\rho N}(n_{0})\, e^{-a_{\rho}(x-1)}.
\end{equation}
The parameters of the meson-nucleon couplings in the DD-ME2
parametrization are given in Table \ref{tab:1}. The coefficients
associated with DD-ME2 model are fitted to reproduce nuclear
phenomenology; the details of which can be found in
Ref.~\cite{2005PhRvC..71b4312L}.
Since the nucleons do not couple to the strange mesons, $g_{\sigma^*
N}=g_{\phi N}=0$.
\begin{table} [h!]
\centering
\caption{The meson masses and parameters of the DD-ME2 parametrization used in Eq.~\eqref{eqn.17}
and \eqref{eqn.18}.}
\begin{tabular}{ccccccc}
\hline \hline
Meson ($i$) & $m_i$ (MeV) & $a_{i}$ & $b_{i}$ & $c_{i}$ & $d_{i}$ & $g_{iN}$ \\
\hline
$\sigma$ & 550.1238 & 1.3881 & 1.0943 & 1.7057 & 0.4421 & 10.5396 \\
$\omega$ & 783 & 1.3892 & 0.9240 & 1.4620 & 0.4775 & 13.0189 \\
$\rho$ & 763 & 0.5647 & & & & 7.3672 \\
\hline
\end{tabular}
\label{tab:1}
\end{table}
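The coupling functionals of Eqs.~\eqref{eqn.17}--\eqref{eqn.19} can be evaluated directly with the Table~\ref{tab:1} parameters; the sketch below shows the $\sigma$ and $\rho$ channels ($\omega$ is analogous) and exploits the normalization $f_i(1)=1$ at saturation.

```python
import numpy as np

# DD-ME2 coupling functionals, Eqs. (17)-(19), with the parameters of
# Table 1 (sigma and rho channels shown; omega is analogous).  The
# functional is normalized such that f_i(1) = 1 at saturation.
N0 = 0.152  # fm^-3, Table 2

def f(x, a, b, c, d):
    return a * (1 + b * (x + d)**2) / (1 + c * (x + d)**2)

def g_sigma_N(n):
    return 10.5396 * f(n / N0, 1.3881, 1.0943, 1.7057, 0.4421)

def g_rho_N(n):
    return 7.3672 * np.exp(-0.5647 * (n / N0 - 1))

print(g_sigma_N(N0))     # ~10.5396 (value at saturation)
print(g_rho_N(2 * N0))   # the isovector coupling weakens with density
```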
The masses of the additional hidden-strangeness mesons are taken as $m_{\sigma^*}=975$ MeV and $m_{\phi}=1019.45$ MeV. The nuclear saturation properties are provided in Table~\ref{tab:2}. The parameters $E_0$, $K_0$, $Q_0$ denote the saturation energy, incompressibility, and skewness in the isoscalar sector, while $E_{sym}$ and $L_{sym}$ denote the symmetry energy coefficient and its slope in the isovector sector, all evaluated at the saturation density.
It should be noted that the
experimentally obtained values of some of these parameters have
an uncertainty range given by $n_0 \in [0.14-0.17]$ fm$^{-3}$~\cite{2018PhRvC..97b5805M}, $-E_0\in [15-17]$ MeV~\cite{2018PhRvC..97b5805M}, $K_0\in [220-260]$ MeV~\cite{2006EPJA...30...23S,2010JPhG...37f4038P}, $E_{sym}\in [28.5-34.9]$
MeV~\cite{2013PhLB..727..276L,2017RvMP...89a5007O}. Once the parameters of the
model are fixed to particular values within the range indicated above, the EoS is obtained by straightforward extrapolation to the high-density regime. At present, the high-density properties of dense matter are constrained
by astrophysics of compact stars and modeling of heavy-ion collision experiments,
both of which carry uncertainties of their own.
\begin{table} [h!]
\centering
\caption{The nuclear properties of the density-dependent CDF model (DD-ME2) at $n_0$.}
\begin{tabular}{ccccccc}
\hline \hline
$n_0$ & $E_0$ & $K_0$ & $Q_0$ & $E_{sym}$ & $L_{sym}$ & $m^*_N/m_N$ \\
(fm$^{-3}$) & (MeV) & (MeV) & (MeV) & (MeV) & (MeV) & \\
\hline
0.152 & $-16.14$ & 250.9 & 478.9 & 32.3 & 51.3 & 0.57 \\
\hline
\end{tabular}
\label{tab:2}
\end{table}
The bare masses of the members of the baryon octet, $\Delta$-quartet and (anti)kaons considered in this work are $m_{\Lambda}=1115.68$ MeV, $m_{\Xi^0}=1314.86$ MeV, $m_{\Xi^-}=1321.71$ MeV, $m_{\Sigma^+}=1189.37$ MeV, $m_{\Sigma^0}=1192.64$ MeV, $m_{\Sigma^-}=1197.45$ MeV, $m_{\Delta}=1232$ MeV, and $m_{K}=493.69$ MeV.
For the meson-hyperon vector coupling parameters, we incorporated the SU(6) symmetry and quark counting rule \cite{2001PhRvC..64e5805B,2011PhRvC..84c5809R} as
\begin{equation}\label{eqn.20}
\begin{aligned}
\frac{1}{2}g_{\omega \Lambda} & = \frac{1}{2}g_{\omega \Sigma}=g_{\omega \Xi}= \frac{1}{3}g_{\omega N}, \\
2g_{\phi \Lambda} & =2g_{\phi \Sigma}=g_{\phi \Xi}= -\frac{2\sqrt{2}}{3}g_{\omega N}, \\
\frac{1}{2}g_{\rho \Sigma} & =g_{\rho \Xi}=g_{\rho N}, \ \ g_{\rho \Lambda}=0.
\end{aligned}
\end{equation}
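The SU(6) relations of Eq.~\eqref{eqn.20}, solved for the individual hyperon couplings and evaluated with the DD-ME2 saturation values $g_{\omega N}=13.0189$ and $g_{\rho N}=7.3672$ from Table~\ref{tab:1}, can be sketched as:

```python
import math

g_wN, g_rN = 13.0189, 7.3672  # DD-ME2 values at saturation (Table 1)

# Vector meson-hyperon couplings from the SU(6) relations of Eq. (20),
# rearranged so each hyperon coupling is expressed via the nucleon one.
vector_couplings = {
    "omega": {"Lambda": 2.0 / 3.0 * g_wN,
              "Sigma":  2.0 / 3.0 * g_wN,
              "Xi":     1.0 / 3.0 * g_wN},
    "phi":   {"Lambda": -math.sqrt(2.0) / 3.0 * g_wN,
              "Sigma":  -math.sqrt(2.0) / 3.0 * g_wN,
              "Xi":     -2.0 * math.sqrt(2.0) / 3.0 * g_wN},
    "rho":   {"Lambda": 0.0,
              "Sigma":  2.0 * g_rN,
              "Xi":     g_rN},
}
```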
The scalar meson-hyperon couplings are calculated by considering the optical potentials of $\Lambda$, $\Sigma$, $\Xi$ as $-30$ MeV, $+30$ MeV and $-14$ MeV respectively \cite{particles3040043}. Furthermore, the scalar strange meson $\sigma^*$-hyperon coupling is evaluated from the measurements on light double-$\Lambda$ nuclei and fitted to the optical potential depth $U^{\Lambda}_{\Lambda}(n_0/5)=-0.67$ MeV \cite{2018EPJA...54..133L}. The scalar meson-hyperon couplings for the other strange baryons can be obtained from the relationship,
\begin{equation}\label{eqn.21}
\frac{g_{\sigma^* Y}}{g_{\phi Y}}=\frac{g_{\sigma^* \Lambda}}{g_{\phi
\Lambda}},
\ \ Y \in \text{$\{\Xi,\Sigma\}$}.
\end{equation}
Table~\ref{tab:3} provides the numerical values of the meson-hyperon couplings at nuclear saturation density,
where $R_{\sigma Y}=g_{\sigma Y}/g_{\sigma N}$, $R_{\sigma^* Y}=g_{\sigma^* Y}/g_{\sigma N}$ denote the scaling factors for non-strange and strange scalar mesons coupling to hyperons respectively.
\begin{table}[h!]
\centering
\caption{Scalar meson-hyperon coupling constants for DD-ME2 parametrization.}
\begin{tabular}{cccc}
\hline \hline
& $\Lambda$ & $\Xi$ & $\Sigma$ \\
\hline
$R_{\sigma Y}$ & 0.6105 & 0.3024 & 0.4426 \\
$R_{\sigma^* Y}$ & 0.4777 & 0.9554 & 0.4777 \\
\hline
\end{tabular}
\label{tab:3}
\end{table}
Because experimental information on the $\Delta$-resonance is scarce, the meson-$\Delta$-baryon couplings are treated as parameters. In the subsequent discussion we consider $g_{\omega d}=1.10~g_{\omega N}$ and $g_{\rho d}= g_{\rho N}$ for the vector-meson couplings \cite{Li_PLB_2018,2019PhRvC.100a5809L}. For the scalar meson-$\Delta$-baryon couplings we use two values of the isoscalar potential, viz. $V_{\Delta}= V_{N}$ and $5/3~V_{N}$, with $V_N$ being the nucleon potential \cite{Li_PLB_2018,LI2020135812}. These values were extracted from studies of electron and pion scattering off nuclei, as well as from heavy-ion collisions involving $\Delta$-resonance production.
The numerical value of the scalar meson-$\Delta$-baryon coupling parameter is $R_{\sigma d}=1.10$ for $V_{\Delta}= V_N$ and $R_{\sigma d}=1.23$ for $V_{\Delta}= 5/3~V_N$, where $R_{\sigma d}= g_{\sigma d}/g_{\sigma N}$ denotes the non-strange scalar meson coupling to $\Delta$-resonances. Similar to the nucleons, the $\Delta$-resonances do not couple to the $\sigma^*$ and $\phi$-mesons, i.e., $g_{\sigma^* d}= g_{\phi d}=0$.
The meson-(anti)kaon couplings are fixed according to
Refs.~\cite{2013PhRvC..87d5802G,2014PhRvC..90a5801C} and are taken as
density independent. The vector meson-(anti)kaon coupling parameters
are evaluated from the isospin counting rule
\cite{2014PhRvC..90a5801C} and are given as
\begin{equation}\label{eqn.22}
g_{\omega K} = \frac{1}{3} g_{\omega N}, \quad g_{\rho K} = g_{\rho N}.
\end{equation}
For the additional hidden-strangeness mesons, the couplings are given as \cite{2001PhRvC..64e5805B}
\begin{equation}\label{eqn.23}
g_{\sigma^* K} = 2.65, \quad g_{\phi K} = 4.27.
\end{equation}
The scalar meson-(anti)kaon coupling constants are calculated by fitting to the real part of $K^-$ optical potential at nuclear saturation density. The readers may refer to Ref.~\cite{PhysRevD.102.123007} for details.
Refs.~\cite{1997NuPhA.625..287W,1999PhRvC..60b4314F, particles2030025} show that the antikaons experience an attractive potential in nuclear matter, whereas the opposite is true for kaons in nuclear matter \cite{1997NuPhA.625..372L,2000PhRvC..62f1903P}. Different model calculations \cite{1994PhLB..337....7K, 1997NuPhA.625..287W,1998PhLB..426...12L,2000NuPhA.669..153S,1999PhRvC..60b4314F} place the $K^-$ optical potential in normal nuclear matter in the range from $-40$ MeV to $-200$ MeV. We have chosen a $K^-$ optical potential range of $-150\leq U_{\bar{K}}\leq -120$ MeV in this work, and the numerical
values of $g_{\sigma K}$ for this optical potential range are provided in Table~\ref{tab:4}.
\begin{table} [h!]
\centering
\caption{Scalar $\sigma$ meson-(anti)kaon coupling parameter values in DD-ME2 parametrization at $n_0$.}
\begin{tabular}{ccccc}
\hline \hline
$U_{\bar{K}}$ (MeV) & $-120$ & $-130$ & $-140$ & $-150$ \\
\hline
$g_{\sigma K}$ & 0.4311 & 0.6932 & 0.9553 & 1.2175 \\
\hline
\end{tabular}
\label{tab:4}
\end{table}
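The tabulated $g_{\sigma K}$ values of Table~\ref{tab:4} are, to good accuracy, linear in the optical potential depth, so intermediate potentials can be estimated by interpolation. The sketch below is purely illustrative and is not the actual fitting procedure, which solves the in-medium dispersion relation at $n_0$ \cite{PhysRevD.102.123007}.

```python
# Piecewise-linear interpolation of the Table 4 values (illustrative).
U_K = [-120.0, -130.0, -140.0, -150.0]   # K^- optical potential (MeV)
g_sK = [0.4311, 0.6932, 0.9553, 1.2175]  # fitted sigma-kaon coupling

def g_sigma_K(u):
    """Estimate g_sigmaK for a potential depth u (MeV) within the table."""
    for (u0, g0), (u1, g1) in zip(zip(U_K, g_sK), zip(U_K[1:], g_sK[1:])):
        if u1 <= u <= u0:
            return g0 + (g1 - g0) * (u - u0) / (u1 - u0)
    raise ValueError("u outside tabulated range")
```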
\section{Results} \label{sec:results}
In this section we report our numerical results for matter compositions with (anti)kaons and (a) nucleons + hyperons (NY), (b) nucleons + hyperons + $\Delta$-resonances (NY$\Delta$) for varying values of the (anti)kaon optical potential. The case of pure nuclear matter with (anti)kaons was already considered in Ref.~\cite{PhysRevD.102.123007}, to which the reader is referred. Our calculations show that the phase transition to the (anti)kaon condensed phase is of second order for both the NY and NY$\Delta$ compositions. In all calculations the $K^-$-meson appears before the onset of $\bar{K}^0$. Table~\ref{tab:6} provides the threshold densities of (anti)kaons for different values of the $\Delta$-baryon and $U_{\bar{K}}$ potentials for the two matter compositions. \begin{figure} [h!]
\begin{center}
\includegraphics[width=8.5cm,keepaspectratio]{Figures/chem_DDME2.pdf}
\caption{The effective energy of (anti)kaons as a function of baryon number density in NY$\Delta$ matter for $\Delta$-potential values $V_{\Delta}=V_N$ (top panels) and $5/3~V_N$ (bottom panels). Left and right panels
show the energies of $K^-$ and $\bar{K}^0$ respectively. The chemical potential of electron for the
same matter composition is depicted by the dashed curve. The solid, dash-dotted, dotted lines represent the $U_{\bar{K}}$ values of $-130,-140,-150$ MeV respectively.}
\label{fig-4}
\end{center}
\end{figure}
\begin{figure*} [t!]
\begin{center}
\includegraphics[width=14.5cm,keepaspectratio ]{Figures/EoS_new.pdf}
\caption{Pressure as a function of energy density (EoS) for zero-temperature, charge-neutral NY matter (solid lines), NY$\Delta$ matter
with $\Delta$-potential $V_\Delta=V_N$ (dashed lines) and $V_\Delta=5/3~V_N$ (dash-dotted lines). The three panels correspond to different values of (anti)kaon potential: $U_{\bar{K}}=0$ (left panel), $U_{\bar{K}}=-140$ MeV
(middle panel), and $U_{\bar{K}}=-150$ MeV (right panel). }
\label{fig-1}
\end{center}
\end{figure*}
It is observed that (anti)kaons do not appear at all for $U_{\bar{K}}=-120$ MeV in any of the matter compositions. For $U_{\bar{K}}=-130$ MeV, (anti)kaons appear only with $V_{\Delta}=5/3~V_N$. This happens because the higher $\Delta$-potential shifts the onset of hyperons to higher densities, making way for the (anti)kaons. In all the cases considered, the inclusion of $\Delta$-resonances in the composition of matter shifts the threshold densities for the onset of (anti)kaons to higher densities.
\begin{table} [h!]
\centering
\caption{Threshold densities, $n_{u}$ for (anti)kaon condensation in NY and NY$\Delta$
matter for different values of $\Delta$-potentials and $K^-$ optical potential depths $U_{\bar{K}}(n_0)$.}
\begin{tabular}{c|cc|cc|cc}
\hline \hline
Config. & \multicolumn{2}{c}{NY$\bar{K}$} & \multicolumn{4}{|c}{NY$\Delta \bar{K}$} \\
\cline{1-7}
& & & \multicolumn{2}{c|}{$V_{\Delta}=V_N$} & \multicolumn{2}{c}{$V_{\Delta}= 5/3~V_N$} \\
\cline{4-7}
$U_{\bar{K}}$ & $n_{u}$($K^-$) & $n_{u}$($\bar{K}^0$) & $n_{u}$($K^-$) & $n_{u}$($\bar{K}^0$) & $n_{u}$($K^-$) & $n_{u}$($\bar{K}^0$) \\
(MeV) & ($n_0$) & ($n_0$) & ($n_0$) & ($n_0$) & ($n_0$) & ($n_0$) \\
\hline
$-120$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\
$-130$ & $-$ & $-$ & $-$ & $-$ & 5.86 & 6.79 \\
$-140$ & 3.97 & 6.95 & 4.26 & 6.92 & 4.37 & 5.05 \\
$-150$ & 3.06 & 5.59 & 3.33 & 5.39 & 3.90 & 4.37 \\
\hline
\end{tabular}
\label{tab:6}
\end{table}
Figure~\ref{fig-4} shows the in-medium (effective) energies of $\bar{K}$ mesons as a function of baryon (vector) number density in NY$\Delta$ matter described by the DD-ME2 CDF. The condensation of $K^-$ mesons in compact-star matter sets in when their effective energy crosses the electron chemical potential, which marks the threshold density. In the case of $\bar{K}^0$ mesons, the condensate appears when their in-medium energy reaches zero. With increasing magnitude of $U_{\bar{K}}$, the density threshold for the onset of the (anti)kaons is shifted to lower densities.
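The numerical detection of the $K^-$ threshold just described amounts to locating the first grid point at which the in-medium energy drops to (or below) the electron chemical potential. A minimal sketch, where the input arrays are hypothetical placeholders for tabulated CDF output:

```python
def kaon_threshold(n_grid, omega_K, mu_e):
    """Return the first density at which omega_K(n) <= mu_e(n),
    i.e. the K^- condensation threshold, or None if no crossing."""
    for n, w, mu in zip(n_grid, omega_K, mu_e):
        if w <= mu:
            return n
    return None  # no condensation in the tabulated range
```

For the $\bar{K}^0$ meson the same routine would be used with $\mu_e$ replaced by an array of zeros, since its condensate appears where the in-medium energy reaches zero.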
The EoSs of NY and NY$\Delta$ matter compositions in the absence as well as in the presence of (anti)kaon degrees of freedom are shown in Fig.~\ref{fig-1}. In the case with no (anti)kaons in the matter, the EoS of NY$\Delta$ matter is stiffer than that of NY matter in the high-density regime, and the opposite is true in the low-density regime.
This is consistent with the results of Ref.~\cite{Li_PLB_2018} found within the same DD-ME2 parametrization.
The middle and right panels of Fig.~\ref{fig-1} include (anti)kaons with potential values $U_{\bar{K}}=-140, -150$ MeV respectively. It is seen that the onset of (anti)kaon condensation softens the EoS, which is marked by a change in the slope of EoSs beyond the condensation threshold. Furthermore, the softening is more pronounced in the case of NY$\Delta$ composition, which reverses the high-density behavior seen in the left panel: the EoS
with NY$\Delta$ composition is now the softest among all considered cases. It is further seen that the
higher the value of $U_{\bar{K}}$ the more pronounced is the softening of the EoSs.
The mass-radius ($M$-$R$) relations corresponding to the EoSs in Fig.~\ref{fig-1} were obtained by solving the Tolman-Oppenheimer-Volkoff (TOV) equations for static non-rotating spherical stars \cite{1996cost.book.....G} and are shown in Fig.~\ref{fig-3}. For the crust region, the BPS EoS~\cite{1971ApJ...170..299B} is used.
The inclusion of additional exotic degrees of freedom reduces the maximum mass of NSs in comparison to nucleonic matter from $2.5~M_{\odot}$ to $\sim 2~M_{\odot}$. The compactness is also observed to be enhanced due to the appearance of $\Delta^-$-resonance at lower densities.
The parameter values of the maximum mass stars are provided in a tabulated form in Table~\ref{tab:7}. From Tables~\ref{tab:6} and \ref{tab:7} it can be inferred that $K^-$ meson appears in all the EoS models with $U_{\bar{K}}=-140, -150$ MeV. But $\bar{K}^0$ meson does not appear in the hypernuclear star with $U_{\bar{K}}=-140$ MeV and $\Delta$-baryon admixed hypernuclear star with $V_{\Delta}=V_N$ and $U_{\bar{K}}=-140$ MeV. Consistent with the (anti)kaon softening of the EoS seen in Fig.~\ref{fig-1} the maximum masses of the stars with NY$\Delta$ composition and (anti)kaon condensation lie below those without $\Delta$ resonances, which is the reverse of what is observed when (anti)kaon condensation is absent.
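The TOV integration underlying Fig.~\ref{fig-3} and Table~\ref{tab:7} can be sketched as follows, with a toy polytropic EoS standing in for the tabulated DD-ME2 EoS and the crust omitted; the units are geometric ($G=c=1$, lengths in km), so this illustrates only the procedure, not the numbers quoted above.

```python
import math

def tov_mass_radius(eps_c, K=100.0, Gamma=2.0, dr=1e-4):
    """Euler integration of the TOV equations outward from the center,
    with a toy polytrope P = K * eps**Gamma (G = c = 1, lengths in km)."""
    r, m = dr, 0.0
    P = K * eps_c ** Gamma           # central pressure from the toy EoS
    while P > 1e-12:
        eps = (P / K) ** (1.0 / Gamma)                    # invert the EoS
        dP = -(eps + P) * (m + 4.0 * math.pi * r ** 3 * P) \
             / (r * (r - 2.0 * m)) * dr                   # TOV equation
        m += 4.0 * math.pi * r ** 2 * eps * dr            # enclosed mass
        P += dP
        r += dr
        if P <= 0.0:                 # surface reached (pressure vanishes)
            break
    return r, m                      # radius (km) and mass (geometric units)
```

A production calculation would use a higher-order integrator, the tabulated EoS matched to the BPS crust, and a central-density scan to locate the maximum mass.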
\begin{figure*} [t!]
\begin{center}
\includegraphics[width=14.5cm,keepaspectratio ]{Figures/MR_new.pdf}
\caption{The mass-radius relationships for EoS shown in Fig.~\ref{fig-1} for
NY matter (solid lines), NY$\Delta$ matter
with $\Delta$-potential $V_\Delta=V_N$ (dashed lines) and $V_\Delta=5/3~V_N$ (dash-dotted lines). The three panels correspond to different values of (anti)kaon potential: $U_{\bar{K}}=0$, i.e., no (anti)kaon condensation,
(left panel), $U_{\bar{K}}=-140$
(middle panel), and $U_{\bar{K}}=-150$ MeV (right panel).
The astrophysical constraints from GW190425 \cite{2020ApJ...892L...3A}, GW190814 \cite{2020ApJ...896L..44A}, MSP J$0740+6620$ \cite{2020NatAs...4...72C}, PSR J$0030+0451$ \cite{2019ApJ...887L..24M,2019ApJ...887L..21R}, low-mass X-ray binaries \cite{2018MNRAS.476..421S} are shown by dot-double dashed, dotted, long-, short-dashed boxes and horizontal solid line respectively.}
\label{fig-3}
\end{center}
\end{figure*}
\begin{table*} [t!]
\centering
\caption{Properties of maximum mass stars for various compositions and values of the (anti)kaon potential $U_{\bar{K}}(n_0)$. For each composition/potential value the entries include: the maximum mass (in units of $M_{\odot}$), the radius (in units of km), and the central number density (in units of $n_0$).}
\begin{tabular}{c|ccc|ccc|ccc}
\hline \hline
Configuration & \multicolumn{3}{c|}{NY$\bar{K}$} & \multicolumn{6}{|c}{NY$\Delta \bar{K}$} \\
\cline{1-10}
& & & & \multicolumn{3}{c|}{$V_{\Delta}=V_N$} & \multicolumn{3}{c}{$V_{\Delta}= 5/3~V_N$} \\
\cline{5-10}
$U_{\bar{K}}$ (MeV) & $M_{max}$($M_{\odot}$) & $R$(km) & $n_{c}$($n_0$) & $M_{max}$($M_{\odot}$) & $R$(km) & $n_{c}$($n_0$) & $M_{max}$($M_{\odot}$) & $R$(km) & $n_{c}$($n_0$) \\
\hline
$0$ & $2.008$ & $11.651$ & $6.107$ & $2.021$ & $11.565$ & $6.160$ & $2.049$ & $11.226$ & $6.349$ \\
$-140$ & $2.005$ & $11.652$ & $6.096$ & $2.019$ & $11.566$ & $6.151$ & $2.032$ & $11.343$ & $ 6.214$ \\
$-150$ & $1.994$ & $11.664$ & $6.13$ & $2.006$ & $11.61$ & $6.143$ & $1.973$ & $11.448$ & $6.028$ \\
\hline
\end{tabular}
\label{tab:7}
\end{table*}
From the analysis above, we conclude that compact stars containing (anti)kaons are consistent with the astrophysical constraints set by the observations of massive pulsars, the NICER measurements of parameters of PSR J$0030+0451$, the low-mass X-ray binaries in a globular cluster, and the gravitational wave event GW190425, see Sec.~\ref{sec:intro}. Although we do not provide here the deformabilities of our models, from the values of the radii obtained it is clear that our models are also consistent with the GW170817 event. Finally, our models are inconsistent with the interpretation of the light companion of the GW190814 binary as a compact star. Including the rotation even at its maximal mass-shedding limit will not be sufficient to produce a $\sim 2.5M_{\odot}$ mass compact star, see Refs.~\cite{2020PhRvD.102d1301S,LI2020135812}.
\begin{figure} [h!]
\begin{center}
\includegraphics[width=8.5cm,keepaspectratio ]{Figures/fraction_NYK.pdf}
\caption{Particle abundances $n_i$ (in units of $n_0$) as a function of normalized baryon number density in NY matter for $U_{\bar{K}}=-140$ MeV (top panel) and $-150$ MeV (bottom panel). }
\label{fig-5}
\end{center}
\end{figure}
Figure~\ref{fig-5} shows the particle composition in NY matter with (anti)kaons as a function of baryon number density for $U_{\bar{K}}=-140,-150$ MeV. At low densities, before the onset of strange particles, charge neutrality is maintained among the protons, electrons and muons. At somewhat higher density ($\ge 2n_0$) $\Lambda$ and $\Xi^{-}$ appear in the matter (because of the repulsive nature of the $\Sigma$-potential in dense nuclear matter, $\Sigma$-baryons do not appear in the composition). Finally, the (anti)kaons and $\Xi^{0}$ appear
in the high-density regime ($\ge 4n_0$). Comparing the upper and lower panels of the figure, we observe that a higher $U_{\bar{K}}$ value implies a lower density threshold for the onset of (anti)kaons, as expected.
The onset of (anti)kaons also affects the population of leptons; $K^-$ mesons efficiently replace electrons and muons once they appear, thus contributing to the extinction of leptons, which occurs at lower densities for higher values of $U_{\bar{K}}$. In the case of $U_{\bar{K}}=-150$ MeV, the $\Xi^-$ fraction is strongly affected by the appearance of $K^-$ mesons. This is expected, as $K^-$, being bosons, are energetically more favorable for maintaining charge neutrality than the fermionic $\Xi^-$. The composition for $U_{\bar{K}}=-140$ MeV does contain $\bar{K}^0$ mesons ($n_u \sim 6.95~n_0$), whereas for $U_{\bar{K}}=-150$ MeV $\bar{K}^0$ appears at the onset density $n_u \sim 5.59~n_0$, which leads to an additional softening of the EoS.
\begin{figure} [h!]
\begin{center}
\includegraphics[width=8.5cm,keepaspectratio ]{Figures/fraction_kaons_140.pdf}
\caption{Same as Fig.~\ref{fig-5} but for NY$\Delta$ matter
for $V_{\Delta} = V_N$ (top panel) and $V_{\Delta} = 5/3~V_N$
(bottom panel) and fixed value of $U_{\bar{K}}=-140$ MeV.}
\label{fig-6}
\end{center}
\end{figure}
Figure~\ref{fig-6}, which is analogous to Fig.~\ref{fig-5}, shows the particle population in NY$\Delta$-matter as a function of baryon number density for $U_{\bar{K}}=-140$ MeV. It is observed that for $V_{\Delta}=V_N$ only the $\Delta^-$ resonance appears, whereas for $V_{\Delta}=5/3~V_N$ the onset of the entire quartet of $\Delta$-resonances is possible. It is seen that, in general, the $\Delta$-resonances effectively shift the threshold densities of hyperons to higher densities, thus diminishing their role. This concerns both the neutral $\Lambda$ as well as the $\Xi^-$-hyperon. This shift is stronger for larger values of $V_{\Delta}$. Resonances also suppress the lepton fraction by lowering the density at which the leptons disappear in NY$\Delta$-matter, this effect being magnified for larger values of $V_{\Delta}$. In the high-density regime the negative charge is provided by the $\Delta^-$--$\Xi^-$--$K^-$ mixture, and it is seen that the rapid increase of the $K^-$ population suppresses the $\Delta^-$ and $\Xi^-$ abundances for $V_{\Delta}=5/3~V_N$, as kaons are energetically more favorable than the heavy baryons. Note also that the onset of the $\bar{K}^0$ meson abruptly decreases the abundance of $\Xi^-$, as seen in the lower panel (in the upper panel, i.e. for $U_{\bar{K}}=-140$ MeV and $V_{\Delta}=V_N$, the $\bar{K}^0$ mesons do not appear). There are some qualitative differences between the two cases $V_{\Delta}=V_N$ and $5/3~V_N$: (a) the $\Delta^-$ baryon disappears at higher matter densities for $V_{\Delta}=V_N$, but its abundance is almost constant for $V_{\Delta}=5/3~V_N$; (b) the $\Lambda$ hyperon dominates over the neutron fraction beyond $\sim 5.5~n_0$ in the case of $V_{\Delta}=V_N$, compared to $\sim 4.5~n_0$ in the case of $V_{\Delta}=5/3~V_N$.
\begin{figure} [h!]
\begin{center}
\includegraphics[width=8.5cm,keepaspectratio ]{Figures/fraction_kaons_150.pdf}
\caption{
Same as Fig.~\ref{fig-6} but for a larger (absolute) value of potential $U_{\bar{K}}=-150$ MeV.}
\label{fig-7}
\end{center}
\end{figure}
Figure~\ref{fig-7} shows the same as in Fig.~\ref{fig-6} but for $U_{\bar{K}}=-150$ MeV.
The particle fractions show identical trends as in Fig.~\ref{fig-6} until the appearance of (anti)kaons.
The larger potential favors earlier onset of (anti)kaons in matter; for example, the $K^-$ sets in before
the $\Xi^-$ and it is now the dominant negatively charged component shortly after the density
increases beyond the onset value.
The effect of the onset of $\bar{K}^0$ on the $\Xi^-$ and $\Delta^-$, which is mediated via changes in the abundance of $K^-$, is again clearly seen. As before, for the large value $V_{\Delta}=5/3~V_N$, all members of the quartet of $\Delta$-resonances are present in the matter composition. Another notable fact is the complete extinction of the $\Xi^{-,0}$ baryons, which is consistent with the trends seen in Figs.~\ref{fig-5} and \ref{fig-6}. Interestingly, in the case $V_{\Delta}=5/3~V_N$ the (anti)kaon abundances are the largest among all particles in the high-density regime, which also leads to the softening of the EoS observed above.
\begin{figure} [h!]
\begin{center}
\includegraphics[width=8.5cm,keepaspectratio ]{Figures/effective_kaon_mass.pdf}
\caption{Effective (anti)kaon mass (in units of its bare mass, $m_{\bar{K}}$) as a function of baryon number
density for NY and NY$\Delta$ matter compositions and two values of (anti)kaon potential depth. }
\label{fig-8}
\end{center}
\end{figure}
Figure~\ref{fig-8} shows the (anti)kaon effective mass as a function of normalized baryon number density for various strengths of $U_{\bar{K}}$ and different matter compositions. The effective mass of (anti)kaons decreases more steeply for higher strengths of $U_{\bar{K}}$. It is observed that in the low-density regime the (anti)kaon effective mass decreases relatively quickly in $\Delta$-resonance admixed matter compared to purely hyperonic matter. The reason is the larger scalar potential arising from the onset of additional non-strange baryons at lower densities. At higher densities, the (anti)kaon effective mass is larger in the former case than in the latter. This may be attributed to the delayed onset of hyperons caused by the appearance of the $\Delta$-resonances.
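In this class of CDF models the (anti)kaon effective mass is typically reduced linearly by the scalar mean fields, $m^*_K = m_K - g_{\sigma K}\,\sigma - g_{\sigma^* K}\,\sigma^*$. A sketch assuming this standard form, where the field values passed in are hypothetical placeholders (a real calculation takes them from the self-consistent CDF solution):

```python
m_K = 493.69  # bare kaon mass (MeV)

def kaon_effective_mass(sigma, sigma_star, g_sigma_K, g_sigma_star_K=2.65):
    """(Anti)kaon effective mass (MeV) from the scalar mean fields sigma
    and sigma* (MeV); g_sigma_star_K = 2.65 is the value of Eq. (23)."""
    return m_K - g_sigma_K * sigma - g_sigma_star_K * sigma_star
```

Since the scalar fields grow with density, this form directly reproduces the monotonic decrease of $m^*_K/m_{\bar K}$ seen in Fig.~\ref{fig-8}.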
\begin{figure} [h!]
\begin{center}
\includegraphics[width=8.5cm,keepaspectratio ]{Figures/EoS_mesons.pdf}
\caption{The EoS of NY matter (left panel) and NY$\Delta$ matter (right panel) with (anti)kaon potential
$U_{\bar{K}}=-120$ MeV including $\sigma^*$ meson (dashed lines) and without it (solid lines).
The $\Delta$-potential value is fixed at $5/3V_{N}$.
}
\label{fig-9}
\end{center}
\end{figure}
The matter pressure as a function of energy density for different matter compositions with and without the $\sigma^*$ meson mediating hyperon-hyperon interactions is shown in Fig.~\ref{fig-9}. Being a scalar, the $\sigma^*$ meson softens the EoS, as is evident from the figure. It is observed that incorporating the $\sigma^*$ meson rules out the possibility of an (anti)kaon phase transition with $U_{\bar{K}}=-120$ MeV. This is because this scalar meson further reduces the effective mass of (anti)kaons, halting their onset in the matter. The phase transition from the purely hadronic to the (anti)kaon condensed phase is of second order.
\begin{figure} [h!]
\begin{center}
\includegraphics[width=8.5cm,keepaspectratio ]{Figures/MR_mesons.pdf}
\caption{The $M$-$R$ relations corresponding to the EoSs in Fig.~\ref{fig-9} are shown for NY matter (left panel) and NY$\Delta$ matter (right panel) with (anti)kaon potential $U_{\bar{K}}=-120$ MeV including $\sigma^*$ meson (dashed lines) and without it (solid lines). The $\Delta$-potential value is fixed at $5/3V_{N}$. The astrophysical observables (constraints) are similar as in Fig.~\ref{fig-3}.}
\label{fig-10}
\end{center}
\end{figure}
The results of mass-radius ($M$-$R$) relationship obtained by solving the TOV equations for non-rotating spherical stars corresponding to the EoSs in Fig.~\ref{fig-9} are presented in Fig.~\ref{fig-10}. It is observed that
in both NY and NY$\Delta$ matter the inclusion of the $\sigma^*$ meson leads to a lower maximum mass. It is also seen that the addition of $\Delta$'s reduces the radii of the stars and mildly increases the maximum mass, which is consistent with the findings without (anti)kaon condensation.
Table~\ref{tab:8} provides the stellar maximum masses, radii and corresponding central densities evaluated from the EoSs in Fig.~\ref{fig-9} with $U_{\bar{K}}=-120$ MeV.
\begin{table} [h!] \centering \caption{Properties of maximum mass stars for various compositions, $U_{\bar{K}}=-120$ MeV, $V_{\Delta}= 5/3~V_N$ in the cases with and without the $\sigma^*$ meson. In both cases we list the maximum mass (in units of $M_{\odot}$), the radius (in units of km), and the central number density (in units of $n_0$).} \begin{tabular}{c|ccc|ccc}
\hline \hline
Config. & \multicolumn{3}{c}{NY$\bar{K}$} & \multicolumn{3}{|c}{NY$\Delta \bar{K}$ ($V_{\Delta}= 5/3~V_N$)} \\
\cline{1-7}
& $M_{max}$ & $R$ & $n_{c}$ & $M_{max}$ & $R$ & $n_{c}$ \\
& ($M_{\odot}$) & (km) & ($n_0$) & ($M_{\odot}$) & (km) & ($n_0$) \\
\hline
$\sigma \omega \rho \phi$ & $2.124$ & $11.673$ & $5.973$ & $2.137$ & $11.023$ & $6.538$ \\
$\sigma \omega \rho \sigma^* \phi$ & $2.008$ & $11.651$ & $6.107$ & $2.049$ & $11.226$ & $6.349$ \\
\hline
\end{tabular}
\label{tab:8}
\end{table}
\begin{figure} [h!]
\begin{center}
\includegraphics[width=8.5cm,keepaspectratio ]{Figures/fraction_mesons_NYK.pdf}
\caption{
Particle abundances $n_i$ (in units of $n_0$) as a function of normalized baryon number density in NY matter for $U_{\bar{K}}= -120$ MeV in the case of $\sigma \omega \rho \phi$ exchange (top panel) and
$\sigma \omega \rho \sigma^* \phi$ (bottom panel). (Anti)kaons are absent in the second case.
}
\label{fig-11}
\end{center}
\end{figure}
Figure~\ref{fig-11} shows the particle abundances in hypernuclear matter with $U_{\bar{K}}=-120$ MeV, with and without the $\sigma^*$ meson. The main qualitative difference is that $K^-$ appears for $n \ge 5.4~n_0$ in the first case, while it does not appear up to $n\sim 7~n_0$ in the second case. Consequently, charge neutrality is maintained between $e$, $\Xi^-$, $K^-$ and protons in the first case, and only between $e$, $\Xi^-$ and protons in the second case. Given the more than one order of magnitude smaller abundance of electrons, the abundances of $\Xi^-$ and protons almost coincide in the second case. Another feature seen in Fig.~\ref{fig-11} is that the electron and muon populations disappear faster with increasing density when the $\sigma^*$ meson is included.
\begin{figure} [h!]
\begin{center}
\includegraphics[width=8.5cm,keepaspectratio ]{Figures/fraction_mesons_NY5_3DK.pdf}
\caption{Same as Fig.~\ref{fig-11} but for NY$\Delta$ matter with $V_{\Delta}=5/3~V_N$.}
\label{fig-12}
\end{center}
\end{figure}
Figure~\ref{fig-12}, which is similar to Fig.~\ref{fig-11}, shows the particle composition in NY$\Delta$ matter for $U_{\bar{K}}=-120$ MeV. In this case too, (anti)kaons appear only in the EoS from which the $\sigma^*$ meson is excluded. The main difference between the two cases is that the $\sigma^*$-driven interactions favor a lower threshold density of $\Xi^0$ and a larger fraction thereof, which effectively leads to the exclusion of (anti)kaons in the density range considered. Unlike the purely hypernuclear case, here the lepton fractions are unaffected by the inclusion or exclusion of the $\sigma^*$ meson, because the negative charge is supplied by the $\Delta^-$-resonance.
\section{Summary and Conclusions}
\label{sec:conclusions}
In this work, we discussed the second-order phase transition to Bose-Einstein condensation of (anti)kaons in hypernuclear matter with and without an admixture of $\Delta$-resonances within the framework of density-dependent CDF theory. The resulting EoS, matter composition, and the structure of the associated static, spherically symmetric star models were presented. The strong interactions, viz. baryon-baryon and (anti)kaon-baryon, are handled on the same footing. The mediators considered in this work are the $\sigma$, $\omega$, $\rho$ mesons for the non-strange baryons and two mesons mediating interactions among strange particles, $\sigma^*$ and $\phi$. The $K^-$ optical potential at nuclear saturation density is considered in a range ($-150\leq U_{\bar{K}} \leq -120$ MeV) which fulfills the observational compact star maximum mass constraint ($\sim 2M_{\odot}$).
We find that, within our parametrization, the (anti)kaon condensates cannot appear in hypernuclear matter if $U_{\bar{K}} \geq -130$ MeV. $\bar{K}^0$ condensation is absent in maximum mass compact stars with $U_{\bar{K}}=-140$ MeV. The inclusion of hyperons in the matter composition shifts the onset of (anti)kaons to higher densities in comparison to the case without hyperons, i.e. purely nuclear matter, cf. Ref.~\cite{PhysRevD.102.123007}. For higher $U_{\bar{K}}$ values, the appearance of both (anti)kaons becomes possible
in the maximum mass models. The $K^-$ meson fraction is seen to dominate over the $\Xi^-$ baryon for high $U_{\bar{K}}$ strengths. This can be attributed to the fact that
$K^-$ mesons, being bosons, are favored over the fermionic $\Xi^-$ particles.
Next, in the case of $\Delta$-baryon admixed hypernuclear matter, the onset of (anti)kaons is shifted to even higher densities compared to purely hyperonic matter. (Anti)kaon condensation is absent for $U_{\bar{K}} = -120$ MeV. The condensed phase appears in matter with $U_{\bar{K}}=-130$ MeV only for $V_{\Delta}=5/3~V_N$. However, $\bar{K}^0$ condensation is absent for this particular $U_{\bar{K}}$ strength. Larger values of the $\Delta$-potential $V_{\Delta}$ imply that the entire quartet of $\Delta$-resonances is present in matter. It is also observed that for a particular matter composition ($U_{\bar{K}}=-150$ MeV, $V_{\Delta}=V_N$), the onset of $K^-$ occurs even before that of the $\Xi^-$ particles. Moreover, for higher strengths of $U_{\bar{K}}$ and $V_{\Delta}$, the $\Delta$-baryons and (anti)kaons take over from the $\Xi^{-,0}$ particles, leading to their complete suppression in the matter. Lepton populations are suppressed more quickly with increasing density for higher strengths of $V_{\Delta}$. We find that the effective mass of (anti)kaons depends weakly on the composition of matter and decreases almost linearly in the relevant density range $2\le n/n_0\le 6$, which reflects the density dependence of the scalar potential.
The influence of the strange scalar interaction-mediating meson $\sigma^*$ on the composition and EoS is twofold: firstly, including the $\sigma^*$ meson softens the EoS significantly, leading to lower maximum masses of compact stars. Secondly, excluding the $\sigma^*$ meson allows the (anti)kaon $K^-$ to appear for the weakly attractive potential strength $U_{\bar{K}}\sim -120$ MeV in both hyperonic and $\Delta$-admixed hypernuclear matter.
As indicated in the discussion (Sec.~\ref{sec:results}) the present model with a suitable choice of parameters characterizing the (anti)kaon condensate is consistent with the currently available
astrophysical constraints listed in Sec.~\ref{sec:intro}. The present model can, therefore, be used to model
physical processes in $\Delta$-admixed hypernuclear stars featuring (anti)kaon condensates.
Examples include cooling processes, bulk viscosity, and thermal conductivity, to name a few.
\begin{acknowledgements}
The authors thank the anonymous referee for the constructive comments, which helped to notably enhance the quality of the manuscript.
VBT and MS acknowledge the financial support from the Science and Engineering Research Board, Department of Science and Technology, Government of India through Project No. EMR/2016/006577 and Ministry of Education, Government of India. They are also thankful to Sarmistha Banik and Debades Bandyopadhyay for vital and fruitful discussions. M.S. also thanks Alexander von Humboldt Foundation for the support of a visit to Goethe University, Frankfurt am Main. J. J. Li acknowledges the A. von Humboldt Foundation for support in the initial stages of this
work. A. S. acknowledges the support by the Deutsche Forschungsgemeinschaft (Grant No. SE 1836/5-1)
and the European COST Action CA16214 PHAROS ``The multi-messenger physics and astrophysics of
neutron stars''.
\end{acknowledgements}
\section{Introduction}
Radio-frequency capacitively coupled plasmas (RF-CCPs) operated at low-pressures are a core part of modern technology \cite{LiebermanBook, ChabertBook, MakabeBook}.
Especially for semiconductor fabrication, plasma processes like ion-assisted etching \cite{HandbookEtching, etching1} and ion implantation \cite{implantation1, implantation2, implantation3} are key technologies.
Plasma tools help to achieve an integration depth of only a few nanometers \cite{dev-scale1, dev-scale2, dev-scale3}.
One major challenge of these processes is to control the energy and flux of impinging ions on the wafer separately \cite{LiebermanBook, ChabertBook, MakabeBook, IEDFcontrol1, IEDFcontrol2, IEDFcontrol3, IEDFcontrol4, IEDFcontrol5}.\par
Techniques using multiple driving frequencies, such as voltage waveform tailoring \cite{IEDFcontrol1}, succeed in independently controlling the plasma generation and the ion bombardment energy \cite{MF1, MF2}.
The plasmas investigated in these studies are predominantly argon plasmas \cite{IEDFcontrol2, IEDFcontrol5, VWT1}.
However, industrially relevant etching plasmas consist of rather complex gas mixtures like CF\textsubscript{4}/H\textsubscript{2} \cite{chemistry1, chemistry2, chemistry3} or SF\textsubscript{6}/O\textsubscript{2} \cite{chemistry4, chemistry5}.
For these plasmas, the interplay of several charged and neutral heavy species impacts the ion dynamics.
The ion dynamics in the plasma eventually determine how ions reach the walls.
Here, both the quantitative (e.g., how many ions reach the target/substrate?) and the qualitative perspective (e.g., how are the ions affected by collisions?) need to be considered.\par
Researching complex plasma chemistry in RF-CCPs is a tedious task.
Experimental studies show that the ion energy distribution functions (IEDFs) at the electrodes become rather complicated \cite{IEDF1, IEDF2, IEDF3, Kawamura, IEDF5}.
Commonly used tools such as the retarding field analyzer filter the incident ions by energy and do not differentiate between the ion species \cite{IEDF1, IEDF5, IEDF6}.
There is recent and ongoing work to utilize ion mass spectrometry to overcome these issues \cite{IEDF7}.
Nevertheless, this technique is currently not widely applied as a diagnostic tool to analyze plasmas.
Therefore, theoretical studies and simulations are necessary to help interpret and understand the measured data.
However, the inherently complex chemistry renders a complete simulation cumbersome.
The commonly used kinetic Particle-In-Cell/Monte Carlo collisions (PIC/MCC) method, although conceptually capable of handling complex chemistry, typically avoids it, mainly due to a lack of cross section data.
A further reason is to keep the number of species and superparticles tractable \cite{Kim}; otherwise, the computational load of a PIC/MCC simulation would not be feasible.\par
Combining complex discharge chemistry with the multi-frequency approaches mentioned above makes a detailed assessment of the features of the ion dynamics too cumbersome to conduct all at once.
Hence, we decide to investigate the fundamental principles of a discharge with two ion species for this study.
The mixture of the noble gases argon and xenon has some history of being an adequate model for complex chemistry.
In low-pressure plasmas, the plasma chemistry of noble gases becomes relatively simple \cite{Gudmundsson1, Gudmundsson2}.
Therefore, studies on ion acoustic waves \cite{IAW1, IAW2} and a generalized Bohm criterion \cite{Bohm1, Gudmundsson1, Gudmundsson2} depicted this mixture as a simple example for a multi-ion discharge.
Recent studies by Kim et al.\cite{Kim2} and Adrian et al.\cite{Adrian} contributed to those discussions using or referring to Ar/Xe plasmas.\par
Apart from being a model system, there are some academic applications of Ar/Xe plasmas (e.g., as trace gas for mass spectrometry \cite{PlasmaTorch, MassSpec}, for the diagnostics of the electron temperature \cite{ ElTemp}, or in halide lamp simulations \cite{Lamp}).
Furthermore, the mixture has had great success as the illuminant \cite{ArXePDP} or as part of the illuminant mixture \cite{Kim, PDP1, PDP2} of plasma display panels (PDPs).
This historical background causes both gases to be relatively well researched.
This fact entails many valuable data for theory and simulation.\par
This work aims to add to the existing studies conducted for various gas mixtures \cite{mixture1, mixture2, mixture3, mixture4}, investigating their intrinsic mixture dynamics.
This knowledge will eventually enable the adaptation of the known means of plasma control to the complex discharges of industrial relevance.
In contrast to the existing studies, our work is focused on the impact of the gas mixture composition on the ion dynamics.
We will show that the gas composition is a suitable control parameter for the ion dynamics (e.g., the impingement energy of ions at the surface).\par
This manuscript presents our findings as follows: in section \ref{methods}, we introduce our simulation framework and a model for the missing cross section data.
Moreover, we introduce an energy balance model for CCPs with multiple ion species.
The findings of these models are interpreted in section \ref{results}.
We first discuss the influence of a variation of the gas composition on the ion dynamics.
Then, we validate the energy balance model with our simulation data.
Afterward, we apply this energy balance model to support and to analyze our gas composition variation findings.
We conclude section \ref{results} by discussing and examining the influence of a variation of the gas composition combined with a variation of the driving voltage on the ion dynamics.
Finally, in section \ref{conclusion}, we summarise our findings, draw a conclusion, and set this work into the context of industrial applications.
\section{Methods and models} \label{methods}
\subsection{Particle-In-Cell Simulation} \label{PIC_sec}
The first particle simulations were introduced in the 1940s \cite{HockneyBook}, and the PIC/MCC scheme was developed in the 1960s \cite{Birdsall}.
Since then, the PIC/MCC method became a commonly used tool to self-consistently simulate low-pressure plasmas \cite{Kim, Birdsall, Verboncoeur}.
Despite having the disadvantage of a substantial computational load, its most significant advantage is the statistical representation of distribution functions in phase-space, allowing the method to capture non-local dynamics \cite{Verboncoeur, Wilczek1}.\par
%
For this work, a benchmarked PIC/MCC implementation called \textit{yapic1D} \cite{PICbenchmark} is used to generate the results.
The original code is modified to include two background gases and multiple ion species.
Aside from that, diagnostics for the energy balance model mentioned above are added to the original code.\par
%
This simulation setup is taken to be fully geometrically symmetric (compare Wilczek et al. for details \cite{Wilczek1}).
1d3v electrostatic simulations are executed using a Cartesian grid with 800 grid cells representing an electrode gap of $25\,$mm.
The resulting cell size $\Delta{x}$ meets the requirement to resolve the Debye length $\lambda_\mathrm{D}$ \cite{Birdsall, PICbenchmark, Wilczek1}.
Similarly, the single harmonic driving frequency $f_\mathrm{RF} = 13.56\,$MHz is sampled with 3000 points per RF period.
The time step $\Delta{t}$ is sufficiently small to fulfill the requirement regarding the electron plasma frequency $\omega_\mathrm{pe}$ \cite{Birdsall, PICbenchmark, Wilczek1}.
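These two resolution criteria can be verified with a short numerical estimate. The sketch below assumes illustrative values for the electron temperature and density (they are not outputs of the simulation) and uses the common rule of thumb $\omega_\mathrm{pe}\,\Delta t < 0.2$:

```python
import math

# Illustrative check of the PIC stability criteria; T_e and n_e are
# assumed typical values for a low-pressure CCP, not simulation results.
EPS0, E, M_E = 8.8541878128e-12, 1.602176634e-19, 9.1093837015e-31

T_e = 3.0      # electron temperature in eV (assumed)
n_e = 1.0e15   # electron density in m^-3 (assumed)

debye = math.sqrt(EPS0 * T_e * E / (n_e * E**2))   # Debye length [m]
omega_pe = math.sqrt(n_e * E**2 / (EPS0 * M_E))    # plasma frequency [rad/s]

dx = 25e-3 / 800             # cell size from the stated 800-cell grid
dt = 1.0 / (13.56e6 * 3000)  # time step from 3000 samples per RF period

cell_ok = dx < debye             # grid resolves the Debye length
step_ok = omega_pe * dt < 0.2    # common criterion omega_pe * dt < 0.2
```

With these assumed plasma parameters, both criteria are comfortably satisfied, consistent with the resolution choices stated above.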
Several other studies mention the influence of the number of superparticles on the statistics and the plasma density \cite{PICbenchmark, Erden, Kim-PIC}.
For this work, we did not include individual weighting for different particle species.
To have an acceptable resolution for each ion species at all values of the xenon fraction $x_\mathrm{Xe}$, we simulated about 800,000 super-electrons for each case.
The advantage of this choice is that an average over 3000 converged RF cycles provides satisfactory results.\par
%
The total density of the neutral species is set by the ideal gas law, while the neutral fraction $x_i$ is varied.
Thereby, the gas pressure $p_\mathrm{gas}$ is kept constant at $3\,$Pa, and the gas temperature $T_\mathrm{gas}$ at $300\,$K.
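The resulting neutral densities follow directly from the ideal gas law; a minimal sketch (the example xenon fraction of 0.3 is purely illustrative):

```python
K_B = 1.380649e-23  # Boltzmann constant [J/K]

def neutral_densities(p_gas, t_gas, x_xe):
    """Total neutral density from the ideal gas law, split into argon
    and xenon partial densities for a given xenon fraction x_xe."""
    n_tot = p_gas / (K_B * t_gas)
    return n_tot, (1.0 - x_xe) * n_tot, x_xe * n_tot

# Conditions from the text: 3 Pa, 300 K; x_Xe = 0.3 as an example mixture.
n_tot, n_ar, n_xe = neutral_densities(3.0, 300.0, 0.3)
# n_tot is about 7.2e20 m^-3 at these conditions.
```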
First, we choose the amplitude of the RF voltage $V_\mathrm{RF}$ to be $100\,$V.
Later in section \ref{VolVar}, we discuss the implications of a voltage variation between $100\,$V and $1000\,$V on the ion dynamics.
All the parameters presented in this section are typical for baseline studies of RF-CCPs \cite{IEDFcontrol3, IEDFcontrol4, IEDFcontrol5, PICbenchmark, Wilczek1}.
\subsection{Discharge chemistry} \label{chemistry}
\begin{table}[t!]
\centering
\begin{tabular}{l l l c c}\hline
\# & reaction & process name & $\varepsilon_\mathrm{thr}$ [eV] & data source \\\hline
1 & e$^-$ + Ar $\rightarrow$ e$^-$ + Ar & elastic scattering & - & Phelps \\
2 & e$^-$ + Ar $\rightarrow$ e$^-$ + Ar$^*$ & electronic excitation & 11.5 & Phelps \\
3 & e$^-$ + Ar $\rightarrow$ 2$\,$e$^-$ + Ar$^+$ & ionization & 15.8 & Phelps \\
4 & e$^-$ + Xe $\rightarrow$ e$^-$ + Xe & elastic scattering & - & Phelps \\
5 & e$^-$ + Xe $\rightarrow$ e$^-$ + Xe$^*$ & electronic excitation & 8.32 & Phelps \\
6 & e$^-$ + Xe $\rightarrow$ 2$\,$e$^-$ + Xe$^+$ & ionization & 12.12 & Phelps \\\hline
7 & Ar$^+$ + Ar $\rightarrow$ Ar$^+$ + Ar & isotropic scattering & - & Phelps \\
8 & Ar$^+$ + Ar $\rightarrow$ Ar + Ar$^+$ & resonant charge exchange & - & Phelps \\
9 & Ar$^+$ + Xe $\rightarrow$ Ar$^+$ + Xe & isotropic scattering & - & LJ pot \\
10 & Xe$^+$ + Xe $\rightarrow$ Xe$^+$ + Xe & isotropic scattering & - & Phelps \\
11 & Xe$^+$ + Xe $\rightarrow$ Xe + Xe$^+$ & resonant charge exchange & - & Phelps \\
12 & Xe$^+$ + Ar $\rightarrow$ Xe$^+$ + Ar & isotropic scattering & - & Viehland\\\hline
\end{tabular}
\caption{Plasma chemistry and collision processes considered in the simulation.
Meaning of the data sources: ``Phelps'' refers to the cross section data found initially in the JILA database \cite{Phelps} and now distributed by the LXCat project \cite{lxcat1, lxcat2, lxcat3}.
``LJ pot'' refers to a cross section obtained based on a phenomenological Lennard-Jones potential as described by Laricchiuta et al.\cite{Laricchiuta}.
``Viehland'' marks a cross section calculated from an interaction potential given by Viehland et al.\cite{Viehland}.
Details of the calculations can be found in section \ref{xSectData}.}
\label{tab:chemistry}
\end{table}
For PIC simulations to provide a realistic representation of the particle distribution functions and physics in a low-pressure discharge, collisions need to be considered.
The method of choice is the Monte Carlo collision technique \cite{Birdsall, Verboncoeur, Wilczek1} that is combined with a so-called null collision scheme \cite{Skullerud, PICbenchmark, Wilczek1}.
Both techniques require the knowledge of momentum transfer cross sections.\par
%
The chemistry set for argon and xenon is in line with the work of Gu\eth mundsson et al.\cite{Gudmundsson1, Gudmundsson2}.
All reactions can be seen in detail in table \ref{tab:chemistry}.
In contrast to Gu\eth mundsson et al., we decide to take advantage of the commonly used and acknowledged \cite{Wilczek1, PICbenchmark, VWT1, DonkoPIC} cross section data obtained by Phelps.
The data was initially distributed via the JILA database \cite{Phelps} and is now available at the LXCat project website \cite{lxcat1, lxcat2, lxcat3}.
Phelps combines the cross sections for all electronically excited states into one ``effective excitation'' cross section.
This effective excitation reduces the total number of reactions and the numerical load.\par
%
The second difference compared to Gu\eth mundsson et al. is our treatment of the missing cross section data for Ar$^+$/Xe and Xe$^+$/Ar.
They conclude that charge transfer collisions between argon and xenon can be neglected, since this is a non-resonant process that requires a third particle to ensure momentum and energy conservation.
The disparity lies in our treatment of the remaining scattering process.
They assume the cross sections for processes 7 and 9 and the cross sections for processes 10 and 12, respectively, to be equal.
We adopt a physical model to procure the necessary cross sections from interaction potentials.
In this way, we create individual cross sections for processes 9 and 12.
The details of how to calculate these cross sections will be presented in the following section. \par
%
The cross section data used for this work are depicted in figure \ref{fig:crosssections}.
It is noticeable that the cross sections of processes involving xenon species generally have higher values than those of corresponding processes involving argon species.
In terms of a hard-sphere model \cite{LiebermanBook}, this difference is explained by the different covalent atomic radii of argon and xenon \cite{CRCHandbook}.
Xenon is, compared to argon, simply the bigger target.
In terms of a more sophisticated collision model \cite{LiebermanBook}, one, for example, needs to consider the atomic polarisability of the neutral particle.
Nevertheless, such a view leads to the same insight.
Xenon has a higher atomic polarisability than argon \cite{CRCHandbook} and stronger interaction with charged particles.
Correspondingly, the cross section for charged particles interacting with xenon has to be larger than the cross section for the interaction of charged particles and argon.
\begin{figure*}[t!]
\begin{center}
\includegraphics[width = \textwidth]{figures/figure1.jpg}
\caption{Cross sections for the electron and ion collisions used in this work.
a) shows the cross section of collision processes from electrons with argon neutrals.
b) shows the cross sections of collision processes from electrons with xenon neutrals.
c) shows cross sections of the collisions of Ar\textsuperscript{+} ions.
d) shows cross sections of the collisions of Xe\textsuperscript{+} ions.
The data source and a detailed description of each process are found in table \ref{tab:chemistry}.
Abbreviations used in the legend:
\textit{ela} = elastic collision electron/neutral,
\textit{exc} = electronic excitation electron/neutral,
\textit{ion} = electron impact ionization,
\textit{iso} = isotropic scattering ion/neutral as defined by \cite{Phelps},
\textit{back} = backscattering ion/neutral as defined by \cite{Phelps}.}
\label{fig:crosssections}
\end{center}
\end{figure*}
\subsection{The calculation of the cross sections} \label{xSectData}
On an elementary level of theory, all cross sections are based on an interaction potential between the colliding particles.
If the literature does not provide a cross section, a possible solution is for the modeler to construct an interaction potential based on several assumptions.
A classic example of this is the Langevin capture cross section \cite{Langevin} used in studies to make up for unknown cross sections \cite{Donko2}.
Despite the Langevin cross section's advantages, a complete implementation is numerically extensive and leads to anisotropic scattering \cite{Nanbu1, Nanbu2}.
The cross sections given by Phelps are a kind of momentum transfer cross sections \cite{Phelps}.
There, the scattering angles are found in an isotropic manner \cite{Birdsall}.
Hence, it is questionable to apply anisotropic scattering for two collisions while the other collisions are treated isotropically.
We perceive another approach to be more suitable for this work.\par
%
The approach used in this work is based on Laricchiuta et al. \cite{Laricchiuta}, who use a phenomenological potential to describe a two-body interaction given by
\begin{align}
V_{ij}(x) = \epsilon_{p,ij}\left[\frac{m}{n_{ij}\left(x_{ij}\right)-m}\left(\frac{1}{x_{ij}}\right)^{n_{ij}\left(x_{ij}\right)}-\frac{n_{ij}\left(x_{ij}\right)}{n_{ij}\left(x_{ij}\right)-m}\left(\frac{1}{x_{ij}}\right)^{m}\right],
\end{align}
where the standard exponents of the Lennard-Jones potential, 12 and 6, are replaced by $n_{ij}\left(x_{ij}\right)$ and $m$.
Depending on the type of interaction, $m$ is either 4 for neutral-ion interactions or 6 for neutral-neutral interactions.
In this work, the potential is applied to neutral-ion interactions only.
Hence, $m$ is always equal to 4.
The dimensionless coordinate $x_{ij}=r/r_{p,ij}$ is scaled by the parameterized position of the potential well $r_{p,ij}$.
The potential itself is scaled by the parameterized potential well depth $\epsilon_{p,ij}$.
Both parameterizations are empirical approximations that depend on atomic properties like the polarizability.
More details related to the exact empirical formulas can be found in Laricchiuta et al. \cite{Laricchiuta}, Cambi et al. \cite{Cambi}, Cappelletti et al. \cite{Cappelletti}, and Aquilanti et al. \cite{Aquilanti}.\par
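A minimal sketch of this potential for the neutral-ion case ($m=4$) is given below. The exponent function $n_{ij}(x)=\beta+4x^2$ follows Laricchiuta et al.; the value of $\beta$ used here is an arbitrary placeholder, not a fitted parameter:

```python
def phenomenological_potential(x, eps_p, beta, m=4):
    """Phenomenological two-body potential of Laricchiuta et al. in the
    reduced coordinate x = r / r_p; m = 4 for the neutral-ion case.
    The variable exponent is n(x) = beta + 4*x^2 (beta is a placeholder)."""
    n = beta + 4.0 * x**2
    return eps_p * (m / (n - m) * (1.0 / x)**n
                    - n / (n - m) * (1.0 / x)**m)

# Sanity checks: the well minimum sits at x = 1 with depth -eps_p, and the
# potential is repulsive at short range and attractive at long range.
v_min = phenomenological_potential(1.0, eps_p=0.1, beta=8.0)  # == -0.1
```

Note that with $\beta=8$ the exponent at the minimum is $n(1)=12$, so the classic 12-4 form is recovered there.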
%
Two additional steps are required to obtain the cross section.
The first step is calculating the scattering angle $\chi_{ij}$ according to
\begin{align}
\chi_{ij}\left(\epsilon_{ij},b\right) = \pi -2 b \int_{r_0}^\infty
\frac{\text{d}r}{r^2
\sqrt{1-\frac{b^2}{r^2}-\frac{V\left(r\right)}{\epsilon_{ij}}}}
\end{align}
with $\epsilon_{ij}$ the kinetic energy in the center of mass frame, $b$ the impact parameter, $r$ the distance between the particles, and $r_0$ the distance of closest approach.
The scattering angles are calculated using a program that is based on Colonna et al. \cite{Colonna}.
The second step is calculating the cross section $\sigma_{ij}$
\begin{align}
\sigma_{ij}^{\left(l\right)}\left(\epsilon_{ij}\right)=2 \pi \int_0^\infty
\left[1-\cos^l \chi_{ij}\left(\epsilon_{ij},b\right)\right]b\text{d}b,
\end{align}
with $l$ an integer that indicates which type of cross section is calculated.
In this work, we used $l=1$, which corresponds to the momentum transfer cross section.
The cross section is integrated based on an algorithm developed by Viehland \cite{Viehland2010}.
Finally, the scattering angle corresponding to the obtained momentum transfer cross sections is consistently taken to be isotropic in our simulations.
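As a consistency check of this two-step procedure, the $l=1$ quadrature can be tested against a hard-sphere potential, for which the deflection angle is analytic and the momentum-transfer integral reduces to $\pi d^2$ exactly. The sketch below is only this sanity check, not the Viehland integration algorithm used for the actual data:

```python
import numpy as np

def chi_hard_sphere(b, d):
    # Analytic deflection angle for a hard sphere of diameter d:
    # chi = 2*arccos(b/d) for b < d, and 0 (no collision) otherwise.
    return np.where(b < d, 2.0 * np.arccos(np.minimum(b / d, 1.0)), 0.0)

def momentum_transfer_xsec(chi_of_b, b_max, n=20000):
    # sigma^(1) = 2*pi * Int_0^inf (1 - cos(chi(b))) * b db,
    # evaluated with a simple midpoint rule on [0, b_max].
    db = b_max / n
    b = (np.arange(n) + 0.5) * db
    return 2.0 * np.pi * np.sum((1.0 - np.cos(chi_of_b(b))) * b) * db

d = 1.0e-10  # hard-sphere diameter, arbitrary illustrative value [m]
sigma = momentum_transfer_xsec(lambda b: chi_hard_sphere(b, d), 3.0 * d)
# For hard spheres the analytic momentum-transfer cross section is pi*d^2.
```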
\subsection{Energy balance model} \label{EB_sec}
The conservation of energy is one of the central continuity equations of physics; knowing how energy is distributed within a system is therefore key to understanding it.
In terms of low-temperature plasma physics, a frequently used model as given by Lieberman and Lichtenberg \cite{LiebermanBook} for a geometrically symmetric situation reads:
\begin{align}
S_\mathrm{abs} = 2\,n_\mathrm{s}\,u_\mathrm{B}\,\varepsilon_\mathrm{tot} = 2\,\Gamma_\mathrm{B}\,\varepsilon_\mathrm{tot} = 2\,e\,\Gamma_\mathrm{B}\,( \varepsilon_\mathrm{e} + \varepsilon_\mathrm{c} + \varepsilon_\mathrm{i} ).
\label{eq:cl_energybalance}
\end{align}
$S_\mathrm{abs}$ denotes the total energy flux into the system,
$n_\mathrm{s}$ the plasma density at the sheath edge,
$u_\mathrm{B}$ denotes the Bohm velocity,
$\Gamma_\mathrm{B}$ is the ion flux at the Bohm point,
and $\varepsilon_\mathrm{tot}$ is the total energy loss in eV.
The last transformation in equation \eqref{eq:cl_energybalance} shows that the energy loss per electron-ion pair created may be split into
an energy loss due to electrons hitting the bounding surface ($\varepsilon_\mathrm{e}$),
an energy loss due to collisions ($\varepsilon_\mathrm{c}$), and
an energy loss due to ions impinging at the bounding surface ($\varepsilon_\mathrm{i}$).
The loss terms $\varepsilon_\mathrm{e}$ and $\varepsilon_\mathrm{i}$ describe an averaged energy loss of the system per lost particle (neglecting particle reflections).
The third term $\varepsilon_\mathrm{c}$ is treated differently.
It represents the collisional losses per newly created electron/ion pair.\par
%
Previous work \cite{Wilczek2} has shown that an adaptation of equation \eqref{eq:cl_energybalance} gives insight into the system's electron dynamics by calculating all necessary terms from a PIC/MCC simulation.
An essential insight is that, due to flux conservation, the Bohm flux $\Gamma_\mathrm{B}$ can be exchanged by the electron flux $\Gamma_\mathrm{e,el}$ or the ion flux $\Gamma_\mathrm{i,el}$ at the electrode.\par
%
In detail, the energy conversion through collisions $\varepsilon_\mathrm{c}$ consists of an electron $\varepsilon_\mathrm{c,e}$ and an ion contribution $\varepsilon_\mathrm{c,i}$.
For low-pressure plasmas, it is argued that the energy loss due to ion collisions $\varepsilon_\mathrm{c,i}$ is often negligible \cite{LiebermanBook}.
However, a PIC/MCC study by Jiang et al.\cite{Jiang} showed that $\varepsilon_\mathrm{c,i}$ can significantly impact the energy balance of low-pressure plasmas.\par
%
Using both insights, we evolve equation \eqref{eq:cl_energybalance} into an energy balance model for two ion species, here explicitly given for our case of an Ar/Xe mixture:
\begin{alignat}{3}
&S_\mathrm{abs,tot} &&= S_\mathrm{abs,e} + S_\mathrm{abs,Ar+} + S_\mathrm{abs,Xe+} &&, \label{eq:tot}\\
&S_\mathrm{abs,e} &&= 2\,( \Gamma_\mathrm{e}\,\varepsilon_\mathrm{e} + \Gamma_\mathrm{Ar+}\,\varepsilon_\mathrm{c,e,Ar} + \Gamma_\mathrm{Xe+}\,\varepsilon_\mathrm{c,e,Xe} ) &&, \label{eq:e}\\
&S_\mathrm{abs,Ar+} &&= 2\, \Gamma_\mathrm{Ar+}\, (\varepsilon_\mathrm{i,Ar+} + \varepsilon_\mathrm{is,Ar+} + \varepsilon_\mathrm{cx,Ar+}) &&, \label{eq:Ar}\\
&S_\mathrm{abs,Xe+} &&= 2\, \Gamma_\mathrm{Xe+}\, (\varepsilon_\mathrm{i,Xe+} + \varepsilon_\mathrm{is,Xe+} + \varepsilon_\mathrm{cx,Xe+}) &&. \label{eq:Xe}
\end{alignat}
%
For this and more complex systems, it is useful to split the total energy flux $S_\mathrm{abs,tot}$ into a separate term for each species.
This separation is done in equation \eqref{eq:tot}.
Besides, we split the collisional losses to the background gas for ions, $\varepsilon_\mathrm{c,i}$, into two terms.
One represents the losses due to charge exchange collisions for Ar\textsuperscript{+} ions ($\varepsilon_\mathrm{cx,Ar+}$) and Xe\textsuperscript{+} ions ($\varepsilon_\mathrm{cx,Xe+}$).
The other term gives the losses caused by the remaining isotropic scattering.
It separates the isotropic losses for Ar\textsuperscript{+} ions ($\varepsilon_\mathrm{is,Ar+}$), and Xe\textsuperscript{+} ions ($\varepsilon_\mathrm{is,Xe+}$).
This distinction is based on the nomenclature of Phelps \cite{Phelps} and will prove useful for understanding the ion dynamics.\par
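The bookkeeping behind equations \eqref{eq:tot} to \eqref{eq:Xe} can be sketched as follows; the flux and per-particle loss values below are illustrative placeholders, whereas in the paper all of them come from the PIC/MCC run:

```python
E = 1.602176634e-19  # elementary charge [J per eV]

def energy_balance(gam_e, gam_ar, gam_xe, eps):
    """Evaluate the two-species energy balance: per-species absorbed energy
    flux densities [W m^-2] from electrode fluxes [m^-2 s^-1] and loss
    terms [eV].  The factor 2 accounts for the two electrodes of the
    geometrically symmetric discharge."""
    s_e = 2 * E * (gam_e * eps["e"]
                   + gam_ar * eps["c_e_Ar"] + gam_xe * eps["c_e_Xe"])
    s_ar = 2 * E * gam_ar * (eps["i_Ar"] + eps["is_Ar"] + eps["cx_Ar"])
    s_xe = 2 * E * gam_xe * (eps["i_Xe"] + eps["is_Xe"] + eps["cx_Xe"])
    return s_e, s_ar, s_xe, s_e + s_ar + s_xe

# Placeholder numbers chosen so that the electron flux balances the total
# ion flux (flux conservation in a symmetric discharge).
eps = {"e": 15.0, "c_e_Ar": 40.0, "c_e_Xe": 25.0,
       "i_Ar": 50.0, "is_Ar": 5.0, "cx_Ar": 10.0,
       "i_Xe": 45.0, "is_Xe": 6.0, "cx_Xe": 12.0}
s_e, s_ar, s_xe, s_tot = energy_balance(1.0e19, 2.0e18, 8.0e18, eps)
```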
%
The terms for the electron flux ($\Gamma_\mathrm{e}$), the Ar\textsuperscript{+} ion flux ($\Gamma_\mathrm{Ar+}$) and the Xe\textsuperscript{+} ion flux ($\Gamma_\mathrm{Xe+}$) in this model are obtained from the PIC/MCC simulation at the surface of the electrodes.
\section{Results and Discussion} \label{results}
\subsection{Influence of the neutral gas composition on the discharge}\label{GasVar}
\begin{figure*}[t!]
\begin{center}
\includegraphics[width = \textwidth]{figures/figure2.jpg}
\caption{The trend of the plasma density while varying the background gas composition.
a) shows the development of the time and space averaged total ion density.
b) shows the fraction of Xe\textsuperscript{+} ions in the discharge.
(conditions: $p_\mathrm{gas} = 3\,$Pa, $l_\mathrm{gap} = 25\,$mm, $V_\mathrm{RF} = 100\,$V, $f_\mathrm{RF} = 13.56\,$MHz)}
\label{fig:ni_GasVar}
\end{center}
\end{figure*}
Initially, the influence of the gas composition on the discharge is investigated by varying the Ar/Xe density ratio.
Figure \ref{fig:ni_GasVar} a) shows the total ion density $n_\mathrm{i,tot}$ as a function of the xenon gas fraction $x_\mathrm{Xe}$ or the argon gas fraction $x_\mathrm{Ar}$, respectively.
Here, the total ion density $n_\mathrm{i,tot}$ is defined as the sum of the spatially and temporally averaged number densities of Ar\textsuperscript{+} and Xe\textsuperscript{+} ions.
The gas fractions of argon and xenon are defined by the ratio of the respective species density and the total gas density.
In analogy to this definition, we define an ion fraction, e.g., the fraction of Xe\textsuperscript{+} ions $x_\mathrm{Xe+}$, as the ratio of the number density of Xe\textsuperscript{+} ions and the total ion density $n_\mathrm{i,tot}$.
Figure \ref{fig:ni_GasVar} b) depicts this Xe\textsuperscript{+} ion ratio as a function of the xenon gas fraction $x_\mathrm{Xe}$ or argon gas fraction $x_\mathrm{Ar}$, respectively.\par
%
When varying the gas mixture from pure argon to pure xenon by successively increasing the xenon fraction $x_\mathrm{Xe}$, the plasma density rises significantly, by about one order of magnitude (fig. \ref{fig:ni_GasVar} a)).
The ratio of Xe\textsuperscript{+} ions (fig. \ref{fig:ni_GasVar} b)) reveals that even small admixtures of xenon to an argon gas produce a high amount of Xe\textsuperscript{+} ions.
A xenon fraction of $x_\mathrm{Xe} \approx 0.15$ is already sufficient for Xe\textsuperscript{+} ions to become the dominant ion species.
Xenon admixtures of about 30 percent ($x_\mathrm{Xe} = 0.3$) produce a strongly Xe\textsuperscript{+} dominated discharge ($x_\mathrm{Xe+} \gtrsim 0.8$).
Both the development of the plasma density and the fraction of Xe\textsuperscript{+} ions in the discharge as a function of the gas composition show non-linear relations.
Hereby, the trend of figure \ref{fig:ni_GasVar} a) approximates a compressed parabola, whilst the trend of figure \ref{fig:ni_GasVar} b) resembles a square-root function.
In the following, the overall dominance of Xe\textsuperscript{+} ions will be examined and explained in more detail.
The difference in the ionization energies gives a basic explanation of the observed behavior.
The ionization threshold for xenon ($\varepsilon_\mathrm{thr,i,Xe} = 12.12\,$eV) is much smaller than the threshold for argon ($\varepsilon_\mathrm{thr,i,Ar} = 15.8\,$eV) (comp. tab. \ref{tab:chemistry}).
This disparity allows lower energetic electrons to ionize xenon in contrast to argon.
Additionally, the ionization cross section of xenon $\sigma_\mathrm{i,Xe}$ is about one order of magnitude bigger than the corresponding cross section $\sigma_\mathrm{i,Ar}$ for argon (comp. fig. \ref{fig:crosssections} a) and b)).
As a result, Xe\textsuperscript{+} ions are prevalent, even for low xenon admixtures, and dominate the discharge for a wide mixture range.
This result agrees with previous works \cite{mixture2, mixture4} that, for different mixtures, have shown at least one dominant ion species for a wide range of admixtures.
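The threshold effect alone can be estimated from the tail of a Maxwellian EEDF. The electron temperature used below is an assumed illustrative value, and the Boltzmann factor ratio is a crude stand-in for a full rate-coefficient integral:

```python
import math

def tail_ratio(eps_thr_a, eps_thr_b, t_e):
    # Ratio exp(-eps_a/T_e) / exp(-eps_b/T_e) of a Maxwellian EEDF tail
    # evaluated at two ionization thresholds (both in eV); a crude
    # estimate that ignores the cross-section shapes above threshold.
    return math.exp((eps_thr_b - eps_thr_a) / t_e)

# Thresholds from table 1; T_e = 3 eV is an assumed illustrative value.
ratio = tail_ratio(12.12, 15.8, 3.0)
```

Even before accounting for the roughly tenfold larger ionization cross section, the threshold difference alone favors xenon ionization by a factor of about 3.4 at this electron temperature.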
\begin{figure*}[t!]
\begin{center}
\includegraphics[width = \textwidth]{figures/figure3.jpg}
\caption{Ion energy distribution function (IEDF) at the electrode for different compositions of the background gas.
The left column shows distribution functions for Ar\textsuperscript{+} ions.
The corresponding distributions for Xe\textsuperscript{+} ions are on the right side of the plot.
(conditions: $p_\mathrm{gas} = 3\,$Pa, $l_\mathrm{gap} = 25\,$mm, $V_\mathrm{RF} = 100\,$V, $f_\mathrm{RF} = 13.56\,$MHz)}
\label{fig:IEDF_GasVar}
\end{center}
\end{figure*}
The influence of the gas composition also directly manifests in a variation of the IEDFs for both ion species.
The plots of figure \ref{fig:IEDF_GasVar} show IEDFs for both Ar\textsuperscript{+} and Xe\textsuperscript{+} ions at the electrode surface.
The energy, plotted on the abscissa, is given in eV.
The ordinate shows the IEDF normed on the respective ion flux $\Gamma_\mathrm{i,s}$ at the electrode.
Each row of figure \ref{fig:IEDF_GasVar} represents results for both ion species and the same case.
The cases are distinguished by the xenon fraction $x_\mathrm{Xe}$ as indicated.
Here, the plots in the left column show IEDFs of Ar\textsuperscript{+} ions, and the results for Xe\textsuperscript{+} ions are shown in the right column. \par
%
In section \ref{chemistry}, we argue that the charge exchange between Ar\textsuperscript{+}/Xe and Xe\textsuperscript{+}/Ar, respectively, is a non-resonant process and a three-body collision.
We conclude that this process is negligible.
As a result, only resonant charge exchange with the parent gas remains, so a variation of the gas composition changes the ions' probability of undergoing charge exchange collisions.
Therefore, Ar\textsuperscript{+} ions, for high argon fractions $x_\mathrm{Ar}$, show an IEDF clearly dominated by collisions.
This IEDF becomes a collisionless distribution for small argon admixtures to a xenon background (fig. \ref{fig:IEDF_GasVar} left).
The IEDF of Xe\textsuperscript{+} ions shows a similar trend except that Xe\textsuperscript{+} ions have a less distinct bimodal behaviour for the cases with high argon fraction.
This difference is explained by the scaling of the width of the bimodal peak being proportional to $\sqrt{m_\mathrm{i}^{-1}}$ \cite{ChabertBook, Kawamura}.
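This scaling can be quantified directly: with all other discharge parameters equal, the width ratio of the Xe\textsuperscript{+} to the Ar\textsuperscript{+} bimodal peak follows from the atomic masses alone.

```python
import math

M_AR, M_XE = 39.948, 131.293  # atomic masses of Ar and Xe [u]

# Width of the bimodal IEDF peak scales as 1/sqrt(m_i); hence the ratio of
# the Xe+ to the Ar+ peak width (all other parameters held equal):
width_ratio = math.sqrt(M_AR / M_XE)
# The Xe+ bimodal peak is roughly 55 % as wide as the Ar+ peak.
```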
Besides, an argon fraction $x_\mathrm{Ar}$ of 0.2 to 0.3 or, vice versa, a xenon fraction $x_\mathrm{Xe}$ of 0.2 to 0.3 creates an intermediate or hybrid regime for the respective other ion species.
A significant number of ions experience the discharge as collision dominated, while the remaining ions cross the sheath collisionlessly.
In figure \ref{fig:IEDF_GasVar}, the described regime is visible for argon at $x_\mathrm{Xe} = 0.2$ and xenon at $x_\mathrm{Xe} = 0.8$.
Several distinct peaks are visible at low energies that stem from charge exchange collisions, and at high energies, the characteristic collisionless bimodal peak is clearly established.
In these cases, particularly, the scaling of the bimodal peak width can be observed.
For both cases, Xe\textsuperscript{+} ions establish a bimodal peak narrower than the bimodal peak formed by Ar\textsuperscript{+} ions.
\subsection{Revision and analysis of the energy balance model} \label{en-validation}
\begin{figure*}[t!]
\begin{center}
\includegraphics[width = \textwidth]{figures/figure4.jpg}
\caption{Evaluation of the energy balance equations (eq. \eqref{eq:tot} - \eqref{eq:Xe}) for various gas compositions.
All parameters have been calculated by means of a PIC/MCC simulation.
(conditions: $p_\mathrm{gas} = 3\,$Pa, $l_\mathrm{gap} = 25\,$mm, $V_\mathrm{RF} = 100\,$V, $f_\mathrm{RF} = 13.56\,$MHz)}
\label{fig:confirm_eb}
\end{center}
\end{figure*}
For a fundamental understanding of the energy distribution within the system, the energy balance model represented by eqs. \eqref{eq:tot} - \eqref{eq:Xe} may be used, as shown in figure \ref{fig:confirm_eb}.
We calculate all parameters and properties by means of a PIC/MCC simulation averaged over 3000 RF periods.
The plot shows two bars for each of the chosen gas compositions.
The grey bar on the left-hand side represents the total absorbed energy flux $S_\mathrm{abs,tot}$.
The colored bars on the right-hand side resolve the different channels of energy dissipation in detail.
The colors blue (electron energy lost at the electrode $\propto\ \varepsilon_\mathrm{e}$),
red (averaged energy consumption per e/Ar\textsuperscript{+}-pair $\propto\ \varepsilon_\mathrm{c,e,Ar}$),
and green (averaged energy consumption per e/Xe\textsuperscript{+}-pair $\propto\ \varepsilon_\mathrm{c,e,Xe}$) represent the right-hand side of equation \eqref{eq:e}.
The right-hand side of equation \eqref{eq:Ar} is depicted in pink (Ar\textsuperscript{+} ion energy loss at the electrode $\propto\ \varepsilon_\mathrm{i,Ar+}$),
cyan (energy loss by isotropic scattering $\propto\ \varepsilon_\mathrm{is,Ar+}$),
and purple (energy loss by backscattering $\propto\ \varepsilon_\mathrm{cx,Ar+}$).
The remaining colors olive (Xe\textsuperscript{+} ion energy loss at the electrode $\propto\ \varepsilon_\mathrm{i,Xe+}$),
brown (energy loss by isotropic scattering $\propto\ \varepsilon_\mathrm{is,Xe+}$),
and orange (energy loss by backscattering $\propto\ \varepsilon_\mathrm{cx,Xe+}$) visualize the right-hand-side of equation \eqref{eq:Xe}. \par
%
At first, it is noticeable that figure \ref{fig:confirm_eb} shows a roughly square-root-shaped increase of the absorbed energy flux density $S_\mathrm{abs}$ as a function of the xenon fraction $x_\mathrm{Xe}$.
This trend is a consequence of the boundary conditions in combination with the varied gas composition.
The PIC/MCC simulations considered in this work use a single-frequency voltage source as a boundary condition for calculating the electric field.
The energy flux density is calculated self-consistently according to the plasma state.
At low xenon fractions $x_\mathrm{Xe}$, xenon neutrals and Xe\textsuperscript{+} ions successively provide additional loss mechanisms, and the energy consumption increases rapidly.
At higher xenon fractions, by contrast, xenon already dominates the discharge, and the energy consumption slowly saturates.
Lieberman and Lichtenberg present the scaling law $n_\mathrm{s} \propto\ S_\mathrm{abs}$ \cite{LiebermanBook}.
In section \ref{GasVar}, we discussed that the trend of the plasma density $n_\mathrm{i,tot}$ as a function of the xenon fraction $x_\mathrm{Xe}$ (comp. fig. \ref{fig:ni_GasVar} a)) is approximated by a parabola.
Combined with the square-root-shaped trend of the absorbed energy flux density $S_\mathrm{abs}$ as a function of the xenon fraction $x_\mathrm{Xe}$,
we see that the resulting trends of $n_\mathrm{i,tot}$ and $S_\mathrm{abs}$ match the anticipated scaling.\par
%
The results calculated for pure argon ($x_\mathrm{Xe} = 0.0$) and pure xenon ($x_\mathrm{Xe} = 1.0$) discharges resemble the classical model given by equation \eqref{eq:cl_energybalance}.
The results demonstrate that all individual loss terms sum up to the total energy flux and thus prove the model's exact energy conservation.
Both the argon case and xenon case reveal that the energy loss due to colliding ions (argon: cyan and purple, xenon: brown and orange) has a significant contribution to the energy balance (argon: $\approx 31.1\,$\% of the total energy, xenon: $\approx 35.6\,$\%).
These findings are similar to the study of Jiang et al.\cite{Jiang}.\par
%
The remaining bars of figure \ref{fig:confirm_eb} review the modified energy balance model presented in equations \eqref{eq:tot} to \eqref{eq:Xe}.
They show, for some exemplary gas mixtures, that the suggested balance for multiple ion species is complete and that each species' energy transfer can be traced individually.
Furthermore, the results show that for a complete energy balance of plasmas with two ion species, the energy transfers of colliding ions are at least as important as they are in mono-ionic plasmas \cite{Jiang}.
In particular, the energy losses due to charge exchange collisions ($\varepsilon_\mathrm{cx,Ar+}$, purple, or $\varepsilon_\mathrm{cx,Xe+}$, orange, resp.) account for a significant share of the transferred energy.\par
%
Both the individual energy transfers of each particle species and the exact resolution of specific loss channels will in the following prove useful to understand and analyze the discharge.
To make the results comparable, we switch the representation of the energy flux density $S_\mathrm{abs}$ from absolute units to relative units (comp. fig. \ref{fig:EB_GasVar}).
Thereby, the energy fluxes of each case are individually normalized to the total energy flux $S_\mathrm{abs,tot}$.
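The normalization itself amounts to dividing every channel by the total absorbed flux of the same case; a short sketch with illustrative values:

```python
# Convert absolute channel fluxes (W/m^2) to fractions of S_abs,tot,
# as used in the relative representation. Values are illustrative only.
abs_fluxes = {"electrons": 3.2, "Ar+": 2.9, "Xe+": 3.4}
s_abs_tot = sum(abs_fluxes.values())                  # total absorbed flux
rel_fluxes = {k: v / s_abs_tot for k, v in abs_fluxes.items()}
print({k: round(100 * v, 1) for k, v in rel_fluxes.items()})  # percentages
```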
\begin{figure*}[t!]
\begin{center}
\includegraphics[width = \textwidth]{figures/figure5.jpg}
\caption{The energy balance equations \eqref{eq:tot} - \eqref{eq:Xe} applied for the background gas variation.
All properties are calculated from a PIC simulation and referred to the total absorbed energy flux $S_\mathrm{abs,tot}$.
All plots show the right-hand side of their corresponding equation in relative units.
a) represents equation \eqref{eq:tot}, b) equation \eqref{eq:e}, c) equation \eqref{eq:Ar} and d) equation \eqref{eq:Xe}.
(conditions: $p_\mathrm{gas} = 3\,$Pa, $l_\mathrm{gap} = 25\,$mm, $V_\mathrm{RF} = 100\,$V, $f_\mathrm{RF} = 13.56\,$MHz)}
\label{fig:EB_GasVar}
\end{center}
\end{figure*}
Figure \ref{fig:EB_GasVar} shows a rearrangement of the data of figure \ref{fig:confirm_eb} in the relative representation explained before.
The subplots a) to d) respectively present the right-hand sides of equations \eqref{eq:tot} to \eqref{eq:Xe}.
The abscissae of all plots mark energy flux densities in relative units, and the ordinates mark the gas fractions ($x_\mathrm{Xe}$ or $x_\mathrm{Ar}$, resp.).
The color schemes for figures \ref{fig:EB_GasVar} b), c), and d) are similar to the ones used in figure \ref{fig:confirm_eb}.
Figure \ref{fig:EB_GasVar} a) introduces a new color scheme for the total energy fluxes absorbed by electrons (bright blue), Ar\textsuperscript{+} ions (fuchsia), and Xe\textsuperscript{+} ions (lime green).\par
%
In section \ref{GasVar}, we point out two observations.
First, Xe\textsuperscript{+} ions are the dominant ion species for a wide range of mixtures.
Second, for constant gas pressure, collisional features of the IEDF depend on the gas composition, and even a collisional/collisionless hybrid regime can be reached.
Both observations are confirmed and explained by the energy balance.
Figure \ref{fig:EB_GasVar} a) shows that for a xenon fraction $x_\mathrm{Xe}$ between 0.15 and 0.2, Ar\textsuperscript{+} ions and Xe\textsuperscript{+} ions absorb an equal amount of energy ($30\, \%$ of $S_\mathrm{abs}$ or $\approx 3\,$W/m\textsuperscript{2}).
Simultaneously, the production of Xe\textsuperscript{+} ions is more effective than the production of Ar\textsuperscript{+} ions.
This increased effectiveness is due to the lower ionization energy of xenon ($\varepsilon_\mathrm{thr,Xe} = 12.12\, \mathrm{eV}$) compared to argon's ionization energy ($\varepsilon_\mathrm{thr,Ar} = 15.8\, \mathrm{eV}$).\par
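The impact of the two thresholds can be illustrated with a rough estimate: the fraction of the electron population energetic enough to ionize differs by roughly a factor of three between xenon and argon. The sketch below uses the threshold values from the text, but a Maxwellian energy distribution and an electron temperature of $3\,$eV are assumed for illustration only; the PIC/MCC electrons are neither Maxwellian nor at this exact temperature.

```python
import numpy as np

def fraction_above(eps_thr_eV, T_e_eV, eps_max=200.0, n=200_000):
    """Fraction of a Maxwellian electron energy distribution
    f(eps) ~ sqrt(eps) * exp(-eps / T_e) lying above eps_thr.
    Evaluated on a uniform grid, so the normalization cancels."""
    eps = np.linspace(1e-6, eps_max, n)
    f = np.sqrt(eps) * np.exp(-eps / T_e_eV)
    return f[eps >= eps_thr_eV].sum() / f.sum()

T_e = 3.0                           # eV, assumed electron temperature
f_Xe = fraction_above(12.12, T_e)   # xenon ionization threshold (text value)
f_Ar = fraction_above(15.8, T_e)    # argon ionization threshold (text value)
print(f_Xe / f_Ar)                  # roughly 3x more electrons can ionize Xe
```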
%
The case for a xenon admixture of 20 percent ($x_\mathrm{Xe} = 0.2$) serves as the best example for this finding.
There are, with a Xe\textsuperscript{+} ion fraction $x_\mathrm{Xe+} \approx 0.7$ (comp. fig. \ref{fig:ni_GasVar}), more Xe\textsuperscript{+} ions than Ar\textsuperscript{+} ions inside the discharge.
Nevertheless, more energy per electron-ion pair is consumed to produce Ar\textsuperscript{+} ions (red) than to produce Xe\textsuperscript{+} ions (green) (fig. \ref{fig:EB_GasVar} b)).
This finding is explained by the lower excitation and ionization levels of xenon compared to argon.
Simultaneously, these lower excitation and ionization levels open up new loss channels for the electrons inside the system.
Raising the xenon fraction $x_\mathrm{Xe}$ means that more and more electrons that are not energetic enough to participate in the inelastic processes of argon can still lose their energy in the lower-threshold processes of xenon.
Thus, the averaged electron energy $\overline{\varepsilon_\mathrm{e}}$ drops when going from an argon discharge to a xenon discharge.
The decreasing loss term $\varepsilon_\mathrm{e}$ (blue) in figure \ref{fig:EB_GasVar} b) hints at the average electron energy of the system and gives evidence of this explanation.
All in all, this shows that the production of Xe\textsuperscript{+} ions fills an unoccupied energetic niche where numerous low energetic electrons can participate.
Therefore, a significant production of Xe\textsuperscript{+} ions is observed even for low xenon fractions $x_\mathrm{Xe}$ and Xe\textsuperscript{+} ions are the dominant ion species for the majority of the possible Ar/Xe mixtures. \par
%
The trends observed in the IEDFs (fig. \ref{fig:IEDF_GasVar}) and the conclusions drawn from this observation are confirmed by the energy balance as well (fig. \ref{fig:EB_GasVar} c) and d)).
Looking at the losses due to charge exchange collisions $\varepsilon_\mathrm{cx,i}$ for both Ar\textsuperscript{+} ions (fig. \ref{fig:EB_GasVar} c), purple) and Xe\textsuperscript{+} ions (fig. \ref{fig:EB_GasVar} d), orange), it becomes apparent that the collisional features are switched between Ar\textsuperscript{+} and Xe\textsuperscript{+} ions when going towards more argon, or xenon respectively, dominated gas mixtures.
The losses due to charge exchange for Ar\textsuperscript{+} ions $\varepsilon_\mathrm{cx,Ar+}$ monotonically fall as a function of the xenon fraction $x_\mathrm{Xe}$ (fig. \ref{fig:EB_GasVar} c), purple), while the corresponding term for Xe\textsuperscript{+} ions $\varepsilon_\mathrm{cx,Xe+}$ monotonically rises when displayed as the same relation (fig. \ref{fig:EB_GasVar} d), orange).
The slight difference in the trends is explained by the dominance of the Xe\textsuperscript{+} ions in the discharge.
While the density of Xe\textsuperscript{+} ions rapidly increases when adding small amounts of xenon to an argon background (comp. fig. \ref{fig:ni_GasVar} a)), the density of Ar\textsuperscript{+} ions drops just as quickly once Xe\textsuperscript{+} ions dominate.
Hence, there are not enough Ar\textsuperscript{+} ions present in discharges dominated by Xe\textsuperscript{+} ions, so that the losses of Ar\textsuperscript{+} ions in total cannot significantly contribute to the energy absorbed by the discharge (fig. \ref{fig:EB_GasVar} a), fuchsia).\par
%
In addition to this, the mean energy of Xe\textsuperscript{+} ions at the electrode $\varepsilon_\mathrm{i,Xe+}$ shows a very different trend than all the collisional quantities (fig. \ref{fig:EB_GasVar} d), olive).
Instead of monotonically rising with the xenon fraction $x_\mathrm{Xe}$ as the corresponding Ar\textsuperscript{+} term does as a function of the argon ratio $x_\mathrm{Ar}$ (comp. fig. \ref{fig:EB_GasVar} c), pink), the Xe\textsuperscript{+} curve shows a maximum at $x_\mathrm{Xe} = 0.4$.
This maximum is closely connected to the dominance of Xe\textsuperscript{+} ions.
At 40 percent xenon admixture ($x_\mathrm{Xe} = 0.4$), Xe\textsuperscript{+} ions already make up about 90 percent of the ions in the discharge (fig. \ref{fig:ni_GasVar} a)).
At the same time, argon is the dominant background, rendering Xe\textsuperscript{+} ions more or less incapable of undergoing a relevant number of charge exchange collisions.
This lack of charge exchange collisions is seen in the IEDF of Xe\textsuperscript{+} ions, which even for a xenon fraction $x_\mathrm{Xe} = 0.5$ shows a characteristic collisionless single bimodal peak (fig. \ref{fig:IEDF_GasVar}, right).
For lower xenon fractions $x_\mathrm{Xe}$, the number density $n_\mathrm{Xe+}$ and the flux density $\Gamma_\mathrm{Xe+}$ are lower and fewer Xe\textsuperscript{+} ions reach the electrode.
This decrease results in a lower energy loss.
For higher xenon fractions $x_\mathrm{Xe}$, charge exchange collisions between Xe and Xe\textsuperscript{+} become more and more probable.
This trend manifests in the IEDFs (fig. \ref{fig:IEDF_GasVar}, right) and the trend of the loss term for charge exchange $\varepsilon_\mathrm{cx,Xe+}$ (fig. \ref{fig:EB_GasVar} d), orange).
Thus, the energy loss of Xe\textsuperscript{+} ions to the surface finally drops because the energy gets dissipated more strongly to the neutral gas via charge exchange collisions. \par
%
The minimum of the total energy flux density absorbed by electrons $S_\mathrm{abs,e}$ (fig. \ref{fig:EB_GasVar} a), bright blue) has a similar explanation.
For a xenon fraction $x_\mathrm{Xe} = 0.5$, electrons absorb the lowest amount of energy.
Under these conditions, Xe\textsuperscript{+} ions make up almost all of the ions in the discharge.
Figure \ref{fig:ni_GasVar} b) shows that for a xenon fraction $x_\mathrm{Xe} = 0.5$ the Xe\textsuperscript{+} ion fraction $x_\mathrm{Xe+}$ is approximately 0.9.
At the same time, xenon atoms make up just 50 percent of the background gas.
The number of collisions with argon or xenon particles, respectively, is, as argued before, significantly reduced compared to mixtures with a high amount of either gas.
Thus, for xenon fractions $x_\mathrm{Xe} < 0.5$, the production of Ar\textsuperscript{+} ions causes electrons to absorb and invest more energy.
For xenon fraction $x_\mathrm{Xe} > 0.5$, collisions with xenon neutrals become successively more probable, and the production of Xe\textsuperscript{+} ions consumes more energy (comp. fig. \ref{fig:EB_GasVar} b), green) without significantly changing the discharge conditions any more (comp. fig. \ref{fig:ni_GasVar}).\par
%
Additionally, figure \ref{fig:IEDF_GasVar} shows that $x_\mathrm{Xe} = 0.5$ is optimal for producing high energetic ions.
Both ion species establish the characteristic collisionless bimodal peaks and impact the surface with high energies.
Therefore, the relative amount of energy brought by ions to the surface is maximal.
For lower xenon fraction ($x_\mathrm{Xe} < 0.5$), the IEDF of Ar\textsuperscript{+} ions is visibly affected by collisions and vice versa for higher xenon fraction ($x_\mathrm{Xe} > 0.5$).
\subsection{Influence of the driving voltage on the discharge} \label{VolVar}
\begin{figure*}[t!]
\begin{center}
\includegraphics[width = \textwidth]{figures/figure6.jpg}
\caption{The trend of the plasma density while varying the background gas composition and driving voltage.
a) shows the development of the time and space averaged total ion density.
b) shows the relative fraction of Xe\textsuperscript{+} ions in the discharge.
(conditions: $p_\mathrm{gas} = 3\,$Pa, $l_\mathrm{gap} = 25\,$mm, $f_\mathrm{RF} = 13.56\,$MHz)}
\label{fig:ni_VolVar}
\end{center}
\end{figure*}
In terms of our simulation, raising the driving voltage, with all other parameters (gas composition, pressure, etc.) kept constant, is equivalent to raising the energy input to the system.
Figure \ref{fig:ni_VolVar} a) shows a semi-logarithmic representation of the time and space averaged total plasma density $n_\mathrm{i,tot}$ as a function of the gas fractions ($x_\mathrm{Xe}$ or $x_\mathrm{Ar}$, resp.).
The different colors differentiate the data for different RF amplitudes (black = $100\,$V, red = $250\,$V, blue = $500\,$V, green = $1\,$kV).
The black curve shows the same data as figure \ref{fig:ni_GasVar} a).
Due to the aforementioned higher input energy, the plasma density is raised in general, while the general trend of the individual curves is preserved.
Independent of the driving voltage, argon discharges have a significantly lower plasma density than xenon discharges, and the transition while varying the gas composition shows the same non-linear trend.
In sections \ref{GasVar} and \ref{en-validation}, we discuss that in this context, non-linear means parabolic. \par
%
Apart from this, a varied driving voltage alters the dominance of Xe\textsuperscript{+} ions.
Figure \ref{fig:ni_VolVar} b) shows a similar plot to figure \ref{fig:ni_GasVar} b).
The Xe\textsuperscript{+} ion fraction is presented as a function of the gas fraction ($x_\mathrm{Xe}$ or $x_\mathrm{Ar}$, resp.).
The colors have the same meanings as in figure \ref{fig:ni_VolVar} a), and the black curve was also presented before (see fig. \ref{fig:ni_GasVar} b)).
Figure \ref{fig:ni_VolVar} b) shows that, for a fixed xenon fraction $x_\mathrm{Xe}$, a raised voltage reduces the fraction of Xe\textsuperscript{+} ions $x_\mathrm{Xe+}$ present in the discharge.
The case for $x_\mathrm{Xe} = 0.2$ is a good example of this observation.
When increasing the driving voltage from 100$\,$V to 1$\,$kV, the ratio of Xe\textsuperscript{+} ions $x_\mathrm{Xe+}$ drops from approximately 0.7 to roughly 0.6.
\begin{figure*}[t!]
\begin{center}
\includegraphics[width = \textwidth]{figures/figure7.jpg}
\caption{The energy balance equations \eqref{eq:tot} - \eqref{eq:Xe} applied for both the variation of the background gas and the driving voltage $V_\mathrm{RF}$.
All properties are calculated from a PIC simulation and referred to the total absorbed energy flux $S_\mathrm{abs,tot}$.
All plots show one term on the right-hand side of their corresponding equation in relative units.
a) shows the electron's part $S_\mathrm{abs,e}$ of eq. \eqref{eq:tot}. a\textsubscript{1}) - a\textsubscript{3}) show the three terms of equation \eqref{eq:e} and add up to the respective curve of a).
b) represents the Ar\textsuperscript{+} ions' part $S_\mathrm{abs,Ar+}$ of equation \eqref{eq:tot}. b\textsubscript{1}) - b\textsubscript{3}) present the three terms of equation \eqref{eq:Ar} and sum up to the respective curve of b).
c) shows the Xe\textsuperscript{+} ions' part $S_\mathrm{abs,Xe+}$ of equation \eqref{eq:tot}. c\textsubscript{1}) - c\textsubscript{3}) depict the three terms of equation \eqref{eq:Xe}, and their addition gives the respective curve of c).
(conditions: $p_\mathrm{gas} = 3\,$Pa, $l_\mathrm{gap} = 25\,$mm, $f_\mathrm{RF} = 13.56\,$MHz)}
\label{fig:EBQs_VolVar}
\end{center}
\end{figure*}
Once again, the energy balance (fig. \ref{fig:EBQs_VolVar}) explains the discharge mechanisms governing how an increased driving voltage raises the plasma density.
Similar to figure \ref{fig:EB_GasVar}, terms on the right-hand side of the energy balance equations \eqref{eq:tot} - \eqref{eq:Xe} are shown in relative units and as a function of the gas fractions ($x_\mathrm{Xe}$ or $x_\mathrm{Ar}$, resp.).
In contrast to figure \ref{fig:EB_GasVar}, each panel of figure \ref{fig:EBQs_VolVar} represents just one term of the respective equation's right-hand side.
The different curves represent data for different driving voltages $V_\mathrm{RF}$, ranging from $V_\mathrm{RF} = 100\,$V to $V_\mathrm{RF} = 1000\,$V.
The color scheme is analogous to figures \ref{fig:confirm_eb} and \ref{fig:EB_GasVar}.
Figure \ref{fig:EBQs_VolVar} a) shows the total energy flux density absorbed by electrons ($S_\mathrm{abs,e}$) in bright blue.
Figure \ref{fig:EBQs_VolVar} b) depicts the total energy flux density absorbed by Ar\textsuperscript{+} ions ($S_\mathrm{abs,Ar+}$) in fuchsia, and figure \ref{fig:EBQs_VolVar} c) presents the total energy flux density absorbed by Xe\textsuperscript{+} ions ($S_\mathrm{abs,Xe+}$) in lime green.
Together figures \ref{fig:EBQs_VolVar} a) - c) show the right-hand side of equation \eqref{eq:tot}.
Therefore, the corresponding data points always add up horizontally to $100\,\%$ (or the total energy flux density $S_\mathrm{abs,tot}$, resp.).
Vertically, the details of each particle species' power absorption are presented.
Figures \ref{fig:EBQs_VolVar} a\textsubscript{1}) - a\textsubscript{3}) each show one term of the right-hand side of equation \eqref{eq:e}.
The average energy loss of electrons at the electrodes $\varepsilon_\mathrm{e}$ is shown in figure \ref{fig:EBQs_VolVar} a\textsubscript{1}) in blue.
The averaged amount of energy needed to create an electron/Ar\textsuperscript{+} ion pair ($\varepsilon_\mathrm{c,e,Ar}$) is found in panel a\textsubscript{2}) in red, and the related term for electron/Xe\textsuperscript{+} ion pairs ($\varepsilon_\mathrm{c,e,Xe}$) is depicted in panel a\textsubscript{3}) in green.
The individual terms of the right-hand side of equation \eqref{eq:Ar} are shown in figures \ref{fig:EBQs_VolVar} b\textsubscript{1}) - b\textsubscript{3}).
They reveal the details of the Ar\textsuperscript{+} ion dynamics by presenting the average energy loss by Ar\textsuperscript{+} ions at the electrodes ($\varepsilon_\mathrm{i,Ar+}$, fig. \ref{fig:EBQs_VolVar} b\textsubscript{1}), pink), the energy loss of Ar\textsuperscript{+} ions caused by isotropic scattering ($\varepsilon_\mathrm{is,Ar+}$, fig. \ref{fig:EBQs_VolVar} b\textsubscript{2}), cyan), and the energy loss of Ar\textsuperscript{+} ions due to backscattering ($\varepsilon_\mathrm{cx,Ar+}$, fig. \ref{fig:EBQs_VolVar} b\textsubscript{3}), purple).
Similarly, figures \ref{fig:EBQs_VolVar} c\textsubscript{1}) - c\textsubscript{3}) show the right-hand side of equation \eqref{eq:Xe}.
They unravel the details of the Xe\textsuperscript{+} ion dynamics by showing the average impingement energy of Xe\textsuperscript{+} ions at the electrodes ($\varepsilon_\mathrm{i,Xe+}$, fig. \ref{fig:EBQs_VolVar} c\textsubscript{1}), olive), the energy lost by Xe\textsuperscript{+} ions in isotropic scattering collisions ($\varepsilon_\mathrm{is,Xe+}$, fig. \ref{fig:EBQs_VolVar} c\textsubscript{2}), brown), and the energy lost by Xe\textsuperscript{+} ions in backscattering collisions ($\varepsilon_\mathrm{cx,Xe+}$, fig. \ref{fig:EBQs_VolVar} c\textsubscript{3}), orange).
Vertically, the sum of the data in the subscript-labeled panels gives the curves of the non-subscript-labeled one (e.g., panels a\textsubscript{1}) - a\textsubscript{3}) sum up to panel a)).\par
%
In general, it is apparent that a raised driving voltage reduces the ratio of energy coupled to the electrons (fig. \ref{fig:EBQs_VolVar} a)) and raises the fraction absorbed by both Ar\textsuperscript{+} and Xe\textsuperscript{+} ions (fig. \ref{fig:EBQs_VolVar} b) or fig. \ref{fig:EBQs_VolVar} c), resp.).
The increased energy coupling to the ions mainly has two causes.
First, a raised driving voltage $V_\mathrm{RF}$ increases the voltage drop across the boundary sheaths, and ions gain higher impingement energies after crossing the sheath collisionlessly.
This is shown in figure \ref{fig:EBQs_VolVar} b\textsubscript{1}) for Ar\textsuperscript{+} ions and in figure \ref{fig:EBQs_VolVar} c\textsubscript{1}) for Xe\textsuperscript{+} ions.
Second, an increased energy gain for the ions inside the sheath goes along with an increased energy loss caused by charge exchange collisions.
The corresponding terms $\varepsilon_\mathrm{cx,Ar+}$ for Ar\textsuperscript{+} ions (fig. \ref{fig:EBQs_VolVar} b\textsubscript{3})) and $\varepsilon_\mathrm{cx,Xe+}$ for Xe\textsuperscript{+} ions (fig. \ref{fig:EBQs_VolVar} c\textsubscript{3})) support this hypothesis.
Furthermore, the cross sections for charge exchange dominate the ones for isotropic scattering at high energies (comp. fig. \ref{fig:crosssections} c) and d)).
Correspondingly, the already low energy losses by Ar\textsuperscript{+} ions ($\varepsilon_\mathrm{is,Ar+}$, fig. \ref{fig:EBQs_VolVar} b\textsubscript{2})) and Xe\textsuperscript{+} ions ($\varepsilon_\mathrm{is,Xe+}$, fig. \ref{fig:EBQs_VolVar} c\textsubscript{2})) caused by isotropic scattering decrease due to the increased driving voltage $V_\mathrm{RF}$.
The maximum of figure \ref{fig:EBQs_VolVar} c\textsubscript{1}) was discussed in section \ref{en-validation}.
The energy-efficient production of Xe\textsuperscript{+} ions already creates a high amount of Xe\textsuperscript{+} ions for small xenon fractions $x_\mathrm{Xe}$.
Thus, there are optimal parameters for Xe\textsuperscript{+} ions to bombard the surface with the least collisional loss ($x_\mathrm{Xe} = 0.4$ for $V_\mathrm{RF} = 100\,$V, sec. \ref{en-validation}).
The aforementioned enhanced role of backscattering and decreased influence of isotropic scattering causes the optimal parameters for higher driving voltages $V_\mathrm{RF}$ to shift to higher xenon fractions $x_\mathrm{Xe}$ (e.g., $x_\mathrm{Xe} = 0.5$ for $V_\mathrm{RF} = 1000\,$V, fig. \ref{fig:EBQs_VolVar} c\textsubscript{1})). \par
%
In terms of ion production, the previous assessment shows that the higher the driving voltage is set, the smaller the fraction of the energy consumed for creating new electron/ion pairs becomes.
Furthermore, the maximal amount of energy consumed for creating Xe\textsuperscript{+} ions in a pure xenon background ($x_\mathrm{Xe} = 1.0$, fig. \ref{fig:EBQs_VolVar} a\textsubscript{3})) is always lower than the corresponding maximum for Ar\textsuperscript{+} ions in a pure argon background ($x_\mathrm{Ar} = 1.0$, fig. \ref{fig:EBQs_VolVar} a\textsubscript{2})).
As argued before, this finding correlates with the fact that the threshold energies of all inelastic processes involving xenon are significantly lower than those involving argon.
This observation additionally reveals why Xe\textsuperscript{+} ions dominate the discharge for most conditions.
This is best seen by comparing the pure argon case ($x_\mathrm{Xe}=0.0$, fig. \ref{fig:EBQs_VolVar} a\textsubscript{2})) with the pure xenon case ($x_\mathrm{Xe}=1.0$, fig. \ref{fig:EBQs_VolVar} a\textsubscript{3})) for a driving voltage of 1$\,$kV.
Here, roughly eight percent of the total energy flux density is used to produce an electron/Ar\textsuperscript{+} ion pair (fig. \ref{fig:EBQs_VolVar} a\textsubscript{2})).
For the corresponding pure xenon case, only five percent of the total energy is used to produce an electron/Xe\textsuperscript{+} ion pair (fig. \ref{fig:EBQs_VolVar} a\textsubscript{3})).
Simultaneously, the xenon case's plasma density is more than one order of magnitude higher than in the argon case (comp. fig. \ref{fig:ni_VolVar} a)).\par
\begin{figure*}[t!]
\begin{center}
\includegraphics[width = \textwidth]{figures/figure8.jpg}
\caption{The trend of the ion densities while varying the background gas composition and driving voltage.
a) shows the development of the time and space averaged Ar\textsuperscript{+} ion density.
b) shows the development of the time and space averaged Xe\textsuperscript{+} ion density.
(conditions: $p_\mathrm{gas} = 3\,$Pa, $l_\mathrm{gap} = 25\,$mm, $f_\mathrm{RF} = 13.56\,$MHz)}
\label{fig:ind_den_VolVar}
\end{center}
\end{figure*}
Since the production of Xe\textsuperscript{+} ions remains more effective for all applied driving voltages, there has to be another reason why the dominance of Xe\textsuperscript{+} ions is reduced.
A close examination of figures \ref{fig:EBQs_VolVar} a\textsubscript{1}) and \ref{fig:ind_den_VolVar} explains this observation.
Both panels of figure \ref{fig:ind_den_VolVar} are similar in structure to figure \ref{fig:ni_VolVar} a), but show the individual ion densities ($n_\mathrm{Ar+}$ in fig. \ref{fig:ind_den_VolVar} a) or $n_\mathrm{Xe+}$ in fig. \ref{fig:ind_den_VolVar} b), resp.) as a function of the gas fraction ($x_\mathrm{Xe}$ or $x_\mathrm{Ar}$, resp.).
The different colors again mark different values of the driving voltage $V_\mathrm{RF}$, and the color scheme is the same as in figure \ref{fig:ni_VolVar} a).
The trend of the Ar\textsuperscript{+} ion density in figure \ref{fig:ind_den_VolVar} a) already reveals the underlying process responsible for the decreased dominance of Xe\textsuperscript{+} ions.
Even for the base case ($V_\mathrm{RF} = 100\,$V), the maximum of the Ar\textsuperscript{+} ion density is found at a xenon fraction $x_\mathrm{Xe}=0.2$ and not at $x_\mathrm{Xe} = 0.0$, in contrast to the Xe\textsuperscript{+} ion density, which peaks in pure xenon (comp. fig. \ref{fig:ind_den_VolVar}).
This maximum is shifted by a raised driving voltage to a xenon admixture of 30 percent ($x_\mathrm{Xe} = 0.3$, fig. \ref{fig:ind_den_VolVar} a)).
Recall from figure \ref{fig:ni_VolVar} a) that adding xenon to an argon background monotonically raises the plasma density.
Therefore, a small xenon admixture to an argon discharge means creating more electrons, which will mostly collide with argon atoms.
As a result, the probability of ionizing an argon atom is higher than in a case with no or less xenon admixture, that is, without these additional electrons.
Thus, the density of Ar\textsuperscript{+} ions is higher than in a discharge without xenon admixture.
This synergy effect benefits the Ar\textsuperscript{+} ions at low voltages as long as most neutrals are argon atoms.
A raised driving voltage shifts the maximum of the Ar\textsuperscript{+} ion density and the benefits of this synergy effect to higher xenon fractions.\par
%
For Xe\textsuperscript{+} ions, on the other hand, this synergy effect cannot be observed (fig. \ref{fig:ind_den_VolVar} b)).
This observation is due to the higher ionization energy of argon.
Figure \ref{fig:EBQs_VolVar} a\textsubscript{1}) helps to understand this observation by showing the energy lost by electrons at the electrodes $\varepsilon_\mathrm{e}$.
The general trend of the curves for $\varepsilon_\mathrm{e}$ is a reduction by a raised driving voltage (fig. \ref{fig:EBQs_VolVar} a\textsubscript{1})).
An equivalent conclusion is that energy is dissipated more efficiently inside the volume of the discharge.
In terms of our non-equilibrium low-pressure discharge, there are just two ways for electrons to lose energy.
Either they interact inelastically with the background or transfer their energy to the surface by arriving at the electrodes.
The first option was discussed before (fig. \ref{fig:EBQs_VolVar} a\textsubscript{2}) and a\textsubscript{3})), and the second option is discussed here.
Both processes similarly respond to the increased driving voltage, which means that a higher driving voltage increases the ion production efficiency.
Simultaneously, the energy dissemination efficiency is increased the more xenon is added to the background gas.
In section \ref{GasVar}, we discuss that an increased amount of xenon atoms in the discharge provides lower energetic electrons with the opportunity to get involved in inelastic processes compared to a discharge with lower or no xenon addition (see fig. \ref{fig:EB_GasVar} d)).
In figure \ref{fig:EBQs_VolVar} a\textsubscript{1}), the same trend is observed for all depicted driving voltages.
As a function of the xenon fraction $x_\mathrm{Xe}$, the energy lost by electrons at the electrode $\varepsilon_\mathrm{e}$ is monotonically falling.
Vice versa, argon has higher thresholds for inelastic processes, especially ionization, than xenon (comp. tab. \ref{tab:chemistry}).
Thus, adding argon to a xenon background cannot produce a higher electron density that would cause more ionization of xenon.
Hence, there is no synergy effect through which Xe\textsuperscript{+} ions would benefit from additional ionization of argon.
\begin{figure*}[t!]
\begin{center}
\includegraphics[width = \textwidth]{figures/figure9.jpg}
\caption{Ion energy distribution function (IEDF) at the electrode for different driving voltages.
The left column shows distribution functions for Ar\textsuperscript{+} ions.
The corresponding distributions for Xe\textsuperscript{+} ions are on the right side of the plot.
(conditions: $p_\mathrm{gas} = 3\,$Pa, $x_\mathrm{Xe} = 0.1$, $l_\mathrm{gap} = 25\,$mm, $f_\mathrm{RF} = 13.56\,$MHz)}
\label{fig:IEDF_VolVar10}
\end{center}
\end{figure*}
\begin{figure*}[t!]
\begin{center}
\includegraphics[width = \textwidth]{figures/figure10.jpg}
\caption{Ion energy distribution function (IEDF) at the electrode for different driving voltages.
The left column shows distribution functions for Ar\textsuperscript{+} ions.
The corresponding distributions for Xe\textsuperscript{+} ions are on the right side of the plot.
(conditions: $p_\mathrm{gas} = 3\,$Pa, $x_\mathrm{Xe} = 0.9$, $l_\mathrm{gap} = 25\,$mm, $f_\mathrm{RF} = 13.56\,$MHz)}
\label{fig:IEDF_VolVar90}
\end{center}
\end{figure*}
Figures \ref{fig:IEDF_VolVar10} and \ref{fig:IEDF_VolVar90} show, similar to figure \ref{fig:IEDF_GasVar}, IEDFs normalized to the respective particle flux densities at the electrode surface.
The difference between figure \ref{fig:IEDF_VolVar10} and \ref{fig:IEDF_VolVar90} is in the gas composition (fig. \ref{fig:IEDF_VolVar10}: $x_\mathrm{Xe} = 0.1$, fig. \ref{fig:IEDF_VolVar90} $x_\mathrm{Xe} = 0.9$).
Both figures contain IEDFs for Ar\textsuperscript{+} ions in the left column panels and IEDFs for Xe\textsuperscript{+} ions in the right one.
The difference between each figure's four rows is the altered amplitude of the RF voltage $V_\mathrm{RF}$ given on each panel's top.
Panels of the same row share the same voltage. \par
%
The figures show that for the IEDF, a raised driving voltage, first of all, means that the averaged sheath voltage $\langle \phi_\mathrm{s} \rangle$ increases.
This increase manifests in the width of the characteristic collisionless single bimodal peak.
Its width scales with $V_\mathrm{RF}$ and $\sqrt{\langle \phi_\mathrm{s}(t) \rangle }\, /\, \langle s(t) \rangle$ \cite{Kawamura, Benoit-Cattin}.
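For orientation, this scaling is commonly quoted (see, e.g., \cite{Kawamura}) in the explicit form
%
\begin{align}
\Delta \varepsilon_\mathrm{i} \approx \frac{8\, e\, \langle \phi_\mathrm{s}(t) \rangle}{3\, \omega_\mathrm{RF}\, \langle s(t) \rangle}\, \left( \frac{2\, e\, \langle \phi_\mathrm{s}(t) \rangle}{m_\mathrm{i}} \right)^{1/2},
\end{align}
%
with $\omega_\mathrm{RF} = 2 \pi f_\mathrm{RF}$, which reproduces the proportionality to $\sqrt{\langle \phi_\mathrm{s}(t) \rangle }\, /\, \langle s(t) \rangle$.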
Kawamura et al. \cite{Kawamura} give the averaged sheath width $\langle s(t) \rangle$ in terms of the collisionless Child-Langmuir law:
%
\begin{align}
\langle s(t) \rangle = \frac{2}{3}\, \left( \frac{2\, e}{m_\mathrm{i}} \right)^{1/4} \, \left( \frac{\varepsilon_0}{\langle j_\mathrm{i}(t) \rangle} \right)^{1/2}\, \langle \phi_\mathrm{s}(t) \rangle^{3/4}
\end{align}
%
with $e$ the elementary charge, $m_\mathrm{i}$ the ion mass, $\varepsilon_0$ the vacuum permittivity, and $\langle j_\mathrm{i}(t) \rangle$ the averaged ion current density inside the sheath.
For argon, the increased bimodal peak's width is found in figure \ref{fig:IEDF_VolVar90} (left) and for xenon in figure \ref{fig:IEDF_VolVar10} (right).
From top ($V_\mathrm{RF} = 100\,$V) to bottom ($V_\mathrm{RF} = 1000\,$V), the width of the highest energetic bimodal peak increases (Ar\textsuperscript{+} ions: fig. \ref{fig:IEDF_VolVar90} left, Xe\textsuperscript{+} ions: fig. \ref{fig:IEDF_VolVar10} right).\par
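As a numerical illustration, the averaged sheath width can be evaluated directly. The ion current density and sheath voltage below are assumed example values, not taken from the simulation, and the dimensionally consistent form of the collisionless Child law, $\langle s\rangle = \frac{2}{3}\left(2e/m_\mathrm{i}\right)^{1/4}\left(\varepsilon_0/\langle j_\mathrm{i}\rangle\right)^{1/2}\langle\phi_\mathrm{s}\rangle^{3/4}$, is used:

```python
# Physical constants (CODATA values)
E_CHARGE = 1.602176634e-19   # elementary charge [C]
EPS_0 = 8.8541878128e-12     # vacuum permittivity [F/m]
AMU = 1.66053906660e-27      # atomic mass unit [kg]

def child_langmuir_sheath_width(phi_s, j_i, m_i):
    """Time-averaged collisionless Child-Langmuir sheath width [m].

    phi_s : averaged sheath voltage [V]
    j_i   : averaged ion current density inside the sheath [A/m^2]
    m_i   : ion mass [kg]
    """
    return (2.0 / 3.0) * (2.0 * E_CHARGE / m_i) ** 0.25 \
        * (EPS_0 / j_i) ** 0.5 * phi_s ** 0.75

# Assumed example values for an argon sheath (not from the paper's simulation)
m_ar = 39.948 * AMU
s_avg = child_langmuir_sheath_width(phi_s=200.0, j_i=0.5, m_i=m_ar)
print(f"averaged sheath width: {s_avg * 1e3:.2f} mm")
```

For these example values the sheath width comes out at a few millimetres and grows as $\langle\phi_\mathrm{s}\rangle^{3/4}$, in line with the broadening of the bimodal peak at higher driving voltages.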
%
At the same time, a higher driving voltage at a constant pressure produces higher energetic ions.
Higher kinetic energy enlarges the mean free path of those ions since the mean free path is energy-dependent \cite{ChabertBook} and the collision cross sections fall at high energies (see fig. \ref{fig:crosssections}).
Thus, the distance between the peaks in the low energetic part of the IEDFs that are connected to charge exchange collisions is increased with the driving voltage \cite{IEDF2, Wild2}.\par
%
Furthermore, a raised driving voltage causes the emergence of multiple bimodal structures within the IEDFs.
These structures were first reported as double peaks by Wild et al.~\cite{Wild2}.
For our scenario, they become visible for $V_\mathrm{RF} = 500\,$V and $V_\mathrm{RF} = 1000\,$V and establish both for Ar\textsuperscript{+} (fig. \ref{fig:IEDF_VolVar10}) and Xe\textsuperscript{+} ions (fig. \ref{fig:IEDF_VolVar90}).
Here, charge exchange collisions are responsible for the appearance of the low energetic peak.\par
%
Section \ref{GasVar} discusses that low energetic peaks vanish for Ar\textsuperscript{+} ions when the xenon fraction $x_\mathrm{Xe}$ is raised and vice versa.
For a second or third bimodal peak to establish, two requirements have to be met.
First, ions have to be able to react to the sheath electric field.
Second, there has to be some sort of hybrid regime that we discussed in section \ref{GasVar}.
Combining these requirements also means that only charge exchange collisions that happen clearly above the averaged sheath position can establish an additional bimodal structure.
Under these conditions, the slow ions produced through charge exchange experience the sheath's modulation that eventually determines their impingement energy.
A charge exchange collision inside the sheath during the collapsing phase causes the ions to gain slightly lower impingement energy than a charge exchange during the expanding sheath phase. \par
%
According to Lieberman and Lichtenberg \cite{LiebermanBook}, there is a weak dependency between the average position of the sheath edge and the voltage amplitude ($s_\mathrm{m} \propto V_\mathrm{RF}^{1/4}$).
Thus, it is more likely for the collisional structures of IEDFs at higher voltages to show bimodal structures.
The IEDFs of Xe\textsuperscript{+} ions at a xenon fraction $x_\mathrm{Xe} = 0.9$ (fig. \ref{fig:IEDF_VolVar90}) are the best example for this conclusion of our study.
For $V_\mathrm{RF} = 100\,$V, the results clearly show a single bimodal peak and several non-bimodal charge exchange peaks.
For $V_\mathrm{RF} = 250\,$V, the main bimodal peak is centered around $\approx 110\,$eV, and at least one additional bimodal peak around $87\,$eV is visible.
At $500\,$V, the IEDF has at least four bimodal peaks (centered around $\approx 130\,$eV, $\approx 170\,$eV, $\approx 180\,$eV, and $\approx 210\,$eV).
The case for $V_\mathrm{RF} = 1000\,$V shows at least four bimodal peaks as well (centered around $\approx 190\,$eV, $\approx 260\,$eV, $\approx 325\,$eV, and $\approx 410\,$eV).
For that case, only charge exchange collisions that take place deep inside the boundary sheath, close to the electrode, produce peaks without any sign of bimodal features.\par
%
The hybrid regime of the IEDFs itself is also influenced by a raised driving voltage $V_\mathrm{RF}$.
For a voltage amplitude $V_\mathrm{RF} = 250\,$V, a slightly higher voltage than that of the base case, the hybrid regime appears for lower admixtures of xenon (fig. \ref{fig:IEDF_VolVar10}) or argon, respectively (fig. \ref{fig:IEDF_VolVar90}).
Here, the broadening and amplification effects of a raised driving voltage prevail.
Thus, the hybrid regime establishes earlier than for lower voltages.\par
%
The IEDFs for even higher driving voltages (see $V_\mathrm{RF} = 500\,$V and $V_\mathrm{RF} = 1000\,$V in fig. \ref{fig:IEDF_VolVar10} and \ref{fig:IEDF_VolVar90}) are again more collision-dominated and show a different trend.
For $500\,$V, the bimodal part of the distribution function is less populated than the low energetic part.
For $1000\,$V, the highest energetic peak for both Ar\textsuperscript{+} (fig. \ref{fig:IEDF_VolVar10}) and Xe\textsuperscript{+} ions (fig. \ref{fig:IEDF_VolVar90}) is damped compared to the lowest energetic peaks.
This trend arises from the fact that the cross section for charge exchange collisions drops much more slowly at high energies than the cross sections for isotropic scattering (comp. fig. \ref{fig:crosssections} c) and d)).
Therefore, charge exchange is the preferred process at high energies.
For driving voltages much higher than $100\,$V, the hybrid regime is shifted back to higher mixing ratios.
\section{Conclusion} \label{conclusion}
The objective of this work was to investigate the ion dynamics of plasmas containing two ion species.
This investigation was conducted by simulating a low-pressure capacitively coupled plasma with a mixture of argon and xenon as the background gas.
The overall result is that the gas composition serves as a means to control the collisionality of the ion species and thus the ion dynamics.
Section \ref{GasVar} shows that the gas composition (more specifically the argon fraction $x_\mathrm{Ar}$ or xenon fraction $x_\mathrm{Xe}$, respectively) significantly affects the discharge, especially the ion dynamics.
The plasma density as a function of the xenon fraction $x_\mathrm{Xe}$ resembles a parabola (comp. fig. \ref{fig:ni_GasVar}).
A complete energy balance that we self-consistently calculate based on a PIC/MCC simulation helps understand this effect.
Inelastic processes in xenon (e.g., ionization with $\varepsilon_\mathrm{i,Xe} = 12.12\,$eV) have significantly lower energy thresholds than their argon counterparts.
Thus, electrons distribute their energy more efficiently when the xenon fraction $x_\mathrm{Xe}$ is raised.
We show that especially the ionization process in xenon is energetically more favorable than in argon.
This disparity leads to Xe\textsuperscript{+} ions being the dominant ion species for a broad range of xenon fractions $x_\mathrm{Xe}$.\par
For the ion dynamics, we present that the gas composition controls the collisional characteristics of the IEDF.
Between argon and xenon, only non-resonant charge transfer collisions are possible.
Three-body collisions do not occur in relevant amounts in the low-pressure regime.
Therefore, a varied xenon fraction $x_\mathrm{Xe}$ shifts the multiple low energetic peaks (characteristic for charge exchange and a collision dominated regime) from argon (most pronounced at $x_\mathrm{Xe} = 0$, fig. \ref{fig:IEDF_GasVar} left) to xenon (most pronounced at $x_\mathrm{Xe} = 1$, fig. \ref{fig:IEDF_GasVar} right).
Additionally, a collisional/collisionless hybrid regime is present for specific gas fractions.
Some ions experience the discharge within this hybrid regime as collision dominated while others traverse the boundary sheath without collisions.
The analysis of the energy balance helps to understand these effects as well.
It reveals that charge exchange is, even at low-pressures, a relevant energy loss process for ions.
A raised xenon fraction $x_\mathrm{Xe}$ depletes (Ar\textsuperscript{+} ions) or contributes to (Xe\textsuperscript{+} ions) this process for the respective ions (fig. \ref{fig:EB_GasVar}).
Thus, the addition of xenon increases (Ar\textsuperscript{+} ions) or decreases (Xe\textsuperscript{+} ions) the impingement energies of the respective ions.
Furthermore, the energy balance reveals optimal parameters for the impingement energy of ions in this mixture.
In this context, optimal refers to overall minimal collisional losses for the ions, thus desirable conditions for processes (e.g., ion-assisted etching).
For $x_\mathrm{Xe} = 0.4$, the combined fraction of the total energy that ions lose at the surface is maximal.
This example shows that the gas composition allows tailoring the discharge to the requirements of specific applications.\par
A variation of the driving voltage $V_\mathrm{RF}$ attenuates the dominance of the Xe\textsuperscript{+} ions (sec. \ref{VolVar}).
The reason for this observation is a synergy effect.
The argon's ionization process benefits from additional electrons created during the ionization of xenon.
An extensive analysis of the energy balance is needed to understand this synergy effect and differentiate why it only occurs for Ar\textsuperscript{+} but not Xe\textsuperscript{+}.
Furthermore, we show that the increased driving voltage $V_\mathrm{RF}$ intensifies structures (e.g., broadens the width of bimodal peaks) and further complicates the IEDFs (e.g., by creating multiple bimodal peaks).
The energy dependence of the cross section for charge exchange causes the hybrid regime to shift to different mixing ratios when solely varying the driving voltage.
Both observations are supported by the analysis of the energy balance too.
Overall, the energy balance has proven to be a practical and impactful diagnostic.
The results of section \ref{VolVar} show that the gas composition controls the ion dynamics over a wide range of driving voltages.
However, the effect of varied gas compositions is not entirely independent of the driving voltage.\par
Future work based on this study will develop in two directions.
On the one hand, future work must go beyond the model system Ar/Xe.
The presented basic principles have to be investigated in more complex and process-relevant gas mixtures like Ar/CF\textsubscript{4} or CF\textsubscript{4}/H\textsubscript{2}.
The energy balance model can be adapted to and should be tested for these gas mixtures.
On the other hand, based on this work's findings, the influence of a combination of multi-frequency discharges and a varied gas composition on the ion dynamics should be investigated.
For example, a multi-frequency approach could be used to further optimize the ion production, which at $V_\mathrm{RF} = 100\,$V was found to be optimal for a xenon fraction $x_\mathrm{Xe} = 0.4$.\par
Another open research question is: How does the addition of secondary electron emission and realistic surface coefficients alter the ion dynamics?
The argon ionization's synergy effect, especially, could be significantly affected when secondary electrons cause an amplification of the ionization process.
To the best of our knowledge, there are no published experimental results that analyze the influence of the gas mixture on the IEDFs.
Nor are there studies that experimentally report on the hybrid regime or the synergy effect within the ionization of argon.
All of these studies would be crucial to validate our findings and simulation.
\ack This work was supported by the German Research Foundation (DFG) via Collaborative Research Centre CRC 1316 (Project ID: 327886311), Transregio TRR87 (Project-ID: 138690629), and project MU 2332/6-1.
\section*{ORCID IDs}
\noindent M. Klich: \href{https://orcid.org/0000-0002-3913-1783}{https://orcid.org/0000-0002-3913-1783}\\
\noindent S. Wilczek: \href{https://orcid.org/0000-0003-0583-4613}{https://orcid.org/0000-0003-0583-4613}\\
\noindent T. Mussenbrock: \href{http://orcid.org/0000-0001-6445-4990}{http://orcid.org/0000-0001-6445-4990}\\
\noindent J. Trieschmann: \href{http://orcid.org/0000-0001-9136-8019}{http://orcid.org/0000-0001-9136-8019}
\section*{References}
\section{Introduction}
The aim of this report is to present and provide access to two novel benchmarks for the Job Shop Scheduling Problem (JSSP). The JSSP is one of the most studied scheduling problems and, as such, a large number of benchmarks are available in the literature (\textit{e}.\textit{g}. \cite{lawrence,taillard,adams,storer,apple}). However, a common shortcoming of classic problem instances is the limited number of jobs and operations in comparison with real industrial scenarios.
The industrial field is one of the domains that has had the most impact on the development of scheduling theory \cite{fisher,Blazewicz2007,baker2013}, to the point that even the terminology adopted by scholars derives from its semantic field. In fact, terms often used in the scheduling domain are closely coupled with industrial concepts, like \emph{machines} to indicate resources and \emph{jobs} to indicate tasks. Following this terminology, the factory layout (\textit{i}.\textit{e}. the number of machines and their functionality) is called \emph{shop}, and when a job is composed of sub-tasks ordered in a specific sequence, these are called \emph{operations}. Operations are linked to machines through the concept of \emph{operation type}: every operation has a type, and each machine can process operations of certain types and not others.
Despite this strong link, scholars have begun to study the scheduling problem in a more abstract and ``pure'' form. This allowed researchers to concentrate on the aspects that are at the core of the problem complexity (\textit{e}.\textit{g}. \cite{garey1976}), but made it more and more difficult to apply academic results to real-life scenarios \cite{fuchigami2017}.
One of the aspects where this discrepancy is most visible is the size of the scheduling problems. In fact, one of the few benchmarks targeted at industrial scenarios is the Taillard benchmark, from 1992 \cite{taillard}. Taillard published a benchmark simulating the size of real industrial data, with the largest JSSP instances reaching 100 jobs to be scheduled on 20 machines. After his work, however, little effort was put into maintaining the parallelism between real industrial problems and JSSP instances. Nowadays it is not uncommon to reach scheduling problems as big as 1000 jobs on 1000 machines for a rather short planning horizon (e.g.\ a week); however, in JSSP there are no benchmarks of such size. Some recent studies, like Zhai \textit{et al}. in 2014 \cite{zhai2014}, address the problem, but are still reluctant to test instances beyond 100 jobs on 50 machines. Others, like one proposed by IBM, mention tests done on large-scale scheduling instances that go beyond 1000 jobs, but said benchmark is not in the public domain \cite{laborie2018ibm}.
The experimentation of scenarios close to the real-world industrial applications is at the core of this report. Given the small size of the available benchmarks, we worked towards the definition of a new testing environment conforming to modern industrial standards. With this goal in mind, two novel large-scale benchmarks for JSSP were produced.
\section{The Large-TA benchmark}
The first benchmark can be seen as an ``extension'' of the Taillard benchmark, meaning that the instances are created with the same procedure as \cite{taillard}, but on a larger scale. In fact, there are a total of 90 instances, divided into 9 groups of 10 instances each, spanning from $10 \times 10$ to $1000 \times 1000$, as shown in Table \ref{table:lt}. Following Taillard's specification, every job has to be processed by every machine and just once per machine. Thus, the number of operations of an instance corresponds to the number of jobs times the number of machines (instances of this type are typically referred to as ``rectangular''), peaking at one million operations to be scheduled in the largest group. This benchmark was first introduced in a publication that compared two state-of-the-art constraint solvers on realistic industrial scheduling problems \cite{dacol2019cp}.
\begin{table}[h]
\centering
\begin{tabular}{rrr|c}
\textbf{Machs} & \textbf{Jobs} & \textbf{Operations} & \textbf{Instances} \\ \hline
10 & 10 & $100$ & 10 \\
100 & 10 & $1000$ & 10 \\
1000 & 10 & $10\,000$ & 10 \\ \hline
10 & 100 & $1000$ & 10 \\
100 & 100 & $10\,000$ & 10 \\
1000 & 100 & $100\,000$ & 10 \\ \hline
10 & 1000 & $10\,000$ & 10 \\
100 & 1000 & $100\,000$ & 10 \\
1000 & 1000 & $1\,000\,000$ & 10
\end{tabular}
\caption{List of the instances of the Large-TA benchmark, grouped by size. The last column indicates the number of instances for that group.}
\label{table:lt}
\end{table}
\section{The Known-Optima Benchmark}
Contrary to the Large-TA, this benchmark does not consist of rectangular instances: a job can have fewer operations than the number of machines, and two operations of the same job may be processed by the same machine.
The creation process is as follows: first, produce an optimal solution without (idle) holes for (1) a given number of job operations to be scheduled, (2) the number of machines, and (3) the desired optimal makespan. This is done by randomly partitioning each machine's time continuum into the predefined number of partitions. Each partition corresponds to the processing period of one operation. Consequently, the optimal makespan, average operation length ($avg(opLength)$), number of operations ($\#ops$) and number of machines ($\#machines$) are related by $makespan = \frac{\#ops \times avg(opLength)}{\#machines}$. Based on such a partitioning, successor relations are randomly generated. Each partition has at most one successor and/or predecessor such that the successor's starting time is greater than the predecessor's finishing time. Figure \ref{instanceGeneration} shows the principle. This creates scheduling problem instances where the best makespan is known beforehand, giving the opportunity to immediately check the quality of a given solution.
\begin{figure}
\center
\includegraphics[width=8.5cm]{instanceGeneration.png}
\caption{Principle of instance generation}
\label{instanceGeneration}
\end{figure}
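The makespan relation above can be sanity-checked with a small script that mimics the construction: each machine's time continuum $[0, makespan)$ is randomly partitioned into operation intervals, so the per-machine operation lengths sum exactly to the makespan. This is an illustrative sketch of the counting argument, not the generator actually used for the benchmark:

```python
import random

def optimal_makespan(num_ops, avg_op_length, num_machines):
    # makespan = (#ops * avg(opLength)) / #machines for a hole-free schedule
    return num_ops * avg_op_length / num_machines

random.seed(1)
MAKESPAN, MACHINES, OPS_PER_MACHINE = 600_000, 100, 10

schedule = []
for _ in range(MACHINES):
    # partition [0, MAKESPAN) into OPS_PER_MACHINE intervals without holes
    cuts = sorted(random.sample(range(1, MAKESPAN), OPS_PER_MACHINE - 1))
    bounds = [0] + cuts + [MAKESPAN]
    schedule.append([b - a for a, b in zip(bounds, bounds[1:])])

num_ops = sum(len(ops) for ops in schedule)
avg_len = sum(map(sum, schedule)) / num_ops
print(num_ops, avg_len, optimal_makespan(num_ops, avg_len, MACHINES))
# → 1000 60000.0 600000.0
```

Because every machine's intervals cover $[0, makespan)$ exactly, the relation holds by construction, independently of how the cut points are drawn.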
We applied two different procedures for generating random successor relations based on a pre-calculated solution:
\begin{enumerate}
\item \textbf{Short-jobs instances: }For each operation $op$ (in random order) define as successor $suc$ a random operation such that
\begin{itemize}
\item $suc$ is not on the same machine as $op$.
\item $suc$ starts later than $op$ ends.
\item $suc$ is not yet a successor of another operation.
\item If no such $suc$ exists, $op$ has no successor.
\end{itemize}
\item \textbf{Long-jobs instances: }For each operation $op$ (in random order) define as successor $suc$ an operation such that
\begin{itemize}
\item $suc$ is not on the same machine as $op$.
\item $suc$ starts later than $op$ ends.
\item $suc$ is not yet a successor of another operation and
\item the time between the end of $op$ and the start of $suc$ is minimal.
\item In case that there are multiple possible successors, a random one is chosen.
\item If no such $suc$ exists, $op$ has no successor.
\end{itemize}
\end{enumerate}
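The short-jobs procedure above can be sketched as follows; the operation representation (dicts with machine, start and end fields) is a hypothetical one chosen for illustration:

```python
import random

def assign_successors_short(ops, seed=0):
    """Short-jobs variant: give each operation at most one random feasible
    successor. ops is a list of dicts with 'machine', 'start', 'end' keys
    (hypothetical representation); returns successor indices or None."""
    rng = random.Random(seed)
    order = list(range(len(ops)))
    rng.shuffle(order)                 # visit operations in random order
    has_pred = [False] * len(ops)      # each operation: at most one predecessor
    succ = [None] * len(ops)
    for i in order:
        candidates = [j for j in range(len(ops))
                      if ops[j]['machine'] != ops[i]['machine']  # other machine
                      and ops[j]['start'] > ops[i]['end']        # starts later
                      and not has_pred[j]]                       # not yet a successor
        if candidates:                 # otherwise op i keeps no successor
            succ[i] = rng.choice(candidates)
            has_pred[succ[i]] = True
    return succ

# Tiny hypothetical pre-computed schedule (machine, start, end per operation)
ops = [
    {'machine': 0, 'start': 0,  'end': 10},
    {'machine': 1, 'start': 12, 'end': 20},
    {'machine': 0, 'start': 25, 'end': 30},
    {'machine': 1, 'start': 35, 'end': 40},
]
print(assign_successors_short(ops))
```

The long-jobs variant would differ only in restricting the candidate set to those successors minimizing the gap between the end of $op$ and the start of $suc$ before drawing randomly.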
The two generating approaches result in benchmark instances that are different in nature: (1) produces many jobs consisting of a small number of operations. In contrast, (2) produces fewer jobs but with a larger number of operations per job. This way, we can simulate complex products that need a lot of steps to be completed, as well as cases where there is a vast variety of simple jobs. The total number of generated instances is 24 (12 short, 12 long). The total number of operations goes up to 100 thousand operations to be scheduled on up to one thousand machines. All instances have a minimal makespan of $600\,000$, which is roughly a week in seconds, i.e.\ a very common planning horizon in the semi-conductor domain.
The tables in Figure \ref{fig:bench} offer some statistical data on the instances. Tables 2 and 4 show the number of jobs and the min, max and average number of operations per job. The first group of three instances in Table 2 shows that in the long-jobs instances, the max number of operations exceeds the number of machines. This means that there are jobs that are processed twice or more by the same machine. Similarly, there are jobs that go through just a subset of the machines, since the min $\#ops$ is smaller than the number of machines. The second group has 1000 machines and presents much shorter jobs on average. If we compare it with the same group of the short-jobs instances in Table 4, the long jobs have three times the operations on average (3.5 vs 10), compared to the 20-times multiplication factor of the first group (4.6 vs 97.1). The min $\#ops$ of instance 1000-10000-1 reveals that there are also jobs composed of a single operation, even in the long-jobs instances.
\begin{figure}[ht]
\centering
\includegraphics[width=1.1\textwidth]{bench_stats.PNG}
\caption{Collection of statistical elements about the two types of instances of the Known-Optima benchmark: long-jobs and short-jobs instances.}
\label{fig:bench}
\end{figure}
Table 4 shows that in the short-jobs instances there are no cases where a job is processed by all machines; in fact, the max $\#ops$ is 19. The low avg $\#ops$ results in a high number of jobs (up to 20 thousand) with 3 to 5 operations on average.
Tables 3 and 5 show the distribution of the operation lengths. They vary widely in both short- and long-jobs instances, between 2 and $600\,000$. The optimal makespan of every instance is $600\,000$, which represents roughly one week in seconds, to mirror real industrial scenarios, where it is typical to plan a whole week of schedule in one go. Given the optimal makespan of $600\,000$, this means that in the long-jobs instance 1000-10000-1 there is at least one operation which is as long as the whole optimal schedule. This benchmark is used in several recent works about industrial scheduling (\cite{teppan2018dispatching,dacol2019iclp,Teppan2020}).
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{instance.png}
\caption{Explanation of the instance representation of the Large-TA benchmark. The instance shown is the file \emph{Large-TA/tai\_j10\_m10\_1.data}. Values in the grey field represent the line numbers.}
\label{fig:instance}
\end{figure}
\section{File representation}
The instances are collected in two folders, one per benchmark. The representation of the JSSP instances is quite similar in the two benchmarks. Figure \ref{fig:instance} shows the first instance of the Large-TA benchmark, with a description of its elements. The first line contains the size of the instance, with the number of jobs $n$ and machines $m$, respectively. Then, each line contains the description of one job and its $m$ operations. For each operation, the id of the machine the operation is assigned to is specified, as well as the duration of its processing time.
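A minimal reader for this layout might look as follows. Since the referenced figure is not reproduced here, the exact whitespace layout of the hypothetical example instance is an assumption based on the textual description:

```python
def parse_large_ta(text):
    """Parse a Large-TA instance: first line gives n (jobs) and m (machines);
    each following line lists m (machine_id, duration) pairs for one job."""
    lines = text.strip().splitlines()
    n, m = map(int, lines[0].split())
    jobs = []
    for line in lines[1:1 + n]:
        vals = list(map(int, line.split()))
        jobs.append([(vals[i], vals[i + 1]) for i in range(0, 2 * m, 2)])
    return n, m, jobs

# Hypothetical 2-job, 2-machine instance in the described format
example = """2 2
0 10 1 20
1 5 0 15
"""
n, m, jobs = parse_large_ta(example)
print(jobs[0])  # → [(0, 10), (1, 20)]
```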
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{instance2.png}
\caption{Explanation of the instance representation of the Known-Optima benchmark. The instance shown is the file \emph{Known-Optima/long-js-600000-100-10000-1.data}. Values in the grey field represent the line numbers.}
\label{fig:instance2}
\end{figure}
Figure \ref{fig:instance2} shows the first instance of the Known-Optima benchmark. The definition of the jobs and operations is similar to the Large-TA instances. The only tangible difference is that, in this benchmark, the number of operations per job is not always the same; therefore, each line ends with the values -1 -1 to indicate the end of the operations' sequence.
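Under the same assumption about the layout, a sketch of parsing one Known-Optima job line, honouring the -1 -1 terminator, could read:

```python
def parse_known_optima_job(line):
    """Parse one Known-Optima job line: (machine_id, duration) pairs
    terminated by the sentinel pair -1 -1."""
    vals = list(map(int, line.split()))
    ops = []
    for i in range(0, len(vals) - 1, 2):
        if vals[i] == -1 and vals[i + 1] == -1:
            break                      # end of the operations' sequence
        ops.append((vals[i], vals[i + 1]))
    return ops

# Note the repeated machine id 3: allowed in this benchmark
print(parse_known_optima_job("3 120 0 45 3 60 -1 -1"))
# → [(3, 120), (0, 45), (3, 60)]
```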
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{folders.png}
\caption{How the program expects the instances to be organized. In particular, ``bench'' refers to the Known-Optima benchmark, and ``large'' refers to the Large-TA benchmark. The benchmark is then divided in sub-folders, for easy split of the benchmark instances for parallel computation. There is no upper limit to the number of sub-folders, but there should be at least one. By convention, they are named after consequent numbers, but the name can actually be anything. Folder ``src'' contains the java source file, while ``results'' is the destination folder of the output of the computations.}
\label{fig:folders}
\end{figure}
\section{Code for JSSP on OR-Tools and CP Optimizer}
In addition to the benchmarks, we provide some code snippets including the encodings of the JSSP on two Constraint Programming (CP) solvers, namely CP Optimizer and OR-Tools. This code has been used to test the two solvers on the proposed benchmarks \cite{dacol2019cp,dacol2019iclp}. In the \emph{cp\_solvers\_code} folder we provide the source code in .java files. We \underline{do not} include the libraries of the solvers, because at least one of them is a proprietary solver that needs to be purchased\footnote{OR-Tools is available for free at https://developers.google.com/optimization. CP Optimizer can be obtained at https://www.ibm.com/analytics/cplex-cp-optimizer}. Figure \ref{fig:folders} explains the directory hierarchy to use for organizing the benchmark input. Folder ``src'' contains the source files. For each solver, three encodings are available: naive, semi-naive and advanced. The difference between these encodings is explained in detail in \cite{dacol2019cp}. The java files are the following:
\begin{itemize}
\item Main.java: It's the launcher file;
\item MySolutionCallback.java: It's an extension of the SolutionCallback object, to gather the information needed when a solution is available;
\item SchedJobShop.java contains the code of the CP Optimizer solver for JSSP, advanced encoding;
\item NaiveJobShop.java contains the naive version of the CP Optimizer solver encoding for JSSP;
\item SemiNaive.java contains the semi-naive version of the CP Optimizer solver encoding for JSSP;
\item SchedJobShopORTools.java contains the code of the OR-Tools solver for JSSP, advanced encoding;
\item ORToolsNaive.java contains the naive version of the OR-Tools solver encoding for JSSP;
\item ORToolsSemiNaive.java contains the semi-naive version of the OR-Tools solver encoding for JSSP;
\item SchedOpenShop.java contains a tentative encoding for the open shop problem (not tested);
\item INFO.java contains metadata information;
\item InstanceConverter.java: converts the instances from the MiniZinc format to the format described in section 4;
\end{itemize}
The code was tested on a Linux system with Ubuntu 18.04.3 LTS x86\_64. However, both solvers also offer Windows-compatible libraries. The libraries have to be referenced at the command line when invoking java, using the options \textbf{-cp} and \textbf{-Djava.library.path=}, or using the tools provided by IDEs.
The java program takes 5 arguments:
\begin{enumerate}
\item First argument (mandatory) selects the solver with a numeric value between 0 and 3:
\begin{itemize}
\item 0 - ORTools Advanced encoding
\item 1 - CPOptimizer Advanced encoding
\item 2 - ORTools Naive encoding
\item 3 - CPOptimizer Naive encoding
\end{itemize}
\item Second argument (mandatory) selects the dataset:
\begin{itemize}
\item 0 - Large-TA benchmark
\item 1 - Known-Optima benchmark
\item 2 - Classic instances benchmark, used in \cite{dacol2019cp,dacol2019iclp}
\end{itemize}
\item Third argument (mandatory): input sub-folder name
\item Fourth argument (mandatory): timeout of the solver
\item Fifth argument (mandatory): number of workers (threads) available to the solver
\end{enumerate}
\bibliographystyle{splncs03}
\section{Introduction and Preliminaries}
\ \ \ Injectivity of modules is one of the principal notions in homological algebra. Over Noetherian rings, injective modules have very important properties as well as many applications since Matlis' classical work (see \cite{Matlis}). Stenstr\"{o}m introduced FP-injective modules and studied them over coherent rings as a counterpart to injective modules over Noetherian rings (see \cite{Su}). Recall that a left
$R$-module $M$ is called {\it FP-injective} if ${\rm Ext}_{R}^{1}(U, M)=0$ for any finitely presented left
$R$-module $U$. Accordingly, the FP-injective dimension of $M$, denoted by FP-${\rm id}_{R}(M)$, is defined to be the smallest $n\geq 0$ such that ${\rm Ext}_{R}^{n+1}(U, M)=0$ for all finitely presented
left $R$-modules $U$. If no such $n$ exists, one defines FP-${\rm id}_{R}(M)=\infty$. For background on
FP-injective (or absolutely pure) modules, we refer the reader to \cite{EM,BH,LM1,LD,BC,QD,PK,Su}.
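In display form, the definition of the FP-injective dimension given above reads:

```latex
\[
  \mathrm{FP}\text{-}\mathrm{id}_R(M)
  \;=\;
  \inf\bigl\{\, n \ge 0 \;:\; {\rm Ext}_R^{\,n+1}(U, M) = 0
  \ \text{for every finitely presented left $R$-module } U \,\bigr\},
\]
```

with the usual convention $\inf\emptyset=\infty$, so that $M$ is FP-injective exactly when $\mathrm{FP}$-$\mathrm{id}_R(M)=0$.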
Recall that coherent rings first appeared in Chase's paper \cite{Chase} without being mentioned by name.
The term coherent was first used by Bourbaki in \cite{Bou}. Then, $n$-coherent rings were introduced by Costa in \cite{Costa}. Let $n$ be a non-negative integer. A left $R$-module $M$ is said to be {\it $n$-presented} if there is an exact sequence
$ F_{n}\rightarrow F_{n-1}\rightarrow\dots\rightarrow F_1\rightarrow F_0\rightarrow
M\rightarrow 0$ of left $R$-modules, where each $F_i$ is finitely generated and free. A ring $R$ is called left {\it $n$-coherent} if every
$n$-presented $R$-module is $(n + 1)$-presented. Thus, for $n=1$, left $n$-coherent rings are nothing but left coherent rings (see \cite{Costa, DKM, Su}).
In 2015, Wei and Zhang \cite{JZW} introduced the notion of $fp_{n}$-injective modules as a generalization of FP-injective modules by using $n$-presented modules. They also introduced $fp_{n}$-flat modules. Namely, a left $R$-module $M$ is said to be {\it $fp_{n}$-injective} if for every exact sequence $0\rightarrow X\rightarrow Y$ with $X$ and $Y$ $n$-presented left $R$-modules, the induced sequence ${\rm Hom}(Y,M)\rightarrow {\rm Hom}(X,M)\rightarrow 0$ is exact. And a right $R$-module $N$ is said to be {\it $fp_{n}$-flat} if for every exact sequence $0\rightarrow X\rightarrow Y$ with
$X$ and $Y$ $n$-presented left $R$-modules, the induced sequence $0\rightarrow N\otimes_{R}X\rightarrow N\otimes_{R}Y$ is exact. They investigated the properties of these modules and, in particular, proved the existence of $fp_{n}I$-covers (respectively, preenvelopes) and $fp_{n}F$-covers (respectively, preenvelopes), where $fp_{n}I$ and $fp_{n}F$ denote the classes of $fp_{n}$-injective and $fp_{n}$-flat modules, respectively (see \cite[Theorem 2.5]{JZW}). On the other hand, Bravo et al. in \cite{BGH} introduced the notion of absolutely clean and level modules, which Gao and Wang in \cite{Z.W} named weak injective and weak flat modules, respectively.
In this paper, we deal with weak injective and weak flat modules and some extensions of these notions.
In 2015, Gao and Wang in \cite{Z.W}, using super finitely presented modules instead of finitely presented modules \cite{Z.G}, introduced the notion of
weak injective and weak flat modules. In general, weak injective and weak flat modules are generalizations of $fp_n$-injective and $fp_n$-flat modules, respectively. A left $R$-module $U$ is called {\it super finitely presented} \cite{Z.G} if there exists an exact sequence
$ \cdots\rightarrow F_{2}\rightarrow F_1\rightarrow F_0\rightarrow U\rightarrow 0$, where each $F_i$
is finitely generated and free. A left $R$-module $M$ is called {\it weak injective} if ${\rm Ext}_{R}^{1}(U, M)=0$ for any super finitely presented left $R$-module $U$. A right $R$-module $M$ is called {\it weak flat} if ${\rm Tor}_{1}^{R}(M, U)=0$ for any super finitely presented left $R$-module $U$.
Using weak injective and weak flat modules, several authors investigated the homological aspects of some notions over arbitrary rings, thereby generalizing some known results over coherent rings. For example, in 2018, Zhao in \cite{.NG} investigated the homological aspects of modules with finite
weak injective and weak flat dimensions. Namely, if $\mathcal{WI}_k(R)$ and $\mathcal{WF}_k(R^{op})$ are the classes of left and right modules with
weak injective dimension and weak flat dimension at most $k$, respectively, then by using the derived functors ${\rm Ext}^{\mathcal{WF}}$, ${\rm Ext}^{\mathcal{WI}}$ and ${\rm Tor}_{\mathcal{W}}$ on $\mathcal{WF}(R^{op})$-resolutions and $\mathcal{WI}(R)$-resolutions, it was proved that the classes $\mathcal{WI}_k(R)$ and $\mathcal{WF}_k(R^{op})$ are covering and preenveloping over any ring, where $\mathcal{WI}(R)$ and $\mathcal{WF}(R^{op})$ are the classes of weak injective left modules and weak flat right modules, respectively. When $k=0$ and $R$ is coherent, this recovers the fact that every module has an FP-injective cover and an FP-injective preenvelope. In recent years, the homological theory of weak injective and weak flat modules
has become an important area of research (see \cite{Z.H,Z.W,TX}).
Let $n,k$ be non-negative integers. In this paper, we introduce the concepts of $n$-weak injective left modules and $n$-weak flat right modules by using $n$-super finitely presented left modules. Every weak injective (resp. weak flat) module is $n$-weak injective (resp. $n$-weak flat). Moreover, if $n\geq 1$, then $n$-weak injective (resp. $n$-weak flat) modules are common generalizations of weak injective and $fp_n$-injective (resp. weak flat and $fp_n$-flat) modules. Under these definitions, the notions of $n$-super finitely presented, $n$-weak injective and $n$-weak flat modules are weaker than the usual notions of super finitely presented, weak injective (resp. $fp_n$-injective) and weak flat (resp. $fp_n$-flat) modules, respectively.
Also, for any $m\geq n$, every $n$-super finitely presented, $n$-weak injective (resp. $fp_n$-injective) and $n$-weak flat (resp. $fp_n$-flat) module is $m$-super finitely presented, $m$-weak injective and $m$-weak flat, respectively. However, $m$-weak injective and $m$-weak flat $R$-modules need not be $n$-weak injective (resp. $fp_n$-injective) or $n$-weak flat (resp. $fp_n$-flat) for any $n<m$ (resp. $n\leq m$) (see Example \ref{1.3a}). We also introduce and investigate the classes
$\mathcal{WI}_k^n(R)$ and $\mathcal{WF}_k^n(R^{op})$ which are larger than the classes $\mathcal{WI}_k(R)$ and $\mathcal{WF}_k(R^{op})$ (resp. $fp_{n}I$ and $fp_{n}F$) in \cite{.NG} (resp. \cite{JZW}).
The paper is organized as follows:
In Sec. 1, some fundamental notions and some preliminary results are stated.
In Sec. 2, we introduce $n$-super finitely presented, $n$-weak injective left modules and $n$-weak flat right modules. Then, by considering the special super finitely presented module associated to each $n$-super finitely presented left $R$-module, we give some characterizations of these modules.
In Sec. 3, we give some homological aspects of modules with finite $n$-weak injective and $n$-weak flat dimensions. We let $\mathcal{WI}_{k}^n(R)$ and $\mathcal{WF}_k^n(R^{op})$ denote the classes of left and right modules with
$n$-weak injective dimension and $n$-weak flat dimension at most $k$, respectively. Among other results, we prove that over an arbitrary ring, $M$ is in $\mathcal{WF}_k^n(R^{op})$ (resp. $\mathcal{WI}_{k}^n(R)$) if and only if $M^*$ is in $\mathcal{WI}_{k}^n(R)$ (resp. $\mathcal{WF}_k^n(R^{op})$), where $M^{*}={\rm Hom}_{\mathbb{Z}}(M,{\mathbb{Q}}/{\mathbb { Z}})$. Also, considering an exact sequence $0\rightarrow A\rightarrow B\rightarrow C\rightarrow 0$ of $R$-modules, we show that if $A$ and $ B$ are in $\mathcal{WI}_{ k}^{n}(R)$ then $C$ is in $\mathcal{WI}_{ k}^{n}(R)$ and if $B$ and $C$ are in $\mathcal{WF}_{ k}^{n}(R^{op})$ then $A$ is in $\mathcal{WF}_{ k}^{n}(R^{op})$.
In Sec. 4, we show that over an arbitrary ring, $\mathcal{WI}_{k}^n(R)$ and $\mathcal{WF}_k^n(R^{op})$ are injectively resolving and projectively resolving and consequently,
we prove that the classes $\mathcal{WI}_{k}^n(R)$ and $\mathcal{WF}_k^n(R^{op})$ are covering and preenveloping. Then by considering $n=0$, we deduce that the classes $\mathcal{WI}_{k}(R)$ and $\mathcal{WF}_k(R^{op})$ are covering and preenveloping (\cite[Theorems 4.4, 4.5, 4.8 and 4.9]{.NG}). Moreover, if $k=0$, then Theorems 2.5 and 2.7 of \cite{JZW} follow. Also, we investigate rings over which every left module is in $\mathcal{WI}^n(R)$ and every right module is in $\mathcal{WF}^n(R^{op})$. Finally, we show that the pair $(\mathcal{WF}_{k}^n(R^{op}), \mathcal{WF}_{k}^n(R^{op})^{\bot})$ is a hereditary perfect cotorsion pair, and if $R$ is in $\mathcal{WI}_{k}^n(R)$, it follows that the pair $(\mathcal{WI}_{k}^n(R), \mathcal{WI}_{k}^n(R)^{\bot})$
is a perfect cotorsion pair.
\section{$n$-Weak injective and $n$-weak flat modules}
\ \ \ In this section, we introduce the notions of $n$-weak injective and $n$-weak flat modules using special super finitely presented modules. Then, we show some of their general properties. We start with the following definition.
\begin{Def}\label{1.1}
Let $n$ be a non-negative integer. A left $R$-module $U$ is said to be $n$-super finitely presented if there exists an exact sequence $$ \cdots\rightarrow F_{n+1}\rightarrow F_{n}\rightarrow \cdots\rightarrow F_1\rightarrow F_0\rightarrow U\rightarrow 0$$ of projective left $R$-modules, where $F_i$ is moreover finitely generated for every $i\geq n$.
If $K_{i}:={\rm Im}(F_{i+1}\rightarrow F_{i})$, then for $i=n-1$ we call $K_{n-1}$ a special super finitely presented left $R$-module. Moreover, if ${\rm Hom}_{R}(K_{n-1},-)$ is exact with respect to a short exact sequence
$0\rightarrow A\rightarrow B\rightarrow C\rightarrow 0$ of left $R$-modules, then we say that this sequence is special superpure and $A$ is said to be superpure in $B$.
\end{Def}
\begin{Def}\label{1.2}
Let $n$ be a non-negative integer. A left $R$-module $M$ is called $n$-weak injective if ${\rm Ext}_{R}^{n+1}(U,M)=0$
for every $n$-super finitely presented left $R$-module $ U$. A right $R$-module $N$ is called $n$-weak flat
if ${\rm Tor}_{n+1}^{R}(N,U)=0$ for every $n$-super finitely presented left $R$-module $U$.
\end{Def}
\begin{rem}\label{1.3}
Let $n,m,k$ be non-negative integers. Then:
\begin{enumerate}
\item [\rm (1)]
${\rm Ext}_{R}^{n+1}(U,-)\cong{\rm Ext}_{R}^{1}(K_{n-1},-)$ and ${\rm Tor}_{n+1}^{R}(-,U)\cong {\rm Tor}_{1}^{R}(-,K_{n-1})$, where $U$ is an $n$-super finitely presented left $R$-module with a special super finitely presented module $K_{n-1}$. If $n=0$, then $n$-weak injective left $R$-modules, $n$-super finitely presented left $R$-modules and $n$-weak flat right $R$-modules are simply weak injective left $R$-modules, super finitely presented left $R$-modules and weak flat right $R$-modules, respectively.
\item [\rm (2)]
Every super finitely presented left $R$-module is $n$-super finitely presented.
\item [\rm (3)]
Every $n$-super finitely presented left $R$-module is $m$-super finitely presented for any $m\geq n$, but not conversely (see Examples \ref{1.1a} and \ref{1.3a}(1)). If we denote by ${\rm Pres}_{n}^{\infty}$ the class of all $n$-super finitely presented
left $R$-modules, then:
$${\rm Pres}_{n}^{\infty}\subseteq{\rm Pres}_{n+1}^{\infty}\subseteq {\rm Pres}_{n+2}^{\infty}\subseteq\cdots$$
If $n=0$, then ${\rm Pres}_{0}^{\infty}$ is simply the class of all super finitely presented left $R$-modules. We denote this class simply by ${\rm Pres}^{\infty}$.
\item [\rm (4)]
Every $n$-weak injective left (resp. $n$-weak flat right) $R$-module is $m$-weak injective (resp. $m$-weak flat) for any $n\leq m$, but not conversely (see Example \ref{1.3a}(2)). Indeed, if $U$ is an $(n+1)$-super finitely presented left $R$-module, then there exists an exact sequence
$$ \cdots\rightarrow F_{2}\rightarrow F_1\rightarrow F_0\rightarrow U\rightarrow 0,$$ where $K_{n}:={\rm Im}(F_{n+1}\rightarrow F_{n})$ is a special super finitely presented left module. Also, we have the short exact sequence $0\rightarrow K_{0}\rightarrow F_0\rightarrow U\rightarrow 0$, where $K_0$ is an $n$-super finitely presented left module. So, if $M$ is an $n$-weak injective left $R$-module, then ${\rm Ext}_{R}^{n+1}(K_0, M)=0$. On the other hand, ${\rm Ext}_{R}^{n+2}(U,M)\cong{\rm Ext}_{R}^{n+1}(K_{0}, M)=0$, and hence $M$ is $(n+1)$-weak injective. Similarly, every $n$-weak flat right $R$-module is $(n+1)$-weak flat.
\item [\rm (5)]
If $\mathcal{I}$, $\mathcal{FP}$, $\mathcal{WI}(R)$, $\mathcal{WI}^{n}(R)$, $\mathcal{F}$, $\mathcal{WF}(R^{op})$ and $\mathcal{WF}^n(R^{op})$ are the classes of injective, FP-injective, weak injective, $n$-weak injective left $R$-modules and flat, weak flat and $n$-weak flat right $R$-modules, respectively, then
$$\mathcal{I}\subseteq\mathcal{FP}\subseteq\mathcal{WI}(R)\subseteq\mathcal{WI}^{n}(R)\subseteq\mathcal{WI}^{n+1}(R)\subseteq\mathcal{WI}^{n+2}(R)\subseteq\cdots$$
and
$$\mathcal{F}\subseteq\mathcal{WF}(R^{op})\subseteq\mathcal{WF}^{n}(R^{op})\subseteq\mathcal{WF}^{n+1}(R^{op})\subseteq\mathcal{WF}^{n+2}(R^{op})\subseteq\cdots.$$
\item [\rm (6)] Every $fp_n$-injective left $R$-module is $n$-weak injective and every $fp_n$-flat right $R$-module is $n$-weak flat. Indeed, for any
$n$-super finitely presented left module $U$, there exists a short exact sequence $0\rightarrow K_{n}\rightarrow F_{n}\rightarrow K_{n-1}\rightarrow 0$, where $K_{n-1}$ is a special super finitely presented left module. So if $M$ is an $fp_n$-injective left $R$-module, then ${\rm Hom}(F_n,M)\rightarrow {\rm Hom}(K_n,M)\rightarrow 0$ is exact, since $K_{n}$ and $F_{n}$ are super finitely presented and consequently $n$-presented. Therefore, ${\rm Ext}_{R}^{1}(K_{n-1}, M)=0$, and thus by (1), $M$ is $n$-weak injective. Similarly, every $fp_n$-flat right $R$-module is $n$-weak flat, but not conversely (see Example \ref{1.3a}(3)).
\end{enumerate}
\end{rem}
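The isomorphisms in part (1) of the remark above are obtained by repeated dimension shifting along the syzygies $K_{i}$ of Definition \ref{1.1}; writing $K_{-1}:=U$, each short exact sequence $0\rightarrow K_{i}\rightarrow F_{i}\rightarrow K_{i-1}\rightarrow 0$ with $F_{i}$ projective yields one step in the chains
$${\rm Ext}_{R}^{n+1}(U,-)\cong{\rm Ext}_{R}^{n}(K_{0},-)\cong\cdots\cong{\rm Ext}_{R}^{1}(K_{n-1},-)$$
and
$${\rm Tor}_{n+1}^{R}(-,U)\cong{\rm Tor}_{n}^{R}(-,K_{0})\cong\cdots\cong{\rm Tor}_{1}^{R}(-,K_{n-1}).$$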
Let $A$ be an $R$-module. The
finitely presented dimension of $A$, denoted by ${\rm f.p.dim}_{R}(A)$, is defined as
${\rm inf}\{n \ \mid \ \text{there exists an exact sequence } F_{n+1}\rightarrow F_{n}\rightarrow\cdots \rightarrow F_1\rightarrow F_0\rightarrow A\rightarrow 0 \text{ of } R\text{-modules, where each } F_{i} \text{ is projective, and } F_{n} \text{ and } F_{n+1} \text{ are finitely generated}\}$.
Accordingly, ${\rm f.p.dim}(R) = {\rm sup}\{{\rm f.p.dim}_{R}(A) \ \mid \ A \text{ is a finitely generated } R\text{-module}\}$.
We use ${\rm w.gl.dim}(R)$ and ${\rm gl.dim}(R)$ to denote the weak global dimension and global dimension of a ring $R$ respectively.
Also, a ring $R$ is called an $(a,b,c)$-ring, if ${\rm w.gl.dim}(R)=a$, ${\rm gl.dim}(R)=b$ and ${\rm f.p.dim}(R)=c$ (see \cite{HKN}).
\begin{ex}\label{1.1a}
Let $R=k[x_1, x_2]\oplus R^{'}$, where $k[x_1,x_2]$ is the ring of polynomials in $2$ indeterminates over a field $k$, and $R^{'}$ is a valuation ring with global
dimension $2$. Then by \cite[Proposition 3.10]{HKN}, $R$ is a coherent $(2,2,3)$-ring. So ${\rm f.p.dim}(R)=3$, and hence there exists a finitely generated $R$-module $U$ such that ${\rm f.p.dim}_{R}(U)=3$. Thus, there exists an exact sequence
$F_{4}\rightarrow F_{3}\rightarrow F_2\rightarrow F_1\rightarrow F_0\rightarrow U\rightarrow 0,$
where $F_3$ and $F_{4}$ are finitely generated and projective modules. Also, $K_2:={\rm Im}( F_3\rightarrow F_2)$ is special super finitely presented, since $R$ is coherent. So by Definition \ref{1.1}, $U$ is $3$-super finitely presented. But $U$ is not $2$-super finitely presented; otherwise we would have ${\rm f.p.dim}_{R}(U)\leq 2$, a contradiction.
\end{ex}
\begin{ex}\label{1.3a}
Let $R=k[x_1, x_2]\oplus S$, where $k[x_1,x_2]$ is a ring of polynomials in $2$ indeterminates over a field $k$, and $S$ is a
non-Noetherian hereditary von Neumann regular ring (for example $S$ is a ring of functions of $X$ into $k$ continuous with respect to the discrete topology on $k$, where $k$ is a field and $X$ is a totally disconnected compact Hausdorff space whose associated Boolean ring is hereditary, see examples of \cite{GMB}).
Then by \cite[Proposition 3.8]{HKN}, $R$ is a coherent $(2,2,2)$-ring. Hence,
\begin{enumerate}
\item [\rm (1)]
Since ${\rm f.p.dim}(R)=2$, by \cite[Proposition 1.5]{HKN}, for a finitely generated $R$-module $U$ either ${\rm f.p.dim}_{R}(U)=2$ or ${\rm f.p.dim}_{R}(U)=0$. If ${\rm f.p.dim}_{R}(U)= 0$, then $U$ is finitely presented. If ${\rm f.p.dim}_{R}(U)=2$, then there exists an exact sequence
$F_{3}\rightarrow F_{2}\rightarrow F_1\rightarrow F_0\rightarrow U\rightarrow 0,$
where $F_2$ and $F_{3}$ are finitely generated and projective $R$-modules. Also, $K_1:={\rm Im}( F_2\rightarrow F_1)$ is a special super finitely presented module, since $R$ is coherent. So by Definition \ref{1.1}, $U$ is $2$-super finitely presented. But $U$ is neither $1$-super finitely presented nor $0$-super finitely presented; otherwise ${\rm f.p.dim}_{R}(U)\leq 1$, and hence by \cite[Proposition 1.5]{HKN}, ${\rm f.p.dim}_{R}(U)=0$, a contradiction.
\item [\rm (2)]
It is clear that every $R$-module is $2$-weak injective, since ${\rm gl.dim}(R)=2$. But not every $R$-module is $0$-weak injective. Indeed, if every module $M$ is $0$-weak injective, then ${\rm Ext}_{R}^{1}(U^{'}, M)=0$ for each $0$-super finitely presented $R$-module $U^{'}$. So each $0$-super finitely presented module is projective. But since
$R$ is coherent, any finitely presented module is super finitely presented, and so any finitely presented module is projective. Hence $R$ is $1$-regular, and then by \cite[Theorem 3.9]{.TZ}, every $R$-module is flat. So ${\rm w.gl.dim}(R)=0$, a contradiction. Similarly, since ${\rm w.gl.dim}(R)=2$, it follows that every $R$-module is $2$-weak flat, but not every $R$-module is $0$-weak flat.
\item [\rm (3)]
Not every $R$-module is ${\rm fp}_{2}$-injective (resp. ${\rm fp}_{2}$-flat). Indeed, suppose otherwise. Since $R$ is coherent, any finitely presented $R$-module $A$ is super finitely presented. Then, there is an exact sequence $0\rightarrow L_0\rightarrow F_0\rightarrow A\rightarrow 0,$ where $F_0$ and $L_0$ are $n$-presented $R$-modules for every $n$, and so in particular $2$-presented.
So, if $M$ is an ${\rm fp}_{2}$-injective (resp. ${\rm fp}_{2}$-flat) $R$-module, then ${\rm Ext}_{R}^{1}(A, M)=0$ (resp. ${\rm Tor}_{1}^{R} (M, A)=0$) and consequently, $A$ is projective (resp. flat). Hence, $R$ is $1$-regular and thus ${\rm w.gl.dim}(R)=0$, a contradiction.
\end{enumerate}
\end{ex}
We denote by $R$-${\rm Mod}$ the category of left $R$-modules and by ${\rm Mod}$-$R$ that of right $R$-modules.
\begin{prop}\label{1.4}
Let $n$ be a non-negative integer. Then, the following assertions hold:
\begin{enumerate}
\item [\rm (1)]
For every $M\in{\rm Mod}$-$R$, $M$ is $n$-weak flat if and only if $M^{*}$ is $n$-weak injective.
\item [\rm (2)]
For every $M\in R$-${\rm Mod}$, $M$ is $n$-weak injective if and only if $M^{*}$ is $n$-weak flat.
\end{enumerate}
\end{prop}
\begin{proof}
(1) Let $U$ be an $n$-super finitely presented left $R$-module with special super finitely presented module $K_{n-1}$. Then, ${\rm Tor}_{1}^{R}(M,K_{n-1})^{*}\cong{\rm Ext}_{R}^{1}(K_{n-1}, M^*)$, since by \cite[Theorem 2.76]{Rot2},
${\rm Hom}_{\mathbb{Z}}(M\otimes_{R}K_{n-1},{\mathbb{Q}}/{\mathbb{Z}})\cong{\rm Hom}_{R}(K_{n-1},{\rm Hom}_{\mathbb{Z}}(M,{\mathbb{Q}}/{\mathbb{Z}}))$. The result follows from Remark \ref{1.3}(1), since ${\rm Ext}_{R}^{n+1}(U,M^*)\cong{\rm Ext}_{R}^{1}(K_{n-1}, M^*)\cong{\rm Tor}_{1}^{R}(M,K_{n-1})^{*}\cong{\rm Tor}_{n+1}^{R}(M,U)^{*}$.
(2) Let $U$ be an $n$-super finitely presented left $R$-module with special super finitely presented module $K_{n-1}$. Then, ${\rm Ext}_{R}^{1}(K_{n-1}, M)^{*}\cong{\rm Tor}_{1}^{R}(M^*,K_{n-1})$, since by \cite[Lemma 3.55]{Rot2}, we have that $M^{*}\otimes_{R}K_{n-1}\cong{\rm Hom}_{R}(K_{n-1}, M)^*$. The result follows from Remark \ref{1.3}(1), since ${\rm Tor}_{n+1}^{R}(M^*,U)\cong{\rm Tor}_{1}^{R}(M^*,K_{n-1})\cong{\rm Ext}_{R}^{1}(K_{n-1}, M)^{*}\cong{\rm Ext}_{R}^{n+1}(U, M)^{*}.$ \end{proof}
As a direct consequence of Proposition \ref{1.4} we obtain the following corollary.
\begin{cor}\label{1.4a}
Let $n$ be a non-negative integer. Then, the following assertions hold:
\begin{enumerate}
\item [\rm (1)]
For every $M\in R$-${\rm Mod}$, $M$ is $n$-weak injective if and only if $M^{**}$ is $n$-weak injective.
\item [\rm (2)]
For every $M\in{\rm Mod}$-$R$, $M$ is $n$-weak flat if and only if $M^{**}$ is $n$-weak flat.
\end{enumerate}
\end{cor}
\begin{prop}\label{1.10}
Let $M$ be a left $R$-module. Then, the following assertions are equivalent:
\begin{enumerate}
\item [\rm (1)]
$M$ is $n$-weak injective.
\item [\rm (2)]
Every short exact sequence $0\rightarrow M\rightarrow B\rightarrow C\rightarrow 0$ of left $R$-modules is special superpure.
\item [\rm (3)]
$M$ is special superpure in any left $R$-module containing it.
\item [\rm (4)]
$M$ is special superpure in any injective left $R$-module containing it.
\item [\rm (5)]
$M$ is special superpure in $E(M)$.
\end{enumerate}
\end{prop}
\begin{proof}
$(1)\Rightarrow (2)$ Let $U$ be an $n$-super finitely presented left $R$-module with special super finitely presented module $K_{n-1}$. Then by Remark \ref{1.3}(1), ${\rm Ext}_{R}^{n+1}(U,M)\cong{\rm Ext}_{R}^{1}(K_{n-1},M)=0$. Consequently, ${\rm Hom}_{R}(K_{n-1}, -)$ is exact with respect to any short exact sequence $0\rightarrow M\rightarrow B\rightarrow C\rightarrow 0$.
$(2)\Rightarrow (3)\Rightarrow (4) \Rightarrow (5)$ Clear.
$(5)\Rightarrow (1)$ The short exact sequence $0\rightarrow M\rightarrow E(M)\rightarrow {E(M)}/{M}\rightarrow 0$ is special superpure. Therefore, if $U$ is an $n$-super finitely presented left $R$-module with special super finitely presented module $K_{n-1}$, then by assumption and Remark \ref{1.3}(1),
$0={\rm Ext}_{R}^{1}(K_{n-1},M)\cong{\rm Ext}_{R}^{n+1}(U,M)$ and hence $M$ is $n$-weak injective.
\end{proof}
\begin{prop}\label{1.5}
Let $n$ be a non-negative integer. Then, the following assertions hold:
\begin{enumerate}
\item [\rm (1)] Let $\{M_{i}\}_{i\in I}$ be a family of left $R$-modules. Then
$\prod_{i\in I} M_{i}$ is $n$-weak injective if and only if each $M_i$ is $n$-weak injective.
\item [\rm (2)] Let $\{M_{i}\}_{i\in I}$ be a family of right $R$-modules. Then
$\bigoplus_{i\in I} M_{i}$ is $n$-weak flat if and only if each $M_i$ is $n$-weak flat.
\end{enumerate}
\end{prop}
\begin{proof} Both assertions follow easily from \cite[Propositions 7.6 and 7.22]{Rot2}.
\end{proof}
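In terms of Remark \ref{1.3}(1), the cited results yield, for every special super finitely presented left module $K_{n-1}$, the natural isomorphisms
$${\rm Ext}_{R}^{1}\Big(K_{n-1},\prod_{i\in I}M_{i}\Big)\cong\prod_{i\in I}{\rm Ext}_{R}^{1}(K_{n-1},M_{i})\quad\text{and}\quad {\rm Tor}_{1}^{R}\Big(\bigoplus_{i\in I}M_{i},K_{n-1}\Big)\cong\bigoplus_{i\in I}{\rm Tor}_{1}^{R}(M_{i},K_{n-1}),$$
so the left-hand side vanishes if and only if every term on the right-hand side vanishes.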
\begin{prop}\label{1.5a}
Let $n$ be a non-negative integer and let $\{M_{i}\}_{i\in I}$ be a family of left $R$-modules. Then
$\bigoplus_{i\in I} M_{i}$ is $n$-weak injective if and only if each $M_i$ is $n$-weak injective.
\end{prop}
\begin{proof}
If $U$ is an $n$-super finitely presented left $R$-module with special super finitely presented module $K_{n-1}$, then there exists a short exact sequence $0\rightarrow K_{n}\rightarrow F_{n}\rightarrow K_{n-1}\rightarrow 0$ of left $R$-modules, where $F_{n}$ is finitely generated and projective and $K_{n}$ is super finitely presented. Thus we have the following commutative diagram with exact rows:
$$\xymatrix{
{\rm Hom}_{R}(F_{n}, \bigoplus_{i\in I}M_{i})\ar[r]\ar[d]^{\cong}&{\rm Hom}_{R}(K_{n}, \bigoplus_{i\in I} M_{i})\ar[r]\ar[d]^{\cong}&{\rm Ext}_{R}^{1}(K_{n-1}, \bigoplus_{i\in I} M_{i}) \ar[r]\ar[d]&0 \\
\bigoplus_{i\in I}{\rm Hom}_{R}(F_{n}, M_{i})\ar[r]&\bigoplus_{i\in I}{\rm Hom}_{R}(K_{n}, M_{i})\ar[r]&\bigoplus_{i\in I}{\rm Ext}_{R}^{1}(K_{n-1}, M_{i})\ar[r]& 0 \\
}$$
So ${\rm Ext}_{R}^{1}(K_{n-1}, \bigoplus_{i\in I}M_{i})\cong\bigoplus_{i\in I}{\rm Ext}_{R}^{1}(K_{n-1},M_{i})$ and hence by Remark \ref{1.3}(1), $\bigoplus_{i\in I}M_{i}$ is $n$-weak injective if and only if every $M_{i}$ is $n$-weak injective.
\end{proof}
\begin{prop}\label{1.6}
Let $n$ be a non-negative integer. Then, the following assertions hold:
\begin{enumerate}
\item[\rm (1)] A left $R$-module $M$ is $n$-weak injective if and only if every pure submodule and pure
epimorphic image of $M$ is $n$-weak injective.
\item [\rm (2)] A right $R$-module $M$ is $n$-weak flat if and only if every pure submodule and pure
epimorphic image of $M$ is $n$-weak flat.
\end{enumerate}
\end{prop}
\begin{proof}
(1) Let $M$ be an $n$-weak injective left $R$-module and $N$ a pure submodule of $M$.
Then there exists a pure exact sequence $0\rightarrow N\rightarrow M\rightarrow{M}/{N}\rightarrow 0$ which
gives rise to a split exact sequence $0\rightarrow({M}/{N})^{*}\rightarrow M^*\rightarrow N^*\rightarrow 0$ of right $R$-modules. By
Proposition \ref{1.4}(2), $M^*$
is $n$-weak flat. Now by Proposition \ref{1.5}(2), $M^*$ is $n$-weak flat if and only if $N^*$
and $({M}/{N})^*$ are
$n$-weak flat. Hence by Proposition \ref{1.4}(2), we deduce that $M$ is $n$-weak injective if and only if $N$ and ${M}/{N}$ are $n$-weak injective. Similarly, we prove (2) using Propositions \ref{1.4}(1) and \ref{1.5}(1).
\end{proof}
Now, from the previous results in this section, we obtain the following.
\begin{thm}\label{1.11}
Any direct product of $n$-weak flat right $R$-modules is $n$-weak flat.
\end{thm}
\begin{proof}
Let $\{M_{i}\}_{i\in I}$ be a family of $n$-weak flat right $R$-modules. By \cite[Proposition 2.4.5]{JX}, $M_{i}$ is pure in $M_{i}^{**}$, hence $\prod_{i\in I}M_{i}$ is pure in $\prod_{i\in I}M_{i}^{**}$. Thus, using Proposition \ref{1.6}(2), it suffices to prove that $\prod_{i\in I}M_{i}^{**}$ is $n$-weak flat. Using \cite[Theorem 2.4]{Rot1}, we have $\prod_{i\in I}M_{i}^{**} \cong (\bigoplus_{i\in I} M_{i}^{*})^{*}$.
By Proposition \ref{1.4}(1), each $M_{i}^{*}$ is $n$-weak injective, and so $\bigoplus_{i\in I} M_{i}^{*}$ is $n$-weak injective by Proposition \ref{1.5a}. Thus $(\bigoplus_{i\in I} M_{i}^{*})^{*}$ is $n$-weak flat by Proposition \ref{1.4}(2), and so is $\prod_{i\in I}M_{i}^{**}$.
\end{proof}
\begin{prop}\label{1.7}
Let $n$ be a non-negative integer. Then,
\begin{enumerate}
\item [\rm (1)]
If a left $R$-module $M$ is $n$-weak injective, then ${\rm Ext}_{R}^{i}(K_{n-1}, M)=0$ for every special super finitely presented left $R$-module $K_{n-1}$ and $i\geq 1$.
\item [\rm (2)]
If a right $R$-module $M$ is $n$-weak flat, then ${\rm Tor}_{i}^{R}(M, K_{n-1})=0$ for every special super finitely presented left $R$-module $K_{n-1}$ and $i\geq 1$.
\end{enumerate}
\end{prop}
\begin{proof}
(1) Let $U$ be an $n$-super finitely presented left $R$-module with special super finitely presented module $K_{n-1}$. Then by Remark \ref{1.3}(3), $U$ is $(n+1)$-super finitely presented with special super finitely presented module $K_{n}$. Also, by Remark \ref{1.3}(4), every $n$-weak injective left $R$-module is $(n+1)$-weak injective, and hence
${\rm Ext}_{R}^{n+2}(U, M)\cong{\rm Ext}_{R}^{1}(K_{n}, M)=0$. There exists an exact sequence $0\rightarrow K_{n}\rightarrow F_{n}\rightarrow K_{n-1}\rightarrow 0$ of left $R$-modules, where $F_n$ is finitely generated and projective. Thus, it follows that ${\rm Ext}_{R}^{1}(K_{n}, M)\cong {\rm Ext}_{R}^{2}(K_{n-1}, M)$ and so ${\rm Ext}_{R}^{2}(K_{n-1}, M)=0$. Repeating this process, we conclude that ${\rm Ext}_{R}^{i}(K_{n-1}, M)=0$ for every $i\geq 1$.
(2) Assume that $U$ is an $n$-super finitely presented left $R$-module with special super finitely presented module $K_{n-1}$. By Remark \ref{1.3}(4), every $n$-weak flat right $R$-module is $(n+1)$-weak flat, and hence ${\rm Tor}_{n+2}^{R}(M,U)\cong{\rm Tor}_{1}^{R}(M, K_{n})=0$. There exists an exact sequence $0\rightarrow K_{n}\rightarrow F_{n}\rightarrow K_{n-1}\rightarrow 0$ of left $R$-modules, where $F_n$ is finitely generated and projective. We deduce that ${\rm Tor}_{1}^{R}(M, K_{n})\cong {\rm Tor}_{2}^{R}(M, K_{n-1})$, then ${\rm Tor}_{2}^{R}(M, K_{n-1})=0$, and so ${\rm Tor}_{i}^{R}(M, K_{n-1})=0$ for any $i\geq 1$.
\end{proof}
\begin{cor}\label{1.7a}{\rm (\cite[Proposition 3.1]{Z.W})}
\begin{enumerate}
\item [\rm (1)]
If a left $R$-module $M$ is weak injective, then ${\rm Ext}_{R}^{i}(F, M)=0$ for every super finitely presented left $R$-module $F$ and $i\geq 1$.
\item [\rm (2)]
If a right $R$-module $M$ is weak flat, then ${\rm Tor}_{i}^{R}(M, F)=0$ for every super finitely presented left $R$-module $F$ and $i\geq 1$.
\end{enumerate}
\end{cor}
\section{Special super finitely presented dimension of modules and rings}
\ \ \ In this section, we give some homological aspects of modules with finite $n$-weak
injective and $n$-weak flat dimensions. Let $n$, $k$ be non-negative integers. We denote by $\mathcal{WI}_{k}^n(R)$ and $\mathcal{WF}_k^n(R^{op})$ the classes of left and right modules with
$n$-weak injective dimension and $n$-weak flat dimension at most $k$, respectively. If $k=0$, then $\mathcal{WI}_{0}^n(R)=\mathcal{WI}^n(R)$ and $\mathcal{WF}_0^n(R^{op})=\mathcal{WF}^n(R^{op})$; see Remark \ref{1.3}(5).
\begin{Def}\label{2.1}
The $n$-weak injective dimension of a left module $M$ and $n$-weak flat dimension of a right module $N$
are defined by
$$n\text{-}{\rm wid}_{R}(M)=\inf\{k \ \mid \ {\rm Ext}_{R}^{k+1}(K_{n-1}, M)=0 \ \text{for every special super finitely presented left module } K_{n-1}\}$$
and
$$n\text{-}{\rm wfd}_{R}(N)=\inf\{k \ \mid \ {\rm Tor}^{R}_{k+1}(N, K_{n-1})=0 \ \text{for every special super finitely presented left module } K_{n-1}\}.$$
\end{Def}
If $n$-${\rm wid}_{R}(M)=0$ (resp. $n$-${\rm wfd}_{R}(N)=0$), then by Remark \ref{1.3}(1), $M$ is $n$-weak injective (resp. $N$ is $n$-weak flat).
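By the dimension shifting noted in Remark \ref{1.3}(1), these dimensions admit an equivalent description in terms of the $n$-super finitely presented modules themselves; for instance,
$$n\text{-}{\rm wid}_{R}(M)=\inf\{k \ \mid \ {\rm Ext}_{R}^{n+k+1}(U, M)=0 \ \text{for every } n\text{-super finitely presented left module } U\},$$
since ${\rm Ext}_{R}^{n+k+1}(U, M)\cong{\rm Ext}_{R}^{k+1}(K_{n-1}, M)$ whenever $K_{n-1}$ is the special super finitely presented module associated to $U$.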
\begin{prop}\label{2.2}
Let $M$ be a left $R$-module. Then, the following assertions are equivalent:
\begin{enumerate}
\item [\rm (1)]
$n$-${\rm wid}_{R}(M)\leq k$.
\item [\rm (2)]
${\rm Ext}_{R}^{k+i}(K_{n-1}, M)=0$ for any special super finitely presented left $R$-module $K_{n-1}$ and any $i\geq 1$.
\item [\rm (3)]
${\rm Ext}_{R}^{k+1}(K_{n-1}, M)=0$ for any special super finitely presented left $R$-module $K_{n-1}$.
\item [\rm (4)]
If a sequence of the form $0\rightarrow M\rightarrow E_0 \rightarrow E_1 \rightarrow\cdots\rightarrow E_k\rightarrow0$ is exact, where $E_0, E_1,\ldots, E_{k-1}$ are
$n$-weak injective modules, then $E_k$ is also $n$-weak injective.
\item [\rm (5)]
There exists an exact sequence $0\rightarrow M\rightarrow E_0 \rightarrow E_1 \rightarrow\cdots\rightarrow E_k\rightarrow0$, where each $E_i$ is $n$-weak injective.
\end{enumerate}
\end{prop}
\begin{proof}
$(2)\Rightarrow (3)\Rightarrow (1)$ and $(4)\Rightarrow (5)$ are obvious.
$(1)\Rightarrow (2)$
We proceed by induction on $k$. If $k=0$, then the result follows by Proposition \ref{1.7}(1). Suppose now the following assertion: if
$n$-${\rm wid}_{R}(M)\leq k$,
then
${\rm Ext}_{R}^{k+i}(K_{n-1}, M)=0$ for any special super finitely presented left $R$-module $K_{n-1}$ and any $i\geq 1$. Let us prove the following assertion: if
$n$-${\rm wid}_{R}(M)\leq k+1$, then
${\rm Ext}_{R}^{k+1+i}(K_{n-1}, M)=0$ for any special super finitely presented left $R$-module $K_{n-1}$ and any $i\geq 1$. Suppose that $n$-${\rm wid}_{R}(M)\leq k+1$.
If $n$-${\rm wid}_{R}(M)\leq k$,
then by the induction hypothesis we have
${\rm Ext}_{R}^{k+i}(K_{n-1}, M)=0$ for any special super finitely presented left $R$-module $K_{n-1}$ and any $i\geq 1$, and so ${\rm Ext}_{R}^{k+1+i}(K_{n-1}, M)=0$ for any special super finitely presented left $R$-module $K_{n-1}$ and any $i\geq 1$.
Now, if $n$-${\rm wid}_{R}(M)= k+1$, then ${\rm Ext}_{R}^{k+2}(K_{n-1}, M)=0$ for any special super finitely presented left $R$-module $K_{n-1}$. By induction on $i$, we prove that ${\rm Ext}_{R}^{k+1+i}(K_{n-1}, M)=0$ for any special super finitely presented left $R$-module $K_{n-1}$ and $i\geq1$. The assertion is clearly true for $i=1$. Suppose now that ${\rm Ext}_{R}^{k+1+i}(K_{n-1}, M)=0$ for any special super finitely presented left $R$-module $K_{n-1}$ and let us prove that ${\rm Ext}_{R}^{k+2+i}(K_{n-1}, M)=0$ for any special super finitely presented left $R$-module $K_{n-1}$. Let $U$ be an $n$-super finitely presented left $R$-module with special super finitely presented module $K_{n-1}$.
Then, there exists a short exact sequence $0\to K_n\to F_n\to K_{n-1}\to 0$ with $F_n$ projective, so ${\rm Ext}_{R}^{k+i+1}(K_{n}, M)\cong{\rm Ext}_{R}^{k+i+2}(K_{n-1}, M)$. But $K_0$ is $n$-super finitely presented with special super finitely presented module $K_n$, hence by the induction hypothesis ${\rm Ext}_{R}^{k+i+1}(K_{n}, M)=0$ and consequently ${\rm Ext}_{R}^{k+i+2}(K_{n-1}, M)=0$, as desired.
$(2)\Rightarrow (4)$ Consider the short exact sequence $0\to M\to E_0\to E_0/M\to 0$. Then for any special super finitely presented left $R$-module $K_{n-1}$, we have $$\cdots\to {\rm Ext}_{R}^{k+1}(K_{n-1}, E_0) \to {\rm Ext}_{R}^{k+1}(K_{n-1}, E_0/M)\to {\rm Ext}_{R}^{k+2}(K_{n-1}, M)\to \cdots$$
But by assumption ${\rm Ext}_{R}^{k+2}(K_{n-1}, M)=0$, and by Proposition \ref{1.7}(1), ${\rm Ext}_{R}^{k+1}(K_{n-1}, E_0)=0$; so ${\rm Ext}_{R}^{k+1}(K_{n-1}, E_0/M)=0$. Step by step, we conclude that ${\rm Ext}_{R}^{1}(K_{n-1}, E_k)=0$, that is, $E_k$ is $n$-weak injective.
$(5)\Rightarrow (1)$ Follows from Proposition \ref{1.7}(1).
\end{proof}
\begin{prop}\label{2.3}
Let $M$ be a right $R$-module. Then, the following assertions are equivalent:
\begin{enumerate}
\item [\rm (1)]
$n$-${\rm wfd}_{R}(M)\leq k$.
\item [\rm (2)]
${\rm Tor}_{k+i}^{R}(M, K_{n-1})=0$ for any special super finitely presented left $R$-module $K_{n-1}$ and any $i\geq 1$.
\item [\rm (3)]
${\rm Tor}_{k+1}^{R}(M, K_{n-1})=0$ for any special super finitely presented left $R$-module $K_{n-1}$.
\item [\rm (4)]
If a sequence of the form $0\rightarrow F_k\rightarrow F_{k-1} \rightarrow\cdots\rightarrow F_1\rightarrow F_0\rightarrow M\rightarrow0$ is exact, where $F_0, F_1,\ldots, F_{k-1}$ are
$n$-weak flat modules, then $F_k$ is also $n$-weak flat.
\item [\rm (5)]
There exists an exact sequence $0\rightarrow F_k\rightarrow F_{k-1} \rightarrow\cdots\rightarrow F_1\rightarrow F_0\rightarrow M\rightarrow0$, where each $F_i$ is $n$-weak flat.
\end{enumerate}
\end{prop}
\begin{proof}
The proof is similar to the proof of Proposition \ref{2.2}, using Proposition \ref{1.7}(2).
\end{proof}
\begin{cor}\label{2.3a}
Let $0\rightarrow A\rightarrow B\rightarrow C\rightarrow 0$ be a short exact sequence of $R$-modules. Then, the following assertions hold:
\begin{enumerate}
\item [\rm (1)]
If $A$ and $ B$ are in $\mathcal{WI}_{ k}^{n}(R)$, then $C$ is in $\mathcal{WI}_{ k}^{n}(R)$.
\item [\rm (2)]
If $B$ and $C$ are in $\mathcal{WF}_{ k}^{n}(R^{op})$, then $A$ is in $\mathcal{WF}_{ k}^{n}(R^{op})$.
\end{enumerate}
\end{cor}
\begin{proof}
(1) Let $U$ be an $n$-super finitely presented left $R$-module with special super finitely presented module $K_{n-1}$. If $A$ and $ B$ are in $\mathcal{WI}_{ k}^{n}(R)$, then by Proposition \ref{2.2}, the short exact sequence $0\rightarrow A\rightarrow B\rightarrow C\rightarrow 0$ induces the following exact sequence:
$$0={\rm Ext}_{R}^{k+1}(K_{n-1}, B)\rightarrow {\rm Ext}_{R}^{k+1}(K_{n-1}, C) \rightarrow {\rm Ext}_{R}^{k+2}(K_{n-1}, A)=0.$$
Hence ${\rm Ext}_{R}^{k+1}(K_{n-1}, C)=0$, and so $C$ is in $\mathcal{WI}_{ k}^{n}(R)$.
(2) Assume that $0\rightarrow A\rightarrow B\rightarrow C\rightarrow 0$ is a short exact sequence of right $R$-modules. Then, by hypothesis and Proposition \ref{2.3}, we have:
$$0={\rm Tor}_{k+2}^{R}(C, K_{n-1})\rightarrow {\rm Tor}_{k+1}^{R}(A, K_{n-1}) \rightarrow {\rm Tor}_{k+1}^{R}(B, K_{n-1})=0,$$
where $K_{n-1}$ is a special super finitely presented module associated to an $n$-super finitely presented left $R$-module $U$. Consequently, ${\rm Tor}_{k+1}^{R}(A, K_{n-1})=0$ and so $A$ is in $\mathcal{WF}_{ k}^{n}(R^{op})$.
\end{proof}
The following proposition is a special case of \cite[Proposition 3.5]{Z.W}.
\begin{prop}\label{2.3l}
Let $M$ be an $R$-module. Then, the following assertions are equivalent:
\begin{enumerate}
\item [\rm (1)]
${\rm pd}_{R}(K_{n-1})\leq k$ for all special super finitely presented left $R$-modules $K_{n-1}$.
\item [\rm (2)]
${\rm fd}_{R}(K_{n-1})\leq k$ for all special super finitely presented left $R$-modules $K_{n-1}$.
\item [\rm (3)]
${\rm Ext}_{R}^{k+1}(K_{n-1}, M)=0$ for any special super finitely presented left $R$-module $K_{n-1}$ and any left $R$-module $M$.
\item [\rm (4)]
${\rm Tor}_{k+1}^{R}(M, K_{n-1})=0$ for any special super finitely presented left $R$-module $K_{n-1}$ and any right $R$-module $M$.
\end{enumerate}
\end{prop}
\begin{prop}\label{2.4}
Let $M$ be an $R$-module. Then, the following assertions hold:
\begin{enumerate}
\item [\rm (1)]
$M$ is in $\mathcal{WF}_{k}^{n}(R^{op})$ if and only if $M^{*}$ is in $\mathcal{WI}_{ k}^n(R)$.
\item [\rm (2)]
$M$ is in $\mathcal{WI}_{ k}^{n}(R)$ if and only if $M^{*}$ is in $\mathcal{WF}_{ k}^{n}(R^{op})$.
\end{enumerate}
\end{prop}
\begin{proof}
(1) We proceed by induction on $k$. If $k=0$, then by Propositions \ref{1.4}(1) and \ref{2.3}, $M$ is $n$-weak flat if and only if $M^*$ is $n$-weak injective. For $k>0$, consider the short exact sequence $0\rightarrow M\rightarrow E\rightarrow L\rightarrow 0,$ where $E$ is injective. Then, $M$ is in $\mathcal{WF}_{ k}^{n}(R^{op})$ if and only if $L$ is in $\mathcal{WF}_{ k-1}^{n}(R^{op})$, and by the induction hypothesis, $L$ is in $\mathcal{WF}_{ k-1}^{n}(R^{op})$ if and only if $L^*$ is in $\mathcal{WI}_{ k-1}^n(R)$; it follows that $M^*$ is in $\mathcal{WI}_{ k}^n(R)$. Similarly, (2) holds by Propositions \ref{1.4}(2) and \ref{2.2}.
\end{proof}
\begin{prop}\label{2.6}
Let $M$ be an $R$-module. Then, the following assertions hold:
\begin{enumerate}
\item [\rm (1)]
$M$ is in $\mathcal{WI}_{ k}^{n}(R)$ if and only if for every pure submodule $N$ of $M$, $N$ and $ {M}/{N}$ are in $\mathcal{WI}_{ k}^{n}(R)$.
\item [\rm (2)]
$M$ is in $\mathcal{WF}_{ k}^{n}(R^{op})$ if and only if for every pure submodule $N$ of $M$, $N$ and $ {M}/{N}$ are in $\mathcal{WF}_{ k}^{n}(R^{op})$.
\end{enumerate}
\end{prop}
\begin{proof}
Similar to the proof of Proposition \ref{1.6} by using
Proposition \ref{2.4}.
\end{proof}
\section{Covers and preenvelopes by modules with $n$-weak injective and $n$-weak
flat dimension at most $k$}
\ \ Let $\rm \mathscr{Y}$ be a class of $R$-modules and $M$ be an
$R$-module. Following \cite{EM}, we say that a morphism $f : F\rightarrow M$ is a
$\rm \mathscr{Y}$-precover of $M$ if $F\in\rm \mathscr{Y}$ and ${\rm Hom}_{R}(F^{'}, F) \rightarrow {\rm Hom}_{R}(F^{'},M)\rightarrow 0$ is exact
for all $F^{'}\in\rm \mathscr{Y}$. Moreover, if every morphism $g : F\rightarrow F$ such that
$fg = f$ is an automorphism of $F$, then $f : F\rightarrow M$ is called a $\rm \mathscr{Y}$-cover of $M$. The
class $\rm \mathscr{Y}$ is called (pre)covering if each $R$-module has a $\rm \mathscr{Y}$-(pre)cover. Dually,
the notions of $\rm \mathscr{Y}$-preenvelopes, $\rm \mathscr{Y}$-envelopes and (pre)enveloping classes are defined.
A duality pair over $R$ \cite{HJ} is a pair $(\mathcal{M}, \mathcal{C})$, where $\mathcal{M}$ is a class of left $R$-modules and $\mathcal{C}$ is a class of right $R$-modules, subject to the following conditions:
(1) For an $R$-module $M$, one has $M\in \mathcal{M}$ if and only if $M^{*}\in \mathcal{C}$.
(2) $\mathcal{C}$ is closed under direct summands and finite direct sums.\\
A duality pair $(\mathcal{M}, \mathcal{C})$ is called (co)product-closed if the class $\mathcal{M}$ is closed under
(co)products in the category of all left $R$-modules.
A duality pair $(\mathcal{M}, \mathcal{C})$ is called perfect if it is coproduct-closed, $\mathcal{M}$ is closed
under extensions, and $R$ belongs to $\mathcal{M}$.
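To illustrate the definition with a classical case (a standard fact, not specific to the present paper): by Lambek's criterion, a left $R$-module $M$ is flat if and only if its character module is injective,
$$M \ \text{is flat} \iff M^{*}={\rm Hom}_{\mathbb{Z}}(M, \mathbb{Q}/\mathbb{Z}) \ \text{is injective},$$
so the pair (flat left $R$-modules, injective right $R$-modules) is a duality pair; it is even perfect, since the class of flat modules is closed under coproducts and extensions and contains $R$.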
Let $\rm \mathscr{X}$ be a class of $R$-modules. We denote by $\mathscr{I}(R)$ the class of injective left modules and by $\mathscr{P}(R)$ the class of projective right modules. We call $\rm \mathscr{X}$ injectively resolving \cite{HH} if $\mathscr{I}(R)\subseteq \rm \mathscr{X}$, and for every short exact sequence
$0\rightarrow A\rightarrow B\rightarrow C\rightarrow0$ with $A\in\rm \mathscr{X}$, $B\in\rm \mathscr{X}$ if and only if $C\in\rm \mathscr{X}$. Also, we call $\rm \mathscr{X}$ projectively resolving if $\mathscr{P}(R)\subseteq \rm \mathscr{X}$, and for every short exact sequence
$0\rightarrow A\rightarrow B\rightarrow C\rightarrow0$ with $C\in\rm \mathscr{X}$, $A\in\rm \mathscr{X}$ if and only if $B\in\rm \mathscr{X}$.
In this section, by the use of duality pairs, we investigate $\mathcal{WI}_{k}^{n}(R)$ and $\mathcal{WF}_{k}^{n}(R^{op})$ as preenveloping and covering classes.
\begin{prop}\label{2.12}
The pair $(\mathcal{WI}_{ k}^{n}(R), \mathcal{WF}_{ k}^{n}(R^{op}))$ is a duality pair.
\end{prop}
\begin{proof}
Let $\{M_{i}\}_{i\in I}$ be a family of right $R$-modules. If every $M_i$ is in $\mathcal{WF}_{ k}^{n}(R^{op})$, then we claim that $\bigoplus_{i\in I}M_i$ is in $\mathcal{WF}_{ k}^{n}(R^{op})$. We proceed by induction on $k$: if $k=0$, then by Proposition \ref{1.5}(2), $\bigoplus_{i\in I} M_{i}$ is $n$-weak flat. For $k>0$ and every $M_i$, there exists a short exact sequence $0\rightarrow L_i\rightarrow P_i\rightarrow M_i\rightarrow 0$ of right $R$-modules, where $P_i$ is projective. Thus we have the short exact sequence $0\rightarrow \bigoplus_{i\in I} L_i\rightarrow \bigoplus_{i\in I}P_i\rightarrow \bigoplus_{i\in I}M_i\rightarrow 0$. Since each $L_i$ is in $\mathcal{WF}_{ k-1}^{n}(R^{op})$, by the induction hypothesis, $\bigoplus_{i\in I}L_i$ is in $\mathcal{WF}_{ k-1}^{n}(R^{op})$, and so it follows that $\bigoplus_{i\in I}M_i$ is in $\mathcal{WF}_{k}^{n}(R^{op})$.
Also by Proposition \ref{2.4}(2), $M$ is in $\mathcal{WI}_{ k}^{n}(R)$ if and only if $M^{*}$ is in $\mathcal{WF}_{ k}^{n}(R^{op})$. On the other hand, we have $\mathscr{P}(R)\subseteq \mathcal{WF}_{ k}^{n}(R^{op})$, and by Corollary \ref{2.3a}(2), we deduce that
the class $\mathcal{WF}_{ k}^{n}(R^{op})$ is projectively resolving. Then by \cite[Proposition 1.4]{HH}, the class $\mathcal{WF}_{ k}^{n}(R^{op})$
is closed under direct summands, and so we conclude that $(\mathcal{WI}_{ k}^{n}(R), \mathcal{WF}_{ k}^{n}(R^{op}))$ is a duality pair.
\end{proof}
\begin{prop}\label{2.13}
The pair $(\mathcal{WF}_{ k}^{n}(R^{op}), \mathcal{WI}_{ k}^{n}(R))$ is a duality pair.
\end{prop}
\begin{proof}
By Proposition \ref{2.4}(1), $M$ is in $\mathcal{WF}_{ k}^{n}(R^{op})$ if and only if $M^{*}$ is in $\mathcal{WI}_{ k}^{n}(R)$. Let $\{M_{i}\}_{i\in I}$ be a family of left $R$-modules. Suppose that each $M_i$ is in $\mathcal{WI}_{ k}^{n}(R)$; let us show that $\prod_{i\in I}M_i$ is in $\mathcal{WI}_{ k}^{n}(R)$. We proceed by induction on $k$: if $k=0$, then by Proposition \ref{1.5}(1), $\prod_{i\in I} M_{i}$ is $n$-weak injective. For $k>0$, there exists a short exact sequence $0\rightarrow M_i\rightarrow E_i\rightarrow D_i\rightarrow 0$ of left $R$-modules, where $E_i$ is injective, and consequently, we have the short exact sequence $0\rightarrow \prod_{i\in I} M_i\rightarrow \prod_{i\in I}E_i\rightarrow \prod_{i\in I}D_i\rightarrow 0$. Since each $D_i$ is in $\mathcal{WI}_{ k-1}^{n}(R)$, by the induction hypothesis, $\prod_{i\in I}D_i$ is in $\mathcal{WI}_{ k-1}^{n}(R)$, and it follows that $\prod_{i\in I}M_i$ is in $\mathcal{WI}_{ k}^{n}(R)$. In particular, every finite direct sum of modules in $\mathcal{WI}_{ k}^{n}(R)$ is in $\mathcal{WI}_{k}^{n}(R)$.
Also, we have $\mathscr{I}(R)\subseteq \mathcal{WI}_{ k}^{n}(R)$, and by Corollary \ref{2.3a}(1), it is clear that the class $\mathcal{WI}_{ k}^{n}(R)$ is injectively resolving. Then, by \cite[Proposition 1.4]{HH}, the class $\mathcal{WI}_{ k}^{n}(R)$
is closed under direct summands, and hence, it follows that $(\mathcal{WF}_{ k}^{n}(R^{op}), \mathcal{WI}_{ k}^{n}(R))$ is a duality pair.
\end{proof}
\begin{lem}\label{1.14}{\rm (\cite[Theorem 3.1]{HJ})}
Let $(\mathcal{M}, \mathcal{C})$ be a duality pair. Then $\mathcal{M}$ is closed under pure submodules, pure quotients, and pure extensions. Furthermore, the following hold:
\begin{enumerate}
\item [\rm (1)]
If $(\mathcal{M}, \mathcal{C})$ is product-closed then $\mathcal{M}$ is preenveloping.
\item [\rm (2)]
If $(\mathcal{M}, \mathcal{C})$ is coproduct-closed then $\mathcal{M}$ is covering.
\item [\rm (3)]
If $(\mathcal{M}, \mathcal{C})$ is perfect then $(\mathcal{M}, \mathcal{M}^{\bot})$ is a perfect cotorsion pair.
\end{enumerate}
\end{lem}
\begin{thm}\label{2.14}
The class $\mathcal{WI}_{ k}^{n}(R)$ is covering and preenveloping.
\end{thm}
\begin{proof}
By Proposition \ref{2.6}(1), the class $\mathcal{WI}_{ k}^{n}(R)$ is closed under pure submodules, pure quotients, and pure extensions. Also, by the proof of Proposition \ref{2.13}, the class $\mathcal{WI}_{ k}^{n}(R)$ is closed under direct products, and similarly, using Proposition \ref{1.5a}, we see that it is also closed under direct sums. By Proposition \ref{2.12}, the pair $(\mathcal{WI}_{ k}^{n}(R), \mathcal{WF}_{ k}^{n}(R^{op}))$ is a duality pair, and so it follows from Lemma \ref{1.14} that the class $\mathcal{WI}_{ k}^{n}(R)$ is covering and preenveloping.
\end{proof}
\begin{thm}\label{2.16}
The class $\mathcal{WF}_{ k}^{n}(R^{op})$ is covering and preenveloping.
\end{thm}
\begin{proof}
By Proposition \ref{2.6}(2), the class $\mathcal{WF}_{ k}^{n}(R^{op})$ is closed under pure submodules, pure quotients, and pure extensions. Now, we show that the class $\mathcal{WF}_{ k}^{n}(R^{op})$ is closed under direct products.
Let $\{M_{i}\}_{i\in I}$ be a family of right $R$-modules with each $M_i$ in $\mathcal{WF}_{ k}^{n}(R^{op})$. We proceed by induction on $k$: if $k=0$, then by Theorem \ref{1.11}, $\prod_{i\in I}M_i$ is $n$-weak flat. For $k>0$, there exists a short exact sequence $0\rightarrow L_i\rightarrow P_i\rightarrow M_i\rightarrow 0$ of right $R$-modules, where $P_i$ is projective. Then, there exists the short exact sequence $0\rightarrow \prod_{i\in I} L_i\rightarrow \prod_{i\in I}P_i\rightarrow \prod_{i\in I}M_i\rightarrow 0$. Since each $M_i$ is in $\mathcal{WF}_{ k}^{n}(R^{op})$, each $L_i$ is in $\mathcal{WF}_{ k-1}^{n}(R^{op})$. By the induction hypothesis, $\prod_{i\in I}L_i$ is in $\mathcal{WF}_{ k-1}^{n}(R^{op})$, and so
we conclude that $\prod_{i\in I}M_i$ is in $\mathcal{WF}_{ k}^{n}(R^{op})$. Similarly, using Proposition \ref{1.5}(2), we see that the class $\mathcal{WF}_{ k}^{n}(R^{op})$ is closed under direct sums.
Since the pair $(\mathcal{WF}_{ k}^{n}(R^{op}), \mathcal{WI}_{ k}^{n}(R))$ is a duality pair by Proposition \ref{2.13}, we conclude from Lemma \ref{1.14} that the class $\mathcal{WF}_{ k}^{n}(R^{op})$ is covering and preenveloping.
\end{proof}
Notice that $\mathcal{WI}_{ k}^{0}(R)= \mathcal{WI}_{ k}(R)$ and $\mathcal{WF}_{ k}^{0}(R^{op})=\mathcal{WF}_{ k}(R^{op})$. Indeed, by Remark \ref{1.3}(1), $\mathcal{WI}_{ k}(R)$
and $\mathcal{WF}_{ k}(R^{op})$
are the classes of left $R$-modules and right $R$-modules with
weak injective dimension and weak flat dimension at most $k$, respectively, see \cite{.NG}. Thus
by Proposition \ref{2.4} and Theorems \ref{2.14} and \ref{2.16}, we have the following result:
\begin{cor}\label{1.14t}{\rm (\cite[Theorems 4.4, 4.5, 4.8 and 4.9]{.NG})}
\begin{enumerate}
\item [\rm (1)]
The class $\mathcal{WI}_{k}(R)$ is covering and preenveloping.
\item [\rm (2)]
The class $\mathcal{WF}_{k}(R^{op})$ is covering and preenveloping.
\end{enumerate}
\end{cor}
Also, if $k=0$, then from Proposition \ref{2.4} and Theorems \ref{2.14} and \ref{2.16} we have:
\begin{cor}\label{1.20t}{\rm (\cite[Theorem 2.5]{JZW})}
The following assertions hold:
\begin{enumerate}
\item [\rm (1)]
Every left (resp. right) $R$-module has an $fp_n$-injective (resp. $fp_n$-flat) cover.
\item [\rm (2)]
Every left (resp. right) $R$-module has an $fp_n$-injective (resp. $fp_n$-flat) preenvelope.
\item [\rm (3)]
If $M\rightarrow N$ is an $fp_n$-injective (resp. $fp_n$-flat) preenvelope of a left (resp. right) $R$-module $M$, then $N^*\rightarrow M^*$ is an $fp_n$-flat (resp. $fp_n$-injective) precover of $M^*$.
\end{enumerate}
\end{cor}
Now we give some equivalent characterizations for $_RR$ to be in $\mathcal{WI}_{ k}^{n}(R)$ in terms of properties of $\mathcal{WI}_{ k}^{n}(R)$ and $\mathcal{WF}_{ k}^{n}(R^{op})$.
\begin{prop}\label{2.17}
The following assertions are equivalent:
\begin{enumerate}
\item [\rm (1)]
$_RR$ is in $\mathcal{WI}_{ k}^{n}(R)$.
\item [\rm (2)]
Every right $R$-module has a monic $\mathcal{WF}_{ k}^{n}(R^{op})$-preenvelope.
\item [\rm (3)]
Every injective right $R$-module is in $\mathcal{WF}_{ k}^{n}(R^{op})$.
\item [\rm (4)]
Every flat left $R$-module is in $\mathcal{WI}_{ k}^{n}(R)$.
\item [\rm (5)]
Every projective left $R$-module is in $\mathcal{WI}_{ k}^{n}(R)$.
\item [\rm (6)]
Every left $R$-module has an epic $\mathcal{WI}_{ k}^{n}(R)$-cover.
\end{enumerate}
\end{prop}
\begin{proof}
$(1)\Rightarrow (2)$ By Theorem \ref{2.16}, every $R$-module $M$ has a $\mathcal{WF}_{ k}^{n}(R^{op})$-preenvelope $f: M\rightarrow F$. By Proposition \ref{2.4}(2), $R^*$ is in $\mathcal{WF}_{ k}^{n}(R^{op})$, and so, similar to the proof of Theorem \ref{2.16}, one can prove that $\prod_{i\in I} R^{*}$ is in $\mathcal{WF}_{ k}^{n}(R^{op})$. Also, $(_RR)^*$ is a cogenerator, so
we have an exact sequence $0\rightarrow M\stackrel{\displaystyle g}\rightarrow \prod_{i\in I} R^{*}$, and hence there exists a morphism $h: F\to \prod_{i\in I} R^{*}$ such that $hf=g$. Since $g$ is monic, $f$ is monic.
$(2)\Rightarrow (3)$ Let $E$ be an injective right $R$-module. By assumption, let $f: E\rightarrow F$ be a monic $\mathcal{WF}_{ k}^{n}(R^{op})$-preenvelope of $E$. Since $E$ is injective, the exact sequence $0\rightarrow E\rightarrow F\rightarrow{F}/{E}\rightarrow 0$ splits, and so $E$ is a direct summand of $F$. Hence by Proposition \ref{2.3} and \cite[Proposition 7.6]{Rot2}, $E$ is in $\mathcal{WF}_{ k}^{n}(R^{op})$.
$(3)\Rightarrow (1)$ By assumption, $R^*$ is in $\mathcal{WF}_{ k}^{n}(R^{op})$, since $R^*$ is injective. So $_RR$ is in $\mathcal{WI}_{ k}^{n}(R)$ by Proposition \ref{2.4}(2).
$(3)\Rightarrow (4)$
Let $F$ be a flat left $R$-module. Then by \cite[Theorem 3.52]{Rot1}, $F^*$ is injective, and so $F^*$ is in $\mathcal{WF}_{ k}^{n}(R^{op})$ by assumption; hence $F$ is in $\mathcal{WI}_{ k}^{n}(R)$ by Proposition \ref{2.4}(2).
$(4)\Rightarrow (5)$ and $(5)\Rightarrow (1)$ are clear.
$(6)\Rightarrow (1)$ By assumption, $_RR$ has an epic $\mathcal{WI}_{ k}^{n}(R)$-cover $f:D\rightarrow R$.
Then we have a split exact sequence $0\rightarrow {\rm Ker}f \rightarrow D\rightarrow R\rightarrow 0$ with $D$
in $\mathcal{WI}_{ k}^{n}(R)$. Then, by Proposition \ref{2.2} and \cite[Proposition 7.22]{Rot2}, $_RR$ is in $\mathcal{WI}_{ k}^{n}(R)$.
$(1)\Rightarrow (6)$
First, we show that if $\{M_{i}\}_{i\in I}$ is a family of left $R$-modules with each $M_i$ in $\mathcal{WI}_{ k}^{n}(R)$, then $\bigoplus_{i\in I}M_i$ is in $\mathcal{WI}_{ k}^{n}(R)$. We proceed by induction on $k$: if $k=0$, then by Proposition \ref{1.5a}, $\bigoplus_{i\in I} M_{i}$ is $n$-weak injective, since each $M_i$ is $n$-weak injective. For $k>0$, there is a short exact sequence $0\rightarrow M_i \rightarrow E_i\rightarrow N_i\rightarrow 0$, where $E_i$ is injective. Consequently, the sequence
$0\rightarrow \bigoplus_{i\in I}M_i \rightarrow \bigoplus_{i\in I}E_i\rightarrow \bigoplus_{i\in I}N_i\rightarrow 0$ is also exact. Since $M_i$ is in $\mathcal{WI}_{ k}^{n}(R)$, by Proposition \ref{2.2}, $N_i$ is in $\mathcal{WI}_{ k-1}^{n}(R)$. By the induction hypothesis, $\bigoplus_{i\in I}N_i$ is in $\mathcal{WI}_{ k-1}^{n}(R)$, and then by Proposition \ref{2.2} again, $\bigoplus_{i\in I}M_i$ is in $\mathcal{WI}_{ k}^{n}(R)$. On the other hand, by Theorem \ref{2.14},
there is a $\mathcal{WI}_{ k}^{n}(R)$-cover $\psi: X \rightarrow M$ for any left $R$-module $M$. Also, there is an exact sequence $0\rightarrow K \rightarrow P\stackrel{\displaystyle h}\rightarrow M\rightarrow 0$ of left $R$-modules, where $P$ is a free $R$-module. Since $_RR$ is in $\mathcal{WI}_{ k}^{n}(R)$, it follows that $P=\bigoplus_{i\in I}R$ is in $\mathcal{WI}_{ k}^{n}(R)$. So there exists a map $g: P \rightarrow X $ such that $\psi g=h$. Since $h$ is epic, we deduce that $\psi: X \rightarrow M$ is also epic.
\end{proof}
Now we define the $n$-super finitely presented dimension of a ring, which generalizes the homological dimension introduced and studied in \cite{Z.G}, defined as ${\rm l.sp. gldim}(R)={\rm sup}\{{\rm pd}_{R}(M) \mid M \ \text{is a super finitely presented left module}\}.$
\begin{Def}\label{2.1w}
Let $n$ be a non-negative integer. Define\\
${\rm l.n.sp. gldim}(R):={\rm sup}\{{\rm pd}_{R}(K_{n-1}) \mid K_{n-1} \ \text{is a special super finitely presented left module} \}.$
\end{Def}
It is clear that ${\rm l.n.sp. gldim}(R)\leq{\rm l.sp. gldim}(R)$ for any $n\geq 0$. If $n=0$, then ${\rm l.n.sp. gldim}(R)={\rm l.sp. gldim}(R)$.
In Examples \ref{1.1a} and \ref{1.3a}, since $R$ is coherent, we have ${\rm l.sp. gldim}(R)=2$ by \cite[Theorem 3.8]{Z.W}. But ${\rm l.n.sp. gldim}(R)\leq1$ for any $n\geq 1$, since ${\rm pd}_{R}(U)\leq 2$ for every $n$-super finitely presented left $R$-module $U$.
\begin{prop}\label{1.17a}
The following assertions are equivalent:
\begin{enumerate}
\item [\rm (1)]
Every right $R$-module has an epic $\mathcal{WF}_{ k}^{n}(R^{op})$-envelope.
\item [\rm (2)]
$M$ is in $\mathcal{WI}_{ k+1}^{n}(R)$ for every left $R$-module $M$.
\item [\rm (3)]
$N$ is in $\mathcal{WF}_{ k+1}^{n}(R^{op})$ for every right $R$-module $N$.
\item [\rm (4)]
Every $R$-module has a monic $\mathcal{WI}_{ k}^{n}(R)$-cover.
\item [\rm (5)]
Every quotient of any $n$-weak injective left $R$-module is in $\mathcal{WI}_{ k}^{n}(R)$.
\item [\rm (6)]
Every submodule of any $n$-weak flat right $R$-module is in $\mathcal{WF}_{ k}^{n}(R^{op})$.
\item [\rm (7)]
The kernel of any $\mathcal{WI}_{ k}^{n}(R)$-precover of any left $R$-module is in $\mathcal{WI}_{ k}^{n}(R)$.
\item [\rm (8)]
The cokernel of any $\mathcal{WF}_{ k}^{n}(R^{op})$-preenvelope of any right $R$-module is in $\mathcal{WF}_{ k}^{n}(R^{op})$.
\item [\rm (9)]
${\rm l.n.sp. gldim}(R)\leq k+1$.
\end{enumerate}
\end{prop}
\begin{proof}
$(1)\Leftrightarrow (6)$
Consider the class $\mathcal{WF}_{k}^{n}(R^{op})$ of modules with
$n$-weak flat dimension at most $k$. Then, similar to the proofs of Proposition \ref{2.12} and Theorem \ref{2.16}, the class $\mathcal{WF}_{k}^{n}(R^{op})$ is closed under direct summands and direct products, respectively. So \cite[Theorem 2]{CD} shows that (1) and (6) are equivalent.
$(4)\Leftrightarrow (5)$
Consider the class $\mathcal{WI}_{k}^{n}(R)$ of left modules with
$n$-weak injective dimension at most $k$. Then, similar to the proofs of Propositions \ref{2.13} and \ref{2.17}($(1)\Rightarrow (6)$), the class $\mathcal{WI}_{k}^{n}(R)$ is closed under direct summands and direct sums, respectively.
Thus from \cite[Proposition 4]{Z.JJG}, it follows that (4) and (5) are equivalent.
$(6)\Rightarrow (5)$
Let $N$ be a submodule of an $n$-weak injective left $R$-module $M$. Then, there exists a short exact sequence $ 0\rightarrow N\rightarrow M\rightarrow {M}/{N}\rightarrow 0$, which induces the
exactness of $ 0\rightarrow ({M}/{N})^*\rightarrow M^*\rightarrow N^*\rightarrow 0$. By Proposition \ref{1.4}(2), $M^*$ is an $n$-weak flat right $R$-module, and hence by hypothesis, $ ({M}/{N})^*$ is in $\mathcal{WF}_{ k}^{n}(R^{op})$. Consequently, using Proposition \ref{2.4}(2), we conclude that ${M}/{N}$ is in $\mathcal{WI}_{ k}^{n}(R)$.
$(5)\Rightarrow (6)$ Similar to the proof of $(6)\Rightarrow (5)$, using Propositions \ref{1.4}(1) and \ref{2.4}(1).
$(1)\Rightarrow (8)$
Let $M$ be a right $R$-module. Then by Theorem \ref{2.16}, there is a $\mathcal{WF}_{ k}^{n}(R^{op})$-preenvelope $\psi : M\rightarrow D$. Also by hypothesis, if the map $\phi: M\rightarrow Y$ is an epic $\mathcal{WF}_{ k}^{n}(R^{op})$-envelope of $M$, then from \cite[Lemma 8.6.3]{EM}, it follows that $L\oplus Y\cong D$, where $L={\rm Coker}\psi$. So $L$ is in $\mathcal{WF}_{ k}^{n}(R^{op})$ as a direct summand of $D$.
$(8)\Rightarrow (6)$ Consider the short exact sequence $ 0\rightarrow L\rightarrow M\rightarrow D\rightarrow 0$ of right $R$-modules, where $M$ is $n$-weak flat and $L$ a submodule of $M$. We claim that $L$ is in $\mathcal{WF}_{ k}^{n}(R^{op})$. Indeed, we have the following commutative diagram:
$$\xymatrix{
0\ar[r]& L\ar[r]\ar@{=}[d]&M\ar[d]& \\
&L\ar[r]^{h}&X\ar[r]&Y\ar[r]& 0\\
}$$
where $h: L\rightarrow X$ is a $\mathcal{WF}_{ k}^{n}(R^{op})$-preenvelope of $L$ and $Y={\rm Coker}h$. In particular, the sequence $ 0\rightarrow L\rightarrow X\rightarrow Y\rightarrow 0$ is exact, and so by Corollary \ref{2.3a}(2), $L$ is in $\mathcal{WF}_{ k}^{n}(R^{op})$.
$(5)\Rightarrow (2)$ For every left $R$-module $M$, there is an exact sequence
$ 0\rightarrow M\rightarrow E\rightarrow D\rightarrow 0$ of left $R$-modules, where $E$ is injective. By (5), $D$ is in $\mathcal{WI}_{ k}^{n}(R)$ and so by Proposition \ref{2.2}, $M$ is in $\mathcal{WI}_{ k+1}^{n}(R)$.
$(2)\Rightarrow (5) $ Clear.
$(2)\Leftrightarrow (3)\Leftrightarrow (9)$ Clear by Proposition \ref{2.3l}.
\end{proof}
When $k=0$, we have the following result.
\begin{thm}\label{2.1oo}
The following assertions are equivalent:
\begin{enumerate}
\item [\rm (1)]
$_RR$ is in $\mathcal{WI}^{n}(R)$.
\item [\rm (2)]
Every left $R$-module is in $\mathcal{WI}^{n}(R)$.
\item [\rm (3)]
Every special super finitely presented left $R$-module is in $\mathcal{WI}^{n}(R)$.
\item [\rm (4)]
The short exact sequence $0\rightarrow K_{n}\rightarrow F_{n}\rightarrow K_{n-1}\rightarrow0$ is a split superpure sequence.
\item [\rm (5)]
Every right $R$-module is in $\mathcal{WF}^{n}(R^{op})$.
\end{enumerate}
\end{thm}
\begin{proof}
$(2)\Rightarrow (3)$ and $(2)\Rightarrow (1)$ are trivial.
$(1)\Rightarrow (2)$
Let $N$ be a left $R$-module and consider an exact sequence $P\rightarrow N\rightarrow 0$, where $P$ is free. Since $_RR$ is in $\mathcal{WI}^{n}(R)$, by Proposition \ref{1.5a}, we get that $P$ is in $\mathcal{WI}^{n}(R)$, and so by Proposition \ref{1.17a}, $N$ is in
$\mathcal{WI}^{n}(R)$.
$(3)\Rightarrow (4)$ Let $U$ be an $n$-super finitely presented left $R$-module with special super finitely presented module $K_{n-1}$. Then, we have the short exact sequence $0\rightarrow K_{n}\rightarrow F_{n}\rightarrow K_{n-1}\rightarrow0$ of left $R$-modules. Since $U$ is also
$(n+1)$-super finitely presented, it follows that $K_{n}$ is special super finitely presented. By assumption, $K_{n}$ is in $\mathcal{WI}^{n}(R)$, and thus by Remark \ref{1.3}(1), ${\rm Ext}_{R}^{1}(K_{n-1}, K_{n})=0$; so by Proposition \ref{1.10}, the above sequence is split superpure.
$(4)\Rightarrow (5)$ Let the short exact sequence $0\rightarrow K_{n}\rightarrow F_{n}\rightarrow K_{n-1}\rightarrow0$ be split superpure. Then $K_{n-1}$ is flat as a direct summand of $F_{n}$. Consequently, ${\rm Tor}_{1}^{R}(M, K_{n-1})=0$ for any right $R$-module $M$, and so by Remark \ref{1.3}(1), $M$ is in $\mathcal{WF}^{n}(R^{op})$.
$(5)\Rightarrow (2)$
Let $M$ be any left $R$-module. Then, $M^*$ is a right $R$-module, and hence by assumption, $M^*$ is in $\mathcal{WF}^{n}(R^{op})$. Therefore, by Proposition \ref{1.4}(2), every left $R$-module $M$ is in $\mathcal{WI}^{n}(R)$.
\end{proof}
A cotorsion pair (or orthogonal theory of ${\rm Ext}$) consists of a pair $(\mathcal{F}, \mathcal{C})$ of
classes of $R$-modules \cite{SA,SAT} such that $\mathcal{C}= \mathcal{F}^{\bot}$ and $\mathcal{F} =^{\bot}\mathcal{C}$, where for a class $\mathcal{S}$, we have
$\mathcal{S}^{\bot}=\{M : M \ {\rm is \ an} \ R$-${\rm module \ and} \ {\rm Ext}_{R}^{1}(S,M) = 0 \ {\rm for \ all} \ S\in\mathcal{S}\}$
and $^{\bot}\mathcal{S}=\{M : M \ {\rm is \ an} \ R$-${\rm module \ and} \ {\rm Ext}_{R}^{1}(M,S)=0 \ {\rm for \ all} \ S\in\mathcal{S}\}$.
A cotorsion pair $({\cal F}, \mathcal{C})$ is called hereditary if, whenever $0 \rightarrow F^{'}\rightarrow F\rightarrow F^{''}\rightarrow 0$ is exact with $F, F^{''}\in {\cal F}$, then $F^{'}$ is also in ${\cal F}$; equivalently, if $0 \rightarrow C^{'}\rightarrow C\rightarrow C^{''}\rightarrow 0$ is an exact sequence with $C, C^{'}\in \mathcal{C}$, then $C^{''}$ is also in $\mathcal{C}$. A
cotorsion pair $({\cal F}, \mathcal{C})$ is called complete provided that for any $R$-module $M$,
there exist exact sequences $0\rightarrow M\rightarrow C\rightarrow D\rightarrow 0$ and $0\rightarrow C^{'}\rightarrow D^{'}\rightarrow M\rightarrow 0$
of $R$-modules with $C, C^{'}\in\mathcal{C}$ and $D, D^{'}\in\mathcal{F}$; for more details, see \cite{Z.JG,NG}.
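For orientation (again a standard fact rather than one from this paper), the projective and injective modules give the two trivial cotorsion pairs
$$({\rm Proj}(R),\ R\text{-Mod})\qquad \text{and} \qquad (R\text{-Mod},\ {\rm Inj}(R)),$$
since ${\rm Ext}_{R}^{1}(P, M)=0$ for every projective $P$ and every module $M$, and ${\rm Ext}_{R}^{1}(M, E)=0$ for every injective $E$; both pairs are hereditary and complete.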
\begin{prop}\label{1.17u}
\begin{enumerate}
\item [\rm (1)]
If $n$-${\rm wid}_{R}(R)\leq k$, then the pair
$(\mathcal{WI}_{k}^n(R), \mathcal{WI}_{k}^n(R)^{\bot})$
is a perfect cotorsion pair.
\item [\rm (2)]
The pair $(^{\bot}\mathcal{WI}_{k}^n(R), \mathcal{WI}_{k}^n(R))$ is a hereditary cotorsion pair.
\end{enumerate}
\end{prop}
\begin{proof}
(1) The pair $(\mathcal{WI}_{ k}^{n}(R), \mathcal{WF}_{ k}^{n}(R^{op}))$ is a duality pair by Proposition \ref{2.12}. Similar to the proof of Proposition \ref{2.12}, one shows that $\mathcal{WI}_{k}^n(R)$ is closed under direct sums and under extensions. Also, by hypothesis and Proposition \ref{2.3}, $R$ is in $\mathcal{WI}_{k}^n(R)$, and so $(\mathcal{WI}_{k}^n(R), \mathcal{WF}_{k}^{n}(R^{op}))$ is a perfect duality pair. So by Lemma \ref{1.14}, it follows that $(\mathcal{WI}_{k}^n(R), \mathcal{WI}_{k}^n(R)^{\bot})$ is a perfect cotorsion pair.
(2) First, we show that $(^{\bot}\mathcal{WI}_{k}^{n}(R))^{\bot}=\mathcal{WI}_{k}^n(R)$. It is clear that $\mathcal{WI}_{k}^n(R)\subseteq (^{\bot}\mathcal{WI}_{k}^n(R))^{\bot}$. Let $M$ be in $(^{\bot}\mathcal{WI}_{k}^n(R))^{\bot}$ and let $U$ be an $n$-super finitely presented left $R$-module with special super finitely presented module $K_{n-1}$. Then, it follows that $K_{n-1}$ is in ${^{\bot}\mathcal{WI}_{k}^n(R)}$ and consequently, ${\rm Ext}_{R}^{1}(K_{n-1}, M)=0$. Thus by Remark \ref{1.3}(1), ${\rm Ext}_{R}^{n}(U, M)=0$, and hence
$M$ is in $\mathcal{WI}_{k}^n(R)$. Now let $0 \rightarrow M_1\rightarrow M_2\rightarrow M_3\rightarrow 0$ be a short exact sequence of left $R$-modules such that $M_{1}$ and $M_{2}$ are in $\mathcal{WI}_{k}^n(R)$. Then from Corollary \ref{2.3a}(1), we conclude that $M_{3}$ is in $\mathcal{WI}_{k}^n(R)$, and so $(^{\bot}\mathcal{WI}_{k}^n(R), \mathcal{WI}_{k}^n(R))$ is a hereditary cotorsion pair.
\end{proof}
\begin{prop}\label{1.18}
The pair $(\mathcal{WF}_{k}^n(R^{op}), \mathcal{WF}_{k}^n(R^{op})^{\bot})$ is a hereditary perfect cotorsion pair.
\end{prop}
\begin{proof}
The pair $(\mathcal{WF}_{k}^n(R^{op}), \mathcal{WI}_{k}^n(R))$ is a duality pair by Proposition \ref{2.13}. The class $\mathcal{WF}_{k}^n(R^{op})$ is closed under direct sums and extensions. By Remark \ref{1.3}(5), $R$ is in $\mathcal{WF}_{k}^n(R^{op})$, and hence $(\mathcal{WF}_{k}^n(R^{op}), \mathcal{WI}_{k}^n(R))$ is a perfect duality pair. So by Lemma \ref{1.14}, $(\mathcal{WF}_{k}^n(R^{op}), \mathcal{WF}_{k}^n(R^{op})^{\bot})$ is a perfect cotorsion pair. Let $0 \rightarrow M_1\rightarrow M_2\rightarrow M_3\rightarrow 0$ be a short exact sequence of right $R$-modules such that $M_{2}$ and $M_{3}$ are in $\mathcal{WF}_{k}^{n}(R^{op})$. Then from Corollary \ref{2.3a}(2), we get that $M_{1}$ is in $\mathcal{WF}_{k}^n(R^{op})$, and so $(\mathcal{WF}_{k}^n(R^{op}), \mathcal{WF}_{k}^n(R^{op})^{\bot})$ is a hereditary cotorsion pair.
\end{proof}
Let $(\mathcal{A}, \mathcal{B})$ and $(\mathcal{C}, \mathcal{D})$ be two cotorsion pairs. Following \cite[Remark 4.12]{NG}, we write $(\mathcal{A}, \mathcal{B})\preceq (\mathcal{C}, \mathcal{D})$ if $\mathcal{B}\subseteq\mathcal{D}$. By \cite[Definition 6.1]{HW}, the pair $(\mathcal{M}, \mathcal{N})$ is said to be cogenerated by a set if there is a set
of objects $\mathcal{S}\subseteq\mathcal{M}$ such that $N\in\mathcal{N}$ if and only if ${\rm Ext}_{R}^{1}(M,N)=0$ for all
$M\in\mathcal{S}$.
In \cite{TE}, Eklof and Trlifaj proved that a cotorsion pair $(\mathcal{F}, \mathcal{C})$ in $R$-Mod is complete
when it is cogenerated by a set. This result actually holds in any Grothendieck
category with enough projectives, as Hovey proved in \cite{HW}.
Then by Remark \ref{1.3}, we have the following easy observations:
\begin{rem}\label{1.19}
\begin{enumerate}
\item [\rm (1)]
Let $\mathcal{S}Pres_{n}^{\infty}$ denote the class of all
special $n$-super finitely presented left $R$-modules. Then, $(^{\bot}\mathcal{WI}^n(R), \mathcal{WI}^n(R))$ is a hereditary complete cotorsion pair, since it is cogenerated
by a set of representatives of $\mathcal{S}Pres_{n}^{\infty}$.
\item [\rm (2)]
There is a series of hereditary complete cotorsion pairs for any $n\geq 0$ and $k\geq 0$ as follows:
$$(^{\bot}\mathcal{WI}_{k}^n(R), \mathcal{WI}_{k}^n(R))\preceq (^{\bot}\mathcal{WI}_{k}^{n+1}(R), \mathcal{WI}_{k}^{n+1}(R))\preceq (^{\bot}\mathcal{WI}_{k}^{n+2}(R), \mathcal{WI}_{k}^{n+2}(R))\preceq\cdots$$
\item [\rm (3)]
There is a series of hereditary cotorsion pairs for any $n\geq 0$ and $k\geq 0$ as follows:
$$(\mathcal{WF}_{k}^n(R^{op}), \mathcal{WF}_{k}^n(R^{op})^{\bot})\preceq(\mathcal{WF}_{k}^{n+1}(R^{op}), \mathcal{WF}_{k}^{n+1}(R^{op})^{\bot})\preceq\cdots$$
\end{enumerate}
\end{rem}
\bigskip
\noindent\textbf{Acknowledgment.} The authors
would like to thank the referee for the helpful suggestions and valuable comments.
\bigskip
\section{Introduction}
Photon detectors are indispensable in quantum communication applications \cite{hadfield2009}. To ensure the reliability of detection results, it is important to characterize the detectors being used both within the intended working parameters and possible unintended conditions. This characterization could help in revealing possible flaws and imperfections. These flaws could lead to misguided detection results or, worse, exploitable vulnerabilities in the case of quantum cryptography applications. This characterization guides the work on improving the robustness of quantum systems. Over the years, many attacks have been reported on various types of photon detectors based on avalanche photodiodes \cite{makarov2006,zhao2008,lydersen2010a,lydersen2010,lydersen2010b,lydersen2011,gerhardt2011,huang2016,qian2018,fei2018} and superconducting nanowires \cite{lydersen2011c,tanner2014,elezov2019}.
A transition-edge sensor (TES) is a photon detector capable of providing full photon-number-resolving capability \cite{berggren2013,eisaman2011}. It has also achieved the highest detection efficiency among photon-number-resolving detectors, up to $95\%$ at $1550~\nano\meter$ \cite{lita2008,fukuda2009,miller2011b}. This type of detector is used in various applications that require high detection probability, such as loophole-free Bell tests \cite{giustina2015}. Its photon-number-resolving capability could also be used to monitor against attacks on a quantum key distribution (QKD) system \cite{xu2010e}. As one of the potential detectors in quantum communication, where the reliability of the detection result affects overall security, the TES photon detector should be investigated for its robustness and possible flaws. In this study, we experimentally demonstrate two potential vulnerabilities of TES: a wavelength attack, in which the photon-number result can be controlled by changing the signal's wavelength, and a faked-state attack, in which the adversary raises the temperature of the TES with a suitably bright continuous-wave (CW) laser and then forces an arbitrary photon-number detection result using a bright pulsed laser.
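Since a TES reports a photon number derived from the total absorbed energy, a pulse of fixed energy is read as different photon numbers depending on the photons' wavelength, which is the physics behind the wavelength attack. The following back-of-the-envelope sketch (ours, purely illustrative; all function names are our own) makes this quantitative:

```python
# Illustrative sketch (not code from the experiment): a TES infers the
# photon number from absorbed energy, so the inferred count depends on
# the wavelength the calibration assumes.
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s

def photon_energy(wavelength_m):
    """Energy in joules of a single photon at the given wavelength."""
    return H * C / wavelength_m

def inferred_photon_number(pulse_energy_j, calibration_wavelength_m):
    """Photon count inferred from pulse energy, assuming all photons
    arrive at calibration_wavelength_m."""
    return pulse_energy_j / photon_energy(calibration_wavelength_m)

# One 775 nm photon carries the energy of two 1550 nm photons, so a
# detector calibrated at 1550 nm would report it as a 2-photon event.
n = inferred_photon_number(photon_energy(775e-9), 1550e-9)
```

The same arithmetic reproduces the figure quoted below: 19 photons at $1550~\nano\meter$ carry roughly $2.4\times10^{-18}~\joule$.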
\begin{figure}
\includegraphics{setup.pdf}
\caption{Experimental setup. (a)~Internal circuit diagram of the TES system, consisting of the TES photon detector and its DC-SQUID readout. The TES photon detector is mounted on a $100$-$\milli\kelvin$ cold stage chilled by an adiabatic demagnetization refrigerator (ADR). The TES current $I_{\text{TES}}$ is read out by DC-SQUID electronics and converted proportionally to a voltage output $V_{\text{out}}$. (b)~Blinding and fake-signal powers are controlled by variable attenuators (Att), combined at a $50\!:\!50$ fiber-optic beam splitter (BS), measured by an optical power meter (PM), and applied to the TES system under test. Its output voltage $V_{\text{out}}$ is recorded and analyzed by a data acquisition module (DAQ) connected to a computer (PC).}
\label{fig:setup}
\end{figure}
A transition-edge sensor is a sensitive micro-calorimeter whose sensing element consists of an absorber and a superconductive thermometer with a positive temperature coefficient of resistance ($\dd{R}/\dd{T}>0$) \cite{irwin2005}. During the operation, the sensing element's temperature is kept near the transition temperature via voltage-biasing \cite{irwin_application_1995}. This voltage-biasing is provided by an external total bias current flowing through a shunt resistor $R_s$ connected in parallel with the TES [\cref{fig:setup}(a)]. In our setup $R_s = 16.1~\milli\ohm$, which is much smaller than the TES normal-conductivity resistance of $3~\ohm$.
The current passing through the TES $I_{\text{TES}}$ flows through an inductive coil $L_\text{in}$. The latter couples its magnetic flux via a mutual inductance ($M_\text{in}$) to a direct-current superconducting quantum interference device (DC-SQUID). The SQUID serves as a low-noise amplifier of $I_{\text{TES}}$. A feedback coil $L_\text{FB}$ inside the ADR, together with a room-temperature amplifier G and feedback resistor $R_\text{FB}$ are used to transform the signal from the TES into a measurable voltage $V_\text{out}$ \cite{drung2006}. $I_{\text{TES}}$ is obtained by dividing $V_\text{out}$ by the current-to-voltage gain of the DC-SQUID and amplifier G ($0.375~\volt\per\micro\ampere$ in this experiment), while the voltage across TES $V_\text{TES}$ is calculated by multiplying $R_s$ by the current through it (total bias current with $I_{\text{TES}}$ subtracted).
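As a concrete illustration of this readout arithmetic, the following minimal sketch converts $V_\text{out}$ to $I_{\text{TES}}$ and $V_{\text{TES}}$ using the gain and $R_s$ values quoted above; the bias-current number in the check is purely illustrative, not a measured value.

```python
# Sketch of the readout arithmetic described in the text. The gain and
# shunt-resistance values are the ones quoted above; the bias current
# used in the sanity check below is illustrative only.
GAIN_V_PER_UA = 0.375   # current-to-voltage gain of DC-SQUID + amplifier G, V/uA
R_S_OHM = 16.1e-3       # shunt resistor R_s, ohm

def tes_current_ua(v_out_v):
    """TES current I_TES (uA) obtained by dividing V_out by the gain."""
    return v_out_v / GAIN_V_PER_UA

def tes_voltage_v(i_bias_ua, v_out_v):
    """Voltage across the TES: R_s times the current through the shunt,
    i.e. the total bias current with I_TES subtracted (currents in uA)."""
    i_shunt_ua = i_bias_ua - tes_current_ua(v_out_v)
    return R_S_OHM * i_shunt_ua * 1e-6

# Illustrative check: a 0.75 V output corresponds to I_TES = 2 uA.
assert tes_current_ua(0.75) == 2.0
```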
When a photon from the input optical fiber hits the detector, the photon's energy is absorbed, raising the TES' temperature and resistance. This change of resistance reduces $I_{\text{TES}}$ and proportionally reduces $V_{\text{out}}$. From the relation of TES temperature and $I_{\text{TES}}$, it can be seen that the change of $V_\text{out}$ during the detection is proportional to the absorbed energy of the photon(s), enabling photon-number discrimination.
In our setup, the TES and SQUID are mounted on a copper block attached in turn to the cold plate of the ADR. Under normal operating conditions, both the TES and SQUID are at $100~\milli\kelvin$ temperature. Their bias currents are provided by specialised electronic circuits (commercially available from Magnicon GmbH).
\begin{figure}
\includegraphics{oscillograms.pdf}
\caption{Oscillograms of $V_\text{out}$. (a)~Typical single-photon responses. (b)~Fake ``single-photon'' responses under $1550~\nano\meter$ blinding attack with $2.4\times10^{-18}~\joule$ pulse energy (i.e.,\ about 19-photon weak coherent pulse).}
\label{fig:oscillograms}
\end{figure}
To measure the response of the TES to various optical signals, we use the setup shown in \cref{fig:setup}(b). The TES is a fiber-coupled $10\times10~\micro\meter$ Ti device in a multilayer optical resonator designed to maximise coupling at $1550~\nano\meter$ wavelength and is similar to devices reported in \cite{fukuda2009,fukuda2011}. The photon coupling efficiency in our TES sample under test is $\approx1\%$ owing to a misalignment between the fiber end and the TES effective area. However, this should not affect the results of our study in a qualitative way, because the misalignment merely introduces additional optical attenuation and can be compensated by applying a brighter test signal. Our light source consists of a CW blinding laser and a pulsed laser (with about $16~\nano\second$ pulse width), combined on a fiber-optic beamsplitter (BS). The energy of the laser pulse can be adjusted by the variable attenuator (OZ Optics DA-100). A power meter is used for monitoring the laser output power. A function generator produces trigger pulses to synchronize the laser source and signal recordings. The signal from the TES is digitized by a data acquisition module (DAQ) and analyzed on a computer (PC). The DAQ is a 16-bit, $125~\mega\hertz$ sampling rate analog-to-digital converter (AlazarTech ATS660) mounted on a peripheral component interconnect (PCI) bus of the PC. This DAQ allows measuring signals of millivolt level. Typical single-photon responses are shown in \cref{fig:oscillograms}(a). The peak voltage value during $5~\micro\second$ following the application of the optical pulse is taken as the amplitude of the detector response $V_\text{max}$.
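The $V_\text{max}$ extraction can be sketched as follows; this is a hypothetical minimal version using the DAQ sample rate and window quoted above, not the actual analysis code of our setup.

```python
# Hypothetical sketch of extracting the response amplitude V_max: the peak
# voltage within the 5 us window following the trigger sample, at the
# 125 MHz DAQ sample rate quoted in the text.
SAMPLE_RATE_HZ = 125e6
WINDOW_S = 5e-6

def v_max(trace, trigger_index):
    """Peak value of the trace within 5 us (625 samples) after the trigger."""
    n_samples = int(WINDOW_S * SAMPLE_RATE_HZ)
    return max(trace[trigger_index:trigger_index + n_samples])
```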
Next, we investigate two potentially exploitable vulnerabilities of the TES detector.
\emph{Wavelength-dependent response.} The TES output voltage amplitude $V_\text{max}$ is inherently proportional to the total absorbed photon energy, and the detector is sensitive to a wide range of wavelengths. Since $N$ photons at wavelength $\lambda$ carry a combined energy $E = N h c/\lambda$, where $h$ is Planck's constant and $c$ is the speed of light in vacuum, $N$ photons at wavelength $N\lambda$ arriving simultaneously have the same combined energy as one photon at wavelength $\lambda$. Thus the TES would produce the same output in these two cases \cite{rosenberg2005,joshi2014,hattori2019}.
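The energy arithmetic behind this relation can be checked with a short sketch; the constants are standard CODATA values, and the $2.4\times10^{-18}~\joule$ pulse energy is the figure quoted for the blinding experiment below.

```python
# Sketch of the energy-equivalence arithmetic behind the wavelength attack.
h = 6.62607015e-34  # Planck's constant, J*s (CODATA)
c = 2.99792458e8    # speed of light in vacuum, m/s

def photon_energy(wavelength_m):
    """Energy of a single photon at the given wavelength, in joules."""
    return h * c / wavelength_m

# One 450 nm photon vs. three 1350 nm photons: identical combined energy,
# hence in principle an identical TES voltage response.
E_single = photon_energy(450e-9)
E_triple = 3 * photon_energy(3 * 450e-9)
assert abs(E_single - E_triple) / E_single < 1e-12

# The 2.4e-18 J fake pulse at 1550 nm corresponds to about 19 photons,
# matching the "19-photon weak coherent pulse" noted in Fig. 2(b).
n_photons = 2.4e-18 / photon_energy(1550e-9)
print(round(n_photons))  # -> 19
```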
We illustrate this fact with a simple experiment that shows how an attacker Eve could fake a single-photon detection result by sending multiple photons with proportionally lower photon energy. We send weak-coherent signals from several lasers of different wavelengths through the input fiber of the TES. We then record the amplitude $V_\text{max}$ of the voltage response from the TES. The histogram in \cref{fig:results-wavelength} shows that the single-photon detection peak for $450~\nano\meter$ overlaps with the two-photon detection peak for $780~\nano\meter$ and the three-photon detection peak for $1550~\nano\meter$. This shows that an expected photon-number readout from the TES could be faked by multiple photons with a proportionally longer wavelength. Consequently, the photon-number measurement results from the TES alone cannot be used to characterize the photon-number distribution of a signal received through an untrusted channel, such as the quantum channel, where the adversary could intercept and replace the signal with photons of arbitrary wavelength. Thus, any QKD scheme using the photon-number distribution from the TES to monitor Eve's activity in the quantum channel is vulnerable to this wavelength-dependent attack \cite{xu2010e}. A narrow-band wavelength filter should prevent this attack; however, the filter's performance against exploitable wavelengths needs to be characterized.
\begin{figure}
\includegraphics{wavelength.pdf}
\caption{Histogram of TES output voltage under weak-coherent laser illumination at $1550~\nano\meter$ (red), $780~\nano\meter$ (black), and $450~\nano\meter$ (blue). The leftmost peak represents zero-photon detection. Subsequent peaks to the right represent higher photon number detections. These peaks appear at the voltage level proportional to the energy of the photons.}
\label{fig:results-wavelength}
\end{figure}
\emph{Blinding attack.} In a blinding attack on QKD receiver, Eve turns the QKD detectors insensitive to single photons (blinded), but able to produce the expected detection output results when experiencing a bright-light pulse. This type of attack has been demonstrated in various single-photon detectors \cite{lydersen2010a,lydersen2010b,lydersen2011,lydersen2011c,gerhardt2011,huang2016}.
In the ideal condition, the TES operates at the transition edge between the superconducting and the normal resistive state. In this region, a small change of energy, such as a single-photon absorption, induces a measurable change in the output voltage proportional to the energy absorbed. By setting a voltage threshold level for each input photon energy, one can discriminate the number of absorbed photons. From the known characteristics of the TES \cite{irwin2005}, at a temperature slightly above the operational regime it can produce the same output voltage level when absorbing a much higher energy, which can be delivered by a bright laser pulse. In this section, we experimentally demonstrate this behavior.
\begin{figure}
\includegraphics{I-V.pdf}
\caption{I--V curves of the TES. (b)~The characteristics of the system at $100~\milli\kelvin$ under bright laser illumination closely resemble (a)~the characteristics at different heat-bath temperatures. This confirms Eve's ability to control TES's temperature using bright light through the input fiber. Dots are measurement results while a solid line is their bin-averaging.}
\label{fig:results-IV}
\end{figure}
We first investigate the behavior of the TES when its temperature is increased beyond the designed transition-edge region. We set the TES to the operating temperature of $100~\milli\kelvin$. We record the current--voltage (I--V) characteristic curves of the TES at different temperatures \cite{fukuda2009}. These characteristic curves, shown in \cref{fig:results-IV}(a), will be used as a reference for the following experiments. At low temperature ($100~\milli\kelvin$), $I_{\text{TES}}$ is roughly inversely proportional to $V_\text{TES}$. As the temperature increases, $I_{\text{TES}}$ becomes lower. Once the device reaches its critical temperature of $\approx 180~\milli\kelvin$, $I_{\text{TES}}$ becomes directly proportional to $V_\text{TES}$, as the TES becomes a normal resistor.
We now demonstrate the ability of Eve to control the temperature using bright light. A CW laser at $1550~\nano\meter$ is coupled through the input fiber of the TES. \cref{fig:results-IV}(b) shows that the I--V characteristics of the device under test at different temperatures can be replicated. This shows that an adversary could arbitrarily control the temperature of the TES using a bright CW laser.
\begin{figure}
\includegraphics{response.pdf}
\caption{Detector response to the faked-state attack. For comparison, the black curve shows the normal response to a weak coherent pulse (WCP) attenuated to a single-photon level, containing the zero-photon response (left peak) and the one-photon response (right peak). (a)~Fake histogram of output voltage at different faked-state pulse energies. The detector is blinded with $0.25~\nano\watt$ CW light. (b)~An attack model on a BB84 QKD system with TES as a detector. The threshold (green vertical dashed line) marks the minimum TES voltage output that the system in our model would register as a detection. The fake response is shown for two cases where Bob and Eve pick the same (red) and different (blue) measurement bases under fake pulsed signal of $2.4\times10^{-18}~\joule$ pulse energy.}
\label{fig:results-fake}
\end{figure}
For the faked-state attack, the appropriate blinding laser power is one that puts the response at the threshold between the transition-edge regime and the normal resistor regime. In this region, the TES is `blinded' to single-photon input, as the change of voltage produced by an additional absorption is minimal. At the same time, the system in this condition can produce the same voltage level as the system at normal operating temperature when absorbing a bright laser pulse. The histograms of faked-state results at different pulse energies are shown in \cref{fig:results-fake}(a), and typical oscillograms in \cref{fig:oscillograms}(b). Here, the fake signals are laser pulses with $16~\nano\second$ width and $100~\kilo\hertz$ repetition rate. The detector response exhibits a strong superlinearity \cite{lydersen2011b} between Eve's pulse energies of $1.2$ and $9.6 \times 10^{-18}~\joule$, which is a potential security loophole. That is, the voltage response of the TES can be controlled by Eve, who has access to the input channel. She can choose a bright laser power such that the voltage output represents a `photon number' of her choice. The physics of the detector in this regime is not clear to us and needs to be investigated further.
\emph{Attack model.} To emphasize the threat of the vulnerability found in the previous section, we model a faked-state attack \cite{lydersen2010a} on a Bennett-Brassard 1984 (BB84) \cite{bennett1984} QKD system, assuming it uses the TES under test as its detectors. We assume here that the wavelength of the signal used by Alice and Bob is $780~\nano\meter$. In this attack model, the adversary Eve intercepts each signal from Alice and measures it in a random basis. She then reproduces a bright fake signal identical to her detection result and sends it to Bob. Here, she also sends a CW blinding laser power set to $0.25~\nano\watt$ and sets her fake pulsed signal at $2.4\times10^{-18}~\joule$ pulse energy, both at $1550~\nano\meter$. In case Bob's measurement basis choice differs from that of Eve, the power of the fake signal would be split equally between Bob's detectors (we assume here that Bob's basis choice modulator is wavelength-independent). As shown in \cref{fig:results-fake}(b), most of the response signal from the TES would fall below the single-photon detection threshold and thus remain unregistered. However, if their basis choices match, the signal will sometimes be registered. This attack condition causes extra detection loss at Bob. Eve could hide this loss from Alice and Bob if the original quantum channel loss between Alice and Bob is lower than the detection loss induced by Eve's attack. When Eve's and Bob's measurement bases differ, half of the registered detection events would cause an error in the key. This can be seen in the portion of the blue histogram to the right of the single-photon threshold (green line) in \cref{fig:results-fake}(b). With this estimated detection probability and error rate, the quantum bit error rate of the attack can be calculated. Our calculation shows that this attack on a QKD system with the TES under test and the specific parameters assumed above would induce a $7.4\%$ quantum bit error rate (QBER).
This QBER is lower than the $11\%$ abort threshold of the BB84 protocol \cite{gottesman2004}, thus the security of the key could be compromised.
This shows a possible vulnerability of a QKD system with TES as a single photon threshold detector. A more general attack on a QKD scheme with TES as a photon number resolving detector, as well as attack on other QKD protocols such as coherent-one-way (COW) \cite{lydersen2011} can also be considered.
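The counting argument behind this QBER estimate can be sketched as follows. The function only encodes the bookkeeping described above; the registration probabilities fed into it would have to come from the measured detection statistics, which we do not reproduce here.

```python
# Sketch of the QBER counting argument for the intercept-and-resend model.
# Eve's and Bob's bases agree half the time. p_same / p_diff are the
# probabilities that a fake pulse is registered by Bob when the bases are
# the same / different; half of the registered different-basis events
# produce a key error.
def attack_qber(p_same, p_diff):
    detections = 0.5 * p_same + 0.5 * p_diff
    errors = 0.5 * (0.5 * p_diff)
    return errors / detections

# Sanity check: if fake pulses were always registered regardless of basis,
# this reduces to the textbook 25% intercept-and-resend error rate.
assert attack_qber(1.0, 1.0) == 0.25
```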
In conclusion, we have experimentally demonstrated two possible security vulnerabilities of the TES as a photon detector. In this study, we have illustrated the ability of Eve to fake photon-number results in the TES using different wavelengths. We have also shown that the characteristics of the TES can be altered by a bright CW laser, and that photon-number detection results can be faked using laser pulses with appropriate peak power. Using this result, we model an attack on a BB84 QKD system with the TES as a detector and show that Eve could perform the intercept-and-resend attack while inducing as low as a $7.4\%$ error rate, under certain specific assumptions. Since the TES under test has a misalignment of its input coupling, which limits its detection efficiency, we speculate that an attack on a higher-efficiency TES with better energy resolution might yield a better result for Eve. Understanding a physical model of the TES under attack can be a topic of a future study. Countermeasures to such attacks will need to be considered as TESes begin to be employed in secure quantum communication schemes.
\medskip
This research was funded by the Ministry of Education and Science of Russia (program NTI center for quantum communications) and NSERC of Canada. P.C.\ was supported by the Thai DPST scholarship. J.Z.\ was supported in part by National Key R\&D Program of China under grant 2017YFA0304003 and in part by the National Natural Science Foundation of China under grants U1731119, U1831202, and U1931123. A.H.\ was supported by the National Natural Science Foundation of China (grant 6201101369). V.M.\ was supported by the Key program of special development funds of Zhangjiang national innovation demonstration zone (grant ZJ2018-ZD-009) and the Russian Science Foundation (grant 21-42-00040). H.Q.\ was sponsored by Shanghai Pujiang Program.
\emph{Author contributions:} P.C.,\ J.Z.,\ A.H.\ and H.Q.\ performed the experiment. P.C.\ and J.Z.\ developed the attack model and wrote the paper with input from all authors. V.M.\ and S.-c.S.\ supervised the study.
\section*{Data availability}
The data that support the findings of this study are available from the corresponding author upon reasonable request.
\section{Introduction}
The wavelet transform has a rich theoretical structure and is extremely useful as a tool for building signal transforms adapted to various signal geometries, quantum mechanics, etc. The continuous wavelet transform admits a generalization to locally compact groups. Such a unified approach seems useful, since it leads in a clear way to the basic features of the continuous wavelet transform and includes all the cases important for applications \cite{olofson, Fuhr, wong}. It should be mentioned that the Weyl-Heisenberg group plays a significant role in various aspects of the connection between classical harmonic analysis and concrete applications of numerical harmonic analysis. \\
This paper contains four sections. Section 2 includes the definition of the semi-direct product of two locally compact groups and of the generalized Weyl-Heisenberg group. In Section 3, we study square integrable representations of the generalized Weyl-Heisenberg group and then obtain necessary and sufficient conditions for an admissible wavelet on this group. In Section 4, some examples are presented as applications of our results.
\section{Preliminaries and notation}
Let $H$ and $K$ be two locally compact groups with identity elements $e_H$ and $e_K$, respectively, and let $\tau:H\rightarrow Aut(K)$ be a homomorphism such that the map $(h,k)\mapsto \tau_h(k)$ is continuous from $H\times K$ onto $K$, where $H\times K$ is equipped with the product topology. The semi-direct product topological group $G_\tau=H \times_\tau K$ is the locally compact topological space $H\times K$ under the product topology, with the group operations $$(h_1,k_1)\times_\tau (h_2,k_2)=(h_1h_2,k_1\tau_{h_1}(k_2)),$$
$$(h,k)^{-1}=(h^{-1},\tau_{h^{-1}}(k^{-1})).$$ It is worth noting that $K_1=\lbrace (e_H,k);\ k \in K\rbrace$ is a closed normal subgroup and $H_1=\lbrace (h,e_K);\ h \in H \rbrace$ is a closed subgroup of $G_\tau$ such that $G_\tau=H_1K_1$. Moreover, the left Haar measure of the locally compact group $G_\tau$ is $$d\mu_{G_\tau}(h,k)=\delta_H(h)d\mu_H(h)d\mu_K(k),$$
in which $d\mu_H,d\mu_K$ are the left Haar measures on $H$ and $K$, respectively, and $\delta_H: H\rightarrow (0,\infty)$ is a positive continuous homomorphism that satisfies $$d\mu_K(k)=\delta_H(h)d\mu_K(\tau_h(k)),$$ for $h \in H$, $k \in K$. Moreover, the modular function $\Delta_{G_\tau}$ is $$\Delta_{G_\tau}(h,k)=\delta_H(h)\Delta_H(h)\Delta_K(k),$$ where $\Delta_H, \Delta_K$ are the modular functions of $H$ and $K$, respectively. \\
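As a sanity check of these conventions, consider the classical affine $ax+b$ group; this standard example is included here purely for illustration. For $H=(\mathbb{R}^+,\cdot)$ acting on $K=(\mathbb{R},+)$ by $\tau_a(x)=ax$, we have $d\mu_K(\tau_a(x))=a\,dx$, so the identity $d\mu_K(k)=\delta_H(h)d\mu_K(\tau_h(k))$ forces $\delta_H(a)=a^{-1}$. With $d\mu_H(a)=a^{-1}da$, the formula above yields
$$d\mu_{G_\tau}(a,x)=\delta_H(a)\,d\mu_H(a)\,d\mu_K(x)=\frac{da\,dx}{a^{2}},$$
which is the familiar left Haar measure of the affine $ax+b$ group. \\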
When $K$ is also abelian, one can define $\hat{\tau}:H \rightarrow Aut(\hat{K})$ via $h\mapsto \hat{\tau_h}$ where $$\hat{\tau_h}(\omega)=\omega \circ \tau_{h^{-1}},$$ for all $\omega \in{\hat{K}}$.
We usually denote $\omega \circ \tau_{h^{-1}}$ by $\omega_h$. With this notation, it is easy to see $$\omega_{h_1h_2}=(\omega_{h_2})_{h_1},$$ where
$h_1,h_2 \in H$ and $\omega \in{\hat{K}}$.
The semi-direct product $G_{\hat{\tau}}=H \times_ {\hat{\tau}} \hat{K}$ is a locally compact group with the left Haar measure
$$d\mu_{\hat{G}}(h,\omega)=\delta_H(h)^{-1} d\mu_H(h)d\mu_{\hat{K}}(\omega),$$
where $d\mu_{\hat{K}}$ is the Haar measure on $\hat{K}$. Also, for all $h \in H$, $$d\mu_{\hat{K}}(\omega_h)=\delta_H(h)d\mu_{\hat{K}}(\omega),$$ for $\omega \in{\hat{K}}$ (see \cite{ghaani, Arefi, Fuhr} for more details).\\
Let $G_\tau=H \times_\tau K$, and define $\theta:G_\tau \rightarrow Aut(\hat{K}\times \mathbb{T})$ via $$(h,k)\mapsto \theta_{(h,k)}(\omega,z)=(\hat{\tau_h}(\omega), \hat{\tau_h}(\omega)(k)z)=(\omega_h,\omega_h(k)z),$$ for all $(h,k) \in{H \times_\tau K}$
and $(\omega,z)\in{\hat{K}\times \mathbb{T}}.$ The mapping $\theta$ is a continuous homomorphism. Thus the semi-direct product $$G_\tau \times_\theta (\hat{K}\times \mathbb{T})=(H \times_\tau K)\times_\theta (\hat{K}\times \mathbb{T}),$$ is a locally compact group, called the generalized Weyl-Heisenberg group associated with the semi-direct product group $G_\tau =H \times_\tau K$, and denoted by $\mathbb{H}(G_\tau)$. It is easy to see that the
group operations of $\mathbb{H}(G_\tau)$ are
$$(h_1,k_1,\omega_1,z_1).(h_2,k_2,\omega_2,z_2)=(h_1h_2,k_1\tau_{h_1}(k_2),\omega_1{(\omega_2)}_{h_1}, {(\omega_{2})}_{h_1}(k_1)z_1z_2),$$ $$(h_1,k_1,\omega_1,z_1)^{-1}=(h_1^{-1},\tau_{h_1^{-1}}(k_1^{-1}),(\bar{\omega}_1)_{h_1^{-1}}, (\bar{\omega}_1)_{h_1^{-1}}(\tau_{h_1^{-1}}(k_1^{-1}))z_1^{-1}),$$
for $(h_1,k_1,\omega_1,z_1),(h_2,k_2,\omega_2,z_2) \in{\mathbb{H}(G_\tau)}$ (see \cite{ghaani}) and the left Haar measure of $\mathbb{H}(G_\tau)$ is: $$d\mu_{\mathbb{H}(G_\tau)}(h,k,\omega,z)=d\mu_H(h)d\mu_K(k)d\mu_{\hat{K}}(\omega)d\mu_{\mathbb{T}}(z).$$
\section{The square integrable representation of $\mathbb{H}(G_\tau)$}
Throughout this section, we assume that $H$ and $K$ are locally compact topological groups and that $K$ is abelian. We denote the left Haar measures of $H$ and $K$ by $d\mu_H, d\mu_K$, respectively. Suppose that $h\mapsto \tau_h$ from $H$ to $Aut(K)$ is a homomorphism such that $(h,k)\mapsto \tau_h(k)$ from $H \times K$ into $K$ is continuous. Then $G_\tau=H \times_\tau K$, the semi-direct product of $H$ and $K$, is a locally compact topological group with the left Haar measure $d\mu_{G_\tau}(h,k)=\delta_H(h) d\mu_H(h)d\mu_K(k)$, where $\delta_H: H \rightarrow (0,\infty)$ is a continuous homomorphism. Consider the homomorphism $\theta: G_\tau \rightarrow Aut(\hat{K}\times \mathbb{T})$ defined by $$(h,k)\mapsto \theta_{(h,k)},$$ where $\theta_{(h,k)}(\omega,z)=(\omega \circ \tau_{h^{-1}}, \omega \circ \tau_{h^{-1}}(k).z).$ This makes $\mathbb{H}(G_\tau)=G_\tau \times_\theta( \hat{K}\times \mathbb{T})$ a locally compact topological group, where $\mathbb{H}(G_\tau)$ is equipped with the product topology and the group operations $$(h_1,k_1,\omega_1,z_1).(h_2,k_2,\omega_2,z_2)=(h_1h_2,k_1\tau_{h_1}(k_2),\omega_1{(\omega_2)}_{h_1}, {(\omega_2)}_{h_1}(k_1)z_1z_2),$$ $$(h_1,k_1,\omega_1,z_1)^{-1}=(h_1^{-1},\tau_{h_1^{-1}}(k_1^{-1}),(\bar{\omega}_1)_{h_1^{-1}}, (\bar{\omega}_1)_{h_1^{-1}}(\tau_{h_1^{-1}}(k_1^{-1}))z_1^{-1}),$$
for $(h_1,k_1,\omega_1,z_1),(h_2,k_2,\omega_2,z_2) \in{\mathbb{H}(G_\tau)}.$ The left Haar measure of $\mathbb{H}(G_\tau)$ is $$d\mu_{\mathbb{H}(G_\tau)}(h,k,\omega,z)=d\mu_H(h)d\mu_K(k)d\mu_{\hat{K}}(\omega)d\mu_{\mathbb{T}}(z).$$\\
Now, we are going to define a square integrable representation of $\mathbb{H}(G_\tau)$. With the above notation, define $\pi:\mathbb{H}(G_\tau) \rightarrow U(L^2(\hat{K}))$ by
\begin{equation}
\pi(h,k,\omega,z)f(\xi)=\delta_H^{-1/2}(h)z\xi(k) \overline{\omega(k)}f((\xi\overline{\omega})_{h^{-1}}),
\end{equation}
then $\pi$ is a homomorphism. Indeed,
\\$\begin{array}{lll}
\pi\left((h_1,k_1,\omega_1,z_1)(h_2,k_2,\omega_2,z_2)\right) f(\xi)=\pi(h_1h_2,k_1\tau_{h_1}(k_2),
\omega_{1}(\omega_{2})_{h_{1}},(\omega_{2})_{h_{1}}(k_{1})z_{1}z_{2})f(\xi)\\[1ex]=
\delta_H^{-1/2}(h_{1}h_{2})
(\omega_2)_{h_1}(k_1)z_1z_2\xi(k_1\tau_{h_1}(k_2)) \overline{\omega_1(\omega_2)_{h_1}}(k_1\tau_{h_1}
(k_2)) f((\xi \overline{\omega_1(\omega_2)_{h_1}})_{(h_1h_2)^{-1}})
\\[1ex]=\delta_H^{-1/2}(h_1h_2)z_1z_2\xi(k_1)\xi_{h_1^{-1}}(k_2)\overline{\omega_1(k_1)}
\overline{(\omega_1)_{h_1^{-1}}(k_2)}\overline{\omega_2(k_2)}f(\xi_{h_2^{-1}h_1^{-1}}
\overline{(\omega_1)}_{h_2^{-1}h_1^{-1}}
\overline{(\omega_2)}_{h_2^{-1}}),
\end{array}$\\
where in the last step the factor $(\omega_2)_{h_1}(k_1)$ cancels against its conjugate arising from $\overline{\omega_1(\omega_2)_{h_1}}(k_1\tau_{h_1}(k_2))$.
Also,
\\$\begin{array}{lll}
\pi(h_1,k_1,\omega_1,z_1)\pi(h_2,k_2,\omega_2,z_2)f(\xi)\\[1ex] =\delta_H^{-1/2}(h_1)z_1\xi(k_1)\overline{\omega_1}(k_1)\pi(h_2,k_2,\omega_2,z_2)f((\xi \overline{\omega_1})_{h_1^{-1}})\\[1ex]=\delta_H^{-1/2}(h_1)\delta_H^{-1/2}(h_2)z_1z_2\xi(k_1)\overline{\omega_1}(k_1)\overline{\omega_2}(k_2)(\xi \overline{\omega_1})_{h_1^{-1}}(k_2)f(((\xi \overline{\omega_1})_{h_1^{-1}} \overline{\omega_2})_{h_2^{-1}})
\\[1ex]=\delta_H^{-1/2}(h_1h_2)z_1z_2 \xi(k_1)\xi_{h_1^{-1}}(k_2)\overline{\omega_1(k_1)}
\overline{(\omega_1)_{h_1^{-1}}(k_2)}\overline{\omega_2(k_2)}f(\xi_{h_2^{-1}h_1^{-1}}
\overline{(\omega_1)}_{h_2^{-1}h_1^{-1}}
\overline{(\omega_2)}_{h_2^{-1}}).
\end{array}$\\
Moreover, $\pi$ is unitary. In fact we have,
\\$\begin{array}{rcl}
\Vert \pi(h,k,\omega,z)f\Vert^2_2 &=&\int_{\hat{K}}\vert \pi(h,k,\omega,z)f(\xi)\vert^2 d\mu_{\hat{K}}(\xi)\\[1ex]&=& \int_{\hat{K}}\delta_H^{-1}(h)\vert f((\xi \overline{\omega})_{h^{-1}})\vert^2d\mu_{\hat{K}}(\xi)\\[1ex]&=&\int_{\hat{K}}\delta_H^{-1}(h)\vert f(\xi_{h^{-1}})\vert^2d\mu_{\hat{K}}(\xi)\\[1ex]&=&\int_{\hat{K}}\vert f(\xi)\vert^2d\mu_{\hat{K}}(\xi)\\[1ex]&=&\Vert f \Vert_2^2.
\end{array}$\\
It is easy to check that $\pi$ is continuous. So, $\pi$ is a continuous unitary representation of the group $\mathbb{H}(G_\tau)$ on the Hilbert space $L^2(\hat{K})$. In the sequel, we show that $\pi$ is irreducible when $H$ is compact. Furthermore, it is also shown that $\pi$ is square integrable if and only if $H$ is compact. Note that when $H$ is a compact group, we normalize the Haar measure $\mu_H$ so that $\mu_H(H)=1$.
\begin{theorem}\label{theo}
Let $\mathbb{H}(G_\tau)=(H \times_\tau K)\times_\theta(\hat{K}\times \mathbb{T})$ where $H$ is a locally compact group and $K$ is a locally compact abelian group. Then for $\varphi, \psi $ in $L^2(\hat{K})$,
\begin{equation}\label{equ}
\int_{\mathbb{H}(G_\tau)} \vert \prec \varphi, \pi(h,k,\omega,z)\psi\succ\vert^2 d\mu_{\mathbb{H}(G_\tau)}(h,k,\omega,z)=\Vert \varphi \Vert _2^2 \Vert \psi \Vert_2^2
\end{equation}
if and only if $H$ is compact.
\begin{proof}
For $\varphi, \psi $ in $L^2(\hat{K})$ we first consider the following observations:
\\$\begin{array}{lll}
\int_{\mathbb{H}(G_\tau)} \vert \prec \varphi, \pi(h,k,\omega,z)\psi\succ\vert^2 d\mu_{\mathbb{H}(G_\tau)}(h,k,\omega,z)\\[1ex]=\int_{\mathbb{H}(G_\tau)} \vert \int_{\hat{K}} \varphi(\xi) \overline{\pi(h,k,\omega,z)\psi(\xi)}d\mu_{\hat{K}}(\xi)\vert^2 d\mu_{\mathbb{H}(G_\tau)}(h,k,\omega,z)\\[1ex]=\int_{\mathbb{H}(G_\tau)} \vert \int_{\hat{K}} \varphi(\xi) \delta_H^{-1/2}(h) \overline{z} \overline{\xi(k)}\omega(k) \overline{\psi((\xi \overline{\omega})_{h^{-1}})}d\mu_{\hat{K}}(\xi)\vert^2 d\mu_{\mathbb{H}(G_\tau)}(h,k,\omega,z)\\[1ex]=
\int_{\mathbb{H}(G_\tau)} \vert \int_{\hat{K}} \varphi(\xi \omega) \delta_H^{-1/2}(h) \overline{z} \overline{\xi(k)}\overline{\psi(\xi_{h^{-1}})}d\mu_{\hat{K}}(\xi)\vert^2 d\mu_{\mathbb{H}(G_\tau)}(h,k,\omega,z)\\[1ex]=
\int_{\mathbb{H}(G_\tau)} \vert \int_{\hat{K}} R_{\omega}\varphi(\xi) \delta_H^{-1/2}(h) \overline{z} \overline{\xi(k)}\overline{\psi(\xi \circ \tau_h)}d\mu_{\hat{K}}(\xi)\vert^2 d\mu_{\mathbb{H}(G_\tau)}(h,k,\omega,z)\\[1ex]=\int_{\mathbb{H}(G_\tau)} \vert \int_{\hat{K}} R_{\omega}\varphi(\xi\circ \tau_{h^{-1}}) \delta_H^{-1/2}(h) \overline{z} \overline{\xi \circ \tau_{h^{-1}} (k)}\overline{\psi(\xi) }d\mu_{\hat{K}}(\xi_h)\vert^2 d\mu_{\mathbb{H}(G_\tau)}(h,k,\omega,z)\\[1ex]=
\int_{\mathbb{H}(G_\tau)} \vert \int_{\hat{K}} R_{\omega}\varphi(\xi\circ \tau_{h^{-1}}) \delta_H^{1/2}(h) \overline{z} \overline{\xi (\tau_{h^{-1}} (k))}\overline{\psi(\xi) }d\mu_{\hat{K}}(\xi)\vert^2 d\mu_{\mathbb{H}(G_\tau)}(h,k,\omega,z)\\[1ex]=\int_{\mathbb{H}(G_\tau)} \delta_H(h) \vert \int_{\hat{K}} (R_{\omega}\varphi(.\circ \tau_{h^{-1}}) .\overline{\psi})(\xi) \overline{\xi (\tau_{h^{-1}} (k))}d\mu_{\hat{K}}(\xi)\vert^2 d\mu_{\mathbb{H}(G_\tau)}(h,k,\omega,z)\\[1ex]=
\int_{\mathbb{H}(G_\tau)} \delta_H(h) \vert \widehat{(R_{\omega}\varphi(.\circ \tau_{h^{-1}} ).\overline{\psi})} (\tau_{h^{-1}} (k))\vert^2 d\mu_{\mathbb{H}(G_\tau)}(h,k,\omega,z)\\[1ex]=
\int_H \delta_H(h) \int_{\hat{K}}\int_K \vert \widehat{(R_{\omega}\varphi(.\circ \tau_{h^{-1}} ).\overline{\psi})} (\tau_{h^{-1}} (k))\vert^2 d\mu_K(k)d\mu_{\hat{K}}(\omega)d\mu_H(h)\\[1ex]=
\int_H \int_{\hat{K}}\int_K \vert \widehat{(R_{\omega}\varphi(.\circ \tau_{h^{-1}}) .\overline{\psi})} (k)\vert^2 d\mu_K(k)d\mu_{\hat{K}}(\omega)d\mu_H(h)\\[1ex]=
\int_H \int_{\hat{K}}\int_{\hat{K}} \vert (R_{\omega}\varphi(.\circ \tau_{h^{-1}}) .\overline{\psi}) (\xi)\vert^2 d\mu_{\hat{K}}(\xi)d\mu_{\hat{K}}(\omega)d\mu_H(h)\\[1ex]=
\int_H \int_{\hat{K}}\int_{\hat{K}} \vert R_{\omega}\varphi(\xi\circ \tau_{h^{-1}}) .\overline{\psi}(\xi) \vert^2 d\mu_{\hat{K}}(\xi)d\mu_{\hat{K}}(\omega)d\mu_H(h)\\[1ex]=\int_H \int_{\hat{K}}\int_{\hat{K}} \delta_H^{-1}(h)\vert R_{\omega}\varphi(\xi) .\overline{\psi}(\xi \circ \tau_h) \vert^2 d\mu_{\hat{K}}(\xi)d\mu_{\hat{K}}(\omega)d\mu_H(h)\\[1ex]=\int_H \int_{\hat{K}}\Vert \varphi \Vert_2^2\delta_H^{-1}(h)\vert \overline{\psi}(\xi \circ \tau_h) \vert^2 d\mu_{\hat{K}}(\xi)d\mu_H(h)\\[1ex]=
\Vert \varphi \Vert_2^2 \Vert \psi \Vert_2^2 \mu_H(H).
\end{array}$\\
Now, if $H$ is compact, then $\mu_H(H)=1$, so (\ref{equ}) holds. Conversely, if (\ref{equ}) holds, the above computation implies that $\mu_H(H)=1<\infty$; since a locally compact group has finite Haar measure if and only if it is compact, $H$ is compact.
\end{proof}
\end{theorem}
\begin{corollary}
With notation as above, the representation $\pi$ of $\mathbb{H}(G_\tau)$ on $L^2(\hat{K})$ is irreducible if $H$ is compact.
\begin{proof}
If $H$ is compact, then
(\ref{equ}) in Theorem \ref{theo} holds. Now,
suppose that $M$ is a closed subspace of the Hilbert space $L^2(\hat{K})$ that is invariant under $ \pi$. Then for any $\varphi \in M$ we have,
$$\lbrace \pi (h,k, \omega ,z) \varphi; \ \ (h,k, \omega ,z) \in{\mathbb{H}(G_\tau)} \rbrace \subseteq M.$$ Let $\psi \in{L^2(\hat{K})}$ be orthogonal to $M$, that is $\prec \psi, \pi(h,k, \omega ,z) \varphi \succ=0,$ for all $(h,k, \omega ,z) \in{\mathbb{H}(G_\tau)}$. Thus by (\ref{equ}), $\Vert \varphi \Vert_2\Vert \psi \Vert_2=0$, and hence $\psi=0$. So, $M^{\perp}=\lbrace 0 \rbrace$, that is, $M=L^2(\hat{K})$. Namely, $\pi$ is irreducible.
\end{proof}
\end{corollary}
We remind the reader that an irreducible representation $\pi$ of $\mathbb{H}(G_\tau)$ on $L^2(\hat{K})$ is called square integrable if there exists a nonzero element $\psi$ in $L^2(\hat{K})$ such that
\begin{equation}\label{form}
\prec \pi(., ., ., .)\psi,f \succ \in{L^2({\mathbb{H}(G_\tau)})},
\end{equation}
for all $f \in{L^2(\hat{K})}$. A unit vector $\psi$ satisfying (\ref{form}) is said to be an admissible wavelet for $\pi$, and the constant $$c_{\psi}=\int_{\mathbb{H}(G_\tau)}\vert \prec \pi(h,k,\omega,z)\psi, \psi\succ \vert^2d\mu_{\mathbb{H}(G_\tau)}(h,k,\omega,z),$$ is called the wavelet constant associated to the admissible wavelet $\psi$. In particular, when $H$ is compact, Theorem \ref{theo} shows that every unit vector $\psi \in L^2(\hat{K})$ is admissible, with $c_\psi=\Vert \psi \Vert_2^4=1$.\\
Also, for the wavelet vector $\psi$, the continuous wavelet transform is defined by $$W_\psi f(h,k,\omega,z)=\prec f, \pi(h,k,\omega,z)\psi\succ.$$ It is easy to see that $(h,k,\omega,z) \mapsto W_\psi f(h,k,\omega,z)$ is a continuous function on $\mathbb{H}(G_\tau).$ Moreover, $W_\psi$ intertwines $\pi$ and the left regular representation of $\mathbb{H}(G_\tau).$
\begin{corollary}
The representation $\pi$ of the $GWH$ group $\mathbb{H}(G_\tau)=(H\times_\tau K)\times_\theta(\hat{K}\times \mathbb{T})$ on $L^2(\hat{K})$ is square integrable if and only if $H$ is compact.
\end{corollary}
\begin{proof}
If $H$ is compact, then by Theorem 3.1 and Corollary 3.2, $\pi$ is square integrable. Conversely, if $\pi$ is square integrable, then there exists a nonzero element $\varphi \in L^2(\hat{K})$ such that
\begin{equation*}
\prec \pi(., ., ., .)\varphi,\psi \succ \in{L^2({\mathbb{H}(G_\tau)})},
\end{equation*}
for all $\psi \in{L^2(\hat{K})}$. On the other hand,
\begin{equation*}
\int_{\mathbb{H}(G_\tau)} \vert \prec \varphi, \pi(h,k,\omega,z)\psi\succ\vert^2 d\mu_{\mathbb{H}(G_\tau)}(h,k,\omega,z)=\Vert \varphi \Vert _2^2 \Vert \psi \Vert_2^2 \mu_H(H).
\end{equation*}
So $\mu_H(H)< \infty$, and hence $H$ is compact.
\end{proof}
\begin{remark}
There is another irreducible representation of $\mathbb{H}(G_\tau)$ on the Hilbert space $L^2(K)$. Indeed, consider
\begin{equation*}\label{rep}
\tilde{\pi}:\mathbb{H}(G_\tau) \rightarrow U(L^2(K)), \ \ \ \ \tilde{\pi}(h,k,\omega,z)f(k')=\delta_H(h)^{1/2}z \omega(k')f(\tau_{h^{-1}}(k'k)),
\end{equation*}
for all $(h,k,\omega,z) \in{\mathbb{H}(G_\tau)}$, $f \in{L^2(K)}$. Then $\tilde{\pi}$ is a homomorphism and unitary. In fact we have
\\$\begin{array}{lll}
\tilde{\pi}(( h_1,k_1,\omega_1,z_1)(h_2,k_2,\omega_2,z_2 )) f(k')\\[1ex]=\tilde{\pi}(h_1h_2,k_1\tau_{h_1}(k_2),\omega_1(\omega_2)_{h_1},(\omega_2)_{h_1}(k_1)z_1z_2)f(k')\\[1ex]=\delta_H^{1/2}(h_1h_2)(\omega_2)_{h_1}(k_1)z_1z_2 \omega_1(\omega_2)_{h_1}(k') f(\tau_{(h_1h_2)^{-1}}(k'k_1\tau_{h_1}(k_2)))
\\[1ex]=\delta_H^{1/2}(h_1h_2)z_1z_2\omega_1(k')(\omega_2)_{h_1}(k'k_1)f(\tau_{h_2^{-1}h_1^{-1}}(k'k_1\tau_{h_1}(k_2)))\\[1ex]=\delta_H^{1/2}(h_1h_2)z_1z_2\omega_1(k')(\omega_2)_{h_1}(k'k_1)f(\tau_{h_2^{-1}h_1^{-1}}(k'k_1)\tau_{h_2^{-1}}(k_2)),
\end{array}$\\
and
\\$\begin{array}{lll}
\tilde{\pi}(h_1,k_1,\omega_1,z_1)\tilde{\pi}(h_2,k_2,\omega_2,z_2)f(k')\\[1ex] =\delta_H^{1/2}(h_1)z_1\omega_1(k')\tilde{\pi}(h_2,k_2,\omega_2,z_2)f(\tau_{h_1^{-1}}(k'k_1)) \\[1ex]=\delta_H^{1/2}(h_1)\delta_H^{1/2}(h_2)z_1z_2\,\omega_1(k')\omega_2(\tau_{h_1^{-1}}(k'k_1))f(\tau_{h_2^{-1}}(\tau_{h_1^{-1}}(k'k_1)k_2))
\\[1ex]=\delta_H^{1/2}(h_1h_2)z_1z_2\,\omega_1(k')(\omega_2)_{h_1}(k'k_1)f(\tau_{h_2^{-1}h_1^{-1}}(k'k_1)\tau_{h_2^{-1}}(k_2)).
\end{array}$\\
Also,
\\$\begin{array}{rcl}
\Vert \tilde{\pi}(h,k,\omega,z)f\Vert^2_2 &=&\int_{K}\vert \tilde{\pi}(h,k,\omega,z)f(k')\vert^2 d\mu_{K}(k')\\[1ex]&=& \int_{K}\delta_H(h)\vert f(\tau_{h^{-1}}(k'k))\vert^2d\mu_{K}(k')\\[1ex]&=&\int_{K}\delta_H(h)\vert f(k')\vert^2d\mu_{K}(\tau_h(k'))\\[1ex]&=&\int_{K}\vert f(k')\vert^2d\mu_{K}(k')\\[1ex]&=&\Vert f \Vert_2^2.
\end{array}$\\
By the Plancherel theorem, $\pi$ and $\tilde{\pi}$ are unitarily equivalent. So $\tilde{\pi}$ is square integrable if and only if $\pi$ is square integrable.
\end{remark}
\begin{remark}
The converse of Corollary 3.2 does not hold in general. An obvious example is when $H$ is a noncompact group and $K$ is the trivial group $\lbrace e \rbrace$; then the representation $\pi:\mathbb{H}(H \times_\tau \lbrace e \rbrace )\rightarrow U(\mathbb{C})$ is an irreducible representation. Here we give a nontrivial example in which $\pi$ is an irreducible representation but $H$ is not compact. Let $H=\mathbb{R}^+$ and $K=\mathbb{R}$. Define the representation $\pi$ of $\mathbb{H}(\mathbb{R}^+\times_\tau \mathbb{R})$ as follows:
\begin{equation*}
\pi:\mathbb{H}(\mathbb{R}^+\times_\tau \mathbb{R})\rightarrow U(L^2(\mathbb{R})); \ \ \ \ \pi(a,x,\omega,z)f(\xi)=a^{1/2} z e^{2\pi ix(\xi-\omega)}f((\xi \bar{\omega})_{a^{-1}}),
\end{equation*}
in which $(\xi \bar{\omega})_{a^{-1}}=(\xi \bar{\omega}) \circ \tau_a$, $\tau_a(x)=ax$ and $\delta_H(a)=a^{-1}$. This representation is irreducible. Indeed, let $M$ be a closed invariant subspace of $L^2(\mathbb{R})$ under $\pi$. Then for any $f\in{M}$, we have $\pi(a,x,\omega,z)f \in M$. Consider $0 \neq g\in{M^{\perp}}$, so that $\prec g, \pi(a,x,\omega,z)f\succ =0$. Then
\begin{equation*}
0=\int_{\mathbb{R}}g(\xi)e^{-2\pi i x \xi}\bar{f}((\xi \bar{\omega})_{a^{-1}})d\xi=\int_{\mathbb{R}}g(\xi_a \omega)e^{-2\pi i x \xi_a \omega}\bar{f}(\xi )d\xi.
\end{equation*}
Thus $g(\xi_a \omega)\bar{f}(\xi )=0$ for almost all $\xi \in \mathbb{R}$. Suppose that $\bar{f}(\xi ) \neq 0$ for all $\xi$ in a set $A$ of positive measure. Then for all $\xi \in A$, $g(\xi_a \omega)=0$ for all $\omega \in{\mathbb{R}}$ and $a \in{\mathbb{R}^+}$, whence $g=0$. This is a contradiction. So $\pi$ is an irreducible representation, but $H$ is not compact.
\end{remark}
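As a numerical sanity check of this example (a sketch, not part of the argument), one can verify on a grid that $\pi(a,x,\omega,z)$ preserves the $L^2$ norm. We read the character $(\xi\bar{\omega})_{a^{-1}}$ additively as the parameter $a(\xi-\omega)$, an assumption consistent with $\xi(k)=e^{2\pi i\xi k}$ and $\tau_a(x)=ax$.

```python
import numpy as np

# Sketch: unitarity of
#   pi(a, x, omega, z) f(xi) = a^{1/2} z e^{2 pi i x (xi - omega)} f(a (xi - omega))
# on L^2(R), checked with a Gaussian on a discretized line.  The additive
# reading of (xi * conj(omega))_{a^{-1}} as a*(xi - omega) is our assumption.

def pi_rep(a, x, omega, z, f, xi):
    """Apply pi(a, x, omega, z) to the function f sampled on the grid xi."""
    return np.sqrt(a) * z * np.exp(2j * np.pi * x * (xi - omega)) * f(a * (xi - omega))

xi = np.linspace(-40.0, 40.0, 200001)
dxi = xi[1] - xi[0]
f = lambda t: np.exp(-np.pi * t**2)        # Gaussian test vector

g = pi_rep(2.0, 1.3, 0.7, np.exp(0.4j), f, xi)
norm_f = np.sqrt(np.sum(np.abs(f(xi))**2) * dxi)
norm_g = np.sqrt(np.sum(np.abs(g)**2) * dxi)
print(norm_f, norm_g)                      # the two L^2 norms agree
```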
In the sequel, we define the quasi-regular representation and obtain a concrete form for an admissible vector.
Note that $\mathbb{H}(G_\tau)$ acts on the Hilbert space
$L^2(\hat{K}\times \mathbb{T})$, and this action induces the quasi-regular representation $\lbrace \rho, L^2(\hat{K}\times \mathbb{T})\rbrace$ as follows:
\begin{equation}\label{pi}
\rho:(H \times_\tau K) \times_\theta( \hat{K}\times \mathbb{T}) \rightarrow U(L^2(\hat{K}\times \mathbb{T})),
\end{equation}
where
\begin{eqnarray*}
\rho(h,k,\omega,z)f(\xi,t)&=&\delta_{H \times_\tau K}^{1/2}(h,k)f(\theta_{(h,k)^{-1}}((\xi,t)(\omega,z)^{-1}))\\[1ex]&=&\delta_{H}^{-1/2}(h)f(\theta_{(h^{-1},\tau_{h^{-1}}(k^{-1}))}(\xi \bar{\omega},tz^{-1}))\\[1ex]&=&\delta_{H}^{-1/2}(h) f((\xi \bar{\omega})_{h^{-1}},(\xi \bar{\omega})_{h^{-1}}(\tau_{h^{-1}}(k^{-1}))\,tz^{-1}).
\end{eqnarray*}
Note that $\delta_{H \times_\tau K}(h,k)=\delta_H(h)^{-1}$ (see Corollary 3.3 in \cite{ghaani}).\\
The Fourier transform of the quasi-regular representation $\rho$ is obtained as follows:
\\$\begin{array}{lll}
\widehat{\rho(h,k,\omega,z)f}(k',n')\\[1ex]= \int_{\hat{K}\times \mathbb{T}}\rho(h,k,\omega,z)f(\xi,t) \overline{(k',n')(\xi,t)}d\mu_{\hat{K}}(\xi)d\mu_{\mathbb{T}}(t)\\[1ex]= \delta_H(h)^{-1/2}\int_{\hat{K}\times \mathbb{T}} f((\xi \overline{\omega})_{h^{-1}},(\xi \overline{\omega})_{h^{-1}}(\tau_{h^{-1}}(k^{-1}))tz^{-1}) \overline{\xi(k')}\, \overline{ t^{n'}} d\mu_{\hat{K}}(\xi)d\mu_{\mathbb{T}}(t)\\[1ex]= \delta_H(h)^{-1/2} \overline{z^{n'}}\int_{\hat{K}\times \mathbb{T}}f(\xi_{h^{-1}},\xi_{h^{-1}}(\tau_{h^{-1}}(k^{-1}))t)\, \overline{\omega(k')}\,\overline{\xi(k')}\, \overline{ t^{n'}} d\mu_{\hat{K}}(\xi)d\mu_{\mathbb{T}}(t)\\[1ex]=\delta_H(h)^{-1/2} \overline{z^{n'}}\,\overline{\omega(k')}\int_{\hat{K}\times \mathbb{T}}f(\theta_{(h^{-1},\tau_{h^{-1}}(k^{-1}))}(\xi,t))\,\overline{\xi(k')}\, \overline{ t^{n'}} d\mu_{\hat{K}}(\xi)d\mu_{\mathbb{T}}(t)\\[1ex]= \delta_H(h)^{-1/2} \overline{z^{n'}}\,\overline{\omega(k')}\int_{\hat{K}\times \mathbb{T}}(f \circ \theta_{(h,k)^{-1}})(\xi,t) \overline{(k',n')(\xi,t)}d\mu_{\hat{K}}(\xi)d\mu_{\mathbb{T}}(t)\\[1ex]= \delta_H(h)^{-1/2} \overline{z^{n'}}\,\overline{\omega(k')}\,\widehat{(f \circ \theta_{(h,k)^{-1}})}(k',n'),
\end{array}$\\\\
for all $(k',n')\in{K \times \mathbb{Z}}=\widehat{(\hat{K}\times \mathbb{T})}.$\\ So,
\begin{equation}\label{1}
\widehat{\rho(h,k,\omega,z)f}(k',n')=\delta_H(h)^{-1/2} \overline{z^{n'}}\,\overline{\omega(k')}\,\widehat{(f \circ \theta_{(h,k)^{-1}})}(k',n').
\end{equation}
\begin{theorem}
With the notation as above, let $\rho$ be the quasi-regular representation of $\mathbb{H}(G_\tau)$ and $\psi, f \in{L^2(\hat{K}\times \mathbb{T})}$.\\
(i) If $\psi$ is a wavelet vector, then
$$W_\psi f(h,k,\omega,z)= \delta_H^{-1/2}(h) \int_K \sum_{n' \in{\mathbb{Z}}} \hat{f}(k',n')z^{n'}\omega(k')\overline{\widehat{\psi \circ \theta_{(h,k)^{-1}}}(k',n')}d\mu_K(k').$$
(ii) The vector $\psi$ is a wavelet vector if
$$\int_{H\times_\tau K} \vert \hat{\psi}(k',n') \circ \theta_{(h,k)^{-1}}\vert^2 d\mu_{H\times_\tau K}(h,k) < \infty.$$
\begin{proof}
For $(k',n') \in{K \times \mathbb{Z}}$:\\
(i) By Plancherel's theorem and (\ref{1}), we have
\begin{eqnarray*}
W_\psi f(h,k,\omega,z)&=& \prec f, \rho(h,k,\omega,z)\psi \succ \\[1ex]&=& \prec \hat{f}, \widehat{\rho(h,k,\omega,z)\psi}\succ \\[1ex]&=& \delta_H^{-1/2}(h)\int_K \sum_{n' \in{\mathbb{Z}}} \hat{f}(k',n')z^{n'}\omega(k')\overline{\widehat{\psi \circ \theta_{(h,k)^{-1}}}(k',n')}d\mu_K(k').
\end{eqnarray*}
(ii) By applying part (i), for $f \in{L^2(\hat{K}\times \mathbb{T})}$ we get
\\$\begin{array}{lll}
\int_{\hat{K}\times \mathbb{T}} \vert W_\psi f(h,k,\omega,z) \vert^2 d\mu_{\hat{K}\times \mathbb{T}}(\omega,z)\\[1ex]= \int_{\hat{K}\times \mathbb{T}}W_\psi f(h,k,\omega,z) \overline{W_\psi f(h,k,\omega,z)}d\mu_{\hat{K}\times \mathbb{T}}(\omega,z)\\[1ex]= \delta_H^{-1}(h) \int_{\hat{K}\times \mathbb{T}} \Big(\int_K \sum_{n' \in{\mathbb{Z}}} \hat{f}(k',n')z^{n'}\omega(k')\overline{\widehat{\psi \circ \theta_{(h,k)^{-1}}}(k',n')}d\mu_K(k')\Big)\\\times\Big(\overline{\int_K \sum_{n'' \in{\mathbb{Z}}} \hat{f}(k'',n'')z^{n''}\omega(k'')\overline{\widehat{\psi \circ \theta_{(h,k)^{-1}}}(k'',n'')}d\mu_K(k'')}\Big)d\mu_{\hat{K}\times \mathbb{T}}(\omega,z)\\[1ex] =\delta_H^{-1}(h) \int_{\hat{K}\times \mathbb{T}}\vert \hat{F}(\omega,z)\vert^2d\mu_{\hat{K}\times \mathbb{T}}(\omega,z)\\[1ex] =\delta_H^{-1}(h) \int_{K\times \mathbb{Z}}\vert F(k',n')\vert^2d\mu_{{K}\times \mathbb{Z}}(k',n')\\[1ex] =\delta_H^{-1}(h)\int_K \sum_{n' \in{\mathbb{Z}}} \vert \hat{f}(k',n')\vert^2 \vert\widehat{\psi \circ \theta_{(h,k)^{-1}}}(k',n')\vert^2d\mu_K(k'),
\end{array}$\\\\
where $F=\hat{f}\,\overline{\widehat{\psi \circ \theta_{(h,k)^{-1}}}}\in{L^1(K\times \mathbb{Z}})$ and $\hat{F}$ denotes its Fourier transform on $\hat{K}\times\mathbb{T}$. It is easy to see that $$\widehat{\psi \circ \theta_{(h,k)^{-1}}}(k',n')=\delta_H^{-1}(h) \hat{\psi}(k',n')\circ \theta_{(h,k)^{-1}}.$$
Then
\begin{equation}\label{2.2}
\int_{\hat{K}\times \mathbb{T}} \vert W_\psi f(h,k,\omega,z) \vert^2 d\mu_{\hat{K}\times \mathbb{T}}(\omega,z)
= \delta_H^{-1}(h)\int_K \sum_{n' \in{\mathbb{Z}}} \vert \hat{f}(k',n')\vert^2 \vert\hat{\psi}(k',n') \circ \theta_{(h,k)^{-1}}\vert^2d\mu_K(k').
\end{equation}
Now, by using (\ref{2.2}) we have
\\$\begin{array}{lll}
\Vert W_\psi f\Vert_2^2 &=&\int_{\mathbb{H}(G_\tau)} \vert W_\psi f(h,k,\omega,z) \vert^2 d\mu_{\mathbb{H}(G_\tau)}(h,k,\omega,z)\\[1ex]&=&\int_{H\times_\tau K} \int_{\hat{K}\times \mathbb{T}}\vert W_\psi f(h,k,\omega,z) \vert^2 \delta_H^{-1}(h) d\mu_{\hat{K}\times \mathbb{T}}(\omega,z)d\mu_{H\times_\tau K} (h,k)\\[1ex]&=& \int_{H\times_\tau K} \int_K \sum_{n' \in{\mathbb{Z}}} \vert \hat{f}(k',n')\vert^2 \vert\hat{\psi}(k',n') \circ \theta_{(h,k)^{-1}}\vert^2d\mu_K(k')d\mu_{H\times_\tau K} (h,k)\\[1ex] &=&\Vert f \Vert_2^2 \int_{H\times_\tau K}\vert\hat{\psi}(k',n') \circ \theta_{(h,k)^{-1}}\vert^2d\mu_{H\times_\tau K} (h,k),
\end{array}$\\
and then the proof of part $(ii)$ is complete.
\end{proof}
\end{theorem}
\section{Examples and applications}
\begin{example}
Let $K$ be an abelian locally compact group and $H=\lbrace e \rbrace$ (the trivial group). In this case the generalized Weyl-Heisenberg group $\mathbb{H}(G_\tau)$ coincides with the standard Weyl-Heisenberg group $G:=K\times_\theta(\hat{K}\times \mathbb{T})$, and the square integrable representation of $G=K\times_\theta(\hat{K}\times \mathbb{T})$ on $L^2(\hat{K})$ is as follows:
\begin{equation}
\pi(k,\omega,z)f(\xi)=z\xi(k) \overline{\omega(k)} f(\xi \overline{\omega}).
\end{equation}
\end{example}
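For a concrete feel for this example, the finite cyclic case $K=\mathbb{Z}_N$ can be checked numerically. The sketch below is our own illustration: it verifies that $\pi(k,\omega,z)f(\xi)=z\,\xi(k)\overline{\omega(k)}f(\xi\bar{\omega})$ is multiplicative for the group law $(k_1,\omega_1,z_1)(k_2,\omega_2,z_2)=(k_1k_2,\omega_1\omega_2,\omega_2(k_1)z_1z_2)$, our reading of the general multiplication law specialized to $H=\lbrace e\rbrace$.

```python
import numpy as np

# Sketch: the Weyl-Heisenberg representation for K = Z_N (so K-hat = Z_N and
# xi(k) = exp(2 pi i xi k / N)).  We check numerically that
# pi(g1) pi(g2) = pi(g1 g2) for the H-trivial group law
# (k1, w1, z1)(k2, w2, z2) = (k1 + k2, w1 + w2, w2(k1) z1 z2).
N = 8
xi = np.arange(N)

def chi(w, k):
    """Character of Z_N: w(k) = exp(2 pi i w k / N)."""
    return np.exp(2j * np.pi * w * k / N)

def pi_rep(g, f):
    k, w, z = g
    return z * chi(xi, k) * np.conj(chi(w, k)) * f[(xi - w) % N]

def mult(g1, g2):
    k1, w1, z1 = g1
    k2, w2, z2 = g2
    return ((k1 + k2) % N, (w1 + w2) % N, chi(w2, k1) * z1 * z2)

rng = np.random.default_rng(1)
f = rng.normal(size=N) + 1j * rng.normal(size=N)
g1 = (3, 5, np.exp(0.7j))
g2 = (6, 2, np.exp(-1.1j))
lhs = pi_rep(g1, pi_rep(g2, f))
rhs = pi_rep(mult(g1, g2), f)
print(np.allclose(lhs, rhs))   # pi is a homomorphism
```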
\begin{example}
Let $E(n)$ be the Euclidean group, the semi-direct product $SO(n) \times_\tau \mathbb{R}^n$, where the continuous homomorphism $\tau:SO(n) \rightarrow Aut(\mathbb{R}^n)$ is given by $\sigma \mapsto \tau_\sigma$ via $\tau_\sigma(x)=\sigma x$, for all $x \in{\mathbb{R}^n}$. The group operation for $E(n)$ is
$$(\sigma_1,x_1)\times_\tau (\sigma_2,x_2)=(\sigma_1\sigma_2,x_1+\sigma_1x_2).$$ Consider the continuous homomorphism $\hat{\tau}: SO(n) \rightarrow Aut(\mathbb{R}^n)$ via $\sigma \mapsto \hat{\tau}_\sigma$, which is given by $\hat{\tau}_\sigma(\omega)=\omega_\sigma=\omega \circ \tau_{\sigma^{-1}}$. Thus the generalized Weyl-Heisenberg group of $E(n)$ is the set $\mathbb{H}(E(n))=(SO(n)\times_\tau \mathbb{R}^n) \times _\theta (\mathbb{R}^n \times \mathbb{T})$ with the group operation
\begin{equation*}
(\sigma_1,x_1,\omega_1,z_1)(\sigma_2,x_2,\omega_2,z_2)=(\sigma_1\sigma_2,x_1+\sigma_1 x_2,\omega_1(\omega_2)_{\sigma_1},(\omega_2)_{\sigma_1}(x_1)z_1z_2),
\end{equation*}
for all $(\sigma_1,x_1,\omega_1,z_1),(\sigma_2,x_2,\omega_2,z_2) \in{\mathbb{H}(E(n))}$, and with the product topology. Then the square integrable representation $\pi$ of $\mathbb{H}(E(n))$ on $L^2(\mathbb{R}^n)$ is
$$\pi(\sigma,x,\omega,z)f(\xi)=z\,e^{2\pi ix(\xi-\omega)}f((\xi-\omega)_{\sigma^{-1}}).$$ Note that $H=SO(n)$ is compact and $\delta_H(h)=1$.
\end{example}
\begin{example}
Let $\mathbb{H}(\mathbb{R}^n)=\mathbb{R}^n \times_\theta(\mathbb{R}^n \times \mathbb{T})$ be the classical Heisenberg group on $\mathbb{R}^n,$ in which the continuous homomorphism $x \mapsto \theta_x$ from $\mathbb{R}^n$ into $Aut(\mathbb{R}^n \times \mathbb{T}) $
is defined by $\theta_x(y,z)=(y,z e^{2\pi ix\cdot y})$. Then the square integrable representation $\pi$
of $\mathbb{H}(\mathbb{R}^n)$ on $L^2(\mathbb{R}^n)$ is $$ \pi(x,\omega,z)f(\xi)=z\,e^{2\pi i x(\xi-\omega)}f(\xi-\omega).$$
\end{example}
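Since $H=\lbrace e\rbrace$ is compact, $\pi$ is square integrable, and $W_\psi f(x,\omega,1)=\prec f,\pi(x,\omega,1)\psi\succ$ is, up to a phase, a short-time Fourier transform. The sketch below (a discretized illustration for $n=1$, with Gaussian test vectors of our choosing) checks the resulting orthogonality relation $\int\!\!\int |W_\psi f(x,\omega,1)|^2\,dx\,d\omega=\Vert f\Vert_2^2\Vert\psi\Vert_2^2$.

```python
import numpy as np

# Sketch: for the classical Heisenberg group (n = 1),
#   W_psi f(x, w, 1) = int f(xi) e^{-2 pi i x (xi - w)} conj(psi(xi - w)) dxi
# is a short-time Fourier transform, and the double integral of |W|^2 equals
# ||f||^2 ||psi||^2.  Grids and test functions are illustrative choices.
xi = np.linspace(-8.0, 8.0, 3201)
dxi = xi[1] - xi[0]
x = np.linspace(-6.0, 6.0, 121)
omega = np.linspace(-6.0, 6.0, 121)
dx = x[1] - x[0]
domega = omega[1] - omega[0]

f = np.exp(-np.pi * xi**2)
psi = lambda t: np.exp(-np.pi * (t - 0.3)**2)

phase = np.exp(-2j * np.pi * np.outer(x, xi))           # e^{-2 pi i x xi}
total = 0.0
for w in omega:
    g = f * np.conj(psi(xi - w))                        # windowed signal
    W = np.exp(2j * np.pi * x * w) * (phase @ g) * dxi  # W(x, w)
    total += np.sum(np.abs(W)**2) * dx * domega

norm_prod = (np.sum(np.abs(f)**2) * dxi) * (np.sum(np.abs(psi(xi))**2) * dxi)
print(total, norm_prod)   # both close to 0.5
```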
\bibliographystyle{amsplain}
{
"timestamp": "2021-02-18T02:16:29",
"yymm": "2102",
"arxiv_id": "2102.08719",
"language": "en",
"url": "https://arxiv.org/abs/2102.08719"
}
\section{Introduction}
Iron-based superconductors (FeSCs) are the second family of unconventional high-temperature superconductors. In most FeSCs, several Fe-derived $d$-bands cross the Fermi energy, forming electron- and hole-like pockets. Meanwhile, band structures and Fermi surfaces are quite different in various FeSCs, and they are also very sensitive to chemical doping or external pressure. The widely accepted $s^\pm$ pairing symmetry in some FeSCs is based on the nesting between hole pockets near the $\Gamma$ point and electron pockets of similar size around the M point within the weak-coupling scenario, but the gap symmetry and the gap structure can be different in other FeSCs because of their different Fermi-surface structures \cite{Review}.
The newly found $A\mathrm{Ca_2Fe_4As_4F_2}$ ($A =$ K, Rb, Cs) is a representative compound of the layered FeSCs, and the critical temperature ($T_\mathrm{c}$) ranges from 28 to 33 K \cite{K12442,RbCs12442,Growthmethod}. The crystal structure of $\mathrm{KCa_2Fe_4As_4F_2}$ (K12442) is shown in Fig.~\ref{fig1}(a) as an example; one can see that in these materials, double $\mathrm{FeAs}$ layers are separated by insulating $\mathrm{Ca_2F_2}$ layers. Such a layered structure results in a significant anisotropy of the superconductivity and normal-state resistance \cite{Growthmethod,anisotropytransport,magneticmeasurement,anisotropicJc}. The 12442-type FeSCs are supposed to have intrinsic hole conduction with a doping level of 0.25 hole/Fe. Interestingly, it is easy to transform the primary carrier from $p$-type to $n$-type by Co or Ni doping, but $T_\mathrm{c}$ decreases with increasing Co or Ni concentration \cite{Codoping,Nidoping}. Meanwhile, $T_\mathrm{c}$ can be slightly enhanced by applying a hydrostatic pressure \cite{pressureenhancement}. Transport measurements on K12442 single crystals suggest that the in-plane upper critical field is dominated by the Pauli paramagnetic effect instead of the orbital effect \cite{StrongPaulieffect}. Theoretical calculations predict several hole and electron pockets in K12442 \cite{Codoping,firstprinciple,Bandcaculation}. Based on a recent angle-resolved photoemission spectroscopy (ARPES) work on K12442, three separate hole pockets $\alpha$, $\beta$, $\gamma$ are observed around the $\Gamma$ point, and one tiny electron pocket together with four incipient hole bands (which barely touch the Fermi energy) are observed around the M point \cite{ARPES}. Obviously, this Fermi-surface topology cannot satisfy the nesting condition because of the very different sizes of the hole and electron pockets.
The nesting condition can be satisfied by Co doping K12442 to a doping level of about 0.1, but $T_\mathrm{c}$ then decreases to about 25 K \cite{Codoping}. From this point of view, the superconductivity in 12442-type FeSCs may be different from that in other FeSCs. ARPES measurements on K12442 exhibit six nodeless gaps with gap values ranging from 2 meV to 8 meV on the different Fermi pockets. The multiple and nodeless gap features are also supported by other experimental methods \cite{HeatTran,optical}. However, some other works claim that there might be line nodes on the superconducting gap(s) in 12442-type FeSCs \cite{uSRCs,uSRK,specific heat}. The controversy over the existence of gap nodes in the 12442 system requires further investigation. Although the nesting condition is not satisfied in 12442-type FeSCs, a spin resonance peak is still observed around $Q=(0.5,0.5)$ \cite{NSRK,NSRCs}, which corresponds to the scattering vector from the hole to the electron pockets. Here, the $s^\pm$ pairing symmetry with the spin resonance can be explained in the strong-coupling approach in the absence of the nesting condition \cite{DaiReview}. In addition, a spin resonance mode with a downward dispersion is observed in K12442, and this kind of dispersion is similar to the behavior in cuprates \cite{NSRK}.
In this paper, we report an experimental study of $\mathrm{KCa_2Fe_4As_4F_2}$ single crystals by scanning tunneling microscopy/spectroscopy (STM/STS). A fully gapped feature is observed on almost all tunneling spectra. We also conduct quasiparticle interference (QPI) measurements on the sample in order to obtain information on the Fermi pockets. Our results provide rich information on this multi-band superconductor.
\section{Experimental Methods}
\begin{figure}[htbp]
\centering
\includegraphics[width=8cm,height=8.5cm]{figure1.eps}
\caption{(a) Crystal structure of $\mathrm{KCa_2Fe_4As_4F_2}$. (b) Temperature dependent magnetization measured in zero-field cooled (ZFC) and field-cooled (FC) processes under a magnetic field of 10 Oe. (c) Temperature dependence of normalized in-plane resistance measured at 0 T.
} \label{fig1}
\end{figure}
The KCa$_2$Fe$_4$As$_4$F$_2$ single crystals used in this work were grown by the self-flux method \cite{Growthmethod}. Temperature dependent magnetization and normalized resistance are shown in Figs.~\ref{fig1}(b) and \ref{fig1}(c); both show sharp superconducting transitions with a critical temperature $T_\mathrm{c}$ of about 33.5 K determined from the zero-resistance point. STM/STS measurements were carried out in a scanning tunneling microscope (USM-1300, Unisoku Co., Ltd.). The K12442 samples were cleaved at about 77 K in an ultrahigh vacuum with a base pressure of about $1\times10^{-10}$ Torr, and they were then transferred to the microscope head, which was kept at a low temperature. Electrochemically etched tungsten tips were used for STM/STS measurements after cleaning by electron-beam heating. A typical lock-in technique was used in tunneling spectrum measurements with an ac modulation of 0.1 mV and a frequency of 931.773 Hz. Voltage offsets were carefully calibrated before the STS measurements.
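The lock-in detection mentioned above can be sketched numerically: adding a small modulation $V_\mathrm{ac}\cos(\omega t)$ to the dc bias and demodulating the current at the first harmonic yields a signal proportional to $dI/dV$. The $I(V)$ curve below is a toy placeholder rather than a measured junction characteristic; only the 0.1 mV amplitude and 931.773 Hz frequency follow the text.

```python
import numpy as np

# Sketch of the lock-in principle behind the dI/dV spectra: a small ac
# modulation V_ac*cos(wt) added to the dc bias makes the first-harmonic
# amplitude of the current proportional to dI/dV.  I(V) is a toy curve.
f_mod = 931.773            # modulation frequency (Hz), as in the experiment
V_ac = 1e-4                # 0.1 mV modulation amplitude (V)

def current(V):            # placeholder junction I(V) characteristic
    return 1e-9 * np.tanh(V / 5e-3)

t = np.linspace(0.0, 10.0 / f_mod, 100000, endpoint=False)   # 10 full periods
V_dc = 2e-3
I = current(V_dc + V_ac * np.cos(2 * np.pi * f_mod * t))
# demodulate: first-harmonic amplitude = 2 * <I(t) cos(wt)>
amp = 2 * np.mean(I * np.cos(2 * np.pi * f_mod * t))
didv_lockin = amp / V_ac
didv_true = (current(V_dc + 1e-7) - current(V_dc - 1e-7)) / 2e-7
print(didv_lockin, didv_true)   # agree to leading order in V_ac
```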
\section{Results}
\subsection{Topography and tunneling spectra}
Figure~\ref{fig2}(a) shows a typical topographic image measured on the surface of a K12442 single crystal. Based on the lattice structure of K12442, there are layers of alkali-metal K atoms and of alkaline-earth-metal Ca atoms. The cleavage may occur in these layers with the relatively weak bonding energy. After the cleavage, most probably, half of the K or Ca atoms remain in the surface layer of each separated part, which makes both surfaces nonpolar. This is supported by the atomically resolved topography shown in the upper-right inset of Fig.~\ref{fig2}(a), measured on a flat area far away from any defects. The topography shows a square lattice with a lattice constant of about 5.3 \AA, which is approximately $\sqrt2$ times the K-K or Ca-Ca lattice constant ($a_0=3.87$ \AA). From the topographic image, one can see that there are many hollows of different sizes on the flat background. The depths of the hollows range from 100 to 300 pm, and the hollows can be clearly seen in the re-scanned image shown in the lower-left inset of Fig.~\ref{fig2}(a). Similar kinds of hollows have been observed in NaFe$_{1-x}$Co$_x$As \cite{NaFeCoAs}, LiFeAs \cite{LiFeAs}, and RbFe$_2$As$_2$ \cite{RbFe2As2}, but with much lower densities, and they may be assembled vacancies of alkali-metal atoms on the reconstructed surface. In Fig.~\ref{fig2}(b), we show a typical tunneling spectrum of the surface measured in a wide energy window. The differential conductance is much larger on the negative-bias side than on the positive-bias side, which is consistent with the asymmetric density of states from previous band-calculation results \cite{firstprinciple,Bandcaculation}.
\begin{figure}[H]
\centering
\includegraphics[width=8cm]{figure2.eps}
\caption{(a) A typical topographic image taken on a surface of K12442 measured at $T = 1.7$ K with setpoint conditions of $V_\mathrm{set} = 20$ mV and $I_\mathrm{set} = 200$ pA. The inset in the upper-right corner shows the atomically resolved topography measured in another flat area ($V_\mathrm{set} = 10$ mV and $I_\mathrm{set} = 500$ pA). The inset in the lower-left corner shows a re-scanned image, with a higher resolution, of the area in (a) marked by the red dashed square ($V_\mathrm{set} = 20$ mV, $I_\mathrm{set} = 200$ pA). (b) A typical tunneling spectrum measured in an energy window far beyond the superconducting gap ($T = 1.7$ K, $V_\mathrm{set}$ = 300 mV, and $I_\mathrm{set}$ = 500 pA). (c) Two tunneling spectra measured at the positions marked by the red and black crosses in the lower-left inset of (a) ($T = 1.7$ K, $V_\mathrm{set}$ = 10 mV, and $I_\mathrm{set}$ = 200 pA). The red spectrum is measured at the center of the red cross in the hollow, while the black spectrum is measured at the center of the black cross in the flat area. The blue dashed line shows the fitting curve by the Dynes model with a slightly anisotropic $s$-wave gap function. (d) Spatially resolved tunneling spectra measured along the yellow dashed line in the lower-left inset of (a) ($T = 0.4$ K, $V_\mathrm{set}$ = 20 mV, and $I_\mathrm{set}$ = 200 pA). The positions of the coherence peaks are marked by green arrows. The green spectrum is measured at the center of the green cross shown in the lower-left inset of (a); it exhibits a peak at a bias voltage of about $-2$ mV (marked by a blue arrow), which may be an impurity-induced state. (e) Statistics of the peak energies in 845 tunneling spectra measured at randomly selected points in the area of (a) ($T = 1.7$ K, $V_\mathrm{set} = 10$ mV, and $I_\mathrm{set} = 200$ pA). The blue arrows indicate the existence of low-energy peaks at about $\pm2.2$ meV, which may be induced by impurities.
} \label{fig2}
\end{figure}
Figure~\ref{fig2}(c) shows two tunneling spectra measured at the two marked positions in the lower-left inset of Fig.~\ref{fig2}(a), i.e., one measured at a position in a hollow and the other at a position on the flat area far away from the hollows. One can see that the two spectra show almost the same features, which suggests that the hollows have very little influence on the superconductivity. A slight suppression of the intensity of the coherence peaks can be observed on the spectrum measured in the hollow compared to that measured on the flat area. Both spectra show a full-gap feature with a pair of coherence peaks located at energies of about $\pm$4.6 meV. We then use the Dynes model \cite{Dynes} with an $s$-wave gap to fit the spectrum measured on the flat area. The best fitting result is shown as the dashed curve in Fig.~\ref{fig2}(c), and it requires a slightly anisotropic $s$-wave gap. The obtained gap function reads $\Delta(\theta) = 4.6(0.93+0.07\cos2\theta)$ meV, with the scattering rate $\Gamma = 0.1$ meV. Here the gap maximum $\Delta_\mathrm{max}$ is close to the energy of the coherence peaks, and it is also similar to the gap values of the hole pockets $\alpha$ and $\beta_1$ or the electron pocket $\delta$ from the ARPES measurements \cite{ARPES}. We also measured a set of tunneling spectra along the dashed line in the lower-left inset of Fig.~\ref{fig2}(a), and the spectra are shown in Fig.~\ref{fig2}(d). All the spectra are homogeneous except for a slight change of the coherence-peak energy. On the spectrum shown in green in Fig.~\ref{fig2}(d), one can see a small peak at about 2 mV marked by a blue arrow. It should be noted that this spectrum is not measured in a hollow, and there is no distinctive feature in the topography at this position. The peak is likely to be a bound state induced by an impurity underneath the surface, which will be discussed below in Subsection~\ref{impurity}.
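The Dynes fit quoted above can be sketched numerically. The block below (an illustration, not the fitting code used for the data) evaluates the angle-averaged Dynes density of states for the quoted gap function $\Delta(\theta)=4.6(0.93+0.07\cos2\theta)$ meV with $\Gamma=0.1$ meV; thermal broadening and the least-squares fitting step are omitted.

```python
import numpy as np

# Sketch: angle-averaged Dynes density of states,
#   N(E) = < | Re[ (E - i*Gamma) / sqrt((E - i*Gamma)^2 - Delta(theta)^2) ] | >_theta,
# with Delta(theta) = 4.6*(0.93 + 0.07*cos(2*theta)) meV and Gamma = 0.1 meV.

def dynes_dos(E, delta0=4.6, gamma=0.1, ntheta=720):
    theta = np.linspace(0.0, 2.0 * np.pi, ntheta, endpoint=False)
    delta = delta0 * (0.93 + 0.07 * np.cos(2.0 * theta))
    Ec = np.atleast_1d(E).astype(complex)[:, None] - 1j * gamma
    n = np.abs(np.real(Ec / np.sqrt(Ec**2 - delta[None, :]**2)))
    return n.mean(axis=1)           # average over the gap anisotropy

E = np.linspace(0.0, 8.0, 801)      # energy grid in meV
dos = dynes_dos(E)
print(dos[0])                       # deep in-gap: small residual DOS ~ Gamma/Delta
print(E[np.argmax(dos)])            # coherence peak near the gap maximum of 4.6 meV
```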
We then conducted tunneling spectrum measurements all over the area in Fig.~\ref{fig2}(a) and compiled statistics of the peak energies of the 845 measured spectra. The result is shown in Fig.~\ref{fig2}(e). The coherence peaks are mostly located between 4.4 and 5.4 meV. About 2\% of the spectra have low-energy peaks within the energy range of $\pm(2.2\pm0.4)$ mV; these peaks can appear either at the hollow positions or in the flat area.
\subsection{Results of quasiparticle interference}
\begin{figure}[htbp]
\centering
\includegraphics[width=8cm]{figure3.eps}
\caption{(a) Topographic image ($V_\mathrm{set}$ = 30 mV, $I_\mathrm{set}$ = 200 pA) and (b) the corresponding normal-state QPI mapping at $E = 10$ meV ($V_\mathrm{set} = 30$ mV, $I_\mathrm{set} = 200$ pA) measured in the same area. (c) Schematic plot of the Fermi surfaces derived from a previous ARPES work \cite{ARPES}. (d) The Fourier-transformed pattern of the QPI mapping in (b). (e) Line profile of the intensity of the FT-QPI pattern along the black dashed line in (d); the arrows indicate the positions of the scattering vectors connecting the $\Gamma$ and M points. (f) The simulated scattering patterns between the hole and electron pockets, plotted as grey patterns centered at a distance of $\sqrt{2}\pi/a_0$ from the center of the FT-QPI pattern. Here the grey patterns are the selected patterns in the self-correlated image of (c), and the arrow represents the scattering vector connecting the $\Gamma$ and M points. Comparing (d) and (f), one can see that the scattering between the hole and electron pockets is not detected in our experimental data. All measurements are carried out at 0.7 K.
} \label{fig3}
\end{figure}
\begin{figure*}[htbp]
\centering
\includegraphics[width=18cm,height=13cm]{figure4.eps}
\caption{(a) A large-scale topographic image measured on the surface of K12442 ($V_\mathrm{set}$ = 10 mV, $I_\mathrm{set}$ = 200 pA). (b)-(j) QPI mappings measured in (a) at different energies with $B = 0$ T and $T= 1.1$ K ($V_\mathrm{set}$ = 10 mV, $I_\mathrm{set}$ = 200 pA). Circles in (a) and (c) mark the positions of impurities with the bound-state energy at about $\pm2.3$ mV. The locations of these impurities seem to be irrelevant to the hollows on the top surface. (k)-(s) Corresponding FT-QPI patterns derived by Fourier transformation of the QPI mappings in (b)-(j), respectively. A 2D-Gaussian-function background with a full width at half maximum of about $0.18\pi/a_0$, which is a very small value, is subtracted at the center of all the FT-QPI patterns. (t) The energy dispersion of the intensity in the FT-QPI patterns along the diagonal direction marked by the blue dashed line in (m). The dispersion appears to be hole-like. The dashed curve is a parabolic guide line, from which we obtain the energy $E_b\approx24\pm6$ meV for the top of the band. Red full squares represent the sizes of the FT-QPI patterns marked by orange dashed circles in (m)-(o) and (q)-(s). The error bars in (t) roughly reflect the estimated width of the circle-like outline of the FT-QPI pattern.
} \label{fig4}
\end{figure*}
QPI measurements and the related analysis are very useful because they can provide information on the Fermi surface \cite{Fe-STM-Hoffman}, the gap anisotropy \cite{LiFeAsani,FeSe11111ani}, and the gap signs \cite{HanaguriScience,LiFeAss+-,FeSes+-,FeSe11111s+-,Bi2212d} in a superconductor. We also measured differential conductance mappings, and a QPI mapping at $E = 10$ meV is shown in Fig.~\ref{fig3}(b). Although the hollows in the topography do not affect the superconductivity too much, standing waves can be clearly seen surrounding these hollows. Fourier transformation of the QPI mapping yields the Fourier-transformed (FT-) QPI pattern shown in Fig.~\ref{fig3}(d). In K12442, a previous ARPES work \cite{ARPES} observed three hole pockets ($\alpha$, $\beta$, and $\gamma$) around the $\Gamma$ point and one tiny electron pocket ($\delta$) around the M point; a sketch of the Fermi surfaces is plotted in Fig.~\ref{fig3}(c). The intra-band scattering should be located around the center of the FT-QPI pattern in Fig.~\ref{fig3}(d), while the simulated scattering between the hole and the electron pockets is plotted as the four grey patterns in Fig.~\ref{fig3}(f). However, these scattering patterns are not observed in the experimental data in Fig.~\ref{fig3}(d). This is clearer in the line profile of the FT-QPI intensity shown in Fig.~\ref{fig3}(e), which is almost featureless near $q=\pm\sqrt{2}\pi/a_0$, the vector connecting the $\Gamma$ and M points in momentum space. The absence of the characteristic scattering patterns between the hole and electron pockets may be explained by the following two possible reasons: (a) a low density of states at the Fermi energy for the small electron $\delta$ pockets; (b) an insensitive tunneling matrix element effect. A detailed discussion is given in Section~\ref{discussion}.
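The grey patterns in Fig.~\ref{fig3}(f) come from self-correlating the Fermi-surface sketch. A cartoon version of this joint-density-of-states construction is sketched below; all pocket positions, radii, and widths are illustrative placeholders, not values extracted from the ARPES data.

```python
import numpy as np

# Sketch: a cartoon joint-density-of-states (JDOS) simulation of the FT-QPI,
# computed as the autocorrelation of a schematic Fermi-surface image.
# k is in units of pi/a0, so the M point sits at the zone corner (1, 1)
# and |Gamma - M| = sqrt(2)*pi/a0.  Radii/widths are placeholders.
n = 256
k = np.linspace(-1.0, 1.0, n, endpoint=False)
kx, ky = np.meshgrid(k, k)

def ring(cx, cy, r, w=0.03):
    """Gaussian-broadened circular pocket contour centered at (cx, cy)."""
    d = np.sqrt((kx - cx)**2 + (ky - cy)**2)
    return np.exp(-((d - r) / w)**2)

fs = ring(0, 0, 0.1) + ring(0, 0, 0.2) + ring(0, 0, 0.35)   # hole pockets at Gamma
for cx, cy in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:         # tiny electron pockets at M
    fs += ring(cx, cy, 0.05)

# autocorrelation via FFT: JDOS(q) = sum_k A(k) A(k + q)
F = np.fft.fft2(fs)
jdos = np.real(np.fft.fftshift(np.fft.ifft2(np.abs(F)**2)))
print(jdos.shape, np.unravel_index(np.argmax(jdos), jdos.shape))  # max at q = 0
```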
Since we did not observe the scattering between the hole and electron pockets, we try to extract information on the intra-band scattering from QPI data taken in a large area. The QPI mappings and corresponding FT-QPI patterns are shown in Fig.~\ref{fig4}. At zero energy, almost no clear features can be observed in the QPI mapping in Fig.~\ref{fig4}(b), which is consistent with the full-gap feature from the tunneling spectrum measurements. At $E=\pm2.3$ meV, one can clearly see in Fig.~\ref{fig4}(c) that there are about 15 bright spots induced by the impurity bound state at this energy. Going back to the topographic image in Fig.~\ref{fig4}(a), we can conclude that there is no evident correlation between the impurity locations and the hollows on the top surface. The impurities may sit beneath the top surface, e.g., in the FeAs layer. The most frequent superconducting gap value, i.e., the coherence-peak energy, is about 4.6 meV from the tunneling spectrum measurements on this surface, so it is understandable that the intensity of the FT-QPI patterns is very strong at $E=\pm4.6$ meV. Although we cannot distinguish the detailed scattering features, it is clear that there are fourfold diamond-like patterns around the center of the FT-QPI patterns. The traces of these outlines are marked by yellow dashed lines in Figs.~\ref{fig4}(m)-(o) and \ref{fig4}(q)-(s). The size of the fourfold diamond-like pattern shrinks with increasing energy, which can be clearly seen in the energy dispersion plot [Fig.~\ref{fig4}(t)]. This result suggests that the scattering may be due to intra-band scattering between the hole pockets near $\Gamma$. When we fit the dispersion data with a parabola, the energy of the band top is estimated to be about $24\pm6$ meV, and the diameter of the relevant Fermi pocket is about $0.18\pi/a_0$. These values are close to the parameters of the hole-like $\alpha$ pocket determined from the ARPES measurements \cite{ARPES}.
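The parabola fit for the band top can be sketched as follows. The dispersion points below are synthetic stand-ins generated from a parabola with an assumed $E_b = 24$ meV plus noise; they are not the measured points of Fig.~\ref{fig4}(t) and only illustrate the fitting step.

```python
import numpy as np

# Sketch of the band-top estimate: fit E(q) = E_b - c*q^2 to dispersion points
# extracted from the FT-QPI pattern sizes.  The points are synthetic stand-ins
# (parabola with E_b = 24 meV plus noise), NOT the measured data.
rng = np.random.default_rng(0)
q = np.linspace(0.06, 0.18, 7)                                    # |q| in units of pi/a0
E = 24.0 * (1.0 - (q / 0.20)**2) + rng.normal(0.0, 0.5, q.size)   # meV

slope, intercept = np.polyfit(q**2, E, 1)       # linear in q^2: E = intercept + slope*q^2
print(f"band top E_b ~ {intercept:.1f} meV")    # close to the assumed 24 meV
```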
\subsection{Impurity bound states}\label{impurity}
\begin{figure}[htbp]
\centering
\includegraphics[width=8cm,height=9cm]{figure5.eps}
\caption{(a) Topographic image ($V_\mathrm{set}$ = 20 mV, $I_\mathrm{set}$ = 200 pA) and (b) QPI mapping at $E = 2$ meV ($V_\mathrm{set}$ = 20 mV, $I_\mathrm{set}$ = 200 pA) measured in the same area. One can see three bright spots in (b). (c) A set of tunneling spectra ($V_\mathrm{set}$ = 20 mV, $I_\mathrm{set}$ = 200 pA) measured along the arrowed line in (a) or (b), crossing the centers of two of these spots. The tunneling spectra in red are those measured in the areas of the bright spots. All measurements are carried out at 1.7 K.
} \label{fig5}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=8cm]{figure6.eps}
\caption{Temperature dependent evolution of the tunneling spectra measured (a) at the bright-spot center and (b) far away from any impurities. Specifically, (a) and (b) are measured at centers of the red and yellow crosses marked in the inset of (a), respectively. (c) Magnetic field evolution of the tunneling spectra measured at the impurity center. The inset shows the enlarged view of the bound state peak at about $-2$ meV. Setpoint conditions: $V_\mathrm{set}$ = 20 mV, $I_\mathrm{set}$ = 200 pA.
} \label{fig6}
\end{figure}
In order to investigate the low-energy peaks at about $\pm2.2$ meV, we measured the spatial evolution of the tunneling spectra [Fig.~\ref{fig5}(c)] along the arrowed line across two bright spots in the QPI mapping [Fig.~\ref{fig5}(b)]. One can see in Fig.~\ref{fig5}(c) that the tunneling spectra show similar peak features when crossing these two spots. Such peaks become even sharper at 0.4 K; one example can be seen in Fig.~\ref{fig6}(a). Even the spectral feature near zero bias can be affected by these low-energy peaks. The extremely sharp peaks argue against the possibility of a smaller superconducting gap, because we cannot fit the experimental data well by using the Dynes model with two superconducting gaps. In addition, the peaks at about $\pm2$ meV disappear at 8 K, whereas the coherence peaks at about $\pm5$ meV survive above 15 K. Following the discussions above, we argue that the low-energy peaks are impurity-induced bound states although these impurities are located beneath the surface. In Fig.~\ref{fig6}(c), we show tunneling spectra measured at the center of a bright spot under different magnetic fields. The amplitude of the impurity-induced peak decreases with increasing magnetic field. The inset of Fig.~\ref{fig6}(c) shows an enlarged view of the impurity-induced peak on the negative-energy side; the peak energy is almost unchanged under a magnetic field of 4 T. From a previous report, the field-induced peak-shift slope is about 0.06 meV/T for the bound state induced by a magnetic Fe-vacancy impurity with the Land\'{e} factor $g=2$ \cite{XueFevacancy}. Based on this slope, the expected energy shift is about 0.24 meV when the field changes from 0 to 4 T. However, the observed shift is negligible for the impurity bound state in the K12442 sample, so we argue that the impurity may be nonmagnetic or weakly magnetic.
In addition, the peak feature is similar to the bound state peak of the As vacancy \cite{AsVancancy}, so impurities are likely to be non-magnetic As vacancies in the FeAs layer underneath.
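As a quick sanity check of the arithmetic above (a sketch using only the quoted slope and field values; the variable names are ours, not from the original analysis):

```python
# Expected Zeeman shift of a magnetic-impurity bound state, using the slope
# reported for a magnetic Fe-vacancy impurity with g = 2 (~0.06 meV/T).
slope_meV_per_T = 0.06   # field-induced peak-shift slope (meV/T)
delta_B = 4.0            # field change from 0 to 4 T

shift = slope_meV_per_T * delta_B
print(f"expected shift for a magnetic impurity: {shift:.2f} meV")
# A ~0.24 meV shift would be resolvable, so the absence of any observable
# shift supports a non-magnetic (or only weakly magnetic) impurity.
```

The comparison is only order-of-magnitude: the actual slope depends on the impurity's Landé factor, which is unknown here.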
\subsection{Tunneling spectra measured on another kind of terminated surface}
\begin{figure}[htbp]
\centering
\includegraphics[width=8cm]{figure7.eps}
\caption{(a) Topographic image of another kind of terminated surface ($V_\mathrm{set} = 1$ V, $I_\mathrm{set} = 20$ pA). Hollows in this topography have a lower density and a smaller average size when compared with those in Fig.~\ref{fig2}(a). The inset shows the atomically resolved topography measured in a flat area far away from hollows ($V_\mathrm{set}$ = 100 mV, $I_\mathrm{set}$ = 200 pA). (b) A typical tunneling spectrum measured over a wide energy range ($V_\mathrm{set}$ = 100 mV, $I_\mathrm{set}$ = 200 pA). (c-f) Tunneling spectra measured at the marked positions in (a) ($V_\mathrm{set}$ = 30 mV, $I_\mathrm{set}$ = 200 pA). The characteristic peaks are marked by arrows. (g) Statistics of the peak energies derived from 900 spectra measured on a 30$\times$30 grid of points uniformly distributed over the area of (a) ($V_\mathrm{set}$ = 50 mV, $I_\mathrm{set}$ = 500 pA). All measurements are carried out at 1.7 K.
} \label{fig7}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=8cm]{figure8.eps}
\caption{(a) Topographic image with an exposed low-lying area of the underlying layer ($V_\mathrm{set}$ = 50 mV, $I_\mathrm{set}$ = 100 pA). The inset shows the line profile of the surface height along the arrowed line; the exposed low-lying layer is on average about 1 \AA\ lower than the top surface layer. (b) A set of tunneling spectra measured along the dashed line in (a) ($V_\mathrm{set}$ = 50 mV, $I_\mathrm{set}$ = 200 pA). (c) Color mapping of the coherence-peak energy on the positive-bias side based on 900 tunneling spectra measured on a 30$\times$30 grid of points uniformly distributed over the area of (a) ($V_\mathrm{set}$ = 50 mV, $I_\mathrm{set}$ = 200 pA). All measurements are carried out at 1.7 K.
} \label{fig8}
\end{figure}
In the K12442 sample, we also observe other areas with different topographic features, and one example is shown in Fig.~\ref{fig7}(a). This kind of surface is rarely observed, appearing only about once in every eight cleavages. On this surface the hollows are much smaller than those shown in Fig.~\ref{fig2}(a). However, from the atomically resolved surface shown in the inset of Fig.~\ref{fig7}(a), the lattice constant is also about 5.3 \AA, which is close to the value obtained in Fig.~\ref{fig2}(a). Figure~\ref{fig7}(b) shows a tunneling spectrum measured in a wide bias range. The spectrum behaves similarly to the one shown in Fig.~\ref{fig2}(b): both show particle-hole asymmetry. The quantitative difference is the considerable bias-voltage dependence of d$I/$d$V$ on the negative-bias side of the spectrum shown in Fig.~\ref{fig7}(b). This suggests slightly different band structures on these two kinds of surfaces. In the area of Fig.~\ref{fig7}(a), we can detect spectra with different gap energies, and four examples are shown in Figs.~\ref{fig7}(c)-\ref{fig7}(f). In the spectrum shown in Fig.~\ref{fig7}(f), we can also see low-energy peaks at about $\pm2.5$ meV, which may be attributed to impurity-induced bound states. Besides, the coherence peaks in these spectra span a wider energy range compared to the ones shown in Fig.~\ref{fig2}(d). In addition, hump features can be observed near $\pm10$ meV in some spectra. The humps in the tunneling spectra may be the feature of a larger superconducting gap, or may arise from a bosonic mode as observed in many FeSCs \cite{Bosonicmode1,Bosonicmode2,LiFeAsMode}. Figure~\ref{fig7}(g) shows the statistics of all the peak energies based on 900 spectra measured in the area of Fig.~\ref{fig7}(a). One can see that the coherence peaks lie in the range of $\pm$(4-8) meV with possible local maxima at about $\pm4.4$, $\pm5.4$, and $\pm6.2$ meV.
This suggests a multi-gap feature of the superconductor. The maximum probability of the hump feature appears at about $\pm9.8$ meV.
Figure~\ref{fig8}(a) shows a typical topography where a low-lying layer can be observed. The size of such an exposed low-lying area is dozens of nanometers, and its average height is about 1 \AA\ lower than the top surface. We then measure tunneling spectra across the exposed low-lying area and show them in Fig.~\ref{fig8}(b). One can see that the fully gapped feature is observed in all the spectra, but the energies of the coherence peaks shift from about $\pm6$ meV on the top layer to about $\pm8$ meV in the exposed low-lying area. The feature is clearer in the coherence-peak energy mapping shown in Fig.~\ref{fig8}(c), which shows a much larger coherence-peak energy in the exposed low-lying area.
\section{Discussions}\label{discussion}
On the surfaces of $\rm KCa_2Fe_4As_4F_2$ single crystals, we observe the $\sqrt{2}\times\sqrt{2}$ reconstructed layer of atoms. This is consistent with the fact that half of the potassium or calcium atoms of a layer should stay and reconstruct on each cleaved surface, which leads to electrically unpolar surfaces. It should be noted that there are many hollows with sizes of sub- to several nanometers randomly distributed on the surface. Such topography with hollows is very common in FeSCs whose exposed surface is composed of alkali-metal atoms \cite{NaFeCoAs,LiFeAs,RbFe2As2}. The hollow population and distribution may be different on a surface terminated by alkaline-earth-metal atoms. From this point of view, and considering the observed features, the commonly obtained top surface may be reconstructed by K atoms, while the rarely achieved surface may be reconstructed by Ca atoms in K12442 samples. The hollows are then either K or Ca vacancies in the two cases, which may be due to the easy loss of these atoms during the cleaving procedure. The missing K/Ca atoms on the surface will adjust the band structure slightly, which is supported by the different features beyond the gap in the spectra shown in Figs.~\ref{fig2}(b) and \ref{fig7}(b).
We observe an obvious fully gapped feature in most tunneling spectra measured in K12442 single crystals, which indicates the absence of nodes and is consistent with other measurements \cite{ARPES,HeatTran,optical}. Most of the coherence peaks locate in the energy range from 4 to 8 meV, and these gap values are comparable to those reported previously \cite{ARPES,optical,Hc1}. Since the dominant contribution to the FT-QPI is consistent with the intrapocket scattering of the $\alpha$ pocket, the superconducting gap values obtained on the easily achieved surface are likely to be assigned to this hole pocket. Therefore the superfluid may be mainly contributed by the hole-like Fermi surfaces near the $\Gamma$ point. The situation is similar to that in CaKFe$_4$As$_4$: several hole and electron pockets are observed near the $\Gamma$ and M points by ARPES measurements \cite{ARPES1144}, but only the scattering between two hole pockets is observed in FT-QPI patterns from STM measurements \cite{STM1144}. The hump feature at about $\pm9.8$ meV observed on the other type of surface in K12442 may be the larger superconducting gap reported previously \cite{uSRK,Hc1} or a bosonic mode corresponding to some gap(s). If it is a bosonic mode, the mode energy is about 2-6 meV after subtracting the superconducting gap energy. However, the measured bosonic-mode energy is as large as 16 meV from neutron scattering experiments \cite{NSRK}, thus this feature most likely reflects an energy gap. Although we have observed some peaks on the spectra near $\pm2.2$ mV, due to the existence of some very strong impurity bound states appearing near this energy, we cannot be sure that these features reflect a small superconducting gap \cite{ARPES,uSRK,Hc1}.
From our experimental data, the confirmable superconducting gap ranges from 4 to 8 meV, and the dominant contribution to the superfluid may be from the bands with gaps of about 4.4 to 5.4 meV, most probably the hole-derived $\alpha$ band near $\Gamma$. However, the determined Fermi energy of the $\alpha$ band is only about $24\pm6$ meV, which indicates that the basic requirement of the Bardeen-Cooper-Schrieffer (BCS) theory in the weak-coupling limit, namely $E_\mathrm{F}\gg\Delta$, cannot be satisfied in K12442. This strongly suggests that the superconductivity in the K12442 material possesses an intrinsically unconventional feature.
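The departure from the weak-coupling condition can be made explicit with a small numerical check (illustrative only; the values are those quoted in the text, and the representative gap list is our choice):

```python
# Weak-coupling BCS assumes E_F >> Delta.  Check the ratio Delta/E_F with
# the values quoted in the text: E_F(alpha band) ~ 24 meV, gaps of 4-8 meV.
E_F = 24.0               # Fermi energy of the alpha band (meV)
gaps = [4.0, 5.4, 8.0]   # representative gap values (meV), chosen for illustration

ratios = [d / E_F for d in gaps]
for d, r in zip(gaps, ratios):
    print(f"Delta = {d:.1f} meV  ->  Delta/E_F = {r:.2f}")
# Delta/E_F of order 0.2-0.3 is far from the weak-coupling limit
# (Delta/E_F << 1), consistent with the unconventional character argued above.
```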
In FeSCs, the scenario of $s^\pm$ pairing is based on the nesting between hole and electron pockets of similar sizes in the weak-coupling scenario. This pairing manner was first inferred from QPI measurements on Fe(Se,Te) by comparing the difference between zero and finite applied magnetic field \cite{HanaguriScience}. It was later further strengthened by the impurity effect \cite{YangHNC2013} and phase-resolved QPI analysis \cite{ChenMYPRBFeTeSe2019}. However, this pairing mechanism is challenged in K12442 because the electron pocket at the M point is too small \cite{ARPES}, so the nesting condition cannot be satisfied. In our measurements, we cannot even observe the scattering pattern between the hole pockets and the tiny electron pockets in the FT-QPI result. There are two possible reasons for this. The first is the tunneling matrix-element effect, due to which we cannot detect the electron $\delta$ pocket. In fact we can only detect the intrapocket scattering of the $\alpha$ pocket, and we cannot even detect the scattering related to the $\beta$ and $\gamma$ pockets. Hence, it is not strange that we have not detected scattering based on the $\delta$ pocket. The other possible reason is the low density of states of the $\delta$ pocket. However, in either case, the very different sizes of the hole and electron pockets challenge the nesting picture of $s^\pm$ pairing in the weak-coupling scenario. From the tunneling spectra obtained in this work, if the impurity is non-magnetic in nature as we argued, the impurity bound states at about $\pm2.2$ meV may suggest a sign change of the superconducting gaps in this superconductor. In any case, it remains unclear but interesting to know what role is played by the shallow electron pocket and even the incipient hole bands \cite{ARPES} near the M point, and whether they help to form the ``incipient'' $s^\pm$ pairing \cite{incipient}.
\section{Conclusion}
In conclusion, using a scanning tunneling microscope, we have investigated the superconducting gaps and pairing mechanism of KCa$_2$Fe$_4$As$_4$F$_2$ single crystals. Most spectra exhibit a full-gap feature with gap values from 4 to 8 meV. In a few spectra, peaks can be observed at about $\pm2.2$ meV, which can be attributed to impurity-induced bound states. We have not seen the characteristic scattering pattern between hole and electron pockets in the QPI data and related analysis; however, the FT-QPI pattern can be well described by the intrapocket scattering of the $\alpha$ pocket near the $\Gamma$ point. The dispersion derived from the FT-QPI indicates a small Fermi energy of about 24 meV for the band forming the $\alpha$ pocket. This indicates a strong deviation from the basic requirement of weak-coupling BCS theory. Our results shed new light on the superconducting mechanism in iron-based superconductors.
\begin{acknowledgments}
We appreciate useful discussions with A. V. Balatsky and Z. Y. Wang. This work was supported by National Key R\&D Program of China (Grants No. 2016YFA0300401, No. 2018YFA0704200, No. 2017YFA0303100, and No. 2017YFA0302900), National Natural Science Foundation of China (Grants No. 12061131001, No. 11974171, No. 11822411, No. 11961160699, No. 11674406, and No. 11674372), and the Strategic Priority Research Program (B) of Chinese Academy of Sciences (Grants No. XDB25000000, and No. XDB33000000). H. L. is grateful for the support from Beijing Natural Science Foundation (Grant No. JQ19002) and the Youth Innovation Promotion Association of CAS (Grant No. 2016004).
\end{acknowledgments}
$^*$ huanyang@nju.edu.cn
$^\dag$ hhwen@nju.edu.cn
\section{1. Introduction}
\label{intro}
First extensions of the nuclear shell model \cite{Goep48,Haxel49} into regions far beyond the heaviest known doubly magic nucleus,
$^{208}$Pb ({\it{Z}} = 82, {\it{N}} = 126), performed about fifty years ago led to the prediction of spherical proton and neutron
shells at {\it{Z}} = 114 and {\it{N}} = 184 \cite{Sobi66,Meld67}. Nuclei in the vicinity of the crossing of both shells were
expected to be extremely stabilized against spontaneous fission by fission barriers of up to about 10 MeV.
Particularly for the doubly magic nucleus a fission barrier of 9.6 MeV and hence a partial fission half-life of 10$^{16}$ years
\cite{Nils69}, in a preceding study even 2$\times$10$^{19}$ years \cite{Nils68}, were expected. In an allegorical picture these
nuclei were regarded to form an island of stability, separated from the peninsula of known nuclei (the heaviest safely identified element at that time was lawrencium
({\it{Z}}\,=\,103))
by a sea of instability, and soon were denoted as 'superheavy' (see {\it{e.g.}} \cite{Fric70}).
The theoretical predictions initiated tremendous efforts on the experimental side to produce these superheavy nuclei and to investigate their decay properties as well as
their nuclear and atomic structure and their chemical properties. The major and so far only successful method to synthesize transactinide elements ({\it{Z}} $>$ 103) has been complete fusion reactions.\\
These efforts were accompanied by pioneering technical developments a) of accelerators and ion sources delivering stable heavy-ion beams of high intensity, b) of targets
able to withstand the high beam intensities for long irradiation times ($>$ several weeks), c) for fast and efficient separation of products of complete fusion reactions
from the primary beam and from products of nuclear reactions other than complete fusion, d) of detector systems to measure the different decay modes ($\alpha$ decay, EC decay, spontaneous
fission and accompanying $\gamma$ radiation and conversion electrons), e) of data analysis techniques, and f) for modelling measured particle spectra by advanced simulations, {\it{e.g.}} GEANT4 \cite{Agost03}.
Despite all efforts it took more than thirty years until the first serious results on the production of elements {\it{Z}} $\ge$ 114
(flerovium) were reported \cite{Ogan99,Ogan99a}. However, these first results could not be reproduced independently and are still ambiguous \cite{Hess13}.\\
Nevertheless, during the past twenty years the synthesis of elements {\it{Z}} = 113 to {\it{Z}} = 118 has been reported and their discovery was approved
by the International Union of Pure and Applied Chemistry (IUPAC) \cite{BarK11,KarB16,KarB16a}. The decay data reported for the isotopes of
elements {\it{Z}} $\ge$ 113 that have been claimed to be identified indicate the existence of a region of shell-stabilized nuclei towards {\it{N}} = 184,
but the center has not been reached so far. Data on the strength of the possible shells are still scarce.\\
Tremendous efforts have also been undertaken on the theoretical side to make predictions on stability ('shell effects'), fission barriers, Q$_{\alpha}$ values, decay modes,
half-lives, spin and parity of the ground state as well as of low-lying excited states, {\it{etc.}} For about thirty years the calculations were performed using
macroscopic-microscopic approaches based on the liquid drop model \cite{Weiz35} and the Strutinsky shell correction method \cite{Struti67}. Although the predicted shell effects
disagreed considerably, the models agreed on {\it{Z}}\,=\,114 and {\it{N}}\,=\,184 as the proton and neutron shell closures (see {\it{e.g.}} \cite{Smolan95,Moller95}).
The situation changed at the end of the 1990s when, for the first time, results
of self-consistent models such as Skyrme-Hartree-Fock-Bogoliubov (SHFB) calculations or relativistic mean-field (RMF) models
were published \cite{Rutz97,Ben03}.
Most of these calculations predict {\it{Z}}\,=\,120 as the proton shell closure, while others predict {\it{Z}}\,=\,114 (SkI4) or {\it{Z}}\,=\,126 (SkP, SkM*). Skyrme-force-based calculations agree on {\it{N}}\,=\,184 as the neutron shell closure, while the RMF calculations favour {\it{N}}\,=\,172. As a common feature, all these parametrizations and also the macroscopic-microscopic calculations result in a wide area of high shell effects.
This behavior differs from that at known shell closures, {\it{e.g.}} {\it{Z}}\,=\,50, {\it{N}}\,=\,50,\,82, where the region of high shell effects (or high 2p- and 2n-separation energies) is strongly localized. It is thus not evident whether the concept of nuclear shells as known from lighter nuclei is still reasonable in the region of superheavy nuclei. It might be wiser to speak of regions of high shell stabilization instead.
On the other hand, it has already been discussed extensively by Bender et al. \cite{Bend99}
that the proton number {\it{Z}} and the neutron number {\it{N}} at which the shell closure occurs strongly depend on details of the description of the underlying forces, specifically on the values of the effective masses {\it{m$^{*}$}} and the strength of the spin-orbit interaction. It also
has been emphasized in \cite{Bend99} that the energy gap between the spin-orbit partners
2f$_{5/2}$ and 2f$_{7/2}$ determines whether the proton shell occurs at {\it{Z}}\,=\,114 or {\it{Z}}\,=\,120.
Under these circumstances, predictions of shell closures at different proton ({\it{Z}}) and/or neutron ({\it{N}}) numbers by different models may be regarded rather as a feature of 'fine tuning' of the models than as a disagreement in principle.
Having this in mind, superheavy elements represent an ideal laboratory for the investigation of the nuclear ('strong') force.
More detailed knowledge of the properties and structure of superheavy nuclei is thus undoubtedly decisive for a
deeper understanding of basic interactions. Therefore investigations of the decay properties and structure of superheavy nuclei will in the future become even more important than the synthesis of new elements. \\
One has, however, to keep in mind that the theoretically predicted high density of nuclear levels in a narrow energy interval above the ground state may lead to complex $\alpha$-decay patterns, while on the other hand often only small numbers of decay events are observed. It is therefore tempting to take the average of the measured decay data, which finally results in assigning the
measured data to a single transition in the case of the $\alpha$-decay energies, or to a single nuclear level in the case of lifetimes. Thus fine structure in the $\alpha$ decay or the existence of isomeric levels might be overlooked. Rather, a critical analysis and assessment of the measured data is required.\\
As already indicated above, the expression 'superheavy nuclei' or 'superheavy elements' had originally been suggested for the nuclei in the vicinity of the crossing of the spherical proton and neutron shells at {\it{Z}}\,=\,114 and {\it{N}}\,=\,184. The establishment of deformed proton and neutron shells at {\it{Z}}\,=\,108 and {\it{N}}\,=\,162 \cite{Cwiok83,Moller86,Patyk91,Patyk91a} resulted in the existence of a ridge between the 'peninsula' of known nuclei and the 'island of stability'. Thus it became common to denote all purely shell-stabilized nuclei as 'superheavy',
{\it{i.e.}} nuclei with liquid-drop fission barriers lower than the zero-point motion energy ($<$0.5 MeV). \\
The region of superheavy nuclei that shall be treated in this review is shown in fig. 1.
\begin{figure*}
\resizebox{1.0\textwidth}{!}{%
\includegraphics{fig1.eps}
}
\caption{Excerpt from the charts of nuclei for the region {\it{Z}}\,$\ge$\,112.}
\label{fig:1}
\end{figure*}
\section{2. Experimental Approach}
Complete fusion reactions of suitable projectile and target nuclei have been so far the only successful method to produce nuclei with atomic numbers {\it{Z}} $>$ 103 (see {\it{e.g.}} \cite{Mun16,Oga16}).
Under these considerations, a separation method has been developed that takes into account the specific features of this type of nuclear reaction.
Due to momentum conservation the velocity of the fusion product, in the following denoted as 'compound nucleus' (CN)\footnote{It is common to denote the primary fusion product, which represents in mass and atomic number the sum of projectile and target, as the 'compound nucleus' (CN), which is highly excited. The final product after deexcitation by prompt emission of nucleons and/or $\alpha$ particles is denoted as the 'evaporation residue' (ER).}
can be written as
\[
v_{CN} = \frac{m_{p}}{m_{p} + m_{t}}\, v_{p},
\]
where {\it{m$_{p}$}}, {\it{m$_{t}$}} denote the masses of the projectile and target nucleus, and {\it{v$_{p}$}} the velocity of the projectile. This simply means a) that fusion products are
emitted in the beam direction (with an angular distribution around zero degrees determined by particle emission from the highly excited CN and by scattering in the target foil),
and b) that the CN are slower than the projectiles. It seemed therefore straightforward to use the velocity difference to separate the fusion products from the projectiles
and from products of nuclear reactions other than complete fusion. Such a method has the further advantage of being fast, as the separation is performed in flight
without the necessity to stop the products, so the separation time is determined by the flight time through the separation device. In the region of transactinide nuclei this
separation technique was applied for the first time at the velocity filter SHIP at GSI, Darmstadt (Germany) \cite{MuF79} for the investigation of evaporation-residue production in the
reactions $^{50}$Ti + $^{207,208}$Pb, $^{209}$Bi \cite{Hess85,Hess85a} and for the identification of element {\it{Z}} = 107 (bohrium)
in the reaction $^{54}$Cr + $^{209}$Bi \cite{Muenz81}. Separation times in these cases were on the order of 2 $\mu$s.\\
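The kinematics above can be illustrated numerically (a non-relativistic sketch; mass numbers are used in place of the exact nuclear masses, which is adequate at this level of accuracy):

```python
# Compound-nucleus velocity from momentum conservation:
#   v_CN = m_p / (m_p + m_t) * v_p
# Mass numbers A stand in for the nuclear masses here.
def v_cn_over_v_p(a_projectile: int, a_target: int) -> float:
    """Fraction of the projectile velocity carried by the compound nucleus."""
    return a_projectile / (a_projectile + a_target)

# 54Cr + 209Bi, the bohrium identification reaction mentioned above
ratio = v_cn_over_v_p(54, 209)
print(f"v_CN / v_p = {ratio:.3f}")
# The CN moves at only about a fifth of the beam velocity, which is the
# basis for in-flight separation with a velocity filter such as SHIP.
```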
As an alternative separation technique, gas-filled separators have been developed, using the different magnetic rigidities B$\rho$ of fusion products and projectiles
as the basis for separation. Early devices used in the investigation of transfermium nuclei were SASSY \cite{GhY88} and SASSY II \cite{Ghi88} at LBNL Berkeley, USA and
HECK \cite{NiA95} at GSI.\\
Due to their simpler design and more compact construction, which allow for separation times below 1 $\mu$s, gas-filled separators have meanwhile
become a widespread tool for the investigation of the heaviest nuclei and are operated in many laboratories, e.g. RITU (University of Jyv\"askyl\"a, Finland) \cite{LeA95},
BGS (LBNL Berkeley, USA) \cite{NiG98}, DGFRS (JINR, Dubna, Russia) \cite{LaL93}, GARIS (RIKEN, Wako, Japan) \cite{MoY92}, SHANS (IMP, Lanzhou, China) \cite{Zhang13}, and
TASCA (GSI, Darmstadt, Germany) \cite{SeB08}.\\
Separation, however, is only one side of the coin. The fusion products also have to be identified safely. Keeping in mind that the essential decay modes of superheavy
nuclei are $\alpha$ decay and spontaneous fission (SF), detection methods suited to these decay modes have been developed. After it was shown that suppression of the
projectile beam by SHIP was high enough to use silicon detectors \cite{Schmidt78}, an advanced detection set-up for the investigation of the heaviest nuclei was built \cite{Hofm84}.
It consisted of an array of seven position-sensitive silicon detectors ('stop detector'), suited for $\alpha$ spectroscopy and the registration of heavy particles (fission products,
evaporation residues (ER), scattered projectiles etc.). To discriminate between particles passing the velocity filter and being stopped in the detector
(ER, projectiles, products from few-nucleon transfer) on the one hand and radioactive decays ($\alpha$ decay, SF) on the other, and to separate ER from scattered projectiles or
transfer products, a time-of-flight detector was placed in front of the stop detector \cite{Hess82}. Also the possibility to measure $\gamma$ rays emitted
in coincidence with $\alpha$ particles was considered by placing a Ge detector behind the stop detector.
This kind of detection system has been improved in the course of the years at SHIP \cite{Hof95, Ack18} and was also adopted in modified versions and improved by other
research groups in other laboratories; examples are GREAT \cite{Page03}, GABRIELA \cite{Haus06} and TASISpec \cite{Anders10}.\\
The improvements essentially comprise the following items:\\
a) the detector set-ups were upgraded by adding a box-shaped Si-detector arrangement, placed upstream and facing the 'stop detector',
allowing high-efficiency registration of $\alpha$ particles and fission
products escaping the 'stop detector'. This was required as the ranges of $\alpha$ particles and fission fragments in silicon are larger than the range of ER; so about
half of the $\alpha$ particles and 30\,-\,50 $\%$ of the fission fragments will leave the 'stop detector' releasing only part of their kinetic energy in it.\\
b) the 'old' Si detectors, in which positions were determined by charge division, were replaced by pixelated detectors allowing for a higher position resolution and thus
longer correlation times.\\
c) effort was made to reduce the detector noise to gain access to low-energy particles (E\,$<$\,500 keV) like conversion electrons (CE).\\
d) digital electronics was introduced to enable dead-time-free data acquisition and to gain access to short-lived activities with half-lives
of less than some microseconds. \\
e) detector geometry and mechanical components were optimized to use several Ge detectors and to minimize scattering and absorption of $\gamma$ rays
in the detector frames, increasing the efficiency of $\gamma$-ray detection.\\
As it is not the scope of this work to present experimental techniques and set-ups in detail we refer for this item
to a recent review paper \cite{AckT17}.\\
Another technical aspect concerns the targets. As the production cross-sections for the heaviest elements are small, the highest available beam currents
have to be used. Consequently a technology was needed to avoid destruction of the targets, which led to the development of rotating
target wheels \cite{Marx79}; the performance of the wheels and the target quality were continuously improved. \\
As an alternative method to produce superheavy nuclei (SHN), the idea of using multinucleon transfer reactions has recently been
revived, see {\it{e.g.}} \cite{Zag11,Zag13}. Indeed, intensive studies of such reactions with respect to SHN production, {\it{e.g.}}
$^{238}$U + $^{238}$U or $^{238}$U + $^{248}$Cm, had already been performed at the UNILAC at GSI about forty years ago. A summary
of these efforts is given in \cite{Kratz13}. The heaviest nuclides that could be identified in these experiments were isotopes of mendelevium ({\it{Z}}\,=\,101). A drawback of these studies, however, was the use of radiochemical methods, which restricted
isotope identification to those with 'long' half-lives, T$_{1/2}$ $>$$>$ 1 min, giving no access to short-lived nuclei in the
region Z\,$\le$102. To proceed in studying those reactions, new types of separators are required, taking into account not
only short half-lives down to the $\mu$s range, but also the broad angular distribution of the reaction products.
A more detailed discussion of this feature, however, is beyond the scope of this review.\\
\begin{figure}
\resizebox{0.9\textwidth}{!}{%
\includegraphics{fig2.eps}
}
\caption{Spectra of particles registered in an irradiation of $^{209}$Bi with $^{54}$Cr at SHIP \cite{HesA10a}; black line:
particles registered in anticoincidence to the time-of-flight detectors, red line: particles registered during the beam-off period.}
\label{fig:2}
\end{figure}
\begin{figure}
\vspace{-0.8cm}
\resizebox{0.9\textwidth}{!}{%
\includegraphics{fig3.eps}
}
\caption{$\alpha$ - $\alpha$ correlation analysis of the radioactive decay chain starting from $^{258}$Db. a) Spectrum of
$\alpha$ events observed in an irradiation of $^{209}$Bi with $^{50}$Ti at SHIP \cite{Vost15} between the beam bursts; the region below 7.6 MeV has been downscaled by a factor of five for better presentation; b) spectrum of the first $\alpha$ decays following the $\alpha$ decay of
$^{258}$Db within 250 s; c) same as b), but requiring that both $\alpha$ decays occur in the same detector strip; d) spectrum of
correlated (daughter) $\alpha$ events, requiring a maximum position difference of $\pm$0.3 mm and a time difference $\Delta$t $<$ 250 s.}
\label{fig:3}
\end{figure}
\vspace{10 mm}
\section{3. Data Selection}
Within the commonly used techniques, particles passing the in-flight separator are implanted into a silicon-detector set-up.
As the separation of the ER from 'unwanted' particles (scattered projectiles, scattered target nuclei, products from few-nucleon
transfer etc.) is not clean, there will always be a cocktail of particles registered, forming a background covering the energy
range of the $\alpha$ decays of the nuclei to be investigated, and usually also the energy range of the spontaneous-fission
products. In cases of small production cross sections, typically $<$1 $\mu$b, $\alpha$ decays of the ER are usually not visible in the particle spectra, and further cleaning procedures are required. An often applied procedure is the use of transmission detectors in front
of the 'stop detector' and requiring an anticoincidence between events registered in the stop detector and the transmission detector.
In practice the efficiency of the latter is never exactly 100 $\%$, so there will still be a residual background in the spectra.
In the case of a pulsed beam, one can restrict the analysis to the time intervals between the pulses. \\
An example is shown in fig. 2, where the $\alpha$ spectrum taken in an irradiation of $^{209}$Bi with $^{54}$Cr (271 MeV) at SHIP is
presented \cite{HesA10a}.
The black line represents the particle spectrum taken in anti-coincidence with the time-of-flight (TOF) detectors; products
($^{211m,212m}$Po, $^{214,214m}$Fr, $^{215}$Ra) stemming from few-nucleon transfer reactions are clearly visible, while the ER, $^{261}$Bh, and its
daughter product $^{257,257m}$Db are buried under the background. In the case of pulsed beams a further purification is achieved by requiring a 'beam-off' condition. Thus the $\alpha$ decays of $^{261}$Bh and $^{257,257m}$Db become visible (red line in
fig. 2). Such a restriction, however, is not desirable in many cases, as it restricts identification to nuclei having lifetimes on the order of the pulse lengths or longer.\\
A possible way out is the use of genetic correlations between registered events; these may be correlations of the type ER - $\alpha$,
$\alpha$ - $\alpha$, ER - SF, $\alpha$ - SF etc.
\vspace{15 mm}
\section{4. Data Treatment}
\subsection{\bf{4.1 Genetic Correlations}}
To establish genetic relationships between mother and daughter $\alpha$ decays is presently a standard method to identify
unknown isotopes or to assign individual decay energies to a certain nucleus.\\
Originally it was developped at applying the He - jet technique for stopping the reaction products and transport them to the detection system.
As the reaction products were deposited on the surface of the detector, depending on the direction of emission
of the $\alpha$ particle the latter could be either registered in the detector, but the residual nucleus was kicked off the detector
by the recoil of the emitted $\alpha$ particle or the residual nucleus was shallowly implanted into the detector, while the $\alpha$ particle
was emitted in opposite direction and did not hit the detector. To establish correlations sophisticated detector arrangements were required (see {\it{i.e.}} \cite{Ghiorso82}).
The technique of stopping the reaction products in silicon surface barrier detectors after in-flight separation from the projectile beam simplified the procedures considerably \cite{Schmidt78,Schmidt79}.
Due to implantation depths of $\approx$(5\,-\,10) $\mu$m the residual nucleus was not kicked out of the detector by the recoil of the emitted $\alpha$ particle,
and therefore decays of the implanted nucleus and all daughter products occurred in the same detector;
so it was sufficient to establish a chronological relationship between $\alpha$ events measured within the same detector \cite{Schmidt79}.
The applicability of this method was limited by the decay rate in the detector, as the time sequence of decays became accidental if the search time for correlations exceeded the
average time distance between two decays. The application of this technique was improved by using position sensitive silicon detectors \cite{Hofm84,Hofm79}.
These detectors deliver the position of implantation as an additional parameter. The position resolution is typically around 300 $\mu$m (FWHM),
while the range of $\alpha$ particles of (5\,-\,10) MeV is (40\,-\,80) $\mu$m \cite{North70} and the displacement of the residual nucleus due to the $\alpha$ recoil is $<$1 $\mu$m.
Thus all subsequent decays of a nucleus will occur at the same position (within the detector resolution). The probability to observe random correlations is reduced significantly
by this procedure. \\
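The size of this suppression can be illustrated with a simple Poisson estimate; all numbers below (decay rate, search time, window and detector dimensions) are illustrative assumptions chosen for this sketch, not values taken from the experiments described here.

```python
import math

# Rough Poisson sketch of how a position requirement suppresses random
# correlations. All numbers (rate, search time, window and detector sizes)
# are illustrative assumptions, not measured values.

def p_random(rate_hz, search_time_s, position_fraction):
    """Probability that at least one unrelated decay falls into the
    correlation time window AND the position window."""
    mu = rate_hz * search_time_s * position_fraction
    return 1.0 - math.exp(-mu)

rate = 10.0        # assumed total decay rate in the detector [1/s]
t_search = 1.0     # assumed correlation search time [s]

# Without a position condition the whole detector acts as one 'pixel'.
p_no_position = p_random(rate, t_search, 1.0)

# With a position condition: a +/-0.3 mm window on a 5 mm wide strip,
# i.e. an area of roughly 0.6 mm x 5 mm out of an assumed 35 mm x 80 mm
# active area.
fraction = (0.6 * 5.0) / (35.0 * 80.0)
p_with_position = p_random(rate, t_search, fraction)

print(p_no_position, p_with_position)
```

With these assumed numbers a random correlation within the time window alone is almost certain, while the additional position condition reduces its probability to the percent level.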
In these set-ups position signals were produced by charge division between an upper (top) and a lower (bottom) detector electrode (see {\it{e.g.}} \cite{Hofm12} for an advanced version of such a detector set-up).
In modern set-ups (see {\it{e.g.}} \cite{Anders10}) these position sensitive detectors have been replaced by pixeled detectors having vertical strips
(a typical width is 1 mm) on one side and horizontal strips of the same width on the other side. The position is then given by the coordinates
representing the numbers of the horizontal and vertical strips. One advantage of the pixeled detectors is a somewhat better position resolution;
taking strip widths of 1 mm each, one obtains a pixel size of 1 mm$^{2}$; for the SHIP - type detector \cite{Hofm12} (5 mm wide strips, position resolution 0.3 mm (FWHM)),
taking in the analysis three times the FWHM, one obtains an effective pixel size of
4.5 mm$^{2}$ (3 $\times$ 0.3 mm $\times$ 5 mm). More important, however, is the fact that the position resolution for a pixeled detector is given solely by
the strip numbers and is thus independent of the energy deposit of the particle and of the range of the particle
(as long as it does not exceed the strip width). \\
In position sensitive detectors low energy particles ($\alpha$ particles escaping the detector,
conversion electrons) deliver small signals, often influenced by the detector noise and nonlinearities of the amplifiers and ADCs used,
which significantly lowers the position resolution. In many cases signals are missing altogether, as they are lower than the detection threshold. Another drawback is that at electron energies of around 300 keV the range in silicon becomes $\approx$300 $\mu$m and thus reaches the detector resolution, which then requires enlarging the position window for the correlation search.\\
One drawback of the pixeled detectors, however, should at least be mentioned:
due to the small widths of the strips (typically 1 mm), already for a notable fraction of the implanted particles the energy
signal is split between two strips, making sophisticated data analysis algorithms necessary to reconstruct the energy of the particles.
The energy split between two strips also introduces some ambiguities in the determination of the position.\\
An illustrative example for the benefit of including the position into the correlation search is given in fig. 3. Here the $\alpha$ spectrum obtained in an irradiation of $^{209}$Bi with $^{50}$Ti at SHIP \cite{Vost15} (using the same set-up as in \cite{Hofm12}) between the beam bursts is shown in fig. 3a. Besides $^{258}$Db (9.0\,--\,9.4 MeV), produced in the reaction $^{209}$Bi($^{50}$Ti,n)$^{258}$Db, and its decay products
$^{254}$Lr (8.35\,--\,8.50 MeV) and $^{254}$No (8.1 MeV, EC decay daughter of $^{254}$Lr), also $\alpha$ lines from $^{212g,212m}$At (7.68, 7.90 MeV) and $^{211}$Po (7.45 MeV) are present; these activities were produced by few nucleon transfer reactions. In addition the $\alpha$ line of $^{215}$Po (6.78 MeV) is visible, stemming from $\alpha$ decay of $^{223}$Ra (T$_{1/2}$ = 11.43 d), produced in a preceding experiment.
In fig. 3b the spectrum of the first $\alpha$ particles following an $\alpha$ decay of $^{258}$Db (the energy region is marked by the red lines in fig. 3a) is shown. Besides the daughter products $^{254}$Lr and $^{254}$No, strong random correlations with $^{211}$Po, $^{215}$Po, $^{258}$Db, and $^{212g,212m}$At are observed; the random correlations can be significantly suppressed if in addition the occurrence of both $\alpha$ decays in the same detector strip is required, as seen in fig. 3c. The result of the position correlation analysis finally is shown in fig. 3d. Here, in addition, the occurrence of both $\alpha$ events within a position difference of $\pm$0.3 mm is required. The background of $\alpha$ decays is completely gone, and also details
in the energy distribution of the $\alpha$ events are visible; the $\alpha$ events at (7.7\,-\,7.9) MeV stem from the decay of $^{250}$Md ($\alpha$ - decay daughter of $^{254}$Lr) and those at 7.45 MeV from $^{250}$Fm ($\alpha$ - decay daughter of $^{254}$No, EC - decay daughter of $^{250}$Md). The events at (8.7\,-\,8.8) MeV are from the decay of $^{253g,253m}$Lr, the $\alpha$ decay daughters of $^{257g,257m}$Db, which was produced in a small amount in the reaction $^{209}$Bi($^{50}$Ti,2n)$^{257}$Db.
\begin{figure}
\resizebox{0.9\textwidth}{!}{%
\includegraphics{fig4.eps}
}
\caption{Recoil energies transferred to the residual nuclei by $\alpha$ - decay of heavy nuclei. The lines are to guide the eye.}
\label{fig:4}
\end{figure}
\begin{figure}
\resizebox{0.9\textwidth}{!}{%
\includegraphics{fig5.eps}
}
\caption{Energy summing of $\alpha$ particles and conversion electrons (CE); a) decay scheme of $^{255}$Rf \cite{Hessb06};
b) spectrum of $\gamma$ rays emitted in coincidence
with $\alpha$ decays of $^{255}$Rf; c) energy distribution of $\alpha$ particles in coincidence with the E\,=\, 203.6 keV (full line) or E\,=\,143.3 keV $\gamma$-line (dashed line).}
\label{fig:5}
\end{figure}
\vspace{10 mm}
\subsection{\bf{4.2 Summing of $\alpha$-particle and Recoil - energies}}
Implantation of ER into a silicon detector has consequences for measuring the energies of $\alpha$ particles.
One item concerns summing of the $\alpha$ particle energy and the energy transferred by the $\alpha$ particle to the
residual nucleus, which in the following will be denoted as the recoil energy {\it{E$_{rec}$}}. \\
The total decay energy {\it{Q}} (for a ground-state to ground-state transition) is given by
\[
Q = (m_{mother} - m_{daughter} - m_{\alpha})\times c^{2}.
\]
This energy splits into two components,
\[
Q = E_{\alpha} + E_{rec} = (1 + m_{\alpha}/m_{daughter})\times E_{\alpha}.
\]
Here {\it{m$_{mother}$}}, {\it{m$_{daughter}$}} denote the masses of the mother and daughter nucleus\footnote{strictly speaking, the atomic mass, not the mass of a bare
nucleus} and {\it{E$_{\alpha}$}} the kinetic energy of the $\alpha$ particle.\\
Evidently the recoil energy {\it{E$_{rec}$}} = (m$_{\alpha}$/m$_{daughter}$) $\times$ {\it{E$_{\alpha}$}} is strongly dependent on the mass
of the daughter nucleus and the kinetic energy of the $\alpha$ particle.\\
This behavior is shown in fig. 4, where for some isotopes {\it{E$_{rec}$}} is plotted versus {\it{E$_{\alpha}$}}. The black squares represent
the results for $^{210,211,212}$Po and $^{216,217}$Th, which are often produced in reactions using lead or bismuth targets by nucleon transfer or in
so called 'calibration reactions' (reactions used to check the performance of the experimental set-up), the red dots are results for 'neutron deficient' isotopes in the range {\it{Z}}\,=\,(98-110), the blue triangles, finally, results for neutron rich SHN produced so far in irradiations of actinide targets with $^{48}$Ca. Evidently the recoil energies for the polonium and thorium isotopes
are 15\,-\,30 keV higher than for the {\it{Z}}\,=\,(98-110) - isotopes, while the differences between the latter and the
'neutron rich SHN' are typically in the order of 10 keV; specifically striking is the difference of {\it{$\Delta$E$_{rec}$}}\,=\,65 keV between $^{212}$Po and $^{294}$Og, both having nearly the same $\alpha$ - decay energy.\\
In practice, however, the differences are less severe:
the measured energy of the $\alpha$ particle is not simply the sum of both contributions: due to the high ionisation density
of the heavy recoil nucleus, part of the created charge carriers will recombine and thus only a fraction of them will contribute to the
height of the detector signal; hence
\[
E_{\alpha}(\text{measured}) = E_{\alpha} + a\times E_{rec}
\]
with a\,$<$\,1 giving the fraction of the contribution of the recoil energy, which can be considered to be in the
order of a\,$\approx$\,0.3 \cite{Eyal82}. One should, however, keep in mind that this analysis was performed
for nuclei around {\it{A}}\,=\,150. As the ionization density increases for heavier nuclei (larger {\it{Z}}), the recombination
might be larger for SHN, thus a\,$<$\,0.3. Nevertheless, different recoil contributions should be considered when
calibrations are performed.\\
Further discussion of this item is found in \cite{Hofm12}.
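The numbers quoted above can be reproduced by a minimal sketch that approximates the daughter mass by its mass number; the $\alpha$ energies of about 11.65 MeV assumed below for $^{212m}$Po and $^{294}$Og, and the fraction a\,$\approx$\,0.3, are taken from the discussion above, while the exact input values are illustrative assumptions.

```python
# Sketch of the recoil-energy estimate, approximating m_daughter by the
# mass number (A - 4); exact atomic masses would shift the numbers only
# slightly. The alpha energies (~11.65 MeV for both nuclei) are assumed,
# following the remark that both have nearly the same decay energy.

def e_rec(e_alpha_kev, a_mother):
    """Recoil energy E_rec = (m_alpha / m_daughter) * E_alpha, in keV."""
    return 4.0 / (a_mother - 4) * e_alpha_kev

e_po = e_rec(11650.0, 212)   # ^212m Po -> daughter ^208 Pb
e_og = e_rec(11660.0, 294)   # ^294 Og  -> daughter ^290 Lv

delta = e_po - e_og          # close to the ~65 keV quoted in the text

# Only a fraction a of E_rec contributes to the measured signal;
# with a ~ 0.3 the effective difference shrinks accordingly.
a = 0.3
delta_measured = a * delta

print(round(e_po), round(e_og), round(delta), round(delta_measured))
```

The sketch also shows why the differences are "less severe" in practice: with a\,$\approx$\,0.3 the 65 keV difference in {\it{E$_{rec}$}} translates into only about 20 keV in the measured energy.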
\subsection{\bf{4.3 Summing of $\alpha$ particle and conversion electron (CE) energies}}
One more problem is connected with energy summing of $\alpha$ particles and conversion electrons (CE) in cases
where excited levels are populated that decay towards the ground state by internal conversion, leading to a shift of
the measured $\alpha$ energies towards higher values \cite{HessH87}. \\
An illustrative example is shown in fig. 5 where the decay of $^{255}$Rf is presented.
The decay scheme is shown in fig. 5a; $\alpha$ decay populates the 9/2$^{-}$[734] - level in $^{251}$No, which
then decays by $\gamma$ emission either into the 7/2$^{+}$[624] ground-state (E$_{\gamma}$ = 203.6 keV) or
into the 9/2$^{+}$ state (E$_{\gamma}$ = 143.3 keV) (fig. 5b) \cite{Hessb06}.
The M1 - transition 9/2$^{+}$ $\rightarrow$ 7/2$^{+}$ is highly converted. In fig. 5c we present the energy distributions
of $\alpha$ particles in coincidence either with the E$_{\gamma}$ = 203.6 keV (black line) or the
E$_{\gamma}$ = 143.3 keV $\gamma$ line (red line). We observe a shift in the $\alpha$ energies by $\Delta$E\,=\,38 keV, which is even
larger than the CE energy (31 keV) \cite{KibB07}, indicating that not only the CE contribute to the energy shift but also the
energy released during deexcitation of the atomic shell (e.g. Auger electrons).
\subsection{\bf{4.4 $\alpha$ particles escaping the detector}}
As the implantation depth into the detector is typically $\leq$10 $\mu$m and thus considerably
smaller than the range of $\alpha$ - particles in silicon ($>$50 $\mu$m at E\,$>$\,8 MeV), only part of them
(50\,-\,60 $\%$) will be registered with full energy.\\
So if one observes, on the basis of a small number of events, besides a 'bulk' at a mean energy {\it{E}} also $\alpha$ particles with energies
of {\it{E\,-\,$\Delta$E}}, it is a priori not possible to state whether these events represent decays into higher lying daughter
levels or whether they are just $\alpha$ particles of energy {\it{E}} escaping the detector after depositing an energy {\it{E\,-\,$\Delta$E}}.
However, some arguments can be given on the basis of the probability to observe the latter events.
As an illustrative example
the $\alpha$ spectrum of $^{253}$No \cite{Hess12} is given in fig. 6.\\
\begin{figure}
\vspace{-1cm}
\resizebox{1.0\textwidth}{!}{%
\includegraphics{fig6.eps}
}
\caption{$\alpha$ - decay spectrum of $^{253}$No (in coincidence with the 279.5 keV $\gamma$ - transition).
The inset shows the region (7200\,-\,8200) keV on an expanded scale.}
\label{fig:6}
\end{figure}
Here the $\alpha$ decays in coincidence with the 279.5 keV $\gamma$ line are shown, which represents the transition of the
9/2$^{-}$[734] level in $^{249}$Fm, populated by the $\alpha$ decay, into the 7/2$^{+}$[624] ground-state. In that case one obtains a clean
$\alpha$ spectrum of a single transition not disturbed by energy summing with CE.\\
Besides the 'peak' at E$_{\alpha}$\,=\,8005 keV a bulk of events at {\it{E}}\,$<$\,2 MeV is visible. About
55 $\%$ of the $\alpha$ particles are registered in the peak, about 32 $\%$ are found at {\it{E}}\,$<$\,2 MeV; the rest
(13 $\%$) is distributed in the energy range in between. In the inset the energy range between {\it{E}}\,=\,(7.2-8.2) MeV
is expanded. It is clearly seen that the number of $\alpha$ particles in the range taken here, somewhat arbitrarily,
as down to {\it{E$_{mean}$}}\,-\,570 keV is small. The 'peak' is here defined as the energy region {\it{E}}\,$>$\,7935 keV, as at this energy
the number of events (per 5 keV) has dropped to 5 $\%$ of the number in the peak maximum (region 1).
The number of events in the energy interval (7835, 7935) keV (region 2) is about 1.2\,$\%$ of that in the 'peak' (region 1),
while the number of events in the energy interval (7435, 7835) keV (region 3) is about 0.8\,$\%$.
These small numbers indicate that, for a low number of observed total decays, it is quite
unlikely that events with energies some hundred keV lower than the 'bulk' energy represent $\alpha$ particles from the 'bulk' leaving the detector after nearly full energy loss.
They rather stem from decays into excited daughter levels (but possibly influenced by energy summing with CE)\footnote{We briefly want to point
to a detector effect that may mimic lower energies. In cases where the detector has already suffered from radiation damage the charge
collection may be incomplete and so the signal might be lower than that for a 'full energy event', even if the $\alpha$ particle was
completely stopped in the detector.}.
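The statistical argument can be made explicit with a small binomial estimate; the region fraction below is the one quoted above for $^{253}$No, while the event numbers are illustrative.

```python
# Sketch of the statistical argument: if a fraction p of 'bulk' alpha
# decays deposits an energy only a few hundred keV below the full-energy
# peak (~0.8% for region 3, as quoted for ^253No), then for a small
# number n of observed decays the chance that even one event is such a
# near-full-energy escape is small.

def p_at_least_one(p_single, n_events):
    """Binomial probability of observing >= 1 such escape among n_events."""
    return 1.0 - (1.0 - p_single) ** n_events

p_region3 = 0.008   # assumed fraction a few hundred keV below the peak
for n in (3, 5, 10):
    print(n, round(p_at_least_one(p_region3, n), 3))
```

Even for ten observed decays the probability stays below about 10 $\%$, supporting the interpretation of such events as decays into excited daughter levels.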
\subsection{\bf{4.5 Compatibility of $\alpha$ energy measurements in the region of SHN}}
As the numbers of decays observed in specific experiments are usually quite small, it seems of high
interest to merge data of different experiments to enhance statistics and possibly extract details
on the decay properties, {\it{e.g.}} fine structure in the $\alpha$ decay. One drawback concerning
this item is possible energy summing between $\alpha$ particles and CE as discussed above; another problem
is the compatibility of the decay energies measured in the different experiments, which is a question of
calibration. This is not necessarily a trivial problem, as shown in fig. 7, where the $\alpha$ energies obtained
for $^{272}$Bh in three experiments are shown. $^{272}$Bh was produced in irradiations of $^{243}$Am with $^{48}$Ca
within the decay chain of $^{288}$Mc, via $^{243}$Am($^{48}$Ca,3n)$^{288}$Mc $^{\alpha}_{\rightarrow}$
$^{284}$Nh $^{\alpha}_{\rightarrow}$ $^{280}$Rg $^{\alpha}_{\rightarrow}$ $^{276}$Mt $^{\alpha}_{\rightarrow}$ $^{272}$Bh $^{\alpha}_{\rightarrow}$ $^{268}$Db.
This decay chain has been investigated so far at three different separators, DGFRS at FLNR, Dubna, Russia \cite{OgA13},
TASCA at GSI, Darmstadt, Germany \cite{RuF13}, and BGS at LNBL, Berkeley, USA \cite{GaG15}.
The energy distributions of the odd-odd nuclei occurring in the decay chain of $^{288}$Mc are in general quite
broad indicating decays into different daughter levels accompanied by energy summing of $\alpha$ particles and CE.
Solely for $^{272}$Bh a 'quite narrow' line is observed. The results of the different experiments are compared in
fig. 7. To avoid ambiguities due to the worse energy resolution of 'stop + box' events, we restricted the comparison to events with full energy release
in the 'stop' detector.
Evidently there are large discrepancies in the $\alpha$ energies: the DGFRS experiment \cite{OgA13} delivers a mean value
{\it{E$_{\alpha}$(DGFRS)}}\,=\,9.022\,$\pm$\,0.012 MeV, the TASCA experiment \cite{RuF13} {\it{E$_{\alpha}$(TASCA)}}\,=\,9.063\,$\pm$\,0.014 MeV, and
the BGS experiment {\it{E$_{\alpha}$(BGS)}}\,=\,9.098\,$\pm$\,0.02 MeV, hence differences
{\it{E$_{\alpha}$(TASCA)}} - {\it{E$_{\alpha}$(DGFRS)}}\,=\,41 keV,
{\it{E$_{\alpha}$(BGS)}} - {\it{E$_{\alpha}$(TASCA)}}\,=\,35 keV,
{\it{E$_{\alpha}$(BGS)}} - {\it{E$_{\alpha}$(DGFRS)}}\,=\,76 keV, which are by far larger than the calibration uncertainties in the range of
10\,-\,20 keV that might usually be expected. This is a very unsatisfactory situation.
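The significance of these discrepancies can be checked by combining the quoted uncertainties of the mean values in quadrature; this sketch uses only the numbers given above and ignores systematic calibration errors.

```python
import math

# Mean alpha energies of ^272Bh from the three experiments quoted in the
# text (keV), with their stated uncertainties of the mean.
measurements = {
    "DGFRS": (9022.0, 12.0),
    "TASCA": (9063.0, 14.0),
    "BGS":   (9098.0, 20.0),
}

def difference(a, b):
    """Energy difference a - b and its uncertainty (quadrature sum)."""
    (ea, sa), (eb, sb) = measurements[a], measurements[b]
    return ea - eb, math.hypot(sa, sb)

for pair in (("TASCA", "DGFRS"), ("BGS", "TASCA"), ("BGS", "DGFRS")):
    d, s = difference(*pair)
    print(pair, round(d), round(s, 1), round(d / s, 1))
```

The largest difference (BGS - DGFRS) exceeds three times the combined uncertainty of the means, underlining that the discrepancy cannot be explained by statistics alone.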
\begin{figure}
\vspace{-1cm}
\resizebox{0.9\textwidth}{!}{%
\includegraphics{fig7.eps}
}
\caption{Energy distributions of $^{272}$Bh; a) DGFRS experiment \cite{OgA13}; b) TASCA experiment \cite{RuF13}; c) BGS experiment \cite{GaG15}.}
\label{fig:7}
\end{figure}
\newpage
\section{5. Discovery of elements Z\,=\,107 (bohrium) to Z\,=\,112 (copernicium) and their approval by IUPAC}
The elements {\it{Z}}\,=\,107 to {\it{Z}}\,=\,112 were first synthesized at the velocity filter SHIP, GSI, in the period 1981\,-\,1996.
The corresponding isotopes were identified
after implantation into arrangements of silicon detectors by registering their $\alpha$ decay chains. Identification was based on
decay properties ($\alpha$ energies, half-lives) of at least one member of the decay chain that had either been known
from literature or had been synthesized and investigated at SHIP in preceding experiments.
The latter is the main difference to elements {\it{Z}}\,$\ge$\,114, where the decay chains are not connected to the region of known
and safely identified isotopes. Nevertheless, the elements {\it{Z}}\,=\,107 - {\it{Z}}\,=\,112 already illustrate in some cases the difficulties of unambiguously
identifying an isotope on the basis of only a few observed decays, and also the problems faced by the evaluators in charge of approving the discovery of
a new element.\\
In order not to go beyond the scope of this report, in the following only the reports of the Transfermium Working Group of IUPAC and IUPAP (TWG)
(for elements bohrium to meitnerium) and of the IUPAC/IUPAP Joint Working Party (JWP) (for elements darmstadtium to copernicium)
concerning the GSI new element claims are considered. Other claims on the discovery of one or more of these elements are not discussed.
\subsection{\bf{5.1 Element 107 (Bohrium)}}
The first isotope of element 107, $^{262}$Bh, was synthesized in 1981 in the reaction $^{209}$Bi($^{54}$Cr,n)$^{262}$Bh
\cite{Muenz81}. Altogether six decay chains were observed at that time. Prior to approval of the discovery by the IUPAC two more
experiments were performed. The complete results are reported in \cite{Muenz89}: two states of $^{262}$Bh decaying by $\alpha$ emission
$^{262g}$Bh ({\it{T$_{1/2}$}}\,=\,102\,$\pm$\,26 ms (15 decays)) and $^{262m}$Bh ({\it{T$_{1/2}$}}\,=\,8.0\,$\pm$\,2.1 ms (14 decays)) as well as the
neighbouring isotope $^{261}$Bh (10 decays) were observed. Thus approval of the discovery of element 107 was based on a
'safe ground', and it was stated by the TWG \cite{Wilk93}: 'This work ({\it\cite{Muenz81}}) is considered sufficiently convincing and was
confirmed in 1989 {\it\cite{Muenz89}}.'
\subsection{\bf{5.2 Element 108 (Hassium)}}
Compared to bohrium, the data for hassium on which the approval of the discovery was based were scarce. In the first experiment, performed in 1984
\cite{Muenz84a} three decay chains of $^{265}$Hs were observed in an irradiation of $^{208}$Pb with $^{58}$Fe.
In two cases a full energy event of $^{265}$Hs was followed by an escape event of $^{261}$Sg, while in one case an escape event
of $^{265}$Hs was followed by a full energy event of $^{261}$Sg. The $\alpha$ particle from the granddaughter $^{257}$Rf was measured
in all three cases with full energy.\\In a follow-up experiment only one decay chain of the neighbouring isotope $^{264}$Hs was observed
in an irradiation of $^{207}$Pb by $^{58}$Fe \cite{Muenz86}. The chain consisted of two escape events followed by an SF, which was
attributed to $^{256}$Rf on the basis of the decay time.
Nevertheless, discovery of element 108 was approved on the basis of these data, and it was stated by the TWG \cite{Wilk93}: 'The Darmstadt work
in itself is sufficiently convincing to be accepted as a discovery.'
\subsection{\bf{5.3 Element 109 (Meitnerium)}}
Discovery of element 109 was connected with more severe problems. In the first experiment at SHIP, performed in summer 1982, only one decay chain,
shown in fig. 8, was observed \cite{Muenz82} in an irradiation of $^{209}$Bi with $^{58}$Fe.
\begin{figure}
\resizebox{0.8\textwidth}{!}{%
\includegraphics{fig8.eps}
}
\caption{$\alpha$ - decay chains observed in SHIP experiments in 1982 and 1988 and attributed to the decay of $^{266}$Mt
\cite{Muenz82,Muenz88}.}
\label{fig:8}
\end{figure}
It started with an $\alpha$ event with full energy, followed by an escape event, and was terminated by an SF event. The latter was attributed to
$^{258}$Rf produced by EC decay of $^{258}$Db. A thorough investigation of the data showed that the probability for the event sequence to be random
was $<$10$^{-18}$ \cite{Muenz84}. Among all possible 'starting points' (energetically possible evaporation residues) $^{266}$Mt was the most likely
one \cite{Muenz84}. In a second experiment performed early in 1988 (January 31 to February 13) two more decay chains, also shown in fig. 8,
were observed \cite{Muenz88}; chain number 2 consisted of four $\alpha$ events, two with full energy and two escape events, attributed to $^{266}$Mt (first chain member)
and $^{258}$Db (third chain member); the two full energy events were attributed to $^{262}$Bh (second chain member) and $^{254}$No (fourth chain member),
which was interpreted to be formed by EC decay of $^{254}$Lr. The third chain consisted of two $\alpha$ decays, which were assigned to $^{262}$Bh and
$^{258}$Db on the basis of the measured energies, while the $\alpha$ particle from the decay of $^{266}$Mt was not observed. The non-registration of $^{266}$Mt could have different reasons:\\
a) $^{262}$Bh was produced directly via the reaction $^{209}$Bi($^{58}$Fe,$\alpha$n)$^{262}$Bh. This possibility was excluded as this reaction channel
was assumed to be considerably smaller than the 1n - deexcitation channel. And indeed in a later experiment, performed after the approval of element 109
by the IUPAC, twelve more decay chains of $^{266}$Mt were observed, but no signature for an $\alpha$n - deexcitation channel was found \cite{HofH97}. \\
b) $^{266}$Mt has a short-lived isomer decaying in-flight during separation. This interpretation seemed unlikely, as in the case of $\alpha$ emission
the recoil of the $\alpha$ particle would have kicked the residual nucleus out of its trajectory, so it would not have reached the detector placed in the focal
plane of SHIP; similarly, in the case of decay by internal transitions one could expect that emission of Auger electrons following internal conversion would have
changed the charge state of the atom, so it would also have been kicked out of its trajectory.\\
c) a short-lived isomer may decay within 20 $\mu$s after implantation of the ER, i.e. during the dead time of the data acquisition system, and thus not be
recorded. Also for this interpretation no arguments were found in the later experiment \cite{HofH97}.\\
d) The $\alpha$ particle from the decay of $^{266}$Mt escaped with an energy loss $<$670 keV, which was the lower detection limit in this experiment.
This was seen as the
most reasonable case.\\
To summarize: the three chains presented strong evidence for having produced an isotope of element 109, but it may still be discussed whether the presented
data really constituted an unambiguous proof. However, the TWG did not share those concerns, leading to the assessment 'The result is convincing even though
originally only one event was observed' and came to the conclusion 'The Darmstadt work \cite{Muenz82} gives confidence that element 109 has been observed'
\cite{Wilk93}.
\subsection{\bf{5.4 Element 110 (Darmstadtium)}}
After a couple of years of technical development, including the installation of a new low energy RFQ - IH acceleration structure coupled to an ECR ion source
of the UNILAC
and the construction of a new detector set-up, experiments on the synthesis of new elements were resumed in 1994.
The focal plane detector was surrounded by a 'box' formed of six silicon detectors, allowing one to measure with a probability of
about 80$\%$ the full energy of $\alpha$ - particles escaping the 'stop' detector as the sum
{\it{E\,=\,$\Delta$E(stop)\,+\,E$_{residual}$(box)}}.
The first isotope of element 110, $^{269}$Ds, was synthesized in 1994 in the reaction $^{208}$Pb($^{62}$Ni,n)$^{269}$Ds \cite{Hof95}; four decay chains were observed.
In three of the four chains $\alpha$ decays with full energy were observed down to $^{257}$Rf or $^{253}$No, respectively.
For the fourth decay chain of $^{269}$Ds only an energy loss signal was measured, while $\alpha$ decays of $^{265}$Hs and $^{261}$Sg were
registered with full energy. Further members of the decay chain ($^{257}$Rf, $^{253}$No, etc.) were not recorded. In a later
re-analysis of the data this chain could not be reproduced any more, similarly to the case of decay chains
reported from irradiations of $^{208}$Pb with $^{86}$Kr at the BGS, Berkeley, and interpreted to start from an isotope of
element 118 \cite{Ninov99,Ninov02}.
This deficiency, however, did not concern the discovery of element 110, and it was stated by the JWP of IUPAC/IUPAP: 'Element 110 has been discovered by
this collaboration' \cite{Karol01}.\\
\subsection{\bf{5.5 Element 111 (Roentgenium)}}
The first experiment on synthesis of element 111 was performed in continuation of the element 110 discovery experiment.
In an irradiation of $^{209}$Bi with $^{64}$Ni three $\alpha$ decay chains were observed. They were assigned to $^{272}$Rg, produced in the reaction
$^{209}$Bi($^{64}$Ni,n)$^{272}$Rg \cite{Hofm95a}. Two of these chains ended with $\alpha$ decay of $^{260}$Db; for the first member, $^{272}$Rg,
only a {\it{$\Delta$E}} signal was registered. In the third chain $\alpha$ decay was observed down to $^{256}$Lr and all $\alpha$ particles from the decay
chain members ($^{272}$Rg, $^{268}$Mt, $^{264}$Bh, $^{260}$Db, $^{256}$Lr) were registered with
'full' energy. It should be noted that also $^{268}$Mt and $^{264}$Bh had not been observed before.
The JWP members, however, were quite cautious in that case \cite{Karol01}. It was remarked that the $\alpha$ energy of $^{264}$Bh in chain 1 was quite different from
the values of chains 2 and 3, and that the $\alpha$ energy {\it{E}}\,=\,9.146 MeV of $^{260}$Db in chain 2 was in fact in line with the literature value (given {\it{e.g.}} in
\cite{Fire96}), but quite different from the value {\it{E}}\,=\,9.200 MeV in chain 3. Further it was noted that the time difference $\Delta$t($^{260}$Db\,-\,$^{256}$Lr)\,=\,66.3 s was considerably longer than the known half-life of $^{256}$Lr ({\it{T$_{1/2}$}}\,=\,28$\pm$3 s \cite{Fire96}). So it was stated (JWP assessment): 'The results of this study are definitely of high quality but there is insufficient internal redundancy to warrant certitude at this stage. Confirmation by further results is needed to assign priority of discovery to this collaboration' \cite{Karol01}. In a further experiment at SHIP three more decay chains were observed, which confirmed the previous
results \cite{Hofm02}, leading to the JWP statement: 'Priority of discovery of element 111 by Hofmann et al. collaboration in \cite{Hofm95a} has been confirmed
owing to the additional convincing observations in \cite{Hofm02}' \cite{Karol03}.\\
For completeness it should be noted that the SHIP results were later confirmed in experiments performed at the GARIS separator at RIKEN, Wako (Japan), where
the same reaction was used \cite{MoM04a}, and at the BGS separator at LBNL Berkeley (USA), where, however, a different reaction, $^{208}$Pb($^{65}$Cu,n)$^{272}$Rg
was applied \cite{FoG04}.
\subsection{\bf{5.6 Element 112 (Copernicium)}}
Concerning discovery of element 112 the situation was even more complicated. In a first irradiation of $^{208}$Pb with $^{70}$Zn performed at SHIP early in
1996, two decay chains interpreted to start from $^{277}$Cn were reported \cite{Hofm96}. In chain 1 $\alpha$ decays down to $^{261}$Rf, in chain 2 $\alpha$ decays down
to $^{257}$No were observed. Both chains showed severe differences in chain members $\alpha$(1) and $\alpha$(2). (In the following $\alpha$(n) in chains 1 and 2 will be
denoted as $\alpha$(n1) and $\alpha$(n2), respectively.)\\
The $\alpha$ energies for $^{277}$Cn differed by 0.22 MeV, while the 'lifetimes' {\it{$\tau$}} (time differences between ER implantation and $\alpha$ decay) were comparable,
with {\it{E$_{\alpha 11}$}}\,=\, 11.65 MeV, {\it{$\tau_{\alpha 11}$}}\,=\,400 $\mu$s and {\it{E$_{\alpha 12}$}}\,=\, 11.45 MeV,
{\it{$\tau_{\alpha 12}$}}\,=\,280 $\mu$s.
For {\it{$\alpha$(2)}} ($^{273}$Ds) the discrepancies were more severe: {\it{E$_{\alpha 21}$}}\,=\,9.73 MeV, {\it{$\tau_{\alpha 21}$}}\,=\,170 ms and {\it{E$_{\alpha 22}$}}\,=\, 11.08 MeV, {\it{$\tau_{\alpha 22}$}}\,=\,110 $\mu$s. It seemed thus likely that the $\alpha$ decays of $^{273}$Ds (and thus also of $^{277}$Cn) occurred from different levels.
This was commented in the JWP report \cite{Karol01} as 'Redundancy is arguably and unfortunately confounded by the effects of isomerism. The two observed alphas
from $^{277}$112 involve different states and lead to yet two other very different decay branches in $^{273}$110. (...) The first two alpha in the chains show
no redundancy.' It was further remarked that the energy of $^{261}$Rf in chain 2 ({\it{E}}\,=\,8.52 MeV) differed by 0.24 MeV from the literature value \cite{Fire96}.
Indeed it was later shown by other research groups \cite{DvB08,HaK11,HaK12} that two long-lived states decaying by $\alpha$ emission exist in $^{261}$Rf, with one
state having a decay energy and a half-life of {\it{E$_{\alpha}$}}\,=\,8.51$\pm$0.06 MeV and {\it{T$_{1/2}$}}\,=\,2.6$^{+0.7}_{-0.5}$ s \cite{HaK12}, in line with the data from
chain 2 ({\it{E$_{\alpha 52}$}}\,=\,8.52 MeV, {\it{$\tau_{\alpha 52}$}}\,=\,4.7 s). But this feature was not known when the TWG report \cite{Karol01} was written.
Consequently it was stated 'The results of this study are of characteristically high quality, but there is insufficient internal redundancy to warrant conviction
at this state. Confirmation by further experiments is needed to assign priority of discovery to this collaboration.'\\
One further experiment was performed at SHIP in spring 2000, where one more decay chain was observed, which resembled chain 2 but was terminated by a fission
event \cite{Hofm02}. The latter was remarkable, as the fission branch of $^{261}$Rf was estimated at that time as {\it{b$_{sf}$}}\,$<$\,0.1. But also here later
experiments \cite{DvB08,HaK11,HaK12} established a high fission branch for the 2.6 s - activity with the most recent value
{\it{b$_{sf}$}}\,=\,0.82$\pm$0.09 \cite{HaK12}.\\
Then, during preparation of the manuscript \cite{Hofm02}, a 'disaster' happened: in a re-analysis of the data from 1996 chain 1 could not be
reproduced, similarly to the case of one chain in the element 110 synthesis experiment (see above) and of decay chains
reported from irradiations of $^{208}$Pb with $^{86}$Kr at the BGS, Berkeley, and interpreted to start from an isotope of
element 118 \cite{Ninov99,Ninov02}. It was shown that this chain had been created spuriously \cite{Hofm02}. At least this finding could
explain the inconsistencies concerning the data for $^{277}$Cn and $^{273}$Ds in chains 1 and 2.
On this basis the JWP concluded \cite{Karol03}: 'In summary, though there are only two chains, and neither is completely characterized on its own merit.
Supportive, independent results on intermediates remain less than completely compelling at that stage.'\\
In the following years two more experiments at SHIP using the reaction $^{70}$Zn + $^{208}$Pb were performed without observing any further chain
\cite{Hess20}; however, decay studies of $^{269}$Hs and $^{265}$Sg specifically confirmed the data for $^{261}$Rf \cite{DvB08}, while the decay chains
of $^{277}$Cn were reproduced in an irradiation of $^{208}$Pb with $^{70}$Zn at the GARIS separator at RIKEN, Wako (Japan) \cite{Morita05,Morita07}.\\
On this basis the JWP concluded in their report from 2009 \cite{Barber09}: 'The 1996 collaboration of Hofmann et al. \cite{Hofm96} combined with the
2002 collaboration of Hofmann et al. \cite{Hofm02} are accepted as the first evidence for synthesis of element with atomic number 112 being
supported by subsequent measurements of Morita \cite{Morita05,Morita07} and by assignment of decay properties of likely hassium intermediates
\cite{DvB08,Duell02,Turl03} in the decay chain of $^{277}$112'.\\
\section{6. Some critical assessments of decay chains starting from elements {\it{Z}}\,$\ge$\,112 and discussion of decay data of the chain members}
The experiments on synthesis of the new elements with {\it{Z}}\,=\,113 to {\it{Z}}\,=\,118 reflect the extreme difficulties connected
with identification of new elements on the basis of observing their decay when only very few nuclei are produced and decay chains end
in a region where no isotopes had been identified so far or where their decay properties are only scarcely known.
Nevertheless, discovery of elements {\it{Z}}\,=\,113 to {\it{Z}}\,=\,118 has been approved by IUPAC and discovery priority
was settled \cite{BarK11,KarB16,KarB16a}, and names have been proposed and accepted:\\
{\it{Z}}\,=\,113: Nihonium (Nh)\\
{\it{Z}}\,=\,114: Flerovium (Fl)\\
{\it{Z}}\,=\,115: Moscovium (Mc)\\
{\it{Z}}\,=\,116: Livermorium (Lv)\\
{\it{Z}}\,=\,117: Tennessine (Ts)\\
{\it{Z}}\,=\,118: Oganesson (Og)\\
Still there remain a couple of open questions and ambiguities concerning the decay properties of several isotopes, which may have an impact
on their final assignment. In the following we will discuss some selected cases and point to open problems that need to be clarified
in further experiments.\\
For illustrating the following discussion an excerpt of the charts of nuclei covering
the region {\it{Z}}\,$\ge$\,112 and {\it{N}}\,=\,(165\,-\,178)
is shown in fig. 1.\\
\subsection{\bf{6.1 Ambiguities in the assignment of decay chains - case of $^{293}$Fl - $^{291}$Fl} }
As already briefly mentioned in sect. 3, the continuous implantation of nuclei, the overlap of low-energy particles
passing the separator with the $\alpha$ decay energies of the expected particles, and efficiencies lower than 100 $\%$ of the detectors used
for anti-coincidence to discriminate between 'implantation of nuclei' and 'decays in the detector' introduce a problem of background.
It can be severe if only very few decay chains are observed; at a larger number of events, single chains containing a member that does not fit the rest of the data can easily be removed.\\
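The severity of such background can be quantified with the standard random-correlation estimate: for independent, Poisson-distributed signal classes, the expected number of accidental correlations scales with the counting rates and the correlation time window. Below is a minimal sketch of that estimate; all rates and windows are hypothetical illustration values, not data from the experiments discussed here.

```python
import math

def expected_random_correlations(rate_er, rate_alpha, window, duration):
    """Expected number of accidental ER-alpha correlations in one detector
    pixel, assuming independent Poisson processes: each implanted ER-like
    event opens a time window during which any background alpha-like signal
    would be (falsely) accepted as a correlated decay."""
    n_er = rate_er * duration                         # ER-like implantations
    p_random = 1.0 - math.exp(-rate_alpha * window)   # background signal inside window
    return n_er * p_random

# Hypothetical illustration: 0.01 ER-like events/s per pixel, 0.001
# alpha-like background signals/s, a 10 s correlation window, 30-day run.
n_accidental = expected_random_correlations(0.01, 0.001, 10.0, 30 * 24 * 3600)
```

The estimate makes the point in the text explicit: with realistic rates, hundreds of accidental two-member correlations per pixel can occur over a long run, which is only tolerable when enough genuine chains exist to identify and remove the outliers.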
\begin{figure}
\vspace{-1cm}
\resizebox{0.99\textwidth}{!}{%
\includegraphics{fig9.eps}
}
\caption{ Assignments and reassignments of a decay chain observed in an irradiation of
$^{248}$Cm with $^{48}$Ca; a) decay data reported for $^{293}$Lv and its daughter products \cite{Oga16}; b) decay chain as assigned in \cite{HofH10}; c) decay chain as assigned in \cite{Hofm12}; d) decay chain as assigned in \cite{HofH16};
e) decay data reported for $^{291}$Lv and its daughter products \cite{Oga16}.}
\label{fig:9}
\end{figure}
An example illustrating these difficulties is given in fig. 9.
The decay chain was observed at SHIP, GSI, in an irradiation of $^{248}$Cm with $^{48}$Ca at
a bombarding energy E$_{lab}$\,=\,265.4 MeV \cite{Hofm12}. A first analysis of the data yielded the implantation of an ER
followed by three $\alpha$ decays; the chain was terminated by a spontaneous fission event \cite{HofH10}, as shown in fig. 9b.
It was tentatively assigned to the decay of $^{293}$Lv.
After some further analysis, one more $\alpha$ decay (an event that occurred during the beam-on period), placed at the position of $^{285}$Cn,
was included into the chain, but the chain was still tentatively assigned to the decay of $^{293}$Lv \cite{Hofm12}, as shown in fig. 9c.
However, except for $^{293}$Lv the agreement of the decay properties ($\alpha$ energies, lifetimes) of the chain members with
literature data \cite{Oga16}, shown in fig. 9a, was rather bad.
Therefore, in a more recent publication \cite{HofH16} the assignment was revised by including a low-energy signal of 0.244 MeV,
registered during the beam-off period but without a position signal, into the chain at the place of $^{283}$Cn. The chain is now
assigned to the decay of $^{291}$Lv (fig. 9d). A comparison with the literature data \cite{Oga16}, presented in fig. 9e, shows a good agreement
in $\alpha$ decay energies for $^{287}$Fl, $^{279}$Ds, $^{275}$Hs and in 'lifetimes' (i.e. time differences between consecutive decay events)
for $^{291}$Lv, $^{287}$Fl, $^{283}$Cn, $^{275}$Hs, $^{271}$Sg. Significant differences, however, are obtained for the $\alpha$ energy of
$^{291}$Lv and the lifetime of $^{279}$Ds (the event observed in the beam-on period).
The difference of 240 keV in the $\alpha$ decay energies can in principle be explained by decays into different levels of the daughter nucleus. As in
\cite{Oga16} only three decays are reported, it might be that the decay of lower energy simply was not observed in the experiments
from which the data in \cite{Oga16} were obtained. Such an explanation is in principle reasonable.
For {\it{E$_{\alpha}$}}\,=\,10.74 MeV one obtains a theoretical $\alpha$ decay half-life of {\it{T$_{\alpha}$}}\,=\,32 ms using the formula suggested by
Poenaru \cite{PoI80} with the parameter modification suggested in \cite{Rur83}, which has been proven to reproduce $\alpha$ decay half-lives in the region of the heaviest nuclei very well \cite{Hess16a}.
The value is indeed in good agreement with the reported half-life of {\it{T$_{\alpha}$}}\,=\,19$^{+17}_{-6}$ ms \cite{Oga16}.
For {\it{E$_{\alpha}$}}\,=\,10.50 MeV one obtains {\it{T$_{\alpha}$}}\,=\,139 ms. This means that one expects some 25$\%$ intensity for an $\alpha$ transition
with an energy lower by about 250 keV, provided that the $\alpha$ decay hindrance factors are comparable for both transitions.
More severe, however, seems the lifetime of $^{279}$Ds, which is a factor of twenty longer than the reported half-life of
{\it{T$_{\alpha}$}}\,=\,0.21$\pm$0.04 s. The probability to observe an event after twenty half-lives is only $\approx$10$^{-6}$.
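Both numerical statements above follow from simple exponential-decay arithmetic; the sketch below reproduces only that arithmetic (the Poenaru half-life formula itself is not implemented here).

```python
# Theoretical partial half-lives quoted in the text (seconds) for the two
# candidate alpha transitions; since the decay rate scales as 1/T, the
# slower (lower-energy) branch carries roughly t_fast/t_slow of the
# intensity of the faster one, i.e. the "some 25 %" quoted above.
t_fast, t_slow = 0.032, 0.139
relative_intensity = t_fast / t_slow   # ~0.23

# Probability that a nucleus survives twenty half-lives before decaying,
# i.e. the ~1e-6 quoted above.
p_survive = 0.5 ** 20
```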
To conclude: it is certainly alluring to assign this chain to $^{291}$Lv; the assignment, however, is not unambiguous.
As long as it is not confirmed by further data, it should be taken with caution.\\
\subsection{\bf{6.2 Ambiguities in the assignment of decay chains - case of $^{289,288}$Fl}}
The observation of a decay chain, registered in an irradiation of $^{244}$Pu with $^{48}$Ca at {\it{E$_{lab}$}}\,=\,236 MeV
at the Dubna Gasfilled Separator (DGFRS), assigned to start
from $^{289}$Fl was reported by Oganessian et al. \cite{Ogan99a}. The data presented in \cite{Ogan99a} are shown in fig. 10a.
In a follow-up experiment two more decay chains with different decay characteristics were observed and attributed to the
neighbouring isotope $^{288}$Fl \cite{Ogan00a}, while in an irradiation of $^{248}$Cm with $^{48}$Ca at {\it{E$_{lab}$}}\,=\,240 MeV
one decay chain, shown in fig. 10c, was registered \cite{Ogan00b}. Decay properties of members 2, 3 and 4 of the latter chain were
consistent with those for $^{288}$Fl, $^{284}$Cn, and $^{280}$Ds. Consequently the chain (fig. 10c) was interpreted to start from
$^{292}$Lv.\\
\begin{figure}
\vspace{-1cm}
\resizebox{0.99\textwidth}{!}{%
\includegraphics{fig10.eps}
}
\caption{ Decay chains observed at the DGFRS in irradiations of $^{244}$Pu or $^{248}$Cm with $^{48}$Ca and assigned to decays
starting from $^{289}$Fl ((a) \cite{Ogan99a}), $^{288}$Fl ((b) \cite{Ogan00a}), and $^{292}$Lv ((c) \cite{Ogan00b}).}
\label{fig:10}
\end{figure}
However, results from later irradiations of $^{244}$Pu with $^{48}$Ca were interpreted in a different way \cite{Ogan04}.
The activity previously attributed to $^{288}$Fl was now assigned to $^{289}$Fl, while no further events having the characteristics of the chain
originally attributed to $^{289}$Fl \cite{Ogan99a} were observed. That chain was now considered a possible candidate for
$^{290}$Fl \cite{OganUt04}, but was not mentioned later as a decay chain stemming from a
flerovium isotope \cite{Ogan07}. However, a new activity, consisting of an $\alpha$ decay of {\it{E$_{\alpha}$}}\,=\,9.95$\pm$0.08 MeV and
{\it{T$_{1/2}$}}\,=\,0.63$^{+0.27}_{-0.14}$ s followed by a fission activity of
{\it{T$_{1/2}$}}\,=\,98$^{+41}_{-23}$ ms was observed. It was assigned to the decay sequence
$^{288}$Fl $\stackrel{\alpha}{\rightarrow}$ $^{284}$Cn $\stackrel{SF}{\rightarrow}$. These 'new' results were consistent with those obtained in later
irradiations of $^{244}$Pu with $^{48}$Ca \cite{Gates11} and of $^{248}$Cm with $^{48}$Ca \cite{Hofm12,Kaji17} in other labs.\\
In summary, the 'old' and 'new' results for $^{288,289}$Fl are compared in fig. 11. The 'new' results are taken from the recent review \cite{Oga16}; the 'old' results for $^{288}$Fl are the mean values of the three decays reported in \cite{Ogan00a,Ogan00b}, as evaluated by the author.\\
It should be noticed for completeness that Kaji et al. \cite{Kaji17} also observed a chain consisting of three $\alpha$ particles
terminated by a fission event. The chain was not regarded as unambiguous, and so $\alpha_{3}$ and the SF event were only tentatively
assigned to $^{284}$Cn ($\alpha_{3}$) and $^{280}$Ds (SF). In a more recent decay study of $^{288}$Fl using the production reaction
$^{244}$Pu($^{48}$Ca,4n)$^{288}$Fl, a small $\alpha$ decay branch (b$_{\alpha}$\,$\approx$\,0.02) and spontaneous fission of $^{280}$Ds were
confirmed \cite{Sarm21}.
\begin{figure}
\resizebox{0.99\textwidth}{!}{%
\includegraphics{fig11.eps}
}
\caption{ 'Old' and 'new' decay chains for $^{289}$Fl and $^{288}$Fl.}
\label{fig:11}
\end{figure}
\begin{figure}
\vspace{-1cm}
\resizebox{0.99\textwidth}{!}{%
\includegraphics{fig12.eps}
}
\caption{ 'Old' and 'new' decay chains for $^{287}$Fl and $^{283}$Cn; fig. 12.1 results observed at VASSILISSA: (a) data from
$^{48}$Ca + $^{242}$Pu \cite{Ogan99}; (b) \cite{OgaY99}, (c) \cite{OgaY04} from $^{48}$Ca + $^{238}$U irradiation;
fig. 12.2 results observed at DGFRS \cite{Ogan07}, SHIP \cite{HofA07}, GARIS II \cite{Kajia17}, BGS \cite{StavG09} in
irradiations of $^{238}$U or $^{242}$Pu with $^{48}$Ca. See text for details.}
\label{fig:12}
\end{figure}
\subsection{\bf{6.3 Ambiguities in the assignment of decay chains - case of $^{287}$Fl - $^{283}$Cn }}
A couple of weeks after submission of \cite{Ogan99a} (received March 9, 1999) another paper was submitted by Yu.Ts. Oganessian et al.,
reporting on synthesis of a flerovium isotope with mass number {\it{A}}\,=\,287 \cite{Ogan99} (received April 19, 1999). The experiment had been performed
at the energy filter VASSILISSA at FLNR-JINR Dubna, and the decay chains (shown in fig. 12a) were observed in bombardments of $^{242}$Pu with
$^{48}$Ca at E$_{lab}$\,=\,230\,-\,235 MeV. Two chains consisting of an $\alpha$ decay (in one case only an 'escape' $\alpha$ particle was registered) followed by
spontaneous fission were observed. Although the lifetimes of the SF events were longer than those of two SF events correlated to ERs observed in a preceding irradiation of $^{238}$U with
$^{48}$Ca at VASSILISSA \cite{OgaY99} (fig. 12b), they were attributed to the same isotope, $^{283}$Cn, and the $\alpha$ decays were attributed to $^{287}$Fl. In a later
irradiation of $^{238}$U with $^{48}$Ca at the same set-up two more SF events attributed to $^{283}$Cn were observed \cite{OgaY04} (fig. 12c). The production cross section was
$\sigma$\,=\,3.0$^{+4.0}_{-2.0}$ pb, in fair agreement with the value $\sigma$\,=\,5.0$^{+6.3}_{-3.2}$ pb obtained in the first experiment \cite{OgaY99}.\\
Both activities, however, could not be reproduced in irradiations of $^{238}$U and $^{242}$Pu with $^{48}$Ca performed at the Dubna Gas-filled Separator (DGFRS)
\cite{Ogan04b} (see fig. 12). $^{287}$Fl was here interpreted as an $\alpha$ emitter with {\it{E$_{\alpha}$}}\,=\,10.02$\pm$0.06 MeV, {\it{T$_{1/2}$}}\,=\,0.51$^{+0.18}_{-0.10}$ s,
$^{283}$Cn as an $\alpha$ emitter with {\it{E$_{\alpha}$}}\,=\,9.162$\pm$0.06 MeV, {\it{T$_{1/2}$}}\,=\,4.0$^{+1.3}_{-0.70}$ s.
Most of the decay chains were terminated by SF
of $^{279}$Ds, except in two cases: in one decay chain, observed in the irradiation of $^{242}$Pu, also $\alpha$ decay of $^{279}$Ds and $^{275}$Hs was observed and
the chain was terminated by SF of $^{271}$Sg; in one chain observed in the irradiation of $^{238}$U also $\alpha$ decay of $^{271}$Sg was observed and the chain was terminated by SF of $^{267}$Rf.
The chains previously observed at VASSILISSA were suspected to represent a less probable decay mode \cite{OganUt04}, but were not listed any more in later
publications (see e.g. \cite{Ogan07}).
The 'DGFRS results' were in-line with data for $^{283}$Cn and $^{287}$Fl later obtained in the reaction $^{238}$U($^{48}$Ca,3n)$^{283}$Cn investigated at SHIP, GSI Darmstadt \cite{HofA07} and at GARIS II, RIKEN, Wako \cite{Kajia17}, as well as in the reaction $^{242}$Pu($^{48}$Ca,3n)$^{287}$Fl investigated at BGS, LBNL Berkeley \cite{StavG09}, while the
'VASSILISSA - events' were not observed. \\
It should be noted, however, that the detector system used in \cite{HofA07} was more sensitive than that used in \cite{Ogan04b}.
In cases where the $\alpha$ decay of $^{283}$Cn was denoted as 'missing' in \cite{Ogan04b}
because fission directly followed the $\alpha$ decay of $^{287}$Fl, the $\alpha$ decay of $^{283}$Cn was probably not missing;
rather, fission occurred from $^{283}$Cn \cite{HofA07}.\\
The discrepancy between the 'DGFRS results' and the 'VASSILISSA results' could not be clarified so far, but it should be noted that the latter were not
considered any more in later reviews of SHE synthesis experiments at FLNR-JINR Dubna \cite{Oga16,Ogan07}.\\
However, the 'VASSILISSA results' were again discussed in the context of a series of events, registered in an irradiation of $^{248}$Cm with $^{54}$Cr at SHIP, which
were regarded as a signature of a decay chain starting from an isotope of element 120 \cite{HofH16}.
It should be noted that a critical re-inspection of this sequence of events showed that it does not fulfil the physics criteria for a 'real' decay chain and that the probability of it being a real chain
is $p \ll 0.01$ \cite{HesA17}. Nevertheless the discussion in \cite{HofH16} can be regarded as an illustrative example of trying to match doubtful data
from different experiments. Therefore it will be treated here in some more detail.\\
\begin{table}
\caption{Comparison of the 'event sequence' from the $^{54}$Cr + $^{248}$Cm irradiation at SHIP \cite{HofH16} and the VASSILISSA results
for $^{287}$Fl and $^{283}$Cn. Data taken from \cite{HofH16}.}
\label{tab:1}
\begin{tabular}{lllll}
\hline\noalign{\smallskip}
\noalign{\smallskip}\hline\noalign{\smallskip}
Isotope & E$_{\alpha}$(SHIP) / MeV & $\Delta$t & E$_{\alpha}$(VASSILISSA) / MeV & T$_{1/2}$ / s \\
\hline\noalign{\smallskip}
($^{299}$120) & 13.14$\pm$0.030 & 5.4 s* & & \\
($^{295}$Og) & 11.814$\pm$0.040 & 261.069 ms & & \\
($^{291}$Lv) & 10.698$\pm$0.030 & 18.378 ms & & \\
($^{287}$Fl) & 0.353 (10.14$^{+0.09}_{-0.27}$)** & 20.1 s & 10.29$\pm$0.02 & 5.5$^{+9.9}_{-2.1}$ s \\
($^{283}$Cn) & SF & 701 s & SF & 308$^{+212}_{-89}$ s \\
\end{tabular}
* time difference to the closest possible evaporation residue\\
** energy calculated from $\Delta$t = 20.1 s for a hindrance factor HF\,=\,104 \cite{HofH16}.
\vspace*{0.cm}
\end{table}
\begin{figure}
\vspace{-2cm}
\resizebox{0.95\textwidth}{!}{%
\includegraphics{fig13.eps}
}
\caption{Comparison of $\alpha$ decay spectra of $^{289}$Mc and $^{285}$Nh for different ways of production;
a) $^{289}$Mc from decay of $^{293}$Ts; b) $^{289}$Mc from 'direct' production $^{243}$Am($^{48}$Ca,2n)$^{289}$Mc;
c) $^{285}$Nh produced in $\alpha$ decay chains starting from $^{293}$Ts;
d) $^{285}$Nh produced in $\alpha$ decay chains starting from $^{289}$Mc.
}
\label{fig:13}
\end{figure}
The data are shown in table 1. Evidently the events $\alpha_{4}$ and 'SF' would represent $^{287}$Fl and $^{283}$Cn if the chain starts at $^{299}$120.
$\alpha_{4}$ is recorded as an $\alpha$ particle escaping the detector, depositing only an energy loss of $\Delta$E\,=\,0.353 MeV in it. Using the measured lifetime (20 s)
and a hindrance factor HF\,=\,104, as derived from the full-energy $\alpha$ event (10.29 MeV) attributed to $^{287}$Fl in \cite{Ogan99}, the authors calculated a full $\alpha$ decay energy
for $\alpha_{4}$ of {\it{E}}\,=\,10.14$^{+0.09}_{-0.27}$ MeV. Using the same procedure they obtained a full $\alpha$ energy of {\it{E}}\,=\,10.19$^{+0.10}_{-0.28}$ MeV for the {\it{E}}\,=\,2.31 MeV 'escape' event in \cite{Ogan99}.
The time differences {\it{$\Delta$T($\alpha_{3}$ - $\alpha_{4}$)}} and {\it{$\Delta$T($\alpha_{4}$ -SF)}} resulted
in lifetimes {\it{$\tau$($\alpha_{4}$}}\,=\,20$^{+89}_{-9}$ s
({\it{T$_{1/2}$}}\,=\,14$^{+62}_{-6}$ s) and {\it{$\tau$(SF)}}\,=\,12$^{+56}_{-5}$ min
({\it{T$_{1/2}$}}\,=\,500$^{+233}_{-208}$ s) \cite{HofH16} and thus were in-line with the 'VASSILISSA data' for $^{287}$Fl ({\it{E$_{\alpha}$}}\,=\,10.29$\pm$0.02 MeV, {\it{T$_{1/2}$}}\,=\,5.5$^{+9.9}_{-2.1}$ s)
and $^{283}$Cn ({\it{T$_{1/2}$}}\,=\,308$^{+212}_{-89}$ s).\\
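The conversion between the 'lifetimes' $\tau$ quoted above and the half-lives given in parentheses is {\it{T$_{1/2}$}}\,=\,$\tau\ln 2$; a quick numerical check of the two central values (arithmetic only):

```python
import math

def half_life_from_lifetime(tau):
    """Convert a mean lifetime tau into a half-life: T_1/2 = tau * ln 2."""
    return tau * math.log(2)

t_alpha4 = half_life_from_lifetime(20.0)       # ~13.9 s, quoted as 14 s
t_sf = half_life_from_lifetime(12.0 * 60.0)    # ~499 s, quoted as 500 s
```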
This finding was seen as a 'mutual support' of the data, strengthening the (tentative) assignments of the chains in \cite{Ogan99,HofH16}, although the authors of \cite{HofH16}
could not give a reasonable explanation why these data were only seen in the VASSILISSA experiment and could not be reproduced in other laboratories. As it was shown in
\cite{HesA17} that the decay chain in \cite{HofH16} represents just a sequence of background events, it becomes clear that blinkered data analysis may lead to
correlations between background events even if they are obtained in different experiments.\\
\begin{figure}
\resizebox{0.95\textwidth}{!}{%
\includegraphics{fig14.eps}
}
\caption{ $\alpha$ - $\alpha$ - correlations between decays of $^{289}$Mc and $^{285}$Nh; the insert shows the
energy distribution of $^{285}$Nh (E$_{\alpha}$\,=\,9.75-10.0 MeV), either correlated to $^{289}$Mc
E$_{\alpha}$\,$>$\,10.4 MeV (upper figure) or E$_{\alpha}$\,$<$\,10.4 MeV (lower figure).
}
\label{fig:14}
\end{figure}
\begin{figure}
\resizebox{0.95\textwidth}{!}{%
\includegraphics{fig15.eps}
}
\caption{ Suggestion for decay schemes of $^{293}$Ts, $^{289}$Mc(1) and $^{289}$Mc(2).
}
\label{fig:15}
\end{figure}
\subsection{\bf{6.5 Alpha decay chain of N-Z = 59 nuclei}}
Within the so far assigned superheavy nuclei, the decay properties of the {\it{N-Z}}\,=\,59 nuclei are of specific importance and
interest, as the acceptance of discovery of elements {\it{Z}}\,=\,117 (tennessine) and {\it{Z}}\,=\,115 (moscovium) is based on them.
The heaviest known nucleus of that chain, $^{293}$Ts, was produced by the reaction $^{249}$Bk($^{48}$Ca,4n)$^{293}$Ts
\cite{Ogan13,Ogan12,OgA10,OgA11,Khu14,Khu19}. A second entry point into this chain is $^{289}$Mc produced in the reaction
$^{243}$Am($^{48}$Ca,2n)$^{289}$Mc \cite{OgA13,OgA12,FoR16}. It was stated in \cite{Ogan13} 'decay energies and half-lives of
the nuclei $^{289}$115 $\rightarrow$ $^{285}$113 $\rightarrow$ $^{281}$Rg decay chains observed in the $^{243}$Am + $^{48}$Ca reaction
agree within the statistical uncertainties with the decay properties of the daughter nucleus of the $^{293}$117 nucleus produced in the $^{249}$Bk($^{48}$Ca, 4n)$^{293}$117 reaction (10 events) [...]. Such agreement provides indirect identification and consistency checks via cross - bombardment production of the same nuclei in different fusion reactions of $^{243}$Am and $^{249}$Bk targets with $^{48}$Ca projectiles'.
The fourth IUPAC/IUPAP Joint Working Party (JWP) on the priority of claims to the discovery of the new elements {\it{Z}}\,=\,113, 115 and 117 followed that statement and regarded it as a central issue for having met the criteria for the elements {\it{Z}}\,=\,115 and {\it{Z}}\,=\,117, using the following phrasing \cite{KarB16}:
a) Element 115: 'JWP ASSESSMENT: The 2010 [...] jointly with the 2013 [...] collaborations of Oganessian et al. have met the Criteria for discovery of the element with the atomic number {\it{Z}}\,=\,115 in as much as the reproducibility of the alpha chain energies and lifetimes of $^{289}$115 in a cross reaction comparison is very convincing.'\\
b) Element 117: 'JWP ASSESSMENT: A convincing case in cross reaction producing $^{289}$115 and $^{285}$113 from both $^{48}$Ca + $^{249}$Bk and $^{243}$Am is demonstrated in the top of the precious table. Thus, the 2010 [...], 2012 [...] and 2013 [...] jointly with the 2013 [...] collaborations of Oganessian et al. have met the criteria for discovery of the elements with atomic numbers {\it{Z}}\,=\,115 and {\it{Z}}\,=\,117.'\\
This assessment was criticized and a different interpretation of the results was suggested \cite{FoR16a}. This issue will be illuminated in the
following discussion. A final solution of the problem, however, cannot be presented on the basis of the available data.\\
The so far published decay data for the members of the {\it{N-Z}}\,=\,59 chain \cite{Oga16} are summarized in table 2. It should be noted, however,
that they are based solely on the results from the DGFRS experiments. A complete list of the decay data published so far
for the {\it{N-Z}}\,=\,59 chain members $^{293}$Ts, $^{289}$Mc, $^{285}$Nh and $^{281}$Rg is given in table 3.
Altogether eighteen decay chains assigned to start from $^{293}$Ts have been reported so far, sixteen from experiments at the DGFRS (Dubna) \cite{Ogan13,Ogan12,OgA11} and two from an experiment at TASCA (GSI) \cite{Khu19}. Evidently only one of the TASCA chains was complete; in the second one the first two members ($^{293}$Ts, $^{289}$Mc) were missing. Eleven chains starting
from $^{289}$Mc were reported, four from experiments performed at the DGFRS \cite{OgA13}, seven from a TASCA experiment
\cite{FoR16}. Also included in table 3 are three events observed in an irradiation of $^{243}$Am with $^{48}$Ca at the BGS, Berkeley \cite{GaG15}, although they were not explicitly assigned to $^{289}$Mc.\\
At first glance three items are striking:\\
a) four of the decay chains interpreted to start from $^{289}$Mc consist of an $\alpha$\,-\,SF correlation, i.e. SF decay of $^{285}$Nh, while in none of the eighteen chains starting from $^{293}$Ts was fission of $^{285}$Nh observed;\\
b) in chain no. D4 of \cite{OgA13} $^{285}$Nh has an $\alpha$ decay energy more than 0.2 MeV higher than that of all other decays where the $\alpha$ particle was registered with full energy (23 cases), which all had values {\it{E$_{\alpha}$}}\,$<$\,10 MeV;\\
c) $\alpha$ decay of $^{281}$Rg was only observed in decay chains starting from $^{293}$Ts.\\
To obtain more detailed information on the decay properties of the isotopes assigned to the {\it{N-Z}}\,=\,59 chain,
a closer inspection of the data listed in table 3 was performed; specifically, the results from the 'entry' into the chain
at $^{293}$Ts (reaction $^{48}$Ca + $^{249}$Bk) and the 'entry' into the chain at $^{289}$Mc (reaction $^{48}$Ca + $^{243}$Am)
were compared.
The resulting $\alpha$ spectra are shown in fig. 13. Before starting a detailed discussion
it seems, however, necessary to stress some items that could cause confusion.
First, as discussed in sect. 4.4, (individual) $\alpha$ energies measured in the experiments performed at the
different laboratories (DGFRS Dubna, TASCA Darmstadt, BGS Berkeley) vary considerably, possibly due to the
calibration procedures applied. Differences of {\it{$\Delta$E}}\,=\,76 keV were found between the DGFRS and BGS results,
and of {\it{$\Delta$E}}\,=\,41 keV between the DGFRS and TASCA results for $^{272}$Bh; as $\approx$90 $\%$ of the data in table 3
are from DGFRS or TASCA, an uncertainty of $\approx$50 keV in the absolute value may be considered. As will be shown, the
energy differences from the different production mechanisms are much larger and thus cannot be attributed to
calibration discrepancies.\\
Part of the $\alpha$ energies were obtained as sum events from the 'stop' and 'box' detectors, thus suffering from reduced
accuracy due to worse energy resolution. A few decays (three events) from the DGFRS experiments were registered as 'box-only' events
(i.e. the $\alpha$ particle escaped the 'stop' detector with an energy loss below the registration threshold).
That means the measured energies are too low by a few hundred keV. Due to the low number of these events,
this feature also cannot be
the reason for the differences in the energy distributions.\\
In the reaction considered, besides direct production of $^{289}$Mc also $^{288}$Mc is produced; the assignment to $^{289}$Mc
was based on the fact that the properties of the decay chains did not fit the decay chain of $^{288}$Mc.
In principle, production of $^{287}$Mc by the reaction $^{243}$Am($^{48}$Ca,4n)$^{287}$Mc can also not be ruled out.
However, the properties of the decay chains attributed to $^{289}$Mc are not in-line with the decay properties of $^{287}$Mc
and its daughter products (see e.g. \cite{Oga16}).\\
\begin{table}
\caption{Summary of decay properties of {\it{N-Z}}\,=\,59 nuclei; data taken from \cite{Oga16}.}
\label{tab:2}
\begin{tabular}{llll}
\hline\noalign{\smallskip}
\noalign{\smallskip}\hline\noalign{\smallskip}
Isotope & Decay mode & E$_{\alpha}$ / MeV & half-life \\
\hline\noalign{\smallskip}
$^{293}$Ts & $\alpha$ & 10.60-11.20 & 22$^{+8}_{-4}$ ms \\
$^{289}$Mc & $\alpha$ & 10.15-10.54 & 330$^{+120}_{-80}$ ms \\
$^{285}$Nh & $\alpha$ & 9.47-10.18 & 4.2$^{+1.4}_{-0.8}$ s \\
$^{281}$Rg & $\alpha$ ($\approx$0.12), SF (0.88$^{+0.07}_{-0.09}$) & 9.28$\pm$0.05 & 17$^{+6}_{-3}$ s \\
$^{277}$Mt & SF & & 5$^{+9}_{-2}$ ms \\
\end{tabular}
\vspace*{0.cm}
\end{table}
\begin{table}
\caption{Summary of observed decay chains starting from either $^{293}$Ts or $^{289}$Mc.}
\label{tab:3}
\footnotesize{
\begin{tabular}{lllllllll}
\hline\noalign{\smallskip}
\noalign{\smallskip}\hline\noalign{\smallskip}
Ref. & $^{293}$Ts & & $^{289}$Mc & & $^{285}$Nh & & $^{281}$Rg & \\
& E$_{\alpha}$/MeV & $\Delta$t/ms & E$_{\alpha}$/MeV & $\Delta$t/s & E$_{\alpha}$/MeV & $\Delta$t/s & E$_{\alpha}$/MeV & $\Delta$t/s \\
\hline\noalign{\smallskip}
\cite{OgA11} & 10.99 & 17.01 & missing & & 9.72 & (16.17) & SF & 40.19 \\
\cite{OgA11} & 11.14 & 7.89 & missing & & 9.52** & (2.23) & SF & 4.25 \\
\cite{OgA11} & 11.08* & 4.60 & 10.34 & 0.0175 & 9.71** & 1.17 & SF & 12.3 \\
\cite{OgA11} & 10.91 & 53.0 & 10.25 & 0.5118 & 9.79 & 0.238 & SF & 31.66 \\
\cite{OgA11} & 11.00 & 20.24 & 10.27 & 0.4244 & 9.48 & 13.49 & SF & 76.56 \\
\cite{Ogan12} & 10.90$\pm$0.10* & 7.525 & 10.37$\pm$0.28** & 0.2665 & 9.857$\pm$0.040 & 1.5155 & SF & 9.4192 \\
\cite{Ogan12} & 11.142$\pm$0.065 & 3.305 & 10.310$\pm$0.065 & 0.1819 & missing & & SF & (7.4538) \\
\cite{Ogan12} & 10.114$\pm$0.089* & 153.948 & missing & & 9.631$\pm$0.067 & (19.0456) & SF & 1.4809 \\
\cite{Ogan12} & 10.914$\pm$0.068 & 10.547 & 10.198$\pm$0.068 & 1.4348 & 9.36$\pm$0.3** & 1.3153 & SF & 103.406 \\
\cite{Ogan12} & 10.598$\pm$0.049 & 109.878 & 10.217$\pm$0.049 & 0.1510 & 9.683$\pm$0.049 & 1.5155 & SF & 42.1349 \\
\cite{Ogan13} & 10.969$\pm$0.068 & 0.043 & 10.60$\pm$0.33** & 0.0136 & 9.902$\pm$0.068 & 0.421 & SF & 15.0551 \\
\cite{Ogan13} & 11.183$\pm$0.048 & 2.553 & 10.364$\pm$0.048 & 0.9572 & 9.845$\pm$0.048 & 1.3712 & SF & 4.751 \\
\cite{Ogan13} & 11.203$\pm$0.070 & 8.173 & 10.279$\pm$0.070 & 0.0565 & 9.867$\pm$0.070 & 4.213 & 9.36$\pm$0.30 & 3.642 \\
\cite{Ogan13} & 11.059$\pm$0.050 & 7.525 & 10.362$\pm$0.050 & 0.0161 & 9.758$\pm$0.108 & 0.2589 & 9.280$\pm$0.050 & 2.9249 \\
\cite{Ogan13} & missing & & 10.145$\pm$0.066 & (0.0852) & 9.471$\pm$0.066 & 0.2456 & SF & 21.8372 \\
\cite{Ogan13} & 10.190$\pm$0.070 & 36.424 & missing & & missing & & SF & (4.488) \\
\cite{Khu19} & 9.70$\pm$0.08 & 8.65 & 10.00$\pm$0.03 & 0.07698 & (2.74) & 0.97 & 9.34$\pm$0.08 & 1.44 \\
\cite{Khu19} & missing & & missing & & 9.97$\pm$0.08 & 0.49 & 9.31$\pm$0.08 & 2.57 \\
\hline\noalign{\smallskip}
\cite{FoR16} & & & 10.51$\pm$0.01 & 0.227 & SF & 0.378 & & \\
\cite{FoR16} & & & (1.45$\pm$0.01) & 0.0645 & SF & 0.366 & & \\
\cite{FoR16} & & & 10.54$\pm$0.04 & 0.261 & 9.95$\pm$0.05 & 1.15 & SF & 0.343 \\
\cite{FoR16} & & & 10.34$\pm$0.01 & 1.46 & 9.89$\pm$0.01 & 0.0262 & SF & 0.432 \\
\cite{FoR16} & & & 10.49$\pm$0.04 & 0.345 & 9.97$\pm$0.01 & 0.369 & SF & 13.4 \\
\cite{FoR16} & & & 10.53$\pm$0.01 & 0.210 & 9.89$\pm$0.05 & 1.05 & SF & 8.27 \\
\cite{FoR16} & & & (0.541$\pm$0.03) & 0.815 & (3.12$\pm$0.01) & 2.33 & SF & 2.89 \\
\cite{OgA13} & & & 10.377$\pm$0.062 & 0.2562 & 9.886$\pm$0.062 & 1.4027 & SF & 1.977 \\
\cite{OgA13} & & & 10.540$\pm$0.123 & 0.0661 & 9.916$\pm$0.072 & 1.55 & SF & 2.364 \\
\cite{OgA13} & & & 10.373$\pm$0.050 & 2.3507 & 9.579$\pm$0.005 & 22.5822 & SF & 60.185 \\
\cite{OgA13} & & & 10.292$\pm$0.170*(D4) & 0.0536 & 10.178$\pm$0.055 & 0.4671 & SF & 0.0908 \\
\cite{GaG15} & & & 10.49$\pm$0.05 & 0.214 & 9.82$\pm$0.02 & 1.54 & (SF)*** & 7.57 \\
\cite{GaG15} & & & 10.49$\pm$0.02 & 0.0591 & SF & 0.824 & & \\
\cite{GaG15} & & & 10.22$\pm$0.02 & 0.0455 & SF & 0.142 & & \\
\end{tabular}
}
\vspace*{0.cm}
\end{table}
\normalsize
\begin{table}
\caption{Summary of halflife measurements.}
\label{tab:4}
\begin{tabular}{llll}
\hline\noalign{\smallskip}
\noalign{\smallskip}\hline\noalign{\smallskip}
Isotope & E$_{\alpha}$ / MeV & T$_{1/2}$ / s & correlation \\
\hline\noalign{\smallskip}
$^{289}$Mc & 10.51$\pm$0.02 & 0.13$^{+0.09}_{-0.04}$ & corr. to $^{285}$Nh (9.8\,-\,10.0 MeV) \\
& 10.49\,-\,10.55 & & \\
$^{289}$Mc & 10.20\,-\,10.40 & 0.42$^{+0.29}_{-0.12}$ & corr. to $^{285}$Nh (9.8\,-\,10.0 MeV) \\
$^{289}$Mc & 10.20\,-\,10.40 & 0.44$^{+0.27}_{-0.12}$ & corr. to $^{285}$Nh (9.3\,-\,9.8 MeV) \\
\hline\noalign{\smallskip}
$^{285}$Nh & 9.8\,-\,10.0 & 0.70$^{+0.48}_{-0.20}$ & corr. to $^{289}$Mc (10.49\,-\,10.55 MeV) \\
$^{285}$Nh & 9.8\,-\,10.0 & 0.70$^{+0.59}_{-0.20}$ & corr. to $^{289}$Mc (10.2\,-\,10.4 MeV) \\
$^{285}$Nh & 9.97, 9.87, 9.76 & 1.02$^{+0.xx}_{-0.xx}$ & corr. to $^{281}$Rg $\alpha$ decay (9.32$\pm$0.04) \\
$^{285}$Nh & 9.3\,-\,9.8 & 5.69$^{+3.45}_{-1.56}$ & corr. to $^{289}$Mc (10.2\,-\,10.4 MeV) \\
\hline\noalign{\smallskip}
$^{281}$Rg & SF & 5.5$^{+3.8}_{-1.6}$ & corr. to $^{285}$Nh (9.8\,-\,10.0 MeV) and/or \\
& & & corr. to $^{289}$Mc (10.49\,-\,10.55 MeV) \\
$^{281}$Rg & SF & 6.0$^{+4.1}_{-1.7}$ & corr. to $^{285}$Nh (9.8\,-\,10.0 MeV) and/or \\
& & & corr. to $^{289}$Mc (10.2\,-\,10.4 MeV) \\
$^{281}$Rg & SF & 31.6$^{+19.2}_{-8.7}$ & corr. to $^{285}$Nh (9.3\,-\,9.8 MeV) and/or \\
& & & corr. to $^{289}$Mc (10.2\,-\,10.4 MeV) \\
$^{281}$Rg & $\alpha$ decays & 1.8$^{+1.8}_{-0.6}$ & \\
\end{tabular}
\vspace*{0.cm}
\end{table}
As seen from fig. 13, the $\alpha$ energy spectra of $^{289}$Mc (figs. 13a,b) and $^{285}$Nh (figs. 13c,d) exhibit significant differences for the different production mechanisms. In the production by $\alpha$ decay of $^{293}$Ts nearly all $^{289}$Mc
events (11 of 12 cases) have energies {\it{E$_{\alpha}$}}\,$<$\,10.4 MeV, while in the direct production seven of twelve events
are concentrated in the energy range {\it{E$_{\alpha}$}}\,=\,(10.49\,-\,10.55) MeV. The half-lives also differ: for the group at
{\it{E$_{\alpha}$}}\,$<$\,10.4 MeV one obtains {\it{T$_{1/2}$}}\,=\,0.39$^{+0.14}_{-0.08}$ s, for the group at
{\it{E$_{\alpha}$}}\,$>$\,10.4 MeV one obtains {\it{T$_{1/2}$}}\,=\,0.11$^{+0.06}_{-0.03}$ s. An analogous situation is found for $^{285}$Nh:
about two thirds (10 of 15 cases) of the $\alpha$ events from chains starting at $^{293}$Ts have energies {\it{E$_{\alpha}$}}\,$<$\,9.8 MeV, while for decays within the chains starting at $^{289}$Mc only one of nine events is located in this energy interval. \\
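Group half-lives like those quoted above follow from the standard maximum-likelihood estimator for exponential decay, {\it{T$_{1/2}$}}\,=\,$\ln 2$ times the mean of the observed time differences; a minimal sketch with hypothetical lifetimes (the tabulated $\Delta$t values are deliberately not reused here, and the asymmetric small-sample uncertainties quoted in the text are not reproduced):

```python
import math

def ml_half_life(time_differences):
    """Maximum-likelihood half-life from observed decay time differences,
    assuming a single exponential decay: T_1/2 = ln(2) * mean(dt).
    Asymmetric small-sample error factors are not implemented here."""
    return math.log(2) * sum(time_differences) / len(time_differences)

# Hypothetical illustration values in seconds (not taken from table 3):
sample = [0.12, 0.45, 0.30, 0.08, 0.61]
t_half = ml_half_life(sample)
```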
This behavior is also evident in the $\alpha$ - $\alpha$ correlation spectrum (fig. 14); the {\it{E$_{\alpha}$}}\,$>$\,10.4 MeV
component of $^{289}$Mc is exclusively correlated to $^{285}$Nh events in the energy interval E$_{\alpha}$\,=\,(9.8-10.0) MeV,
while for $^{289}$Mc of {\it{E$_{\alpha}$}}\,$<$\,10.4 MeV only about half of the events are correlated
to $^{285}$Nh decays in that energy interval. But as seen in the inset of fig. 14 the energy distributions are not the same.
The $^{285}$Nh decays correlated to $^{289}$Mc at {\it{E$_{\alpha}$}}\,$>$\,10.4 MeV (upper figure) have somewhat higher energies
of {\it{E(mean)}}\,=\,9.91$\pm$0.05 MeV than those correlated to $^{289}$Mc at {\it{E$_{\alpha}$}}\,$<$\,10.4 MeV (lower figure),
which have {\it{E(mean)}}\,=\,9.86$\pm$0.04 MeV.\\
All these differences indicate that in the direct production a $^{289}$Mc component (it could also be a different isotope)
having an energy {\it{E$_{\alpha}$}}\,=\,10.53$\pm$0.04 MeV and a half-life of {\it{T$_{1/2}$}}\,=\,0.11$^{+0.06}_{-0.03}$ s is produced
which is not present in the decay chain of $^{293}$Ts, i.e. possibly an isomeric state is populated by deexcitation of the compound nucleus in the 'direct' production reaction, which is not populated by $\alpha$ decay of $^{293}$Ts. That would not be surprising,
as a couple of such examples are known in the transfermium region, e.g. $^{251}$No, $^{257}$Rf. \\
The assumption of an isomeric state in $^{289}$Mc does not vitiate the JWP assessment, but it clearly shows how fragile
such conclusions may be on the basis of very low statistics. \\
So we have to face the following situation, considering all decays listed in table 3:
\begin{itemize}
\item Within the production of $^{289}$Mc via $\alpha$ decay of $^{293}$Ts the $\alpha$ particles of $^{289}$Mc are located in an
energy interval {\it{E$_{\alpha}$}}\,=\,(10.2\,-\,10.4) MeV; the resulting half-life is {\it{T$_{1/2}$}}\,=\,0.28$^{+0.13}_{-0.07}$ s;
\item Within the production of $^{289}$Mc 'directly' via the reaction $^{243}$Am($^{48}$Ca,2n)$^{289}$Mc we observe two components
in the $\alpha$ energies: one at {\it{E$_{\alpha}$}}\,=\,(10.2\,-\,10.4) MeV with a half-life of {\it{T$_{1/2}$}}\,=\,0.71$^{+0.71}_{-0.24}$ s; and
one at {\it{E$_{\alpha}$}}\,=\,(10.49\,-\,10.55) MeV with a half-life of {\it{T$_{1/2}$}}\,=\,0.12$^{+0.13}_{-0.07}$ s;
\item The $^{285}$Nh events from the production via $^{293}$Ts $\rightarrow$ $^{289}$Mc $\rightarrow$ $^{285}$Nh are spread over
an energy range {\it{E$_{\alpha}$}}\,=\,(9.35\,-\,10.0) MeV and exhibit a half-life {\it{T$_{1/2}$($^{285}$Nh)}}\,=\,2.44$^{+0.94}_{-0.53}$ s;
\item The $^{285}$Nh events from the 'direct production' of $^{289}$Mc via $^{289}$Mc $\rightarrow$ $^{285}$Nh are in the
energy range {\it{E$_{\alpha}$}}\,=\,(9.8\,-\,10.0) MeV (except the two events at 9.58 MeV and 10.18 MeV) and exhibit a half-life {\it{T$_{1/2}$($^{285}$Nh)}}\,=\,0.76$^{+0.38}_{-0.19}$ s;
\item Taking into account in addition the $\alpha$ - $\alpha$ correlations $^{289}$Mc $\rightarrow$ $^{285}$Nh, we can tentatively distinguish
three groups:
\begin{enumerate}
\item $^{289}$Mc ({\it{E$_{\alpha}$}}\,=\,(10.49-10.55) MeV) $\rightarrow$ $^{285}$Nh ({\it{E$_{\alpha}$}}\,=\,(9.8-10.0) MeV),
with T$_{1/2}$($^{289}$Mc)\,=\,0.13$^{+0.09}_{-0.04}$ s and T$_{1/2}$($^{285}$Nh)\,=\,0.70$^{+0.48}_{-0.20}$ s; the fission events terminating
the decay chain have a half-life {\it{T$_{1/2}$($^{281}$Rg)}}\,=\,5.5$^{+3.8}_{-1.6}$ s;\\
\item $^{289}$Mc ({\it{E$_{\alpha}$}}\,=\,(10.2\,-\,10.4) MeV) $\rightarrow$ $^{285}$Nh (E$_{\alpha}$\,=\,(9.8-10.0) MeV),
with {\it{T$_{1/2}$($^{289}$Mc)}}\,=\,0.42$^{+0.29}_{-0.12}$ s and {\it{T$_{1/2}$($^{285}$Nh)}}\,=\,0.70$^{+0.49}_{-0.20}$ s; the fission events terminating
the decay chain have a half-life {\it{T$_{1/2}$($^{281}$Rg)}}\,=\,6.0$^{+4.1}_{-1.7}$ s;\\
\item $^{289}$Mc ({\it{E$_{\alpha}$}}\,=\,(10.15\,-\,10.4) MeV) $\rightarrow$ $^{285}$Nh ({\it{E$_{\alpha}$}}\,=\,(9.35\,-\,9.8) MeV),
with {\it{T$_{1/2}$($^{289}$Mc)}}\,=\,0.44$^{+0.27}_{-0.12}$ s and {\it{T$_{1/2}$($^{285}$Nh)}} = 5.69$^{+3.45}_{-1.56}$ s; the fission events terminating
the decay chain have a half-life {\it{T$_{1/2}$($^{281}$Rg)}}\,=\,31.6$^{+19.2}_{-8.7}$ s.\\
\end{enumerate}
\end{itemize}
Under these circumstances we can tentatively distinguish the following decay chains.
\begin{itemize}
\item $^{289}$Mc ({\it{E$_{\alpha}$}}\,=\,(10.4\,-\,10.6) MeV, {\it{T$_{1/2}$}}\,=\,0.13$^{+0.09}_{-0.04}$ s) $\rightarrow$
$^{285}$Nh ({\it{E$_{\alpha}$}}\,=\,(9.8\,-\,10.0) MeV, {\it{T$_{1/2}$}}\,=\,0.70$^{+0.48}_{-0.20}$ s) $\rightarrow$
$^{281}$Rg (SF, {\it{T$_{1/2}$}}\,=\,5.5$^{+3.8}_{-1.6}$ s);
\item $^{289}$Mc ({\it{E$_{\alpha}$}}\,=\,(10.2\,-\,10.4) MeV, {\it{T$_{1/2}$}}\,=\,0.42$^{+0.29}_{-0.12}$ s) $\rightarrow$
$^{285}$Nh ({\it{E$_{\alpha}$}}\,=\,(9.8\,-\,10.0) MeV, {\it{T$_{1/2}$}}\,=\,0.70$^{+0.49}_{-0.20}$ s) $\rightarrow$
$^{281}$Rg (SF, {\it{T$_{1/2}$}}\,=\,6.0$^{+4.1}_{-1.7}$ s);\\
\item $^{289}$Mc ({\it{E$_{\alpha}$}}\,=\,(10.2\,-\,10.4) MeV, {\it{T$_{1/2}$}}\,=\,0.44$^{+0.27}_{-0.12}$ s) $\rightarrow$
$^{285}$Nh ({\it{E$_{\alpha}$}}\,=\,(9.35-9.8) MeV, {\it{T$_{1/2}$}}\,=\,5.69$^{+3.45}_{-1.56}$ s) $\rightarrow$
$^{281}$Rg (SF, {\it{T$_{1/2}$}}\,=\,31.6$^{+19.2}_{-8.7}$ s);\\
\end{itemize}
This decay pattern, at first glance somewhat puzzling, can qualitatively be explained by the existence
of low-lying long-lived isomeric states in $^{289}$Mc, $^{285}$Nh and $^{281}$Rg decaying by $\alpha$ emission or
spontaneous fission. The existence of such states is due to Nilsson states with low and high spins placed closely together
at low excitation energies; the decay of such states by internal transitions is hindered by the large spin differences, thus the lifetimes become
long and $\alpha$ decay can compete with internal transitions. That is a well known phenomenon in the transfermium region.
In direct production both states are usually populated; in production by $\alpha$ decay the population of the states
depends on the decay of the mother nucleus. If there are two long-lived isomeric states in the mother nucleus, two long-lived
states in the daughter nucleus may also be populated, see e.g. the decay $^{257}$Db $\rightarrow$ $^{253}$Lr \cite{HesH01}; if there is only one
$\alpha$ emitting state populated by the deexcitation process two cases are possible; either only one state in the daughter nucleus is populated
as e.g. in the decay $^{261}$Sg $\rightarrow$ $^{257}$Rf \cite{Streich10}, or both long-lived states in the daughter nucleus may be
populated, as known for $^{261}$Bh $\rightarrow$ $^{257}$Db \cite{HesA10a}.\\
Under these circumstances the puzzling behavior can be understood in the following way:
decay of $^{293}$Ts populates one state $^{289}$Mc(1) (10.2-10.4 MeV), while direct production populates two states
$^{289}$Mc(1) and $^{289}$Mc(2) (10.4-10.6 MeV); $^{289}$Mc(2) decays exclusively into one state $^{285}$Nh(2)(9.8-10.0 MeV), which then
decays into $^{281}$Rg(2) which undergoes fission and $\alpha$ decay. $^{289}$Mc(1) on the other side partly populates $^{285}$Nh(2) and $^{285}$Nh(1) (9.35-9.8 MeV) which then decays into $^{281}$Rg(1) which undergoes probably nearly exclusively spontaneous fission.
The resulting tentative decay scheme is shown in fig. 15.\\
In addition there might be other contributions, e.g.
the chain marked as D4 in Table 3, which does not fit to the other ones.\\
Also the very short chains in Table 3 consisting of $\alpha$ $\rightarrow$ SF seemingly may have a different origin.
The half-lives of the $\alpha$ events, T$_{1/2}$($\alpha$) = 0.069$^{+0.069}_{-0.23}$ s, and of the fission events,
T$_{1/2}$(SF) = 0.3$^{+0.30}_{-0.1}$ s, are lower than the values for $^{289}$Mc(2) and $^{285}$Nh(2),
but considering the large uncertainties they are not in disagreement. So they could indicate a fission branch of
$^{285}$Nh on the order of b$_{SF}$\,$\approx$\,0.4. We remark here also the short half-life of
T$_{1/2}$\,=\,1.8$^{+1.8}_{-0.6}$s of
the $\alpha$ events assigned to $^{281}$Rg which is considerably shorter than the half-life of the fission events. Despite this fact we tentatively
assign them to $^{281}$Rg(1). \\
The joint analysis of the data presented for the decay chains interpreted to start either from $^{293}$Ts or from
$^{289}$Mc seems to shed some light on the 'puzzling' decay data reported so far and suggests a solution. It should be noted,
however, that the conclusions drawn here must be confirmed by more sensitive measurements before they can be finally
accepted.
\begin{figure}
\resizebox{0.95\textwidth}{!}{
\includegraphics{fig16.eps}
}
\vspace{0cm}
\caption{Systematics of maximum production cross-sections in cold fusion reactions and reactions using actinide targets
}
\label{fig:16}
\end{figure}
\begin{figure}
\resizebox{0.95\textwidth}{!}{
\includegraphics{fig17.eps}
}
\vspace{-1cm}
\caption{Decay chains attributed to start from $^{278}$Nh \cite{Morita15}
}
\label{fig:17}
\end{figure}
\begin{figure}
\resizebox{0.90\textwidth}{!}{
\includegraphics{fig18.eps}
}
\caption{$\alpha$ decay energies of $^{266}$Bh as reported by different authors;
a) from decay of $^{278}$Nh \cite{Morita15},
b) production via $^{249}$Bk($^{22}$Ne,5n)$^{266}$Bh \cite{Wilk00},
c) production via $^{248}$Cm($^{23}$Na,5n)$^{266}$Bh \cite{Morita09,Morita15}, $\alpha$ energies from triple $\alpha$
correlations $\alpha_{1}$($^{266}$Bh) - $\alpha_{2}$($^{262}$Db) - $\alpha_{3}$($^{258}$Lr),
d) production via $^{248}$Cm($^{23}$Na,5n)$^{266}$Bh \cite{Morita09,Morita15}, $\alpha$ energies from double $\alpha$
correlations $\alpha_{1}$($^{266}$Bh) - $\alpha_{2}$($^{262}$Db or $^{258}$Lr),
e) production via $^{248}$Cm($^{23}$Na,5n)$^{266}$Bh \cite{Morita09,Morita15}, $\alpha$ energies from $\alpha$ - SF
correlations,
f) production via $^{243}$Am($^{26}$Mg,3n)$^{266}$Bh \cite{Qin06},
g) production via $^{248}$Cm($^{23}$Na,5n)$^{266}$Bh, $\alpha$($^{266}$Bh) - SF($^{262}$Db) correlations \cite{Haba20},
h) production via $^{248}$Cm($^{23}$Na,5n)$^{266}$Bh, triple correlations $\alpha_{1}$($^{266}$Bh) - $\alpha_{2}$($^{262}$Db) - $\alpha_{3}$($^{258}$Lr) \cite{Haba20}.
}
\label{fig:18}
\end{figure}
\begin{figure}
\resizebox{0.95\textwidth}{!}{
\includegraphics{fig19.eps}
}
\caption{$\alpha$ decay energies of $^{262}$Db as reported by different authors;
a) from decay of $^{278}$Nh \cite{Morita15},
b) production via $^{249}$Bk($^{22}$Ne,5n)$^{266}$Bh \cite{Wilk00},
c) production via $^{248}$Cm($^{23}$Na,5n)$^{266}$Bh \cite{Morita09,Morita15}, $\alpha$ energies from triple $\alpha$
correlations $\alpha_{1}$($^{266}$Bh) - $\alpha_{2}$($^{262}$Db) - $\alpha_{3}$($^{258}$Lr),
d) production via $^{248}$Cm($^{23}$Na,5n)$^{266}$Bh \cite{Morita09,Morita15}, $\alpha$ energies from double $\alpha$ correlations $\alpha_{1}$($^{262}$Db) - $\alpha_{2}$($^{258}$Lr), with $\alpha$ decay of $^{266}$Bh
not recorded;
e) production via $^{243}$Am($^{26}$Mg,3n)$^{266}$Bh \cite{Qin06},
f) production via $^{248}$Cm($^{19}$F,5n)$^{262}$Db \cite{Dress99},
g) production via $^{248}$Cm($^{19}$F,5n)$^{262}$Db \cite{Haba14},
h) production via $^{248}$Cm($^{23}$Na,5n)$^{266}$Bh \cite{Haba20}.
}
\label{fig:19}
\end{figure}
\begin{figure}
\resizebox{0.95\textwidth}{!}{
\includegraphics{fig20.eps}
}
\caption{$\alpha$ decay energies of $^{258}$Lr as reported by different authors;
a) from decay of $^{278}$Nh \cite{Morita15},
b) production via $^{249}$Bk($^{22}$Ne,5n)$^{266}$Bh \cite{Wilk00},
c) production via $^{248}$Cm($^{23}$Na,5n)$^{266}$Bh \cite{Morita09,Morita15}, $\alpha$ energies from triple $\alpha$
correlations $\alpha_{1}$($^{266}$Bh) - $\alpha_{2}$($^{262}$Db) - $\alpha_{3}$($^{258}$Lr),
d) production via $^{248}$Cm($^{23}$Na,5n)$^{266}$Bh \cite{Morita09,Morita15}, $\alpha$ energies from double $\alpha$ correlations $\alpha_{1}$($^{262}$Db) - $\alpha_{2}$($^{258}$Lr),
with $\alpha$ decay of $^{266}$Bh not recorded,
e) production via $^{243}$Am($^{26}$Mg,3n)$^{266}$Bh \cite{Qin06},
f) production via $^{248}$Cm($^{19}$F,5n)$^{262}$Db \cite{Dress99},
g) production via $^{248}$Cm($^{19}$F,5n)$^{262}$Db \cite{Haba14},
h) production via $^{248}$Cm($^{23}$Na,5n)$^{266}$Bh \cite{Haba20}.
}
\label{fig:20}
\end{figure}
\begin{figure}
\resizebox{0.95\textwidth}{!}{
\includegraphics{fig21.eps}
}
\caption{Comparison of all published triple correlations $^{266}$Bh $\stackrel{\alpha}{\rightarrow}$ $^{262}$Db $\stackrel{\alpha}{\rightarrow}$
$^{258}$Lr $\stackrel{\alpha}{\rightarrow}$
}
\label{fig:21}
\end{figure}
\newpage
\subsection{\bf{6.6 Discovery of element 113 - Alpha decay chain of $^{278}$Nh}}
\begin{table}
\begin{center}
\begin{tabular}{| l l l l l l l l l l |}
\hline
& \vline & chain 1 & & \vline & chain 2 & & \vline & chain 3 & \\
Isotope & \vline & E$_{\alpha}$/MeV & T$_{1/2}$ & \vline & E$_{\alpha}$/MeV & T$_{1/2}$ & \vline & E$_{\alpha}$/MeV & T$_{1/2}$ \\
\hline
$^{278}$113 & \vline & 11.68$\pm$0.04 & 0.344 ms & \vline & 11.52$\pm$0.04 & 4.93 ms & \vline & 11.82$\pm$0.06 & 0.667 ms \\
$^{274}$Rg & \vline & 11.15$\pm$0.07 & 9.26 ms & \vline & 11.31$\pm$0.07 & 34.3 ms & \vline & 10.65$\pm$0.06 & 9.97 ms \\
$^{270}$Mt & \vline & 10.03$\pm$0.07 & 7.16 ms & \vline & 2.32 (esc) & 1.63 s & \vline & 10.26$\pm$0.07 & 444 ms \\
$^{266}$Bh & \vline & 9.08$\pm$0.04 & 2.47 s & \vline & 9.77$\pm$0.04 & 1.31 s & \vline & 9.39$\pm$0.06 & 5.26 s \\
$^{262}$Db & \vline & sf & 40.9 s & \vline & sf & 0.787 s & \vline & 8.63$\pm$0.06 & 126 s \\
$^{258}$Lr & \vline & & & \vline & & & \vline & 8.66$\pm$0.06 & 3.78 s \\
\hline
\end{tabular}
\end{center}
\caption{Decay chains observed at GARIS, RIKEN in the reaction $^{70}$Zn + $^{209}$Bi and interpreted to start from $^{278}$113 \cite{Morita15}.
'esc' denotes that the $\alpha$ particle escaped the 'stop' detector and only an energy loss signal was recorded.} \label{tab5}
\end{table}
The first report on discovery of element 113 was published by Oganessian et al. \cite{OganU04} in 2004. In an irradiation of $^{243}$Am with $^{48}$Ca performed at the DGFRS three decay chains were observed which were interpreted to start from $^{288}$115; the isotope
$^{284}$113 of the new element 113 was thus produced as the $\alpha$ decay descendant of $^{288}$115. The data published in 2004
were confirmed at the DGFRS \cite{OgA13}, at TASCA \cite{RuF13} and also later - after giving credit to the discovery of element 113 - at the BGS \cite{GaG15}. Nevertheless the fourth IUPAC/IUPAP Joint Working Party (JWP) did not accept these results
as the discovery of element 113, as they concluded that the discovery profiles were not fulfilled: 'The 2013 Oganessian
collaboration \cite{OgA13} and the 2013 Rudolph collaboration \cite{RuF13} provide redundancy to the three $^{288}$113 chains
observed in 2004 with the $\alpha$ energies being in excellent agreement among most of the events. [...] However, the criteria (q.v. \cite{Wapstra91}) have not been met as there is no mandatory identification of the chain atomic numbers neither through a known descendant nor by cross reaction. Chemical determination as detailed in the subsequent profile of Z\,=\,115 where they are documented, serving the important role of assigning atomic number are insufficiently selective although certainly otherwise informative' \cite{KarB16}.\\
Instead credit for discovery of element 113 was given to Morita et al. on the basis of three decay chains observed in the 'cold' fusion reaction $^{70}$Zn + $^{209}$Bi \cite{KarB16}.\\
Although cold fusion had been the successful method to synthesize elements {\it{Z}}\,=\,107 to {\it{Z}}\,=\,112, due to the steep decrease of the
cross-sections by a factor of three to four per element, it did not seem to be straightforward to assume it would be the silver bullet to the
SHE (see fig. 16). Nevertheless after the successful synthesis of element 112 in bombardments of $^{208}$Pb with $^{70}$Zn \cite{Hofm96}
it seemed straightforward to attempt to synthesize element 113 in the reaction $^{209}$Bi($^{70}$Zn,n)$^{278}$113. Being optimistic and
assuming a drop in the cross section not larger than a factor of five, as observed for the step from element 110 to element 111
(see fig. 16), a cross section of some hundred femtobarn could be expected.\\
First attempts were undertaken at SHIP, GSI, Darmstadt, Germany in 1998 \cite{HofH99} and in 2003 \cite{HofA04}. No $\alpha$ decay chains
that could be attributed to start from an element 113 isotope were observed. Merging the projectile doses collected in both experiments
an upper production cross section limit $\sigma$\,$\le$\,160 fb was obtained \cite{Hofm11}.\\
More intensively this reaction was studied at the GARIS separator, Riken, Wako-shi, Japan. Over a period of nine years (from 2003 to 2012)
with a complete irradiation time of 575 days altogether three decay chains interpreted to start from $^{278}$113 were observed
\cite{Morita04,Morita07a,Morita13,Morita15}.
The collected beam dose was 1.35$\times$10$^{20}$ $^{70}$Zn ions, the formation cross-section was $\sigma$\,=\,22$^{+20}_{-13}$ fb
\cite{Morita13}.\\
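For orientation, a cross-section of this magnitude can be reproduced from the quoted number of chains and beam dose by a simple counting estimate, $\sigma = N/(\Phi\,n_{t}\,\varepsilon)$. In the sketch below only the event number and beam dose come from the text; the $^{209}$Bi target areal density and the overall separator-plus-detection efficiency are assumptions chosen purely for illustration, not the actual GARIS values.

```python
# Counting estimate of a production cross-section.
# Only 'events' and 'beam_dose' are taken from the text; the target
# thickness and efficiency are ASSUMED illustrative values.
N_AVOGADRO = 6.022e23
events = 3                    # observed decay chains (from the text)
beam_dose = 1.35e20           # 70Zn ions on target (from the text)
areal_density_g = 0.45e-3     # g/cm^2, assumed 209Bi target thickness
molar_mass = 209.0            # g/mol for 209Bi
efficiency = 0.8              # assumed transport + detection efficiency

n_target = areal_density_g / molar_mass * N_AVOGADRO   # atoms/cm^2
sigma_cm2 = events / (beam_dose * n_target * efficiency)
sigma_fb = sigma_cm2 / 1e-39  # 1 fb = 1e-39 cm^2
print(f"sigma ~ {sigma_fb:.0f} fb")
```

With these assumed numbers the estimate lands near the published 22 fb, which mainly illustrates how few atoms per 10$^{38}$ beam-target encounters such an experiment detects.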
The chains are shown in fig. 17 and the data are presented in table 5. Chain 1 and chain 2 consist of four $\alpha$ particles and are
terminated by a fission event, while chain 3 consists of six $\alpha$ decays. Already at first glance the large differences in the
$\alpha$ energies of members assigned to the same isotope are striking, especially for the events $\alpha_{2}$($^{274}$Rg) in chains 2 and 3
with {\it{$\Delta$E}}\,=\,0.68 MeV and $\alpha_{4}$($^{266}$Bh) in chains 1 and 2 with {\it{$\Delta$E}}\,=\,0.69 MeV. Although it is known
that $\alpha$ - decay energies can vary in a wide range for odd - odd nuclei in the region of heaviest nuclei, as was shown, e.g.,
for $^{266}$Mt, where $\alpha$ energies were found to vary in the range {\it{E$_{\alpha}$}}\,=\,(10.456\,-\,11.739) MeV \cite{HofH97},
the assignment of such different energies to the decay of the same isotope or the same nuclear level can be debated ({\it{see e.g.}} \cite{Hess13}), as specifically
concerning the latter case it is known that in odd - odd (and also in odd - mass) nuclei often low lying isomeric states exist, which
decay by $\alpha$ emission with energies and half-lives similar to those of the ground state (see e.g. \cite{Vost15}
for the cases of $^{258}$Db, $^{254}$Lr, and $^{250}$Md).
In the present case large $\alpha$ energy differences of {\it{$\Delta$E}}\,$>$\,0.1 MeV are evident for the corresponding members in
all chains as shown in table 6.
\begin{table}
\begin{center}
\begin{tabular}{l l l l l l l}
\hline
isotope & E$_{\alpha}$/MeV & E$_{\alpha}$/MeV & E$_{\alpha}$/MeV & $\mid$$\Delta$E$_{\alpha}$$\mid$/MeV & $\mid$$\Delta$ E$_{\alpha}$$\mid$/MeV &
$\mid$$\Delta$E$_{\alpha}$$\mid$/MeV \\
& Chain 1 & Chain 2 & Chain 3 & Ch.1 - Ch.2 & Ch.1 - Ch.3 & Ch.2 - Ch. 3 \\
\hline
$^{278}$113 & 11.68$\pm$0.04 & 11.52$\pm$0.04 & 11.82$\pm$0.06 & 0.16 & 0.14 & 0.30 \\
$^{274}$Rg & 11.15$\pm$0.07 & 11.31$\pm$0.07 & 10.65$\pm$0.06 & 0.16 & 0.50 & 0.68 \\
$^{270}$Mt & 10.03$\pm$0.07 & 2.32 (esc) & 10.26$\pm$0.07 & - & 0,23 & - \\
$^{266}$Bh & 9.08$\pm$0.04 & 9.77$\pm$0.04 & 9.39$\pm$0.06 & 0.69 & 0.31 & 0.48 \\
\hline
\end{tabular}
\end{center}
\caption{$\alpha$ energy differences of the individual chain members of the three decay chains interpreted to start from
$^{278}$113 } \label{tab6}
\end{table}
These differences are of specific importance for $^{266}$Bh which acts as an anchor point for identification of
the chains. The observation of $\alpha$ decay of this isotope has been reported by several authors who produced it in different reactions:\\
a) Wilk et al. \cite{Wilk00} used the reaction $^{249}$Bk($^{22}$Ne,5n)$^{266}$Bh. They observed one event with an $\alpha$ energy of
{\it{E$_{\alpha}$}}\,=\,9.29 MeV.\\
b) Morita et al. \cite{Morita15,Morita09} used the reaction $^{248}$Cm($^{23}$Na,5n)$^{266}$Bh; they observed in total 32 decay chains; 20 of
them
were attributed (partly tentatively) to the decay of $^{266}$Bh; four decay chains consisted of three $\alpha$ particles, assigned as decays
$\alpha_{1}$($^{266}$Bh) - $\alpha_{2}$($^{262}$Db) - $\alpha_{3}$($^{258}$Lr); four decay chains consisted of two $\alpha$ particles,
interpreted as decays $\alpha_{1}$($^{266}$Bh) - $\alpha_{2}$($^{262}$Db or $^{258}$Lr); twelve decay chains consisted of an $\alpha$ particle
followed by a fission event, interpreted as $\alpha$($^{266}$Bh) - SF($^{262}$Db)(possibly SF from $^{262}$Rf,
produced by EC decay of $^{262}$Db); in the case of four $\alpha$ energies of E$_{\alpha}$\,$<$\,9 MeV, the
assignment was marked as 'tentative'. \\
c) Qin et al. \cite{Qin06} used the reaction $^{243}$Am($^{26}$Mg,3n)$^{266}$Bh. They observed four decay chains which they assigned
to start from $^{266}$Bh.\\
Evidently there is no real agreement for the $\alpha$ energies of $^{266}$Bh; two of the three energies of $^{266}$Bh from the
$^{278}$113 decay chains (fig. 18a) are outside the range of energies observed in direct production, which is specifically critical
for chain 3, as it is not terminated by fission, but $\alpha$ decay is followed by two more $\alpha$ events attributed to
$^{262}$Db and $^{258}$Lr and thus is the anchor point for identification of the chain. Some agreement is obtained for the events
from the direct production followed by fission \cite{Morita09,Morita15} (fig. 18e), the $\alpha$ energy in chain 1 (fig. 18a)
(also followed by fission) and the results from Qin et al. \cite{Qin06} (fig. 18f), where two groups at (9.05\,-\,9.1) MeV and (8.9\,-\,9.0) MeV
are visible. Note, that in \cite{Morita09,Morita15} the events at {\it{E$_{\alpha}$}}\,$<$\,9.0 MeV followed by fission are only assigned
tentatively to $^{266}$Bh, while in \cite{Qin06} all $^{266}$Bh $\alpha$ decays are followed by $\alpha$ decays.\\
Unclear is the situation of the events followed by $\alpha$ decays. As seen in figs. 18c, 18d, and 18f, there are already in
the results from \cite{Morita09,Morita15} discrepancies in the $^{266}$Bh energies from triple correlations (fig. 18c) and double correlations
(fig. 18d). In the triple correlations there is one event at {\it{E}}\,=\,8.82 MeV, three more are in the interval {\it{E}}\,=\,(9.08\,-\,9.2) MeV, while for the
double correlations all four events are in the range {\it{E}}\,=\,(9.14\,-\,9.23) MeV; tentatively merging the $^{266}$Bh $\alpha$ energies from events
followed by $\alpha$ decay we find six of eight events (75 per cent) in the range {\it{E}}\,=\,(9.14\,-\,9.23) MeV while only one of twelve events
followed by fission is observed in that region. In this energy range none of the events observed by Qin et al. \cite{Qin06} (which are all below 9.14 MeV) is found, nor any of the events from the decay of $^{278}$113, nor the event reported by Wilk et al. \cite{Wilk00} (fig. 18b).\\
To conclude, the $\alpha$ decay energies of $^{266}$Bh reported from the different production reactions as well as from the different decay modes of the daughter products ($\alpha$ decay or SF/EC) vary considerably, so there is no real experimental
basis to use $^{266}$Bh as an anchor point for identification of the chain assumed to start at $^{278}$113.\\
Discrepancies are also found for the halflives. From the $^{278}$113 decay chains a half-life of {\it{T$_{1/2}$}}\,=\,2.2$^{+2.9}_{-0.8}$ s is obtained for
$^{266}$Bh \cite{Morita15}, while Qin et al. \cite{Qin06} give a value T$_{1/2}$\,=\,0.66$^{+0.59}_{-0.26}$ s.
The discrepancy is already slightly outside the 1$\sigma$ confidence interval. No half-life value is given from the direct production
of Morita et al. \cite{Morita15}. \\
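The 1$\sigma$ comparison of the two $^{266}$Bh half-life values can be sketched as a simple interval-overlap check, treating the quoted asymmetric errors naively as interval bounds (a rough criterion, not a rigorous likelihood comparison):

```python
# Half-lives (seconds) with asymmetric 1-sigma errors, from the text:
# (value, minus error, plus error)
t_morita = (2.2, 0.8, 2.9)     # Morita et al., from 278-113 chains
t_qin    = (0.66, 0.26, 0.59)  # Qin et al., direct production

def one_sigma_interval(value, minus, plus):
    """Naive 1-sigma interval from a value with asymmetric errors."""
    return (value - minus, value + plus)

lo1, hi1 = one_sigma_interval(*t_morita)   # (1.4, 5.1)
lo2, hi2 = one_sigma_interval(*t_qin)      # (0.40, 1.25)
overlap = max(lo1, lo2) <= min(hi1, hi2)
print("1-sigma intervals overlap:", overlap)
```

The intervals (1.4\,-\,5.1) s and (0.40\,-\,1.25) s just fail to touch, consistent with the statement that the discrepancy lies slightly outside the 1$\sigma$ interval.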
The disagreement in the decay properties of $^{266}$Bh reported by different authors renders the interpretation of the $\alpha$ decay chain
(chain 3) quite difficult. It is therefore of importance to check the following $\alpha$ decays assigned to $^{262}$Db and $^{258}$Lr,
respectively, as they may help to clarify the situation. In order to do so, it is required to review the reported decay properties of these
isotopes and to compare the results with the data in chain 3.\\
It should also be remarked here that the differences of the $\alpha$ energies attributed to $^{266}$Bh followed by $\alpha$ decays or by SF
in \cite{Morita15} indicate that the assignment of these events to the same isotope is not straightforward, at least not the
assignment to the decay of the same nuclear level.\\
In a previous data compilation \cite{Fire96} three $\alpha$ lines of {\it{E$_{\alpha1}$}}\,=\,8.45$\pm$0.02 MeV (i\,=\,0.75),
{\it{E$_{\alpha2}$}}\,=\,8.53$\pm$0.02 MeV (i\,=\,0.16), {\it{E$_{\alpha3}$}}\,=\,8.67$\pm$0.02 MeV (i\,=\,0.09) and a half-life of
{\it{T$_{1/2}$}}\,=\,34$\pm$4 s are reported for $^{262}$Db. More recent data were obtained from decay studies of $^{266}$Bh \cite{Morita15,Wilk00,Morita09,Qin06}
or from direct production via the reaction $^{248}$Cm($^{19}$F,5n)$^{262}$Db \cite{Dress99,Haba14}. The results of the different studies are
compared in fig. 19.\\
The energy of the one event from the $^{278}$113 decay chain 3 is shown in fig. 19a.
The most extensive recent data for $^{262}$Db were collected by Haba et al. \cite{Haba14}. They observed two groups of $\alpha$-decay
energies, one at E$_{\alpha}$\,=\,(8.40-8.55) MeV (in the following also denoted as 'low energy component') and another one at E$_{\alpha}$\,=\,(8.60-8.80) MeV (in the following also denoted as 'high energy component') (fig. 19g).
Mean $\alpha$ energy values and intensities are
{\it{E$_{\alpha}$}}\,=\,8.46$\pm$0.04 MeV ({\it{i$_{rel}$}}\,=\,0.70$\pm$0.05) and {\it{E$_{\alpha}$}}\,=\,8.68$\pm$0.03 MeV
(i$_{rel}$\,=\,0.30$\pm$0.05).
In \cite{Haba14} only
one common half-life of {\it{T$_{1/2}$}}\,=\,33.8$^{+4.4}_{-3.5}$ s is given for both groups. A re-analysis of the data, however, indicates different
half-lives: {\it{T$_{1/2}$}}\,=\,39$^{+6}_{-5}$ s for {\it{E$_{\alpha}$}}\,=\,(8.40\,-\,8.55) MeV and
{\it{T$_{1/2}$}}\,=\,24$^{+6}_{-4}$ s for {\it{E$_{\alpha}$}}\,=\,(8.60\,-\,8.80) MeV. A similar behavior is reported by Dressler et al. \cite{Dress99}:
two events at {\it{E$_{\alpha}$}}\,=\,(8.40\,-\,8.55) MeV and one event at {\it{E$_{\alpha}$}}\,=\,(8.60\,-\,8.80) MeV (see fig. 19f).
Qin et al. \cite{Qin06} observed three events at E$_{\alpha}$\,=\,(8.40\,-\,8.55) MeV and one event at E$_{\alpha}$\,=\,8.604 MeV, outside
the bulk of the high energy group reported in \cite{Haba14} (see fig. 19e). A similar behavior is seen for the double correlations
($^{262}$Db\,-\,$^{258}$Lr), with missing $^{266}$Bh from the reaction $^{23}$Na + $^{248}$Cm measured by Morita et al. \cite{Morita15}
(see fig. 19d). Three of four events are located in the range
of the low energy component, while for the triple correlations all four events are in the high energy group (see fig. 19c).
This behavior seems somewhat strange as there is no physical reason why the
$\alpha$ decay energies of $^{262}$Db should be different for the cases where the preceding $^{266}$Bh $\alpha$ decay is recorded
or not recorded. It rather could mean that the triple ($^{266}$Bh $\rightarrow$ $^{262}$Db $\rightarrow$ $^{258}$Lr) and the
double correlations ($^{262}$Db $\rightarrow$ $^{258}$Lr) of \cite{Morita15} do not represent the same activities.
The $\alpha$ decay energy of the one event observed by Wilk et al. \cite{Wilk00} belongs to the low energy group (see fig. 19b), while the one
event from the decay chain attributed to start from $^{278}$113 does not really fit either of the groups. The energy is definitely
lower than the mean value of the high energy group; an agreement with that group can only be postulated considering the
large uncertainty ($\pm$60 keV) of its energy value (see fig. 19a).\\
Half-lives are {\it{T$_{1/2}$}}\,=\,44$^{+60}_{-16}$ s for the {\it{E$_{\alpha}$}}\,=\,(8.40\,-\,8.55) MeV component in \cite{Morita15},
T$_{1/2}$\,=\,16$^{+7}_{-4}$ s for the {\it{E$_{\alpha}$}}\,=\,(8.55\,-\,8.80) MeV in agreement with the values of Haba et al. \cite{Haba14},
and {\it{T$_{1/2}$}}\,=\,52$^{+21}_{-12}$ s for the SF activity, which is rather in agreement with that of the low energy component. \\
To summarize: The assignment of the event $\alpha_{5}$ in chain 3 in \cite{Morita15} to $^{262}$Db is not unambiguous on the basis of its
energy; in addition its 'lifetime' {\it{$\tau$}}\,=\,t$_{\alpha5}$-t$_{\alpha4}$\,=\,126 s is about five times the half-life
of the high energy component of $^{262}$Db observed in \cite{Haba14}. One should keep in mind that the probability to observe a decay
at times longer than five half-lives is {\it{p}}\,$<$\,0.03.\\
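This bound follows directly from the exponential decay law: the probability that a nucleus survives beyond $n$ half-lives is $p = 2^{-n}$, giving $2^{-5} \approx 0.031$ for $n = 5$. A minimal check:

```python
import math

def survival_probability(n_half_lives):
    """Probability that an exponentially decaying nucleus decays
    later than n half-lives: p = exp(-n * ln 2) = 2**(-n)."""
    return math.exp(-n_half_lives * math.log(2.0))

p = survival_probability(5)
print(f"p = {p:.3f}")
```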
\begin{table}
\begin{center}
\begin{tabular}{l l l l }
\hline
ref. \cite{Eskola73,Akovali01} & & ref. \cite{Bemis76,Akovali01} & \\
\hline
E$_{\alpha}$/MeV & i$_{\alpha}$ & E$_{\alpha}$/MeV & i$_{\alpha}$ \\
8.590$\pm$0.02 & 0.30$\pm$0.04 & 8.540$\pm$0.02 & 0.10$\pm$0.05 \\
8.620$\pm$0.02 & 0.47$\pm$0.03 & 8.589$\pm$0.01 & 0.45$\pm$0.07 \\
8.650$\pm$0.02 & 0.16$\pm$0.03 & 8.614$\pm$0.01 & 0.35$\pm$0.05 \\
8.680$\pm$0.02 & 0.07$\pm$0.04 & 8.648$\pm$0.01 & 0.10$\pm$0.02 \\
\hline
\end{tabular}
\end{center}
\caption{ Alpha decay energies reported for $^{258}$Lr by Eskola et al. \cite{Eskola73} and by
Bemis et al. \cite{Bemis76}.} \label{tab7}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{l l l l }
\hline
Reference & Isotope & analysis mode & T$_{1/2}$ / s \\
\hline
\cite{Morita15} & $^{266}$Bh & decay chain $^{278}$Nh & 2.2$^{+2.9}_{-0.8}$ \\
\cite{Qin06} & $^{266}$Bh & & 0.66$^{+0.59}_{-0.26}$ \\
\cite{Haba20} & $^{266}$Bh & correlated to SF & 12.8$^{+5.2}_{-2.9}$ \\
\cite{Haba20} & $^{266}$Bh & correlated to $\alpha$ decay & 7.0$^{+3.0}_{-1.6}$ \\
\cite{Haba20} & $^{266}$Bh & all events & 10.0$^{+2.6}_{-1.7}$ \\
\hline
\cite{Fire96} & $^{262}$Db & & 34$\pm$4 \\
\cite{Morita15} & $^{262}$Db & E\,=\,(8.38-8.52) MeV & 44$^{+60}_{-16}$ \\
\cite{Morita15} & $^{262}$Db & E\,=\,(8.55-8.80) MeV & 16$^{+7}_{-4}$ \\
\cite{Morita15} & $^{262}$Db & corr. $^{266}$Bh - SF & 52$^{+21}_{-12}$ \\
\cite{Haba14} & $^{262}$Db & E\,=\,(8.38-8.52) MeV & 39$^{+6}_{-5}$ \\
\cite{Haba14} & $^{262}$Db & E\,=\,(8.55-8.80) MeV & 24$^{+6}_{-4}$ \\
\cite{Qin06} & $^{262}$Db & & 26$^{+26}_{-9}$ \\
\cite{Dress99} & $^{262}$Db & & 26$^{+26}_{-9}$ \\
\cite{Haba20} & $^{262}$Db & corr. $^{266}$Bh - SF & 39$^{+15}_{-8}$\\
\cite{Haba20} & $^{262}$Db & E\,=\,(8.38-8.52) MeV & 32$^{+22}_{-9}$ \\
\cite{Haba20} & $^{262}$Db & E\,=\,(8.55-8.80) MeV & 6.7$^{+16.2}_{-2.6}$ \\
\hline
\cite{Fire96} & $^{258}$Lr & & 3.9$^{+4}_{-3}$ \\
\cite{Morita15} & $^{258}$Lr & triple corr. $^{266}$Bh - $^{262}$Db - $^{258}$Lr & 4.7$^{+4.7}_{-1.6}$ \\
\cite{Morita15} & $^{258}$Lr & double corr. $^{262}$Db - $^{258}$Lr & 3.3$^{+3.3}_{-1.1}$ \\
\cite{Morita15} & $^{258}$Lr & all events & 4.0$^{+2.2}_{-1.1}$ \\
\cite{Haba14} & $^{258}$Lr & corr. $^{262}$Db, E\,=\,(8.38-8.52) MeV & 3.5$^{+0.6}_{-0.4}$ \\
\cite{Haba14} & $^{258}$Lr & corr. $^{262}$Db, E\,=\,(8.55-8.80) MeV & 4.1$^{+1.1}_{-0.7}$ \\
\cite{Haba14} & $^{258}$Lr & all events & 3.5$^{+0.5}_{-0.4}$ \\
\cite{Dress99} & $^{258}$Lr & & 3.1$^{+3.1}_{-1.0}$ \\
\cite{Haba20} & $^{258}$Lr & all events & 3.6$^{+1.4}_{-0.8}$ \\
\hline
\end{tabular}
\end{center}
\caption{Comparison of half-lives of $^{266}$Bh, $^{262}$Db, and $^{258}$Lr, published or analysed in this work from
published data.} \label{tab8}
\end{table}
Observation of $^{258}$Lr was first reported by Eskola et al. \cite{Eskola73} and later by Bemis et al. \cite{Bemis76}. The reported
$\alpha$ energies and intensities slightly disagree \cite{Akovali01}. The data are given in table 7. The energies given in \cite{Bemis76} are 30-50 keV lower than those reported in \cite{Eskola73}.
More recent data were obtained from decay studies of $^{266}$Bh, $^{262}$Db \cite{Morita15,Wilk00,Morita09,Qin06,Dress99,Haba14}.
The results are compared in fig. 20.\\
The quality of the data is lower than that for $^{262}$Db, but not less confusing. Haba et al. \cite{Haba14} obtained a broad energy distribution
in the range {\it{E$_{\alpha}$}}\,=\,(8.50\,-\,8.75) MeV with the bulk at {\it{E$_{\alpha}$}}\,=\,(8.60\,-\,8.65) MeV, having a mean value
{\it{E$_{\alpha}$}}\,=\,8.62$\pm$0.02 MeV (see fig. 20g), for which a half-life of
{\it{T$_{1/2}$}}\,=\,3.54$^{+0.46}_{-0.36}$ s is given. Dressler et al. \cite{Dress99} observed all events at {\it{E$_{\alpha}$}}\,$\le$\,8.60 MeV
(see fig. 20f), but obtained a similar half-life of {\it{T$_{1/2}$}}\,=\,3.10$^{+3.1}_{-1.0}$ s. One event each within the energy range of the
Haba data was observed by Qin et al. \cite{Qin06} (fig. 20e) and Wilk et al. \cite{Wilk00} (fig. 20b).
In contrast to the energies of $^{266}$Bh and $^{262}$Db, the $\alpha$ energies for $^{258}$Lr from the $^{266}$Bh decay study of
Morita et al. \cite{Morita15} are in good agreement for the triple (fig. 20c) and double (fig. 20d) correlations: a bulk of five
events at a mean energy of E$_{\alpha}$\,=\,8.70$\pm$0.01 MeV, two further events at a mean energy E$_{\alpha}$\,=\,8.59$\pm$0.02 MeV,
and a single one at {\it{E$_{\alpha}$}}\,=\,8.80 MeV; half-lives are {\it{T$_{1/2}$}}\,=\,4.7$^{+4.7}_{-1.6}$ s for the events from the triple correlations and
{\it{T$_{1/2}$}}\,=\,3.3$^{+3.3}_{-1.1}$ s for the events from the double correlations, in agreement within the error bars.
They may be merged to a single half-life of {\it{T$_{1/2}$}}\,=\,4.0$^{+2.2}_{-1.0}$ s.
The one decay event from chain 3 interpreted to start from $^{278}$113 \cite{Morita15}, with {\it{E$_{\alpha}$}}\,=\,8.66$\pm$0.06 MeV and $\tau$\,=\,t$_{\alpha6}$-t$_{\alpha5}$\,=\,3.78 s, fits fairly well into the decay properties reported for $^{258}$Lr.\\
The dilemma is evident as seen from fig. 21 where all triple correlations $^{266}$Bh $^{\alpha}_{\rightarrow}$ $^{262}$Db $^{\alpha}_{\rightarrow}$
$^{258}$Lr $^{\alpha}_{\rightarrow}$ reported so far \cite{Morita15,Wilk00,Qin06,Haba20} are shown. None of the fourteen chains agrees with any other
one. This feature may indicate the complicated $\alpha$ decay pattern of these isotopes, but it makes the assignment to the same isotope
speculative. In other words: the 'subchain' $^{266}$Bh $^{\alpha}_{\rightarrow}$ $^{262}$Db $^{\alpha}_{\rightarrow}$
$^{258}$Lr $^{\alpha}_{\rightarrow}$ of the decay chain interpreted to start from $^{278}$113 does not agree with any other so far
observed $\alpha$ decay chain
interpreted to start from $^{266}$Bh. The essential item, however, is that this triple correlation was regarded as the key point
for the first identification of element 113, to approve the discovery of this element and give credit to the discoverers.
But this decision rests on weak probability considerations rather than on firm experimental facts.
The only solid pillar is the agreement with the decay properties reported for $^{258}$Lr, which might be regarded
as rather weak. In other words, the assignment of the three decay chains to the decay of $^{278}$Nh is probable, but not firm.\\
It should be recalled that in the case of element 111 the JWP report from 2001 stated: 'The results of this study are definitely of high quality but there is insufficient internal redundancy to warrant certitude at this stage. Confirmation by further results is needed to assign priority of discovery to this collaboration' \cite{Karol01}.
So it seems strange that in an evidently similar situation, as here, such concerns were not expressed.\\
A new decay study of $^{266}$Bh was reported recently by Haba et al. \cite{Haba20} using the same production reaction as in \cite{Morita15}.
Alpha decays were observed correlated to fission events, assigned to the decay of $^{262}$Db and $\alpha$ decay chains $^{266}$Bh\,$^{\alpha}_{\rightarrow}$\,
$^{262}$Db\,$^{\alpha}_{\rightarrow}$ or $^{266}$Bh\,$^{\alpha}_{\rightarrow}$\,$^{262}$Db\,$^{\alpha}_{\rightarrow}$\,$^{258}$Lr\,$^{\alpha}_{\rightarrow}$.
The $\alpha$ spectrum of decays followed by fission is shown in fig. 18g, that of events followed by $\alpha$ decays of $^{262}$Db in fig. 18h. Evidently
in correlation to fission events a concentration of events ('peak') is observed at E$_{\alpha}$\,=\,8.85 MeV, not observed in \cite{Morita15}, while in the range
E$_{\alpha}$\,=\,8.9-9.0 MeV, where in \cite{Morita09} a peak-like structure was observed, Haba et al. registered only a broad distribution. Likewise, only
a broad distribution without indication of a peak-like concentration in the range E$_{\alpha}$\,=\,8.8-9.4 MeV is observed in correlation to $\alpha$ decays.
However, two $\alpha$ decays at E\,$\approx$\,9.4 MeV were now reported, close to the $\alpha$ energy of $^{266}$Bh in the $^{278}$Nh decay chain 3 \cite{Morita15}.
A remarkable result of Haba et al. \cite{Haba20} is the half-life of $^{266}$Bh; values of T$_{1/2}$\,=\,12.8$^{+5.2}_{-2.9}$ s are obtained for events
correlated to SF, and
T$_{1/2}$\,=\,7.0$^{+3.0}_{-1.6}$ s for events correlated to $\alpha$ decays. This finding suggests a common half-life of
T$_{1/2}$\,=\,10.0$^{+2.6}_{-1.7}$ s as given in \cite{Haba20} despite the discrepancies in the $\alpha$ energies. That value is, however, significantly larger
than the results from previous studies \cite{Morita15,Qin06}.\\
For $^{262}$Db in \cite{Haba20} a similar $\alpha$ decay energy distribution is observed as in \cite{Haba14}, as seen in figs. 19g and 19h,
but again not in agreement with the results from \cite{Morita09} (figs. 19c and 19d) where the low energy component E\,=\,(8.38-8.52) MeV is practically missing.
Half-lives of SF events assigned to $^{262}$Db are in agreement with those of $\alpha$ events at E\,=\,(8.38-8.52) MeV (see table 8), but again for the events
at E\,=\,(8.55-8.80) MeV a shorter half-life (T$_{1/2}$\,=\,6.7$^{+16.2}_{-2.6}$ s) is indicated. Interestingly all events at E\,=\,(8.55-8.80) MeV are correlated to
$^{266}$Bh $\alpha$ decays of E\,$>$\,9 MeV. These are the events no. 13, 28 in \cite{Haba20} and no. 1, 2, 3 in \cite{Morita09}. The two events of $^{266}$Bh
in \cite{Haba20} have
extremely low correlation times of 0.92 s and 0.33 s; the $^{262}$Db events have a half-life T$_{1/2}$\,=\,13.7$^{+11.1}_{-4.2}$ s.
Despite the above mentioned differences the same feature is observed in \cite{Morita15}. The data
are summarized in table 9. A half-life T$_{1/2}$\,=\,22.5$^{+4.9}_{-3.4}$ s is obtained, clearly lower than that of the SF events and the $\alpha$ events at
E\,=\,(8.38-8.55) MeV, corroborating the possible existence of two long-lived states in $^{262}$Db. Although data are scarce, two other features are
indicated: a) all $^{266}$Bh energies are above the 'bulk' of the $\alpha$ energy distribution of $^{266}$Bh and b) one obtains a half-life
of T$_{1/2}$\,=\,3.4$^{+4.7}_{-1.3}$ s, lower than the value of 10 s extracted from all events. This can be regarded as a hint for the existence of
two long-lived states in $^{266}$Bh decaying by $\alpha$ emission, resulting in two essential decay branches $^{266}$Bh(1) $\rightarrow$ $^{262}$Db(1)
and $^{266}$Bh(2) $\rightarrow$ $^{262}$Db(2).\\
The $\alpha$ spectrum for $^{258}$Lr measured in \cite{Haba20} is shown in fig. 20h. Essentially it is in-line with the one obtained in \cite{Haba14}.\\
To summarize: the new decay study of $^{266}$Bh delivers results not really in agreement with those from previous studies concerning the decay
energies and delivers a considerably longer half-life for that isotope. So the results do not remove the concerns about the decay chains interpreted to start
from $^{278}$Nh. But it delivers some interesting features: the $\alpha$ decays in the range (8.38-8.55) MeV and the SF events following $\alpha$ decay of $^{266}$Bh
seemingly are due to the decay of the same state in $^{262}$Db; the fission activity, however, may be due to $^{262}$Rf produced by EC of $^{262}$Db.
The $\alpha$ decays of $^{262}$Db of E\,=\,(8.55-8.80) MeV are possibly from the decay of a second long-lived level. There is also strong evidence that this
level is populated essentially by $\alpha$ decay of a long-lived level in $^{266}$Bh, different from the one populating the level in $^{262}$Db decaying by
$\alpha$ particles in the range (8.38-8.55) MeV. Further studies are required to clarify this undoubtedly interesting feature.\\
Discussing the items above one has of course to emphasize the different experimental techniques used, which may influence the measured energies.
An important feature is the different detector resolutions, which determine the widths of the energy distributions.
So comparison of energies might be somewhat 'dangerous'. Another item is energy summing between $\alpha$ particles and CE. In the experiments of Wilk et al.
\cite{Wilk00} and Morita et al. \cite{Morita15}, the reaction products were implanted into the detector after in-flight separation.
Qin et al. \cite{Qin06}, Dressler et al. \cite{Dress99} and Haba et al. \cite{Haba14,Haba20} collected the reaction products on the detector surface or on a thin foil
between two detectors. The latter procedure reduces the efficiency for energy summing considerably. This could be the reason for the 'shift' of
the small 'bulk' of the $^{266}$Bh $\alpha$ energy distribution from $\approx$8.85 MeV \cite{Haba20} in fig. 18g to $\approx$8.95 MeV \cite{Morita15}
in fig. 18e. This interpretation might be speculative, but it clearly shows that such effects render the consistency of data more difficult to assess if different
experimental techniques are applied.
\begin{table}
\begin{center}
\begin{tabular}{l l l l l }
\hline
\hline
Reference & E$_{\alpha}$($^{266}$Bh) / MeV & $\Delta$t / s & E$_{\alpha}$($^{262}$Db) / MeV & $\Delta$t / s \\
\hline
\cite{Morita15} & 9.05 & & 8.71 & 54.91 \\
\cite{Morita15} & 9.12 & & 8.74 & 13.76 \\
\cite{Morita15} & 9.20 & & 8.67 & 13.71 \\
\hline
\cite{Haba20} & 9.12 & 0.92 & 8.70 & 10.29 \\
\cite{Haba20} & 9.04 & 0.33 & 8.63 & 9.07 \\
\hline
\end{tabular}
\end{center}
\caption{$\alpha$-$\alpha$ correlations $^{266}$Bh (E\,$>$\,9.0 MeV) - $^{262}$Db (E\,=\,(8.60-8.75) MeV)
from triple correlations $^{266}$Bh\,$^{\alpha}_{\rightarrow}$\,$^{262}$Db\,$^{\alpha}_{\rightarrow}$\,$^{258}$Lr\,$^{\alpha}_{\rightarrow}$.}
\label{tab9}
\end{table}
\section{\bf{7. (Exemplified) Cross-checks with Nuclear Structure Theory}}
In the following some discussion on selected decay and nuclear structure properties will be presented.
\subsection{\bf{ 7.1 Alpha-decay energies / Q-alpha values; even Z, odd-A, odd-odd nuclei}}
Alpha-decay energies provide some basic information about nuclear stability and properties. Discussing the properties
one strictly has to distinguish two cases, a) $\alpha$ decay of even-even nuclei, and b) $\alpha$ decay of isotopes
with odd proton and/or odd neutron numbers.\\
In even-even nuclei $\alpha$ transitions occur with highest intensities between the {\it{I$^{\pi}$}} = 0$^{+}$ ground - states of mother and daughter isotopes. Still, in the region of strongly deformed heaviest nuclei ({\it{Z}} $\ge$ 90) notable population with
relative intensities of (10-30 $\%$) is observed for transitions into the {\it{I$^{\pi}$}} = 2$^{+}$ level of the ground-state rotational band \cite{Fire96}, while band members of higher spins (4$^{+}$, 6$^{+}$ {\it{etc.}}) are populated only weakly with
relative intensities of $<$1$\%$. Under these circumstances the $\alpha$ line of highest intensity represents the Q-value of the transition and is thus a measure for the mass difference of the mother and the daughter nucleus. It should be kept in mind, however,
that only in cases where the mass of the daughter nucleus is known can the Q-value be used to calculate the mass of the mother nucleus, and only in those cases can $\alpha$-decay energies be used to 'directly' test nuclear mass predictions. Nevertheless, already the mass differences, i.e. the Q-values, can be used for qualitative assessments of those models. Particularly, as the crossing of nucleon shells is accompanied by a strong local decrease of the {\it{Q$_{\alpha}$}} - values, the existence, and to some extent also the strength, of such shells can be verified by analyzing systematics of {\it{Q$_{\alpha}$}} - values. That feature is displayed in fig. 22, where experimental {\it{Q$_{\alpha}$}} values for the known isotopes of even-Z elements {\it{Z}} $\ge$ 104 are compared with results of two (widely used) mass predictions based on the macroscopic - microscopic approach, the one reported by R. Smolanczuk and A. Sobiczewski \cite{Smolan95} (fig. 22a), and the one reported by P. M\"oller et al. \cite{Moller95} (fig. 22b). The neutron shells
at {\it{N}} = 152 and {\it{N}} = 162, indicated by the black dashed lines are experimentally and theoretically verified by the local minima in the {\it{Q$_{\alpha}$}} - values. But significant differences in the theoretical predictions are indicated, those of \cite{Smolan95} reproduce the experimental data in general quite fairly, while the agreement of those from \cite{Moller95} is
significantly worse.\\
\begin{figure}
\resizebox{0.9\textwidth}{!}{
\includegraphics{fig22.eps}
}
\caption{Comparison of experimental {\it{Q$_{\alpha}$}} - values of even-Z elements {\it{Z}} $\ge$ 104 with theoretical predictions of R. Smolanczuk and A. Sobiczewski
\cite{Smolan95} (fig. 22a) and P. M\"oller et al. \cite{Moller95} (fig. 22b). In case of isotopes with odd neutron numbers, the
{\it{Q$_{\alpha}$}} - value was calculated from the highest reported decay energy.}
\label{fig:22}
\end{figure}
\begin{figure}
\resizebox{0.9\textwidth}{!}{
\includegraphics{fig23.eps}
}
\caption{Hindrance factors for decays into {\it{I$^{\pi}$}} = 2$^{+}$ and {\it{I$^{\pi}$}} = 4$^{+}$ daughter levels of
even-even actinide isotopes {\it{Z}} $\ge$ 90. Alpha-decay data are taken from \cite{Fire96}.}
\label{fig:23}
\end{figure}
\begin{figure}
\resizebox{0.9\textwidth}{!}{
\includegraphics{fig24.eps}
}
\caption{ Hindrance factors for decays into {\it{I$^{\pi}$}} = 2$^{+}$ daughter levels of
even-even actinide isotopes {\it{Z}} $\ge$ 90 as function of the quadrupole deformation parameter $\beta_{2}$.
The line is to guide the eye. }
\label{fig:24}
\end{figure}
\begin{figure}
\vspace{-0.5cm}
\resizebox{0.75\textwidth}{!}{
\includegraphics{fig25.eps}
}
\caption{Hindrance factors for decays into {\it{I$^{\pi}$}} = 4$^{+}$ daughter levels of
even-even actinide isotopes {\it{Z}} $\ge$ 90 as function of the hexadecapole deformation parameter $\beta_{4}$.
The line is to guide the eye. }
\label{fig:25}
\end{figure}
\begin{figure}
\vspace{-2cm}
\resizebox{1.0\textwidth}{!}{
\includegraphics{fig26.eps}
}
\caption{Comparison of experimental and theoretical \cite{Denis09,Hassan18} $\alpha$ transition intensities into
I$^{\pi}$\,=\,0$^{+}$, 2$^{+}$, and 4$^{+}$ daughter levels for californium (a-c) and fermium isotopes (d-f).
Black lines and diamonds represent experimental values, red dashed lines and squares represent the calculations of \cite{Denis09},
blue dashed - dotted lines and circles represent the calculations of \cite{Hassan18}.}
\label{fig:26}
\end{figure}
Alpha decays between states of different spins are hindered. Quantitatively the hindrance can be expressed by a hindrance factor {\it{HF}}, defined as HF = T$_{\alpha}$(exp) / T$_{\alpha}$(theo), where {\it{T$_{\alpha}$(exp)}} denotes the experimental partial $\alpha$-decay half-life and {\it{T$_{\alpha}$(theo)}} the theoretical one. To calculate the latter a couple of (mostly empirical) relations are available. In the following we will use the one proposed by D.N. Poenaru \cite{PoI80} with the parameter modification suggested by Rurarz \cite{Rur83}. This formula has been proven to reproduce experimental partial $\alpha$-decay half-lives of even-even nuclei in the region of superheavy nuclei within a factor of two \cite{Hess16a}. \\
A semi-empirical relation for the hindrance due to angular momentum change was given in 1959 by J.O. Rasmussen \cite{Rasm59}.
\begin{figure}
\vspace{0cm}
\resizebox{0.8\textwidth}{!}{
\includegraphics{fig27.eps}
}
\caption{(a) Experimental ground-state mass excesses of {\it{N-Z}} = 49 and (b) comparisons with (previously) evaluated values \cite{Audi03}
and (c) theoretical predictions by P. M\"oller et al. \cite{Moller95} (squares) or Liran et al. \cite{Liran02} (circles).}
\label{fig:27}
\end{figure}
The reduction of the transmission probability P$_{L}$ through the barrier relative to P$_{0}$ (no angular momentum change), which can be equated with the inverse hindrance factor,
was given as \\
P$_{L}$/P$_{0}$\,=\,exp[-2.027\,L(L+1)Z$^{-1/2}$A$^{-1/6}$] \\
with {\it{L}} denoting the change of the angular momentum, and {\it{Z}} and {\it{A}} the atomic and mass numbers of the daughter nucleus.\\
In the range of actinide nuclei where
data are available ({\it{Z}}\,$\approx$\,90\,-\,102) one expects hindrance factors {\it{HF}}\,$\approx$\,1.6\,-\,1.7 for {\it{$\Delta$L}}\,=\,2
and {\it{HF}}\,$\approx$\,(5\,-\,6) for {\it{$\Delta$L}}\,=\,4, with a slight decrease at increasing {\it{A}} and {\it{Z}}.
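These expected values can be checked numerically; below is a minimal sketch of the Rasmussen relation (the constant 2.027 of the original 1959 relation is used, and the daughter values Z\,=\,94, A\,=\,240 are chosen purely for illustration):

```python
import math

def rasmussen_hf(L, Z, A):
    # Inverse of P_L/P_0 = exp[-2.027 L(L+1) Z^(-1/2) A^(-1/6)],
    # with Z and A those of the daughter nucleus.
    return math.exp(2.027 * L * (L + 1) * Z**-0.5 * A**(-1.0 / 6.0))

# Illustrative actinide daughter (Z = 94, A = 240):
hf2 = rasmussen_hf(2, 94, 240)  # Delta-L = 2: ~1.65, cf. HF ~ 1.6 - 1.7
hf4 = rasmussen_hf(4, 94, 240)  # Delta-L = 4: ~5.35, cf. HF ~ 5 - 6
```

The slight decrease at increasing {\it{A}} and {\it{Z}} quoted above follows from the Z$^{-1/2}$A$^{-1/6}$ factor in the exponent.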
The experimental hindrance factors for $\alpha$ decay into the {\it{I$^{\pi}$}} = 2$^{+}$ and {\it{I$^{\pi}$}} = 4$^{+}$ levels for the known cases in the actinide region {\it{Z}} $\ge$ 90 are shown in fig. 23.
They exhibit a completely different behavior: for the {\it{$\Delta$L}}\,=\,2 transitions the experimental hindrance factors are comparable, but increase at increasing {\it{A}} and {\it{Z}}. For the {\it{$\Delta$L}}\,=\,4 transitions the hindrance factors are considerably larger and a maximum is indicated for curium isotopes in the mass range {\it{A}}\,=\,(240\,-\,246).
Interestingly this behavior can be related to the ground-state deformation as shown in fig. 24 and 25. In fig. 24 the hindrance factors for the {\it{I$^{\pi}$}} = 0$^{+}$ $\rightarrow$ {\it{I$^{\pi}$}} = 2$^{+}$ transitions are plotted as function of the quadrupole deformation parameter $\beta_{2}$ (taken from \cite{Moller95}). Evidently a strong increase of the hindrance factor at increasing quadrupole deformation is observed. In fig. 25 the hindrance factors for the {\it{I$^{\pi}$}} = 0$^{+}$ $\rightarrow$ {\it{I$^{\pi}$}} = 4$^{+}$ transitions are plotted as a function of the hexadecapole deformation parameter $\beta_{4}$ (taken from \cite{Moller95}). Here, a maximum at a deformation parameter $\beta_{4}$ $\approx$ 0.08 is indicated.
This suggests a strong dependence of the hindrance factor on nuclear deformation and
measuring transitions into rotational members of the ground-state band may already deliver valuable information about the ground-state deformation of the considered nuclei.\\
Some attempts to calculate the transition probability into the ground - state rotational band members have been undertaken by
V. Yu. Denisov and A.A. Khudenko \cite{Denis09} as well as by H. Hassanabadi and S.S. Hosseini \cite{Hassan18}. \\
In both papers the $\alpha$ half-lives were calculated in the 'standard' way as\\
T$_{1/2}$\,=\,ln2/$\nu$P \\
with $\nu$ denoting the frequency of assaults on the barrier, and {\it{P}} being the penetration probability through the potential barrier using
the semiclassical WKB method.\\
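In this standard picture the half-life follows from just two quantities; a minimal sketch, with purely illustrative values for $\nu$ and {\it{P}} (neither is taken from the cited calculations):

```python
import math

# T_1/2 = ln2 / (nu * P): assault frequency times barrier penetration
# probability gives the decay constant. Both numbers below are assumed.
nu = 1.0e21   # assaults on the barrier per second (typical order of magnitude)
P = 1.0e-22   # WKB barrier penetration probability (illustrative)

decay_constant = nu * P                    # 0.1 per second
half_life = math.log(2) / decay_constant   # seconds
```

With these placeholder numbers the half-life comes out near 7 s; in the cited works $\nu$ and {\it{P}} are of course derived from the respective potentials.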
In \cite{Hassan18} the $\alpha$ - nucleus potential was parameterized as a polynomial of third order for r$\le$C$_{t}$ and as the sum of Coulomb {\it{V$_{C}$}},
nuclear {\it{V$_{N}$}} and centrifugal {\it{V$_{l}$}}\,=\,$\hbar^{2}$l(l+1)/(2$\mu$r$^{2}$) potentials beyond. For {\it{V$_{C}$}} and {\it{V$_{l}$}} 'standard expressions' were used, for {\it{V$_{N}$}}
the 'proximity potential' of Blocki et al. \cite{Blocki77}. {\it{C$_{t}$}} is the touching configuration of daughter nucleus (d) and $\alpha$ particle ($\alpha$),
{\it{C$_{t}$}}\,=\,{\it{C$_{d}$}}\,+\,{\it{C$_{\alpha}$}}, with {\it{C$_{d}$}} and {\it{C$_{\alpha}$}} denoting the Suessman central radii (see \cite{Hassan18} for details).
V.Yu. Denisov and A.A. Khudenko \cite{Denis09} use the 'unified model for $\alpha$ decay and $\alpha$ capture' (UMADAC). Their potential represents the sum
of a 'standard' centrifugal potential {\it{V$_{l}$}} (see above), a Coulomb potential {\it{V$_{C}$}} including quadrupole ($\beta_{2}$) and hexadecapole ($\beta_{4}$)
deformations, and a nuclear potential {\it{V$_{N}$}} of Woods-Saxon type (see \cite{Denis09} for further details). \\
Their results for the californium and fermium isotopes are compared with the experimental values in fig. 26.
Obviously the calculations of Denisov and Khudenko do not reproduce the experimental data well for either
californium or fermium isotopes; the calculated
(relative) intensities for the 0$^{+}$ $\rightarrow$ 0$^{+}$ transitions (fig. 26c) are too low, and hence too high values
for the 0$^{+}$ $\rightarrow$ 2$^{+}$ transitions (fig. 26b) and the 0$^{+}$ $\rightarrow$ 4$^{+}$ transitions (fig. 26a) are obtained.
The latter are even roughly an order of magnitude higher than the experimental data for the respective transition.
Quite fair agreement with the experimental data is evident for the calculations of Hassanabadi and Hosseini
(blue lines and symbols).\\
In odd-mass nuclei the situation is completely different as the ground-states of mother and daughter nuclei usually differ in spin and
often also in parity. So ground-state to ground-state $\alpha$ decays are usually hindered. Hindrance factors significantly depend on the spin difference, as well as on a possible parity change and/or a spin flip. For odd-mass nuclei an empirical classification of the hindrance factors into five groups has been established (see {\it{e.g.}} \cite{SeaL90}). Hindrance factors {\it{HF}}\,$<$\,4 characterize transitions between the same Nilsson levels in mother and daughter nuclei and are denoted as 'favoured transitions'. Hindrance factors {\it{HF}}\,=\,(4\,-\,10) indicate a favourable overlap between the initial and final nuclear state, while values
{\it{HF}}\,=\,(10\,-\,100) point to an unfavourable overlap, but still parallel spin projections of the initial and final state.
Factors {\it{HF}}\,=\,(100\,-\,1000) indicate a parity change and still parallel spin projections, while {\it{HF}}\,$>$\,1000 mean a parity change and a spin flip.\\
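The five empirical groups amount to a simple lookup; a sketch (the handling of values exactly at the boundaries 4, 10, 100, 1000 is an assumption, since the text quotes open ranges):

```python
def hf_group(hf):
    """Empirical five-group classification of alpha-decay hindrance
    factors for odd-mass nuclei, with the boundaries quoted in the text."""
    if hf < 4:
        return "favoured: same Nilsson level in mother and daughter"
    if hf < 10:
        return "favourable overlap of initial and final state"
    if hf < 100:
        return "unfavourable overlap, parallel spin projections"
    if hf < 1000:
        return "parity change, parallel spin projections"
    return "parity change and spin flip"
```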
Thus hindrance factors already point to differences in the initial and final states, but on their own do not allow for spin and parity assignments. It is, however, known that in even-Z odd-mass nuclei nuclear structure and thus $\alpha$ decay patterns are similar along the isotone lines (see {\it{e.g.}} \cite{Asai15}), while in odd-Z odd-mass nuclei this feature is evident along the isotope lines (see {\it{e.g.}} \cite{Hess05}). So, in certain cases, based on empirical relationships tentative spin and parity
assignments can be established, as done {\it{e.g.}} in suggesting an $\alpha$ decay pattern for $^{255}$No by P. Eskola et al.
\cite{Eskola70}, which later was confirmed by $\alpha$ - $\gamma$ spectroscopy measurement \cite{Hess06,Asai11}.\\
Another feature in the case of odd-mass nuclei is that competition between structural hindrance and Q-value hindrance may lead to complex $\alpha$ - decay patterns. Nilsson levels identical to the ground-state of the mother nucleus may be excited states located at several hundred keV in the daughter nuclei, {\it{e.g.}},
in a recent decay study of $^{259}$Sg it was shown that the 11/2$^{-}$[725] Nilsson level assigned to the ground-state in this isotope, is located at {\it{E$^{*}$}} $\approx$ 600 keV in the daughter nucleus $^{255}$Rf \cite{AntH15}. Therefore the advantage of a low hindrance factor may be cancelled by a lower barrier transmission probability due to a significantly lower Q-value compared to the ground-state to ground-state transition. Consequently $\alpha$ transitions with moderate hindrance factors into lower lying levels may have similar or even higher intensities than the favored transition as it is the case in the above mentioned examples, $^{255}$No and $^{259}$Sg.\\
A drawback of many recent $\alpha$ decay studies of odd-mass nuclei in the transfermium region was the fact that the ground-state to ground-state transition could not be clearly identified and thus the 'total' {\it{Q$_{\alpha}$}} value could not be established.
Another difficulty in these studies was the existence of isomeric states in several nuclei, also decaying by $\alpha$ - emission
and having half-lives similar to that of the ground-state, as in the cases of $^{251}$No \cite{Hessb06} or $^{257}$Rf \cite{Hess97}, while in early studies ground-state decay and isomeric decay could not be disentangled. Enhanced experimental techniques, also applying $\alpha$ - $\gamma$ spectroscopy, have widely overcome that problem in the transfermium region. An illustrative example is the
{\it{N-Z}} = 49 - line, where based on the directly measured mass of $^{253}$No \cite{Dworschak10},
and decay data of $^{253}$No \cite{Hess12}, $^{257}$Rf, $^{261}$Sg \cite{Streich10}, $^{265}$Hs \cite{Hess09} and $^{269}$Ds \cite{Hof95}
experimental masses could be determined up to $^{269}$Ds and could serve for a test of theoretical predictions \cite{Moller95,Liran02} and empirical evaluations \cite{Audi03}, as shown in fig. 27.
The masses predicted by M\"oller et al. \cite{Moller95} agree with the experimental value within $\approx$0.5 MeV
up to {\it{Z}}\,=\,106, while towards {\it{Z}}\,=\,110 ($^{269}$Ds) deviations rapidly increase up to nearly 2 MeV.
A similar behavior was observed for the
even-even nuclei of the {\it{N-Z}}\,=\,50 line \cite{Hess16a}, which was interpreted as a possible signature for a lower
shell effect at {\it{N}}\,=\,162.
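The bookkeeping behind such mass determinations is a simple sum along the chain, $\Delta$(mother)\,=\,$\Delta$(daughter)\,+\,$\Delta$($^{4}$He)\,+\,Q$_{\alpha}$; a sketch with a placeholder anchor mass and placeholder Q$_{\alpha}$ values (the measured numbers of the {\it{N-Z}}\,=\,49 chain are deliberately not reproduced here):

```python
DELTA_HE4 = 2.4249  # MeV, mass excess of the alpha particle

def chain_mass_excesses(delta_anchor, q_alphas):
    """Mass excesses up a decay chain, starting from a directly measured
    anchor; q_alphas are ordered from the anchor towards heavier members."""
    deltas = [delta_anchor]
    for q in q_alphas:
        # each alpha decay links mother and daughter via its Q-value
        deltas.append(deltas[-1] + DELTA_HE4 + q)
    return deltas

# Placeholder anchor mass excess (76.0 MeV) and Q_alpha values (9.0, 9.5 MeV):
masses = chain_mass_excesses(76.0, [9.0, 9.5])
```

Each step propagates the experimental Q$_{\alpha}$ uncertainty, which is why deviations between experiment and prediction accumulate towards the top of the chain.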
\begin{figure}
\resizebox{0.93\textwidth}{!}{
\includegraphics{fig28.eps}
}
\caption{left side: Q$_{\alpha}$ - values and 2p - binding energies (S$_{2p}$) for even - even nuclei of N\,=\,124 (diamonds), 126 (squares),
128 (circles) around Z\,=\,82;
right side: Q$_{\alpha}$ - values and 2p - binding energies (S$_{2p}$) for even - even nuclei of N\,=\,150 (diamonds), 152 (squares),
154 (circles) around Z\,=\,100.
Data are taken from \cite{Wang16}.}
\label{fig:28}
\end{figure}
\subsection{\bf{7.2 Q$_{\alpha}$ values as signatures for nuclear shells}}
Historically evidence for nuclear shells was first found from the existence of specifically stable nuclei at certain
proton and neutron numbers ({\it{Z,N}} = 2, 8, 20, 28, 50, 82 and {\it{N}} = 126) which were denoted as 'magic'. Experimental signatures
were, e.g., strong kinks in the 2p- or 2n - binding energies at the magic numbers and, on the basis of enhanced nuclear decay
data, also local minima in the {\it{Q$_{\alpha}$}} values.
The existence of nuclear shells was theoretically explained by the nuclear shell model \cite{Goep48,Haxel49}, which showed
large energy gaps in the single particle levels at the 'magic' numbers; these gaps were equated with 'shell closures'.
This item was the basis for the prediction of 'superheavy' elements around {\it{Z}}\,=\,114 and {\it{N}}\,=\,184 when the nuclear shell model was extended in the region
of unknown nuclei far above {\it{Z}}\,=\,82 and {\it{N}}\,=\,126 \cite{Sobi66,Meld67}.
As the shell gap is related to a higher density of single particle levels below the Fermi level, compared to the nuclear average (expected e.g. from a Fermi gas model),
and a lower density above the Fermi level,
the large energy gaps at the magic numbers go hand in hand with large shell correction energies, leading to the
irregularities in the 2p-, 2n- separation energies and in the {\it{Q$_{\alpha}$}} values.\\
In between {\it{Z}}\,=\,82, {\it{N}}\,=\,126 and {\it{Z}}\,=\,114, {\it{N}}\,=\,184 a wide region of strongly deformed nuclei exists.
Calculations (see e.g. \cite{Chas77}) resulted in large shell gaps at {\it{N}}\,=\,152 and {\it{Z}}\,=\,100. Later theoretical studies
in addition showed a region of large shell correction energies between {\it{N}}\,=\,152 and {\it{N}}\,=\,184 \cite{Cwiok83,Moller86,Patyk91,Patyk91a},
the center of which is presently set at {\it{N}}\,=\,162 and {\it{Z}}\,=\,108.\\
While the (deformed) nuclear shell at N\,=\,152 is well established on the basis of the {\it{Q$_{\alpha}$}} - value as seen from fig. 22 and there is,
despite scarce data, strong evidence for a shell at {\it{N}}\,=\,162, the quest for a shell closure at {\it{Z}}\,=\,100 is still open.
It was pointed out by Greenlees et al. \cite{Green08} that their results on nuclear structure investigation of $^{250}$Fm
are in-line with a shell gap at {\it{Z}}\,=\,100, but 2p - separation energies and {\it{Q$_{\alpha}$}} - values do not support a shell closure.
The item is shown in fig. 28. On the left hand side {\it{Q$_{\alpha}$}} values and 2p - binding energies ({\it{S$_{2p}$}}) are plotted for
three isotone chains ({\it{N}}\,=\,124, 126, 128) around {\it{Z}}\,=\,82. In all three cases a strong increase in the {\it{Q$_{\alpha}$}} values and
a strong decrease in the {\it{S$_{2p}$}} values is observed from {\it{Z}}\,=\,82 to {\it{Z}}\,=\,84. On the right hand side {\it{Q$_{\alpha}$}} values
and {\it{S$_{2p}$}}
values are plotted around {\it{Z}}\,=\,100 for {\it{N}}\,=\,150, 152, and 154. Here a steady increase for both is observed from {\it{Z}}\,=\,94 to
{\it{Z}}\,=\,106, i.e. the data do not indicate a shell closure at {\it{Z}}\,=\,100. This means, even if a gap in the single particle levels
is confirmed in further experiments, this feature does not prove a proton shell (or a 'magic' number) at {\it{Z}}\,=\,100 as claimed recently \cite{AckT17}.
\subsection{\bf{7.3 Electron Capture decays}}
The analysis of $\alpha$ decay chains from SHN produced in reactions of $^{48}$Ca with actinide targets
has so far acted on the assumption that the chains consist of a sequence of $\alpha$
decays and are finally terminated by spontaneous fission \cite{Oga16}. The possibility that one of the chain members
could undergo EC - decay was not considered. Indeed, EC - decay of superheavy nuclei has been little investigated so
far. Mainly this is due to the technical difficulties of detecting EC - decay at very low production rates of the isotopes.
Consequently, only very recently EC decay has been investigated successfully in the transactinide region for the cases of
$^{257}$Rf \cite{Hess16} and $^{258}$Db \cite{Hess16a}. Two ways of identifying EC - decay turned out to be successful,
a) measuring delayed coincidences between K X-rays and $\alpha$ decay or spontaneous fission of the EC - daughter, and b)
measuring delayed coincidences between implanted nuclei and conversion electrons (CE) from decay of excited states
populated by the EC or delayed coincidences between CE and decays ($\alpha$ decay or spontaneous fission) of
the EC daughter. The latter cases, however, require population of excited levels decaying by internal conversion, which is not necessarily the case.\\
Evidence for EC occurring within the decay chains of SHN is given by the termination of the decay chains of odd-odd nuclei by spontaneous fission. Since spontaneous fission of odd-odd nuclei is strongly hindered, it can be assumed that it may not be the odd-odd nucleus that undergoes fission, but the even-even daughter nucleus, produced by EC decay \cite{OganU15}.\\
The situation is, however, quite complicated. To illustrate this, we compare in fig. 29 the experimental
(EC/$\beta^{+}$) half-lives of lawrencium and dubnium isotopes with recently calculated \cite{Karpov12} EC half-lives.
\begin{figure}
\vspace{-2.3cm}
\resizebox{0.6\textwidth}{!}{
\includegraphics{fig29.eps}
}
\caption{Comparison of experimental and calculated \cite{Karpov12} EC - halflives for lawrencium (upper figure)
and dubnium (lower figure) isotopes. For the cases $^{266,268,270}$Db it was assumed that spontaneous fission
originates from the even-even EC - daughter. Full squares - experimental halflives, open squares - $\beta^{+}$,CE - halflives,
circles - $\beta^{-}$ halflives.}
\label{fig:29}
\end{figure}
\begin{figure}
\vspace{-0.5cm}
\resizebox{0.8\textwidth}{!}{
\includegraphics{fig30.eps}
}
\caption{Comparison of experimental and calculated \cite{Karpov12} EC - halflives, $\alpha$ halflives \cite{PoI80}
of $^{266,268,270}$Db and theoretical SF halflives \cite{Smolan95a} of the EC daughters
$^{266,268,270}$Rf.}
\label{fig:30}
\end{figure}
In general the agreement between experimental and calculated values is better for the dubnium isotopes, specifically for
{\it{A}}\,$\le$\,262, than for the lawrencium isotopes. Evidently, however, the disagreement increases when approaching
the line of beta stability. The experimental EC halflives are up to several orders of magnitude higher than the
theoretical values, which may lead to the assumption that direct spontaneous fission of the odd-odd nuclei is indeed observed.
So there is a lot of room for speculation. In this context the difficulties in drawing 'empirical'
conclusions shall also be briefly discussed. So far, only in two cases, $^{260}$Md and $^{262}$Db, has observation
of spontaneous fission of an odd-odd isotope been reported. $^{262}$Db seems, however, a less certain case
(see discussion in \cite{Hess17}). In table 10 the 'fission halflives' of $^{268}$Db, $^{262}$Db and $^{260}$Md
are compared with the values obtained for their odd-mass neighbouring isotopes with {\it{A-1}} and {\it{Z-1}}, respectively.
\begin{table}
\begin{center}
\begin{tabular}{l l l}
\hline
\hline
Isotope & T$_{SF}$ /s & HF \\
\hline
$^{268}$Db & 93600 & \\
$^{267}$Db & 4320 & 21.7 \\
$^{267}$Rf & 4680 & 20.0 \\
\hline
$^{262}$Db & 103 & \\
$^{261}$Db & 18.4 & 5.6 \\
$^{261}$Rf & 32.5 & 3.2 \\
\hline
$^{260}$Md & 2.75$\times$10$^{6}$ & \\
$^{259}$Md & 5700 & 482 \\
$^{259}$Fm & 1.5 & 1.9$\times$10$^{6}$ \\
\hline
\end{tabular}
\end{center}
\caption{Comparison of 'fission' halflives of some selected odd-mass and odd-odd nuclei in the range
{\it{Z}}\,=\,101\,-\,105. The 'hindrance factor' HF here means the ratio of the fission halflives of the
odd-odd nucleus and its neighbouring odd-mass nuclei (see text).} \label{tab10}
\end{table}
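The hindrance factors listed in table 10 are plain ratios of the tabulated fission half-lives; the following sketch (half-lives in seconds copied from table 10, variable names ours) reproduces them:

```python
# Hindrance factors from table 10: HF = T_SF(odd-odd) / T_SF(neighbour).
# Half-lives in seconds, copied from the table; dictionary keys are ours.
t_sf = {
    "268Db": 93600.0, "267Db": 4320.0, "267Rf": 4680.0,
    "262Db": 103.0,   "261Db": 18.4,   "261Rf": 32.5,
    "260Md": 2.75e6,  "259Md": 5700.0,
}

def hf(odd_odd, neighbour):
    """Ratio of the odd-odd fission half-life to that of a neighbour."""
    return t_sf[odd_odd] / t_sf[neighbour]

print(round(hf("268Db", "267Db"), 1))  # 21.7
print(round(hf("268Db", "267Rf"), 1))  # 20.0
print(round(hf("262Db", "261Db"), 1))  # 5.6
print(round(hf("262Db", "261Rf"), 1))  # 3.2
print(round(hf("260Md", "259Md")))     # 482
```

The small rounding of the last tabulated HF value reflects the rounding of the input half-lives.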
\begin{figure}
\resizebox{0.8\textwidth}{!}{\includegraphics{fig31.eps}}
\caption{Comparison of experimental and calculated \cite{Karpov12} EC - halflives for Z\,=\,113, 115, 117 - isotopes.
The full squares denote the theoretical EC - halflives, the open squares the experimental halflives. The lines are to guide the eye.}
\label{fig:31}
\end{figure}
\begin{figure}
\resizebox{0.8\textwidth}{!}{\includegraphics{fig32.eps}}
\caption{Comparison of experimental and calculated \cite{Karpov12} EC - halflives for Z\,=\,107, 109, 111 - isotopes.
The full squares denote the theoretical EC - halflives, the open squares the experimental halflives. The lines are to guide the eye.}
\label{fig:32}
\end{figure}
\begin{figure}
\resizebox{0.8\textwidth}{!}{\includegraphics{fig33.eps}}
\caption{Comparison of experimental and calculated \cite{Karpov12} EC - halflives for Z\,=\,112, 114, 116 - isotopes.
The full squares denote the theoretical EC - halflives, the open squares the experimental halflives. The lines are to guide the eye.}
\label{fig:33}
\end{figure}
\begin{figure}
\resizebox{0.8\textwidth}{!}{\includegraphics{fig34.eps}}
\caption{Comparison of experimental and calculated \cite{Karpov12} EC - halflives for Z\,=\,106, 108, 110 - isotopes.
The full squares denote the theoretical EC - halflives, the open squares the experimental halflives. The lines are to guide the eye.
$^{271}$Sg is predicted as stable against beta - decay.}
\label{fig:34}
\end{figure}
Evidently the resulting hindrance factors {\it{HF}}\,=\,T$_{SF}$(Z,A)/T$_{SF}$(Z,A-1) or {\it{HF}}\,=\,T$_{SF}$(Z,A)/T$_{SF}$(Z-1,A-1) are
much lower for $^{268}$Db (21.7, 20.0) than for $^{260}$Md (482, 1.9$\times$10$^{6}$). These low values suggest that 'fission' of $^{268}$Db indeed originates
from the EC daughter $^{268}$Rf; the lower hindrance factors for $^{262}$Db, although the case is debated, put
some doubt on that interpretation. On the other hand it is quite common to take the ratio of the experimental
fission half-life and an 'unhindered' fission half-life, defined as the geometric mean of the neighbouring
even - even isotopes (see \cite{Hess17} for more detailed discussion), but to estimate reliable hindrance factors the
spontaneous fission half-lives of the surrounding even-even nuclei have to be known. In the region of $^{266,268,270}$Db
the fission half-life is known for only one even-even isotope, $^{266}$Sg, {\it{T$_{sf}$}}\,=\,58 s, while a theoretical value of
{\it{T$_{sf}$}}\,=\,0.35 s was reported \cite{Smolan95a}, which is lower by a factor of $\approx$165. Under these
circumstances it does not make much sense to use theoretical values to estimate hindrance factors for
spontaneous fission.\\
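The 'unhindered' reference half-life described above, i.e. the geometric mean of the SF half-lives of the two neighbouring even-even isotopes, can be sketched as follows; the numbers in the example are purely illustrative, not experimental data:

```python
import math

def unhindered_t_sf(t_ee_lower, t_ee_upper):
    """Geometric mean of the SF half-lives of the two neighbouring
    even-even isotopes (both values in the same time unit)."""
    return math.sqrt(t_ee_lower * t_ee_upper)

def hindrance_factor(t_exp, t_ee_lower, t_ee_upper):
    """Experimental SF half-life relative to the 'unhindered' reference."""
    return t_exp / unhindered_t_sf(t_ee_lower, t_ee_upper)

# purely illustrative numbers, not experimental data:
print(hindrance_factor(1.0e3, 0.1, 10.0))  # a 1000-fold hindrance
```

As stressed in the text, the result is only as reliable as the even-even half-lives entering the geometric mean.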
Therefore the final decision whether the terminating odd-odd nuclei may fission directly must be left to future experiments.
Techniques to identify EC decay have been presented at the beginning of this section. But it has to be kept in mind that the
identification mentioned was performed for isotopes with production cross sections of some nanobarn, while in
the considered SHE region production rates are roughly three orders of magnitude lower. So technical efforts to
increase production rates and detection efficiencies are required to perform successful experiments in that direction.
From the physics side such experiments may pose big problems, as seen from fig. 30, where experimental halflives of
$^{266,268,270}$Db (red circles) are compared with the calculated EC halflives from \cite{Karpov12} (black squares).
Calculated SF halflives from \cite{Smolan95a} for the even-even EC daughters $^{266,268,270}$Rf are in the
range of $\approx$20 ms - $\approx$20 s, so the technique for identification should be applicable.
The situation could be, however, unfavourable if there is a similar situation as in the case of $^{266}$Sg, where the
experimental SF halflife is a factor of 165 longer than the predicted one. These modified SF halflives are shown in
fig. 30 by the magenta triangles.
For comparison, fig. 30 also presents the expected $\alpha$ decay halflives for $^{266,268}$Db based on the E$_{\alpha}$ values
calculated from the mass predictions of \cite{Moller95}
({\it{E$_{\alpha}$($^{266}$Db)}}\,=\,7242 keV, {\it{E$_{\alpha}$($^{268}$Db)}}\,=\,7076 keV).
As for $^{270}$Db a value of {\it{E$_{\alpha}$}}\,=\,7721 keV is predicted in \cite{Moller95}, which is roughly 200 keV
lower than the experimental value of {\it{E$_{\alpha}$}}\,=\,7.90$\pm$0.03 MeV, conservatively values 300 keV higher were taken for
$^{266,268}$Db. The halflives were calculated using the formula from \cite{PoI80}. Results are shown as blue dots in fig. 30. For
$^{270}$Db the experimental $\alpha$ decay half-life is given. Evidently the values for $^{266,268}$Db are still about an order
of magnitude higher than the measured halflives, so the non-observation of $\alpha$ decay of these isotopes so far is in line with the expectations.\\
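To illustrate how strongly such calculated $\alpha$ half-lives depend on the assumed decay energy, the following sketch uses a generic Viola-Seaborg parametrization with the even-even constants of Sobiczewski et al.; this is not the formula of \cite{PoI80} employed above, and odd-particle hindrance is ignored, but the steep Q$_{\alpha}$ dependence is the same:

```python
import math

# Generic Viola-Seaborg systematics,
#   log10 T_alpha(s) = (a*Z + b)/sqrt(Q) + c*Z + d,
# with the even-even parameters of Sobiczewski et al.  NOT the formula
# of Poenaru et al. used in the text; odd-particle hindrance is ignored.
A, B, C, D = 1.66175, -8.5166, -0.20228, -33.9069

def log10_t_alpha(z_parent, q_alpha_mev):
    """Decimal log of the alpha half-life in seconds."""
    return (A * z_parent + B) / math.sqrt(q_alpha_mev) + C * z_parent + D

# dubnium (Z = 105): raising Q_alpha by ~0.3 MeV shortens the predicted
# half-life by roughly an order of magnitude in this region
for q in (7.3, 7.6, 7.9):
    print(q, round(log10_t_alpha(105, q), 2))
```

This sensitivity is why a conservative 300 keV shift of the predicted E$_{\alpha}$ values changes the expected half-lives substantially.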
Another interesting feature is to identify candidates for EC decay within the $\alpha$ decay chains. With respect to the
quite uncertain predictions of EC halflives that task is not trivial. As the experimental EC halflives are longer than the
calculated ones for the lawrencium and dubnium isotopes (see fig. 29), one may tentatively assume that conditions are
similar for the heavier elements (it should be kept in mind that this assumption is not proven!). In other words, candidates for
EC decay are isotopes for which the experimental half-life is similar to or even longer than the calculated \cite{Karpov12} EC half-life.
In figs. 31-34, the experimental and calculated EC halflives are compared for the known nuclei with {\it{Z}}\,$\ge$\,106. As seen there,
no EC decay can be expected for isotopes of elements with {\it{Z}}\,=\,108\,-\,110 and {\it{Z}}\,=\,114\,-\,118;
possible candidates are $^{285,286}$Nh ({\it{N}}=172,173), $^{283,285}$Cn ({\it{N}}=171,173), $^{280,281,282}$Rg ({\it{N}}=169,170,171), $^{270}$Bh (N=163), and
$^{269}$Sg. It should, however, be stressed that these isotopes are just 'candidates'.\\
An example for possibly having observed EC in $\alpha$ decay chains is given by the 'short chains' registered in irradiations of
$^{243}$Am with $^{48}$Ca \cite{FoR16}, denoted as B1 - B3. As a possible explanation the decay sequence
$^{288}$Mc\,$\stackrel{\alpha}{\rightarrow}$\,$^{284}$Nh\,$\stackrel{EC}{\rightarrow}$\,$^{284}$Cn\,$\stackrel{SF}{\rightarrow}$ was given
in scenario 2. So far, however, this item is just an interesting feature, which has to be investigated thoroughly in future.\\
\subsection{\bf{7.4 Spontaneous fission}}
Spontaneous fission is believed to finally terminate the chart of nuclei towards increasing proton numbers
{\it{Z}}. The strong shell stabilization of nuclei in the vicinity of the spherical proton and neutron shells
{\it{Z}} = 114 or {\it{Z}} = 120 and {\it{N}} = 172 or {\it{N}} = 184 leads also to high fission barriers and thus
long fission half-lives. Qualitatively these expectations are in line with the experimental results. For all nuclei
{\it{Z}} $>$ 114 so far only $\alpha$ decay was observed, while for {\it{Z}} = 112\,--\,114 spontaneous fission was
reported for only five nuclei, $^{284,286}$Fl and $^{282,283,284}$Cn.
The spontaneous fission half-lives of the even-even nuclei $^{284,286}$Fl, $^{282,284}$Cn, $^{280}$Ds agree
within two orders of magnitude, those for $^{284,286}$Fl even within one order of magnitude
with the predictions of R. Smolanczuk et al. \cite{Smolan95a}, whose calculations also quite fairly reproduce the half-lives
of the even-even isotopes of rutherfordium ({\it{Z}} = 104), seaborgium ({\it{Z}} = 106), and hassium ({\it{Z}} = 108).
These results indicate that the expected high stabilization against spontaneous fission in the vicinity of the
spherical proton and neutron shells is indeed present. For further discussion of these items we refer to the
review paper \cite{Hess17}.
\begin{figure}
\resizebox{1.0\textwidth}{!}{\includegraphics{fig35.eps}}
\caption{a) experimental low lying Nilsson levels in odd-mass einsteinium isotopes (data taken from \cite{Hess05}); b) results of HFB - SLy4 calculations for odd mass einsteinium isotopes (data taken from \cite{ChaT06}); c) results of macroscopic - microscopic calculations for odd mass einsteinium isotopes \cite{ParS04}; d) (upper panel) energy differences between the 7/2$^{-}$[514] and 7/2$^{+}$[633] Nilsson levels, (lower panel) energy difference between the 7/2$^{+}$[633] bandhead and the 9/2$^{+}$ rotational band member; e) (upper panel) quadrupole deformation parameters $\beta_{2}$ for odd mass einsteinium isotopes \cite{Moller95}, (lower panel) hexadecapole deformation parameters $\beta_{4}$ for odd mass einsteinium isotopes \cite{Moller95}. }
\label{fig:35}
\end{figure}
\subsection{\bf{7.5 Systematics in nuclear structure - odd-mass einsteinium isotopes}}
Detailed information on the nuclear structure of the heaviest nuclei provides a wide field for testing nuclear models with respect to their predictive power. Presently the situation, however, is not very satisfying for at least three major reasons:\\
a) 'detailed' decay studies using $\alpha$ - $\gamma$ spectroscopy are essentially only possible for nuclei with Z\,$\le$\,107 due to low production rates;\\
b) for many isotopes only very few Nilsson levels have been identified, while the assignment is partly only tentative;\\
c) agreement between experimental data and results from theoretical calculations is in general rather poor.\\
In \cite{Asai15} experimental data are compared with results from theoretical calculations for N\,=\,151 and N\,=\,153 isotones of even-Z elements in the range Z\,=\,94-106. Agreement in excitation energies of the Nilsson levels is often not better than a few hundred keV and also the experimentally established ordering of the levels is often not reproduced by the calculations. Thus,
for example, the existence of the low lying 5/2$^{+}$[622] isomers in the N\,=\,151 isotones is not predicted by the calculations. These deficiencies, on the other hand, make it hard to trust predictions of properties of heavier nuclei by these models.\\
In this study the situation will be illustrated for the case of the odd-mass einsteinium isotopes (fig. 35). Experimentally only two Nilsson levels have been established in all presented isotopes, namely 7/2$^{+}$[633] and 7/2$^{-}$[514]. In the heaviest isotopes, $^{251,253}$Es, also the 3/2$^{-}$[521] level was assigned. While in $^{253,249}$Es 7/2$^{+}$[633] was identified as
ground-state \cite{MooL93,AhmS70}, in the case of $^{251}$Es the ground-state was assigned as 3/2$^{-}$[521] \cite{AhmC00}.
For the lighter einsteinium isotopes the situation is unclear. The Nilsson levels 7/2$^{+}$[633] and 7/2$^{-}$[514] have been established from $\alpha$ - $\gamma$ decay studies of odd-mass mendelevium isotopes \cite{Hess05,ChaT06}. However, no ground-state assignment was made on the basis of the results for the heavier einsteinium isotopes, as it could not be excluded that the
7/2$^{+}$[633] and 3/2$^{-}$[521] levels are close in energy and may alternate as ground-state. Indeed a more detailed decay study of
$^{247}$Md indicates that the 3/2$^{-}$[521] level may be the ground-state in $^{243}$Es, while the 7/2$^{+}$[633] is located at
E$^{*}$\,=\,10 keV \cite{Hes20a}.\\
The experimental data are compared with theoretical calculations in figs. 35b and 35c. In fig. 35b the results from a self-consistent Hartree-Fock-Bogoliubov calculation using the SLy4 force (HFB - SLy4) are presented (data taken from \cite{ChaT06}), in fig. 35c the results from a macroscopic - microscopic calculation \cite{ParS04}. The HFB - SLy4 calculations correctly predict only the ground-state of $^{253}$Es; for $^{251}$Es they result in 7/2$^{+}$[633], as for $^{253}$Es. For the lighter isotopes
the ground-state is predicted as 1/2$^{-}$[521], while the 3/2$^{-}$[521], for which strong experimental evidence exists that it is the ground-state or located close to the ground-state, is located at E$^{*}$\,$\approx$\,400 keV, except for $^{253}$Es. The macroscopic - microscopic calculations, on the other hand, predict 7/2$^{+}$[633] as a low lying level but place the 3/2$^{-}$[521] in an excitation energy range of E$^{*}$\,$\approx$\,(400-600) keV.
As noted in fig. 35b, the 3/2$^{-}$[521], 7/2$^{-}$[514] and 7/2$^{+}$[633] Nilsson levels arise from the f$_{7/2}$, h$_{9/2}$ and i$_{13/2}$ subshells located below the shell gap at Z\,=\,114, while the 1/2$^{-}$[521] stems from the f$_{5/2}$ subshell located above it \cite{Chas77}. The 3/2$^{-}$[521], 7/2$^{-}$[514] and 1/2$^{-}$[521] levels decrease in energy with increasing deformation, while the 7/2$^{+}$[633] increases in energy. At a deformation $\beta_{2}$\,$\approx$\,0.3, where a shell gap of
$\approx$1 MeV is expected at Z\,=\,100, the 3/2$^{-}$[521] and 7/2$^{+}$[633] states are located below the predicted shell gap, while the 7/2$^{-}$[514] and 1/2$^{-}$[521] are located above it. From this one can expect that the energy difference $\Delta$E\,=\,E(7/2$^{-}$[514])\,-\,E(7/2$^{+}$[633]) gives some information about the size of the shell gap. Indeed the experimental energy difference is lower than predicted by the HFB-SLy4 calculations (typically $\approx$400 keV) and
by the macroscopic - microscopic calculations (typically $\approx$600 keV), as seen from figs. 35a - 35c, which hints at a smaller shell gap than predicted. Indeed this could explain the non-observation of a discontinuity in the two-proton binding energies and the Q$_{\alpha}$ values when crossing Z\,=\,100 (see sect. 7.2).\\
Two more interesting features are evident: in figs. 35d and 35e (upper panels) the energy difference $\Delta$E\,=\,E(7/2$^{-}$[514])\,-\,E(7/2$^{+}$[633]) is compared with the quadrupole deformation parameter $\beta_{2}$, while in the lower panels the energy difference of the 7/2$^{-}$[514] bandhead and the 9/2$^{-}$ rotational level is compared with the
hexadecapole deformation parameter $\beta_{4}$, both taken from \cite{Moller95}. Both the experimental energy differences $\Delta$E\,=\,E(7/2$^{-}$[514])\,-\,E(7/2$^{+}$[633]) (not so evident in the calculations) and the $\beta_{2}$ values show a pronounced maximum at N\,=\,152, while both the energy differences E(9/2$^{-}$)\,-\,E(7/2$^{-}$) and the $\beta_{4}$ values decrease with increasing mass number or neutron number, respectively.\\
\subsection{\bf{7.6 Nuclear structure predictions for odd-odd nuclei - exemplified for $^{250}$Md and $^{254}$Lr}}
\begin{figure}
\resizebox{0.8\textwidth}{!}{\includegraphics{fig36.eps}}
\caption{(a) Predicted \cite{Govri18} and (b) experimentally (tentatively) assigned \cite{Vost15} low lying levels of $^{254}$Lr. }
\label{fig:36}
\end{figure}
Predictions of level schemes in the heaviest odd-odd nuclei are scarce so far; calculations have
been performed for only a couple of cases. Thus we will discuss here only two cases, $^{250}$Md and $^{254}$Lr, for which new results
have recently been reported \cite{Vost15}.\\
The ground-state of $^{250}$Md was predicted by Sood et al. \cite{Sood00} as K$^{\pi}$\,=\,0$^{-}$, and a long-lived isomeric state
with spin and parity K$^{\pi}$\,=\,7$^{-}$,
expected to decay primarily by $\alpha$ emission or electron capture, was predicted at E$^{*}$\,=\,80$\pm$30 keV.
Recently a long-lived isomeric state at E$^{*}$\,=\,123 keV was identified \cite{Vost15}, in quite good agreement
with the calculations. However, no spin and parity assignments have been made for the ground state and the isomeric state.\\
The other case concerns $^{254}$Lr. Levels at E$^{*}$$<$250 keV were recently calculated on the basis of a
'Two-Quasi-Particle-Rotor-Model' \cite{Govri18}. The results
are shown in fig. 36. The ground-state is predicted as K$^{\pi}$\,=\,1$^{+}$ and an isomeric state K$^{\pi}$\,=\,4$^{+}$
is predicted at E$^{*}$$\approx$75 keV. Recently an isomeric state at E$^{*}$\,=\,108 keV was
identified in $^{254}$Lr. Tentative spin and parity assignments are, however, different.
The ground-state was assigned as K$^{\pi}$\,=\,4$^{+}$, the isomeric state as K$^{\pi}$\,=\,1$^{-}$ (see fig. 36). This
assignment was based on the assumed ground-state configuration K$^{\pi}$\,=\,0$^{-}$ of $^{258}$Db and the low $\alpha$-decay hindrance factor
HF\,$\approx$\,30 for the transition $^{258g}$Db $\rightarrow$ $^{254m}$Lr,
which favors K$^{\pi}$\,=\,1$^{-}$ over K$^{\pi}$\,=\,4$^{+}$, as the latter configuration would require a change
$\Delta$K\,=\,3 and a change of the parity, which would imply a much larger hindrance factor (see sect. 7.1). \\
Here, however, two items should be considered: \\
a) the spin-parity assignment of $^{258}$Db is only tentative,\\
b) the calculations are based on the energies of low lying levels in the neighboring odd mass nuclei,
in the present case $^{253}$No (N = 151) and $^{253}$Lr (Z = 103). The lowest Nilsson
levels in $^{253}$No are 9/2$^{-}$[734] for the ground-state and 5/2$^{+}$[622] for a shortlived isomer at E$^{*}$\,=\,167 keV \cite{Streich10}.
In $^{253}$Lr tentative assignments of the ground-state (7/2$^{-}$[514]) and of 1/2$^{-}$[521]
for a low lying isomer are given in \cite{HesH01}. The energy of the isomer is not experimentally established; for the calculations a value
of 30 keV was taken \cite{Govri18}. It should be noted, however, that for the neighboring N = 152 isotope of lawrencium, $^{255}$Lr, the ground-state had been determined as the Nilsson level 1/2$^{-}$[521], while 7/2$^{-}$ was attributed to a low lying isomeric
state at E$^{*}$\,=\,37 keV \cite{ChaT06}. Therefore, with respect to the uncertain starting conditions,
the results of the calculations, although not in 'perfect agreement' with the experimental results, are still
promising and may be improved in future.\\
It should be noticed that the existence and excitation energy of the isomeric state in $^{254}$Lr have been confirmed
by direct mass measurements at SHIPTRAP \cite{Kaleja20}, and there is some confidence that spins can be determined in the near future
by means of laser spectroscopy using the RADRIS technique \cite{Laati14}.
\subsection{\bf{ 7.7 Attempts to synthesize elements Z\,=\,119 and Z\,=\,120}}
Although elements up to Z\,=\,118 have been synthesized so far, the quest for the location of the spherical 'superheavy' proton and neutron shells is still open.
Indeed the synthesis of elements up to Z\,=\,118 in $^{48}$Ca induced reactions shows a maximum in the cross sections at Z\,=\,114, which might be seen as an indication of a proton shell at Z\,=\,114 (see fig. 16). Such an interpretation, however, is not unambiguous, since a complete understanding of the evaporation residue (ER) production process (capture of projectile and target nuclei, formation of the compound nucleus, deexcitation of the compound nucleus, competition between particle emission and fission) is required to draw firm conclusions. Indeed V.I. Zagrebaev and W. Greiner \cite{ZagG15} could reproduce cross-sections for elements Z\,=\,112 to Z\,=\,118 produced in $^{48}$Ca induced reactions quite fairly, but evidently a main ingredient of their calculations was quite uncertain. They approximated fission barriers as the sum of the 'shell effects' (according to \cite{Moller95}) and a 'zero-point energy' of 0.5 MeV, which resulted in quite different values than obtained from 'direct' fission barrier calculations (see e.g. \cite{Moller09,Moller15,Kowal15}). Due to these uncertainties measured cross sections are not a good argument for the identification of a proton shell at
Z\,=\,114\footnote{We want to note that recently Samark-Roth et al. \cite{Sarm21} claimed on the basis of their results on decay studies of $^{286,288}$Fl and their daughter products that there is no real indication for a proton shell at Z\,=\,114.}. \\
\begin{figure}
\resizebox{0.8\textwidth}{!}{\includegraphics{fig37.eps}}
\caption{Predicted Q$_{\alpha}$ values along the N\,=\,172 and N\,=\,184 isotones lines \cite{Smolan95}. The
experimental Q$_{\alpha}$ values for $^{286}$Fl and $^{284}$Cn (data from \cite{Sarm21}) are shown by the open squares.}
\label{fig:37}
\end{figure}
\begin{figure}
\resizebox{0.8\textwidth}{!}{\includegraphics{fig38.eps}}
\caption{Comparison of Q$_{\alpha}$ values and halflives for element 120 isotopes from different models. See text for details.}
\label{fig:38}
\end{figure}
From this point of view it rather seems useful to take the $\alpha$ decay properties as a signature for a shell, as discussed in sect. 7.2. However, one has to note that, strictly speaking, even-even nuclei have to be considered, since only for those can a ground-state to ground-state transition be assumed a priori to be the strongest decay line. Presently, however, one is not only confronted with a lack of experimental data. The situation is shown in fig. 37, where predicted Q$_{\alpha}$ values for the N\,=\,172 and N\,=\,184 isotones are presented. Different to the situation at Z\,=\,82 (see fig. 28), the calculations of Smolanczuk et al. \cite{Smolan95} predicting Z\,=\,114 as proton shell
result in only a rather small decrease of the Q$_{\alpha}$ values when crossing the shell, even at the predicted neutron shell at N\,=\,184, compared to the heavier and lighter isotones.
At N\,=\,172 there is practically no effect any more; one gets a more or less straight decrease of the
Q$_{\alpha}$ values. So Q$_{\alpha}$ values probably could not be
applied for identifying Z\,=\,114 as a proton shell even if more data were available.\\
A possibility to decide whether the proton shell is located at Z\,=\,114 or Z\,=\,120 results from the comparison of experimental
Q$_{\alpha}$ values and halflives with results from models predicting either Z\,=\,114 or Z\,=\,120. However, one has to consider the large straggling
of the predicted values, so it is required to produce and investigate nuclei in a region where the differences
between the results from models predicting either Z\,=\,114 or Z\,=\,120 as proton shells are larger than the
uncertainties of the predictions.
An inspection of the different models shows that element 120 seems to be the first one where the differences are so large that the quest of the proton shell can be
answered with some certainty.
The situation is shown in fig. 38, where predicted Q$_{\alpha}$ values and calculated halflives are compared.
Despite the large straggling of the predicted $\alpha$ energies and halflives there is seemingly a borderline evident at E$_{\alpha}$\,=\,12.75 MeV between models predicting Z\,=\,120 as proton shell \cite{Typel03,Cwiok05,Cwiok99,Litvinova12} and those predicting Z\,=\,114 as proton shell \cite{Smolan95,Moller95}, while halflives $<$10$^{-5}$ s hint to Z\,=\,114 and halflives $>$10$^{-5}$ s to Z\,=\,120 as proton shell. This feature makes the synthesis of element 120 even more interesting than the synthesis of element 119. Suited reactions to produce an even-even isotope of element 120 seem to be $^{50}$Ti($^{249}$Cf,3n)$^{296}$120 (N=176) and $^{54}$Cr($^{248}$Cm,2n,4n)$^{298,300}$120 (N=178,180). Expected cross-sections are, however, small. \\
V.I. Zagrebaev and W. Greiner \cite{ZagG08} predicted cross sections of $\sigma$\,$\approx$\,25 fb for $^{54}$Cr($^{248}$Cm,4n)$^{298}$120 and a slightly higher
value of $\sigma$\,$\approx$\,40 fb for $^{50}$Ti($^{249}$Cf,3n)$^{296}$120. So far only few experiments on the synthesis of element 120 reaching cross-section limits
below 1 pb have been performed: $^{64}$Ni + $^{238}$U at SHIP, GSI with $\sigma$$<$0.09 pb \cite{HofA08}, $^{54}$Cr + $^{248}$Cm at SHIP, GSI with $\sigma$$<$0.58 pb \cite{HofH16}, $^{50}$Ti + $^{249}$Cf at TASCA, GSI with $\sigma$$<$0.2 pb \cite{Khuyag20}, and $^{58}$Fe + $^{244}$Pu at DGFRS, JINR Dubna with $\sigma$$<$0.5 pb \cite{OganU09}.
\section{\bf{9. Challenges / Future}}
There are two major problems concerning the experimental techniques used in the investigation of superheavy elements.
The first is connected with the implantation of the reaction products into silicon detectors which are also used to measure
the $\alpha$-decay energy, conversion electrons and fission products. This simply means that, {\it{e.g.}} in the case of
$\alpha$ decay not only the kinetic energy of the $\alpha$-particle is measured but also part of the recoil energy transferred by the $\alpha$ particle to the residual nucleus. Due to the high ionisation density in the stopping process of the heavy residual nucleus and partial recombination of the charge carriers, typically only about one third of the recoil energy is measured
\cite{Eyal82}. This results in a shift of the measured $\alpha$-decay energy by $\approx$50 keV, which can be compensated by a proper calibration, and in a deterioration of the energy resolution of the detector by typically 5\,-\,10 keV.\\
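The $\approx$50 keV shift quoted above follows from simple two-body decay kinematics; a back-of-the-envelope sketch (masses approximated by mass numbers; the chosen $\alpha$ energy and mass number are merely typical for this region, not a specific case):

```python
def recoil_energy_keV(e_alpha_keV, a_parent):
    """Two-body decay kinematics, masses approximated by mass numbers:
    E_rec = E_alpha * m_alpha / m_daughter."""
    return e_alpha_keV * 4.0 / (a_parent - 4)

e_alpha = 9000.0                          # typical alpha energy, keV
e_rec = recoil_energy_keV(e_alpha, 260)   # hypothetical A = 260 emitter
print(round(e_rec))        # ~140 keV full recoil energy
print(round(e_rec / 3.0))  # only ~1/3 registered -> shift of ~50 keV
```

One third of the $\approx$140 keV recoil thus lands in the $\alpha$ peak, giving the quoted $\approx$50 keV calibration shift.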
A second item is more severe. It is connected with the population of excited levels in nuclei that decay promptly (with life-times of some $\mu$s or lower) by internal conversion. In these cases energy summing of $\alpha$ particles with conversion electrons (and also low energy X-rays and Auger electrons from the deexcitation of the atomic shell) is observed \cite{HessH87}. The influence on the measured $\alpha$ spectra is manifold, depending also on the energy of the conversion electrons; the essential effects are a broadening and shifting of the $\alpha$ energies, often washing out peak structures of the $\alpha$ decay pattern. An illustrative case is the $\alpha$ decay of $^{255}$No, which has been investigated using the implantation technique \cite{Hess06} and the He-jet technique with negligible probability of energy summing \cite{Asai11}. Specifically, different low lying members of the same rotational band are populated, which decay by practically completely converted M1 or E2 transitions towards the band-head; these fine structures of the $\alpha$ decay spectrum cannot be resolved using the implantation technique (see also \cite{Asai15}). Although in recent years successful attempts have been undertaken to model those influences by GEANT simulations \cite{Hess12}, direct measurements are preferred from the experimental side. First steps in this direction have recently been undertaken by coupling an ion trap \cite{Rudo10} or an MRTOF system \cite{Schury17} to a recoil separator, and by the BGS + FIONA system, which was used
to directly measure the mass number of $^{288}$Mc \cite{Gates18}.\\
While mass number measurement is an interesting feature, the ultimate goal is a safe {\it{Z}} and {\it{A}} identification of a nuclide. This can be
achieved via high precision mass measurements allowing for clear isobaric separation (ion traps and possibly also MRTOF systems).
Presently limits are set by the production rate. \\
The most direct method to determine the atomic number of a nucleus is measuring characteristic X-rays in prompt or delayed coincidence with its radioactive decay.
Such measurements are, however, a gamble, as they need highly K-converted transitions (M1, M2) with transition energies above the K-binding energy.
The latter is not a trivial problem, as the K-binding energies rise steadily and are of the order of 180 keV at {\it{Z}}\,=\,110.
Such measurements have been applied so far up to bohrium ({\it{Z}} = 107 \cite{Hess09}). In the region of superheavy nuclei ({\it{Z}}\,$>$\,112) such attempts have been recently performed by D. Rudolph et al. \cite{RuF13} and J. Gates et al. \cite{GaG15} by investigating the $\alpha$ decay chains starting from the odd-odd nucleus $^{288}$Mc ({\it{Z}}\,=\,115), but no positive result was obtained.\\
Alternatively one can attempt to measure L X-rays to have access to lower energies and also to E2 transitions. Such measurements have been performed successfully
up to {\it{Z}}\,=\,105 \cite{Bemis77}, but are more complicated due to the more complex structure of the L X-ray spectra.\\
An alternative method for X-ray identification is measuring the X-rays emitted during electron capture (EC) decay in delayed coincidence
with $\alpha$ decay or spontaneous fission of the daughter nucleus. This technique has recently been successfully applied for the first time in the transactinide region \cite{Hess16a}, by measuring K$_{\alpha}$ and
K$_{\beta}$ X-rays from the EC decay of $^{258}$Db in delayed coincidence with spontaneous fission and $\alpha$ decay of the daughter nucleus $^{258}$Rf.
Application in the SHN region seems possible, problems connected with that technique are discussed in sect. 7.3.
\section{Extended Results}
\subsection{Additional Experiments}
\label{social_exp}
\input{Tables/TUD_Results}
In this section we evaluate \textsc{CRaWl}{} on commonly used benchmark datasets from the domain of social networks.
We use a subset of the \mbox{TUDataset} \citep{Morris+2020}, a collection of typically small graph datasets from different domains, e.g., chemistry, bioinformatics, and social networks.
We focus on three datasets originally proposed by \citet{yanardag2015deep}: COLLAB, a scientific collaboration dataset, IMDB-MULTI, a multiclass dataset of movie collaborations of actors/actresses, and REDDIT-BIN, a balanced binary classification dataset of graphs of Reddit users that discussed together in a thread.
These datasets do not have any node or edge features and the tasks have to be solved purely with the structure of the graphs.
We stick to the experimental protocol suggested by \citet{Keyulu18}.
Specifically, we perform a 10-fold cross validation.
Each dataset is split into 10 stratified folds.
We perform 10 training runs where each split is used as test data once, while the remaining 9 are used for training.
We then select the epoch with the highest mean test accuracy across all 10 runs.
We report this mean test accuracy as the final result.
This is not the most realistic setup for simulating real world tasks, since there is no clean split between validation and test data.
But in fact, it is the most commonly used experimental setup for these datasets and is mainly justified by the comparatively small number of graphs.
Therefore, we adopt the same procedure for the sake of comparability to the previous literature.
For COLLAB and IMDB-MULTI we use the same 10-fold split used by \citet{zhang2018end}.
For REDDIT-BIN we computed our own stratified splits.
We also computed separate stratified 10-fold splits for hyperparameter tuning.
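The splitting procedure described above can be sketched as follows; this is a generic illustration of stratified 10-fold splitting, not the authors' code, and `stratified_folds` is a name introduced here:

```python
import random
from collections import defaultdict

def stratified_folds(labels, k=10, seed=0):
    """Split sample indices into k folds with (approximately) equal
    class proportions, as in the 10-fold protocol described above."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    folds = [[] for _ in range(k)]
    for members in by_class.values():
        rng.shuffle(members)
        for i, idx in enumerate(members):
            folds[i % k].append(idx)
    return folds

# Each fold serves as test data exactly once; the remaining k-1 folds
# form the training data, and the epoch with the highest mean test
# accuracy across all k runs is selected.
labels = [0] * 50 + [1] * 50          # toy binary labels
folds = stratified_folds(labels, k=10)
print([len(f) for f in folds])        # ten folds of equal size
```

Stratification keeps the class ratio of each fold close to that of the full dataset, which matters for the small, imbalanced-prone datasets considered here.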
We adapt the training procedure of \textsc{CRaWl}{} towards this setup.
Here, the learning rate decays by a factor of 0.5 in fixed intervals.
These intervals are chosen as 20 epochs on COLLAB and REDDIT-BINARY and as 50 epochs on IMDB-MULTI.
We train for 200 epochs on COLLAB and REDDIT-BINARY and for 500 epochs on IMDB-MULTI.
This ensures a consistent learning rate profile across all 10 runs for each dataset.
Table \ref{social_results} reports the achieved accuracy of \textsc{CRaWl}{} and several key baselines on those datasets.
For the baselines, we provide the results as reported in the literature.
For comparability, we only report values for baselines with the same experimental protocol.
On IMDB-MULTI, the smallest of the three datasets, \textsc{CRaWl}{} yields a slightly lower accuracy than most baselines.
On COLLAB, our method performs similarly to standard MPGNN architectures such as GIN.
\textsc{CRaWl}{} outperforms all baselines that report values for REDDIT-BIN.
Note that GSN, the method with the best results on COLLAB and IMDB-MULTI, does not scale as well as \textsc{CRaWl}{} and is infeasible for REDDIT-BIN which contains graphs with several thousand nodes.
\subsection{Detailed Results for all Experiments}
\input{Tables/Extended_Results}
Table \ref{extended_results} provides the full results from our experimental evaluation.
It reports the performance on the train, validation, and test data.
Recall that the output of \textsc{CRaWl}{} is a random variable.
The predictions for a given input graph may vary when different random walks are sampled.
To quantify this additional source of randomness, we measure two deviations for each experiment:
The cross model deviation (CMD) and the internal model deviation (IMD).
For clarity, let us define these terms formally.
For each experiment, we perform $q \in \mathbb{N}$ training runs with different random seeds.
Let $m_i$ be the model obtained in the $i$-th training run with $i\in[q]$.
When evaluating (both on test and validation data), we evaluate each model $r \in \mathbb{N}$ times, with different random walks in each evaluation run.
Let $p_{i,j} \in \mathbb{R}$ measure the performance achieved by the model $m_i$ in its $j$-th evaluation run.
Note that the unit of $p_{i,j}$ varies between experiments (Accuracy, MAE, $\dots$).
We formally define the \emph{internal model deviation} as
\begin{equation*}
\small{
\text{IMD} = \frac{1}{q} \cdot \sum_{1 \leq i \leq q} \text{STD}\left( \{p_{i,j} \mid 1 \leq j \leq r \} \right),}
\end{equation*}
where $\text{STD}(\cdot)$ is the standard deviation of a given distribution.
Intuitively, the IMD measures how much the performance of a trained model varies when applying it multiple times to the same input.
It quantifies how the model performance depends on the random walks that are sampled during evaluation.
We formally define the \emph{cross model deviation} as
\begin{equation*}
\small{
\text{CMD} = \text{STD}\left( \left\{\frac{1}{r} \cdot \sum_{1 \leq j \leq r} p_{i,j} \mid 1 \leq i \leq q \right\} \right).}
\end{equation*}
The CMD measures the deviation of the average model performance between different training runs.
It therefore quantifies how the model performance depends on the random initialization of the network parameters before training.
In the main section, we only reported the CMD for simplicity.
Note that the CMD is significantly larger than the IMD across all experiments.
Therefore, trained \textsc{CRaWl}{} models can reliably produce high quality predictions, despite their dependence on randomly sampled walks.
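The two deviations can be computed from the $q \times r$ table of performances as follows; this is a minimal sketch that uses the population standard deviation, which is an assumption on our part.

```python
from statistics import pstdev

# p[i][j]: performance of model i (of q training runs) in its
# j-th of r evaluation runs with freshly sampled walks.

def imd(p):
    # average over models of the spread across evaluation runs
    return sum(pstdev(row) for row in p) / len(p)

def cmd(p):
    # spread of the per-model average performances across training runs
    return pstdev(sum(row) / len(row) for row in p)

# illustrative table: q = 3 models, r = 3 evaluation runs each
p = [[0.90, 0.91, 0.89],
     [0.85, 0.86, 0.84],
     [0.95, 0.95, 0.95]]
```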
\section{Model and Setup Details}
\subsection{Convolution Module}
\label{Conv}
Here, we describe the architecture used for the 1D CNN network $\text{CNN}^t$ in each layer $t$.
Let $\text{Conv1D}(d, d', k)$ be a standard 1D convolution with input feature dimension $d$, output feature dimension $d'$, kernel size $k$ and no bias.
This module has $d \cdot d' \cdot k$ trainable parameters.
This term scales poorly with the hidden dimension: for $d \approx d'$ the parameter count grows as $k d^2$, and we typically set the kernel size $k$ to 9 or more.
To address this issue we leverage \emph{Depthwise Separable Convolutions}, as suggested by \cite{Chollet_2017_CVPR}.
This method is most commonly applied to 2D data in Computer Vision, but it can also be utilized for 1D convolutions.
It decomposes one convolution with kernel size $k$ into two convolutions:
The first convolution is a standard 1D convolution with kernel size 1.
The second convolution is a depthwise convolution with kernel size $k$, which convolves each channel individually and therefore only requires $k \cdot d'$ parameters.
The second convolution is succeeded by a Batch Norm layer and a ReLU activation function.
Note that there is no non-linearity between the two convolutions.
These operations effectively simulate a standard convolution with kernel size $k$ but require substantially less memory and runtime.
After the ReLU activation, we apply an additional (standard) convolution with kernel size $1$, followed by another ReLU non-linearity.
This final convolution increases the expressiveness of our convolution module which could otherwise only learn linearly separable functions.
This would limit its ability to distinguish the binary patterns that encode identity and adjacency.
The full stack of operations effectively applies a 2-layer MLP to each sliding window position of the walk feature tensor.
Overall, $\text{CNN}^t$ is composed of the following operations:
\makebox[\textwidth]{\parbox{1.3\textwidth}{%
\begin{equation*}
\text{Conv1D}(d,d',1)\rightarrow\text{Conv1D}^{dw}(d',d',k)\rightarrow\text{BatchNorm}\rightarrow\text{ReLU}\rightarrow\text{Conv1D}(d',d',1)\rightarrow\text{ReLU}
\end{equation*}}}
Here, $\text{Conv1D}$ is a standard 1D convolution and $\text{Conv1D}^{dw}$ is a depthwise convolution.
The total number of parameters of one such module (without the affine transformation of the Batch Norm) is equal to $dd'+kd'+{d'}^2$.
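As a sanity check of the savings, one can count the weights of each convolution directly (no biases; BatchNorm affine parameters ignored). The dimensions below are illustrative, not values used in our experiments.

```python
# Weight counts of the convolution stack above versus a single standard Conv1D.

def separable_stack_params(d, d_out, k):
    pointwise = d * d_out  # Conv1D(d, d', 1)
    depthwise = k * d_out  # Conv1D^dw(d', d', k): one length-k kernel per channel
    mix = d_out * d_out    # final Conv1D(d', d', 1)
    return pointwise + depthwise + mix

def standard_conv_params(d, d_out, k):
    return d * d_out * k   # Conv1D(d, d', k)

d, k = 128, 9
saved = standard_conv_params(d, d, k) - separable_stack_params(d, d, k)
```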
\subsection{Hyperparameters}
\input{Tables/Hyper_Param}
Table \ref{hyper_param} provides the hyperparameters used in each experiment.
Hyperparameters that define/modify the network architecture:
\begin{itemize}
\item The number of layers $L$. \\
We tried out $L \in \{2,3,4\}$ on all datasets, except for MOLPCBA, where we searched through $L \in \{3,5,7\}$.
\item The latent state size $d$. \\ On ZINC, CIFAR10, MNIST and CSL we chose sizes that would roughly use the chosen parameter budgets. On COLLAB, IMDB-MULTI and REDDIT-BIN we set $d=100$ and for MOLPCBA we chose the largest feasible size for our hardware $(d=400)$.
\item The local window size $s$.
\item The global pooling function (either \emph{mean} or \emph{sum})
\item The architecture of the final output network (either \emph{mlp} or \emph{linear})
\item The number of random walk steps during training ($\ell_\text{train}$) and evaluation ($\ell_\text{eval}$).
\item The dropout rate. \\
We searched through \{0.0, 0.25, 0.5\}. One dropout layer is placed behind the global pooling step.
\item Whether or not a virtual node (VN) is used as an intermediate update layer.
\end{itemize}
Hyperparameters for the walks and the training procedure:
\begin{itemize}
\item The probability of starting a walk from each node during training $p^*$.
We choose $p^*=1$ by default.
On MOLPCBA we set $p^*=0.2$ to reduce overfitting.
\item The walk strategy (either \emph{uniform} (un) or \emph{non-backtracking} (nb))
\item The number of evaluation runs with different random walks for validation ($r_\text{val}$) and testing ($r_\text{test}$).
\item The initial learning rate $lr$ was chosen as $0.001$ in all experiments.
\item The patience for learning rate decay (with a factor of $0.5$) is $10$ by default.
\item The batch size
\end{itemize}
\subsection{Model Size and Runtime}
Table \ref{param_count} provides the number of trainable parameters in each model.
Additionally, we report the runtime observed during training.
All experiments were run on a machine with 64GB RAM, an Intel Xeon 8160 CPU and an Nvidia Tesla V100 GPU with 16GB GPU memory.
The resources were provided by our internal compute cluster.
\input{Tables/Param_Count}
\subsection{Virtual Node}
\label{vn}
\citet{GilmerSRVD17}, \citet{li2017learning}, and \citet{ishiguro2019graph} suggested the use of a \emph{virtual node} to enhance GNNs for chemical datasets.
Intuitively, a special node is inserted into the graph that is connected to all other nodes.
This node aggregates the states of all other nodes and uses this information to update its own state.
The virtual node has its own distinct update function which is not shared by other nodes.
The updated state is then sent back to all nodes in the graph.
Effectively, a virtual node allows global information flow after each layer.
Formally, a virtual node updates a latent state $h_{vn}^t \in \mathbb{R}^d$, where $h_{vn}^t$ is computed after the $t$-th layer and $h_{vn}^0$ is initialized as a zero vector.
The update procedure is defined by:
\begin{align*}
h_{vn}^t &= U_{vn}^t\left(h_{vn}^{t-1} + \sum_{v \in V} h^t(v)\right)\\
\tilde{h}^t(v) &= h^t(v) + h_{vn}^t.
\end{align*}
Here, $U_{vn}^t$ is a trainable MLP and $h^t$ is the latent node embedding computed by the $t$-th \textsc{CRaWl}{} layer.
$\tilde{h}^t$ is an updated node embedding that is used as the input for the next \textsc{CRaWl}{} layer instead of $h^t$.
In our experiments, we choose $U_{vn}^t$ to contain a single hidden layer of dimension $d$.
When using a virtual node, we perform this update step after every \textsc{CRaWl}{} layer, except for the last one.
Note that we view the virtual node as an intermediate update step that is placed between our \textsc{CRaWl}{} layers to allow for global communication between nodes.
No additional node is actually added to the graph and, most importantly, the ``virtual node'' does not occur in the random walks sampled by \textsc{CRaWl}{}.
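The update rule above can be sketched as follows; the trainable MLP $U_{vn}^t$ is stubbed by a placeholder function, and plain lists of floats stand in for the latent vectors.

```python
# One virtual-node update step between two CRaWl layers.

def virtual_node_update(h, h_vn, u):
    """h: node embeddings h^t(v), h_vn: previous state h_vn^{t-1},
    u: stand-in for the trainable MLP U_vn^t."""
    d = len(h_vn)
    # aggregate all node states into the virtual-node state
    pooled = [h_vn[c] + sum(hv[c] for hv in h) for c in range(d)]
    new_h_vn = u(pooled)  # h_vn^t
    # send the updated state back to every node
    h_tilde = [[hv[c] + new_h_vn[c] for c in range(d)] for hv in h]
    return h_tilde, new_h_vn

h = [[1.0, 0.0], [0.0, 2.0]]  # two nodes, d = 2
h_tilde, h_vn = virtual_node_update(h, [0.0, 0.0], lambda x: x)
```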
\subsection{Cross Validation on CSL}
Let us briefly discuss the experimental protocol used for the CSL dataset.
Unlike the other benchmark datasets provided by \citet{dwivedi2020benchmarkgnns}, CSL is evaluated with 5-fold cross-validation.
We use the 5-fold split \citet{dwivedi2020benchmarkgnns} provide in their repository.
In each training run, three folds are used for training and one is used for validation and model selection.
After training, the remaining fold is used for testing.
Finally, Figure \ref{fig:skiplink} provides an example of two skip-link graphs.
The task of CSL is to classify such graphs by their isomorphism class.
\begin{figure}
\centering
\includegraphics[]{Pictures/picture_skiplink.pdf}
\caption{Two cyclic skip-link graphs \citep[see][]{murphy2019relational} with 11 nodes and a skip distance of 2 and 3, respectively.}
\label{fig:skiplink}
\end{figure}
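Assuming the standard circulant construction of the CSL graphs (a cycle plus chords between nodes at a fixed circular skip distance; this construction is our reading of \citet{murphy2019relational}), graphs such as those in the figure can be generated as:

```python
# Cyclic skip-link graph: cycle on n nodes plus chords at skip distance r.
# Node count and skip distances below match the figure above.

def skip_link_edges(n, r):
    edges = set()
    for i in range(n):
        edges.add(frozenset((i, (i + 1) % n)))  # cycle edge
        edges.add(frozenset((i, (i + r) % n)))  # skip edge
    return edges

g2 = skip_link_edges(11, 2)
g3 = skip_link_edges(11, 3)
```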
\section{Ablation Study}
\label{ablation study}
We perform an ablation study to understand how the key aspects of \textsc{CRaWl}{} influence the empirical performance.
We aim to answer two main questions:
\begin{itemize}
\item How useful are the identity and adjacency features we construct for the walks?
\item How do different strategies for sampling random walks impact the performance?
\end{itemize}
Here, we use the ZINC, MOLPCBA, and CSL datasets to answer these questions empirically.
We trained multiple versions of \textsc{CRaWl}{} with varying amounts of structural features used in the walk feature matrices.
The simplest version only uses the sequences of node and edge features without any structural information.
For ZINC and CSL, we also train intermediate versions using either the identity or the adjacency encoding, but not both.
We omit these for MOLPCBA to save computational resources.
Finally, we measure the performance of the standard \textsc{CRaWl}{} architecture, where both encodings are incorporated into the walk feature matrices.
For each version, we compute the performance with both walk strategies.
On each dataset, the experimental setup and hyperparameters are identical to those used in the previous experiments.
In particular, we train five models with different seeds and provide the average performance as well as the standard deviation across models.
Note that we repeat the experiment independently for each walk strategy.
Switching walk strategies between training and evaluation does not yield good results.
\input{Tables/Ablation_Results}
Table \ref{ablation_results} reports the performance of each studied version of \textsc{CRaWl}{}.
On ZINC, the networks without any structural encoding yield the worst predictions.
Adding either the adjacency or the identity encoding improves the results substantially.
The best results are obtained when both encodings are utilized and non-backtracking walks are used.
On MOLPCBA, the best performance is also obtained with full structural encodings.
However, the improvement over the version without the encodings is only marginal.
Again, non-backtracking walks perform significantly better than uniform walks.
On CSL, the only version to achieve a perfect accuracy of 100\% is the one with all structural encodings and non-backtracking walks.
Note that the version without any encodings can only guess on CSL since this dataset has no node features (we are not using the Laplacian features here).
Overall, the structural encodings of the walk feature matrices yield a measurable performance increase on all three datasets.
However, the margin of the improvement varies significantly and depends on the specific dataset.
For some tasks, such as MOLPCBA, \textsc{CRaWl}{} yields highly competitive results even when only the sequences of node and edge features are considered in the walk feature matrices.
Finally, the non-backtracking walks consistently outperform the uniform walks.
This could be attributed to their ability to traverse sparse substructures quickly.
On sparse graphs with limited degree, such as molecules, uniform walks will backtrack often.
This slows down the traversal of the graph.
Each substructure will be traversed less frequently and the average size of the subgraphs induced by the walklets decreases.
On the three datasets used here these effects seem to cause a significant loss in performance.
\section{Theory}
\label{appendix:proof}
In this appendix, we prove Theorem 1 and discuss its context. Let us first recall the setting and introduce some additional notation. Throughout the paper, graphs are
undirected and simple (that is, without self-loops and parallel edges).%
\footnote{It is possible to simulate directed edges and parallel edges through edge labels and loops through node labels, but so far, we have only worked with undirected simple, though possibly labeled graphs.}
In this appendix, all graphs will be unlabeled. All results can easily be extended to (vertex- and edge-)labeled graphs. In fact, the (harder) inexpressivity results only become stronger by restricting them to the subclass of unlabeled graphs.
We further assume that graphs have no isolated nodes, which enables us to
start a random walk from every node. This makes the setup cleaner and avoids tedious case distinctions, but again is no serious restriction.
We denote the node set of a
graph $G$ by $V(G)$ and the edge set by $E(G)$. The \emph{order}
$|G|$ of $G$ is the number of nodes, that is,
$|G|\coloneqq |V(G)|$.
For a set $X\subseteq V(G)$, the
\emph{induced subgraph} $G[X]$ is the graph with node set $X$ and
edge set $\{vw\in E(G)\mid v,w\in X\}$.
A walk of length $\ell$ in $G$ is a sequence
$W=(w_0,\ldots,w_{\ell})\in V(G)^{\ell+1}$ such that $w_{i-1}w_{i}\in
E(G)$ for $1\le i \le \ell$. The walk is \emph{non-backtracking}
if for $1\le i<\ell$ we have
$w_{i+1}\neq w_{i-1}$ unless the degree of node $w_i$ is $1$.
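The two sampling strategies can be sketched as follows; the adjacency-list representation is an illustrative choice, and the exception for degree-1 nodes matches the definition above.

```python
import random

# adj maps each node to its list of neighbors (no isolated nodes).

def uniform_walk(adj, start, length, rng):
    walk = [start]
    for _ in range(length):
        walk.append(rng.choice(adj[walk[-1]]))
    return walk

def non_backtracking_walk(adj, start, length, rng):
    walk = [start]
    for _ in range(length):
        nbrs = adj[walk[-1]]
        if len(walk) >= 2 and len(nbrs) > 1:
            # never step back to w_{i-1} unless the degree of w_i is 1
            nbrs = [v for v in nbrs if v != walk[-2]]
        walk.append(rng.choice(nbrs))
    return walk

# triangle: a non-backtracking walk keeps circling in one direction
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
w = non_backtracking_walk(adj, 0, 10, random.Random(0))
```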
Before we prove the theorem, let us precisely specify what it means that \textsc{CRaWl}{}
distinguishes two graphs. Recall that \textsc{CRaWl}{} has three (walk related) hyperparameters:
\begin{itemize}
\item the \emph{window size} $s$;
\item the \emph{walk length} $\ell$;
\item the \emph{sample size} $m$.
\end{itemize}
Recall furthermore that with every walk $W=(w_0,\ldots,w_\ell)$ we associate a
\emph{walk feature matrix} $X\in\mathbb R^{(\ell+1)\times(d+d'+s+(s-1))}$. For
$0\le i\le\ell$, the first $d$ entries of the $i$-th row of $X$
describe the current embedding of the node $w_i$, the next $d'$ entries the
embedding of the edge $w_{i-1}w_i$ ($0$ for $i=0$), the following $s$
entries are indicators for the equalities between $w_i$ and the
nodes $w_{i-j}$ for $j=1,\ldots,s$ ($1$ if $w_i=w_{i-j}$, $0$ if
$i-j<0$ or $w_i\neq w_{i-j}$), and the remaining $s-1$ entries are
indicators for the adjacencies between $w_i$ and the nodes
$w_{i-j}$ for $j=2,\ldots,s$ ($1$ if $w_i,w_{i-j}$ are adjacent in $G$,
$0$ if $i-j<0$ or $w_i,w_{i-j}$ are non-adjacent; note that
$w_i,w_{i-1}$ are always adjacent because $W$ is a walk in $G$).
Note that in the unlabelled graphs we consider here the initial node and edge embeddings
are the same for all nodes and for all edges, and therefore they do not contribute to the expressivity. As our expressiveness results
are based on the initial feature matrices---they carry all information that \textsc{CRaWl}{} extracts from the graph---we can safely ignore these embeddings and focus on the subgraph features encoded in the last $2s-1$ columns. For simplicity, we regard $X$ as an $(\ell+1)\times(2s-1)$ matrix with only these features in the following.
We
denote the entries of the matrix $X$ by $X_{i,j}$ and the rows by
$X_{i,-}$. So $X_{i,-}=(X_{i,1},\ldots,X_{i,2s-1})\in\{0,1\}^{2s-1}$.
We denote the walk feature matrix of a walk $W$ by $X(W)$. It is
immediate from the definitions that for walks
$W=(w_0,\ldots,w_\ell),W'=(w_0',\ldots,w_\ell')$ in graphs $G,G'$ with
feature matrices $X\coloneqq X(W),X'\coloneqq X(W')$, we have:
\begin{enumerate}
\item
if $X_{i-j,-}=X'_{i-j,-}$ for $j=0,\ldots,s-1$ then the mapping $w_{i-j}\mapsto w'_{i-j}$ for
$j=0,\ldots,s$ is an isomorphism from the induced subgraph
$G[\{w_{i-j}\mid j=0,\ldots,s\}]$ to the induced subgraph
$G'[\{w'_{i-j}\mid j=0,\ldots,s\}]$;
\item
if the mapping $w_{i-j}\mapsto w'_{i-j}$ for
$j=0,\ldots,2s-1$ is an isomorphism from the induced subgraph
$G[\{w_{i-j}\mid j=0,\ldots,2s-1\}]$ to the induced subgraph
$G'[\{w'_{i-j}\mid j=0,\ldots,2s-1\}]$, then $X_{i-j,-}=X'_{i-j,-}$ for $j=0,\ldots,s-1$.
\end{enumerate}
The reason that we need to include the vertices
$w_{i-2s+1},\ldots,w_{i-s}$ and $w'_{i-2s+1},\ldots,w'_{i-s}$ into the subgraphs in (2) is that row $X_{i-s+1,-}$ of the feature matrix records edges
and equalities between $w_{i-s+1}$ and $w_{i-2s+1},\ldots,w_{i-s}$.
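The structural part of the walk feature matrix (the last $2s-1$ columns, with node and edge embeddings omitted) can be sketched as:

```python
# Identity and adjacency indicator columns for a walk, as defined above.
# walk: list of nodes; edges: set of frozenset node pairs; s: window size.

def walk_features(walk, edges, s):
    rows = []
    for i, w in enumerate(walk):
        ident = [1 if i - j >= 0 and walk[i - j] == w else 0
                 for j in range(1, s + 1)]          # s identity entries
        adjac = [1 if i - j >= 0 and frozenset((walk[i - j], w)) in edges else 0
                 for j in range(2, s + 1)]          # s-1 adjacency entries
        rows.append(ident + adjac)
    return rows

# 4-cycle 0-1-2-3-0, walked once around, window size s = 4:
# the last row records that w_4 equals w_0 and is adjacent to w_3.
edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
X = walk_features([0, 1, 2, 3, 0], edges, s=4)
```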
For every graph $G$ we denote the distribution of random walks on $G$
starting from a node chosen uniformly at random by ${\mathcal W}(G)$ and
${\mathcal W}_{nb}(G)$ for the
non-backtracking walks. We let
${\mathcal X}(G)$ and ${\mathcal X}_{nb}(G)$ be the push-forward distributions on
$\{0,1\}^{(\ell+1)\times(2s-1)}$, that is, for every
$X\in \{0,1\}^{(\ell+1)\times(2s-1)}$ we let
\[
\Pr_{{\mathcal X}(G)}(X)=\Pr_{{\mathcal W}(G)}\big(\{W\mid X(W)=X\}\big).
\]
A \textsc{CRaWl}{} run on $G$ takes $m$ samples from ${\mathcal X}(G)$. So to
distinguish two graphs $G,G'$, \textsc{CRaWl}{} must detect that the
distributions ${\mathcal X}(G),{\mathcal X}(G')$ are distinct using $m$ samples.
As a warm-up, let us prove the following simple result.
\setcounter{theorem}{1}
\begin{theorem}\label{theo:a1}
Let $G$ be a cycle of length $n$ and $G'$ the disjoint union of two
cycles of length $n/2$. Then $G$ and
$G'$ cannot be distinguished by \textsc{CRaWl}{} with window size $s<n/2$
(for any choice of parameters $\ell$ and $m$).
\end{theorem}
\begin{proof}
With a window size smaller than the length of the shortest
cycle, the graph \textsc{CRaWl}{} sees in its window is always a path. Thus for every walk $W$ in either $G$ or $G'$
the feature matrix $X(W)$ only depends on the backtracking pattern
of $W$. This means that ${\mathcal X}(G)={\mathcal X}(G')$.
\end{proof}
It is worth noting that the graphs $G,G'$ of Theorem~\ref{theo:a1} can
be distinguished by $2$-WL (the 2-dimensional Weisfeiler-Leman
algorithm), but not by $1$-WL.
Proving that two graphs $G,G'$ have identical feature-matrix distributions
${\mathcal X}(G)={\mathcal X}(G')$ is the ultimate way of proving that they are not
distinguishable by \textsc{CRaWl}{}. Yet for more interesting graphs, we rarely
have identical feature-matrix distributions. However, if the
distributions are sufficiently close we will still not be able to
distinguish them. To quantify closeness, we use the \emph{total variation
distance} of the distributions. Recall that the total variation distance between two
probability distributions ${\mathcal D},{\mathcal D}'$ on the same finite sample space
$\Omega$ is
\[
\dist_{TV}({\mathcal D},{\mathcal D}')\coloneqq\max_{S\subseteq\Omega}|\Pr_{{\mathcal D}}(S)-\Pr_{{\mathcal D}'}(S)|.
\]
It is known that the total variation distance is half the
$\ell_1$-distance between the distributions, that is,
\begin{align*}
\dist_{TV}({\mathcal D},{\mathcal D}')&=\frac{1}{2}\|{\mathcal D}-{\mathcal D}'\|_1\\
&=\frac{1}{2}\sum_{\omega\in\Omega}|\Pr_{{\mathcal D}}(\{\omega\})-\Pr_{{\mathcal D}'}(\{\omega\})|.
\end{align*}
Let $\varepsilon>0$. We say that two graphs $G,G'$ are
\emph{$\varepsilon$-indistinguishable} by \textsc{CRaWl}{} with window size $s$,
walk length $\ell$, and sample size $m$ if
\begin{equation}\label{eq:a1}
\dist_{TV}({\mathcal X}(G),{\mathcal X}(G'))<\frac{\varepsilon}{m}.
\end{equation}
The rationale behind this definition is that if
$\dist_{TV}({\mathcal X}(G),{\mathcal X}(G'))<\frac{\varepsilon}{m}$ then for every
property of feature matrices that \textsc{CRaWl}{} may want to use to
distinguish the graphs, the expected numbers of samples with this
property that \textsc{CRaWl}{} sees in both graphs are close together
(assuming $\varepsilon$ is small).
Often, we want to make asymptotic statements, where we have two
families of graphs $(G_n)_{n\ge 1}$ and $(G_n')_{n\ge 1}$, typically of
order $|G_n|=|G_n'|=\Theta(n)$, and classes $S,L,M$ of functions, such
as the class $O(\log n)$ of logarithmic or the class $n^{O(1)}$ of
polynomial functions. We say that $(G_n)_{n\ge 1}$ and
$(G_n')_{n\ge 1}$ are \emph{indistinguishable} by \textsc{CRaWl}{} with window
size $S$, walk length $L$, and sample size $M$ if for all $\varepsilon>0$
and all $s\in S,\ell\in L,m\in M$ there is an $n$ such that $G_n,G_n'$
are $\varepsilon$-indistinguishable by \textsc{CRaWl}{} with window size $s(n)$,
walk length $\ell(n)$, and sample size $m(n)$.
We could make similar definitions for distinguishability, but we omit
them here and deal with distinguishability in an ad-hoc fashion (in
the following subsection).
\subsection{Proof of Theorem 1(1)}
Here is a precise quantitative version of the first part of Theorem 1.
\begin{theorem}\label{theo:1-1}
For all $k\ge 1$ there are families of graphs $(G_n)_{n\ge 1}$,
$(G_n')_{n\ge 1}$ of order $|G_n|=|G_n'|=n+O(k)$ that are distinguishable by \textsc{CRaWl}{} with
window size $s=O(k^2)$, walk length $\ell=O(k^2)$, and sample size
$m=O(n)$, but not by $k$-WL (and hence not by $k$-dimensional GNNs).
\end{theorem}
Fortunately, to prove this theorem we do not need to know any details
about the Weisfeiler-Leman algorithm (the interested reader is
referred to \citep{gro21b,kie20}). We can use the following
well-known inexpressibility result as a black box.
\begin{theorem}[\citet{caifurimm92}]\label{theo:cfi}
For all $k\ge 1$ there are graphs $H_k,H_k'$ such that
$|H_k|=|H'_k|=O(k)$, the graphs $H_k$ and $H_k'$ are $3$-regular, and $k$-WL cannot distinguish $H_k$ and $H_k'$.
\end{theorem}
It is a well-known fact that the \emph{cover time} of a connected
graph of order $n$ with $m$ edges, that is, the expected time it takes
a random walk starting from a random node to visit all nodes of
the graph, is bounded from above by $4nm$ \citep{alekarlip+79}. By Markov's
inequality, the cover time exceeds $8nm$ with probability at most $1/2$,
so a random walk of length $8nm$ visits all nodes with probability at
least $1/2$. Sampling several such walks, we can bring
the success probability arbitrarily close to $1$.
\begin{proof}[Proof of Theorem~\ref{theo:1-1}]
Let $k\ge 1$ and let $H_k,H_k'$ be
the graphs obtained from Theorem~\ref{theo:cfi}. Let
$n_k\coloneqq|H_k|$. For every $n\ge1$, we let $G_n$ be the
disjoint union of $H_k$ with a path of length $n$, and let $G_n'$
be defined in the same way from $H_k'$. Then $|G_n|=|G_n'|=n_k+n+1$.
Let $m_k=\frac{3}{2}n_k$ be the number of edges of the $3$-regular
graphs $H_k$ and $H_k'$, and let $s\coloneqq 8n_km_k=O(k^2)$. This
will be our window size, and in fact also our walk length:
$\ell\coloneqq s$. Let $\varepsilon>0$. We choose a sufficiently large
$m=m(n)\in O(n)$ to make sure that a sample of $m$ nodes from $V(G_n)$
or $V(G_n')$ contains sufficiently many nodes in the subset
$V(H_k)\subseteq V(G_n)$ resp.\ $V(H_k')\subseteq V(G_n')$. Then, if we
sample $m$ walks of length $\ell$ from ${\mathcal W}(G_n)$, with probability at
least $1-\varepsilon$, one of these walks covers $V(H_k)$. This means that $m$ random walks of length $\ell$
will detect the subgraph $H_k$, and as the window size $s$ is
equal to $\ell$, these subgraphs will appear in the feature
matrix. Since the subgraph $H_k$ does not appear as a subgraph of
$G_n'$, this means that with probability at least $1-\varepsilon$,
\textsc{CRaWl}{} can distinguish the two graphs.
\end{proof}
\begin{figure}
\centering
\begin{tikzpicture}[
vertex/.style={circle,draw,inner sep = 0pt, minimum size=6pt},
]
\begin{scope}
\node[vertex,fill=Blue] (v1) at (0,0) {};
\node[vertex,fill=Cyan] (v2) at (-1,1) {};
\node[vertex,fill=Cyan] (v3) at (0,1) {};
\node[vertex,fill=Cyan] (v4) at (1,1) {};
\node[vertex,fill=Cyan] (v5) at (-1,2) {};
\node[vertex,fill=Cyan] (v6) at (0,2) {};
\node[vertex,fill=Cyan] (v7) at (1,2) {};
\node[vertex,fill=Blue] (v8) at (0,3) {};
\draw (v1) edge (v2) edge (v3) edge (v4) (v2) edge (v5) (v3) edge
(v6) (v4) edge (v7) (v8) edge (v5) edge (v6) edge (v7);
\node at (-1,3) {$G_3$};
\end{scope}
\begin{scope}[xshift=4cm]
\node[vertex,fill=BrickRed] (v1) at (0,0) {};
\node[vertex,fill=Goldenrod] (v2) at (-1,1.5) {};
\node[vertex,fill=YellowOrange] (v3) at (0,1) {};
\node[vertex,fill=Red] (v4) at (1,0.75) {};
\node[vertex,fill=YellowOrange] (v5) at (0,2) {};
\node[vertex,fill=Lavender] (v6) at (1,1.5) {};
\node[vertex,fill=Red] (v7) at (1,2.25) {};
\node[vertex,fill=BrickRed] (v8) at (0,3) {};
\draw (v1) edge[bend left] (v2) edge (v3) edge (v4) (v3) edge (v5) (v4) edge
(v6) (v6) edge (v7) (v8) edge[bend right] (v2) edge (v5) edge (v7);
\node at (-1,3) {$G_3'$};
\end{scope}
\end{tikzpicture}
\caption{The graphs $G_3$ and $G_3'$ in the proof of
Theorem~\ref{theo:a2} with their stable coloring computed by $1$-WL}
\label{fig:3paths}
\end{figure}
\subsection{Proof of Theorem 1(2)}
To prove the second part of the theorem, it will be necessary to
briefly review the \emph{1-dimensional Weisfeiler-Leman algorithm
($1$-WL)}, which is also known as \emph{color refinement} and as
\emph{naive node classification}. The algorithm iteratively computes
a partition of the nodes of its input graph. It is convenient to
think of the classes of the partition as colors of the
nodes. Initially, all nodes have the same color. Then in each
iteration step, for all colors $c$ in the current coloring and all
nodes $v,w$ of color $c$, the nodes $v$ and $w$ get different colors
in the new coloring if there is some color $d$ such that $v$ and $w$
have different numbers of neighbors of color $d$. This refinement
process is repeated until the coloring is \emph{stable}, that is, any
two nodes $v,w$ of the same color $c$ have the same number of
neighbors of any color $d$. We say that $1$-WL \emph{distinguishes}
two graphs $G,G'$ if, after running the algorithm on the disjoint
union $G\uplus G'$ of the two graphs, in the stable coloring of
$G\uplus G'$ there is a color $c$ such that $G$ and $G'$ have a different
number of nodes of color $c$.
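The refinement procedure just described can be sketched in a few lines; colors are renamed canonically after each round, and the process stops once the number of color classes no longer grows.

```python
# 1-WL / color refinement on a graph given as an adjacency-list dict.

def color_refinement(adj):
    color = {v: 0 for v in adj}  # initially all nodes share one color
    while True:
        # new signature: own color plus multiset of neighbor colors
        sig = {v: (color[v], tuple(sorted(color[u] for u in adj[v])))
               for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: palette[sig[v]] for v in adj}
        # refinement only splits classes, so equal counts mean stability
        if len(set(new.values())) == len(set(color.values())):
            return new
        color = new

# path on four nodes: endpoints and inner nodes get separated
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
c = color_refinement(adj)
```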
For the results so far, it has not mattered if we allowed backtracking
or not. Here, it makes a big difference. For the non-backtracking
version, we obtain a stronger result with an easier proof. The
following theorem is a precise quantitative statement of Theorem 1(2).
\begin{theorem}\label{theo:a2}
There are families $(G_n)_{n\ge 1}$, $(G_n')_{n\ge 1}$ of graphs of
order $|G_n|=|G_n'|=3n-1$ with the following properties.
\begin{enumerate}
\item For all $n\ge 1$, $1$-WL distinguishes $G_n$ and $G_n'$.
\item $(G_n)_{n\ge 1}$, $(G_n')_{n\ge 1}$ are indistinguishable by
the non-backtracking version of \textsc{CRaWl}{} with window size $s(n)=o(n)$
(regardless of the walk length and sample size).
\item $(G_n)_{n\ge 1}$, $(G_n')_{n\ge 1}$ are indistinguishable by
\textsc{CRaWl}{} with walk length $\ell(n)=O(n)$
and sample size $m(n)=n^{O(1)}$ (regardless of the window size).
\end{enumerate}
\end{theorem}
\begin{proof}
The graphs $G_n$ and $G_n'$ both consist of three internally
disjoint paths with the same endnodes $x$ and $y$. In $G_n$, all three
paths have length $n$. In $G_n'$, the paths have lengths $n-1$, $n$, and
$n+1$ (see Figure~\ref{fig:3paths}).
\medskip
It is easy to see that $1$-WL
distinguishes the two graphs.
\medskip
To prove assertion (2),
let $s\coloneqq 2n-3$. Then the length of the shortest
cycle in $G_n$, $G_n'$ is $s+2$. Now consider a non-backtracking walk
$W=(w_1,\ldots,w_\ell)$ in either $G_n$ or $G_n'$ (of arbitrary length
$\ell$). Then for all $i$ and $j$ with $i-s\le j\le i$ we have
$w_i\neq w_j$, and unless $j=i-1$, there is no edge between $w_i$ and
$w_j$. Thus $X(W)=X(W')$ for all walks $W'$ of the same length $\ell$,
and since it does not matter which of the two graphs $G_n,G_n'$ the
walks are from. It follows that ${\mathcal X}_{nb}(G_n)={\mathcal X}_{nb}(G_n')$.
\medskip
Before we prove (3), we remark that the backtracking version of \textsc{CRaWl}{} can distinguish
$(G_n)_{n\ge 1}$ and $(G_n')_{n\ge 1}$ with a constant window size
$6$, walk length $n^{O(1)}$, and sample size $n^{O(1)}$. The reason
is that by going back and forth between a node and all its
neighbors within its window, \textsc{CRaWl}{} can distinguish the two degree-$3$
nodes $x,y$ from the remaining degree-$2$ nodes. Thus, the feature matrix reflects traversal times between degree-3 nodes, and the distribution of traversal times is different in $G_n$ and $G_n'$.
With sufficiently many samples, \textsc{CRaWl}{} can detect this.
So, let us turn to the proof that with walks of linear length this is not possible, that is, assertion (3).
The reason for this is simple:
random walks of length $O(n)$ are very unlikely to traverse a path
of length at least $n-1$ from $x$ to $y$. It is well known that the expected
traversal time is $\Theta(n^2)$ (this follows from the analysis of the
gambler's ruin problem). However, this does not suffice for us. We
need to bound the probability that a path of length $O(n)$ is a
traversal. Using a standard, Chernoff type tail bound, it is
straightforward to prove that for every constant
$c\ge 0$ there is a constant $d\ge 1$ such that the probability that a
random walk of length $cn$ in either $G_n$ or $G_n'$ visits both $x$
and $y$ is at most
$\exp(-n/d)$. As only walks visiting both $x$ and $y$ can
differentiate between the two graphs, this gives us an upper bound of
$\exp(-n/d)$ for the total variation distance between ${\mathcal X}(G_n)$ and
${\mathcal X}(G_n')$.
\end{proof}
\section{Conclusion}
\label{conclusion}
We have introduced a novel neural network architecture \textsc{CRaWl}{} for graph learning that is based on random walks and 1D CNNs.
Thus, \textsc{CRaWl}{} is fundamentally different from standard graph neural networks.
We demonstrated that this approach works very well across a variety of graph level tasks and is able to outperform state-of-the-art GNNs.
By construction, \textsc{CRaWl}{} can detect arbitrary substructures up to the size of its local window.
In particular, on the regular graphs of CSL where pure MPGNNs fail because of the lack of expressiveness, \textsc{CRaWl}{} is able to extract useful features and solve this task.
Future work includes extending the experimental framework to node-level tasks and to motif counting.
In both cases, one needs to scale \textsc{CRaWl}{} to work on individual large graphs instead of many medium-sized ones.
\textsc{CRaWl}{} can be viewed as an attempt
to process random walks and the structures they induce with end-to-end neural networks.
The strong empirical performance demonstrates the potential of this general approach.
However, many variations remain to be explored, including different walk strategies, variations in the walk features, and alternative pooling functions for pooling walklet embeddings into nodes or edges. In view of the incomparability of the expressiveness of GNNs and \textsc{CRaWl}{}, hybrid approaches that interleave \textsc{CRaWl}{} layers and GNN layers seem attractive as well.
Beyond plain 1D-CNNs, other deep learning architectures for sequential data, such as transformer networks, could be used to process random walks.
\section*{Acknowledgements}
This work is supported by the German Research Foundation (DFG) under grants GR 1492/16-1 and GRK 2236 UnRAVeL.
\section{Experiments}
\label{experiments}
Recently, two initiatives were launched by \citet{dwivedi2020benchmarkgnns} (Benchmarking GNNs) and \citet{hu2020ogb} (Open Graph Benchmark, OGB) to improve the experimental standards used in graph learning research.
Both projects aim to solve common problems of previous experimental settings.
Those problems included varying training and evaluation protocols as well as the use of small datasets without standardized splits into training, validation, and test sets.
This made the results hard to compare.
Both projects introduced novel benchmark datasets with fixed splits and specified training and evaluation procedures.
Here, we will use datasets from both projects to evaluate the empirical capabilities of \textsc{CRaWl}{}.
In Appendix A we provide additional results for some of the formerly more common datasets.
\subsection{Datasets}
From the OGB project, we use the molecular property prediction dataset MOLPCBA with more than 400k molecules.
Each of its 128 binary targets states whether or not a molecule is active towards a particular bioassay (a method that quantifies the effect of a substance on a particular kind of living cells or tissues).
The dataset is adapted from the MoleculeNet \citep{wu2018moleculenet} and represents molecules as graphs of atoms.
It contains multidimensional node and edge features which encode information such as atomic number and chirality.
Additionally, it provides a train/val/test split that separates structurally different types of molecules for a more realistic experimental setting.
On MOLPCBA, the performance is measured in terms of the average precision (AP).
From the other initiative, started by \citet{dwivedi2020benchmarkgnns}, we use 4 datasets.
The first dataset ZINC is a molecular regression dataset.
It is a subset of 12K molecules from the larger ZINC database.
The aim is to predict the \emph{constrained solubility}, an important chemical property of molecules.
The node label is the atomic number and the edge labels specify the bond type.
The datasets CIFAR10 and MNIST are graph datasets derived from the corresponding image classification tasks and contain 60K and 70K graphs, respectively.
The original images are modeled as networks of super-pixels.
Both datasets are 10-class classification problems.
The last dataset CSL is a synthetic dataset containing 150 \emph{Cyclic Skip Link} graphs \citep{murphy2019relational}.
Those are 4-regular graphs obtained by adding chords of a fixed skip length to a cycle.
The formal definition and an example are provided in the appendix.
The aim is to classify the graphs by their isomorphism class.
Since all graphs are 4-regular and no node or edge features are provided, this task is unsolvable for most message passing architectures such as standard GNNs.
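For concreteness, a CSL graph on $n$ nodes with skip length $k$ is a cycle plus chords. The following helper is an illustrative sketch of the construction (the function names are ours, not from the CSL reference implementation) together with a 4-regularity check:

```python
def csl_graph(n, k):
    """Cyclic Skip Link graph: an n-cycle plus chords of skip length k.

    Node i is joined to i +/- 1 (cycle edges) and i +/- k (skip links),
    all modulo n, which yields a 4-regular graph for suitable 1 < k < n/2.
    """
    edges = set()
    for i in range(n):
        edges.add(frozenset((i, (i + 1) % n)))  # cycle edge
        edges.add(frozenset((i, (i + k) % n)))  # skip link (chord)
    return edges

def degrees(n, edges):
    """Degree sequence of the graph given as a set of frozenset edges."""
    deg = [0] * n
    for e in edges:
        u, v = tuple(e)
        deg[u] += 1
        deg[v] += 1
    return deg
```

Since the graphs for different valid $k$ are all 4-regular and unlabeled, degree information alone cannot separate the isomorphism classes, which is exactly why message passing fails here.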
\input{Tables/ZINC_MNIST_CIFAR_Results}
\subsection{Experimental Setting}
We adopt the training procedure specified by \citet{dwivedi2020benchmarkgnns}.
In particular, the learning rate is initialized as $10^{-3}$ and decays with a factor of $0.5$ if the performance on the validation set stagnates for $10$ epochs.
The training stops once the learning rate falls below $10^{-6}$.
\citet{dwivedi2020benchmarkgnns} also specify that networks need to stay within parameter budgets of either 100K or 500K parameters.
This ensures a fairer comparison between different methods.
For \textsc{ZINC}, we use the larger budget of 500K parameters.
For MNIST, CIFAR10 and CSL we build \textsc{CRaWl}{} models with the smaller budget of 100K since more baseline results are available in the literature.
The OGB Project does not specify a standardized training procedure or parameter budgets.
For MOLPCBA, we train for 60 epochs and decay the learning rate once with a factor of $0.1$ after epoch 50.
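The decay-on-plateau rule from the benchmark protocol can be made explicit; the following is an illustrative pure-Python reimplementation of the stopping logic, not the authors' training code (in PyTorch the same behavior is provided by \texttt{torch.optim.lr\_scheduler.ReduceLROnPlateau}):

```python
def run_schedule(val_losses, lr=1e-3, factor=0.5, patience=10, min_lr=1e-6):
    """Halve the learning rate whenever the validation loss has not improved
    for `patience` epochs; return the epoch at which lr falls below `min_lr`
    (the point where training stops) and the final lr."""
    best = float("inf")
    bad_epochs = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, bad_epochs = loss, 0
        else:
            bad_epochs += 1
        if bad_epochs >= patience:
            lr *= factor          # decay on plateau
            bad_epochs = 0
        if lr < min_lr:
            return epoch, lr      # training stops here
    return len(val_losses), lr
```

With the default values above, a model whose validation loss stops improving entirely decays ten times and halts after roughly 100 further epochs.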
During training, we always set the walk length to $\ell=50$.
For evaluation, we use walks of length $\ell=150$, except for MOLPCBA where we use $\ell=100$ for efficiency.
All hyperparameters and the exact number of trainable parameters are listed in the appendix.
There, we also specify the sets of hyperparameters we searched for each dataset.
For each dataset, we train 5 models with different random seeds, except for MOLPCBA where we trained 10 models to meet the submission requirements of the OGB project.
We report the mean performance and standard deviation across those models.
During inference, the output of each model depends on the sampled random walks.
Thus, we evaluate each model on 10 different seeds used for the generation of random walks and take the average of those runs as the model's performance.
In the appendix we provide extended results that additionally specify the internal model deviation, that is, the impact of the random walks on the performance.
Since this internal model deviation is substantially lower than the differences between the models, it is comparatively insignificant when comparing \textsc{CRaWl}{} to other models.
\subsection{Baselines}
We compare the results obtained with \textsc{CRaWl}{} to a wide range of graph learning methods.
We report values that are currently listed on the leaderboard for the benchmark datasets as well as additional results from the literature that are not officially listed yet.
Our main baselines are numerous message passing GNN architectures that have been proposed in recent years (see Section \ref{relwork}).
Additional methods not yet mentioned in Section \ref{relwork} are MPNN by \cite{corso2020principal} as well as FLAG \citep{kong2020flag} which was proposed to improve the training of GNNs with adversarial data augmentation.
\subsection{Results}
Table \ref{zinc_results} provides our results on ZINC, MNIST, CIFAR10, and MOLPCBA.
On the ZINC dataset, \textsc{CRaWl}{} achieves an MAE of 0.085.
This is approximately a 40\% improvement over the current first place (PNA) of the official leaderboard.
\textsc{CRaWl}{}'s performance on the MNIST dataset is on par with PNA (within one standard deviation), which is also the state of the art on this dataset.
On CIFAR10, \textsc{CRaWl}{} achieves the fourth highest accuracy among the eleven compared approaches.
For MOLPCBA, we report the baseline results from the leaderboard of the OGB project.
On MOLPCBA, \textsc{CRaWl}{} yields state-of-the-art results and beats all other architectures.
\input{Tables/CSL_Results}
The results on CSL are reported in Table \ref{csl_results}.
We consider two variants of CSL, the pure task and an easier variant in which node features based on Laplacian eigenvectors are added as suggested by \citet{dwivedi2020benchmarkgnns}.
Without additional node features, \textsc{CRaWl}{} achieves an accuracy of 100\% which indicates that the task is comparatively easy for it.
None of the 5 trained \textsc{CRaWl}{} models misclassified a single graph in the test folds.
3WLGNN is theoretically capable of solving the task without additional node features but unlike \textsc{CRaWl}{} does not achieve 100\% accuracy.
Without the Laplacian features that essentially encode the solution, MPGNNs cannot distinguish the 4-regular CSL graphs and achieve at most 10\% accuracy.
With Laplacian features, all but 3WLGNN achieve very good performance.
Overall, \textsc{CRaWl}{} performs very well on a variety of datasets across several domains.
\subsection{Ablation Study}
\label{sec:ablation study}
In an ablation study on the ZINC, MOLPCBA, and CSL datasets, we evaluated the importance of the identity and adjacency encodings in the walk features and the effects of uniform walks and non-backtracking walks.
The structural encodings improve \textsc{CRaWl}{}'s performance on all three datasets.
On ZINC and CSL, the structural encodings give a significant benefit on the performance, while on MOLPCBA the improvement is only marginal.
Furthermore, non-backtracking walks consistently outperform uniform walks on all three datasets.
The detailed results are provided in Appendix C.
\section{Expressiveness}
\label{sec:expressiveness}
In this section, we report on theoretical results
comparing the expressiveness of \textsc{CRaWl}{} with that of other
methods. The additional strength of \textsc{CRaWl}{} is mainly derived from the fact
that it detects small subgraphs (of size determined by the window size
hyperparameter $s$) and can sample such subgraphs from a non-uniform, but
well-defined, distribution determined by the random walks. In this
sense, it is similar to network analysis techniques based on motif
detection \citep{alo07} and graph kernels based on counting subgraphs, such as the graphlet kernel \citep{shevispet+09}.
The following results are concerned with the basic expressiveness question which graphs
can be distinguished by the various methods, assuming that optimal
parameters for the models are available. They do not discuss how such
parameters can be learned. This limits the scope of these results,
but they still give useful intuition about the different approaches.
It is known that the expressiveness of GNNs corresponds exactly to
that of the 1-dimensional Weisfeiler-Leman algorithm (1-WL)
\citep{MorrisAAAI19,Keyulu18}, in the sense that two graphs are
distinguished by 1-WL if and only if they can be distinguished by a
GNN.
It is also known that higher-dimensional versions of WL characterize the expressiveness of higher-order GNNs \citep{MorrisAAAI19}.
\newtheorem{theorem}{Theorem}
\begin{theorem}\label{theo:1}
(1) For every $k\ge 1$ there are graphs that are distinguishable
by \textsc{CRaWl}{}, but not
by $k$-WL (and hence not by $k$-dimensional GNNs).
(2) There are graphs that are distinguishable by 1-WL (and hence
by GNNs), but not by \textsc{CRaWl}{}.
\end{theorem}
We state a precise quantitative version of the theorem and give a
proof in Appendix~D. Let us just note that for
{assertion (1)} we need a window size $s$ and walk length $\ell$
quadratic in $k$. However, the execution cost of \textsc{CRaWl}{} remains linear in the graph size $n$, compared to the $\Omega(n^k)$ execution cost for even a single layer of a $k$-dimensional GNN.
For assertion (2) of the theorem, we can allow \textsc{CRaWl}{} to use a window
size and path length linear in the size of the graphs.
It can also be shown that
\textsc{CRaWl}{} with a window size polynomial in the size of the graphlets is strictly more expressive than graphlet kernels. We omit the precise result, which can be proved similarly to Theorem~\ref{theo:1}\,(1), due to space
limitations.
Let us finally remark that the expressiveness of GNNs can be
considerably strengthened by adding a random node initialization
\citep{abbceygroluk20,SatoRandom2020}. The same can be done for \textsc{CRaWl}{},
but so far the need for such a strengthening (at the cost of a higher
variance) did not arise.
\section{Introduction}
\label{introduction}
Graph data is ubiquitous across multiple domains, reaching from cheminformatics and social network analysis to \mbox{knowledge} graphs.
Being able to effectively learn on such graph data is thus extremely important.
We propose a novel neural network architecture called \textsc{CRaWl}{} (\textbf{C}NNs for \textbf{Ra}ndom \textbf{W}a\textbf{l}ks) that is based on random walks and standard 1D CNNs.
Essentially, \textsc{CRaWl}{} samples a set of random walks and extracts features that fully describe the subgraphs visible within a sliding window over these walks.
The walks with the subgraph features are then processed with standard 1D convolutions.
We experimentally verify that this approach consistently achieves state-of-the-art performance.
For example, \textsc{CRaWl}{} outperforms all other approaches on the standard graph learning benchmarks MOLPCBA \citep[graph classification;][]{hu2020ogb} and ZINC \citep[graph regression;][]{dwivedi2020benchmarkgnns}.
The \textsc{CRaWl}{} architecture was originally motivated from the empirical observation that in many application scenarios random walk based methods perform surprisingly well in comparison with graph neural networks (GNNs).
A notable example is node2vec \citep{DBLP:conf/kdd/GroverL16} in combination with various classifiers.
A second observation is that standard GNNs are not very good at detecting small subgraphs, for example, cycles of length 6 \citep{MorrisAAAI19, Keyulu18}.
The distribution of such subgraphs in a graph carries relevant information about the structure of a graph, as witnessed by the extensive research on motif detection and counting \citep[e.g.][]{alo07}.
We believe that the key to the strength of \textsc{CRaWl}{} is a favorable combination of engineering and expressiveness aspects.
Even large numbers of random walks can be sampled very efficiently.
Once the random walks are available, we can rely on existing highly optimized code for 1D CNNs, which allows us to fully exploit the strengths of modern hardware.
Sampling small subgraphs in a sliding window on random walks has the advantage that even in sparse graphs it usually yields meaningful subgraph patterns.
In terms of expressiveness, \textsc{CRaWl}{} detects both the global connectivity structure in a graph by sampling longer random walks as well as the full local structure within its window size.
The gain in expressiveness compared to GNNs is mainly due to the detailed view on the local structure in the sliding window, which standard message passing GNNs \citep[e.g.][]{GilmerSRVD17} do not have.
We show that the expressiveness of \textsc{CRaWl}{} is incomparable to that of GNNs (Theorem~\ref{theo:1}).
In particular, \textsc{CRaWl}{} detects features that are not even accessible by higher-order GNNs.
\textsc{CRaWl}{} empirically outperforms advanced message passing GNN architectures on major benchmark datasets.
On the molecular regression dataset ZINC \citep{dwivedi2020benchmarkgnns}, \textsc{CRaWl}{} improves the best results currently listed on the leaderboard by roughly 40\% (and 20\% compared to the best published approach).
\textsc{CRaWl}{} also places first on the leaderboard for MOLPCBA, a large molecular property prediction dataset from the OGB Project \citep{hu2020ogb}.
A basic requirement for graph learning methods is their isomorphism invariance, which guarantees that the result of a computation only depends on the structure and not on the specific representation of the input graph.
A \textsc{CRaWl}{} model represents a random variable defined on graphs.
This random variable is invariant \citep[in the sense of][]{MaronInvEqui19}, which means that it does not depend on a particular node numbering, but only on the isomorphism type of the input graph.
Note that this invariance is not contradicted by the fact that every single random walk we sample visits the vertices in a specific order and the 1D CNNs process the vertices in that order.
\section{Method}
\label{method}
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\textwidth]{Pictures/picture_featureMatrix2.pdf}
\caption{
Example of the information flow in a \textsc{CRaWl}{} layer for a graph with 8 nodes.
We sample a walk $W$ and compute the feature matrix~$X$ based on node embeddings $f,$ edge embeddings $g,$ and a window size of $s\!=\!4$. To this matrix we apply a 1D CNN with receptive field $r\!=\!5$ and pool the output into the nodes to update their embeddings.
%
%
%
}
\label{fig:featurematrix}
\end{figure*}
\textsc{CRaWl}{} processes random walks with convolutional neural networks.
We initially sample a large enough set of (relatively long) random walks.
Each \textsc{CRaWl}{} layer uses these walks to update a latent node embedding as follows.
For each of the walks, the layer constructs features that contain the sequences of node and edge labels that occurred.
Additionally, for every position in each walk, the features encode to which of its $s$ predecessors the current node is identical or adjacent.
The window size $s$ is a hyperparameter of the model.
These walk features are then processed by a 1D CNN.
The output of the CNN is an embedding for every position in each random walk.
These embeddings are pooled into the nodes, that is, for each node in the graph, we average over the embeddings of all the positions at which it occurs in the random walks.
Finally, the \textsc{CRaWl}{} layer uses a simple MLP to produce the new node embedding from this information.
In the full network, each layer uses the embedding produced by the preceding layer as node labels.
After the last layer, the node embeddings can be pooled to perform graph level tasks.
Effectively, through the CNN, we extract structural information from many small subgraphs of the size of the CNN's receptive field.
Those subgraphs are always connected since they are induced by the nodes of a random walk.
The process of sampling random walks in a graph is not deterministic and therefore the final output of \textsc{CRaWl}{} is a random variable.
However, the output of a trained \textsc{CRaWl}{} model has low variance such that the inherent randomization does not limit our method's usefulness in real world applications.
\subsection{Random Walks}
\label{RW}
A walk of length $\ell\in\mathbb{N}$ in a graph $G = (V,E)$ is a sequence of nodes $(v_0,\dots,v_\ell)\in V^{\ell+1}$ with $v_{i-1}v_{i} \in E$ for all $i \in [\ell]$.
A random walk in a graph is obtained by starting at some initial node $v_0 \in V$ and then iteratively sampling the next node $v_{i+1}$ randomly from the neighbors $N_G(v_i)$ of the current node $v_i$.
We consider two different random walk strategies: \emph{uniform} and \emph{non-backtracking}.
The uniform walks are obtained by sampling the next node uniformly from all neighbors:
\begin{equation*}
v_{i+1} \sim \mathcal{U}\big(N_G(v_i)\big).
\end{equation*}
On sparse graphs with nodes of small degree (such as molecules) this walk strategy has a tendency to backtrack often.
This slows the traversal of the graph and interferes with the discovery of long-range patterns.
The non-backtracking walk strategy addresses this issue by excluding the previous node from the sampling (unless the degree is one):
\begin{align*}
v_{i+1} \sim \mathcal{D}_{\text{NB}}(v_i) \quad\text{with}\quad
\mathcal{D}_{\text{NB}}(v_i) &\!=\!
\begin{cases}
\mathcal{U}\big(N_G(v_i)\big), \!\!\!& \text{if $i\!=\!0 \lor \text{deg}(v_i) \!=\! 1$}\\
\mathcal{U}\big(N_G(v_i)\! \setminus \!\{v_{i-1}\}\big)\!, \!\!& \text{else}.
\end{cases}
\end{align*}
The choice of the walk strategy is a hyperparameter of \textsc{CRaWl}{}.
In our experiments the non-backtracking strategy usually performs better as shown in Section \ref{sec:ablation study}.
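Both strategies can be sketched in a few lines; the following is an illustrative sampler over a plain adjacency-list graph (our own helper, not the batched GPU implementation used in the experiments):

```python
import random

def sample_walk(adj, start, length, non_backtracking=True):
    """Sample a walk of `length` steps starting at `start`.

    adj : dict mapping each node to the list of its neighbors.
    With non_backtracking=True, the previous node is excluded from the
    candidates unless the current node has degree one, matching D_NB;
    otherwise the next node is drawn uniformly from all neighbors.
    """
    walk = [start]
    for _ in range(length):
        options = adj[walk[-1]]
        if non_backtracking and len(walk) > 1 and len(options) > 1:
            options = [u for u in options if u != walk[-2]]
        walk.append(random.choice(options))
    return walk
```

On a cycle, for instance, a non-backtracking walk commits to one direction and traverses the graph in linear time, whereas a uniform walk needs quadratically many steps in expectation.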
\textsc{CRaWl}{} initially samples $m$ random walks $\Omega=\{W_1,\dots,W_m\}$ of length $\ell$ from the input graph $G$.
The values for $m$ and $\ell$ are not fixed hyperparameters of the model but instead can be chosen at runtime.
By default, we start one walk at every node, i.e., $m=|V|$.
We observed that reducing the number of walks during training can counteract overfitting and also reduces the memory footprint, which is important for large graphs.
If we choose to use fewer random walks, we sample $m=p^\ast\cdot |V|$ starting nodes uniformly at random from the nodes of the graph, where \(p^\ast<1\) is a chosen sampling ratio.
We typically choose $\ell\geq 50$, practically ensuring that each node appears multiple times in the walks.
For inference, we choose a larger $\ell$ of up to $150$ which improves the predictions.
While, in theory, every layer of \textsc{CRaWl}{} may use different random walks, we sample the random walks once in the beginning of a run and then make use of the same walks in every layer.
This allows us to increase the number of random walks that each layer may work with as the total number of walks is bounded by the GPU memory.
This empirically improves stability in training and also the overall performance.
We call contiguous segments $W[i\,\colon\!j]\coloneqq(w_i,\ldots,w_j)$ of a walk $W=(w_0,\ldots,w_\ell)$ \emph{walklets}. The \emph{center} of a walklet $(w_i,\ldots,w_j)$ of even length $j-i$ is the node $w_{(i+j)/2}$. For each walklet $w=(w_i,\ldots,w_j)$, by $G[w]$ we denote the subgraph induced by $G$ on the set $\{w_i,\ldots,w_j\}$. Note that $G[w]$ is connected as it contains all edges $w_{k}w_{k+1}$ for $i\le k<j$ and may contain additional edges. Also note that the $w_k$ are not necessarily distinct.
\subsection{Walk Features}
\label{WF}
Based on the walks and a local window size $s$, we define feature vectors which can then be processed by 1D CNNs.
Those feature vectors consist of four parts: one for node features, one for edge features along the walk, and the last two for local structural information.
Figure \ref{fig:featurematrix} depicts an example of a walk feature matrix and its use in a \textsc{CRaWl}{} layer.
Given a walk $W\in V^\ell$ of length $\ell-1$ in a graph $G=(V,E)$, a $d$-dimensional node embedding $f:V \rightarrow \mathbb{R}^{d}$, a $d'$-dimensional edge embedding $g:E \rightarrow \mathbb{R}^{d'}$, and a local window size $s>0$ we define the \emph{walk feature matrix} $X(W, f, g, s) \in \mathbb{R}^{\ell \times d_X}$ with feature dimension $d_X = d + d' + s + (s-1)$ as
\begin{equation*}
X(W, f, g, s) = (f_W \, g_W \, I^s_W \, A^s_W).
\end{equation*}
For ease of notation, the first dimensions of the matrices $f_W,g_W,I_W^s,A_W^s$ are indexed from \(0\) to \(\ell-1\).
Here, the \emph{node feature sequence} $f_W \in \mathbb{R}^{\ell \times d}$ and the \emph{edge feature sequence} $g_W \in \mathbb{R}^{\ell \times d'}$ are defined as the concatenation of node and edge features, respectively.
Formally,
\begin{align*}
(f_W)_{i,\_} = f(v_{i}) \qquad \text{and}\qquad
(g_W)_{i,\_} = \begin{cases}
\mathbf{0},& \text{if $i=0$}\\
g(v_{i-1}v_{i}), & \text{else.}
\end{cases}
\end{align*}
We define the \emph{local identity relation} $I^s_W \in \{0,1\}^{\ell \times s}$ and the \emph{local adjacency relation} \mbox{$A^s_W \in \{0,1\}^{\ell \times (s-1)}$} as
\begin{align*}
\left(I^s_{W}\right)_{i,j} &=
\begin{cases}
1, & \text{if $i\!-\!j\geq0 \,\land\, v_i = v_{i-j}$}\\
0, & \text{else}
\end{cases}\quad\text{and}\\
\left(A^s_{W}\right)_{i,j} &=
\begin{cases}
1, & \text{if $i\!-\!j\geq1 \,\land\, v_iv_{i-j-1} \in E$}\\
0, & \text{else}.
\end{cases}
\end{align*}
Intuitively, $I_W^s$ and $A_W^s$ are binary matrices that contain one row for every node $v_i$ in the walk $W.$
The bitstring for $v_i$ in $I_W^s$ encodes which of the $s$ predecessors of $v_i$ in $W$ are identical to $v_i$, that is, where the random walk looped or backtracked.
Similarly, $A_W^s$ stores to which of its predecessors $v_i$ has an edge in $G$.
The direct predecessor $v_{i-1}$ must share an edge with $v_i$ and is thus omitted in $A_W^s$.
Note that we do not leverage the labels of edges that do not lie on the walk; only the existence of such edges within the local window is encoded in $A_W^s$.
For any walklet $w=W[i:i\!+\!s]$, the restriction of the walk feature matrix to rows $i,\ldots,i+s$ contains a full description of the induced subgraph $G[w]$.
Hence, when we apply a CNN with receptive field of size at most $s+1$ to the walk feature matrix, the CNN filter has full access to the subgraph induced by the walklet within its scope.
Let $\Omega = \{W_1,\dots,W_m\}$ be the sampled set of walks.
By stacking the individual feature matrices for each walk, we get the \emph{walk feature tensor} $X(\Omega, f, g, s) \in \mathbb{R}^{m\times\ell\times d_X}$ defined as
\begin{equation*}
X(\Omega, f, g, s)_{i} = X(W_i, f, g, s).
\end{equation*}
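The construction of $X(W,f,g,s)$ follows directly from the definitions above. The helper below is an illustrative row-by-row sketch in plain Python (our own naming; in practice the feature tensor is assembled on the GPU):

```python
def walk_features(walk, f, g, edges, s):
    """Build the walk feature matrix X = (f_W  g_W  I^s_W  A^s_W).

    walk  : list of nodes (v_0, ..., v_{l-1})
    f, g  : dicts mapping nodes / frozenset edges to feature lists
    edges : set of frozenset({u, v}) edges of the graph
    s     : local window size
    Each row has dimension d + d' + s + (s - 1).
    """
    d_edge = len(next(iter(g.values())))
    X = []
    for i, v in enumerate(walk):
        node_feat = list(f[v])
        # zero edge features at the first position, g(v_{i-1} v_i) afterwards
        edge_feat = [0.0] * d_edge if i == 0 else list(g[frozenset((walk[i - 1], v))])
        # identity bits: is v_i equal to its j-th predecessor, j = 1..s?
        ident = [1 if i - j >= 0 and walk[i - j] == v else 0
                 for j in range(1, s + 1)]
        # adjacency bits: edge to predecessor v_{i-j-1}, j = 1..s-1
        # (the direct predecessor is omitted, it is always adjacent)
        adjac = [1 if i - j >= 1 and frozenset((v, walk[i - j - 1])) in edges else 0
                 for j in range(1, s)]
        X.append(node_feat + edge_feat + ident + adjac)
    return X
```

On the triangle $\{0,1\},\{1,2\},\{0,2\}$ with walk $(0,1,2,0)$ and $s=3$, the last row records that $v_3$ revisits $v_0$ three steps back.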
\subsection{\textsc{CRaWl}{} Layer}
\label{CNN}
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\textwidth]{Pictures/picture_architecture2.pdf}
\caption{Top: Update procedure of latent node embeddings $h^t$ in a \textsc{CRaWl}{} layer. $\Omega$ is a set of random walks. Bottom: Architecture of a 3-layer \textsc{CRaWl}{} network as used in the experiments.}
\label{fig:architecture}
\end{figure*}
A \textsc{CRaWl}{} network iteratively updates latent embeddings for each node.
Let $G=(V,E)$ be a graph with initial node and edge feature maps $F^V:V \rightarrow \mathbb{R}^{d_V}$ and $F^E:E \rightarrow \mathbb{R}^{d_E}$.
The function $h^{t}:V \rightarrow \mathbb{R}^{d_t}$ stores the output of the $t$-th layer of \textsc{CRaWl}{} and the initial node features are stored in $h^{0} = F^V$.
In principle, the size of the output node embedding $d_t$ is an independent hyperparameter for each layer.
In practice, we use the same size $d$ for the output node embeddings of all layers for simplicity.
The $t$-th layer of a \textsc{CRaWl}{} network constructs the walk feature tensor $X^t = X(\Omega,h^{t-1},F^E,s)$ using $h^{t-1}$ as its input node embedding and the graph's edge features $F^E$.
This walk feature tensor is then processed by a convolutional network $\text{CNN}^t$ based on 1D CNNs.
The first dimension of $X^t$ of size $m$ is viewed as the batch dimension.
The convolutional filters move along the second dimension (and therefore along each walk) while the third dimension contains the feature channels.
Each $\text{CNN}^t$ consists of 3 convolutional layers combined with ReLU activations and batch normalization.
A detailed description is provided in Appendix B.
The stack of operations has a receptive field of $s \!+\! 1$ and effectively applies an MLP to each subsection of this length in the walk feature matrices.
In each $\text{CNN}^t$, we use \emph{Depthwise Separable Convolutions} \citep{Chollet_2017_CVPR} for efficiency.
Each such CNN uses $\mathcal{O}(d^2+sd)$ trainable parameters.
Both the time and the memory complexity of applying $\text{CNN}^t$ to $X^t$ are therefore in $\mathcal{O}\!\left(m \cdot \ell \cdot(d^2+sd)\right)$.
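The $\mathcal{O}(d^2+sd)$ bound is easy to verify: ignoring biases and normalization parameters, a depthwise 1D convolution with kernel width on the order of $s$ contributes $\mathcal{O}(sd)$ weights, and the subsequent pointwise ($1\times1$) convolution contributes $\mathcal{O}(d^2)$. An illustrative count (our own sketch, not the exact layer configuration):

```python
def depthwise_separable_params(d_in, d_out, kernel):
    """Weight count of a depthwise separable 1D convolution (biases ignored):
    one length-`kernel` filter per input channel (depthwise pass), followed
    by a 1x1 convolution mixing d_in channels into d_out (pointwise pass)."""
    return kernel * d_in + d_in * d_out

def standard_conv_params(d_in, d_out, kernel):
    """Weight count of a standard 1D convolution for comparison."""
    return kernel * d_in * d_out
```

For $d=64$ channels and kernel width $9$, the separable variant needs $9\cdot 64 + 64^2 = 4672$ weights instead of $9\cdot 64^2 = 36864$, which is where the efficiency gain comes from.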
The output of CNN$^t$ is a tensor $C^t \in \mathbb{R}^{m \times (\ell-s) \times d}$.
Note that the second dimension is $\ell\!-\!s$ instead of $\ell$ as no padding is used in the convolutions.
Through its receptive field, the CNN operates on walklets of size $s\!+\!1$ and produces embeddings for those.
We pool those embeddings into the nodes of the graph by collecting for each node $v\in V$ all embeddings of walklets centered at~$v$.
Let $\Omega = \{ W_1,\dots,W_m\}$ be a set of walks with $W_i=(v_{i,1},\dots,v_{i,\ell})$.
Then $C^t_{i,j-s/2}\in \mathbb{R}^d$ is the embedding computed by CNN$^t$ for the walklet $w=W_i[j\!-\!\frac{s}{2}:j\!+\!\frac{s}{2}]$ centered at $v_{i,j}$.
The pooling operation is then given as
\begin{align*}
p^t(v) &= \operatorname*{mean}_{(i,j) \in \text{center}(\Omega, s, v)} C^t_{i,j-s/2} \qquad\text{with}\\
\text{center}(\Omega, s, v) &= \left\{(i,j) ~\middle|~ v_{i,j}\! =\! v,\ i\in [m],\ \frac{s}{2}<j<\ell-\frac{s}{2}\right\}.
\end{align*}
Here, $\text{center}(\Omega, s, v)$ encodes the positions of walklets of length $s\!+\!1$ in which $v$ occurs as the center.
An illustration of how the output of the CNN is pooled into the nodes of the graph can be found in Figure \ref{fig:featurematrix}.
The output of the pooling step is a vector $p^t(v) \in \mathbb{R}^d$ for each $v$.
This vector is then processed by a trainable MLP $U^{t}$ with a single hidden layer of dimension $2d$ to compute the next intermediate node embedding $h^{t}(v)$.
Formally, the update procedure of a \textsc{CRaWl}{} layer is defined by
\begin{align*}
h^{t}(v) &= U^{t}\big(p^t(v)\big).
\end{align*}
The upper part of Figure \ref{fig:architecture} gives an overview over the elements of a \textsc{CRaWl}{} layer.
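The pooling step can be sketched as follows. This illustrative helper uses 0-based indexing, so the embedding at window position $k$ of a walk belongs to the walklet of $s+1$ nodes starting at position $k$, with center $w_{k+s/2}$ (plain Python, not the batched GPU implementation):

```python
def pool_walklet_embeddings(walks, C, s, num_nodes):
    """Mean-pool walklet embeddings into nodes.

    walks : list of walks, each a list of node ids
    C     : per-walk lists of walklet embeddings; C[i][k] (a list of floats)
            embeds the walklet walks[i][k : k + s + 1]
    Node v receives the average of all embeddings of walklets centered at v;
    nodes never occurring as a center keep the zero vector.
    """
    d = len(C[0][0])
    sums = [[0.0] * d for _ in range(num_nodes)]
    counts = [0] * num_nodes
    for walk, emb in zip(walks, C):
        for k, e in enumerate(emb):
            center = walk[k + s // 2]          # center of walklet walk[k:k+s+1]
            counts[center] += 1
            sums[center] = [a + b for a, b in zip(sums[center], e)]
    return [[x / c for x in srow] if c else srow
            for srow, c in zip(sums, counts)]
```

For example, with a single walk $(0,1,2,1,0)$ and $s=2$, node $1$ is the center of two walklets and receives the mean of their two embeddings.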
The runtime of each \textsc{CRaWl}{} layer is linear in the number of walk steps $m \cdot \ell$ for the CNN and nodes $|V|$ for the final MLP.
The initial generation of the random walks is in $\mathcal O\big(m\cdot \ell\big)$.
Note that \(m\) and \(\ell\) are not fixed hyperparameters.
They can be chosen freely at runtime and have no effect on the number of trainable parameters.
\subsection{Architecture}
\label{NET}
The architecture we use in the experiments, as illustrated in Figure \ref{fig:architecture} (bottom), works as follows.
The first step for running \textsc{CRaWl}{} is to compute a set of random walks $\Omega$ as described in Section \ref{RW}.
We then apply multiple \textsc{CRaWl}{} layers with residual connections.
In each \textsc{CRaWl}{} layer, we typically choose $s=8$.
After the final \textsc{CRaWl}{} layer, we apply batch normalization and a ReLU activation to the latent node embeddings before we perform a global pooling step.
As pooling we use either sum-pooling or mean-pooling.
Finally, a simple feedforward neural network is used to produce a graph-level output which can then be used in classification and regression tasks.
In our experiments, we use either an MLP with one hidden layer of dimension $d$ or a single linear layer.
Since \textsc{CRaWl}{} layers are based on iteratively updating latent node embeddings, they are fully compatible with conventional message passing layers and related techniques such as virtual nodes \citep{GilmerSRVD17,li2017learning,ishiguro2019graph}.
In our experiments, we use virtual nodes whenever this increases validation performance.
A detailed explanation of our virtual node layer is provided in the Appendix.
Combining \textsc{CRaWl}{} with message passing layers is left as future work.
We implemented \textsc{CRaWl}{} in PyTorch \citep{pytorch, pytorch-geometric}, a public repository is available at GitHub\footnote{\url{https://github.com/toenshoff/CRaWl}}.
Crucially, the random walks and the feature matrices are computed entirely on the GPU, increasing speed and reducing the data exchange between CPU and GPU.
As a downside of this approach, the current implementation struggles to stay within the available RAM of most GPUs for large graphs such as those occurring in many node classification tasks.
\section*{Paper Checklist}
\textbf{For all authors...}
\renewcommand{\theenumi}{\alph{enumi}}
\begin{enumerate}
\item \emph{Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?}\\
Yes.
\item \emph{Have you read the ethics review guidelines and ensured that your paper conforms to them?}\\
Yes.
\item \emph{Did you discuss any potential negative societal impacts of your work?} \\
N/A.
\item \emph{Did you describe the limitations of your work?}\\
Yes. See Section 4 (Expressiveness).
\end{enumerate}
\textbf{If you are including theoretical results...}
\begin{enumerate}
\item \emph{Did you state the full set of assumptions of all theoretical results?}\\
Yes.
\item \emph{Did you include complete proofs of all theoretical results?} \\
Yes, in Appendix D.
\end{enumerate}
\textbf{If you ran experiments...}
\begin{enumerate}
\item \emph{Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?}\\
Yes, we include the code in the supplemental material. All data we use is publicly available and cited.\\
We will release the code on GitHub or a similar service upon acceptance.
\item \emph{Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?} \\
Yes. We used standard methods for training and the standard splits of each dataset (see Section ``Experiments'').
Details on our hyperparameters can be found in Appendix B.
\item \emph{Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?}\\
We report standard deviations.
\item \emph{Did you include the amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?}\\
Yes, in Appendix B3.
\end{enumerate}
\textbf{If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...}
\begin{enumerate}
\item \emph{If your work uses existing assets, did you cite the creators?}\\
Yes, we cited all datasets.
\item \emph{Did you mention the license of the assets?}\\
N/A.
\item \emph{Did you include any new assets either in the supplemental material or as a URL?}\\
No. We do not provide new assets. Our code is in supplemental material.
\item \emph{Did you discuss whether and how consent was obtained from people whose data you're using/curating?} \\
N/A.
\item \emph{Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?}\\
N/A.
\end{enumerate}
\textbf{If you used crowdsourcing or conducted research with human subjects...}
N/A.
\section{Preliminaries}
\subsection{Related Work}
\label{relwork}
Over the last few years, message passing GNNs (MPGNNs) have been the dominant type of architecture in all kinds of graph related learning tasks \citep{wu2020comprehensive}.
Thus, MPGNNs constitute the main baselines in our experiments.
Many variants of this architecture exist, such as GCN \citep{kipf2017semi}, GIN \citep{Keyulu18}, GAT \citep{velivckovic2017graph}, GraphSage \citep{DBLP:journals/corr/HamiltonYL17}, and \mbox{GatedGCN} \citep{bresson2017residual}.
A novel variant of MPGNNs is PNA \citep{corso2020principal} which combines multiple types of local aggregation to improve performance.
Another recent advance is DeeperGCN (DGCN) by \cite{li2020deepergcn} which is designed for significantly deeper GNNs.
PHC-GNNs \citep{le2021parameterized} are GNNs with complex and hypercomplex feature vectors with learned multiplication strategies.
Multiple extensions to the standard message passing framework have been proposed that strengthen the theoretical expressiveness which otherwise is bounded by the 1-dimensional Weisfeiler-Leman algorithm.
With 3WLGNN, \citet{maron2019provably} suggested a higher-order GNN, which is equivalent to the 3-dimensional Weisfeiler-Leman kernel and thus more expressive than standard MPGNNs.
In HIMP \citep{fey2020hierarchical}, the backbone of a molecule graph is extracted and then two GNNs are run in parallel on the backbone and the full molecule graph.
This allows HIMP to detect structural features that are otherwise neglected.
Explicit counts of fixed substructures such as cycles or small cliques have been added to the node and edge features by \citet{bouritsas2020improving} (GSN).
Similarly, \citet{sankar2017motif}, \citet{lee2019graph}, and \citet{peng2020motif} added the frequencies of motifs, i.e., common connected induced subgraphs, to improve the predictions of GNNs.
\citet{sankar2020beyond} introduce motif-based regularization, a framework that improves multiple MPGNNs.
A novel approach with strong empirical performance is GINE+ \citep{brossard2020graph}.
It is based on GIN and aggregates information from higher-order neighborhoods, allowing it to detect small substructures such as cycles.
Combining GINE+ with APPNP \citep{guenneman2019personalizedPagerank}, a propagation scheme based on the personalized pagerank, improves the performance even further \citep{gineappnp}.
\citet{beaini2020directional} proposed DGN, which incorporates directional awareness into message passing.
A different way to learn on graph data is to use similarity measures on graphs with graph kernels \citep{kriege2020survey}.
Graph kernels often count induced subgraphs such as graphlets, label sequences, or subtrees, which relates them conceptually to our approach.
The graphlet kernel \citep{shevispet+09} counts the occurrences of all 5-node (or more general $k$-node) subgraphs.
The Weisfeiler-Leman kernel \citep{shervashidze2011weisfeiler} is based on iterated degree sequences and effectively counts occurrences of local subtrees.
The Weisfeiler-Leman algorithm is the traditional yardstick for the expressiveness of GNN architectures.
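To make that yardstick concrete, here is a minimal sketch of 1-dimensional Weisfeiler-Leman (colour) refinement; the adjacency-dict representation and function name are illustrative choices of ours, not taken from any of the cited kernels.

```python
def wl_refine(adj, rounds):
    """1-WL colour refinement on an adjacency dict {node: list of neighbours}.

    Starts from node degrees and repeatedly combines each node's colour
    with the sorted multiset of its neighbours' colours, then relabels
    the resulting signatures with fresh compact colour ids.
    """
    colors = {v: len(nbrs) for v, nbrs in adj.items()}
    for _ in range(rounds):
        signatures = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                      for v in adj}
        palette = {}  # signature -> fresh compact colour id
        colors = {v: palette.setdefault(sig, len(palette))
                  for v, sig in signatures.items()}
    return colors

# A path on four nodes: the two endpoints and the two inner nodes
# end up in distinct colour classes.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
colors = wl_refine(path, 2)
```

Two graphs that receive different colour histograms are distinguishable by 1-WL, and hence by any MPGNN; graphs with identical histograms may or may not be isomorphic.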
A few previous approaches utilize either random walks or conventional CNNs in the context of end-to-end graph learning, two concepts our method is also based on.
\citet{DBLP:conf/nips/NikolentzosV20} propose a differentiable version of the random walk kernel and integrate it into a GNN architecture.
In \cite{geerts2020walk}, the \(\ell\)-walk MPGNN adds random walks directly as features to the nodes and connects the architecture theoretically to the 2-dimensional Weisfeiler-Leman algorithm.
Patchy-SAN \citep{Nie+2016} normalizes graphs in such a way that they can be interpreted by CNN layers.
\citet{zhang2018end} proposed a pooling strategy based on sorting intermediate node embeddings and presented DGCNN which applies a 1D CNN to the sorted embeddings of a graph.
Recently, \citet{yuan2021node2seq} used a 1D CNN layer and attention for neighborhood aggregation to compute node embeddings.
\section{Introduction}\label{sec:Introduction}
Managing and reducing congestion on roads is a fundamental challenge faced across the world.
In many countries, the UK being one, there is limited scope to build additional infrastructure to cope with the demand for road traffic.
Instead, the focus is on an approach referred to as `Intelligent Mobility', broadly described as combining real-time data and modelling to improve the management of existing physical infrastructure.
An appropriate test-bed for such approaches is the UK Strategic Road Network (SRN), which constitutes around 4,400 miles of motorways and major trunk roads across England \citep[see][]{strategic_road_network_stats}.
Further, the SRN carries 30\% of all traffic in the UK, with 4 million vehicles using it everyday and 1 billion tonnes of freight being transported across it each year \citep[see][]{highways_england_srn_report}.
There is little chance that in the short term, the SRN will see significant infrastructure changes, however data describing the traffic state across the network is already being collected and made available for analysis.
Whilst the SRN is a significant component of the UK transport infrastructure, congestion remains a major problem on the network, with the cited report suggesting 75\% of businesses consider tackling congestion on the SRN important or critical to their business.
Traffic congestion can be broadly separated into two types: recurrent and non-recurrent.
Recurrent congestion is simply the result of the demand regularly exceeding capacity on busy sections of road during `rush hour' periods.
Non-recurrent congestion, on the other hand, is mainly caused by traffic incidents and other rare events \citep{overview_of_traffic_incident_duration_analysis_and_prediction}.
To better manage traffic during these incidents, traffic operators require reliable estimates of how long a particular incident will last.
Whilst there is significant existing work on modelling incident duration, many fundamental challenges remain that are both of practical interest to traffic management centres and remain active areas of research in an academic sense.
A review of existing work on this problem is found in \citet{overview_of_traffic_incident_duration_analysis_and_prediction}, where six future challenges for incident duration prediction are listed.
These are: combining multiple data-sources, time sequential prediction models, outlier prediction, improvement of prediction methods through machine learning or alternative frameworks, combining recovery times and accounting for unobserved factors.
Our work aims to address five of these challenges, combining time-series from sensor networks with incident reports to issue dynamically updated duration predictions.
Inspired by approaches adopted in medical applications, we consider classical survival approaches and non-linear, machine learning methods for prediction, understanding where gains in performance are attained.
Finally, as we are able to observe traffic behaviour over long periods of time through the sensor network, we are able to judge not just when a traffic incident has been cleared, but when the traffic state has returned to normal operating conditions, thereby combining recovery times into our modelling approach.
The rest of this paper is structured as follows.
Firstly, we overview existing work on incident duration prediction, specifying how the duration is defined, existing methods and offer more detail on challenges highlighted in the literature.
Secondly, we detail our dataset, its collection and processing and an initial exploratory analysis of it.
We then describe the considered modelling approaches, both for static and dynamic predictions, and results for our dataset.
Finally, we consider what variables are important for the models and end by summarising our main findings.
Note that throughout this paper, an `event' is defined as the traffic state on a section of road returning to a baseline behaviour.
As such, when an event has `occurred' we really mean the traffic state has recovered.
This is just a note of terminology commonly used in the survival analysis literature.
\section{Background}\label{sec:LitReview}
In this section, we summarise existing work on traffic incident duration analysis that is relevant to our own, along with relevant research from other disciplines that has influenced our approach.
Before any methodologies are considered, it is first important to define exactly what is being modelled.
A traffic incident is considered to have four different time-phases: the time taken to detect and report an incident, the time to dispatch an operator to the scene, the travel time of the operator to the scene, and finally the time to clear an incident.
Such a framework is described in \cite{overview_of_traffic_incident_duration_analysis_and_prediction}.
We consider an incident to have `started' when the incident is reported by the human operator, who would use a series of cameras to observe the state of the road in the case of the SRN.
Our focus is to model the time from this point until both the incident has been physically cleared, and traffic behaviour on the road has returned to some sense of `normal' behaviour.
We describe exactly how we determine normal behaviour in section \ref{sec:DataDrivenBaseline}.
In brief, we use a seasonal model of the speed time-series to estimate when the traffic speed on a section of road has recovered to a level close to what would normally be expected for that section at that time of day.
The idea to use such a speed profile is also considered in \citet{modelling_total_duration_of_traffic_incidents_including_incident_detection_and_recovery_time}, where they define the `total incident duration' to be the time from incident start until the speed has recovered to the profile.
Whatever explicit definition of duration is used, there is an enormous amount of work on predicting incident durations.
An initial step of many of these is to determine an appropriate distributional form that the durations take.
These are typically heavy tailed and empirically show significant variation.
Examples of this include modelling the distribution of incident durations as log-normal in \cite{an_analysis_of_the_severity_and_incident_duration_of_truck_involved_freeway_accidents} and \cite{analytical_method_to_estimate_accident_duration_using_archived_speed_profile_and_its_statistical_analysis}, log-logistic in \cite{analysis_of_cascading_incident_event_duraitons_on_urban_freeways} and \cite{estimating_freeway_incident_duraiton_using_accelerated_failure_time_modelling}, Weibull in \cite{modelling_total_duration_of_traffic_incidents_including_incident_detection_and_recovery_time} and \cite{response_time_of_highway_traffic_accidents_in_abu_Dhabi_investigation_with_hazard_based_duration_models}, and generalised F in \cite{examination_of_factors_affecting_freeway_incident_clearance_times_a_comparision_of_the_generalized_F_model_and_several_alternative_nested_models}.
In the latter, it is noted that the generalised F distribution can be equivalent to many other distributional forms for particular parameter choices, including the exponential, Weibull, log-normal, log-logistic and gamma distributions.
Hence, it offers more freedom than choosing any single one of these forms.
Indeed, the authors state that the increased flexibility it offers allows it to fit the data better.
Even further flexibility in the distributional choice is given in \citet{application_of_finite_mixture_models_for_analysing_freeway_incident_clearance_time}, where it was shown that modelling the distribution as a mixture, that is the sum of multiple components, may improve model performance.
Specifically, they consider a 2-component log-logistic mixture model, where the final distribution is the weighted average of two log-logistic distributions.
Finally, other authors \cite{forecasting_the_clearance_time_of_freeway_accidents} have had difficulty finding statistically defensible distributional fits to their data, although it should be noted that different definitions of incident durations will likely impact this.
Using a common probability distribution is appealing, as it limits a model's freedom and can be easier to fit to data.
However, it is clear that authors are exploring more complex distributional forms to better model the data and seeing better results when they do so, as in \citet{examination_of_factors_affecting_freeway_incident_clearance_times_a_comparision_of_the_generalized_F_model_and_several_alternative_nested_models} and \citet{application_of_finite_mixture_models_for_analysing_freeway_incident_clearance_time}.
Mixture distributions are a naturally appealing form, as we assume the data is generated by multiple sub-populations, within which covariates may have different effects.
We incorporate ideas from this section of the literature by considering models that assume log-normal and Weibull distributions on incident durations, as well as mixture distributions where one supposes the data is generated by one of many sub-populations, each of which has a parametric form.
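As a sketch of the kind of 2-component log-logistic mixture density used in the cited study, assuming the standard scale/shape $(\alpha, \beta)$ parametrisation of the log-logistic distribution (the particular parameter values here are arbitrary illustrations, not fitted to any data):

```python
import numpy as np

def loglogistic_pdf(t, alpha, beta):
    """Log-logistic density with scale alpha and shape beta."""
    z = (t / alpha) ** beta
    return (beta / t) * z / (1.0 + z) ** 2

def mixture_pdf(t, weights, params):
    """Weighted sum of component densities; weights should sum to one."""
    return sum(w * loglogistic_pdf(t, a, b) for w, (a, b) in zip(weights, params))

# Illustrative mixture: a short-duration mode and a long-duration mode.
t = np.linspace(0.01, 60.0, 6000)
density = mixture_pdf(t, [0.4, 0.6], [(10.0, 2.5), (35.0, 4.0)])
```

Each component can capture a different sub-population of incidents (e.g. quickly cleared breakdowns versus prolonged accidents), while the weights give the population shares.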
Recent applications of survival analysis in healthcare \cite{deephit_a_deep_learning_approach_to_survival_analysis_with_competing_risks} have removed distributional assumptions entirely, and instead formulate models that output distributions with no closed form.
This is done by treating the output space as discrete, and treating the model output as a probability mass function (PMF) defined over it, allowing for construction of a fully non-parametric estimate.
Such an approach offers even more freedom, and as we see more complex distributions used in the traffic literature to provide more freedom, one could ask if removing the distribution assumption entirely can improve model performance.
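In that discrete framing, a model head simply produces logits over duration bins; the following minimal sketch (bin count and logit values are illustrative) shows how a PMF and a survival curve are recovered from them:

```python
import numpy as np

def pmf_and_survival(logits):
    """Turn logits over discrete duration bins into a PMF and survival curve."""
    z = logits - logits.max()            # numerically stabilised softmax
    pmf = np.exp(z) / np.exp(z).sum()
    survival = 1.0 - np.cumsum(pmf)      # S(t_k) = P(T > t_k)
    return pmf, survival

logits = np.array([0.2, 1.5, 2.3, 1.0, -0.5])   # e.g. five duration bins
pmf, survival = pmf_and_survival(logits)
```

No parametric family is imposed: the network is free to place mass in any bins, at the cost of discretising the output space.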
After a distribution is determined, many works focus on methods from survival analysis, with a common choice being the Accelerated Failure Time (AFT) model.
Example applications of this are given in \cite{an_exploratory_hazard_based_analysis_of_highway_incident_duration}, \cite{modelling_accident_duration_and_its_mitigation_strategies_on_south_korean_freeway_systems} and \cite{simultaneous_equation_modelling_of_freeway_accident_duration_and_lanes_blocked}.
Such a model assumes that each covariate either accelerates or decelerates the life-time of a particular individual.
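In its standard log-linear form, the AFT model writes the duration $T_i$ of incident $i$ in terms of its covariate vector $\mathbf{x}_i$ as
\begin{equation*}
\log T_i = \mathbf{x}_i^{\top}\boldsymbol{\beta} + \sigma\varepsilon_i,
\end{equation*}
so that each covariate rescales the survival time multiplicatively by $\exp(\beta_j)$, and the distributional choices discussed above (log-normal, Weibull, log-logistic) correspond to different assumptions on the error term $\varepsilon_i$.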
These models are widely used, and offer an interpretable means of investigating what factors strongly or weakly influence incident duration.
However, it can be difficult to incorporate time-series features into them.
Whilst it is possible to model time-varying effects of covariates, for example in \cite{efficient_estimation_for_the_accelerated_faliure_time_model}, it is more complex to derive optimal features from a time-series that are also interpretable.
Clearly, AFT models are useful in that they produce interpretable outputs and relationships between variables, and can model the non-Gaussian distributions empirically observed in incident duration data.
The alternative and well known classical survival model one could apply is a Cox regression model \cite{regression_models_and_life_tables}.
Such a model assumes a baseline hazard function for the population, describing the instantaneous rate of incidents, from which survival probabilities can be calculated, see section \ref{sec:SurvivalAnalysisMethods} for more details.
Covariate vectors for individuals shift this baseline hazard allowing for individualised predictions.
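In symbols, the Cox model specifies the hazard of an individual with covariates $\mathbf{x}_i$ as
\begin{equation*}
h(t \mid \mathbf{x}_i) = h_0(t)\exp(\mathbf{x}_i^{\top}\boldsymbol{\beta}),
\end{equation*}
where $h_0$ is the baseline hazard shared by the population, and the implied survival probability is $S(t \mid \mathbf{x}_i) = \exp\bigl(-\int_0^t h(s \mid \mathbf{x}_i)\,\mathrm{d}s\bigr)$.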
Applications of this to transportation problems are given in \cite{determination_of_the_risk_factors_that_influence_occurence_time_of_traffic_accidents_with_survival_analysis}, \cite{the_effect_of_earlier_or_automatic_collision_notification_on_traffic_mortality_by_survival_analysis} and \cite{competing_risks_analysis_on_traffic_accident_duration_time}.
Whilst these two methods are widely used, a number of alternatives exist.
One such method is a sequential regression approach, an early example being \citet{a_simple_time_sequential_procedure_for_predicting_freeway_incident_duration}.
Importantly, the authors identify that more information describing an incident will become available over time, and hence consider a series of models to make sequential predictions.
However, their sequential information was more descriptive from an operational stand-point, for example identifying damage to the road and response time of rescue vehicles.
We do not have this, and instead focus on the minute-to-minute updates provided through traffic time-series recorded by sensors along the road, and how to engineer features from these.
Truncated regression approaches were discussed in \citet{analysis_of_cascading_incident_event_duraitons_on_urban_freeways}, where there was a specific effort to model `cascading' incidents (referred to as primary and secondary incidents in some literature).
The thought here was that incidents that occur nearby in time and space would lead to a significantly longer clearance time for the road segment, and hence this should be accounted for in modelling.
We performed an extensive analysis of primary and secondary incidents in our dataset in \cite{a_non_parametric_hawkes_process_model_of_primary_and_secondary_accidents_on_a_uk_smart_motorway}. Building on this and the previously cited work, we include a cascade variable in the models, allowing this to influence duration predictions.
Further regression approaches are explored in \cite{estimating_magnitude_and_duration_of_incident_delays} and \cite{cluster_based_lognormal_distribution_model_for_accident_duration}, and switching regression models are used in \cite{exploring_the_influential_factors_in_incident_clearance_time_disentangling_causation_from_self_selection_bias}.
Note that in \cite{cluster_based_lognormal_distribution_model_for_accident_duration}, the authors first cluster the incident data, then use this clustering as additional features for a model, further suggesting that there is some element of sub-population structure in the data.
A final relevant regression based work is \cite{modelling_traffic_incident_duration_using_quantile_regression}, where quantile regression is used to model incident durations.
This is a natural choice, as there is a clear skew in the empirically observed duration distributions, and if one does not want to assume a particular distributional form, they can instead model properties of the distribution, in this case quantiles of the data.
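Quantile regression fits each quantile by minimising the pinball loss; a minimal sketch (the function name is ours), where $\tau = 0.5$ reduces to the absolute error and larger $\tau$ penalise under-prediction of long durations more heavily:

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Average pinball loss; minimised in y_pred by the tau-quantile of y_true."""
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

# Illustrative skewed durations, in minutes.
durations = np.array([5.0, 8.0, 12.0, 20.0, 45.0, 90.0])
```

Fitting several values of $\tau$ (e.g. 0.1, 0.5, 0.9) yields a band of predictions that directly reflects the skew of the duration distribution without assuming a parametric form.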
Further methodologies to note are those based on tree models or ensembles of them, particularly because we apply such a method later in this work.
Tree based models are discussed in \cite{forecasting_the_clearance_time_of_freeway_accidents}, where the authors compare models that assume particular incident duration distributions, a k-nearest-neighbour approach and a classification tree method based on predicting `short', `medium' and `long' incidents.
They concluded that no model provided accurate enough results on their dataset to warrant industrial implementation, but found the classification tree was the preferred model of those considered.
Further, \cite{prediction_of_lane_clearance_time_of_freeway_incidents_using_the_M5p_tree_algorithm} considered a regression tree approach, where the terminal nodes of each tree were themselves multivariate linear models.
Such an approach avoids binning of incidents into pre-defined categories, and achieved 42.70\% mean absolute percentage error, better than the compared reference models.
From an interdisciplinary setting, alternative tree methods have been considered, namely one known as `random survival forests' \cite{random_survival_forests} as an extension to random forests to a survival analysis setting.
In such a framework, the terminal nodes of each tree specify cumulative hazard functions for all data-points that fall into that node, and these hazards are combined across many trees to determine an ensemble hazard.
There is no defined distributional assumption in such a model, again leaning towards the side of freedom in allowing the data to construct its own hazard function estimate rather than parametrizing an estimated form.
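As a sketch of this construction, assuming the standard Nelson-Aalen estimator with no tied event times in each leaf (the names and toy data are ours), the ensemble step simply averages the cumulative hazards of the leaves a data-point falls into:

```python
import numpy as np

def nelson_aalen(times, events, grid):
    """Nelson-Aalen cumulative hazard of one terminal node on a time grid.

    times: observed times in the node; events: 1 if event, 0 if censored.
    Assumes no tied event times, as a simplification.
    """
    order = np.argsort(times)
    t_sorted, e_sorted = times[order], events[order]
    at_risk = len(times) - np.arange(len(times))   # n_i just before each t_(i)
    cum = np.cumsum(e_sorted / at_risk)            # running sum of d_i / n_i
    # evaluate the resulting step function on the requested grid
    idx = np.searchsorted(t_sorted, grid, side="right") - 1
    return np.where(idx >= 0, cum[np.clip(idx, 0, None)], 0.0)

# Two toy leaves; the ensemble hazard is the average of the leaf hazards.
grid = np.array([0.0, 1.5, 3.5])
leaves = [(np.array([1.0, 2.0, 3.0]), np.array([1, 1, 1])),
          (np.array([1.0, 4.0]), np.array([1, 0]))]
H_ensemble = np.mean([nelson_aalen(t, e, grid) for t, e in leaves], axis=0)
```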
Neural networks are a rapidly developing methodology in machine learning, and have been used extensively in incident duration prediction and form the basis for some of our considered approaches.
Examples of this include \cite{vehicle_breakdown_duration_modelling}, \cite{applying_data_fusion_techniques_to_traveler_information_services_in_highway_network} and \cite{a_computerized_feature_selection_method_using_genetic_algorithms}.
Each of these applies feed forward neural networks to determine estimates of incident durations, and particularly in \cite{a_computerized_feature_selection_method_using_genetic_algorithms} sequential prediction was considered, using two models.
The first took standard inputs, and the second took these along with detector information near the incident.
These were input into feed-forward neural networks, and used to generate point predictions.
Additional neural network applications are given in \cite{a_comparative_study_of_models_for_the_incident_duration_prediction}, where their performance is compared to that of linear regressions, decision trees, support vector machines and k-nearest-neighbour methods.
The authors find that different models have optimal performance at different incident durations, suggesting there is still much to improve on feed forward networks.
A final point to note is that neural networks have been applied to survival analysis problems in healthcare a number of times.
Examples of this include \cite{deep_surv_personalized_treatment_recommender_system_using_a_cox} which develops a Cox model, replacing linear regression with a neural network output, \cite{deephit_a_deep_learning_approach_to_survival_analysis_with_competing_risks} which removes any distributional assumptions, and \cite{dynamic_prediction_in_clinical_survival_analysis_using_temporal_convolutional_networks} which uses a sliding window mechanism and temporal convolutions for dynamic predictions.
We consider whether the latter two are useful in the application to traffic incidents later in this work.
Specifically, using \cite{dynamic_prediction_in_clinical_survival_analysis_using_temporal_convolutional_networks} offers an automated way to engineer features from our sensor network data, whilst being able to model a parametric or non-parametric output.
Whilst we have discussed a number of different methodological approaches, the actual features used to make these predictions, regardless of approach, appear quite consistent across different works.
In \cite{estimating_magnitude_and_duration_of_incident_delays}, the authors state that using number of lanes affected, number of vehicles involved, truck involvement, time of day, police response time and weather condition, one can explain 81\% of variation in incident duration.
An overview of various feature types is given in \cite{overview_of_traffic_incident_duration_analysis_and_prediction}, identifying incident characteristics, environmental conditions, temporal characteristics, road characteristics, traffic flow measurements, operator reactions and vehicle characteristics as important factors when modelling incident durations.
We note that we are not the first to use sensor data in incident duration analysis.
Speed data collected from roads was used in \cite{examination_of_factors_affecting_freeway_incident_clearance_times_a_comparision_of_the_generalized_F_model_and_several_alternative_nested_models}, where the authors included two features based on the speed series: if the difference between the 15th and 85th percentiles of the speed data was greater than 7mph and if the 85th percentile for speed was less than 70mph.
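Those two binary features are straightforward to reproduce from a window of raw speed readings; a minimal sketch (function name ours, thresholds as reported in the cited study):

```python
import numpy as np

def percentile_speed_features(speeds_mph):
    """Two binary speed features: a wide 15th-85th percentile spread (> 7 mph)
    and a depressed 85th percentile (< 70 mph)."""
    p15, p85 = np.percentile(speeds_mph, [15, 85])
    return {"spread_gt_7mph": bool(p85 - p15 > 7.0),
            "p85_below_70mph": bool(p85 < 70.0)}

# Free-flowing traffic: narrow spread, high speeds.
free_flow = percentile_speed_features(np.array([68.0, 70.0, 71.0, 69.0, 70.0]))
# Disrupted traffic: wide spread, depressed speeds.
disrupted = percentile_speed_features(np.array([20.0, 35.0, 55.0, 25.0, 60.0]))
```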
Further, in \cite{sequential_forecast_of_incident_duraiton_using_artifical_neural_network_models} and \cite{a_computerized_feature_selection_method_using_genetic_algorithms} the authors train two feed forward neural networks, with input features that include the speed and flow for detectors near the incident.
The first model provided a forecast just before the incident occurred, whereas new data was fed into the second whenever available, updating predictions as time progressed.
In the second paper, the focus was on reducing the dimensionality of the problem through feature selection via a genetic algorithm.
We fundamentally differ from these for multiple reasons.
With regard to \cite{examination_of_factors_affecting_freeway_incident_clearance_times_a_comparision_of_the_generalized_F_model_and_several_alternative_nested_models}, this was an analysis of which factors impact incident duration the most, and whereas they used hazard-based models for analysis, we use the sensor data to engineer dynamic features, either manually or through temporal convolutions.
Additionally, we differ from \cite{sequential_forecast_of_incident_duraiton_using_artifical_neural_network_models} and \cite{a_computerized_feature_selection_method_using_genetic_algorithms} through how we determine features and network structure, and through predicting an output distribution, not just a point estimate.
With dynamic prediction being highlighted as an area to address in the literature, some recent papers have looked at this problem through different methods to those already discussed.
One example of this is \cite{competing_risk_mixture_model_and_text_analysis_for_sequential_incident_duration_prediction}, where a topic model was used to interpret written report messages and predictions were made as new textual information arrived.
Further, multiple regression models were built in \cite{dynamic_prediction_of_the_incident_duration_using_adaptive_feature_set}, and as different features became available, data-points were assigned clusters, and a prediction was generated using a regression model tailored to each cluster.
Lastly \cite{sequential_prediction_for_large_scale_traffic_incident_duraiton_application_and_comparision_of_survival_models} considered a five stage approach, where a prediction was made at each stage and different features were available defining these stages.
These included vehicles involved, agency response time and number of agencies responding.
While this structured approach addresses some aspects of dynamic prediction, the purely data-driven approach which we present in this paper provides much more flexibility.
\section{Data Collection \& Processing}\label{sec:Data}
As discussed, various traffic data for the SRN is provided by a system called the National Traffic Information Service (NTIS)\footnote{Technical details of the NTIS data feeds are available at \url{http://www.trafficengland.com/services-info}}.
This is both historic and real-time data.
Whilst the SRN includes all motorways and major A-roads in the UK, we focus our analysis on one of the busiest UK motorways, the M25 London Orbital, pictured in \textbf{Figure 1}.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.48\textwidth]{./Images/Figure_1_Map.png}
\caption{The M25 London Orbital, roughly 180 kilometres in length. The Dartford crossing, located in the east, is a short segment of road that is not included in the data set.}\label{fig:M25Plot}
\end{figure}
Inside NTIS, roads are represented by a directed graph, with edges (referred to as links from now on) being segments of road that have a constant number of lanes and no slip roads joining or leaving.
We extract all links that lie on the M25 in the clockwise direction, yielding a subset of the road network to collect data for.
Placed along links are physical sensors called `loops', which record passing vehicles and report averaged quantities each minute.
The most relevant components of NTIS to our work are incident flags that are manually entered by traffic operators.
These flags specify an incident type (for example, accident or obstruction), the start and end date and time, and the link the incident occurred on.
Accompanying this information are further details on the incident, for example what vehicles it involved and how many lanes were blocked if any.
We extract all incidents that occurred on the chosen links between September 1st 2017 and September 30th 2018.
Along with this, NTIS provides time-series data for each link, recording the average travel time, speed and flow for vehicles and publishing it at 1 minute intervals.
These values are determined by a combination of floating vehicle data and loop sensors.
As well as the incident flags, we extract the time-series of these quantities in the specified time period.
In total, our dataset has 4415 incidents that we train on, and 2011 incidents that we use for out of sample validation of the models.
Note that of these 4415 training incidents, we use 1324 as hold-out data to judge when to stop our training machine learning models.
\subsection{Establishing A Data-Driven Baseline}\label{sec:DataDrivenBaseline}
As discussed, incident duration consists of 4 distinct phases and we want to model the time it takes for a link to recover to some baseline state.
How to determine this baseline, and ensure it is robust to outliers whilst retaining important features of the data is an open problem.
However a natural way to approach it is to develop some seasonal model of behaviour on a link and use this seasonality as a baseline behaviour.
We define such a baseline by first taking the speed time-series and pre-filtering it by removing the periods impacted by incidents.
After, we account for any potential missing incident flags by further removing any remaining periods with a severity higher than 0.3.
Severity is defined as in \cite{anomaly_detection_and_classification_in_traffic_flow_data_from_fluctuations_in_the_flow_density_relationship}, which in short, considers the joint behaviour of the speed-flow time-series and questions what points correspond to large fluctuations from a region of typical behaviour in this relationship.
We then take this filtered series and extract the seasonal components, in our case daily and weekly components, to capture natural variability on the link.
Note that inspection of the data shows no trend.
To construct a seasonal estimate, we consider simple phase averaging, taking the median of data collected at a given time-point in a week, and STL decomposition \cite{STL_a_seasonal_trend_decompostion_procedure_based_on_loess}.
We see little difference between the two methods, so choose to use the phase average baseline for simplicity.
We define one speed baseline for each link, establishing a robust profile describing the speed behaviour on a `typical week', and replicate this over the entire data period.
It is robust in the sense that we have pre-filtered the extreme outliers. It also captures the clear seasonality in the problem and can be applied to new test data without any difficulty.
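As a minimal sketch, the phase-average baseline might be computed as follows (the 1-minute timestamp index and prior incident filtering are assumptions for illustration; this is not the authors' actual pipeline):

```python
import numpy as np
import pandas as pd

def weekly_baseline(speed: pd.Series) -> pd.Series:
    """Phase-average baseline: median speed per (day-of-week, minute-of-day)
    slot, replicated back over the whole data period.

    `speed` is a 1-minute-resolution series indexed by timestamp, with
    incident-affected periods already removed (e.g. set to NaN).
    """
    slot = (speed.index.dayofweek * 24 * 60
            + speed.index.hour * 60 + speed.index.minute)
    medians = speed.groupby(slot).median()  # median is robust to remaining outliers
    return pd.Series(medians.reindex(slot).to_numpy(), index=speed.index)
```

The median per weekly slot, rather than the mean, is what makes the profile robust to any extreme values that survive the pre-filtering.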
An example of this profile for a single link, along with the residuals from it are given in \textbf{Figure 2A}, along with an example incident in \textbf{Figure 2B}.
We specifically mark where the NTIS flag was raised, where it was closed, and where the speed behaviour returned to the baseline.
\begin{figure}[ht!]
\centering
\begin{minipage}[t]{.47\textwidth}
\includegraphics[width=\textwidth]{./Images/Figure_2_a_speed_baseline.pdf}
\centering \textbf{A}
\end{minipage}
~
\begin{minipage}[t]{.47\textwidth}
\includegraphics[width=\textwidth]{./Images/Figure_2_b_example_event.pdf}
\centering \textbf{B}
\end{minipage}
\caption[Example Weekly Baseline]{The baseline for a single link and an example incident. The baseline captures clear seasonality in the data. On a much shorter time-scale, we see a drop in speed in the wake of an NTIS incident flag, and a recovery after a sustained period of low speed. \textbf{(A)}: An example weekly baseline, and residuals (data - baseline) for a single link, showing 4 weeks of data. \textbf{(B)} An example comparison between NTIS end times and the Return to Normal (RTN) end time. We are trying to predict the time until we return to normal operating conditions, rather than the time at which the NTIS flag is turned off. }\label{fig:ExampleIncidentWithFlags}
\end{figure}
Using this methodology, we process our dataset such that we have a set of records with the start time of each incident and the time at which the link returned to normal.
We include a safety margin in this baseline to account for any persistent but minor problems, shifting it down by 8km/hr ($\approx 5$mph).
A link is considered to have returned to normal when its speed is above this shifted baseline for at least 3 consecutive minutes, acting as a persistence check.
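A sketch of this return-to-normal rule (8\,km/h margin, 3-minute persistence; the function and variable names are illustrative):

```python
import numpy as np

def return_to_normal_idx(speed, baseline, margin=8.0, persistence=3):
    """Index of the first minute at which speed has stayed above the
    shifted baseline for `persistence` consecutive minutes."""
    above = np.asarray(speed) > (np.asarray(baseline) - margin)
    run = 0
    for i, ok in enumerate(above):
        run = run + 1 if ok else 0
        if run >= persistence:
            return i - persistence + 1  # start of the qualifying run
    return None  # link never recovered within this window
```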
\section{Methodology}\label{sec:Methodology}
Our modelling approach compares multiple methodologies to predict incident durations.
A concise summary with all models considered and the main points of note about them is available in the supplementary material.
\subsection{Incident Features}\label{sec:EventFeatures}
As discussed, there is much work in the literature identifying the most influential features for incidents.
However, we are restricted in two ways.
The first is that at any time-step of a prediction, we only incorporate features that would be known at that step.
The second is that our dataset lacks some features that are likely to be highly informative of the duration; for example, we do not have in-depth injury reports or the arrival time of police forces.
The features we do use are separated into time-invariant and time-varying categories.
The time-varying features are derived from time-series of speed, flow and travel-time provided at a 1-minute resolution, which we take for the link an incident occurs on.
We first remove the seasonality from each of these series by determining `typical weeks' just as in the case of the speed baseline, then subtracting this from the time-series to generate a set of residual series.
We then hand-engineer features that may be of use for some simple dynamic models, computing the gradient of the residuals at some time $t$ using the previous 5 minutes of data from $t$, as well as simply recording the value of the series.
These are used as initial features from the time-series as they provide a sensible and intuitive summary of an incident from the sensors; of course, more complex features might be derived from the series.
We consider models that do just this by applying temporal convolutions across the residual series, and compare the modelling results in section \ref{sec:Results}.
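The hand-engineered level and gradient features can be sketched as below; the least-squares slope over the trailing 5-minute window is our assumed reading of `gradient':

```python
import numpy as np

def residual_features(residual, t, window=5):
    """Level and gradient of a residual series at minute index t,
    using only the previous `window` minutes of data."""
    level = float(residual[t])
    past = np.asarray(residual[t - window:t + 1], dtype=float)
    slope = np.polyfit(np.arange(len(past)), past, 1)[0]  # per-minute slope
    return level, slope
```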
The time-invariant features, on the other hand, are detailed in \textbf{Table 1}, and are a combination of what an operator might know using existing camera and phone coverage of the SRN.
For completeness, we also detail the time-varying features there.
\begin{table}[ht!]
\centering
\caption[Incident Features]{Overview of the features considered for incident duration modelling. These would be recorded by an NTIS operator when an incident is declared in the system and observed on the network. The time-series features are recorded by inductive loop sensors along the road, and reported each minute. We further consider machine learning models that engineer time-series features automatically. }\label{table:EventFeatures}
\resizebox{\textwidth}{!}{%
\renewcommand{\arraystretch}{2}
\begin{tabular}[t]{|c|c|c|}
\hline
Feature & Variable Type & Description \\
\hline
\makecell{Binned daily \\ time} & Categorical & \makecell[l]{An indicator of the time of day. Bins: Morning Rush (6a.m.-9a.m.), \\ Afternoon (9a.m.-3p.m.), Evening Rush (3p.m.-6p.m.) and Night (6p.m.-6a.m.). } \\
\hline
\makecell{Capacity \\ reduction} & Categorical & \makecell[l]{The fraction of lanes that are blocked due to the incident, binned into 0-25\%,\\ 25-50\%, 50-75\% and 75-100\%.} \\
\hline
Incident type & Categorical & \makecell[l]{Specified incident type, coded as Accident, Vehicle Obstruction, Non-Vehicle \\ Obstruction, and Abnormal Traffic.} \\
\hline
Link length & Continuous & \makecell[l]{Length of the link the incident occurred on in metres} \\
\hline
\makecell{Link downstream \\ atypical?} & Binary & \makecell[l]{Is the link downstream perturbed to some atypical state at the time of the \\ incident flag?} \\
\hline
\makecell{Link upstream \\ atypical?} & Binary & \makecell[l]{Is the link upstream perturbed to some atypical state at the time of the incident \\ flag?} \\
\hline
\makecell{Number of \\ vehicles} & Categorical & \makecell[l]{How many vehicles are involved in the incident?} \\
\hline
Has Cascade? & Binary & \makecell[l]{Did the incident occur immediately after another incident nearby in space \\ and time?} \\
\hline
Has roadworks? & Binary & \makecell[l]{Did the incident occur on a link with roadworks active?} \\
\hline
Spatial location & Categorical & \makecell[l]{Network is split into 8 sections (North, North East, East, \dots, North West) and \\ the incident location is specified with this encoding} \\
\hline
Season & Categorical & \makecell[l]{What season did the incident occur during?} \\
\hline
\makecell{Vehicle types \\ involved} & Binary & \makecell[l]{Binary variables indicating if an incident involved a car, motorcycle, lorry, \\ trailer and articulated vehicle.} \\
\hline
Weekend indicator & Binary & \makecell[l]{1 if incident occurs on a weekend, 0 otherwise.} \\
\hline
Time-Series Residual & Continuous & \makecell[l]{Speed, flow and travel time residuals from their weekly baselines (used only \\ in landmarking models)} \\
\hline
\makecell{Gradient of Time \\ Series Residual} & Continuous & \makecell[l]{Gradient of speed, flow and travel time residuals from their weekly baselines \\ (used only in landmarking models)} \\
\hline
\end{tabular}%
}
\end{table}
Whilst not exhaustive, the features in \textbf{Table 1} offer a combination of contextual information in time, space and specific to the incident.
Our choice of bins for time of day reflects typical commuting patterns in the UK.
Some authors use day and night time as separation, as in \cite{application_of_finite_mixture_models_for_analysing_freeway_incident_clearance_time}, whereas others account for peak times in their binning \cite{an_exploratory_hazard_based_analysis_of_highway_incident_duration} and \cite{a_simple_time_sequential_procedure_for_predicting_freeway_incident_duration}, as we have.
\subsection{Survival Analysis Methods}\label{sec:SurvivalAnalysisMethods}
We first offer more detail on models in the vein of classic survival analysis that we will consider for our problem. We remind the reader that, following the convention in the survival analysis literature, we use the word ``event'' to mean the occurrence of the outcome of interest. In our case, this is the end of a traffic incident as determined by the return of the speed to within some threshold difference from the profile value.
Survival analysis methods aim to model some property of the duration distribution.
Let $f(t)$ be the PDF of incident durations, and $F(t)$ be the cumulative distribution function (CDF).
A key component in survival analysis is the survival function $S(t)$, which in our context describes the probability an incident has not ended by time $t$, where time is measured from the start of the incident.
Denoting the event time (the incident end time) by $T$, we formally write:
\begin{equation}
\begin{split}
S(t) &= \mathbb{P}\left( T \geq t \right) \\
&= 1 - F(t) \\
&= \int_{t}^{\infty} f(x) dx.
\end{split}
\end{equation}
Further, many survival analysis methods are concerned with the hazard function $\lambda(t)$, describing the instantaneous rate of occurrence of events.
One can show that:
\begin{equation}
\begin{split}
\lambda(t) &= \lim_{dt \to 0} \left( \frac{\mathbb{P}\left( t \leq T < t + dt \, \, | \, \, T \geq t \right)}{dt} \right) \\
&= \frac{f(t)}{S(t)}.
\end{split}
\end{equation}
In practice, this means that the instantaneous rate of events is equal to the density of events at that time divided by the probability of surviving to that time.
A final concept of note is the cumulative hazard function $\Lambda(t)$, which is the integral of the hazard function between time 0 and $t$:
\begin{equation}
\Lambda(t) = \int_{0}^t \lambda(s) ds.
\end{equation}
Using these concepts, the first model we apply is a Cox regression model, reviewed in \cite{proportional_hazards_model_a_review}.
Suppose some `individual' $i$ (incident in this application) has covariate vector $\bm{x_i}$.
A Cox model specifies the hazard function for individual $i$ as:
\begin{equation}\label{equ:CoxHazard}
\lambda_i(t \, \, | \, \, \bm{x}_i) = \lambda_0(t)e^{\bm{x}_i'\bm{\beta}}
\end{equation}
where $\lambda_0(t)$ is some baseline hazard at time $t$, and $\bm{\beta}$ is a vector of regression coefficients.
The baseline hazard describes the hazard function for an individual with covariates all equal to $0$, and then it is adjusted for a particular individual with the exponential of the regression term.
In this original formulation, the covariate effect is constant in time, but the baseline hazard varies in time.
Various methods exist for estimating a baseline hazard function, with more details found in \cite{handbook_of_survival_analysis}.
In short, the baseline hazard is assumed to be piecewise constant and determined without any distributional assumptions, allowing the data to construct an approximation.
One can determine $\bm{\beta}$ by optimizing the partial likelihood:
\begin{equation}\label{equ:PartialLikelihoodBeta}
PL(\bm{\beta}) = \prod_{i=1}^N \left[ \frac{e^{\bm{x}_i'\bm{\beta}}}{\sum_{j=1}^N e^{\bm{x}_j'\bm{\beta}}Y_j(\tau_i)} \right]^{\delta_i}
\end{equation}
where $\delta_i$ is 1 if the event time is observed and 0 if it is censored, and $Y_j(\tau_i)$ is 1 if individual $j$ is still at risk at time $\tau_i$ and 0 otherwise.
Here $\tau_i$ represents the recorded incident duration for incident $i$.
The baseline hazard function can be determined using the Breslow estimator:
\begin{equation}
\lambda_0(\tau_i) = \frac{d_i}{\sum_{j \in \mathcal{R}(\tau_i)}e^{\bm{x}_j'\bm{\beta}}}
\end{equation}
where $\mathcal{R}(\tau_i)$ is the set of at-risk individuals at time $\tau_i$ and $d_i$ is the number of events that occur at the $i$-th event time.
We use the implementation of Cox models provided in the R package `survival' \cite{survival_package_R}.
Ties in event times are handled using the `Efron' method, detailed in \cite{the_efficiency_of_coxs_likelihood_function_for_censored_data}, altering the likelihood in Eq.~(\ref{equ:PartialLikelihoodBeta}).
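For intuition, Eq.~(\ref{equ:PartialLikelihoodBeta}) can be evaluated directly from its definition in the simple case of no censoring and no ties (a sketch only, not the fitting routine of the `survival' package):

```python
import numpy as np

def cox_partial_likelihood(tau, X, beta):
    """Product over events of exp(x_i'b) / sum_{j at risk} exp(x_j'b),
    assuming every duration is observed (delta_i = 1) and untied."""
    risk = np.exp(X @ beta)
    pl = 1.0
    for i in range(len(tau)):
        at_risk = tau >= tau[i]          # Y_j(tau_i): still ongoing at tau_i
        pl *= risk[i] / risk[at_risk].sum()
    return pl
```

Fitting maximizes this quantity (in practice its logarithm) over $\bm{\beta}$; note that the baseline hazard cancels out entirely, which is what makes the likelihood "partial".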
The next model we apply is an accelerated failure time model (recall AFT from the literature review).
Such a model supposes relationships between the survival and hazard functions of the form:
\begin{equation}
s_i(t) = s_0\left(te^{\bm{x}_i'\bm{\beta}}\right), \, \, \, \lambda_i(t) = \lambda_0\left(te^{\bm{x}_i'\bm{\beta}}\right)e^{\bm{x}_i'\bm{\beta}}.
\end{equation}
Here, $s_0(t)$ and $\lambda_0(t)$ represent assumed baseline survival and hazard forms, and covariates `accelerate' or `decelerate' the survival time of particular individuals.
Given an assumed form, for example Weibull, Log-normal or so on with parameters $\bm{\theta}$, one can then fit this model through maximum likelihood, optimizing:
\begin{equation}\label{equ:LikelihoodAFT}
L(\bm{\theta}) = \prod_{i=1}^N \left[ f(\tau_i) \right]^{\delta_i}s(\tau_i)^{1-\delta_i}.
\end{equation}
A common way to interpret the AFT model is as a regression on the log of the durations:
\begin{equation}\label{equ:AFTEquation1}
\log(\tau_i) = \tau_0 + \bm{x}_i'\bm{\beta} + \epsilon_i
\end{equation}
where $\epsilon_i$ is noise, with some assumed form.
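In the fully uncensored case, Eq.~(\ref{equ:AFTEquation1}) can be fit by ordinary least squares on log durations (a sketch only; the paper's AFT models are fit by maximum likelihood via `flexsurv'):

```python
import numpy as np

def aft_ols(tau, X):
    """Regress log(tau) on covariates with an intercept tau_0;
    valid only when no durations are censored."""
    A = np.column_stack([np.ones(len(tau)), X])
    coef, *_ = np.linalg.lstsq(A, np.log(tau), rcond=None)
    return coef  # [tau_0, beta_1, beta_2, ...]
```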
We implement the models using the R package `flexsurv' detailed in \cite{flexsurv_package}.
As Cox and AFT models involve, in some way, a linear regression on covariates of interest, they are unable to account for potential non-linear, complex interactions and effects of variables without manual investigation and specification.
One way to account for this is to instead use random survival forests (recall RSF from the literature review), which are non-linear models based on an ensemble of individual tree models.
The basic idea is as follows.
Firstly, one takes a training set and generates $B$ bootstrap samples from it, that is samples with replacement.
Each of these samples is used to grow a decision tree; however, randomness is introduced into the growing of the tree by selecting a random subset of candidate split variables at each point at which the tree needs to be split.
The optimal split variable from this set is chosen to optimise some survival criterion. One of the most commonly used is the log-rank splitting rule \citep{random_survival_forests}.
The tree is then grown until some criterion is met, either a maximal size or a minimum number of cases remaining, and the output at the end of any branch is the cumulative hazard function for all data-points that are placed into that branch when passed through the tree.
This process is repeated several times and the collection of trees is referred to as a forest.
Each decision tree is a non-linear mapping from input covariates to the output cumulative hazard function, and the collection of many trees acts as an ensemble learner.
Ensemble models are known to show promising performance in a range of tasks, and this in addition to the non-linear decision tree models suggests such models may improve upon Cox and AFT models for certain datasets.
For our work, we use the R implementation found in \citet{random_survival_forests_for_R}.
\subsection{Deep Learning Methods}\label{sec:DeepLearningMethods}
Alternative approaches in survival analysis have focused on applying methods from deep learning to incorporate non-linear covariate effects and behaviours.
One of the first such methods was in \cite{a_neural_network_model_for_survival_data}, where a Cox model was considered, but the term $\bm{x}_i'\bm{\beta}$ was replaced with $g\left( \bm{x}_i \right)$, which was the output from a neural network given input $\bm{x}_i$.
Similarly, \cite{deep_surv_personalized_treatment_recommender_system_using_a_cox} and \cite{deep_learning_for_patient_specific_kidney_graft_survival_analysis} extended the Cox model to a neural network setting; however, fundamentally such models are still somewhat restrictive in that they assume a form of the hazard function.
More recently, \cite{deephit_a_deep_learning_approach_to_survival_analysis_with_competing_risks} suggested to make far fewer assumptions, and instead train a network to directly model the function $F(t \, \, | \, \, \bm{x}) = 1 - \mathbb{P}\left( T > t \, \, | \, \, \bm{x} \right)$, referred to as the failure function.
To avoid specifying any particular form of this function, the output space was treated as discrete, defined on times $\{ t_1, t_2, \dots t_{max} \}$.
We suppose a single output value in this discrete space at time $t_j$ gives $\mathbb{P}\left( t_j \, \, | \, \, \bm{x}_i \right)$ and hence we derive $F(t_j \, \, | \, \, \bm{x})$ as:
\begin{equation}
F(t_j \, \, | \, \, \bm{x}_i) = \sum_{t = t_1}^{t_j} \mathbb{P}\left( t \, \, | \, \, \bm{x}_i \right).
\end{equation}
However, we still need to enforce that the output vector actually defines a discrete probability distribution.
A natural way to enforce this is to apply a softmax function on the output layer, normalizing the sum of the values to 1 through:
\begin{equation}
\sigma\left( \bm{z} \right)_j = \frac{e^{z_j}}{\sum_{k=1}^{t_{max}}e^{z_k}}.
\end{equation}
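Taken together, this discrete output head is simply a softmax followed by a cumulative sum, sketched as:

```python
import numpy as np

def discrete_failure_function(logits):
    """Softmax the network's raw outputs into a PMF over {t_1, ..., t_max},
    then cumulatively sum it to obtain the failure function F(t_j | x)."""
    z = logits - logits.max()            # shift for numerical stability
    pmf = np.exp(z) / np.exp(z).sum()
    return pmf, np.cumsum(pmf)
```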
In particular, \cite{deephit_a_deep_learning_approach_to_survival_analysis_with_competing_risks} considered an application with competing risks, where individuals experienced one of many possible events.
Here we consider a simpler case, having only one event (traffic state returning to normal), however the methodology remains consistent in this application.
As we are only able to measure if an event has occurred each minute from our sensor data, the discrete nature of the model is not restrictive in our context, yet the non-parametric output is appealing as we have seen in an exploratory analysis (described in the supplementary material) that the data does not appear to be generated from any particular, simple closed form distribution.
We adapt the implementation found in \citet{deephit_github}.
For our implementation, we specify a $t_{max}$ value equal to the longest duration, plus a 20\% margin as in the original implementation, and define the output grid at a 1 minute resolution.
However doing so leads to a very large output space for the model, and could potentially lead to over-fitting.
To combat this, we apply dropout after every fully connected layer in the network, elastic net regularization on the weights, and early stopping based on hold-out data.
We further consider if a parametric distribution may be sufficiently flexible when attached to a neural network model to perform well in the prediction task.
To test this, we build another model as described above, but remove the softmax output layer and replace it with a mixture distribution layer, influenced by \cite{application_of_finite_mixture_models_for_analysing_freeway_incident_clearance_time}.
We choose our mixture components to be log-normal, avoiding the other specified distributions for numerical stability.
This alters the output size to be $3 \times N_m$ where $N_m$ is the number of mixtures, a hyper-parameter to tune.
A final alternative that compromises between the full non-parametric discrete output and the parametric mixture is to allow the output layer of the network to define a set of weights, and construct a probability distribution from these weights using kernel smoothing.
Kernel smoothing is a non-parametric technique that aims to construct a distribution by summing a set of kernel functions, evaluated at given data-points.
Formally, we can write a kernel smoothed result for some desired point $z$, with kernel centres $Z_i$ and weights $\omega_i$ as:
\begin{equation}\label{equ:KernelSmoothWithWeights}
\hat{\nu}(z) = \frac{1}{h_{bw}} \sum_{i=1}^N \omega_iK\left(\frac{z-Z_i}{h_{bw}}\right).
\end{equation}
Here $h_{bw}$ is the smoothing bandwidth and the kernel $K(x)$ is often taken to be Gaussian; the resulting estimate essentially builds a distribution as a weighted, smoothed sum over all kernel centres.
A point with high weighting will result in a significant amount of mass near this location, and a wide bandwidth will smooth this mass out to the surrounding area.
Applying this to our problem allows us to avoid treating the output space as discrete; instead, we place a kernel centre at each point in the formerly discrete grid and treat the neural network output (with a softmax applied) as defining the weights $\omega_i$.
Doing this also enforces some amount of smoothness in the output distribution, determined by the choice of $h_{bw}$.
We choose to use a bandwidth of 3 minutes, which still allows significant freedom to the distribution.
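Eq.~(\ref{equ:KernelSmoothWithWeights}) with a Gaussian kernel is direct to implement:

```python
import numpy as np

def kernel_smoothed_density(z, centres, weights, h_bw=3.0):
    """Weighted Gaussian-kernel estimate at point z: a kernel sits at each
    grid centre, scaled by its weight and smoothed by bandwidth h_bw."""
    gauss = np.exp(-0.5 * ((z - centres) / h_bw) ** 2) / np.sqrt(2 * np.pi)
    return (weights * gauss).sum() / h_bw
```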
The actual function proposed in \cite{deephit_a_deep_learning_approach_to_survival_analysis_with_competing_risks} to optimize in order to train the network is a combination of two loss values, the first accounting for the likelihood of the observed data and the second enforcing ordering.
The likelihood loss function is given as:
\begin{equation}
\mathcal{L}_1 = - \sum_{i=1}^N \left[ \delta_i \log\left( \hat{f}(\tau_i \, \, | \, \, \bm{x}_i) \right) + (1 - \delta_i) \log( 1 - \hat{F}(\tau_i \, \, | \, \, \bm{x}_i) ) \right]
\end{equation}
where $\hat{f}$ is the PDF (or PMF in the discrete case) implied by the model output, and $\hat{F}$ is the CDF or (cumulative mass function in the discrete case) implied by the model output, given a particular input $\bm{x}_i$.
This is exactly as in Eq.~(\ref{equ:LikelihoodAFT}), but taking logs, describing the likelihood of survival data.
The second loss function is written:
\begin{equation}
\begin{split}
\mathcal{L}_2 &= \sum_{i \neq j} \mathbbm{1}\left( \tau_i < \tau_j \right) \eta\left( \hat{F}(\tau_i \, \, | \, \, \bm{x}_i), \hat{F}(\tau_i \, \, | \, \, \bm{x}_j) \right) \\
\eta(x,y) &= e^{-\frac{x-y}{\eta_\sigma}}. \\
\end{split}
\end{equation}
This loss penalizes the incorrect ordering of pairs in terms of the cumulative probability at their event time.
If an incident $i$ ends before $j$, then we would expect $\hat{F}(\tau_i \, \, | \, \, \bm{x}_i)$ to be larger than $\hat{F}(\tau_i \, \, | \, \, \bm{x}_j)$, and if so this pair is considered correctly ordered.
Large deviations from correct ordering are penalized by $\eta(x,y)$.
The total loss function is then the sum of $\mathcal{L}_1$ and $\mathcal{L}_2$.
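The ranking term $\mathcal{L}_2$ can be sketched as a double loop over comparable pairs (the scale $\eta_\sigma$ is a hyper-parameter; the value below is illustrative):

```python
import numpy as np

def ranking_loss(tau, F_at_tau_i, eta_sigma=0.1):
    """L2 term: for each pair with tau_i < tau_j, the penalty is small when
    F(tau_i | x_i) exceeds F(tau_i | x_j) by a clear margin, large otherwise.

    F_at_tau_i[i, k] holds the estimated F(tau_i | x_k)."""
    loss = 0.0
    for i in range(len(tau)):
        for j in range(len(tau)):
            if i != j and tau[i] < tau[j]:
                loss += np.exp(-(F_at_tau_i[i, i] - F_at_tau_i[i, j]) / eta_sigma)
    return loss
```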
The hyper-parameter grids used for all machine learning models can be found in the supplementary material, and all models are trained using 100 instances of random search.
\subsection{Dynamic Methods}\label{sec:DynamicModels}
To this point, all models discussed in this section have been static; that is, an individual's covariate vector $\bm{x}_i$ is passed through some model, and an estimate of its hazard function, failure function or alternative is obtained.
However, in practice our specific application contains a significant amount of information, not available at the start of the incident, that may be useful in determining the duration.
Examples in the traffic domain include a police report made on the scene, recovery information and, of specific use to us, the time-series provided by the sensors along the road.
A significant incident on a road network could lead to speed drops, flow breakdown and travel time spikes, all of which will be evident when we inspect the time-series as the incident progresses.
However, the recovery of the link to normal operating conditions is closely tied to these time-series, firstly through the level of the speed series (as this defines how far from a baseline we are), but one could imagine much richer indicators of traffic state can be mined from them.
Recalling \textbf{Figure 2B}, we see significant structure in the series: a linear drop near the start of the incident, an unstable oscillation period at lower speeds, then a recovery to normal conditions.
Further examples of these series for incident periods can be found in the supplementary material.
A number of methods have been suggested to handle dynamic predictions in a survival analysis setting.
\subsubsection{Landmarking}\label{sec:Landmarking}
With any dynamic prediction approach, the goal is to provide estimates of a hazard function, survival function or similar at some time $t$, conditioned on the fact that the individual has survived to time $t$ and any covariates they provide.
A simple method to do this is known as `landmarking' and is discussed in \cite{landmark_analysis_at_the_25_year_landmark_point}.
We note from the outset that landmarking is similar to truncated regression discussed in section \ref{sec:LitReview}, however this terminology is consistent with the wider survival analysis literature.
To carry out landmarking, one first specifies a set of `landmark times' $\{ t_{LM_1}, t_{LM_2}, \dots, t_{LM_K} \}$ at which we want to make dynamic predictions.
One then chooses some survival model, for example a Cox model, and the hazard function at landmark time $t_{LM_j}$ becomes:
\begin{equation}
\lambda_i(t \, \, | \, \, \bm{x}_i(t_{LM_j}), t_{LM_j}) = \lambda_0(t \, \, | \, \, t_{LM_j})e^{ \bm{x}_i(t_{LM_j})'\beta(t_{LM_j})}
\end{equation}
with $t_{LM_j} \leq t < t_{LM_j} + \Delta t$, for some $\Delta t$ defining how far ahead we are interested in looking.
Notice how, compared to Eq.~(\ref{equ:CoxHazard}), the covariate values $\bm{x}_i$ are replaced with those known at time $t_{LM_j}$ and the regression coefficients and baseline hazard can vary based on landmark time.
At each landmark time, only incidents that are still ongoing are retained, so the model is therefore conditioned on surviving up to this landmark time.
To account for potential time-varying effects and avoid misspecification of the regression parameters, events that occur after $t_{LM_j} + \Delta t$ are administratively censored, that is they are marked as censored if they survived past the look-ahead time of interest.
Such a model is simple to implement as one can refine the dataset at different times to produce dynamic models. However, as the landmark time grows and less data becomes available, some power may be lost when drawing statistical conclusions.
To implement these models, we choose landmark times $t_{LM}$ of $\{ 0, 15, 30, 45, 60, 120 \}$ minutes and horizons $\Delta t$ of $\{ 5, 15, 30, 45, 60, 120, 180, 240 \}$ minutes and display results throughout section \ref{sec:Results}.
Finally, the landmarking framework can be applied with models other than a Cox model, so we consider both Cox and RSF landmarking models as two candidate dynamic prediction models, with RSF offering a non-linear alternative.
The same was done to compare to dynamic models in \cite{dynamic_deephit_a_deep_learning_approach_for_dynamic_survival_analysis_with_competing_risks_based_on_longitudinal_data}.
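The landmark dataset construction can be sketched as follows (durations in minutes; a full implementation would also update each incident's covariates to their values at $t_{LM}$):

```python
def landmark_dataset(durations, t_lm, dt):
    """(remaining_time, event_observed) pairs for incidents still ongoing at
    the landmark time, administratively censored at t_lm + dt."""
    out = []
    for tau in durations:
        if tau <= t_lm:
            continue                    # already ended: not at risk at t_lm
        remaining = tau - t_lm
        if remaining > dt:
            out.append((dt, 0))         # administratively censored
        else:
            out.append((remaining, 1))  # event observed within the horizon
    return out
```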
\subsubsection{MATCH-Net Based Sliding Window Model}\label{sec:SlidingWindows}
The models considered so far require us to manually engineer features from the time-series variables to incorporate them as covariates.
As discussed, we use the level and gradient of the residual series, as these will indicate both how close the link is to reaching standard behaviour, and if the situation is getting better or worse.
However, gradients computed over short windows can be noisy, while those computed over large windows may significantly delay the identification of features.
Instead, we would like some automated method that, given a time-series, is able to learn meaningful features from it and incorporate them into predictions.
Such a method is proposed in \cite{dynamic_prediction_in_clinical_survival_analysis_using_temporal_convolutional_networks}, where the authors detail a sliding window model which they name MATCH-Net.
We note that the algorithm is designed to make dynamic predictions accounting for missing data, although in our application we do not have missing data so are interested in the dynamic prediction aspect only.
In this model, a window of longitudinal measurements is fed through a convolutional neural network (CNN), with the convolutions learning features from the time-series that are then used for prediction of risk in some look-ahead window.
The model slides across the data, shifting the window each time, meaning features are updated as time progresses and predictions are also shifted forward.
It is upon this we base our sliding window methodology.
Specifically, we take a historical window of length $w$, and at time $t$ feed the time-series from time $t-w$ to $t$ through a CNN, where the filters in the CNN aim to derive features from the series without any manual specification of what they should be.
We then concatenate the features output by the CNN with the time-invariant features, and then pass these through a series of fully connected layers.
In \cite{dynamic_prediction_in_clinical_survival_analysis_using_temporal_convolutional_networks}, the output layer was a discrete space upon which a softmax activation was applied, and we again consider this, a mixture of log-normal distributions and a kernel smoothed output distribution.
At each input time, we consider a window ahead of the same length as in the fixed case, and treat $w$ as a hyper-parameter to optimize.
Since this model is more complex and has far more parameters than in the former case, we consider the discrete distribution to be piecewise constant for 5 minute intervals.
As a result, the output space decreases in size by 80\% without sacrificing too much freedom.
A schematic of the network architecture is given in \textbf{Figure 3}, with the output layer left intentionally vague to be clear that we consider multiple different forms of output.
\begin{figure}[ht!]
\centering
\includegraphics[width=\textwidth]{./Images/Figure_3_Network.pdf}
\caption{Network schematic for sliding window model. We pass filters across the residual time-series to engineer features from each time-series, then concatenate these with the time invariant features to create a feature vector, which is passed through a series of fully connected layers, and some output layer is applied to the result. The example shown is for a single traffic incident being passed through the network. The number of boxes for features is not to scale. The window of time-series represents 3 variables and a window size of 7 in this simple example. }\label{fig:NetworkDiagram}
\end{figure}
\section{Results}\label{sec:Results}
A point infrequently discussed in the context of traffic incidents is that there are multiple criteria that define a `good' hazard model, and multiple ways to measure this in the dynamic setting.
We discuss some of these ways in the text below.
We note also that elastic net regularization is applied to all deep learning methods, and the optimal Cox and AFT models are selected though inspection of sample-size adjusted Akaike information criterion (AIC) to avoid over-fitting.
\subsection{Discriminative Performance - Concordance Index}
The concordance index (C-index) has different definitions in the static and dynamic setting.
In the static setting, we write it as:
\begin{equation}\label{equ:CIndexFixed}
C = \mathbb{P}\left( \hat{F}(\tau_i \, \, | \, \, \bm{x}_i ) > \hat{F}(\tau_i \, \, | \, \, \bm{x}_j ) \, \, | \, \, \tau_i < \tau_j \right).
\end{equation}
Eq.~(\ref{equ:CIndexFixed}) is the so called `time dependent' definition used in \cite{deephit_a_deep_learning_approach_to_survival_analysis_with_competing_risks}, accounting for the fact that we care about the entire function $\hat{F}$ and not a single point value.
In the dynamic setting, it is written given prediction time $t$ and evaluation time $\Delta t$ as:
\begin{equation}\label{equ:CIndexVaried}
C(t, \Delta t) = \mathbb{P}\left( \hat{F}(t+\Delta t \, \, | \, \, \bm{x}_i(t)) > \hat{F}(t+\Delta t \, \, | \, \, \bm{x}_j(t)) \, \, | \, \, \tau_i < \tau_j, \tau_i < t + \Delta t \right)
\end{equation}
The only difference is that we now compute the C-index at a given prediction time and horizon rather than over the entire dataset; this is the definition given in \cite{dynamic_deephit_a_deep_learning_approach_for_dynamic_survival_analysis_with_competing_risks_based_on_longitudinal_data}.
In computing this, we take the $\hat{F}$ values at time $t + \Delta t$ and compare pairs in which incident $i$ actually ended within the horizon.
As described in \cite{dynamic_deephit_a_deep_learning_approach_for_dynamic_survival_analysis_with_competing_risks_based_on_longitudinal_data}, such a measure compares the ordering of pairs.
If individual $i$ experienced an event before individual $j$, then we would expect a good model to correctly assign more chance of an event to individual $i$ than $j$.
A model with perfect C-index, given $N$ traffic incidents, will perfectly predict the order in which the incidents will end.
This idea stems from viewing survival analysis as a ranking problem, and since we compare the CDF for two events, we see that it incorporates the entire history from a prediction time up to an evaluation time, not just a single point measurement.
A random model will achieve a C-index on average of 0.5, and a perfect model will attain a value of 1.0, so these are reference values to consider when interpreting this measure.
We formally compute Eq.~(\ref{equ:CIndexVaried}) given our dataset by evaluating:
\begin{equation}\label{equ:CIndexDynamic}
\begin{split}
C(t, \Delta t) &\approx \frac{ \sum_{i \neq j}A_{i,j} \cdot \mathbbm{1}\left( \hat{F}(t+\Delta t \, \, | \, \, \bm{x}_i(t)) > \hat{F}(t+\Delta t \, \, | \, \, \bm{x}_j(t)) \right) }{ \sum_{i \neq j} A_{i,j} } \\
A_{i,j} &= \mathbbm{1}\left( \tau_i < \tau_j, \tau_i < t + \Delta t \right)
\end{split}
\end{equation}
where we simply evaluate empirically how often the ordering is correct conditioned on the requirements.
The same is true for the static case.
If two incidents happen to give exactly the same CDF values when evaluating, we take the convention of adding 0.5 to the total rather than 0 or 1, following the convention in \cite{multivariable_prognostic_models_issues_in_developing_models}.
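The empirical estimate in Eq.~(\ref{equ:CIndexDynamic}), including the 0.5 convention for ties, can be sketched directly. The function name and argument layout are our own; the logic follows the equation and the tie-handling convention above.

```python
import numpy as np

def dynamic_c_index(F_hat, tau, t, dt):
    """Empirical estimate of the dynamic C-index: F_hat[i] holds the
    predicted CDF value F(t+dt | x_i(t)) for incident i, tau[i] its true
    duration.  Ties in F_hat contribute 0.5 to the numerator, following
    the convention adopted in the text."""
    F_hat, tau = np.asarray(F_hat, float), np.asarray(tau, float)
    num = den = 0.0
    for i in range(len(tau)):
        if not tau[i] < t + dt:
            continue                      # i must end within the horizon
        for j in range(len(tau)):
            if i == j or not tau[i] < tau[j]:
                continue                  # otherwise A_{i,j} = 0
            den += 1.0                    # comparable pair: A_{i,j} = 1
            if F_hat[i] > F_hat[j]:
                num += 1.0                # ordering predicted correctly
            elif F_hat[i] == F_hat[j]:
                num += 0.5                # tie convention
    return num / den if den else float("nan")
```

For example, with true durations `[10, 50, 100]`, prediction time 0 and horizon 30, only the incident with duration 10 ends within the horizon, and a model assigning it the highest CDF value scores a C-index of 1.0.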
\subsection{Calibration Performance - Brier Score}
The Brier score measures how well calibrated a model is, comparing the binary label (1 if an event has happened by some time, 0 otherwise) with the model prediction at that time.
Formally, we write:
\begin{equation}
BS(t, \Delta t) = \frac{1}{N}\sum_{i=1}^N \left( \mathbbm{1}\left( \tau_i < t + \Delta t \right) - \hat{F}(t+\Delta t \, \, | \, \, \bm{x}_i(t)) \right)^2
\end{equation}
where we sum over all events still active at time $t$, and ask whether incident $i$ ended before $t + \Delta t$.
If it did, then we would expect a good model to have a high CDF value at this point, with 1 being perfect (i.e. predicting the incident would end by $t + \Delta t$ for certain).
On the other hand, if the incident did not end before $t + \Delta t$, then we would expect a low CDF value.
This definition is that proposed in the supplementary material of \cite{dynamic_deephit_a_deep_learning_approach_for_dynamic_survival_analysis_with_competing_risks_based_on_longitudinal_data}.
In a sense, this measures the mean square error of a probabilistic forecast of a binary outcome.
In terms of reference values, a model that outputs a survivor function value equal to 0.5 at a particular time will have a Brier score of 0.25, so lower values than this are desirable, and a perfect model will achieve a score of 0.
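The Brier score above amounts to a few lines of code. This is a sketch under the definitions just given; the function name and argument layout are our own.

```python
import numpy as np

def dynamic_brier_score(F_hat, tau, t, dt):
    """Brier score at prediction time t and horizon dt, over incidents still
    active at t: mean squared difference between the event indicator
    1(tau_i < t + dt) and the predicted CDF value F(t+dt | x_i(t))."""
    F_hat, tau = np.asarray(F_hat, float), np.asarray(tau, float)
    active = tau > t                      # incidents not yet ended at t
    label = (tau[active] < t + dt).astype(float)
    return float(np.mean((label - F_hat[active]) ** 2))
```

A model that always outputs a CDF value of 0.5 scores exactly the 0.25 reference value mentioned above, and a model predicting each binary outcome perfectly scores 0.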
\subsection{Point-Wise Performance - Mean Absolute Percentage Error}
Whilst C-index and Brier score are used throughout the survival literature, we also note that mean absolute percentage error (MAPE) is used throughout the traffic literature and has some practical relevance to our application.
Since we have no censoring, we know the true duration for each traffic incident.
As such, we can evaluate, for every data-point, the error between a point prediction and the true duration.
A natural choice for such a point prediction is the median of each model's output distribution \cite{predictiion_performance_of_survival_models}, \cite{an_information_based_time_sequential_approach_to_online_incident_duration_prediction}, \cite{competing_risk_mixture_model_and_text_analysis_for_sequential_incident_duration_prediction}, as the distribution of traffic incident durations is known to be heavy tailed.
We can then ask what the point-wise error is for each model.
Note that the C-index and Brier score concern the accuracy of the output distribution, whereas this concerns a single point taken from that distribution.
Highways England currently states that NTIS are measured on their prediction of the `likely delay associated with an event.'
Specifically, NTIS is scored as follows.
One aggregates all incidents lasting over 1 hour for which a predicted return-to-profile time was made at their half-way point.
The MAPE between these predictions at the half-way point and the true values is then computed.
The target for NTIS predictions is for this to be below 35\%, and it is stated in \cite{highways_englands_provison_of_information_to_road_users} that the current value in practice is 35.49\%.
There is of course a problem with this criterion: a `perfect' model by this standard would simply always predict double the elapsed duration, which optimises the prediction at the mid-point but is of no practical use otherwise.
Regardless, this rough measure allows us to frame our work in the context of the practical considerations traffic operators are currently working towards.
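Our reading of the NTIS criterion can be sketched as follows; the exact filtering NTIS applies is an assumption on our part, and the function name is hypothetical.

```python
import numpy as np

def halfway_mape(predicted, true_durations, min_duration=60.0):
    """Sketch of the NTIS-style score described above (our reading of the
    published criterion): MAPE between point predictions made at each
    incident's half-way point and the true durations, restricted to
    incidents lasting over `min_duration` minutes.  predicted[i] is the
    model's median duration predicted at the half-way point of incident i."""
    p = np.asarray(predicted, float)
    d = np.asarray(true_durations, float)
    keep = d > min_duration               # "lasted over 1 hour" filter
    return float(100.0 * np.mean(np.abs(p[keep] - d[keep]) / d[keep]))
```

Note the loophole discussed above: a model that always predicts double the elapsed time scores zero under this measure when evaluated only at the half-way point.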
\subsection{Static Prediction Models}
We begin by considering how a range of models perform in the static sense.
For this, we use only the fixed covariate information available at the start of the incident to fit the models.
We apply each of the discussed models, and show results for all metrics in \textbf{Table 2}.
\begin{table}[ht!]
\centering
\caption{Performance measures for models in a static setting, where we make only a single prediction per incident using a set of time-invariant covariates. Optimal values are highlighted in bold. AFT (LN) - An accelerated failure time model assuming a log-normal distribution of incident durations. AFT (W) - An accelerated failure time model assuming a Weibull distribution of incident durations. Cox - A linear Cox regression model. RSF - Random survival forest. NN (LN) - A feed-forward neural network model with an output layer that parametrises a mixture of log-normal distributions. NN (NP) - A feed-forward neural network model with a non-parametric output layer. NN (Kernel) - A feed-forward neural network with a kernel smoothed output.}\label{table:StaticResults}
\resizebox{\textwidth}{!}{%
\renewcommand{\arraystretch}{1.25}
\begin{tabular}{|c|c|c|}
\hline
\multirow{3}{*}{Model} & \multirow{3}{*}{\makecell{C-\\Index}} & Point \\
& & -Wise \\
& & MAPE \\
\hline
AFT (LN) & 0.624 & 40.677 \\
\hline
AFT (W) & 0.624 & 38.543 \\
\hline
Cox & 0.626 & 38.545 \\
\hline
RSF & \textbf{0.676} & 39.961 \\
\hline
NN (LN) & 0.666 & 41.401 \\
\hline
NN (NP) & 0.647 & \textbf{37.416} \\
\hline
NN (Kernel) & 0.659 & 39.332 \\
\hline
\end{tabular}
~
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{9}{|c|}{Brier Score} \\
\hline
\multicolumn{8}{|c|}{Prediction Horizon (minutes)} & \multirow{2}{*}{Mean} \\
\cline{1-8}
5 & 15 & 30 & 45 & 60 & 120 & 180 & 240 & \\
\hline
\textbf{0.000} & 0.052 & 0.110 & 0.146 & 0.185 & 0.231 & 0.168 & 0.102 & 0.124 \\
\hline
\textbf{0.000} & 0.052 & 0.113 & 0.148 & 0.186 & 0.226 & 0.164 & 0.100 & 0.124 \\
\hline
\textbf{0.000} & 0.052 & 0.113 & 0.149 & 0.186 & 0.226 & 0.164 & 0.100 & 0.124 \\
\hline
\textbf{0.000} & \textbf{0.048} & \textbf{0.103} & \textbf{0.134} & \textbf{0.167} & \textbf{0.210} & \textbf{0.149} & \textbf{0.093} & \textbf{0.113} \\
\hline
\textbf{0.000} & \textbf{0.049} & 0.106 & 0.141 & 0.178 & 0.221 & 0.157 & \textbf{0.094} & 0.118 \\
\hline
\textbf{0.000} & 0.050 & 0.108 & 0.142 & 0.179 & 0.223 & 0.163 & 0.101 & 0.121 \\
\hline
\textbf{0.000} & 0.049 & 0.104 & 0.137 & 0.173 & 0.218 & 0.157 & 0.097 & 0.117 \\
\hline
\end{tabular}%
}
\end{table}
We see the ordering of incident durations, measured by the C-index, attains values between 0.624 and 0.676.
All models are informative, beating the 0.5 reference value, and the biggest gains in C-index are seen when we go from the linear to non-linear modelling frameworks.
The RSF achieves the optimal C-index, followed by the neural network with a mixture of log-normals.
In terms of point-wise error, we do not make predictions at the half-way point of incidents in this setting; we make them only at the start of each incident.
Measuring for all incidents longer than 60 minutes, we see that all models achieve a MAPE between 37\% and 41\%, with the best model being the non-parametric neural network.
No model achieves a MAPE of less than 35\%.
Finally, the optimal Brier score is always achieved by the RSF method, with the most noticeable differences observed at horizons of 120 and 180 minutes.
There is not much to distinguish many of these models.
Ultimately, one might suggest that in a static setting an RSF offers a good compromise between performance as measured by C-index, Brier score and MAPE; however, if MAPE is the single desired criterion, a non-parametric neural network model would be preferred.
\subsection{Dynamic Prediction Models}
We now consider the models in a dynamic setting.
We consider C-index as defined in Eq.~(\ref{equ:CIndexDynamic}), and show results for it at various prediction times and horizons in \textbf{Table 3}.
Initially, the RSF achieves optimal C-index across all horizons when predicting at $t=0$.
As time of prediction increases, we see a strong favouring of neural network models, specifically the sliding window model with a kernel smoothed output achieves the optimal C-index in most cases.
There is a systematic preference for the non-parametric sliding window models compared to others at all prediction horizons when considering prediction times of 30 minutes or greater.
Even at a prediction time of 15 minutes, the non-parametric sliding window models are preferred when considering 180 and 240 minute horizons.
As a general summary of \textbf{Table 3}, out of all $47$ prediction time and prediction horizon pairs considered, the optimal model in terms of C-index is the RSF roughly 34\% of the time, the sliding window neural network with kernel output 43\% of the time, and the sliding window neural network with non-parametric output the remainder.
Averaged over all horizons, we see that one would prefer the RSF model when initially making predictions, but all prediction times after 15 minutes favour the kernel smoothed output, with the non-parametric neural network often similar in performance.
The neural network model parametrising a mixture of log-normals achieved the highest C-index among the neural network models in the static case, closely following the RSF model, yet it never wins in the dynamic case.
This suggests that when we provide the time-series features, the RSF makes better use of them initially, and the other neural network models make better use of them as time passes.
All models achieve C-index values higher than the reference value of 0.5 across all prediction times and horizons, showing their predictions remain informative.
One point of note from \textbf{Table 3} is that the Cox model has quite poor C-index compared to the alternatives considered when predicting at a horizon of 5 minutes.
We believe this is due to the amount of administrative censoring introduced at such a short horizon.
If we look to \cite{dynamic_prediction_in_clinical_survival_analysis}, an assumption of the Cox landmarking model is that there is not too much censoring at the horizon time.
For a very short horizon of 5 minutes, almost all incidents last longer than this, and hence, when we are applying our administrative censoring, this assumption becomes invalid, and we suspect this is why the Cox model has poor results at this horizon.
\begin{table}[ht!]
\centering
\caption{C-Index values for considered models, across a range of different prediction times (when predictions are made) and prediction horizons (at what time after the prediction time they are evaluated). Higher values show a better model. Optimal values for each prediction time - prediction horizon pair are shown in bold. Cox - A linear Cox landmarking model. RSF - Random survival forest landmarking model. SW (LN) - Sliding window with log-normal mixture output. SW (NP) - Sliding window with non-parametric output. SW (Kernel) - Sliding window with kernel smoothed output.}\label{table:DynamicCIndexScores}
\resizebox{\textwidth}{!}{\renewcommand{\arraystretch}{1.25}\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
Prediction Time & \multirow{2}{*}{Model} & \multicolumn{8}{c|}{Prediction Horizon (minutes)} & Mean Over \\
\cline{3-10}
(minutes) & & 5 & 15 & 30 & 45 & 60 & 120 & 180 & 240 & Horizons \\
\hline
\multirow{6}{*}{$t = 0$} & Cox & - & 0.851 & 0.799 & 0.754 & 0.712 & 0.654 & 0.642 & 0.638 & 0.721 \\
& RSF & - & \textbf{0.870} & \textbf{0.832} & \textbf{0.803} & \textbf{0.774} & \textbf{0.698} & \textbf{0.667} & \textbf{0.651} & \textbf{0.756} \\
& SW (LN) & - & 0.766 & 0.709 & 0.673 & 0.650 & 0.606 & 0.599 & 0.597 & 0.657 \\
& SW (NP) & - & 0.798 & 0.743 & 0.705 & 0.682 & 0.642 & 0.637 & 0.634 & 0.692 \\
& SW (Kernel) & - & 0.823 & 0.757 & 0.717 & 0.689 & 0.641 & 0.634 & 0.630 & 0.699 \\
\hline
\multirow{6}{*}{$t = 15$} & Cox & 0.513 & 0.864 & 0.784 & 0.732 & 0.698 & 0.648 & 0.639 & 0.637 & 0.689 \\
& RSF & \textbf{0.956} & \textbf{0.893} & \textbf{0.826} & \textbf{0.788} & \textbf{0.751} & \textbf{0.686} & 0.662 & 0.653 & \textbf{0.777} \\
& SW (LN) & 0.927 & 0.851 & 0.773 & 0.732 & 0.694 & 0.644 & 0.633 & 0.632 & 0.736 \\
& SW (NP) & 0.947 & 0.884 & 0.811 & 0.772 & 0.731 & 0.678 & \textbf{0.669} & \textbf{0.666} & 0.770 \\
& SW (Kernel) & 0.953 & 0.891 & 0.815 & 0.774 & 0.733 & 0.679 & 0.667 & 0.662 & 0.772 \\
\hline
\multirow{6}{*}{$t = 30$} & Cox & 0.529 & 0.861 & 0.770 & 0.731 & 0.702 & 0.662 & 0.652 & 0.648 & 0.664 \\
& RSF & 0.921 & 0.880 & 0.795 & 0.760 & 0.738 & \textbf{0.699} & 0.676 & 0.662 & 0.766 \\
& SW (LN) & 0.947 & 0.867 & 0.774 & 0.735 & 0.707 & 0.662 & 0.653 & 0.653 & 0.750 \\
& SW (NP) & 0.960 & 0.905 & 0.803 & 0.761 & 0.733 & 0.689 & \textbf{0.681} & \textbf{0.677} & 0.776 \\
& SW (Kernel) & \textbf{0.971} & \textbf{0.907} & \textbf{0.810} & \textbf{0.768} & \textbf{0.739} & 0.691 & 0.679 & 0.675 & \textbf{0.780} \\
\hline
\multirow{6}{*}{$t = 45$} & Cox & 0.504 & 0.859 & 0.796 & 0.764 & 0.724 & 0.679 & 0.667 & 0.661 & 0.707 \\
& RSF & 0.970 & 0.884 & 0.817 & 0.788 & \textbf{0.757} & \textbf{0.706} & 0.684 & 0.654 & 0.783 \\
& SW (LN) & 0.950 & 0.852 & 0.778 & 0.750 & 0.718 & 0.673 & 0.663 & 0.662 & 0.756 \\
& SW (NP) & 0.967 & 0.880 & 0.813 & 0.784 & 0.749 & 0.703 & \textbf{0.692} & \textbf{0.688} & 0.785 \\
& SW (Kernel) & \textbf{0.974} & \textbf{0.893} & \textbf{0.821} & \textbf{0.787} & 0.750 & 0.701 & 0.687 & 0.683 & \textbf{0.787} \\
\hline
\multirow{6}{*}{$t = 60$} & Cox & 0.578 & 0.872 & 0.796 & 0.755 & 0.723 & 0.684 & 0.670 & 0.668 & 0.718 \\
& RSF & 0.954 & 0.887 & 0.811 & 0.782 & 0.742 & 0.700 & 0.674 & 0.654 & 0.776 \\
& SW (LN) & 0.928 & 0.871 & 0.796 & 0.759 & 0.726 & 0.681 & 0.672 & 0.671 & 0.763 \\
& SW (NP) & 0.953 & 0.903 & 0.830 & 0.787 & 0.755 & 0.711 & \textbf{0.702} & \textbf{0.699} & 0.793 \\
& SW (Kernel) & \textbf{0.969} & \textbf{0.912} & \textbf{0.834} & \textbf{0.793} & \textbf{0.758} & \textbf{0.712} & 0.701 & 0.698 & \textbf{0.797} \\
\hline
\multirow{6}{*}{$t = 120$} & Cox & 0.522 & 0.850 & 0.804 & 0.781 & 0.750 & 0.706 & 0.692 & 0.683 & 0.724 \\
& RSF & 0.961 & 0.889 & 0.839 & 0.807 & 0.777 & 0.731 & 0.697 & 0.688 & 0.799 \\
& SW (LN) & 0.944 & 0.878 & 0.822 & 0.799 & 0.769 & 0.718 & 0.715 & 0.713 & 0.795 \\
& SW (NP) & 0.968 & 0.896 & 0.852 & 0.824 & 0.791 & \textbf{0.744} & \textbf{0.739} & \textbf{0.735} & 0.819 \\
& SW (Kernel) & \textbf{0.986} & \textbf{0.904} & \textbf{0.853} & \textbf{0.825} & \textbf{0.793} & 0.743 & 0.737 & 0.732 & \textbf{0.822} \\
\hline
\end{tabular}}
\end{table}
We further show the Brier scores for each model in \textbf{Table 4}.
Again, we observe that initially, the random survival forest achieves optimal scores across all horizons, however as time of prediction increases we gradually see the sliding window neural network with kernel smoothed output start to achieve better Brier scores for short prediction horizons.
This is again systematic, and we see for a prediction time of 120 minutes that the optimal model is the sliding window neural network with kernel smoothed output at horizons up to and including 45 minutes, but for a prediction time of 45 minutes it is only optimal for horizons up to and including 15 minutes.
One could postulate that initially the time-series provide less useful information than the fixed features: at the very start of an incident, we see only the state of the link before the incident, which may have been largely seasonal.
However, as time progresses, we obtain more informative features specific to individual incidents, and in this case the fact that the sliding window method engineers its own features, rather than relying on the noisy gradient and level values we manually input to the RSF model, may prove more useful.
Despite this, if a duration is very long, say 4 hours, and we make a prediction 60 minutes into it, how much do we truly expect to gain from inspecting the time-series so far? It may be that there is simply no sign of recovery, and all we can really conclude is that speed has been slow for a long time and shows no other clear features.
Another point of note when considering Brier score is that RSF appeared to perform well compared to a non-parametric neural network model in other works.
If we look to the supplementary material of \cite{dynamic_deephit_a_deep_learning_approach_for_dynamic_survival_analysis_with_competing_risks_based_on_longitudinal_data}, we see that a neural network with a non-parametric output did not consistently improve upon the Brier score achieved by a RSF model (see table VI in the supplementary material of the cited reference).
It is unclear therefore if there is some fundamental reason for this in the modelling framework, as two entirely different datasets and applications appear to have observed the same behaviour.
Despite this, all models achieve Brier scores below the reference value of 0.25.
\begin{table}[ht!]
\centering
\caption{Brier scores for considered models, across a range of different prediction times (when predictions are made) and prediction horizons (at what time after the prediction time they are evaluated). Lower values indicate a better model. Optimal values for each prediction time - prediction horizon pair are shown in bold. All keys are as defined in \textbf{Table 3}.}\label{table:DynamicBrierScores}
\resizebox{\textwidth}{!}{\renewcommand{\arraystretch}{1.25}\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
Prediction Time & \multirow{2}{*}{Model} & \multicolumn{8}{c|}{Prediction Horizon (minutes)} & Mean Over \\
\cline{3-10}
(minutes) & & 5 & 15 & 30 & 45 & 60 & 120 & 180 & 240 & Horizons \\
\hline
\multirow{6}{*}{$t = 0$} & Cox & \textbf{0.000} & 0.046 & 0.096 & 0.130 & 0.171 & 0.224 & 0.164 & 0.099 & 0.116 \\
& RSF & \textbf{0.000} & \textbf{0.041} & \textbf{0.089} & \textbf{0.120} & \textbf{0.154} & \textbf{0.205} & \textbf{0.149} & \textbf{0.091} & \textbf{0.106} \\
& SW (LN) & 0.006 & 0.096 & 0.197 & 0.260 & 0.315 & 0.317 & 0.204 & 0.115 & 0.189 \\
& SW (NP) & 0.007 & 0.082 & 0.160 & 0.213 & 0.262 & 0.290 & 0.195 & 0.113 & 0.165 \\
& SW (Kernel) & 0.006 & 0.079 & 0.159 & 0.211 & 0.261 & 0.292 & 0.197 & 0.114 & 0.165 \\
\hline
\multirow{6}{*}{$t = 15$} & Cox & 0.027 & 0.062 & 0.113 & 0.152 & 0.192 & 0.218 & 0.149 & 0.088 & 0.125 \\
& RSF & \textbf{0.018} & \textbf{0.059} & \textbf{0.107} & \textbf{0.141} & \textbf{0.177} & \textbf{0.204} & \textbf{0.138} & \textbf{0.083} & \textbf{0.116} \\
& SW (LN) & 0.020 & 0.077 & 0.157 & 0.212 & 0.266 & 0.280 & 0.176 & 0.097 & 0.161 \\
& SW (NP) & 0.019 & 0.069 & 0.134 & 0.180 & 0.230 & 0.259 & 0.170 & 0.096 & 0.145 \\
& SW (Kernel) & \textbf{0.018} & 0.070 & 0.136 & 0.181 & 0.230 & 0.259 & 0.171 & 0.097 & 0.145 \\
\hline
\multirow{6}{*}{$t = 30$} & Cox & 0.025 & 0.068 & 0.127 & 0.165 & 0.198 & 0.204 & 0.138 & 0.082 & 0.126 \\
& RSF & 0.020 & \textbf{0.064} & \textbf{0.121} & \textbf{0.157} & \textbf{0.186} & \textbf{0.190} & \textbf{0.128} & \textbf{0.076} & \textbf{0.118} \\
& SW (LN) & 0.019 & 0.074 & 0.154 & 0.204 & 0.250 & 0.255 & 0.161 & 0.090 & 0.151 \\
& SW (NP) & 0.018 & 0.067 & 0.138 & 0.183 & 0.223 & 0.236 & 0.155 & 0.088 & 0.139 \\
& SW (Kernel) & \textbf{0.017} & 0.066 & 0.137 & 0.180 & 0.218 & 0.235 & 0.156 & 0.089 & 0.137 \\
\hline
\multirow{6}{*}{$t = 45$} & Cox & 0.028 & 0.073 & 0.131 & 0.162 & 0.196 & 0.194 & 0.132 & 0.082 & 0.125 \\
& RSF & 0.018 & \textbf{0.069} & \textbf{0.125} & \textbf{0.153} & \textbf{0.185} & \textbf{0.185} & \textbf{0.125} & \textbf{0.077} & \textbf{0.117} \\
& SW (LN) & 0.021 & 0.082 & 0.157 & 0.201 & 0.245 & 0.239 & 0.154 & 0.090 & 0.149 \\
& SW (NP) & 0.019 & 0.074 & 0.138 & 0.176 & 0.217 & 0.221 & 0.148 & 0.089 & 0.135 \\
& SW (Kernel) & \textbf{0.016} & 0.070 & 0.136 & 0.173 & 0.213 & 0.219 & 0.149 & 0.089 & 0.133 \\
\hline
\multirow{6}{*}{$t = 60$} & Cox & 0.034 & 0.083 & 0.139 & 0.168 & 0.196 & 0.185 & 0.129 & 0.083 & 0.127 \\
& RSF & 0.023 & 0.079 & \textbf{0.135} & \textbf{0.161} & \textbf{0.190} & \textbf{0.177} & \textbf{0.119} & \textbf{0.077} & \textbf{0.120} \\
& SW (LN) & 0.025 & 0.082 & 0.154 & 0.199 & 0.240 & 0.228 & 0.145 & 0.090 & 0.145 \\
& SW (NP) & 0.023 & 0.073 & 0.136 & 0.176 & 0.212 & 0.210 & 0.139 & 0.089 & 0.132 \\
& SW (Kernel) & \textbf{0.019} & \textbf{0.070} & \textbf{0.135} & 0.172 & 0.208 & 0.208 & 0.140 & 0.090 & 0.130 \\
\hline
\multirow{6}{*}{$t = 120$} & Cox & 0.034 & 0.090 & 0.146 & 0.168 & 0.184 & 0.169 & 0.117 & 0.083 & 0.124 \\
& RSF & 0.024 & 0.084 & 0.132 & 0.157 & \textbf{0.174} & \textbf{0.159} & \textbf{0.109} & \textbf{0.078} & \textbf{0.115} \\
& SW (LN) & 0.026 & 0.087 & 0.148 & 0.179 & 0.213 & 0.199 & 0.137 & 0.091 & 0.135 \\
& SW (NP) & 0.023 & 0.082 & \textbf{0.128} & 0.157 & 0.189 & 0.183 & 0.130 & 0.089 & 0.123 \\
& SW (Kernel) & \textbf{0.018} & \textbf{0.076} & \textbf{0.128} & \textbf{0.155} & 0.186 & 0.183 & 0.132 & 0.090 & 0.121 \\
\hline
\end{tabular}}
\end{table}
Finally, we show the error in a point prediction made at various times throughout incidents in \textbf{Table 5}.
For reference we also include the value achieved by the fixed model to get an idea of what we are gaining from making dynamic predictions.
From \textbf{Table 5}, we see that when making a prediction after 30\% of the duration of an incident has passed, we can expect between 30\% and 33\% MAPE in that prediction.
This is around a 5-10\% improvement over the prediction made by the corresponding static models at the start of the incident.
If we predict half-way through an incident, the neural network models all achieve significantly better MAPE values than the landmarking models, with an optimal MAPE of 21.6\% achieved by the mixture of log-normals model, followed by 22\% for the non-parametric model.
The discrepancy between the sliding window and landmarking models grows as we make predictions later and later, with the sliding window models achieving an MAPE of between 16.5\% and 17.3\% compared to a value of 26.5\% for the optimal landmarking model (RSF).
Additionally, the prediction error shows very little improvement beyond the 50th percentile of an incident's duration for the RSF model, and actually increases for the Cox model, suggesting that they do not sufficiently capture signs in the time-series that indicate the end is near.
A key point of practical interest is that with the dynamic models, we do indeed achieve a MAPE value below 35\% as desired by Highways England.
Of course, we would need to obtain data for all incidents across the UK to truly ensure that we are able to maintain this on a wider scale, but as far as we are able to measure, we achieve what would be considered industrially satisfactory error rates with the dynamic models.
Whilst the landmarking models appear to plateau in point-wise performance here, we note that this is partly due to the plateauing or noisy error they exhibit when making predictions at large landmarking times, when only very few incidents remain active.
We visualize the error per minute into incidents in the supplementary material, which shows this.
The plots within the supplementary material are more akin to something one can find in other dynamic works, for example Figure 2 in \cite{competing_risk_mixture_model_and_text_analysis_for_sequential_incident_duration_prediction}.
\begin{table}[ht!]
\centering
\caption{MAPE at various points for incidents, all of which are at least 60 minutes long. The optimal model for each prediction point is shown in bold. The point prediction is generated as the median of the output distribution from each model. All keys are as defined in \textbf{Table 3}.}\label{table:DynamicErrorAtPercentiles}
\renewcommand{\arraystretch}{1.25}\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multirow{3}{*}{Model} & \multirow{3}{*}{\makecell{MAPE \\ Static Model}} & \multicolumn{4}{c|}{MAPE Dynamic Model} \\
\cline{3-6}
& & \multicolumn{4}{c|}{Percentile Into Incident Prediction Made at} \\
\cline{3-6}
& & 30th & 50th & 70th & 90th \\
\hline
Cox & 38.545 & 32.513 & 31.002 & 32.180 & 35.923 \\
\hline
RSF & 41.607 & \textbf{30.286} & 27.707 & 26.478 & 25.319 \\
\hline
SW (LN) & 37.416 & 32.839 & \textbf{21.576} & \textbf{16.506} & 11.319 \\
\hline
SW (NP) & 41.401 & 31.660 & 21.998 & 17.069 & 10.399 \\
\hline
SW (Kernel) & 39.332 & 31.056 & 22.432 & 17.319 & \textbf{10.040} \\
\hline
\end{tabular}
\end{table}
\subsection{Do Temporal Convolutions Improve Predictions?}
As we have applied methods from \cite{dynamic_prediction_in_clinical_survival_analysis_using_temporal_convolutional_networks} to formulate a sliding window model, we have naturally included temporal convolutions to generate information from the time-series.
However, a valid question is whether these are required in our application, or whether the simple levels and gradients retain enough information to make informative predictions.
We test this by implementing a model without the CNN structure and instead feeding an input vector consisting of the time-invariant features and the level and gradients computed as in the landmarking case to a feed forward network.
Doing so results in a model that achieves a worse Brier score and C-index across all prediction time and horizon pairs, and a worse error at the half-way point.
Exact results are given in the supplementary material.
\section{Feature Importance and Model Interpretability}\label{sec:VariableImportance}
Variable importance is a topic often addressed in the literature, and we offer some discussion of it here for both the RSF model and the non-parametric neural network model, chosen for simplicity compared to the kernel model.
\subsection{Random Survival Forest Variable Importance}\label{sec:RSFVariableImportance}
As RSF are adaptations of random forest methods, standard variable importance metrics are well explored and readily implemented.
Recall that trees are trained on a bootstrap-sampled dataset, meaning that for each tree a set of `out-of-bag' observations remains, which will be used for measuring variable importance.
Given a trained forest and some variable of interest $x$, we drop the out-of-bag data for each tree down the tree, and whenever a split on $x$ is encountered, we assign a daughter node at random instead of evaluating based on the value of $x$.
We then compute the estimates from the model in this way; the variable importance for $x$ is the prediction error of the new ensemble (ignoring the $x$ value) minus the prediction error of the original ensemble.
A large variable importance suggests that a variable is highly useful in accurately predicting the output.
We compute the importance for all features, then scale the importance values by dividing by the largest.
This yields variable importance on a scale from 0 to 1 and we plot particularly important variables in \textbf{Figure 4}.
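The importance loop and the normalization used for \textbf{Figure 4} can be sketched generically. The callables below are hypothetical stand-ins: `predict_error_noised` abstracts the random-daughter re-evaluation described above, which in practice is internal to the RSF implementation.

```python
import numpy as np

def normalized_importance(raw_importance):
    """Scale raw variable-importance values (new-ensemble error minus
    original-ensemble error, as described above) so the largest equals 1."""
    v = np.asarray(raw_importance, float)
    return v / v.max()

def vimp(predict_error, predict_error_noised, X_oob, y_oob, n_features):
    """Generic sketch of the importance loop: for each feature, compare
    out-of-bag prediction error with and without that feature's
    information.  predict_error_noised(X, y, j) is a hypothetical callable
    that re-evaluates the ensemble while assigning daughter nodes at random
    whenever a split on feature j is encountered."""
    base = predict_error(X_oob, y_oob)
    return np.array([predict_error_noised(X_oob, y_oob, j) - base
                     for j in range(n_features)])
```

A large positive entry indicates that randomizing splits on that feature noticeably degrades out-of-bag predictions, i.e. the feature is useful.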
\begin{figure}[ht!]
\centering
\begin{minipage}[t]{.46\textwidth}
\includegraphics[width=\textwidth]{./Images/Figure_4_a_RSF_Importance_Horizon_5.pdf}
\centering \textbf{A}
\end{minipage}
~
\begin{minipage}[t]{.46\textwidth}
\includegraphics[width=\textwidth]{./Images/Figure_4_b_RSF_Importance_Horizon_30.pdf}
\centering \textbf{B}
\end{minipage}
\begin{minipage}[t]{.46\textwidth}
\includegraphics[width=\textwidth]{./Images/Figure_4_c_RSF_Importance_Horizon_60.pdf}
\centering \textbf{C}
\end{minipage}
~
\begin{minipage}[t]{.46\textwidth}
\includegraphics[width=\textwidth]{./Images/Figure_4_d_RSF_Importance_Horizon_180.pdf}
\centering \textbf{D}
\end{minipage}
\caption{Variable importance, as measured for the random survival forest model, for a subset of the important variables. Each plot is a different prediction horizon $h$, all shown are at a prediction time of $t=30$. The importance at each prediction horizon has been normalized such that the largest importance at any time is 1, and all others are relative to this. The rank of each variable at a given time is written beside each bar. Due to the scaling, one should focus on the ranking and relative difference between bars in each plot. \textbf{(A)}: $h=5$. \textbf{(B)}: $h=30$. \textbf{(C)}: $h=60$. \textbf{(D)}: $h=180$. }\label{fig:VariableImportanceRSF}
\end{figure}
From \textbf{Figure 4}, we see that the speed value is the most important variable for horizons of 5, 30 and 60 minutes.
At 180 minutes, the most important variable becomes the time of day.
This makes intuitive sense, as we expect the time-varying features to provide more useful information about the immediate future rather than times far into the future.
Similar reasoning applies to the importance of travel time and flow.
It is interesting to note that flow is often less important than other, time-invariant features, across all horizons.
The gradients are always less important than the residual values themselves, which may be a consequence of noise when estimating them, or of the fact that we can see short-term rises and falls in the traffic variables that do not indicate the incident is actually near ending, but rather that the traffic state is simply unstable, as in \textbf{Figure 4}.
The location of an incident is always somewhat important, ranking between sixth and fourth across all horizons.
This suggests clear heterogeneity in durations by location.
Note that with a location and length, a model should be able to identify single links in the network, so predictions can be specific to these if it improves performance.
However, the importance of length decays over horizons, and is always less than the location itself, so the coarse segmentation we have introduced for location seems more important than the specific link an incident occurs on.
We see that the season becomes increasingly important as horizon increases.
The type of incident varies between the sixth, fifth, eighth and sixth most important variable going from horizons of 5, 30, 60 and 180 minutes respectively.
\subsection{Neural Network Variable Importance}\label{sec:NNVarImportanceSHAP}
Recently, there has been a significant effort to improve the interpretability of prediction models, both those involving neural networks and more general frameworks.
Examples of this include \cite{why_should_I_trust_you_explaining_the_predictions_of_any_classifier} and \cite{learning_important_features_through_propagating_activation_differences}.
In the first, the general idea is to build a simpler `explainer' model $g$ that locally approximates some complex model $f$.
One optimizes $g$ by penalizing complexity, and assigning more weight to data-points near the one we wish to explain the prediction of, hence resulting in local accuracy.
It was then shown in \cite{a_unified_approach_to_interpreting_model_predictions} that many existing model interpretability methods could be phrased in terms of a concept from game theory known as Shapley values.
In short, they proposed explainer models of the form:
\begin{equation}\label{equ:SHAPExplainerModel}
g(z') = \phi_0 + \sum_{i=1}^M\phi_iz_i'
\end{equation}
where $z_i'$ is a binary value indicating the inclusion or exclusion of a particular feature and we have $M$ features.
They then specified three properties that one might desire in a feature attribution method: local accuracy (the explainer matches the model's output for the instance being explained), missingness (missing features receive no attributed impact) and consistency (for two models, if one's output is more sensitive to a particular feature change than the other's, that feature achieves a higher attribution value).
The authors showed that under these properties and Eq.~(\ref{equ:SHAPExplainerModel}), the $\phi_i$ values actually coincide with Shapley values from game theory.
This approach unified many existing methods, including \cite{why_should_I_trust_you_explaining_the_predictions_of_any_classifier} and \cite{learning_important_features_through_propagating_activation_differences}.
They denoted $\phi_i$ a `SHAP value'; these are appealing as they are additive, showing how particular features shift a model's prediction away from some mean $\phi_0$ to the final result for a particular data-instance.
Their absolute value shows the size of a particular feature's importance; however, one can go deeper and ask, for a given feature, whether its value is increasing or decreasing the final prediction of the model.
One actually computes the SHAP value for feature $i$ as:
\begin{equation}\label{equ:MainTexShapValue}
\phi_i =\sum_{S \subseteq \mathcal{M} \setminus \{i\} } \frac{|S|!\,(M - |S| - 1)!}{M!} \left[ F(S \cup \{i\}) - F(S) \right]
\end{equation}
where $\mathcal{M}$ is the set of all features.
In Eq.~(\ref{equ:MainTexShapValue}), we sum over all subsets of feature vectors that do not include feature $i$.
For any one of these sets $S$, we compute the difference between the model output using the features in $S$ and feature $i$, and the model output using only the features in $S$, shown by $F(S \cup \{i\}) - F(S)$.
The remaining term $\frac{|S|!(|M| - |S| - 1)!}{M!}$ accounts for all possible orderings of the feature vector.
It is upon this that we base our feature importance exploration for the neural network model.
We compute the SHAP values of the network, for the fixed features and the features output from the CNN deriving information from the time-series.
We use the implementation provided by the original authors\footnote{Implementation and examples given in \url{https://github.com/slundberg/shap}}, specifically the permutation method for computational speed and the incorporation of structured inputs.
More details on SHAP values are given in the supplementary material.
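To make Eq.~(\ref{equ:MainTexShapValue}) concrete, a from-scratch sketch of the exact Shapley computation is given below. This is illustrative only: the toy additive set function \texttt{F} is hypothetical, and the analysis in this paper uses the shap package's permutation method rather than this exponential-time enumeration.

```python
from itertools import combinations
from math import factorial

def shapley_values(F, M):
    """Exact Shapley attributions for a set function F over features 0..M-1,
    following phi_i = sum over S (subsets excluding i) of
    |S|!(M-|S|-1)!/M! * [F(S u {i}) - F(S)]."""
    phi = []
    for i in range(M):
        others = [j for j in range(M) if j != i]
        total = 0.0
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                weight = factorial(len(S)) * factorial(M - len(S) - 1) / factorial(M)
                total += weight * (F(set(S) | {i}) - F(set(S)))
        phi.append(total)
    return phi

# Hypothetical additive set function: each included feature adds a fixed amount.
contrib = {0: 2.0, 1: -1.0, 2: 0.5}
F = lambda S: sum(contrib[j] for j in S)

print(shapley_values(F, 3))  # an additive model recovers its own contributions
```

For an additive model the Shapley values recover the per-feature contributions exactly, and by the efficiency property they always sum to $F(\mathcal{M})-F(\varnothing)$; in practice this exponential-time computation is replaced by approximations such as the permutation method mentioned above.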
A point of note for this method of feature importance is that we are computing values for each output neuron in the network that correspond to particular horizons, and how this output value changes, not how some performance metric changes.
As a result, we can question if different features have more or less impact on different parts of the output distribution for a single input data-point.
First, we consider raw feature importance, that is, whether a variable has a large or small impact on the output of the model at particular horizons, showing results in \textbf{Figure 5}.
\begin{figure}[ht!]
\centering
\begin{minipage}[t]{.46\textwidth}
\includegraphics[width=\textwidth]{./Images/Figure_5_a_Bar_t_30_H_5.pdf}
\centering \textbf{A}
\end{minipage}
~
\begin{minipage}[t]{.46\textwidth}
\includegraphics[width=\textwidth]{./Images/Figure_5_b_Bar_t_30_H_30.pdf}
\centering \textbf{B}
\end{minipage}
\begin{minipage}[t]{.46\textwidth}
\includegraphics[width=\textwidth]{./Images/Figure_5_c_Bar_t_30_H_60.pdf}
\centering \textbf{C}
\end{minipage}
~
\begin{minipage}[t]{.46\textwidth}
\includegraphics[width=\textwidth]{./Images/Figure_5_d_Bar_t_30_H_180.pdf}
\centering \textbf{D}
\end{minipage}
\caption{Variable importance, as measured for the sliding window neural network model, for a subset of the important variables. Each plot is a different prediction horizon, all shown are at a prediction time of $t=30$. The importance at each prediction time is computed by taking the average of the absolute SHAP values for each feature across all query instances, and then has been normalized such that the largest importance at any horizon is 1, and all others are relative to this. The rank of each variable at a given time is written beside each bar. Due to the scaling, one should focus on the ranking and relative difference between bars in each plot. TSF stands for time-series features, which have been extracted by passing temporal convolutions across the data. \textbf{(A)}: $h=5$. \textbf{(B)}: $h=30$. \textbf{(C)}: $h=60$. \textbf{(D)}: $h=180$. }\label{fig:VariableImportanceNN}
\end{figure}
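The normalization described in the caption of \textbf{Figure 5} can be sketched as follows; the example SHAP matrix (rows are query instances, columns are features) is hypothetical.

```python
# Mean absolute SHAP value per feature, scaled so the largest equals 1,
# as described in the caption of Figure 5.
def normalized_importance(shap_values):
    n_rows = len(shap_values)
    n_cols = len(shap_values[0])
    mean_abs = [sum(abs(row[j]) for row in shap_values) / n_rows
                for j in range(n_cols)]
    top = max(mean_abs)
    return [v / top for v in mean_abs]

vals = [[0.4, -0.1, 0.2],
        [-0.6, 0.3, 0.0]]
print(normalized_importance(vals))  # [1.0, 0.4, 0.2] up to float rounding
```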
From \textbf{Figure 5}, we see that for very short horizons (5 minutes) there are a large number of time-series features with high importance.
This makes intuitive sense for the same reasoning as in the RSF case.
After the time-series features, we see the time of day, location and incident type are the features with the highest impact.
Moving to a horizon of 30 minutes, we then see the time-series features become less important, and location and time of day dominate the other features.
Note here that in the 5 minute horizon, there were lots of features with quite high importance, showing quite a number influenced the model's output, but at a horizon of 30 minutes we see two with large importance and many others with far less.
At a horizon of 60 minutes, we again see the importance of the time-series features increase, but the time of day and location are still the two most important of the time-invariant features, ranking first and fifth respectively.
One might question why the time-series resurge in importance here, and we explore this further in \textbf{Figure 6} and the analysis of it.
At a long horizon of 180 minutes, the time of day is by far the most important feature, and the location is second, but is less important relatively than it was at a horizon of 30 minutes.
A natural interpretation of this might be that for long horizons into the future, knowing if an incident will overlap with rush hour or go into lunch time or the night is a good indication of if we believe it might last a long time.
Having inspected the magnitude of SHAP values, we now question how actual feature values shift the network output, either increasing or decreasing it.
Visualizing this is more complex due to the fact that we want to plot the impact of various features, the value they attain, and if this shifted the prediction up or down.
A standard way to do this for SHAP values is to make a `beeswarm' plot, in which each data-instance is plotted as a single dot, once per each feature.
Examples of making such plots for our dataset are given in \textbf{Figure 6}.
One should read these plots as follows. Firstly, along the y-axis are features the model used, where categorical ones have been split into their one-hot encoded states.
Secondly, the x-axis displays the SHAP values, not normalized as they were in the previous analysis, showing how the particular feature shifts the model output either up or down.
Thirdly, the colour indicates the feature value. For binary features such as `Morning Rush' a high value indicates the data-point was in the morning rush.
Fourth, where many data-points had similar SHAP values for the same feature, points are expanded outwards along the y-axis, so a large vertical strip of points for a single feature indicates a high density of points at that SHAP value. The horizon is indicated by $h$ in each sub-caption, which corresponds to looking at a particular output node of the network. We split the fixed and series features to aid readability. Note that pushing the output `up' (a feature with a positive SHAP value) indicates increasing the probability mass function's value at this time-horizon.
\begin{figure}[ht!]
\centering
\begin{minipage}[t]{.30\textwidth}
\includegraphics[width=\textwidth]{./Images/Figure_6_a_Fixed_T_30_H_5.pdf}
\centering \textbf{A}
\end{minipage}
~
\begin{minipage}[t]{.30\textwidth}
\includegraphics[width=\textwidth]{./Images/Figure_6_b_Series_T_30_H_5.pdf}
\centering \textbf{B}
\end{minipage}
~
\begin{minipage}[t]{.30\textwidth}
\includegraphics[width=\textwidth]{./Images/Figure_6_c_Fixed_T_30_H_30.pdf}
\centering \textbf{C}
\end{minipage}
\begin{minipage}[t]{.30\textwidth}
\includegraphics[width=\textwidth]{./Images/Figure_6_d_Series_T_30_H_30.pdf}
\centering \textbf{D}
\end{minipage}
~
\begin{minipage}[t]{.30\textwidth}
\includegraphics[width=\textwidth]{./Images/Figure_6_e_Fixed_T_30_H_60.pdf}
\centering \textbf{E}
\end{minipage}
~
\begin{minipage}[t]{.30\textwidth}
\includegraphics[width=\textwidth]{./Images/Figure_6_f_Series_T_30_H_60.pdf}
\centering \textbf{F}
\end{minipage}
\begin{minipage}[t]{.30\textwidth}
\includegraphics[width=\textwidth]{./Images/Figure_6_g_Fixed_T_30_H_180.pdf}
\centering \textbf{G}
\end{minipage}
~
\begin{minipage}[t]{.30\textwidth}
\includegraphics[width=\textwidth]{./Images/Figure_6_h_Series_T_30_H_180.pdf}
\centering \textbf{H}
\end{minipage}
\caption{SHAP values visualising how feature values shift the output of the network at particular horizons $h$.
\textbf{(A)}: Time-invariant features, $h=5$. \textbf{(B)}: Time-varying features, $h=5$.
\textbf{(C)}: Time-invariant features, $h=30$. \textbf{(D)}: Time-varying features, $h=30$.
\textbf{(E)}: Time-invariant features, $h=60$. \textbf{(F)}: Time-varying features, $h=60$.
\textbf{(G)}: Time-invariant features, $h=180$. \textbf{(H)}: Time-varying features, $h=180$. }\label{fig:SHAPImpact}
\end{figure}
Whilst there is a large amount of information in \textbf{Figure 6}, we breakdown the main points here.
Firstly, we see that at a horizon of 5 minutes (\textbf{Figures 6a, 6b}), there is clear coherence in the features, shown by there not being a random assortment of colours across single features.
Earlier, we saw time-series features, the time of day and location were influential factors at this horizon.
Considering the time of day more finely, we see from \textbf{Figure 6a} that when incidents are in the morning rush period, the value of the PMF at this time is decreased, suggesting the model believes it is unlikely incidents at this time of day will end very quickly.
Inspecting horizons of 30 and 60 minutes, in \textbf{Figures 6c, 6e}, we see that when incidents occur in the morning rush, this increases the output values at these horizons.
Finally, there appears to be more complex behaviour at a horizon of 180 minutes, as incidents occurring in the morning rush sometimes increase, and sometimes decrease the output at this time.
If we turn instead to view how a location impacts the result, for example inspecting the SHAP values for `West' we see that attaining a value of 1 here decreases the model output at a horizon of 5 minutes (\textbf{Figure 6a}), increases it at horizons of 30 and 60 minutes (\textbf{Figures 6c, 6e}) and then decreases it again at a horizon of 180 minutes (\textbf{Figure 6g}).
Note however that since some of these features are in fact categorical and have been one-hot encoded for use with a neural network, care must be taken in interpreting the impact of such values.
In doing this analysis, we attain a SHAP value for each feature; however, every data-point has a single location value equal to 1, and the rest equal to 0.
So each location feature here will alter the neural network output, but the total impact of a data-point having a particular location will be the sum of the SHAP values for that data-point over all encoded categories.
As such, we can better visualize the impact of categorical variables, for example location, by first summing the SHAP values for a data-point for all encoded groups of a particular feature, then visualizing how the overall feature impacted predictions.
We do so in \textbf{Figure 7}.
\begin{figure}[ht!]
\centering
\begin{minipage}[t]{.46\textwidth}
\includegraphics[width=\textwidth]{./Images/Figure_7_a_t_30_H_5.pdf}
\centering \textbf{A}
\end{minipage}
~
\begin{minipage}[t]{.46\textwidth}
\includegraphics[width=\textwidth]{./Images/Figure_7_b_t_30_H_30.pdf}
\centering \textbf{B}
\end{minipage}
\begin{minipage}[t]{.46\textwidth}
\includegraphics[width=\textwidth]{./Images/Figure_7_c_t_30_H_60.pdf}
\centering \textbf{C}
\end{minipage}
~
\begin{minipage}[t]{.46\textwidth}
\includegraphics[width=\textwidth]{./Images/Figure_7_d_t_30_H_180.pdf}
\centering \textbf{D}
\end{minipage}
\caption{SHAP values for the location feature. In each plot, we have summed the SHAP values for each one-hot encoded value, and then only plotted the resulting value in the row corresponding to each data-points true feature value. This shows the overall impact of the location feature, and allows one to view this impact separately for each value it attains. \textbf{(A)}: $h=5$. \textbf{(B)}: $h=30$. \textbf{(C)}: $h=60$. \textbf{(D)}: $h=180$. }\label{fig:SHAPLocationSplit}
\end{figure}
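The per-category summation used to build \textbf{Figure 7} can be sketched as follows; the feature names and values below are hypothetical.

```python
# Sum the SHAP values of all one-hot encoded columns of a categorical feature
# for a single data-point, giving the overall impact of that category,
# as done for the location feature in Figure 7.
def categorical_shap(shap_row, groups):
    """shap_row: feature name -> SHAP value for one instance.
    groups: category name -> list of its one-hot encoded feature names."""
    return {cat: sum(shap_row[f] for f in feats) for cat, feats in groups.items()}

row = {"loc_north": 0.30, "loc_south": -0.05, "loc_east": 0.02, "type_accident": 0.10}
groups = {"location": ["loc_north", "loc_south", "loc_east"]}
print(categorical_shap(row, groups))  # {'location': 0.27} up to float rounding
```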
The more intuitive description offered in \textbf{Figure 7} allows one to see the overall impact of the location variable.
An example interpretation one could read from \textbf{Figure 7} is: for data-points where the spatial location was east, the overall impact of having this location on the model output was an increase in the output at horizons of 5 and 30 minutes, and a decrease at 60 and 180 minutes.
From this more refined view, we can see that incidents in the north east are quite varied, as their SHAP values sometimes shift up and sometimes shift down for all horizons.
Incidents in the north typically have the output increased at horizons of 5 and 30 minutes (\textbf{Figures 7a, 7b}) and then decreased at horizons of 60 and 180 minutes (\textbf{Figures 7c, 7d}).
This suggests that, for example, the model has learnt incidents in the north are of shorter duration than incidents in the south, however this can then be adjusted further by other observed features.
Locations can be compared in this way for all possible pairs.
A further question of interpretability relates to the features engineered from the time-series: what effect do these have on the model?
We previously saw that they were highly important at 5 and 60 minute horizons, but we can now use \textbf{Figure 6} to consider what impact they are having.
If we start with a horizon of 5 minutes, we see from \textbf{Figure 6b} that when the time-series features attain a high value, the model output at this horizon is increased.
We do not have an interpretable explanation of these features, but we see that they can provide quite significant shifts up in the output if their values are high.
Moving to a horizon of 30 minutes, we see from \textbf{Figure 6d} that the series features become less coherent.
Some features attain a high value and shift the output up, others down, and the overall impact is small for many features compared to the impact of the fixed features.
This does indicate that the features learnt from the series are distinct in some sense, providing different impacts on the model output.
At a horizon of 60 minutes, we see from \textbf{Figure 6f} that when the features attain a high value, the model output at this horizon is decreased.
From this we can interpret that when high values of the features derived from the series are attained, they are making the model put more mass in the immediate future, and less at horizons of 60 minutes or longer.
This is perhaps an intuitive result, that the time-series are providing information that can significantly increase the model output at short term horizons, and attaining these same values shifts down the predictions at long horizons.
Of course, the sheer amount of data available here is somewhat overwhelming, however using SHAP values one can gain a significant understanding of why the machine learning model is outputting particular values.
Plots of the overall impact of the other categorical features are given in the supplementary material for interested readers; they are omitted here for brevity.
However, care must always be taken not to conflate feature importance with causality; rather, we are able to question why the model gives a set of outputs for a particular set of inputs.
\section{Conclusion}\label{sec:Conclusion}
In this work, we have addressed a number of issues raised in the literature regarding traffic incident duration prediction.
Firstly, we considered a method to determine incident duration not only as the time when an operator declared it cleared, but also when the traffic state had returned to some typical behaviour.
This ensured that our predictions reflected when commuters could expect normal traffic conditions to resume if an incident had a significant impact even after it was cleared.
Secondly, we considered a range of models, some based on classic survival analysis and others based on machine learning and assessed how they performed on our dataset in both a static and dynamic setting.
In particular, we took inspiration from work in the domain of healthcare and applied emerging methods used there to problems in traffic incident analysis.
We saw that in a static setting, there was little to choose between the models but generally either a neural network or random survival forest method would be preferred to the others considered.
We moved into the dynamic setting by utilising landmarking and sliding window neural networks that applied temporal convolutions to the time-series, both of which were inspired by success in healthcare applications.
We saw non-parametric methods that made no distributional assumptions were preferred to methods that parametrized mixture distributions, and saw some benefit in applying kernel smoothing to enforce some minimal structure on a non-parametric output.
Utilizing models with non-parametric distributional forms was influenced by the increasingly complex distributions being considered for this problem in the traffic literature, and showed significant promise in our results.
We assessed how each model performed using three different scoring criteria, and in the dynamic sense we saw clear structure in the results: the kernel-smoothed neural network model achieved the best C-index; which model was preferred by Brier score, between the random survival forest and the kernel-smoothed neural network, depended on prediction time and horizon; and finally, the neural network models showed much better performance in terms of point-wise error than the comparison models.
We saw that all score criteria were improved by engineering features from windows of sensor data through temporal convolutions, compared to feeding in local levels and gradients, suggesting future work should continue to explore methods of deriving features from sensor data rather than inputting raw values.
After, we considered variable importance for our model in the dynamic sense, assessing how the random survival forest and neural network models were influenced by the derived features.
Whilst we are aware of variable importance being studied in the traffic literature previously, we are not aware of SHAP values being applied to neural network models for incident duration analysis.
Time of day and location were generally important across models, particularly at long horizons, and the time-series features were shown to have significant impact on the neural network output at 5 and 60 minute horizons.
Finally, our suggestion to use the output of a neural network to define kernel weights, and from this construct a non-parametric distribution through smoothing, is novel; we are not aware of this having been applied before to the considered model.
It has relevance to other applications, specifically if one requires a continuous distribution to be output, but does not want to make strong parametric assumptions about what form this will take.
Further work could be done to consider alternative methods to select a bandwidth when applying this type of model, but the freedom it provides appeared promising on our data.
Other avenues for further work include incorporating more features in the dynamic prediction models.
In particular, traffic operators will be able to report when recovery vehicles arrive, police involvement and details from on-site reports.
Further, social media and weather data could be collected in real-time.
It is likely these will have some predictive power for the duration of an incident, and considering how best to incorporate them is an interesting problem.
Finally, from our analysis in section \ref{sec:Results}, we suspect that if one were able to derive robust, complex features from the time-series and feed them into a random survival forest model, we might see further improvements.
One way to do this may be through an auto-encoder framework in which we pre-train a model to determine hidden representations of the series, however it is unclear if these will offer the same predictive power as we have observed from the time-series here.
\section*{Funding}
This work was supported by the EPSRC (grant number EP/L015374/1).
\section*{Acknowledgments}
We thank Dr. Steve Hilditch, Thales UK for sharing expertise on NTIS and UK transportation systems.
\section{Introduction}
Let $f:\mathbf{R}^d\to \mathbf{C}$ be a radial Schwartz function.
Let $\mathcal{F}(f)(\xi)$ be the Fourier transformation of $f$
\[
\mathcal{F}(f)(\xi):=\int_{\mathbf{R}^d}f(x)e^{2\pi i \left<x,\xi\right>} dx.
\]
Radchenko and Viazovska recently proved an elegant formula~\cite{inter} for $d=1$ that expresses the value of $f$ at any
given point in terms of the values of $f$ and $\mathcal{F}(f)$ on the
set $\{ \sqrt{|n|}:n\in \mathbf{Z}\}.$ Their method can be generalized to every $d\geq1.$
\\
Cohn, Kumar, Miller, Radchenko and Viazovska~\cite[Theorem 1.7]{Maryna3} developed new Fourier interpolation formulas to prove the optimality of the $E_8$ and Leech lattices. Their formulas express the value of $f$ at any
given point in terms of the values of $f,$ $\mathcal{F}(f),$ $\frac{d f}{du}$ and $\frac{d \mathcal{F}(f)}{du}$ on the
set $\{ \sqrt{2|n|}:n\geq n_d, \text{ and }n\in \mathbf{Z}\},$ where $u=|x|^2,$ and $(d,n_d)=(8,1), (24,2).$
\\
In the last section of their paper the authors ask two deep questions. The first question~\cite[Open problem 7.1]{Maryna3} speculates on the existence of interpolation formulas using the values of the higher derivatives $\frac{d^k f}{du^k}$ and $\frac{d^k \mathcal{F}(f)}{du^k}.$ They state that their methods cannot be applied to $k \geq 2$ without serious modification.
In the second question, they speculate~\cite[Open problem 7.3]{Maryna3} on the existence of Fourier interpolation formulas for other discrete sets.
They state that the special nature of the interpolation points $\sqrt{2n}$ plays an essential role in their proofs.
\\
One particular case of discrete sets is related to the optimality of the hexagonal lattice.
They conjectured the following based on their numerical experiments. We discuss its relation to the hexagonal lattice in section~\ref{opin}.
\begin{conjecture}\cite[Conjecture 7.5]{Maryna3} Let $r_1, r_2,\dots $ be the positive real numbers of the form
$
(4/3)^{1/4}\sqrt{
j^2+jk+k^2},$ where $j$ and $k$ are integers. Then radial Schwartz
functions $f :\mathbf{R}^2\to \mathbf{R}$ are not uniquely determined by the values of $f(r_n),$ $\mathcal{F}(f)(r_n),$ $\frac{d f}{du}(r_n)$ and $\frac{d \mathcal{F}(f)}{du}(r_n)$ for $n\geq1.$
\end{conjecture}
Our main goal in the paper is to address these two questions. We develop new interpolation formulas using the values of the higher derivatives on new discrete sets. In particular, we prove the above conjecture in Theorem~\ref{mainconj}. We restrict our formulas to $d=2$ for clarity of exposition.
\subsection{Interpolation with values} Suppose that $l\geq 6$ is an integer and $l\equiv 2$ mod $4$. In this section, we discuss our main theorem in the special case $k=0$ and $f(x)=e^{i\pi \tau |x|^2},$ where $\Im(\tau),\Im(-\frac{1}{\tau})>\sin\left(\frac{\pi}{l}\right).$
Let $\mathcal{F}(f)(\xi)$ be the Fourier transformation of $f$
\[
\mathcal{F}(f)(\xi):=\int_{\mathbf{R}^2}f(x)e^{2\pi i \left<x,\xi\right>} dx.
\]
It is well-known that $\mathcal{F}(f)(\xi)=\frac{i}{\tau}e^{i\pi \frac{-1}{\tau} |\xi|^2}$. We define
\[
f^{\varepsilon}:=f+\varepsilon\mathcal{F}(f),
\]
where $\varepsilon=\pm1.$ Note that $f^{\varepsilon}$ is an eigenfunction of the Fourier transformation with eigenvalue $\varepsilon$ and
\[
f=\frac{f^++f^-}{2}.
\]
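Both of these facts can be checked directly: for $\Im(\tau)>0$, completing the square in the exponent gives the stated transform of the Gaussian, and the eigenfunction property follows from Fourier inversion for even functions.

```latex
\begin{align*}
\mathcal{F}\left(e^{i\pi\tau|x|^2}\right)(\xi)
 &= e^{-\frac{i\pi}{\tau}|\xi|^2}\int_{\mathbf{R}^2} e^{i\pi\tau\left|x+\frac{\xi}{\tau}\right|^2}\,dx
  = \frac{i}{\tau}\, e^{i\pi\left(-\frac{1}{\tau}\right)|\xi|^2},\\
\mathcal{F}(f^{\varepsilon})
 &= \mathcal{F}(f) + \varepsilon\,\mathcal{F}(\mathcal{F}(f))
  = \mathcal{F}(f) + \varepsilon f
  = \varepsilon\left(f + \varepsilon\,\mathcal{F}(f)\right)
  = \varepsilon f^{\varepsilon},
\end{align*}
```

using $\int_{\mathbf{R}}e^{i\pi\tau y^2}\,dy=(i/\tau)^{1/2}$ for $\Im(\tau)>0$, $\varepsilon^2=1$, and $\mathcal{F}(\mathcal{F}(f))=f$ for radial (hence even) $f$.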
Next, we introduce a family of $\pm1$ eigenfunctions for the Fourier transformation. Let $\Gamma$ be the triangle group $(2,l,\infty);$ see Figure~\ref{heckef}. Let
\begin{equation}\label{dim1}
d_{\varepsilon}= \begin{cases} 0, \text{ if } \varepsilon=+1, \\ 1, \text{ if } \varepsilon=-1.\end{cases}
\end{equation}
Later, we identify $d_{\varepsilon}$ with the dimension of a specific space of modular forms of $\Gamma$.
\begin{figure}
\centering
\begin{tikzpicture}[scale=3]
\draw[-latex](\myxlow,0) -- (\myxhigh ,0);
\pgfmathsetmacro{\succofmyxlow}{\myxlow+1}
\foreach \x in {\myxlow,\succofmyxlow,...,\myxhigh}
{ \draw (\x,-0) -- (\x,-0.0) node[below,font=\tiny] {\x};
}
{ \draw (-0.866,-0.1)node[below,font=\tiny]{$-\cos\left(\frac{\pi}{l}\right)$}--(-0.866,0.5)node[left,font=\tiny]{$w_1$}--(0,0.5)node[left,font=\tiny]{$\sin\left(\frac{\pi}{l}\right)$} --(0.866,0.5)node[right,font=\tiny]{$w_2$} -- (0.866,-0.1) node[below,font=\tiny] {$\cos\left(\frac{\pi}{l}\right)$};
}
\foreach \y in {0,1}
{ \draw (0,\y) -- (-0.0,\y) node[left,font=\tiny] {\pgfmathprintnumber{\y}};
}
\draw[-latex](0,-0.0) -- (0,1.6);
\begin{scope}
\clip (\myxlow,0) rectangle (\myxhigh,1.2);
{ \draw[very thin, blue] (1,0) arc(0:180:1);
}
\end{scope}
\begin{scope}
\begin{pgfonlayer}{background}
\clip (-0.866,0) rectangle (0.866,1.7);
\clip (1,1.7) -| (-1,0) arc (180:0:1) -- cycle;
\fill[gray,opacity=0.8] (-1,-1) rectangle (1,2);
\end{pgfonlayer}
\end{scope}
\end{tikzpicture}
\captionof{figure}{Fundamental domain for the Hecke triangle $(2,l,\infty)$}\label{heckef}
\end{figure}
Let $\phi_{n,0}^{\varepsilon}(z)$ be the weakly holomorphic modular form of weight $1,$ multiplier $\varepsilon$ and of depth $n\geq d_{\varepsilon}$ defined in~\eqref{defeqqq}. It follows that we only have weakly holomorphic modular forms of weight 1 if $l\equiv 2$ mod 4 and $l\geq 3$. In particular when $l=6$,
\[
\phi_{0,0}^{+}(z):=\sum_{x,y\in \mathbf{Z}} e^{2\pi i z \frac{( x^2+xy+y^2)}{\sqrt{3}}}
\]
is the theta series associated to the normalized hexagonal lattice in $\mathbf{R}^2.$
\\
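As a small sanity check (not part of the original exposition), the $q$-expansion coefficients of this theta series count the integer pairs $(x,y)$ with $x^2+xy+y^2=n$; a minimal brute-force sketch:

```python
import math

# Coefficient count for the hexagonal theta series: the n-th coefficient is the
# number of integer pairs (x, y) with x^2 + x*y + y^2 == n.
# Since x^2 + x*y + y^2 >= (x^2 + y^2)/2, searching |x|, |y| <= sqrt(2n) suffices.
def hexagonal_rep_count(n):
    b = math.isqrt(2 * n) + 1
    return sum(1 for x in range(-b, b + 1)
                 for y in range(-b, b + 1)
                 if x * x + x * y + y * y == n)

print([hexagonal_rep_count(n) for n in range(8)])  # [1, 6, 0, 6, 6, 0, 0, 12]
```

The zero counts at $n=2,5,6$ reflect the classical fact that $x^2+xy+y^2$ represents $n$ only when every prime $\equiv 2 \bmod 3$ divides $n$ to an even power.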
Let \[w_1:=-\cos\left(\frac{\pi}{l}\right)+i\sin\left(\frac{\pi}{l}\right), \text{ and } w_2=\cos\left(\frac{\pi}{l}\right)+i\sin\left(\frac{\pi}{l}\right).\] We define
\begin{equation}\label{1bas}
a_n^{\varepsilon}(x):= \frac{1}{\lambda}\int_{w_1}^{w_2}\phi_{n,0}^{\varepsilon}(z)e^{\pi i z |x|^2} dz,
\end{equation}
where $\lambda=2\cos\left(\frac{\pi}{l}\right).$ We show that $a_n^{\varepsilon}(x)$ is a radial Schwartz $\varepsilon$-eigenfunction of the Fourier transformation in $\mathbf{R}^2$.
Moreover, for $m,n \geq d_{\varepsilon}$
\[
a_n^{\varepsilon}\left(\sqrt{\frac{2m}{\lambda}}\right)=\delta_{m,n}:=\begin{cases} 1, \text{ if }m=n, \\ 0, \text{ otherwise.}\end{cases}
\] We state a version of our interpolation formula for $f^{\varepsilon}.$
\begin{corollary}\label{mainthm}
We have
\begin{equation}\label{mainform}f^{\varepsilon}(x)=\sum_{n\geq d_{\varepsilon}}a_n^{\varepsilon}(x)f^{\varepsilon}\left(\sqrt{\frac{2n}{\lambda}}\right).
\end{equation}
\end{corollary}
Corollary~\ref{mainthm} is a special case of Theorem~\ref{mainthmder}.
Corollary~\ref{mainthm} generalizes the interpolation formula of Radchenko and Viazovska~\cite{inter} from $l=\infty$ and $d=1$ to every $l$ and $d=2,$ where $l\geq 6$ and $l\equiv 2 $ mod $4.$ \footnote{We have learned from Radchenko that he and Viazovska
have also considered other triangle groups $(2,l,\infty)$
in this context.}
They used the values at $\sqrt{n}$ and the weakly holomorphic modular forms of weight $3/2$ that are invariant under the theta group (the Hecke triangle group $(2,\infty,\infty)$).
We use the weakly holomorphic modular forms of weight $1$ that are invariant under the Hecke triangle group $(2,l,\infty)$. Our integration is over the arc between $w_1$ and $w_2$, which becomes the whole semicircle when $l=\infty$ as defined in~\cite{inter}. Next, we discuss some new features of our work that differ from that of Radchenko and Viazovska~\cite{inter}.
\\
Radchenko and Viazovska proved their interpolation formula for every $\tau$ in the upper half-plane, and from this they deduced that formula~\eqref{mainform} holds for every
radial Schwartz function $f$ in $\mathbf{R}$. We show that formula~\eqref{mainform} is false for some finite linear combinations of Gaussians (unless $l=\infty$). In fact, the bound $\Im(\tau),\Im(-\frac{1}{\tau})>\sin\left(\frac{\pi}{l}\right)$ is optimal. We show this by constructing, for every $\epsilon>0$, a finite linear combination of Gaussians $\sum a_ie^{i\pi \tau_i r^2}$ which vanishes at all points $\sqrt{\frac{2n}{\lambda}},$ where $\Im(\tau_i),\Im(-\frac{1}{\tau_i})>\sin\left(\frac{\pi}{l}\right)-\epsilon$ for every $i$. We realize these functions conceptually by monodromy around $w_1$ and its orbit $\Gamma w_1$ in section~\ref{monodromy}. We introduce them explicitly and state our next theorem.
Let
\[
S:=\begin{bmatrix} 0 & 1 \\-1 &0
\end{bmatrix} , \text{ and }T:=\begin{bmatrix} 1 & \lambda \\0 &1
\end{bmatrix} \text{ and } V:=TS.
\]
It is well-known that $\Gamma$ is generated by $T$ and $S$, and $V^l=I.$ Let $|^{\varepsilon}_k \gamma$ denote the slash operator of weight $k$ and multiplier $\varepsilon$ associated to $\gamma\in \Gamma$. In particular,
\[
\begin{split}
f(z)|^{\varepsilon}_k S=\varepsilon \left(\frac{ i}{z}\right)^kf\left(\frac{-1}{z}\right) ,
\\
f(z)|^{\varepsilon}_k T=f\left(z+ \lambda\right).
\end{split}
\]
We define
\[
r^{\varepsilon}(\gamma,\tau;x):=e^{i\pi \tau |x|^2}|^{-\varepsilon}_1 (T^{-1}-I)(1+V+\dots+V^{l-1})\gamma,
\]
where $\gamma\in \Gamma,$ $\tau$ is in the upper half-plane and $x\in \mathbf{R}^2$.
\begin{corollary}\label{mainthm22} $r^{\varepsilon}(\gamma,\tau;x)$ is an eigenfunction of the Fourier transformation with respect to $x$ with eigenvalue $\varepsilon$ for any $\gamma\in \Gamma.$ Moreover,
\[
r^{\varepsilon}\left(\gamma,\tau;\sqrt{\frac{2n}{\lambda}}\right)=0
\]
for every integer $n\geq0.$
\end{corollary}
Corollary~\ref{mainthm22} is a special case of Theorem~\ref{mainthm2}.
Note that if $\tau$ is near $w_2$ and $\gamma=id$ then
\[
r^{\varepsilon}(id,\tau;x)=\sum_{i} \alpha_i e^{i\pi \tau_i |x|^2},
\]
where
$\Im(\tau_i),\Im(-\frac{1}{\tau_i})>\sin\left(\frac{\pi}{l}\right)-\epsilon.$ This shows that the bound $\Im(\tau),\Im(-\frac{1}{\tau})>\sin\left(\frac{\pi}{l}\right)$ is optimal.
For the proof of Theorem~\ref{mainthm2}, we use the following key identity in the group algebra $\mathbf{Z}[PSL_2(\mathbf{R})]$:
\[
S(T^{-1}-I)(1+V+\dots+V^{l-1})=-(T^{-1}-I)(1+V+\dots+V^{l-1}).
\]
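The generator relation $V^l=I$ in $PSL_2(\mathbf{R})$ (equivalently $V^l=\pm I$ in $SL_2(\mathbf{R})$) can be checked numerically; the sketch below, which is only a sanity check and not part of the proof, takes $l=6$, so $\lambda=2\cos(\pi/6)=\sqrt{3}$.

```python
import math

# Check that V = TS satisfies V^l = -I in SL_2(R) for l = 6,
# with S, T as defined in the text and lambda = 2*cos(pi/l).
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

l = 6
lam = 2 * math.cos(math.pi / l)
S = [[0, 1], [-1, 0]]
T = [[1, lam], [0, 1]]
V = matmul(T, S)

P = [[1, 0], [0, 1]]
for _ in range(l):
    P = matmul(P, V)

print(P)  # approximately [[-1, 0], [0, -1]], i.e. the identity in PSL_2(R)
```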
\begin{remark}
Note that the number of integers $n$ such that $\sqrt{\frac{2n}{\lambda}}<X$ is about $\cos\left(\frac{\pi}{l}\right)X^2.$ The interpolation formula of Radchenko and Viazovska~\cite{inter} for $\mathbf{R}$ uses the values of $f^{\varepsilon}$ at $\sqrt{n}$, which amounts to about $X^2$ points less than $X$. After stating~\cite[Conjecture 7.5]{Maryna3}, the authors speculate that any interpolation formula in $\mathbf{R}^2$ for all Schwartz functions should also contain at least $X^2$ nodes less than $X$, based on the interpolation formulas in dimensions 1, 8 and 24. Corollary~\ref{mainthm} and Corollary~\ref{mainthm22} are compatible with this speculation, as we introduce a refined class of functions on which the interpolation formula holds with the given values at $\sqrt{\frac{2n}{\lambda}}$.\end{remark}
\subsection{Interpolation with higher derivatives}
Let $u:=|x|^2$ and $k\geq 0.$ Suppose that
\[f(x)=\int e^{i\pi \tau |x|^2}d\mu(\tau),\] where $\mu$ is a measure with bounded variation, supported on a compact subset of the region $\Im(\tau),\Im(-\frac{1}{\tau})>\sin\left(\frac{\pi}{l}\right)$. We may consider $f$ and $\mathcal{F}(f)$ as smooth functions of $u$ on $\mathbf{R}^{+}.$
Next, we develop an interpolation formula for $f$ using the values of the $k$-th derivatives $\frac{d^k}{du^k}$ of $f$ and $\mathcal{F}(f)$ at $u=\frac{2n}{\lambda}>0$.
\\
Let $M_r^{\varepsilon}(\Gamma)$ be the space of weight $r$ modular forms with multiplier $\varepsilon.$
Let $d(\varepsilon,k)=\dim \left(M_{2k+1}^{-\varepsilon}(\Gamma)\right).$ In section~\ref{basisk}, we define the space of weakly holomorphic modular forms by allowing a pole at the cusp $\infty.$
For every $n\geq d(\varepsilon,k)$, we introduce a unique weakly holomorphic modular form of weight $-2k+1$ for $\Gamma$ and denote it by $\phi^{\varepsilon}_{n,k}(z).$ We define
\[
a_{n,k}^{\varepsilon}(x):= \frac{1}{\lambda}\int_{w_1}^{w_2}\phi_{n,k}^{\varepsilon}(z)\frac{e^{\pi i z |x|^2}}{(iz\pi)^k} dz.
\]
We show that $a_{n,k}^{\varepsilon}(x)$ is a radial Schwartz eigenfunction of the Fourier transformation in $\mathbf{R}^2$ with eigenvalue $\varepsilon$. Moreover, for $m,n\geq d(\varepsilon,k)$
\begin{equation}\label{baseff}
\frac{d^k}{du^k} a_{n,k}^{\varepsilon}\left(\sqrt{\frac{2m}{\lambda}}\right)
=\delta(m,n).
\end{equation}
In particular when $k=0$, $a_{n,0}^{\varepsilon}(x)= a_{n}^{\varepsilon}(x),$ which was defined in \eqref{1bas}, and $d(\varepsilon,0)=d_\varepsilon$ which was defined in \eqref{dim1}. We state a version of our interpolation formula for $f$ with the higher derivatives.
\begin{theorem}\label{mainthmder} Let $k\geq 0$ and $f^{\varepsilon}$ and $a^{\varepsilon}_{n,k}$ be as above.
We have
\[f^{\varepsilon}(x)=\sum_{n\geq d(\varepsilon,k)}a^{\varepsilon}_{n,k}(x) \frac{d^k}{du^k} f^{\varepsilon}\left(\sqrt{\frac{2n}{\lambda}}\right).
\]
\end{theorem}
We prove Theorem~\ref{mainthmder} in section~\ref{proofm}.
\\
Next, we show that the condition $\Im(\tau),\Im(-\frac{1}{\tau})>\sin\left(\frac{\pi}{l}\right)$ is optimal.
We define
\[
r^{\varepsilon}_k(\gamma,\tau;x):=\frac{e^{i\pi \tau |x|^2}}{(i\pi\tau)^k}|^{-\varepsilon}_{2k+1} (T^{-1}-I)(1+V+\dots+V^{l-1})\gamma,
\]
where $\gamma\in \Gamma,$ $\tau$ is in the upper half-plane and $x\in \mathbf{R}^2$.
\begin{theorem}\label{mainthm2} $r^{\varepsilon}_k(\gamma,\tau;x)$ is an eigenfunction of the Fourier transformation with respect to $x$ with eigenvalue $\varepsilon$ for any $\gamma\in \Gamma.$ Moreover,
\[
\frac{d^k}{du^k}r_k^{\varepsilon}\left(\gamma,\tau;\sqrt{\frac{2n}{\lambda}}\right)=0
\]
for every integer $n\geq0.$
\end{theorem}
We prove Theorem~\ref{mainthm2} in section~\ref{monodromy}.
\\
Theorem~\ref{mainthmder} generalizes Corollary~\ref{mainthm} to higher derivatives, and it implies Corollary~\ref{mainthm} when $k=0.$ We also note that $d(\varepsilon,k)>0$ grows linearly in $k$ for $k\geq 1.$ This means that there are relations among the values $\frac{d^k}{du^k} f^{\varepsilon}\left(\sqrt{\frac{2n}{\lambda}}\right).$ We describe the space of relations completely in section~\ref{obs}.
\subsection{Obstructions for Fourier interpolation}\label{obs} Our theorem in this section holds for every radial Schwartz function $f(x)$ on $\mathbf{R}^2$. We introduce a complete family of linear obstructions to the Fourier interpolation with $k$-th derivatives $\frac{d^k}{du^k} f^{\varepsilon}\left(\sqrt{\frac{2n}{\lambda}}\right).$ We show that the obstructions are associated to the modular forms of weight $2k+1$ and multiplier $-\varepsilon.$
\\
\begin{theorem}\label{obsthm} Suppose that $g(z)=\sum_{n\geq 0} b_n q^n \in M_{2k+1}^{-\varepsilon}(\Gamma),$ where $q=e^{\frac{2\pi i z}{\lambda}}$ and $f(x)$ is a radial Schwartz function. We have
\[
\sum_{n\geq 0}b_n \frac{d^k}{du^k} f^{\varepsilon}\left(\sqrt{\frac{2n}{\lambda}}\right)=0.
\]
\end{theorem}
\begin{proof}
Since $g(z)$ is a holomorphic modular form, its coefficients satisfy the polynomial bound $|b_n|\ll n^{2k+\varepsilon}$ by Hardy's upper bound. Since $f$ is a Schwartz function, the following series is absolutely convergent:
\[
\sum_{n\geq 0}b_n \frac{d^k}{du^k} f^{\varepsilon}\left(\sqrt{\frac{2n}{\lambda}}\right).\]
We want to show that the above tempered distribution is zero on the space of radial Schwartz functions. It is enough to prove this for a dense subset of radial Schwartz functions. Following \cite[section 6]{inter}, it is enough to prove it for the normalized complex Gaussians $G_k(z,x):=\frac{e^{i\pi z|x|^2}}{(iz\pi)^k}$, where $z\in \mathbb{H}.$ Note that
\begin{equation}\label{mfe}g(z)=-\varepsilon\left(\frac{i}{z}\right)^{2k+1}g\left(\frac{-1}{z}\right).\end{equation}
It is easy to check that
\[\frac{d^k}{du^k}G_k(z,x)=G_0(z,x),\]
and
\[
\frac{d^k}{du^k} \mathcal{F}G_k\left(z,x\right)=\left(\frac{i}{z}\right)^{2k+1}G_0\left(\frac{-1}{z},x\right),
\]
where $u=|x|^2$.
We rewrite~\eqref{mfe} as
\[
\sum_{n\geq 0} b_n \frac{d^k}{du^k} \left( G_k\left(z,\sqrt{\frac{2n}{\lambda}} \right)+\varepsilon \mathcal{F}G_k\left(z,\sqrt{\frac{2n}{\lambda}}\right) \right)=0.
\]
This implies that
\[
\sum_{n\geq 0}b_n \frac{d^k}{du^k} f^{\varepsilon}\left(\sqrt{\frac{2n}{\lambda}}\right)=0
\]
for $f(x)=G_k(z,x).$ This completes the proof of our theorem.
\end{proof}
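The two Gaussian derivative identities used in the proof above are elementary; as a quick numerical sanity check (with an arbitrarily chosen sample point $z$ in the upper half-plane), one can compare $\frac{d^k}{du^k}G_k$ against $G_0$ by finite differences:

```python
import cmath

def G(k, z, u):
    # normalized complex Gaussian G_k(z, x) viewed as a function of u = |x|^2
    return cmath.exp(1j * cmath.pi * z * u) / (1j * cmath.pi * z) ** k

z = 0.3 + 1.1j      # illustrative sample point in the upper half-plane
u, h = 0.7, 1e-5

# d/du G_1(z, u) = G_0(z, u), via a central difference
num1 = (G(1, z, u + h) - G(1, z, u - h)) / (2 * h)
assert abs(num1 - G(0, z, u)) < 1e-6

# d²/du² G_2(z, u) = G_0(z, u), via a second central difference
num2 = (G(2, z, u + h) - 2 * G(2, z, u) + G(2, z, u - h)) / h ** 2
assert abs(num2 - G(0, z, u)) < 1e-5
print("d^k/du^k G_k = G_0 checked for k = 1, 2")
```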
\begin{remark}
We note that Theorem~\ref{obsthm} imposes $d(\varepsilon,k)=\dim \left(M_{2k+1}^{-\varepsilon}(\Gamma)\right)$ independent linear equations on the values of $ \frac{d^k}{du^k} f^{\varepsilon}\left(\sqrt{\frac{2n}{\lambda}}\right).$ On the other hand, by using the basis functions $a_{n,k}^{\varepsilon}$ defined in \eqref{baseff}, it follows that these are the only linear obstructions on the values of $ \frac{d^k}{du^k} f^{\varepsilon}\left(\sqrt{\frac{2n}{\lambda}}\right)$.
\end{remark}
\subsection{Universal optimality of the hexagonal lattice}\label{opin}
In this section, we state our theorem which implies a conjecture of Cohn, Kumar, Miller, Radchenko and Viazovska~\cite[Conjecture 7.5]{Maryna3} motivated by the universal optimality of the hexagonal lattice. We begin by stating a conjecture of Cohn and Elkies. This conjecture is based on a version of the linear programming method developed by Cohn and Elkies~\cite{Elkies} for giving upper bounds on the density of sphere packings in Euclidean spaces.
\\
Cohn and Elkies conjectured~\cite{Elkies} that there exists a radial Schwartz function $f:\mathbf{R}^2\to \mathbf{R}$ that satisfies
\begin{enumerate}
\item $f(r)\leq 0$ for $r^2\geq \frac{2}{\sqrt{3}}$,
\item $\mathcal{F}(f)(r)\geq 0$ for all $r$,
\item $f(0)=\mathcal{F}(f)(0),$
\end{enumerate}
where $r=|x|.$
It follows from the Poisson summation formula for the hexagonal lattice that
\[
f\left(\sqrt{\frac{2n}{\sqrt{3}}}\right)=\mathcal{F}(f)\left(\sqrt{\frac{2n}{\sqrt{3}}}\right)=\frac{d}{dr}\mathcal{F}(f)\left(\sqrt{\frac{2n}{\sqrt{3}}}\right)=0, n\geq 1,
\]
and
\[
\frac{d}{dr}f\left(\sqrt{\frac{2n}{\sqrt{3}}}\right)=0, n>1,
\]
where $n=x^2+xy+y^2$ for some $x,y\in \mathbb{Z}.$
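These nodes are precisely the lengths of the vectors in the hexagonal lattice scaled to covolume $1$. A short numerical check (the basis normalization below is our assumption, made for illustration) confirms that the squared lengths are $\frac{2}{\sqrt{3}}(x^2+xy+y^2)$ and enumerates the first few represented integers $n$:

```python
import math

# hexagonal lattice scaled to covolume 1 (our normalization, for illustration):
# b1 = c(1, 0), b2 = c(1/2, √3/2) with c² = 2/√3, so that
# |x b1 + y b2|² = (2/√3)(x² + xy + y²)
c = math.sqrt(2 / math.sqrt(3))
b1 = (c, 0.0)
b2 = (c / 2, c * math.sqrt(3) / 2)
assert math.isclose(b1[0] * b2[1] - b1[1] * b2[0], 1.0)   # covolume 1

loeschian = set()
B = 10
for x in range(-B, B + 1):
    for y in range(-B, B + 1):
        vx, vy = x * b1[0] + y * b2[0], x * b1[1] + y * b2[1]
        n = x * x + x * y + y * y
        assert math.isclose(vx * vx + vy * vy, 2 * n / math.sqrt(3), abs_tol=1e-9)
        loeschian.add(n)

print(sorted(loeschian)[:8])  # → [0, 1, 3, 4, 7, 9, 12, 13]
```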
Constructing a generalized version of this function is equivalent to proving the universal optimality of the hexagonal lattice in the plane~\cite{Maryna3}, which is an outstanding open problem. Namely, one must construct $f_t$ such that
\begin{enumerate}
\item $f_t(r)\leq e^{-\pi t r^2}$,
\item $\mathcal{F}f_t(r)\geq 0$,
\item $\mathcal{F}f_t(0)-f_t(0)=\theta_0(it)-1.$
\end{enumerate}
Very recently, Viazovska and her collaborators \cite{Maryna1,Maryna2, Maryna3} in their spectacular works resolved the Cohn--Elkies conjecture in dimensions 8 and 24 and also proved the universal optimality of $E_8$ and the Leech lattice. As discussed by the authors~\cite[page 91]{Maryna3}, their method does not generalize directly to dimension $2$, and one needs a new idea to construct $f$ satisfying the above conditions. We discuss some properties of $f_t.$ It follows from the Poisson summation formula for the hexagonal lattice that
\[
\begin{split}
f_t\left(\sqrt{\frac{2n}{\sqrt{3}}}\right)=e^{-\frac{2\pi t n}{\sqrt{3}}}, n\geq 1,
\\
\mathcal{F}(f_t)\left(\sqrt{\frac{2n}{\sqrt{3}}}\right)=\frac{d}{dr}\mathcal{F}(f_t)\left(\sqrt{\frac{2n}{\sqrt{3}}}\right)=0, n\geq 1,
\end{split}
\]
and
\[
\frac{d}{dr}f_t\left(\sqrt{\frac{2n}{\sqrt{3}}}\right)=-2\sqrt{\frac{2n}{\sqrt{3}}}\pi t e^{-\frac{2\pi t n}{\sqrt{3}}}, n>1,
\]
where $n=x^2+xy+y^2$ for some $x,y\in \mathbb{Z}.$
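The underlying Poisson summation identity can be tested directly on Gaussians: the covolume-one hexagonal lattice is isodual, so its theta series satisfies $\theta(t)=t^{-1}\theta(1/t)$. The following sketch verifies this numerically by brute-force summation (the truncation bound is an assumption chosen to make the tails negligible):

```python
import math

def theta_hex(t, B=40):
    # theta series of the hexagonal lattice scaled to covolume 1,
    # whose squared lengths are (2/√3)(x² + xy + y²)
    c2 = 2 / math.sqrt(3)
    return sum(math.exp(-math.pi * t * c2 * (x * x + x * y + y * y))
               for x in range(-B, B + 1) for y in range(-B, B + 1))

# isoduality + Poisson summation give θ(t) = t^{-1} θ(1/t)
for t in (0.7, 1.0, 1.9):
    assert abs(theta_hex(t) - theta_hex(1 / t) / t) < 1e-9
print("theta_hex(t) = theta_hex(1/t)/t verified numerically")
```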
The authors in~\cite{Maryna3} showed that in dimensions 8 and 24 the analogous equations uniquely determine the radial Schwartz function $f$. However, they conjectured~\cite{Maryna3} that in dimension $2,$ the radial Schwartz function $f$ is not uniquely determined by the values of $f,f',\mathcal{F}(f)$ and $\mathcal{F}(f)'$ at $\sqrt{\frac{2n}{\sqrt{3}}}$, where $n$ is represented by $x^2+xy+y^2.$ One heuristic reason is that the number of integers less than $X$ represented by $x^2+xy+y^2$ is asymptotically of order $X/\sqrt{\log(X)}.$ So we have fewer equations than in dimensions $8$ and $24$, where there are equations associated to each integer $n\geq 0$.
\\
We prove the conjecture of Cohn, Kumar, Miller, Radchenko and Viazovska~\cite[Conjecture 7.5]{Maryna3}.
\begin{theorem}\label{mainconj}
There are infinitely many linearly independent radial Schwartz functions $f$ on the plane such that $f$ and $\mathcal{F}(f)$ vanish to order $2$ at $|x|=\sqrt{\frac{2n}{\sqrt{3}}}$, where $n=x^2+xy+y^2$ for some $x,y \in\mathbf{Z}.$
\end{theorem}
\subsubsection{Method of the proof}
In this section we discuss some new ideas that we develop to prove Theorem~\ref{mainconj}. The first step is to cover integers $n=x^2+xy+y^2$ by a periodic set of integers with small density. Note that this is a basic result in sieve theory and we include a proof for the convenience of the reader in Lemma~\ref{Acons}.
\\
Suppose that
\[
A:=\left\{a>100 | a\equiv a_i \mod L, \text{ for some }a_i \text{ where } 1\leq i\leq l\right\}.
\]
Furthermore, we suppose that the density is small: $\frac{l}{L}<\delta.$ We prove a stronger version of Theorem~\ref{mainconj}, which we state next.
\begin{theorem}\label{strongthm}
Suppose that $\delta<0.001$ and $a\in A$ is any element.
There exists a radial Schwartz function $f$ such that $f$ and $\mathcal{F}(f)$ vanish to order $2$ at $\sqrt{\frac{2m}{\sqrt{3}}}$ for every $m\in A-\{a\},$ and
\[
f'\left(\sqrt{\frac{2a}{\sqrt{3}}}\right)= 1.
\]
\end{theorem}
We only use Corollary~\ref{mainthm22} for the Hecke triangle group $\Gamma=(2,6,\infty)$ from the previous sections. Let
\[
r^{\varepsilon}(\tau;x):=e^{i\pi \tau |x|^2}|^{\varepsilon}_1 (T^{-1}-I)(1+V+\dots+V^5),
\]
and
\[
s^{\varepsilon}(\tau;x):=e^{i\pi \tau |x|^2}|^{\varepsilon}_1 (1+V+\dots+V^5).
\]
By Corollary~\ref{mainthm22}
\[
r^{\varepsilon}\left(\tau;\sqrt{\frac{2m}{\sqrt{3}}}\right)=0
\]
for every integer $m\geq0.$ It is easy to check that
\[
\frac{d}{du}r^{\varepsilon}\left(\tau;\sqrt{\frac{2m}{\sqrt{3}}}\right)=c s^{\varepsilon}\left(\tau;\sqrt{\frac{2m}{\sqrt{3}}}\right),
\]
where $u=|x|^2$ and $c=\pi i.$ Our new idea is to average $r^{\varepsilon}(\tau;x)$ over $\tau$ with respect to a measure $\mu$ supported on a compact region of the upper half-plane such that the derivative vanishes at $\sqrt{\frac{2m}{\sqrt{3}}}$ for every $m\in A-\{a\}$. More precisely, let
\[
f(x)=\int r^{\varepsilon}(\tau;x) d\mu(\tau).
\]
Then
\[
f\left(\sqrt{\frac{2m}{\sqrt{3}}}\right)=0, \text{ and } \frac{d}{du}f\left(\sqrt{\frac{2m}{\sqrt{3}}}\right)=c\int s^{\varepsilon}\left(\tau;\sqrt{\frac{2m}{\sqrt{3}}}\right) d\mu(\tau).
\]
We construct $\mu$ as the weak$^*$ limit of a sequence of measures $\{\mu_n\}$ such that
\[
\int s^{\varepsilon}\left(\tau;\sqrt{\frac{2m}{\sqrt{3}}}\right) d\mu_n(\tau)=0
\]
where $m\in A-\{a\}$ and $0\leq m<n.$
The existence of a weak$^*$ limit is a consequence of the compactness of the space of probability measures on a compact Borel measure space. By computing the first derivative at $\sqrt{\frac{2a}{\sqrt{3}}}$, we show that $f(x)\neq 0.$ Constructing $\mu_n$ is challenging and is at the heart of our proof; we discuss it in Section~\ref{zeros}. In particular, $\mu_n$ is a probability measure supported on $\mathbf{X}_{\delta},$ where
\[
\mathbf{X}_{\delta}:= \left\{\frac{\sqrt{3}}{2}+x+i0.27: |x|<\delta\right\}.
\]
\section{Modular forms for the Hecke triangle group}\label{hecket}
We discuss the Hecke triangle group $(2,l,\infty)$ and give an explicit basis for the associated space of modular forms. We refer the reader to the excellent book of Berndt and Knopp~\cite{Berndt}, and the thesis of Jonas Jermann~\cite{Jonas}. We only prove some results that we could not find in the literature.
\subsection{Hecke triangle group $(2,l,\infty)$}
Let
\[
S:=\begin{bmatrix} 0 & 1 \\ -1 & 0
\end{bmatrix}, \quad T:=\begin{bmatrix} 1 & \lambda \\ 0 & 1
\end{bmatrix}, \quad U:=ST.
\]
We consider the action by fractional linear transformations on the upper half-plane. The group $\Gamma$ generated by $S$ and $T$ is the triangle group $(2,l,\infty).$
It is also well-known that as an abstract group
\[
\Gamma= \frac{\mathbf{Z}}{2\mathbf{Z}}*\frac{\mathbf{Z}}{l\mathbf{Z}},
\]
where $\frac{\mathbf{Z}}{2\mathbf{Z}}*\frac{\mathbf{Z}}{l\mathbf{Z}}$ is the free product of the cyclic groups and the isomorphism is given by sending $S$ to the generator of $ \frac{\mathbf{Z}}{2\mathbf{Z}}$ and $U$ to a generator of $\frac{\mathbf{Z}}{l\mathbf{Z}}$.
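For concreteness, the defining relations behind this free product structure, $S^2=-I$ and $(ST)^l=\pm I$ in $SL_2(\mathbf{R})$, can be checked numerically for the group $(2,6,\infty)$ used later, where $\lambda=2\cos(\pi/6)=\sqrt{3}$:

```python
import math

l = 6
lam = 2 * math.cos(math.pi / l)   # λ = √3

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

S = [[0, 1], [-1, 0]]
T = [[1, lam], [0, 1]]

# S has order 2 in PSL2: S² = -I
assert mul(S, S) == [[-1, 0], [0, -1]]

# U = ST has order l in PSL2: U^l = ±I (here U^6 = -I)
U = mul(S, T)
P = [[1, 0], [0, 1]]
for _ in range(l):
    P = mul(P, U)
assert all(abs(P[i][j] - (-1 if i == j else 0)) < 1e-9 for i in range(2) for j in range(2))
print("S^2 = -I and (ST)^6 = -I")
```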
\subsection{Modular forms} We record some well-known facts about the space of holomorphic modular forms for $\Gamma$; see~\cite[Chapter 5]{Berndt} for a detailed discussion. For $z\in \mathbf{C}$, $z\neq 0,$ and $r\in \mathbf{R}$ let
\[
z^r= |z|^r e^{ir\arg(z)},
\]
where $-\pi\leq\arg(z)<\pi.$
Let $M_r^{\varepsilon}(\Gamma)$ be the space of bounded holomorphic functions $f(z)$ on the upper half-plane which satisfy
\[
\begin{split}
f(z)=\varepsilon \left(\frac{ i}{z}\right)^r f\left(\frac{-1}{z}\right) ,
\\
f(z)=f\left(z+\lambda\right),
\end{split}
\]
where $\varepsilon=\pm1.$ For $\gamma\in \Gamma$ and $f\in M_r^{\varepsilon}(\Gamma)$ let
\[
j_{r}^{\varepsilon}(z,\gamma):=\frac{f(z)}{f(\gamma z)},
\]
where $f(\gamma z)\neq 0.$ It follows that
\[
|j_{r}^{\varepsilon}(z,\gamma)|=\frac{1}{|cz+d|^r},
\]
where $\gamma=\begin{bmatrix} a & b \\ c & d \end{bmatrix}.$ We define the slash operator acting on $h\in C(\mathbb{H})$ as
\[
h|_r^{\varepsilon}\gamma= j_{r}^{\varepsilon}(z,\gamma)h(\gamma z).
\]
We cite~\cite[Theorem 5.5]{Berndt} and~\cite[Theorem 5.6]{Berndt}.
\begin{theorem}[Theorem 5.5 of \cite{{Berndt}}]\label{zerobr}
There exist modular forms $f_{w}\in M_{\frac{4}{l-2}}^1(\Gamma),$ $f_i \in M_{\frac{2l}{l-2}}^{-1}(\Gamma)$ and $f_{\infty}\in M_{\frac{4l}{l-2}}^1(\Gamma)$ such that each has a simple root at $w_1$, $i$ and $i\infty$, respectively, and no other zeros.
\end{theorem}
\begin{theorem}[Theorem 5.6 of \cite{Berndt}]\label{dimbr}Suppose that $\dim M_{r}^{\varepsilon}(\Gamma)\neq0.$ Then
\[
r=\frac{4m}{l-2}+1-\varepsilon,
\]
where $m\geq 1$ is an integer, and
\[
\dim M_{r}^{\varepsilon}(\Gamma)= 1+\lfloor \frac{m+(\varepsilon-1)/2}{l} \rfloor.
\]
\end{theorem}
\end{theorem}
\subsection{Weakly holomorphic modular forms}\label{basisk}
We remove the boundedness condition in the upper half-plane, and allow a pole at the cusp at $\infty$ and obtain weakly holomorphic modular forms; see~\cite[Chapter 3]{Jonas} for a detailed discussion. We denote the space of weakly holomorphic modular forms of weight $r$ and multiplier $\varepsilon$ by
\(
M^{\varepsilon !}_r(\Gamma).
\)
Let $J(z)$ be the Hauptmodul function for $\Gamma$. It is the unique Riemann map from the hyperbolic triangle with vertices at $w_2$, $i$ and $i\infty$ to the upper half-plane, normalized such that
\[
J(w_2)=0,\quad
J(i\infty)=\infty,\quad
J(q)=q^{-1}+O(1),
\]
where $q=e^{\frac{2\pi i z}{\lambda}}.$
It is well known that $J(z)\in M_0^{+!}(\Gamma)$ and has rational coefficients~\cite{Lehner}.
\\
Let $n_p(f)$ denote the order of vanishing of the meromorphic function $f$ at the point $p.$ We write $N(f)$ for the sum of the orders of vanishing at all points other than $\{ i , w,i\infty \}$.
We cite~\cite[Lemma 3.1]{Jonas} which extends \cite[Lemma 5.1]{Berndt} to weakly holomorphic modular forms.
\begin{lemma}[Lemma 3.1 of \cite{Jonas}]
Suppose that $f\in M_r^{\varepsilon!}$ and $f\neq 0.$ Then
\[
N(f)+n_{\infty}(f)+\frac{1}{2}n_i(f)+\frac{n_{w}(f)}{l}=\frac{r(l-2)}{4l}.
\]
\end{lemma}
Note that
\[
N(J)=0, n_{\infty}(J)=-1, n_i(J)=0, \text{ and } n_w(J)=l.
\]
The derivative of $J$ is also a weakly holomorphic modular form. We have
\[
J'(z) \in M_{2}^{-!}(\Gamma),
\]
and
\begin{equation}\label{jjj}
N(J')=0, n_{\infty}(J')=-1, n_i(J')=1, \text{ and } n_w(J')=l-1.
\end{equation}
\subsubsection{Weakly holomorphic modular forms of weight $2k+1$} Suppose that $d(\varepsilon,k)=\dim M_{2k+1}^{-\varepsilon}(\Gamma),$ where $k\in\mathbf{Z}$ and $k\geq 0.$ We write $d$ for $d(\varepsilon,k)$ in this section.
\begin{lemma}
Suppose that $d>0$, then there exists a unique $f_{d-1,k}\in M_{2k+1}^{-\varepsilon}(\Gamma)$ such that
\[
f_{d-1,k}=q^{d-1}+\alpha_{d-1}q^{d}+O(q^{d+1}).
\]
Furthermore, $d=0$ if and only if $\varepsilon=1$ and $k=0.$ In this case, there exists a unique $f_{-1,0}\in M_{1}^{-!}(\Gamma)$ such that
\[
f_{-1,0}=q^{-1}+\alpha_{-1}+O(q).
\]
\end{lemma}
\begin{proof}
Suppose that $d>0.$ By linear algebra, there exists $f\in M_{2k+1}^{-\varepsilon}(\Gamma)$ such that
\(
f=O(q^{s}),
\)
where $s\geq d-1.$
It follows that $\{f, fJ,\dots,fJ^{s}\}\subset M_{2k+1}^{-\varepsilon}(\Gamma)$ are linearly independent. Hence, there exists $f_{d-1,k}\in M_{2k+1}^{-\varepsilon}(\Gamma)$ such that
\[
f_{d-1,k}=q^{d-1}+\alpha_{d-1}q^{d}+O(q^{d+1}).
\]
For $ 0\leq i\leq d-1$, let
\[
f_{i,k}:=f_{d-1,k}P_i(J(z))=q^{i}+\alpha_iq^{d}+O(q^{d+1})\in M_{2k+1}^{-\varepsilon}(\Gamma)
\]
for some real polynomial $P_i(x).$
It is easy to check that $\{f_{i,k}\}$ form a basis for $M_{2k+1}^{-\varepsilon}(\Gamma),$ and $s=d-1.$
\\
Next suppose that $d=0.$
By Theorem~\ref{dimbr} and our assumption $l\equiv 2$ mod $4$, we have $d>0$ unless $k=0$ and $\varepsilon=1.$
Hence $k=0$ and $\varepsilon=1.$ Let
\[
f_{-1,0}:=\frac{-\frac{\lambda}{2\pi i}J'(z)}{f_{w}^{\frac{l-2}{4}}}.
\]
Note that the order of vanishing of $J'$ at $w_1$ is $l-1> \frac{l-2}{4}$, hence $f_{-1,0}\in M_{1}^{-!}(\Gamma).$ We have
\[
f_{-1,0}=q^{-1}+\alpha_{-1}+O(q).
\]
\end{proof}
Next, we introduce a duality between $M_{2k+1}^{-\varepsilon!}(\Gamma)$ and $ M_{-2k+1 }^{\varepsilon!}(\Gamma).$
\begin{lemma}\label{duality}
Suppose that $f=\sum_{n} a_nq^n \in M_{2k+1}^{-\varepsilon!}(\Gamma)$ and $\phi=\sum b_mq^m \in M_{-2k+1 }^{\varepsilon!}(\Gamma).$ Then,
\[
\sum_{n} a_n b_{-n}=0.
\]
\end{lemma}
\begin{proof}
We have
\[
\sum_{n} a_n b_{-n}= \frac{1}{\lambda}\int_{w_1}^{w_2}f(z)\phi(z) dz.
\]
We note that $f(z)\phi(z)\in M_{2}^{-!}(\Gamma).$ We show that
\[
\int_{w_1}^{w_2}h(z) dz=0
\]
for every $h(z)\in M_{2}^{-!}(\Gamma).$
By substituting $w=\frac{-1}{z}$ in the integral, we obtain
\[
\begin{split}
\int_{w_1}^{w_2}h(z) dz&=\int_{w_2}^{w_1} h\left(\frac{-1}{w}\right) d\left(\frac{-1}{w}\right)
\\
&=\int_{w_2}^{w_1} h\left(\frac{-1}{w}\right) \frac{1}{w^2} dw
\\
&=\int_{w_2}^{w_1} h(w) dw= -\int_{w_1}^{w_2}h(z) dz,
\end{split}
\]
where we used
\(
h(w)= \frac{ 1}{w^2} h\left(\frac{-1}{w}\right).
\) This implies that
\[
\int_{w_1}^{w_2}h(z) dz=0.
\]
\end{proof}
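Only the transformation under $S$ is used in the argument above, so the vanishing step can be sanity-checked numerically: for any holomorphic seed $g$, the symmetrization $h(z)=g(z)+z^{-2}g(\frac{-1}{z})$ satisfies $h(w)=\frac{1}{w^2}h(\frac{-1}{w})$, and its integral along the unit-circle arc from $w_1$ to $w_2$ vanishes. The sketch below does this for $l=6$ with an illustrative seed ($h$ need not be an actual modular form for this step):

```python
import cmath, math

l = 6
a, b = math.pi - math.pi / l, math.pi / l   # arc endpoints: w1 = e^{ia}, w2 = e^{ib}

def g(z):
    # arbitrary holomorphic seed; only the S-symmetry of h matters here
    return cmath.exp(2j * cmath.pi * z)

def h(z):
    # symmetrization: h(w) = w^{-2} h(-1/w) holds for any seed g
    return g(z) + z ** -2 * g(-1 / z)

# trapezoid rule for the integral of h(z) dz along z = e^{iθ}, θ from a to b
N = 2000
total = 0j
for j in range(N + 1):
    th = a + (b - a) * j / N
    z = cmath.exp(1j * th)
    w = 0.5 if j in (0, N) else 1.0
    total += w * h(z) * 1j * z * (b - a) / N   # dz = i e^{iθ} dθ

assert abs(total) < 1e-6
print("arc integral magnitude:", abs(total))
```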
Let
\begin{equation}\label{jder}
\phi_{d,k}^{\varepsilon}(z):=\frac{\lambda}{2\pi i} \frac{J'(z)}{f_{d-1,k}(z)}.
\end{equation}
It follows from~\eqref{jjj} that $ \phi_{d,k}^{\varepsilon}\in M_{-2k+1}^{\varepsilon !}(\Gamma).$ Moreover, by Lemma~\ref{duality}, $\phi_{d,k}^{\varepsilon}(z)$ has the following coefficients:
\[
\phi_{d,k}^{\varepsilon}(z)=q^{-d}-\sum_{i=0}^{d-1} \alpha_i q^{-i}+O(q).
\]
Let $Q(x)$ be a polynomial with complex coefficients. Define
\[
\mathcal{Q}^{\varepsilon}_k(z):=Q(J(z)) \phi_{d,k}^{\varepsilon}(z).
\]
It is easy to check that $\mathcal{Q}^{\varepsilon}_k(z)$ satisfies the modular transformation of a weight $-2k+1$ modular form with multiplier $\varepsilon$; see \cite[Proposition 3.13]{Jonas}. Moreover,
\[\mathcal{Q}^{\varepsilon}_k(z)= O(q^{-(\deg(Q)+d)}), \text{ as }z\to i\infty.\]
Let
\begin{equation}\label{defeqqq}
\phi_{n,k}^{\varepsilon}:=Q^{\varepsilon}_{n,k}(J)\phi_{d,k}^{\varepsilon}(z)= q^{-n}+O(q^{-d(\varepsilon,k)+1}), n\geq d(\varepsilon,k),
\end{equation}
be the unique weakly holomorphic modular forms of weight $-2k+1$ with respect to $\Gamma.$
Let
\[
K_k^{\varepsilon}(z,\tau):=\sum_{n\geq d(\varepsilon,k)}^{\infty} \phi_{n,k}^{\varepsilon}(z) e^{\frac{2\pi i \tau n}{\lambda}}.
\]
Zagier proved an explicit formula for a special case of the above generating series for the modular curve in~\cite{Zagiertr}.
The following proposition is stated for $d(\varepsilon,k)>0$ in~\cite[Theorem 3.14]{Jonas}. We give a proof for $d(\varepsilon,k)\geq 0$ that closely follows the proof of~\cite[Theorem 2]{Jenkins}.
\begin{proposition}\label{genf}
We have
\[
K_k^{\varepsilon}(z,\tau)=\frac{\phi_{d,k}^{\varepsilon}(z)f_{d-1,k}^{-\varepsilon}(\tau)}{J(\tau)-J(z)}.
\]
In particular, $K_k^{\varepsilon}(z,\tau)$ is a modular form of weight $2k+1$ and multiplier $-\varepsilon$ with respect to $\tau$ and of weight $1-2k$ and multiplier $\varepsilon$ with respect to $z.$
\end{proposition}
\begin{proof}
By~\eqref{jder}, it is enough to show that
\[
\frac{1}{\lambda}K_k^{\varepsilon}(z,\tau)=-\frac{1}{2\pi i}\frac{J'(\tau)\phi_{d,k}^{\varepsilon}(z)}{\left(J(\tau)-J(z)\right)\phi_{d,k}^{\varepsilon}(\tau)}.
\]
For $\Im\tau>\Im z,$ the generating series is convergent, and by the circle method,
\[
\phi_{n,k}^{\varepsilon}(z)= \frac{1}{\lambda}\int_{\tau_0}^{\tau_0+\lambda} K_k^{\varepsilon}(z,\tau)q(-n\tau) d\tau,
\]
where $\Im(\tau_0)>\Im(z).$
It is enough to show that for $n\geq d$
\[
\phi_{n,k}^{\varepsilon}(z)=-\frac{1}{2\pi i}\int_{\tau_0}^{\tau_0+\lambda}\frac{J'(\tau)\phi_{d,k}^{\varepsilon}(z)}{\left(J(\tau)-J(z)\right)\phi_{d,k}^{\varepsilon}(\tau)}q(-n\tau) d\tau.
\]
Since $f_{d-1,k}^{-\varepsilon}(\tau)q(-n\tau) $ is holomorphic at the cusp $\infty$ for every $n< d$, by \eqref{jder} we have
\[
\int_{\tau_0}^{\tau_0+\lambda} \frac{J'(\tau)\phi_{d,k}^{\varepsilon}(z)}{\left(J(\tau)-J(z)\right)\phi_{d,k}^{\varepsilon}(\tau)}q(-n\tau) d\tau=\int_{\tau_0}^{\tau_0+\lambda} \frac{J'(z)f_{d-1,k}^{-\varepsilon}(\tau)}{\left(J(\tau)-J(z)\right)f_{d-1,k}^{-\varepsilon}(z)}q(-n\tau) d\tau=0.
\]
For $n\geq d$, we note that
\[
\phi_{n,k}^{\varepsilon}(\tau)-q(-n\tau)=O(q(-(d-1)\tau)).
\]
Hence,
\[
\int_{\tau_0}^{\tau_0+\lambda} \frac{J'(\tau)\phi_{d,k}^{\varepsilon}(z)}{\left(J(\tau)-J(z)\right)\phi_{d,k}^{\varepsilon}(\tau)}\left(\phi_{n,k}^{\varepsilon}(\tau)-q(-n\tau)\right) d\tau=0.
\]
Therefore,
\[
\begin{split}
-\frac{1}{2\pi i}\int_{\tau_0}^{\tau_0+\lambda} \frac{J'(\tau)\phi_{d,k}^{\varepsilon}(z)}{\left(J(\tau)-J(z)\right)\phi_{d,k}^{\varepsilon}(\tau)}q(-n\tau) d\tau&=-\frac{1}{2\pi i}\int_{\tau_0}^{\tau_0+\lambda} \frac{J'(\tau)\phi_{d,k}^{\varepsilon}(z)}{\left(J(\tau)-J(z)\right)\phi_{d,k}^{\varepsilon}(\tau)}\phi_{n,k}^{\varepsilon}(\tau)d\tau
\\
&=\phi_{d,k}^{\varepsilon}(z)\frac{-1}{2\pi i}\int_{J(\tau_0)}^{J(\tau_0+\lambda)} \frac{\mathcal{Q}^{\varepsilon}_{n,k}(J(\tau) )}{\left(J(\tau)-J(z)\right)}dJ(\tau)
\\
&=\phi_{d,k}^{\varepsilon}(z)\mathcal{Q}^{\varepsilon}_{n,k}(J(z) ) = \phi_{n,k}^{\varepsilon}(z).
\end{split}
\]
\end{proof}
\begin{remark}
Note that there is a duality between weights $-2k+1$ and $2k+1.$ More precisely,
for every $k\geq 0,$ we have
\[
\sum_{n\geq d(\varepsilon,k)}^{\infty} \phi_{n,k}^{\varepsilon}(z) e^{2\pi i \tau \frac{n}{\lambda}}=\frac{\phi_{d,k}^{\varepsilon}(z)f_{d-1,k}^{-\varepsilon}(\tau)}{\left(J(\tau)-J(z)\right)}= -\sum_{n\geq 1-d(\varepsilon,k)}^{\infty} f_{n,k}^{-\varepsilon}(\tau) e^{2\pi i z \frac{n}{\lambda}}.
\]
\end{remark}
\subsection{Modular integrals}
For the $k$-th derivative interpolation formula, we introduce $F_k^{\varepsilon}(\tau,x)$, which is holomorphic for $\Im(\tau)>\sin\left(\frac{\pi}{l}\right)$ (it is multi-valued under analytic continuation to $\sin\left(\frac{\pi}{l}\right) \geq\Im(\tau)>0$) such that
\begin{equation}\label{cobn}
\begin{split}
F_k^{\varepsilon}(\tau,x)|_{2k+1}\left(I+\varepsilon S\right)=\frac{e^{\pi i \tau |x|^2}}{(i\pi \tau)^k} \Big|_{2k+1}\left(I+\varepsilon S\right),
\\
F_k^{\varepsilon}(\tau+\lambda,x)=F_k^{\varepsilon}(\tau,x),
\end{split}
\end{equation}
where $\Im (\tau),\Im(\frac{-1}{\tau})>\sin\left(\frac{\pi}{l}\right).$
Following Knopp~\cite{Knopp,Knoppm}, we call a solution of the above functional equations a modular integral, and the given function on the right-hand side its period function. We find a solution to the above functional equations in section~\ref{modint}. Our method is based on the work of Radchenko and Viazovska~\cite{inter}, which follows closely the work of Duke, Imamo\={g}lu, and T\'{o}th~\cite{Dukec}.
\section{Interpolation basis}
\subsection{Interpolation basis for higher derivatives}
We define a family of eigenfunctions of the Fourier transformation with eigenvalue $\varepsilon$ such that their $k$-th derivatives vanish at all but one point of the form
$\sqrt{\frac{2n}{\lambda}}$, where $n\geq d(\varepsilon,k)=\dim M_{2k+1}^{-\varepsilon}(\Gamma).$ Recall that
\[
a_{n,k}^{\varepsilon}(x):= \frac{1}{\lambda}\int_{w_1}^{w_2}\phi_{n,k}^{\varepsilon}(z)\frac{e^{\pi i z |x|^2}}{(iz\pi)^k} dz.
\]
\begin{lemma}
We have
\[
\widehat{a_{n,k}^{\varepsilon}}(\xi)=\varepsilon a_{n,k}^{\varepsilon}(\xi).
\]
\end{lemma}
\begin{proof} We have
\[
\begin{split}
\widehat{a_{n,k}^{\varepsilon}}(\xi)
&= \int_{\mathbf{R}^2} \frac{1}{ \lambda}\int_{w_1}^{w_2}\phi_{n,k}^{\varepsilon}(z)\frac{e^{\pi i z |x|^2}}{(iz\pi)^k} e^{2 \pi i \left<x,\xi \right>}dz dx
\\
&= \frac{1}{(i\pi)^k \lambda}\int _{w_1}^{w_2}\phi_{n,k}^{\varepsilon}(z)\frac{i}{z^{k+1}} e^{\pi i \frac{-1}{z} |\xi|^2} dz
\\
&= \frac{1}{(i\pi)^k \lambda}\int _{w_2}^{w_1} \phi_{n,k}^{\varepsilon}\left(\frac{-1}{v}\right)i(-v)^{k+1} e^{\pi i v |\xi|^2} v^{-2} dv
\\
&= \frac{1}{(i\pi)^k\lambda}\int _{w_2}^{w_1} -\phi_{n,k}^{\varepsilon}\left(\frac{-1}{v}\right)\left(\frac{i}{v}\right)^{-2k+1} \frac{e^{\pi i v |\xi|^2}}{v^k} dv
\\
&= \frac{1}{\lambda}\int _{w_2}^{w_1} -\varepsilon\phi_{n,k}^{\varepsilon}(v) \frac{e^{\pi i v |\xi|^2}}{(i\pi v)^k} dv =\varepsilon a_{n,k}^{\varepsilon}(\xi),
\end{split}
\]
where $v=\frac{-1}{z}.$
\end{proof}
Recall that $u=|x|^2.$
\begin{lemma} For $m,n \geq d(\varepsilon,k)$, we have
\[
\frac{d^k}{du^k} a_{n,k}^{\varepsilon}\left(\sqrt{\frac{2m}{\lambda}}\right)
=\delta(m,n).
\]
\end{lemma}
\begin{proof}
We have
\[
\begin{split}
\frac{d^k}{du^k} a_{n,k}^{\varepsilon}\left(x\right)&= \frac{1}{\lambda} \int _{w_1}^{w_2} e^{-\frac{2\pi iz}{\lambda} n} \left(\frac{d}{d|x|^2}\right)^k\frac{e^{\pi i z |x|^2}}{(iz\pi)^k} dz + \frac{1}{\lambda} \int _{w_1}^{w_2}\sum_{j\geq 0}r_{n,k}^{\varepsilon}(j)e^{\frac{2\pi iz}{\lambda}j} \left(\frac{d}{d|x|^2}\right)^k\frac{e^{\pi i z |x|^2}}{(iz\pi)^k} dz
\\
&= \frac{1}{\lambda}\int_{w_1}^{w_2}e^{2\pi i z(\frac{-n+\lambda|x|^2/2}{\lambda})}dz +\sum_{j\geq 0}\frac{r_{n,k}^{\varepsilon}(j)}{2\pi i (j+\lambda|x|^2/2)} e^{2\pi i z(\frac{j+\lambda|x|^2/2}{\lambda})}\Big|_{w_1}^{w_2}
\end{split}
\]
Suppose that $x=\sqrt{\frac{2m}{\lambda}}$, then we have
\[
\begin{split}
\frac{d^k}{du^k} a_{n,k}^{\varepsilon}\left(\sqrt{\frac{2m}{\lambda}}\right)&=
\delta_{m,n}+\sum_{j\geq 0}\frac{r_{n,k}^{\varepsilon}(j)}{2\pi i (j+m)} e^{2\pi i z(\frac{j+m}{\lambda})}\Big|_{w_1}^{w_2}
\\
&= \delta_{m,n}+\sum_{j\geq 0}\frac{r_{n,k}^{\varepsilon}(j)}{\pi (j+m)} e^{-\frac{2\pi (j+m)\sin(\pi/l)}{\lambda}} \sin \left( \pi (j+m) \right).
\end{split}
\]
Since $ \sin \left( \pi (j+m) \right)=0$ for all integers $j$ and $m$,
\[ \frac{d^k}{du^k} a_{n,k}^{\varepsilon}\left(\sqrt{\frac{2m}{\lambda}}\right)=
\delta_{m,n}.\]
\end{proof}
\section{Modular integral with given period function}\label{modint}
\begin{figure}
\centering
\begin{tikzpicture}[scale=3]
\draw[-latex](\myxlow,0) -- (\myxhigh ,0);
\pgfmathsetmacro{\succofmyxlow}{\myxlow+1}
\foreach \x in {\myxlow,\succofmyxlow,...,\myxhigh}
{ \draw (\x,-0) -- (\x,-0.0) node[below,font=\tiny] {\x};
}
{ \draw (-0.866,-0.1)node[below,font=\tiny]{$-\frac{\lambda}{2}$}--(-0.866,0.5)node[left,font=\tiny]{$w_1$} (0.866,0.5)node[right,font=\tiny]{$w_2$} -- (0.866,-0.1) node[below,font=\tiny] {$\frac{\lambda}{2}$} ;}{
\draw (-0.866,1.2)node[left,font=\tiny]{$\tau_0$}--(0.866,1.2)node[right,font=\tiny]{$\tau_0+\lambda$} ;
}
{
\draw(0.866,1.2)node[below,font=\tiny]{$\Omega_0$} ;
}
\foreach \y in {0,1}
{ \draw (0,\y) -- (-0.0,\y) node[left,font=\tiny] {\pgfmathprintnumber{\y}};
}
\draw[-latex](0,-0.0) -- (0,1.6);
\begin{scope}
\clip (\myxlow,0) rectangle (\myxhigh,2.2);
{ \draw[very thin, blue] (1,0) arc(0:180:1);
}
{\draw [very thin, red] (-0.866,0.5)node[left,font=\tiny]{$w_1$}--(0.866,0.5)node[right,font=\tiny]{$w_2$} (0,0.5)node[right,font=\tiny]{$\gamma$} (0,0.8)node[right,font=\tiny]{$D_1$} ;
}
{ \draw (1,1)[very thin, red] arc(0:360:1) (0,2)node{$S\gamma$};
}
\end{scope}
\begin{scope}
\begin{pgfonlayer}{background}
\clip (-0.866,0) rectangle (0.866,1.7);
\clip (1,1.7) -| (-1,0) arc (180:0:1) -- cycle;
\fill[gray,opacity=0.8] (-1,-1) rectangle (1,2);
\end{pgfonlayer}
\end{scope}
\end{tikzpicture}
\captionof{figure}{}\label{heckefff}
\end{figure}
\begin{lemma} \label{inbd}Suppose that $z$ is on the arc from $w_1$ to $w_2.$
We have
\[
|\phi_{n,k}^{\varepsilon}(z)|\ll_{k} \delta^{-1}e^{ \frac{2\pi (1+\delta)}{\lambda}n}
\]
for any $\delta>0.$
\end{lemma}
\begin{proof} Recall that
\[
K_k^{\varepsilon}(z,\tau):=\sum_{n\geq d(\varepsilon,k)}^{\infty} \phi_{n,k}^{\varepsilon}(z) e^{2\pi i \tau \frac{n}{\lambda}}.
\]
By Proposition~\ref{genf}, we have
\[
K_k^{\varepsilon}(z,\tau)=\frac{\phi_{d,k}^{\varepsilon}(z)f_{d-1,k}^{-\varepsilon}(\tau)}{J(\tau)-J(z)} .\]
By the circle method, we have
\[
\begin{split}
|\phi_{n,k}^{\varepsilon}(z)|&\ll \left|\int_{\tau_0}^{\tau_0+\lambda} K_k^{\varepsilon}(z,\tau) e^{-2\pi i \tau \frac{n}{\lambda}}d\tau\right|
\\
&\ll \int_{\tau_0}^{\tau_0+\lambda} \left| \frac{\phi_{d,k}^{\varepsilon}(z)f_{d-1,k}^{-\varepsilon}(\tau)}{J(\tau)-J(z)} e^{-2\pi i \tau \frac{n}{\lambda}}d\tau \right|,
\end{split}
\]
where $\tau_0=-\frac{\lambda}{2}+(1+\delta)i$ for some fixed $0<\delta<1;$ see Figure~\ref{heckefff}. We note that $J(z)$ is injective on the fundamental domain of $\Gamma.$ Hence
\[
\frac{1}{|J(\tau)-J(z) | } = O(\delta^{-1}),
\]
where $\Im(\tau)=1+\delta$ and, $z$ belongs to the arc between $w_1$ and $w_2.$ Since $\phi_{d,k}^{\varepsilon}(z)$ and $f_{d-1,k}^{-\varepsilon}(\tau)$ are fixed and holomorphic on the upper half-plane, we have
\[
|\phi_{d,k}^{\varepsilon}(z)f_{d-1,k}^{-\varepsilon}(\tau) |=O_{k}(1),
\]
where $\Im(\tau)=1+\delta$ and $z$ belongs to the arc between $w_1$ and $w_2.$ Therefore,
\[
|\phi_{n,k}^{\varepsilon}(z)| \ll_{k} \int_{\tau_0}^{\tau_0+\lambda} \left| \delta^{-1} e^{-2\pi i \tau \frac{n}{\lambda}}d\tau \right| \ll \delta^{-1}e^{ \frac{2\pi (1+\delta)}{\lambda}n}.
\]
\end{proof}
\begin{lemma}\label{inibd}
We have
\[
|a_{n,k}^{\varepsilon}(x)| \ll_k \delta^{-1}e^{ \frac{2\pi (1+\delta)}{\lambda}n}e^{-\pi\sin\left(\frac{\pi}{l}\right) |x|^2}
\]
for any $\delta>0.$
\end{lemma}
\begin{proof}
By Lemma~\ref{inbd}, we have
\[
|a_{n,k}^{\varepsilon}(x)|= \frac{1}{\lambda} \left|\int_{w_1}^{w_2}\phi_{n,k}^{\varepsilon}(z)\frac{e^{\pi i z |x|^2}}{(iz\pi)^k} dz \right|\ll_{k} \delta^{-1}e^{ \frac{2\pi (1+\delta)}{\lambda}n} e^{-\pi\sin\left(\frac{\pi}{l}\right) |x|^2} .
\]
\end{proof}
Lemma~\ref{inibd} implies that the following series is absolutely convergent for $\Im(\tau)>1$
\[
F_k^{\varepsilon}(\tau,x):=\sum_{n\geq 0}a_{n,k}^{\varepsilon}(x) e^{2\pi i \tau \frac{n}{\lambda}}.
\]
Moreover, we can switch the order of summation and the integration and obtain for $\Im(\tau)>1$
\[
F_k^{\varepsilon}(\tau,x)=\frac{1}{\lambda}\int _{w_1}^{w_2} \left( \sum_{n\geq 0} \phi_{n,k}^{\varepsilon}(z)e^{2\pi i \tau \frac{n}{\lambda}} \right) \frac{e^{\pi i z |x|^2}}{(iz\pi)^k} dz.
\]
By Proposition~\ref{genf}, we have
\[
\sum_{n\geq 0} \phi_{n,k}^{\varepsilon}(z)e^{2\pi i \tau \frac{n}{\lambda}} =K_k^{\varepsilon}(z,\tau)=\frac{\phi_{d,k}^{\varepsilon}(z)f_{d-1,k}^{-\varepsilon}(\tau)}{J(\tau)-J(z)}.
\]
Therefore, we have
\begin{equation}\label{contourdef}
F_k^{\varepsilon}(\tau,x)=\frac{1}{\lambda}\int_{w_1}^{w_2}\frac{\phi_{d,k}^{\varepsilon}(z)f_{d-1,k}^{-\varepsilon}(\tau)}{\left(J(\tau)-J(z)\right)} \frac{e^{\pi i z |x|^2}}{(iz\pi)^k} dz
\end{equation}
for $\Im(\tau)>1.$
\begin{proposition}\label{ccontour}
$F_k^{\varepsilon}(\tau,x)$ has an analytic continuation to $\Im(\tau)>\sin\left(\frac{\pi}{l}\right)$. Moreover, we have
\[
\begin{split}
F_k^{\varepsilon}(\tau,x)|_{2k+1}(I+\varepsilon S)=\frac{e^{\pi i \tau x^2}}{(i\pi\tau)^k} |_{2k+1}(I+\varepsilon S),
\\
F_k^{\varepsilon}(\tau+\lambda,x)=F_k^{\varepsilon}(\tau,x),
\end{split}
\]
whenever both sides of the above identities are in the domain of definition of $F_k^{\varepsilon}(\tau,x).$
\end{proposition}
\begin{proof}
We note that $J(z)$ takes real values on the unit circle and that the image of the arc between $w_1$ and $w_2$ is the interval $[0,J(i)]\subset \mathbf{R}.$
We also note that the contour integral in~\eqref{contourdef} is well defined for every $\tau$ such that
\[
J(\tau)\notin [0,J(i)]\subset\mathbf{R}.
\]
This implies that $F^{\varepsilon}(x,\tau)$ has analytic continuation to the fundamental domain $\mathcal{D}$ and all its horizontal translations by $z\to z\pm\lambda$. We denote this region by $\Omega_0$; see Figure~\ref{heckefff}.
\\
Let $\gamma$ be the chord between $w_1$ and $w_2$, and $D_1$ be the region between the chord and the arc on the unit circle between $w_1$ and $w_2.$ Let $\Omega_1$ be the union of $D_1$ and all its horizontal translations by $z\to z\pm\lambda$; see Figure~\ref{heckefff}. Next, we analytically continue $F^{\varepsilon}(x,\tau)$ on $D_1.$ It follows from $F^{\varepsilon}(x,\tau)=F^{\varepsilon}(x,\tau+\lambda)$ that it has analytic continuation to $\Omega_1.$
\\
Let $S\gamma$ be the image of $\gamma$ under $z \to \frac{-1}{z}.$ It is easy to check that $S\gamma\subset \Omega_0.$ Suppose that $\tau \in D_1;$ then $S\tau\in \Omega_0.$ Moreover, we have
\begin{equation}\label{eqdiff}
\begin{split}
\frac{1}{\lambda}\int_{\gamma}\frac{\phi_{d,k}^{\varepsilon}(z)f_{d-1,k}^{-\varepsilon}(\tau)}{\left(J(\tau)-J(z)\right)} \frac{e^{\pi i z |x|^2}}{(iz\pi)^k} dz -\frac{1}{\lambda}&\int_{w_1}^{w_2}\frac{\phi_{d,k}^{\varepsilon}(z)f_{d-1,k}^{-\varepsilon}(\tau)}{\left(J(\tau)-J(z)\right)} \frac{e^{\pi i z |x|^2}}{(iz\pi)^k} dz
\\
&=\frac{2\pi i}{\lambda}Res_{z=\tau}\left(\frac{\phi_{d,k}^{\varepsilon}(z)f_{d-1,k}^{-\varepsilon}(\tau)}{\left(J(\tau)-J(z)\right)} \frac{e^{\pi i z |x|^2}}{(iz\pi)^k}\right)
\\
&= \frac{2\pi i}{\lambda}\frac{\phi_{d,k}^{\varepsilon}(\tau)f_{d-1,k}^{-\varepsilon}(\tau)}{-J'(\tau)}\frac{e^{\pi i \tau |x|^2}}{(i\tau\pi)^k}
\\
&=\frac{e^{\pi i \tau |x|^2}}{(i\tau\pi)^k},
\end{split}
\end{equation}
where we use identity~\eqref{jder}. Similarly, suppose that $\tau\in SD_1\subset \Omega_0$; then
\[
\begin{split}
\frac{1}{\lambda}\int_{\gamma} \frac{\phi_{d,k}^{\varepsilon}(z)f_{d-1,k}^{-\varepsilon}(\tau)}{\left(J(\tau)-J(z)\right)} \frac{e^{\pi i z |x|^2}}{(iz\pi)^k} dz -\frac{1}{\lambda}&\int_{w_1}^{w_2}\frac{\phi_{d,k}^{\varepsilon}(z)f_{d-1,k}^{-\varepsilon}(\tau)}{\left(J(\tau)-J(z)\right)} \frac{e^{\pi i z |x|^2}}{(iz\pi)^k} dz
\\
&=\frac{2\pi i}{\lambda}Res_{z=\frac{-1}{\tau}}\left(\frac{\phi_{d,k}^{\varepsilon}(z)f_{d-1,k}^{-\varepsilon}(\tau)}{\left(J(\tau)-J(z)\right)} \frac{e^{\pi i z |x|^2}}{(iz\pi)^k}\right)
\\
&= \frac{2\pi i}{\lambda} \frac{\phi_{d,k}^{\varepsilon}(\frac{-1}{\tau})f_{d-1,k}^{-\varepsilon}(\tau)}{-J'(\frac{-1}{\tau})}\frac{e^{\pi i \frac{-1}{\tau} |x|^2}}{(\frac{-i}{\tau}\pi)^k}
\\
&=-\varepsilon\left(\frac{i}{\tau}\right)^{2k+1} \frac{e^{\pi i \frac{-1}{\tau} |x|^2}}{(\frac{-i}{\tau}\pi)^k}.
\end{split}
\]
We note that for $\tau\in \mathcal{D}-SD_1,$ we have
\[
F^{\varepsilon}(x,\tau)= \frac{1}{\lambda}\int_{w_1}^{w_2}\frac{\phi_{d,k}^{\varepsilon}(z)f_{d-1,k}^{-\varepsilon}(\tau)}{\left(J(\tau)-J(z)\right)} \frac{e^{\pi i z |x|^2}}{(iz\pi)^k} dz=\frac{1}{\lambda}\int_{\gamma}\frac{\phi_{d,k}^{\varepsilon}(z)f_{d-1,k}^{-\varepsilon}(\tau)}{\left(J(\tau)-J(z)\right)} \frac{e^{\pi i z |x|^2}}{(iz\pi)^k} dz.
\]
We note that for $\tau\in SD_1,$
\[
\begin{split}
F^{\varepsilon}(x,\tau)&= \frac{1}{\lambda}\int_{w_1}^{w_2}\frac{\phi_{d,k}^{\varepsilon}(z)f_{d-1,k}^{-\varepsilon}(\tau)}{\left(J(\tau)-J(z)\right)} \frac{e^{\pi i z |x|^2}}{(iz\pi)^k} dz
\\
&=\frac{1}{\lambda}\int_{\gamma}\frac{\phi_{d,k}^{\varepsilon}(z)f_{d-1,k}^{-\varepsilon}(\tau)}{\left(J(\tau)-J(z)\right)} \frac{e^{\pi i z |x|^2}}{(iz\pi)^k} dz+\varepsilon\left(\frac{i}{\tau}\right)^{2k+1} \frac{e^{\pi i \frac{-1}{\tau} |x|^2}}{(\frac{-i}{\tau}\pi)^k}.
\end{split}
\]
We note that the right hand side is well-defined on $D_1\cup SD_1.$ Hence,
we analytically continue $F^{\varepsilon}(x,\tau)$ to $\tau \in D_1\cup SD_1$ by defining
\begin{equation}\label{ancont}
F^{\varepsilon}(x,\tau):=\frac{1}{\lambda}\int_{\gamma}\frac{\phi_{d,k}^{\varepsilon}(z)f_{d-1,k}^{-\varepsilon}(\tau)}{\left(J(\tau)-J(z)\right)} \frac{e^{\pi i z |x|^2}}{(iz\pi)^k} dz+\varepsilon\left(\frac{i}{\tau}\right)^{2k+1} \frac{e^{\pi i \frac{-1}{\tau} |x|^2}}{(\frac{-i}{\tau}\pi)^k}.
\end{equation}
This completes the proof of the first part of the proposition.
\\
Next, suppose that $\tau \in D_1.$ By the above, we have
\[
F^{\varepsilon}(x,\tau)=\frac{1}{\lambda}\int_{\gamma}\frac{\phi_{d,k}^{\varepsilon}(z)f_{d-1,k}^{-\varepsilon}(\tau)}{\left(J(\tau)-J(z)\right)} \frac{e^{\pi i z |x|^2}}{(iz\pi)^k} dz+\varepsilon\left(\frac{i}{\tau}\right)^{2k+1} \frac{e^{\pi i \frac{-1}{\tau} |x|^2}}{(\frac{-i}{\tau}\pi)^k}.
\]
By equation~\eqref{eqdiff}, we have
\[
\frac{1}{\lambda}\int_{\gamma}\frac{\phi_{d,k}^{\varepsilon}(z)f_{d-1,k}^{-\varepsilon}(\tau)}{\left(J(\tau)-J(z)\right)} \frac{e^{\pi i z |x|^2}}{(iz\pi)^k} dz -\frac{1}{\lambda}\int_{w_1}^{w_2}\frac{\phi_{d,k}^{\varepsilon}(z)f_{d-1,k}^{-\varepsilon}(\tau)}{\left(J(\tau)-J(z)\right)} \frac{e^{\pi i z |x|^2}}{(iz\pi)^k} dz=\frac{e^{\pi i \tau |x|^2}}{(i\tau\pi)^k}.
\]
Therefore, we have
\begin{equation}\label{eqnt}
F^{\varepsilon}(x,\tau)=\frac{1}{\lambda}\int_{w_1}^{w_2}\frac{\phi_{d,k}^{\varepsilon}(z)f_{d-1,k}^{-\varepsilon}(\tau)}{\left(J(\tau)-J(z)\right)} \frac{e^{\pi i z |x|^2}}{(iz\pi)^k} dz+\frac{e^{\pi i \tau x^2}}{(i\pi\tau)^k} |_{2k+1}(I+\varepsilon S)
\end{equation}
for every $\tau\in D_1.$ Furthermore, we have
\[
f_{d-1,k}^{-\varepsilon}(\tau)|_{2k+1}S=-\varepsilon f_{d-1,k}^{-\varepsilon}(\tau),
\]
and
\[
J(\tau)=J(\frac{-1}{\tau}).
\]
Hence,
\begin{equation}\label{eqnt1}
\frac{1}{\lambda}\int_{w_1}^{w_2}\frac{\phi_{d,k}^{\varepsilon}(z)f_{d-1,k}^{-\varepsilon}(\tau)}{\left(J(\tau)-J(z)\right)} \frac{e^{\pi i z |x|^2}}{(iz\pi)^k} dz=-\varepsilon \left( \frac{i}{\tau} \right)^{2k+1}\frac{1}{\lambda}\int_{w_1}^{w_2}\frac{\phi_{d,k}^{\varepsilon}(z)f_{d-1,k}^{-\varepsilon}(\frac{-1}{\tau})}{\left(J(\frac{-1}{\tau})-J(z)\right)} \frac{e^{\pi i z |x|^2}}{(iz\pi)^k} dz.
\end{equation}
Note that $\frac{-1}{\tau}\in SD_1\subset \Omega_0$, hence
\[
F^{\varepsilon}\left(x,\frac{-1}{\tau}\right)= \frac{1}{\lambda}\int_{w_1}^{w_2}\frac{\phi_{d,k}^{\varepsilon}(z)f_{d-1,k}^{-\varepsilon}(\frac{-1}{\tau})}{\left(J(\frac{-1}{\tau})-J(z)\right)} \frac{e^{\pi i z |x|^2}}{(iz\pi)^k} dz.
\]
Finally, by \eqref{eqnt}, \eqref{eqnt1} and the above
\[
F_k^{\varepsilon}(\tau,x)|_{2k+1}(I+\varepsilon S)=\frac{e^{\pi i \tau x^2}}{(i\pi\tau)^k} |_{2k+1}(I+\varepsilon S).
\]
This completes the proof of our Proposition.
\end{proof}
\subsection{Growth estimate}\label{proofm}
Next, we improve the exponent in Lemma~\ref{inibd} by a factor $\sin\left(\frac{\pi}{l}\right).$
\begin{proposition}\label{impbd}
We have
\[
|a_{n,k}^{\varepsilon}(x)| \ll_k \delta^{-1}e^{ \frac{2\pi \sin\left(\frac{\pi}{l}\right)(1+\delta)}{\lambda}n}e^{-\pi \sin\left(\frac{\pi}{l}\right) |x|^2}
\]
for any $\delta>0.$
\end{proposition}
\begin{proof}
Let
\[I_{\delta}:=\left\{\tau:\Im (\tau)=\sin\left(\frac{\pi}{l}\right)+\delta \text{ and } |\Re(\tau)|\leq \frac{\lambda}{2}\right\}.\]
By Proposition~\ref{ccontour}, $F^{\varepsilon}(x,\tau)$ has analytic continuation to $I_{\delta},$ and
we have
\[
a_{n,k}^{\varepsilon}(x)=\frac{1}{\lambda}\int_{I_{\delta}}F^{\varepsilon}(x,\tau) e^{-2\pi i \tau \frac{n}{\lambda}} d\tau.
\]
Note that $I_{\delta}\subset D_1\cup SD_1$ for small enough $\delta.$ By~\eqref{ancont}
\[
F^{\varepsilon}(x,\tau)=\frac{1}{\lambda}\int_{\gamma}\frac{\phi_{d,k}^{\varepsilon}(z)f_{d-1,k}^{-\varepsilon}(\tau)}{\left(J(\tau)-J(z)\right)} \frac{e^{\pi i z |x|^2}}{(iz\pi)^k} dz+\varepsilon\left(\frac{i}{\tau}\right)^{2k+1} \frac{e^{\pi i \frac{-1}{\tau} |x|^2}}{(\frac{-i}{\tau}\pi)^k}.
\]
The proposition follows immediately from the above integral formulas.
\end{proof}
Finally, we prove Theorem~\ref{mainthmder}.
\begin{proof}[Proof of Theorem~\ref{mainthmder}]
It is easy to check that
\[
D_1\cup SD_1=\left\{\tau\in\mathbf{C}: \Im(\tau),\Im(-\frac{1}{\tau})>\sin\left(\frac{\pi}{l}\right)\right \}.
\]
Without loss of generality, we assume that
\[f(x)=\int \frac{e^{\pi i \tau x^2}}{(i\pi\tau)^k}d\lambda(\tau),\]
where $\lambda$ is a measure with bounded variation and supported on a compact subset of $ D_1\cup SD_1$. We have
\[\mathcal{F}f(x)=\int \frac{i}{\tau}\frac{e^{\pi i \frac{-1}{\tau} x^2}}{(i\pi\tau)^k}d\lambda(\tau)=\int \left(\frac{e^{\pi i \tau x^2}}{(i\pi\tau)^k}|_{2k+1}S\right) d\lambda(\tau).\]
Therefore,
\begin{equation}\label{fep}
f^{\varepsilon}(x)= \int \left(\frac{e^{\pi i \tau x^2}}{(i\pi\tau)^k}|_{2k+1}(I+\varepsilon S)\right) d\lambda(\tau).
\end{equation}
Moreover,
\[\frac{d^k}{du^k}\frac{e^{i\pi \tau |x|^2}}{(i\tau\pi)^k}=e^{i\pi \tau |x|^2},\]
and
\[
\frac{d^k}{du^k} \mathcal{F}\left(\frac{e^{i\pi \tau |x|^2}}{(i\tau\pi)^k}\right)=\left(\frac{i}{\tau}\right)^{2k+1}e^{i\pi \frac{-1}{\tau} |x|^2},
\]
where $u=|x|^2$. We average the above identities with respect to $d\lambda$, and obtain
\[
\begin{split}
\frac{d^k}{du^k} f(x)= \int e^{\pi i \tau |x|^2}d\lambda(\tau),
\\
\frac{d^k}{du^k} \mathcal{F}f(x)= \int \left( e^{\pi i \tau |x|^2}|_{2k+1}S\right) d\lambda(\tau).
\end{split}
\]
Hence,
\begin{equation}\label{feps}
\begin{split}
\frac{d^k}{du^k} f^{\varepsilon}(x)= \int \left( e^{\pi i \tau |x|^2}|_{2k+1}(I+\varepsilon S)\right) d\lambda(\tau).
\end{split}
\end{equation}
By Proposition~\ref{ccontour}, we have
\[
F_k^{\varepsilon}(\tau,x)|_{2k+1}(I+\varepsilon S)=\frac{e^{\pi i \tau x^2}}{(i\pi\tau)^k} |_{2k+1}(I+\varepsilon S)
\]
for every $\tau\in D_1\cup SD_1.$ By Proposition~\ref{impbd}, the Fourier expansion of $ F_k^{\varepsilon}(\tau,x)$ is convergent on $D_1\cup SD_1,$ and we have
\[
F_k^{\varepsilon}(\tau,x)|_{2k+1}{(I+\varepsilon S)}=\sum_{n\geq 0}a_{n,k}^{\varepsilon}(x) e^{2\pi i \tau \frac{n}{\lambda}}|_{2k+1}{(I+\varepsilon S)}.
\]
This implies that
\[
\sum_{n\geq 0}a_{n,k}^{\varepsilon}(x) e^{2\pi i \tau \frac{n}{\lambda}}|_{2k+1}{(I+\varepsilon S)}=\frac{e^{\pi i \tau x^2}}{(i\pi\tau)^k} |_{2k+1}{(I+\varepsilon S)}.
\]
We average the above identity with respect to $d\lambda$ and obtain
\[
\sum_{n\geq 0}a_{n,k}^{\varepsilon}(x) \int e^{2\pi i \tau \frac{n}{\lambda}}|_{2k+1}{(I+\varepsilon S)} d\lambda= \int \frac{e^{\pi i \tau x^2}}{(i\pi\tau)^k} |_{2k+1}{(I+\varepsilon S)} d\lambda(\tau).
\]
We substitute the left-hand side using \eqref{feps}, with the values of the $k$-th derivatives, and the right-hand side using \eqref{fep}, with $f^{\varepsilon}(x)$, and obtain
\[\sum_{n\geq d(\varepsilon,k)}a^{\varepsilon}_{n,k}(x) \frac{d^k}{du^k} f^{\varepsilon}\left(\sqrt{\frac{2n}{\lambda}}\right)=f^{\varepsilon}(x).
\]
This completes the proof of our theorem.
\end{proof}
\section{Domain of $F_k^{\varepsilon}(\tau,x)$}\label{monodromy}
Let $\tilde{\mathbb{H}}$ be the universal cover of $\mathbb{H}-\Gamma w_1,$ the upper half-plane minus the orbit of $w_1$ under $\Gamma.$ It follows from the change of the contour of integration that we introduced in the proof of Proposition~\ref{ccontour} that $F_k^{\varepsilon}(\tau,x)$ has an analytic continuation to the whole of $\tilde{\mathbb{H}}.$ In fact, $F_k^{\varepsilon}(\tau,x)$ has non-trivial monodromy around $w_1$ and every point of its orbit $\Gamma w_1,$ and cannot be extended beyond $\Im(\tau)>\sin\left(\frac{\pi}{l}\right).$
\subsection{Monodromy around $w_2$}
First, we analytically continue $F_k^{\varepsilon}(\tau,x)$ to every point in a neighborhood of $w_2$
except the segment from $\frac{\lambda}{2}$ to $w_2.$ Let $\mathcal{D}$ be the fundamental domain for $\Gamma$; see Figure~\ref{heckef}. Let $V=TS$, which is the hyperbolic rotation with center $w_2$ and angle $\frac{2\pi}{l}.$ We define
\[
\mathcal{D}_i=V^{i}\left(\mathcal{D}\cup T\mathcal{D}-\{w_1,w_2\} \right).
\]
Note that $\mathcal{D}_{i+l}=\mathcal{D}_i.$ We note that, by Proposition~\ref{ccontour}, $F_k^{\varepsilon}(\tau,x)$ is analytic and well defined on $\mathcal{D}_0=\mathcal{D}\cup T\mathcal{D}-\{w_1,w_2\} \subset \Omega_0.$ We denote this restriction by $F(\tau,x).$ By changing the contour of integration, $F$ has an analytic continuation to $S\mathcal{D} \subset \mathcal{D}_{l-1}$ which satisfies
\[
F(\tau,x)+\varepsilon \left(\frac{i}{\tau}\right)^{2k+1}F\left(\frac{-1}{\tau},x\right)=\frac{e^{\pi i \tau |x|^2}}{(i\tau\pi)^k}|_{2k+1}^{\varepsilon}(I+S),
\]
\[
F(\tau,x)=F(\tau+\lambda,x)
\]
for every $\tau\in \mathcal{D}.$ By combining the above identities, we have
\[
F(\alpha,x)|_{2k+1}^{-\varepsilon}V^{-1}=F(\alpha,x)+\frac{e^{\pi i \alpha |x|^2}}{(i\alpha\pi)^k}|_{2k+1}^{-\varepsilon}\left(V^{-1}-T^{-1}\right)
\]
for every $\alpha=\tau+\lambda \in T\mathcal{D}.$ It is clear from the above functional equation that $F$ has an analytic continuation to $\mathcal{D}_{l-1}=V^{-1}(\mathcal{D}_0)$ by choosing $\alpha\in \mathcal{D}_0.$
\\
By applying the slash operator to the above, we have
\[
F(\alpha,x)|_{2k+1}^{-\varepsilon}V^{-(i+1)}=F(\alpha,x)|_{2k+1}^{-\varepsilon}V^{-i}+\frac{e^{\pi i \alpha |x|^2}}{(i\alpha\pi)^k}|_{2k+1}^{-\varepsilon}\left(V^{-(i+1)}-T^{-1}V^{-i}\right).
\]
By the above functional equation, we analytically continue $F$ to $\mathcal{D}_{l-2}$, $\mathcal{D}_{l-3},\dots$ recursively. Hence, by induction,
\begin{equation}\label{-Vext}
F(\alpha,x)|_{2k+1}^{-\varepsilon}V^{-n}=F(\alpha,x)+\frac{e^{\pi i \alpha |x|^2}}{(i\alpha\pi)^k}|_{2k+1}^{-\varepsilon}\sum_{i=0}^{n-1}\left(V^{-(i+1)}-T^{-1}V^{-i}\right).
\end{equation}
Similarly, we extend $F$ to $\mathcal{D}_1$ by writing
\[
F(\tau,x)+\varepsilon \left(\frac{i}{\tau}\right)^{2k+1}F\left(\frac{-1}{\tau},x\right)=\frac{e^{\pi i \tau |x|^2}}{(i\tau\pi)^k}|_{2k+1}^{\varepsilon}{I+ S},
\]
\[
F\left(\frac{-1}{\tau},x\right)= F\left(\frac{-1}{\tau}+\lambda,x\right)
\]
for $\tau\in \mathcal{D}.$ By combining the above, we have
\[
F(\alpha,x)|_{2k+1}^{-\varepsilon}V=F(\alpha,x)+\frac{e^{\pi i \alpha |x|^2}}{(i\alpha\pi)^k}|_{2k+1}^{-\varepsilon}\left(T^{-1}V-I\right).
\]
By applying the slash operator to the above and induction, we have
\begin{equation}\label{Vext}
F(\alpha,x)|_{2k+1}^{-\varepsilon}V^n=F(\alpha,x)+\frac{e^{\pi i \alpha |x|^2}}{(i\alpha\pi)^k}|_{2k+1}^{-\varepsilon}\sum_{i=0}^{n-1}\left(T^{-1}V^{i+1}-V^{i}\right).
\end{equation}
Equations~\eqref{-Vext} and~\eqref{Vext} give two different extensions of $F$ on $\mathcal{D}_3$, obtained for $n=3$ and $n=-3$ respectively. We obtain
\[
F(\alpha,x)|_{2k+1}^{-\varepsilon}V^3-F(\alpha,x)|_{2k+1}^{-\varepsilon}V^{-3}=\frac{e^{\pi i \alpha |x|^2}}{(i\alpha\pi)^k}|_{2k+1}^{-\varepsilon}\sum_{i=0}^{5}\left(T^{-1}V^{i+1}-V^{i}\right)
\]
for $\alpha \in \mathcal{D}_0.$ Recall that
\[
r^{\varepsilon}_k(\gamma,\tau;x):=\frac{e^{i\pi \tau |x|^2}}{(i\pi\tau)^k}|^{-\varepsilon}_{2k+1} (T^{-1}-I)(1+V+\dots+V^{l-1})\gamma.
\]
By the above, we may consider
$r^{\varepsilon}_k(\gamma,\tau;x)$ as the obstruction for the analytic continuation of $F_k^{\varepsilon}(\tau,x)$.
\\
Finally, we give a proof of Theorem~\ref{mainthm2} after two auxiliary lemmas. Recall that $u=|x|^2.$
\begin{lemma}\label{difslsh}
Let $\frac{p(\tau)}{q(\tau)}$ be a rational function.
We have
\[
\frac{d^l}{du^l} \left(\left(\frac{p(\tau)}{q(\tau)}e^{i\pi \tau |x|^2} \right)|^{-\varepsilon}_{2k+1} \gamma \right)= \left(\frac{d^l}{du^l} \left(\frac{p(\tau)}{q(\tau)}e^{i\pi \tau |x|^2} \right) \right)|^{-\varepsilon}_{2k+1} \gamma
\]
for every $\gamma\in PSL_2(\mathbf{R})$ and $l\geq 0.$
\end{lemma}
\begin{proof}
We write
\(
\gamma(\tau)
\) for the action of $\gamma$ on $\tau\in\mathbb{H}.$
We have
\[
\left(\frac{p(\tau)}{q(\tau)}e^{i\pi \tau |x|^2} \right)|^{-\varepsilon}_{2k+1} \gamma=j^{-\varepsilon}_{2k+1}(\tau,\gamma) \frac{p(\gamma(\tau))}{q(\gamma(\tau))}e^{i\pi \gamma(\tau) |x|^2}.
\]
Then
\[
\begin{split}
\frac{d^l}{du^l} \left(\left(\frac{p(\tau)}{q(\tau)}e^{i\pi \tau |x|^2} \right)|^{-\varepsilon}_{2k+1} \gamma \right)&=j^{-\varepsilon}_{2k+1}(\tau,\gamma) \frac{p(\gamma(\tau))}{q(\gamma(\tau))} (i\pi \gamma(\tau))^l e^{i\pi \gamma(\tau) |x|^2}
\\
&=\left((i\pi \tau)^l \frac{p(\tau)}{q(\tau)}e^{i\pi \tau |x|^2} \right)|^{-\varepsilon}_{2k+1} \gamma
\\
&=\left(\frac{d^l}{du^l} \left(\frac{p(\tau)}{q(\tau)}e^{i\pi \tau |x|^2} \right) \right)|^{-\varepsilon}_{2k+1} \gamma.
\end{split}
\]
\end{proof}
\begin{lemma}\label{fourr}
Let $\mathcal{F}(f)$ be the Fourier transform with respect to $x\in\mathbf{R}^2.$
We have
\[
\mathcal{F}\left(\frac{e^{i\pi \tau |x|^2}}{(i\pi\tau)^k}|^{-\varepsilon}_{2k+1} \gamma\right) (\xi) =-\varepsilon \frac{e^{i\pi \tau |\xi|^2}}{(i\pi\tau)^k} |^{-\varepsilon}_{2k+1} S\gamma
\]
\end{lemma}
\begin{proof}
We have
\[
\begin{split}
\mathcal{F}\left(\frac{e^{i\pi \tau |x|^2}}{(i\pi\tau)^k}|^{-\varepsilon}_{2k+1} \gamma \right) (\xi)&=j^{-\varepsilon}_{2k+1}(\tau,\gamma) \frac{1}{(i\pi\gamma(\tau))^k}\mathcal{F} \left(e^{i\pi \gamma(\tau) |x|^2}\right) (\xi)
\\
&=j^{-\varepsilon}_{2k+1}(\tau,\gamma)\frac{1}{(i\pi\gamma(\tau))^k}\frac{i}{\gamma(\tau)} e^{i\pi \frac{-1}{\gamma(\tau)} |\xi|^2}.
\end{split}
\]
We have
\[
\begin{split}
\frac{e^{i\pi \tau |\xi|^2}}{(i\pi\tau)^k} |^{-\varepsilon}_{2k+1} S\gamma&=-\varepsilon \left(\frac{i}{\tau} \right)^{2k+1}\frac{e^{i\pi \frac{-1}{\tau} |\xi|^2}}{(i\pi\frac{-1}{\tau})^k} |^{-\varepsilon}_{2k+1} \gamma
\\
&= -\varepsilon j^{-\varepsilon}_{2k+1}(\tau,\gamma) \frac{1}{(i\pi\gamma(\tau))^k} \frac{i}{\gamma(\tau)} e^{i\pi \frac{-1}{\gamma(\tau)} |\xi|^2}.
\end{split}
\]
The lemma follows from the above identities.
\end{proof}
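Lemma~\ref{fourr} ultimately rests on the classical one-dimensional Gaussian identity $\int_{\mathbf{R}} e^{i\pi\tau x^2-2\pi i x\xi}\,dx=\sqrt{\frac{i}{\tau}}\,e^{i\pi\frac{-1}{\tau}\xi^2}$; squaring it gives the factor $\frac{i}{\tau}$ appearing above for $x\in\mathbf{R}^2$. As a numerical sanity check (not part of the proof, and with arbitrarily chosen sample values of $\tau$ and $\xi$), one may compare a truncated quadrature of the left-hand side with the closed form:

```python
import cmath

def gaussian_ft_1d(tau, xi, X=6.0, N=100000):
    # Trapezoid rule for \int e^{i pi tau x^2 - 2 pi i x xi} dx over [-X, X];
    # the integrand decays like e^{-pi Im(tau) x^2}, so the tail is negligible.
    h = 2 * X / N
    s = 0j
    for k in range(N + 1):
        x = -X + k * h
        w = 0.5 if k in (0, N) else 1.0
        s += w * cmath.exp(1j * cmath.pi * tau * x * x - 2j * cmath.pi * x * xi)
    return s * h

tau, xi = 0.3 + 1.0j, 0.7          # sample point in the upper half-plane
numeric = gaussian_ft_1d(tau, xi)
closed = cmath.sqrt(1j / tau) * cmath.exp(-1j * cmath.pi * xi * xi / tau)
print(abs(numeric - closed) < 1e-5)  # True
```

The principal branch of the square root is the correct one here since $\Re(i/\tau)>0$ for $\tau$ in the upper half-plane.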
\begin{proof}[Proof of Theorem~\ref{mainthm2} ]
First we check that
\[
\frac{d^k}{du^k}r_k^{\varepsilon}\left(\gamma,\tau;\sqrt{\frac{2n}{\lambda}}\right)=0
\]
for every integer $n\geq 0.$ By Lemma~\ref{difslsh}, we have
\[
\frac{d^k}{du^k}r^{\varepsilon}_k(\gamma,\tau;x)=\left( \left(\frac{d^k}{du^k}\frac{e^{i\pi \tau |x|^2}}{(i\pi\tau)^k}\right)|^{-\varepsilon}_{2k+1} (T^{-1}-I) \right)|^{-\varepsilon}_{2k+1} (1+V+\dots+V^{l-1})\gamma.
\]
For the inside function of the right hand side, we have
\[
\left(\frac{d^k}{du^k}\frac{e^{i\pi \tau |x|^2}}{(i\pi\tau)^k}\right)|^{-\varepsilon}_{2k+1} (T^{-1}-I) =e^{i\pi \tau |x|^2}|^{-\varepsilon}_{2k+1} (T^{-1}-I).
\]
We have
\[
e^{i\pi \tau |x|^2}|^{-\varepsilon}_{2k+1} (T^{-1}-I)= e^{2\pi i \tau \frac{n}{\lambda}}|^{-\varepsilon}_{2k+1} (T^{-1}-I)=0
\]
for $|x|=\sqrt{\frac{2n}{\lambda}},$ since $e^{2\pi i \tau \frac{n}{\lambda}}$ is invariant under $\tau\mapsto\tau-\lambda.$ This completes the proof of the first part of Theorem~\ref{mainthm2}.
\\
Next we prove the other part. By Lemma~\ref{fourr}
\[
\mathcal{F}r^{\varepsilon}_k(\gamma,\tau;x)=-\varepsilon \frac{e^{i\pi \tau |x|^2}}{(i\pi\tau)^k}|^{-\varepsilon}_{2k+1} S(T^{-1}-I)(1+V+\dots+V^{l-1})\gamma.
\]
We note that
\[
S(T^{-1}-I)(1+V+\dots+V^{l-1})\gamma=\sum_{i=0}^{l-1}\left(V^{i}-T^{-1}V^{i+1}\right)\gamma=-(T^{-1}-I)(1+V+\dots+V^{l-1})\gamma.
\]
Therefore,
\[
\mathcal{F}r^{\varepsilon}_k(\gamma,\tau;x)=\varepsilon \frac{e^{i\pi \tau |x|^2}}{(i\pi\tau)^k}|^{-\varepsilon}_{2k+1} (T^{-1}-I)(1+V+\dots+V^{l-1})\gamma=\varepsilon r^{\varepsilon}_k(\gamma,\tau;x).
\]
This completes the proof of Theorem~\ref{mainthm2}.
\begin{figure}
\centering
\begin{tikzpicture}[scale=3]
\draw[-latex](\myxlow,0) -- (\myxhigh ,0);
\pgfmathsetmacro{\succofmyxlow}{\myxlow+1}
\foreach \x in {\myxlow,\succofmyxlow,...,\myxhigh}
{ \draw (\x,-0) -- (\x,-0.0) node[below,font=\tiny] {\x};
}
{ \draw (-0.866,-0.1)node[below,font=\tiny]{$-\frac{\sqrt{3}}{2}$}--(-0.866,0.5)node[left,font=\tiny]{$w_1$} (0.866,0.5)node[right,font=\tiny]{$w_2$} -- (0.866,-0.1) node[below,font=\tiny] {$\frac{\sqrt{3}}{2}$};
}
\foreach \y in {0.5,1}
{ \draw (0,\y) -- (-0.0,\y) node[left,font=\tiny] {\pgfmathprintnumber{\y}};
}
\draw[-latex](0,-0.0) -- (0,1.6);
\begin{scope}
\clip (\myxlow,0) rectangle (\myxhigh,1.2);
{ \draw[very thin, blue] (1,0) arc(0:180:1);
}
{ \draw[very thin, blue] (0,0) arc(0:180:0.57735);
}
{ \draw[very thin, blue] (1.1547,0) arc(0:180:0.57735);
}
\end{scope}
\begin{scope}
\begin{pgfonlayer}{background}
\clip (-0.866,0) rectangle (0.866,1.7);
\clip (1,1.7) -| (-1,0) arc (180:0:1) -- cycle;
\fill[gray,opacity=0.8] (-1,-1) rectangle (1,2);
\end{pgfonlayer}
\end{scope}
\end{tikzpicture}
\caption{Fundamental domain for the Hecke triangle $(2,6,\infty)$}\label{heckeff}
\end{figure}
\end{proof}
\section{Proof of the Conjecture of Cohn et al.}\label{zeros}
\subsection{Proof of Theorem~\ref{mainconj}}
In this section, we show that Theorem~\ref{strongthm} implies Theorem~\ref{mainconj}.
\begin{lemma}\label{Acons}
Let $\delta>0$ be any positive real number. There exists a periodic subset $\tilde{A}\subset\mathbf{Z}$ of density smaller than $\delta$ such that every integer of the form $n=x^2+xy+y^2$ belongs to $\tilde{A}.$
\end{lemma}
\begin{proof}
Suppose that
\(
n=x^2+xy+y^2
\)
and $\mathrm{ord}_p(n)=2k+1$, where $p$ is a prime number.
It is an elementary fact that then $p=3$ or $p\equiv 1 \mod 3$. Let
\[
L:=\prod_{\substack{p<M\\ p\equiv -1 \mod 3}} p^2.
\]
Let $B\subset \mathbf{Z}/L\mathbf{Z}$ be the subset of congruence classes mod $L$ that are congruent to some $n=x^2+xy+y^2$ mod $L.$ It follows that
\[
\frac{|B|}{L}=\prod_{\substack{p<M\\ p\equiv -1 \mod 3}}\frac{p^2-p+1}{p^2}=\prod_{\substack{p<M\\ p\equiv -1 \mod 3}}\left(1-\frac{1}{p}+\frac{1}{p^2} \right)=O(\log(M)^{-1/2}).
\]
This implies that the density of $B$ can be made as small as we wish by taking $M$ large enough. Let
\[
\tilde{A}=\{a\in \mathbf{Z}^+: \bar{a}\in B, \text{ where } a\equiv \bar{a} \mod L \}.
\]
This completes the proof of our lemma.
\end{proof}
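As a sanity check of the local factor $\frac{p^2-p+1}{p^2}$ used above, one can count by brute force the residues modulo $p^2$ attained by the form $x^2+xy+y^2$: for a prime $p\equiv -1 \mod 3$ these are the zero class together with the units, $p^2-p+1$ classes in total. The sample primes below are chosen only for illustration.

```python
def representable_mod(q):
    # All residues of x^2 + x*y + y^2 modulo q.
    return {(x * x + x * y + y * y) % q for x in range(q) for y in range(q)}

for p in (2, 5, 11):          # small primes congruent to -1 mod 3
    count = len(representable_mod(p * p))
    print(p, count, p * p - p + 1)   # the two counts agree
```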
Next, we show that Theorem~\ref{strongthm} implies Theorem~\ref{mainconj}.
\begin{proof}[Proof of Theorem~\ref{mainconj}] Let $L$ and $\tilde{A}$ be as in the proof of Lemma~\ref{Acons}.
Let
\[
A=\{n>100: n\in \tilde{A} \}.
\]
Since the integers of the form $n=x^2+xy+y^2$ have density zero among all integers, while $A$ has density $\frac{l}{L},$ there exist infinitely many $a_1, a_2, \dots $ such that $a_i\in A$ and $a_i\neq x^2+xy+y^2.$ By Theorem~\ref{strongthm}, there exists a radial Schwartz function $f_i$ such that $f_i$ and $\mathcal{F}(f_i)$ vanish to order $2$ on all integers $n=x^2+xy+y^2$ with $n>100.$ By linear algebra, there exists a finite linear combination
\[
f=\sum \alpha_i f_i
\]
such that $f$ and $\mathcal{F} f$ vanish on all integers $n=x^2+xy+y^2.$ Since the $f_i$ are linearly independent, $f\neq 0.$ It is clear that there are infinitely many linearly independent radial Schwartz functions $f$ as above. This completes the proof of Theorem~\ref{mainconj}.
\end{proof}
\subsection{Proof of Theorem~\ref{strongthm}}\label{reduc}
We briefly discuss our proof of Theorem~\ref{strongthm}. Recall that
\[
A:=\left\{a>100 \,|\, a\equiv a_i \mod L, \text{ for some }a_i \text{ where } 1\leq i\leq l\right\},
\]
where $\frac{l}{L}<0.001.$ Let
\[
r^{\varepsilon}(\tau;x):=e^{i\pi \tau |x|^2}|^{\varepsilon}_1 (T^{-1}-I)(1+V+\dots+V^5),
\]
and
\[
s^{\varepsilon}(\tau;x):=e^{i\pi \tau |x|^2}|^{\varepsilon}_1 (1+V+\dots+V^5).
\]
By Corollary~\ref{mainthm22}
\[
r^{\varepsilon}\left(\tau;\sqrt{\frac{2m}{\sqrt{3}}}\right)=0
\]
for every integer $m\geq0.$ By Lemma~\ref{difslsh}, we have
\[
\frac{d}{du}r^{\varepsilon}\left(\tau;\sqrt{\frac{2m}{\sqrt{3}}}\right)=c s^{\varepsilon}\left(\tau;\sqrt{\frac{2m}{\sqrt{3}}}\right),
\]
where $c=\pi i.$ Our idea is to average $r^{\varepsilon}(\tau;x)$ over $\tau$ with respect to a linear combination of probability measures supported on a compact region of the upper half-plane such that for every $m\in A-\{a\}$ the derivative at $\sqrt{\frac{2m}{\sqrt{3}}}$ vanishes. More precisely, let
\[
f(x)=\int r^{\varepsilon}(\tau;x) d\mu(\tau).
\]
Then
\[
f\left(\sqrt{\frac{2m}{\sqrt{3}}}\right)=0, \text{ and } \frac{d}{du}f\left(\sqrt{\frac{2m}{\sqrt{3}}}\right)=c\int s^{\varepsilon}(\tau;x) d\mu(\tau).
\]
We construct $\mu$ as the weak$^*$ limit of a sequence of measures $\{\mu_n\}$ such that
\[
\int s^{\varepsilon}\left(\tau;\sqrt{\frac{2}{\sqrt{3}}m}\right) d\mu_n(\tau)=0,
\]
and
\[
\int s^{\varepsilon}\left(\tau;\sqrt{\frac{2}{\sqrt{3}}a}\right) d\mu_n(\tau)= 1,
\]
where $m\in A-\{a\}$ and $0<m<n.$
The existence of a weak$^*$ limit is a consequence of the compactness of the space of probability measures on a compact metric space.
\subsection{Construction of $\mu_n$}
In this section, we construct $\mu_n.$ First, we define a map from the space of probability measures to a finite-dimensional vector space. Let $A_n\subset A$ be the subset of integers $m\in A$ with $0<m<n$. Let $H_n:=\mathbf{C}^{\#A_n}$, where each coordinate is indexed by some $m\in A_n$. Let
\[
z_t=\frac{\sqrt{3}}{2}+x+it,
\]
where $0<t<\frac{1}{2}.$ Then
\[
\Im(V^{\pm}z_t)=\frac{t}{(\frac{\sqrt{3}}{2}\pm x)^2+t^2}.
\]
We have
\[
\Im(V^{\pm}z_t)-\Im(z_t)=\frac{t}{(\frac{\sqrt{3}}{2}\pm x)^2+t^2}-t.
\]
We note that the maximum of the above function on the interval $0<t<\frac{1}{2}$, for small $|x|\leq\delta$, is attained at $t_0\approx 0.27$ with value $\approx 0.058.$ Let
\[
\mathbf{X}_{\delta}:= \left\{\sqrt{3}(\frac{1}{2}+x)+it_0: |x|<\delta\right\}.
\]
\begin{lemma}\label{apprlem}
Let $t_0=0.27$ and $\delta<0.01$. Then for every $\tau\in \mathbf{X}_{\delta}$ and every $1\leq i\leq 5,$ we have
\[
\Im(V^{i}\tau)-\Im(\tau)\geq 0.05.
\]
\end{lemma}
\begin{proof}
This is easy to check numerically.
\end{proof}
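For the record, here is the kind of computation we have in mind. We take $\lambda=\sqrt{3}$ and let $V$ act by the M\"obius transformation $z\mapsto \lambda-\frac{1}{z}$ (the action of $V=TS$ with $T\colon z\mapsto z+\lambda$ and $S\colon z\mapsto -\frac{1}{z}$); the grid of sample points near the centre of $\mathbf{X}_{\delta}$ is our choice, made only for illustration.

```python
LAM = 3 ** 0.5   # lambda = 2 cos(pi/6) for the Hecke triangle (2, 6, infinity)
T0 = 0.27        # the height t_0 of the lemma

def V(z):
    # Moebius action of V = TS: first S: z -> -1/z, then T: z -> z + lambda.
    return LAM - 1 / z

def min_gap(tau, l=6):
    # min over 1 <= i <= l-1 of Im(V^i tau) - Im(tau)
    z, gaps = tau, []
    for _ in range(l - 1):
        z = V(z)
        gaps.append(z.imag - tau.imag)
    return min(gaps)

# Sample tau on a small grid around the centre sqrt(3)/2 + i*t_0 of X_delta.
worst = min(min_gap(LAM / 2 + x / 1000.0 + T0 * 1j)
            for x in range(-5, 6))
print(worst >= 0.05)   # True: every V^i raises the imaginary part by >= 0.05
```

At the centre the smallest gap is about $0.058$, attained for $i=1$ and $i=5$, in agreement with the value quoted before the lemma.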
The construction of $\mu_n$ involves both a discrete and continuous averaging that we discuss next.
\subsubsection{Continuous averaging}
We parametrize the unit circle $S^1$ by $e^{2\pi i \theta}$, where $0\leq \theta<1.$
\begin{proposition}\label{discprop}
For $n\in \mathbf{Z}^+$ there exists a probability measure $\lambda_{n,\alpha}(\theta)d\theta$ on $S^1$ such that
\[
\int_{0}^1 e^{2\pi i m\theta} \lambda_{n,\alpha}(\theta)d\theta=\begin{cases} 1 &\text{ if } m=0,\\ \frac{1}{2} e^{ \pm i\alpha} &\text{ if } m=\pm n, \\ 0 &\text{ if } m\neq 0,\pm n. \end{cases}
\]
where $m\in \mathbf{Z}.$
\end{proposition}
\begin{proof}
Let
\[
\lambda_{n,\alpha}( \theta)d \theta=(1+\cos(2\pi n \theta-\alpha ))d\theta.
\]
It is clear that $\lambda_{n,\alpha}(\theta)\geq 0$ and $\int_{S^1} \lambda_{n,\alpha}( \theta)d \theta=1.$ The moment identities follow from the orthogonality of $e^{2\pi i m\theta}$ on the unit circle.
\end{proof}
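Since $\lambda_{n,\alpha}$ is a trigonometric polynomial, its moments can also be confirmed exactly by an equispaced Riemann sum, which is exact on trigonometric polynomials of degree less than the number of sample points. A short check, with sample values of $n$ and $\alpha$ chosen arbitrarily:

```python
import cmath, math

def moment(m, n, alpha, N=512):
    # (1/N) * sum over theta = j/N of e^{2 pi i m theta} * (1 + cos(2 pi n theta - alpha));
    # exact for these trigonometric polynomials since N > |m| + n.
    s = 0j
    for j in range(N):
        th = j / N
        s += cmath.exp(2 * math.pi * 1j * m * th) * (1 + math.cos(2 * math.pi * n * th - alpha))
    return s / N

n, alpha = 5, 0.7
print(abs(moment(0, n, alpha) - 1) < 1e-9,
      abs(moment(n, n, alpha) - cmath.exp(1j * alpha) / 2) < 1e-9,
      abs(moment(-n, n, alpha) - cmath.exp(-1j * alpha) / 2) < 1e-9,
      abs(moment(3, n, alpha)) < 1e-9)
```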
\subsubsection{Discrete averaging} Recall that
\[
A:=\{a>100| a\equiv a_k \mod L, \text{ for some }a_k \text{ where } 1\leq k\leq l\}.
\]
Let
\[
\mathbf{A}:=\begin{bmatrix} a_{j,k} \end{bmatrix}_{1\leq j,k\leq l} \in \mathbf{C}^{l\times l},
\]
where $a_{j,k}:=e^{2\pi i \frac{j}{L}a_k}.$
It follows from the Vandermonde determinant that $\mathbf{A}$ is invertible. Let
\[\mathbf{A}^{-1}:=\begin{bmatrix}\alpha_{k,j} \end{bmatrix}_{1\leq k,j\leq l}.\]
Then
\[
\sum_{j}\alpha_{k,j} e^{2\pi i a_s\frac{j}{L}} = \delta_{k,s}.
\]
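For a concrete sanity check, here is a small hypothetical instance ($L=7$ with residues $a_k\in\{1,2,4\}$, chosen only for illustration) that builds $\mathbf{A}$, inverts it by Gauss--Jordan elimination, and verifies $\sum_{j}\alpha_{k,j} e^{2\pi i a_s\frac{j}{L}} = \delta_{k,s}$:

```python
import cmath

# Hypothetical small instance: L = 7 and distinct residues a_k = 1, 2, 4 (so l = 3).
L, residues = 7, [1, 2, 4]
l = len(residues)

# A[j][k] = e^{2 pi i (j+1) a_k / L}, rows j+1 = 1..l, columns indexed by a_k.
A = [[cmath.exp(2 * cmath.pi * 1j * (j + 1) * a / L) for a in residues]
     for j in range(l)]

def inverse(M):
    # Gauss-Jordan inverse of a small complex matrix with partial pivoting.
    n = len(M)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

alpha = inverse(A)   # alpha[k][j] in the notation of the text

# Check: sum_j alpha_{k,j} e^{2 pi i a_s j / L} = delta_{k,s}.
for k in range(l):
    for s in range(l):
        v = sum(alpha[k][j] * cmath.exp(2 * cmath.pi * 1j * residues[s] * (j + 1) / L)
                for j in range(l))
        assert abs(v - (1 if k == s else 0)) < 1e-9
print("ok")   # prints "ok"
```

Invertibility holds here because the $e^{2\pi i a_k/L}$ are distinct, exactly as the Vandermonde argument requires.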
\subsubsection{Main result}
Let $\mathcal{P}$ be the space of probability measures on the disjoint union of $l$ unit intervals $[0,1]$. We parametrize elements $\mu\in \mathcal{P}$ by $\mu:=(\Lambda(1)\lambda_1,\dots,\Lambda(l)\lambda_l)$, where the $\lambda_i$ are probability measures on the unit interval $[0,1]$ and the $\Lambda(i)\geq 0$ are non-negative real numbers with $\sum \Lambda(i)=1.$ For an integer $ 1\leq s \leq l,$ let
$I(s)\subset \mathbf{X}_{\frac{l}{L}} $ be:
\[
I(s):=\left\{\sqrt{3}\left(\frac{1}{2}+\frac{x}{L}+\frac{s-1}{L}\right)+it_0: 0\leq x\leq 1 \right\}.
\]
Given $\mu:=(\Lambda(1)\lambda_1,\dots,\Lambda(l)\lambda_l)\in \mathcal{P},$ we define the following measure on the upper half-plane supported on $ I(s)$
\[
d\eta(s,\mu,\tau) := L\sum_{k} \Lambda(k) \alpha_{k,s} e^{-2 \pi i a_k(\frac{1}{2} +\frac{x}{L})}\lambda_k(x)dx.
\]
We define
\[
d\eta(\mu,\tau) =\sum_{s=1}^l d\eta(s,\mu,\tau),
\]
and $\psi_n:\mathcal{P} \to H_n$ as follows
\[
\psi_n(\mu):=\begin{bmatrix} \int s^{\varepsilon}\left(\tau;\sqrt{\frac{2}{\sqrt{3}}m}\right) d\eta(\mu,\tau) \end{bmatrix}_{m\in A_n}.
\]
Let $(\lambda\times \Lambda)\in \mathcal{P}$ where
$\lambda(\theta)d\theta$ is a continuous probability measure on the unit circle, and $\Lambda$ a probability measure on the finite set $\{k:1\leq k\leq l\}.$
Let $\lambda_{n,\alpha}$ be the probability measure constructed in Proposition~\ref{discprop} and $\Lambda_k$ be the discrete measure with mass $1$ at $k$ and $0$ at other points.
\begin{proposition}\label{mainest}
We have
\[
\int s^{\varepsilon}\left(\tau;\sqrt{\frac{2}{\sqrt{3}}(Lm+a_j)}\right) d\eta(\lambda_{n,\alpha}\times\Lambda_k,\tau) = \frac{e^{i \alpha}}{2} e^{-\pi t_0 \frac{2}{\sqrt{3}}(Lm+a_j)}\left(\delta_{j,k}\delta_{n,m}+O(e^{-0.05 \pi \frac{2}{\sqrt{3}}(Lm+a_j)}) \right).
\]
\end{proposition}
\begin{proof}
Suppose that $\tau \in \mathbf{X}_{\delta}.$ By Lemma~\ref{apprlem}, we have
\[\Im(V^{i}\tau)-\Im(\tau)\geq 0.05. \]
We have
\[
\begin{split}
s^{\varepsilon}\left(\tau;\sqrt{\frac{2}{\sqrt{3}}(Lm+a_j)}\right)&=e^{i\pi \tau \frac{2}{\sqrt{3}}(Lm+a_j)}|^{\varepsilon}_1 (1+V+\dots+V^5)
\\
&=e^{i\pi \tau \frac{2}{\sqrt{3}}(Lm+a_j)} \left(1+ O(e^{-0.05 \pi \frac{2}{\sqrt{3}}(Lm+a_j)}) \right).
\end{split}
\]
By the above estimate, we have
\[
\begin{split}
\int s^{\varepsilon}\left(\tau;\sqrt{\frac{2}{\sqrt{3}}(Lm+a_j)}\right)d\eta(\lambda_{n,\alpha}\times\Lambda_k,\tau) &=\int e^{i\pi \tau \frac{2}{\sqrt{3}}(Lm+a_j)}d\eta(\lambda_{n,\alpha}\times\Lambda_k,\tau)
\\&+O\left(e^{-(t_0+0.05) \pi \frac{2}{\sqrt{3}}(Lm+a_j)}\right).
\end{split}
\]
Next, we estimate the main term of the right hand side of the above identity. We have
\[
\begin{split}
\int e^{i\pi \tau \frac{2}{\sqrt{3}}(Lm+a_j)}&d\eta(\lambda_{n,\alpha}\times\Lambda_k,\tau)
\\
&=Le^{-t_0 \pi \frac{2}{\sqrt{3}}(Lm+a_j)} \sum_{s=0}^{l-1}\int_{0}^{\frac{1}{L}} e^{2\pi i (\frac{1}{2}+\frac{s}{L}+x) (Lm+a_j)} \alpha_{k,s+1} e^{-2 \pi i a_k (\frac{1}{2}+x)}\lambda_{n,\alpha}(Lx)dx
\\
&=e^{-t_0 \pi \frac{2}{\sqrt{3}}(Lm+a_j)} \sum_{s=0}^{l-1} e^{2\pi i \frac{sa_j}{L}}\alpha_{k,s+1}
\\&\quad\times L\int_{0}^{\frac{1}{L}} e^{2\pi i (\frac{1}{2}+x) (Lm+a_j)} e^{-2 \pi i a_k (\frac{1}{2}+x)}\lambda_{n,\alpha}(Lx)dx
\\
&=e^{-t_0 \pi \frac{2}{\sqrt{3}}(Lm+a_j)} \delta_{j,k}\int_{0}^{1} e^{2\pi i m\theta} \lambda_{n,\alpha}(\theta)d\theta= \frac{e^{i \alpha}}{2} e^{-t_0 \pi \frac{2}{\sqrt{3}}(Lm+a_j)} \delta_{j,k}\delta_{m,n}.
\end{split}
\]
where $\theta=Lx.$ This proves our proposition.
\end{proof}
We state the main proposition of this section. Suppose that $a\leq n$, and define $\vec{a}_n \in H_n$ by
\[
\vec{a}_n:=e^{-\pi (t_0+0.05 )\frac{2}{\sqrt{3}}a} [\delta_{m,a}]_{m\in A_n}.
\]
\begin{proposition}\label{mainpropp}
There exists $\mu_n\in \mathcal{P}$ such that $\psi_n(\mu_n)=\vec{a}_n.$
\end{proposition}
\begin{proof}
We consider $H_n$ as a real vector space of dimension $2\#A_n$ (each complex number has two real coordinates $z=(z_1,z_2)$, where $z_1=\Re(z)$ and $z_2=\Im(z)$).
We note that $\psi_n(\mathcal{P})\subset H_n$ is a convex subset. Suppose, for contradiction, that $\vec{a}_n\notin \psi_n(\mathcal{P}).$ Then
there exists a hyperplane that separates $\psi_n(\mathcal{P})$ from $\vec{a}_n$. In other words, there exists a unit vector $u\in H_n$ such that
\[
\left< u,\psi(\mu)- \vec{a}_n \right> >0
\]
for every $\mu\in \mathcal{P}.$
Suppose that $u=[(u_{(l,1)},u_{(l,2)})]_{l\in A_n},$ and
\begin{equation} \label{maxass}e^{-\pi t_0 \frac{2}{\sqrt{3}}b} \sqrt{u_{(b,1)}^2 + u_{(b,2)}^2}=\max\left(e^{-\pi t_0 \frac{2}{\sqrt{3}}l} \sqrt{u_{(l,1)}^2 + u_{(l,2)}^2}\right)_{l\in A_n}.\end{equation}
Suppose that
\[
b=nL+a_k.
\]
Let
\begin{equation}\label{defalph}
e^{i \alpha}=-\frac{u_{(b,1)}+iu_{(b,2)}}{\sqrt{u_{(b,1)}^2+u_{(b,2)}^2}},
\end{equation}
and
\[
\mu_n:=\lambda_{n,\alpha}\times\Lambda_k.
\]
By Proposition~\ref{mainest}, for $l\neq b $ we have
\begin{equation}\label{inq1}
|(\psi(\mu_n)-\vec{a}_n) _{(l,j)}| \leq e^{-\pi (t_0+0.05) \frac{2}{\sqrt{3}}l}.
\end{equation}
By \eqref{defalph} and Proposition~\ref{mainest}, we have
\begin{equation}\label{inq2}
(\psi(\mu_n)-\vec{a}_n)_{(b,1)}u_{(b,1)}+(\psi(\mu_n)-\vec{a}_n)_{(b,2)}u_{(b,2)} =-e^{-\pi t_0 \frac{2}{\sqrt{3}}b} \sqrt{u_{(b,1)}^2 + u_{(b,2)}^2}+O\left(e^{-\pi (t_0+0.05) \frac{2}{\sqrt{3}}b}\right).
\end{equation}
We have
\[
\begin{split}
\left< u,\psi(\mu_n)-\vec{a}_n \right>&= (\psi(\mu_n)-\vec{a}_n)_{(b,1)}u_{(b,1)}+(\psi(\mu_n)-\vec{a}_n)_{(b,2)}u_{(b,2)}
\\
&+\sum_{j,l\neq b} (\psi(\mu_n)-\vec{a}_n)_{(l,j)} u_{(l,j)}.
\end{split}
\]
By inequalities~\eqref{inq1} and \eqref{maxass}
\[
\begin{split}
\left|\sum_{j,l\neq b} (\psi(\mu_n)-\vec{a}_n)_{(l,j)} u_{(l,j)}\right| &\leq \sum_{j,l\neq b}e^{-\pi (t_0+0.05) \frac{2}{\sqrt{3}}l} |u_{(l,j)}|
\\
&\leq e^{-\pi t_0 \frac{2}{\sqrt{3}}b} \sqrt{u_{(b,1)}^2 + u_{(b,2)}^2} \sum_{j,l\neq b}e^{-\pi (0.05) \frac{2}{\sqrt{3}}l}.
\end{split}
\]
We note that $l>100$, hence
\[
\sum_{j,l\neq b}e^{-\pi (0.05) \frac{2}{\sqrt{3}}l} \leq e^{-\pi (0.05) \frac{2}{\sqrt{3}}100} 5.52.
\]
Combining \eqref{inq2} with the preceding bound, we see that the probability measure $\mu_n\in \mathcal{P}$ satisfies
\[
\sum_{j,l\neq b}| (\psi(\mu_n)-\vec{a}_n)_{(l,j)} u_{(l,j)}|< | (\psi(\mu_n)-\vec{a}_n)_{(b,1)}u_{(b,1)}+ (\psi(\mu_n)-\vec{a}_n)_{(b,2)}u_{(b,2)}|,
\]
and
\[
(\psi(\mu_n)-\vec{a}_n)_{(b,1)}u_{(b,1)}+ (\psi(\mu_n)-\vec{a}_n)_{(b,2)}u_{(b,2)}<0.
\]
This implies that
\[
\left< u, \psi(\mu_n)-\vec{a}_n \right> <0,
\]
which is a contradiction.
\end{proof}
Finally, we give a proof of Theorem~\ref{strongthm}.
\begin{proof}[Proof of Theorem~\ref{strongthm}]
Recall that $\mathcal{P}$ is the space of probability measures on the disjoint union of $l$ unit intervals $[0,1]$. By Proposition~\ref{mainpropp}, for every $n>a$ there exists a probability measure
$\mu_n\in \mathcal{P}$ such that
\[
\int s^{\varepsilon}\left(\tau;\sqrt{\frac{2}{\sqrt{3}}m}\right) d\eta(\mu_n,\tau) =e^{-\pi (t_0+0.05 )\frac{2}{\sqrt{3}}a} \delta_{m,a},
\]
where $m\in A_n.$
Since $\mathcal{P}$ is compact with respect to the weak$^*$ topology, there exists a measure $\mu$ that is a weak$^*$ limit of a subsequence of $\mu_{n}.$ It is clear that
\[
\int s^{\varepsilon}\left(\tau;\sqrt{\frac{2}{\sqrt{3}}m}\right) d\eta(\mu,\tau) =e^{-\pi (t_0+0.05 )\frac{2}{\sqrt{3}}a} \delta_{m,a}
\]
for every $m\in A.$ We define
\[
f(x):=e^{\pi (t_0+0.05 )\frac{2}{\sqrt{3}}a} \int r^{\varepsilon}(\tau;x)d\eta(\mu,\tau).
\]
It follows from the reduction discussed in Section~\ref{reduc} that $f$ satisfies the properties of Theorem~\ref{strongthm}.
\end{proof}
\bibliographystyle{alpha}
\section{Introduction}
Interest in fractional differential equations has grown significantly in recent years, driven by
many effective applications of fractional calculus in various
branches of science and engineering
\cite{Nakh:03,Old,Podlub:99,Hilfer:00,Kilbas:06,Uchaikin:08}. For
instance, the language of fractional derivatives is indispensable
for describing physical transport processes which, as is well known, lead to diffusion
equations of fractional order \cite{Nigma,Chuk}.
Let us consider the time fractional diffusion equation with variable
coefficients
\begin{equation}\label{ur01}
\partial_{0t}^{\alpha}u(x,t)=\mathcal{L}u(x,t)+f(x,t) ,\,\, 0<x<l,\,\,
0<t\leq T,
\end{equation}
\begin{equation}
u(0,t)=0,\quad u(l,t)=0,\quad 0\leq t\leq T, \quad
u(x,0)=u_0(x),\quad 0\leq x\leq l,\label{ur02}
\end{equation}
where
\begin{equation}
\partial_{0t}^{\alpha}u(x,t)=\frac{1}{\Gamma(1-\alpha)}\int\limits_{0}^{t}\frac{\partial
u(x,\eta)}{\partial\eta}(t-\eta)^{-\alpha}d\eta, \quad 0<\alpha<1
\label{ur02.01}
\end{equation}
is the Caputo derivative of order $\alpha$,
$$
\mathcal{L}u(x,t)=\frac{\partial }{\partial
x}\left(k(x,t)\frac{\partial u}{\partial x}\right)-q(x,t)u,
$$
$k(x,t)\geq c_1>0$, $q(x,t)\geq 0$ and $f(x,t)$ are given functions.
The time fractional diffusion equation is a linear integro-differential equation. In many cases its solution cannot be found
in an analytical form, so numerical methods must be applied.
However, in contrast to the classical case, a numerical approximation of a time
fractional diffusion equation on a given time layer requires information about all
the previous time layers. For this reason, algorithms for solving the
time fractional diffusion equations are rather labour-consuming even in the one-dimensional case,
and their complexity grows significantly for two- and three-dimensional
problems. In this respect,
constructing stable difference schemes of higher order of
approximation is a major task.
A common difference approximation of the fractional
derivative (\ref{ur02.01}) is the so-called $L1$ method
\cite{Old, Sun2}, which is specified in the following way
\begin{equation}
\partial_{0t_{j+1}}^{\alpha}u(x,t)=\frac{1}{\Gamma(1-\alpha)}\sum\limits_{s=0}^{j}\frac{u(x,t_{s+1})-u(x,t_{s})}{t_{s+1}-t_{s}}\int\limits_{t_{s}}^{t_{s+1}}\frac{d\eta}{(t_{j+1}-\eta)^{\alpha}}+r^{j+1},
\label{ur02.02}
\end{equation}
where $0 = t_0 < t_1 < \ldots < t_{j+1}$, and $r^{j+1}$ is the local truncation
error. In the case of the uniform grid, $\tau=t_{s+1}-t_s$, for all
$s=0,1,\ldots, j+1$, it was proved that
$r^{j+1}=\mathcal{O}(\tau^{2-\alpha})$ \cite{Sun2,Lin,Alikh_arxiv3}.
The $L1$ method has been commonly used to solve the fractional
differential equations with the Caputo derivatives
\cite{Sun2,Lin,Alikh_arxiv3,ShkhTau:06,Liu:10,Alikh:12}.
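On a uniform grid the $L1$ approximation (\ref{ur02.02}) reduces to $\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}\sum_{s=0}^{j}a_{j-s}^{(\alpha)}\left(u(t_{s+1})-u(t_s)\right)$ with $a_l^{(\alpha)}=(l+1)^{1-\alpha}-l^{1-\alpha}$. The following minimal sketch (function name is ours, not from the cited works) illustrates this; the test function $u(t)=t^2$ has the exact Caputo derivative $2t^{2-\alpha}/\Gamma(3-\alpha)$.

```python
import math

def l1_caputo(u, tau, alpha):
    """L1 approximation of the Caputo derivative of order alpha at t_{j+1}.

    u = [u(t_0), ..., u(t_{j+1})] are samples on the uniform grid t_s = s*tau;
    on each [t_s, t_{s+1}] the derivative u'(eta) is replaced by the
    difference quotient (u_{s+1} - u_s)/tau and the kernel (t_{j+1}-eta)^{-alpha}
    is integrated exactly.
    """
    j = len(u) - 2
    acc = 0.0
    for s in range(j + 1):
        # a_l = (l+1)^{1-alpha} - l^{1-alpha} with l = j - s
        a = (j - s + 1) ** (1 - alpha) - (j - s) ** (1 - alpha)
        acc += a * (u[s + 1] - u[s])
    return tau ** (-alpha) / math.gamma(2 - alpha) * acc
```

Halving $\tau$ should reduce the error by roughly a factor $2^{2-\alpha}$, in line with $r^{j+1}=\mathcal{O}(\tau^{2-\alpha})$.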
The main idea of the traditional $L1$ formula for approximating the Caputo fractional derivative $\partial_{0t}^\alpha f(t)$ of the function $f(t)$ is to replace $f(t)$ under the integral sign by its piecewise linear interpolating polynomial (see \cite{Old, Sun2}). A simple technique for improving the accuracy of the $L1$ formula is to use piecewise high-degree interpolating polynomials instead of the linear interpolating polynomial. In general, the numerical formulae obtained in this way improve the accuracy of the $L1$ formula from the order $2-\alpha$ to the order $r + 1-\alpha$, where $r \geq 2$ is the degree of the interpolating polynomial. When such formulae are applied to solve time-fractional PDEs, a key issue is the stability analysis of the corresponding methods for all $\alpha\in(0,1)$.
In \cite{Sun6} a new difference analog of the Caputo fractional
derivative with the order of approximation
$\mathcal{O}(\tau^{3-\alpha})$, called the $L1-2$ formula, was
derived. Based on this formula, computations with difference
schemes for the time-fractional sub-diffusion equations in bounded
and unbounded spatial domains and for the fractional ODEs were performed.
In \cite{Wang2019} the Caputo time-fractional derivative is discretized by a $(3-\alpha)$th-order numerical formula (called the $L2$ formula in this paper), which is constructed using piecewise quadratic interpolating polynomials. By developing a technique of discrete energy analysis, a full theoretical analysis of the stability and convergence of the method is carried out for all $\alpha\in(0,1)$.
Using piecewise quadratic interpolating polynomials, a numerical formula (called the $L2-1_\sigma$ formula) approximating the Caputo fractional derivative $\partial_{0t}^\alpha f(t)$ at a special point with accuracy of order $3-\alpha$ was derived in \cite{AlikhanovJCP}. Then some finite difference methods based on the $L2-1_\sigma$ formula were proposed for solving the time-fractional diffusion equation. In \cite{Gao, Ruilian} the $L2-1_\sigma$ formula was generalized and applied to the multi-term, distributed-order and variable-order time-fractional diffusion equations.
Difference schemes of higher order of approximation, such as
compact difference schemes \cite{Liu:10,Sun3,Sun4,Sun5,Wang2019} and
spectral methods \cite{Lin,Lin2,Xu}, were used to enhance the
spatial accuracy for fractional diffusion equations.
By means of the energy inequality method, a priori estimates for the
solution of the Dirichlet, Robin and non-local boundary value problems for the
diffusion-wave equation with the Caputo fractional derivative have been
found in \cite{Alikh:12,Alikh:10, Alikh_nonloc}.
In the present paper we construct an $L2$-type difference analog of the Caputo fractional
derivative with the order of approximation
$\mathcal{O}(\tau^{3-\alpha})$ for each $\alpha\in(0,1)$.
Properties of the resulting difference operator are
investigated. Difference schemes of the second and fourth order of
approximation in space and the $(3-\alpha)$th order in time for the time
fractional diffusion equation with variable coefficients are
constructed. By means of the method of energy inequalities, the stability
and convergence of these schemes are
proved. Numerical computations of some test problems confirming the
reliability of the obtained results are presented. The method can be
readily extended to other time fractional partial differential
equations and other boundary conditions.
\section{The L2 type fractional numerical differentiation formula}
In this section we study a difference analog of the Caputo fractional
derivative with the approximation order $\mathcal
O(\tau^{3-\alpha})$ and explore its fundamental features.
We consider the uniform grid $\bar \omega_{\tau}=\{t_j=j\tau,\,
j=0,1,\ldots,M; \,T=\tau M\}$. For the Caputo fractional derivative
of the order $\alpha$, $0<\alpha<1$, of the function $u(t)\in
\mathcal{C}^{3}[0,T]$ at the fixed point $t_{j+1}$, $j\in \{1, 2,
\ldots, M-1\}$ the following equalities are valid
$$
\partial_{0t_{j+1}}^{\alpha}u(t)=\frac{1}{\Gamma(1-\alpha)}\int\limits_{0}^{t_{j+1}}\frac{u'(\eta)d\eta}{(t_{j+1}-\eta)^{\alpha}}
$$
\begin{equation}\label{ur0.99}
=
\frac{1}{\Gamma(1-\alpha)}\int\limits_{0}^{t_{2}}\frac{u'(\eta)d\eta}{(t_{j+1}-\eta)^{\alpha}}+
\frac{1}{\Gamma(1-\alpha)}\sum\limits_{s=2}^{j}\int\limits_{t_{s}}^{t_{s+1}}\frac{u'(\eta)d\eta}{(t_{j+1}-\eta)^{\alpha}}.
\end{equation}
On each interval $[t_{s-1},t_{s+1}]$ ($1\leq s\leq j$), applying the
quadratic interpolation ${\Pi}_{2,s}u(t)$ of $u(t)$ that uses three
points $(t_{s-1},u(t_{s-1}))$, $(t_{s},u(t_{s}))$ and
$(t_{s+1},u(t_{s+1}))$, we arrive at
$$
{\Pi}_{2,s}u(t)=u(t_{s-1})\frac{(t-t_{s})(t-t_{s+1})}{2\tau^2}
$$
$$
-u(t_{s})\frac{(t-t_{s-1})(t-t_{s+1})}{\tau^2}+u(t_{s+1})\frac{(t-t_{s-1})(t-t_{s})}{2\tau^2},
$$
\begin{equation}\label{ur0.991}
\left({\Pi}_{2,s}u(t)\right)'=u_{t,s}+u_{\bar tt,s}(t-t_{s+1/2}),
\end{equation}
and
\begin{equation}\label{ur0.992}
u(t)-{\Pi}_{2,s}u(t)=\frac{u'''(\bar\xi_s)}{6}(t-t_{s-1})(t-t_{s})(t-t_{s+1}),
\end{equation}
where $t\in[t_{s-1},t_{s+1}]$, $\bar\xi_s\in(t_{s-1},t_{s+1})$,
$t_{s\pm1/2}=t_s\pm0.5\tau$, $u_{t,s}=(u(t_{s+1})-u(t_s))/\tau$,
$u_{\bar t,s}=(u(t_{s})-u(t_{s-1}))/\tau$, and $u_{\bar tt,s}=(u_{t,s}-u_{\bar t,s})/\tau$.
In (\ref{ur0.99}), we make use of ${\Pi}_{2,1}u(t)$ to approximate $u(t)$ on
the interval $[0,t_2]$, and of ${\Pi}_{2,s}u(t)$ on $[t_s,t_{s+1}]$ ($2\leq s\leq j$). In view of the equality
\begin{equation}\label{ur0.993}
\int\limits_{t_{s-1}}^{t_s}(\eta-t_{s-1/2})(t_{j+1}-\eta)^{-\alpha}d\eta=\frac{\tau^{2-\alpha}}{1-\alpha}b_{j-s+1}^{(\alpha)},
\quad 1\leq s\leq j
\end{equation}
with
$$
b_{l}^{(\alpha)}=\frac{1}{2-\alpha}\left[(l+1)^{2-\alpha}-l^{2-\alpha}\right]-\frac{1}{2}\left[(l+1)^{1-\alpha}+l^{1-\alpha}\right],\quad
l\geq 0,
$$
from (\ref{ur0.99}) and (\ref{ur0.991}) we get the difference
analog of the Caputo fractional derivative of order $\alpha$
($0<\alpha<1$) for the function $u(t)$, at the points $t_{j+1}$
($j=1, 2, \ldots$), in this form:
$$
\partial_{0t_{j+1}}^{\alpha}u(t)=
\frac{1}{\Gamma(1-\alpha)}\int\limits_{0}^{t_{2}}\frac{u'(\eta)d\eta}{(t_{j+1}-\eta)^{\alpha}}+
\frac{1}{\Gamma(1-\alpha)}\sum\limits_{s=2}^{j}\int\limits_{t_{s}}^{t_{s+1}}\frac{u'(\eta)d\eta}{(t_{j+1}-\eta)^{\alpha}}
$$
$$
\approx
\frac{1}{\Gamma(1-\alpha)}\int\limits_{0}^{t_{2}}\frac{\left({\Pi}_{2,1}u(\eta)\right)'d\eta}{(t_{j+1}-\eta)^{\alpha}}+
\frac{1}{\Gamma(1-\alpha)}\sum\limits_{s=2}^{j}\int\limits_{t_{s}}^{t_{s+1}}\frac{\left({\Pi}_{2,s}u(\eta)\right)'d\eta}{(t_{j+1}-\eta)^{\alpha}}
$$
$$
=\frac{1}{\Gamma(1-\alpha)}\int\limits_{0}^{t_{2}}\frac{u_{t,1}+u_{\bar
tt,1}(\eta-t_{3/2})}{(t_{j+1}-\eta)^{\alpha}}d\eta
$$
$$
+\frac{1}{\Gamma(1-\alpha)}\sum\limits_{s=2}^{j}\int\limits_{t_{s}}^{t_{s+1}}\frac{u_{t,s}+u_{\bar
tt,s}(\eta-t_{s+1/2})}{(t_{j+1}-\eta)^{\alpha}}d\eta
$$
$$
=\frac{\tau^{1-\alpha}}{\Gamma{(2-\alpha)}}\left((a_{j}^{(\alpha)}-b_{j}^{(\alpha)}-b_{j-1}^{(\alpha)})u_{t,0}+(a_{j-1}^{(\alpha)}+b_{j-1}^{(\alpha)}+b_{j}^{(\alpha)})u_{t,1}\right)
$$
$$
+\frac{\tau^{1-\alpha}}{\Gamma{(2-\alpha)}}\sum\limits_{s=2}^{j}(-b_{j-s}^{(\alpha)}u_{t,s-1}+(a_{j-s}^{(\alpha)}+b_{j-s}^{(\alpha)})u_{
t,s})
$$
$$
=\frac{\tau^{1-\alpha}}{\Gamma{(2-\alpha)}}\left((a_{j}^{(\alpha)}-b_{j}^{(\alpha)}-b_{j-1}^{(\alpha)})u_{t,0}+(a_{j-1}^{(\alpha)}+b_{j-1}^{(\alpha)}+b_{j}^{(\alpha)}-b_{j-2}^{(\alpha)})u_{t,1}\right)
$$
$$
+\frac{\tau^{1-\alpha}}{\Gamma{(2-\alpha)}}\left(\sum\limits_{s=2}^{j-1}(-b_{j-s-1}^{(\alpha)}+a_{j-s}^{(\alpha)}+b_{j-s}^{(\alpha)})u_{
t,s}+(a_{0}^{(\alpha)}+b_{0}^{(\alpha)})u_{ t,j}\right)
$$
\begin{equation}
=\frac{\tau^{1-\alpha}}{\Gamma{(2-\alpha)}}\sum\limits_{s=0}^{j}c_{j-s}^{(\alpha)}u_{t,s}=\Delta_{0t_{j+1}}^\alpha
u, \label{ur0.994}
\end{equation}
where
$$
a_{l}^{(\alpha)}=(l+1)^{1-\alpha}-l^{1-\alpha}, \quad l\geq 0;
$$
for $j=1$
\begin{equation}
c_{s}^{(\alpha)}=
\begin{cases}
a_{0}^{(\alpha)}+b_{0}^{(\alpha)}+b_{1}^{(\alpha)}, \quad\quad s=0,\\
a_{1}^{(\alpha)}-b_{1}^{(\alpha)}-b_{0}^{(\alpha)}, \quad\quad s=1,
\label{ur0.995.1}
\end{cases}
\end{equation}
for $j=2$
\begin{equation}
c_{s}^{(\alpha)}=
\begin{cases}
a_{0}^{(\alpha)}+b_{0}^{(\alpha)}, \quad\quad\quad\quad\quad\quad\,\,\,\, s=0,\\
a_{1}^{(\alpha)}+b_{1}^{(\alpha)}+b_{2}^{(\alpha)}-b_{0}^{(\alpha)},
\quad\, s=1,\\
a_{2}^{(\alpha)}-b_{2}^{(\alpha)}-b_{1}^{(\alpha)},
\quad\quad\quad\quad s=2,\label{ur0.995.2}
\end{cases}
\end{equation}
and for $j\geq 3$,
\begin{equation}
c_{s}^{(\alpha)}=
\begin{cases}
a_{0}^{(\alpha)}+b_{0}^{(\alpha)}, \quad\quad\quad\quad\quad\quad\quad \, s=0,\\
a_{s}^{(\alpha)}+b_{s}^{(\alpha)}-b_{s-1}^{(\alpha)}, \quad\quad\quad \, \quad 1\leq s\leq j-2,\\
a_{j-1}^{(\alpha)}+b_{j-1}^{(\alpha)}+b_{j}^{(\alpha)}-b_{j-2}^{(\alpha)}, \quad s=j-1,\\
a_{j}^{(\alpha)}-b_{j}^{(\alpha)}-b_{j-1}^{(\alpha)},
\quad\quad\quad\quad\, s=j. \label{ur0.995.3}
\end{cases}
\end{equation}
We name the fractional numerical differentiation formula
(\ref{ur0.994}) for the Caputo fractional derivative of order
$\alpha$ ($0<\alpha<1$) the L2 formula.
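For reference, formula (\ref{ur0.994}) with the coefficients (\ref{ur0.995.1})--(\ref{ur0.995.3}) can be transcribed directly into code. The following is a minimal sketch (names are ours), tested against $u(t)=t^3$, whose exact Caputo derivative is $6t^{3-\alpha}/\Gamma(4-\alpha)$.

```python
import math

def a_coef(l, alpha):
    return (l + 1) ** (1 - alpha) - l ** (1 - alpha)

def b_coef(l, alpha):
    return ((l + 1) ** (2 - alpha) - l ** (2 - alpha)) / (2 - alpha) \
           - ((l + 1) ** (1 - alpha) + l ** (1 - alpha)) / 2.0

def c_coeffs(j, alpha):
    """Coefficients c_s^{(alpha)}, s = 0, ..., j, of the L2 formula (j >= 1)."""
    a, b = (lambda l: a_coef(l, alpha)), (lambda l: b_coef(l, alpha))
    if j == 1:
        return [a(0) + b(0) + b(1), a(1) - b(1) - b(0)]
    c = [a(0) + b(0)]
    for s in range(1, j - 1):
        c.append(a(s) + b(s) - b(s - 1))
    c.append(a(j - 1) + b(j - 1) + b(j) - b(j - 2))
    c.append(a(j) - b(j) - b(j - 1))
    return c

def l2_caputo(u, tau, alpha):
    """Delta_{0 t_{j+1}}^alpha u: the L2 approximation at t_{j+1} (needs j >= 1)."""
    j = len(u) - 2
    c = c_coeffs(j, alpha)
    acc = sum(c[j - s] * (u[s + 1] - u[s]) for s in range(j + 1))
    return tau ** (-alpha) / math.gamma(2 - alpha) * acc
```

For smooth $u$ the observed error decays like $\tau^{3-\alpha}$, in line with Lemma~\ref{lem1} below.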
\begin{lemma} \label{lem1} For any $\alpha\in(0,1)$, $j=1, 2, \ldots, M-1$
and $u(t)\in \mathcal{C}^3[0,t_{j+1}]$
\begin{equation}
|\partial_{0t_{j+1}}^{\alpha}u-\Delta_{0t_{j+1}}^\alpha
u|=\mathcal{O}(\tau^{3-\alpha}).
\label{ur0.996}
\end{equation}
\end{lemma}
\begin{proof} Let
$\partial_{0t_{j+1}}^{\alpha}u-\Delta_{0t_{j+1}}^\alpha
u=R_{0}^{2}+R_{2}^{j+1}$, where
$$
R_{0}^{2}=\frac{1}{\Gamma(1-\alpha)}\int\limits_{0}^{t_{2}}\frac{u'(\eta)d\eta}{(t_{j+1}-\eta)^{\alpha}}-
\frac{1}{\Gamma(1-\alpha)}\int\limits_{0}^{t_{2}}\frac{\left({\Pi}_{2,1}u(\eta)\right)'d\eta}{(t_{j+1}-\eta)^{\alpha}}
$$
$$
=\frac{1}{\Gamma(1-\alpha)}\int\limits_{0}^{t_{2}}\frac{\left(u(\eta)-{\Pi}_{2,1}u(\eta)\right)'d\eta}{(t_{j+1}-\eta)^{\alpha}}=
-\frac{\alpha}{\Gamma(1-\alpha)}\int\limits_{0}^{t_{2}}\frac{\left(u(\eta)-{\Pi}_{2,1}u(\eta)\right)d\eta}{(t_{j+1}-\eta)^{\alpha+1}}
$$
$$
=-\frac{\alpha}{6\Gamma(1-\alpha)}\int\limits_{0}^{t_{2}}{u'''(\bar\xi_1)\eta(\eta-t_1)(\eta-t_2)(t_{j+1}-\eta)^{-\alpha-1}}d\eta,
$$
$$
R_{2}^{j+1}=\frac{1}{\Gamma(1-\alpha)}\sum\limits_{s=2}^{j}\int\limits_{t_{s}}^{t_{s+1}}\frac{u'(\eta)d\eta}{(t_{j+1}-\eta)^{\alpha}}-
\frac{1}{\Gamma(1-\alpha)}\sum\limits_{s=2}^{j}\int\limits_{t_{s}}^{t_{s+1}}\frac{\left({\Pi}_{2,s}u(\eta)\right)'d\eta}{(t_{j+1}-\eta)^{\alpha}}
$$
$$
=\frac{1}{\Gamma(1-\alpha)}\sum\limits_{s=2}^{j}\int\limits_{t_{s}}^{t_{s+1}}\left(u(\eta)-{\Pi}_{2,s}u(\eta)\right)'{(t_{j+1}-\eta)^{-\alpha}}d\eta
$$
$$
=-\frac{\alpha}{\Gamma(1-\alpha)}\sum\limits_{s=2}^{j}\int\limits_{t_{s}}^{t_{s+1}}\left(u(\eta)-{\Pi}_{2,s}u(\eta)\right){(t_{j+1}-\eta)^{-\alpha-1}}d\eta
$$
$$
=-\frac{\alpha}{6\Gamma(1-\alpha)}\sum\limits_{s=2}^{j}\int\limits_{t_{s}}^{t_{s+1}}u'''(\bar\xi_s)(\eta-t_{s-1})(\eta-t_{s})(\eta-t_{s+1}){(t_{j+1}-\eta)^{-\alpha-1}}d\eta.
$$
Next we estimate the errors $R_{0}^{2}$ and $R_{2}^{j+1}$.
For $j=1$ we have
$$
\left|R_{0}^{2}\right|
=\frac{\alpha}{6\Gamma(1-\alpha)}\left|\int\limits_{0}^{t_{2}}{u'''(\bar\xi_1)\eta(\eta-t_1)(\eta-t_2)(t_{2}-\eta)^{-\alpha-1}}d\eta\right|
$$
$$
\leq\frac{\alpha M_3
\tau^2}{3\Gamma(1-\alpha)}\int\limits_{0}^{t_{2}}(t_{2}-\eta)^{-\alpha}d\eta=\frac{2^{1-\alpha}\alpha
M_3}{3\Gamma(2-\alpha)}\tau^{3-\alpha}.
$$
For $j\geq2$ we have
$$
\left|R_{0}^{2}\right|
=\frac{\alpha}{6\Gamma(1-\alpha)}\left|\int\limits_{0}^{t_{2}}{u'''(\bar\xi_1)\eta(\eta-t_1)(\eta-t_2)(t_{j+1}-\eta)^{-\alpha-1}}d\eta\right|
$$
$$
\leq\frac{2\sqrt{3}\alpha M_3
\tau^3}{54\Gamma(1-\alpha)}\int\limits_{0}^{t_{2}}{(t_{j+1}-\eta)^{-\alpha-1}}d\eta=\frac{\sqrt{3}
M_3
\tau^{3-\alpha}}{27\Gamma(1-\alpha)}\left((j-1)^{-\alpha}-(j+1)^{-\alpha}\right)
\leq\frac{\sqrt{3}(1-3^{-\alpha})M_3}{27\Gamma(1-\alpha)}\tau^{3-\alpha},
$$
$$
\left|R_{2}^{j+1}\right|=\frac{\alpha}{6\Gamma(1-\alpha)}\left|\sum\limits_{s=2}^{j}\int\limits_{t_{s}}^{t_{s+1}}u'''(\bar\xi_s)(\eta-t_{s-1})(\eta-t_{s})(\eta-t_{s+1}){(t_{j+1}-\eta)^{-\alpha-1}}d\eta\right|
$$
$$
\leq\frac{2\sqrt{3}\alpha M_3
\tau^3}{54\Gamma(1-\alpha)}\sum\limits_{s=2}^{j-1}\int\limits_{t_{s}}^{t_{s+1}}{(t_{j+1}-\eta)^{-\alpha-1}}d\eta+
\frac{\alpha M_3
\tau^2}{3\Gamma(1-\alpha)}\int\limits_{t_{j}}^{t_{j+1}}{(t_{j+1}-\eta)^{-\alpha}}d\eta
$$
$$
=\frac{\sqrt{3}\alpha M_3
\tau^3}{27\Gamma(1-\alpha)}\int\limits_{t_{2}}^{t_{j}}{(t_{j+1}-\eta)^{-\alpha-1}}d\eta+
\frac{\alpha M_3
\tau^2}{3\Gamma(1-\alpha)}\int\limits_{t_{j}}^{t_{j+1}}{(t_{j+1}-\eta)^{-\alpha}}d\eta
$$
$$
=\frac{\sqrt{3} M_3
\tau^3}{27\Gamma(1-\alpha)}\left(\tau^{-\alpha}-t_{j-1}^{-\alpha}\right)+
\frac{\alpha M_3
\tau^2}{3\Gamma(1-\alpha)}\frac{\tau^{1-\alpha}}{1-\alpha}
\leq
\left(\frac{\sqrt{3}}{9}+\frac{\alpha}{(1-\alpha)}\right)\frac{M_3}{3\Gamma(1-\alpha)}\tau^{3-\alpha}.
$$
\end{proof}
\subsection{Fundamental features of the new L2 fractional numerical differentiation formula.}
\begin{lemma}\label{lm_pr_1} For all $\alpha\in(0,1)$ and $s=1, 2, 3, \ldots$
\begin{equation}
\frac{1-\alpha}{(s+1)^\alpha}<a_s^{(\alpha)}<\frac{1-\alpha}{s^\alpha},
\label{url2_1}
\end{equation}
\begin{equation}
\frac{\alpha(1-\alpha)}{(s+2)^{\alpha+1}}<a_s^{(\alpha)}-a_{s+1}^{(\alpha)}<\frac{\alpha(1-\alpha)}{s^{\alpha+1}},
\label{url2_2}
\end{equation}
\begin{equation}
\frac{\alpha(1-\alpha)}{12(s+1)^{\alpha+1}}<b_{s}^{(\alpha)}<\frac{\alpha(1-\alpha)}{12s^{\alpha+1}}.
\label{url2_3}
\end{equation}
\end{lemma}
\begin{proof} The validity of Lemma \ref{lm_pr_1} follows from the following
equalities:
$$
a_{s}^{(\alpha)}=(1-\alpha)\int\limits_{0}^{1}\frac{d\xi}{(s+\xi)^{\alpha}},
$$
$$
a_{s}^{(\alpha)}-a_{s+1}^{(\alpha)}=\alpha(1-\alpha)\int\limits_{0}^{1}d\eta\int\limits_{0}^{1}\frac{d\xi}{(s+\xi+\eta)^{\alpha+1}},
$$
$$
b_{s}^{(\alpha)}=\frac{\alpha(1-\alpha)}{2^{2-\alpha}}\int\limits_{0}^{1}\eta
d\eta\int\limits_{2s+1-\eta}^{2s+1+\eta}\frac{d\xi}{\xi^{\alpha+1}}.
$$
\end{proof}
For $j=1$ we have
$$
c_0^{(\alpha)}=\frac{2+\alpha}{2^{\alpha}(2-\alpha)}, \quad
c_1^{(\alpha)}=\frac{2-3\alpha}{2^{\alpha}(2-\alpha)}, \quad
c_0^{(\alpha)}+3c_1^{(\alpha)}=\frac{2^{3-\alpha}(1-\alpha)}{2-\alpha}>0.
$$
For $j\geq2$, the next lemma gives properties of the coefficients
$c_s^{(\alpha)}$ defined in (\ref{ur0.995.2}) and (\ref{ur0.995.3}).
\begin{lemma} For any $\alpha\in(0,1)$, the coefficients $c_s^{(\alpha)}$
($0\leq s\leq j$, $j\geq 2$) satisfy the following inequalities
\begin{equation}
\frac{11}{16}\cdot\frac{1-\alpha}{(j+1)^\alpha} < c_j^{(\alpha)} < \frac{1-\alpha}{j^\alpha},
\label{url2_4}
\end{equation}
\begin{equation}
c_0^{(\alpha)}>c_2^{(\alpha)}>c_3^{(\alpha)}>\ldots>c_{j-2}^{(\alpha)}>c_{j-1}^{(\alpha)}>c_j^{(\alpha)},
\label{url2_5}
\end{equation}
\begin{equation}
c_0^{(\alpha)}+3c_1^{(\alpha)}-4c_2^{(\alpha)}>0. \label{url2_6}
\end{equation}
\end{lemma}
\begin{proof} For $j\geq2$ we get
$$
c_j^{(\alpha)}=a_j^{(\alpha)}-b_j^{(\alpha)}-b_{j-1}^{(\alpha)}<a_j^{(\alpha)}<
\frac{1-\alpha}{j^\alpha}.
$$
$$
c_j^{(\alpha)}=a_j^{(\alpha)}-b_j^{(\alpha)}-b_{j-1}^{(\alpha)}>
\frac{1-\alpha}{(j+1)^\alpha}-\frac{\alpha(1-\alpha)}{12j^{\alpha+1}}-\frac{\alpha(1-\alpha)}{12(j-1)^{\alpha+1}}
$$
$$
=\frac{1-\alpha}{(j+1)^\alpha}\left(1-\frac{\alpha}{12}\cdot\left(\frac{j+1}{j}\right)^\alpha\cdot\frac{1}{j}-\frac{\alpha}{12}\cdot\left(\frac{j+1}{j-1}\right)^\alpha\cdot\frac{1}{j-1}\right)
$$
$$
>\frac{1-\alpha}{(j+1)^\alpha}\left(1-\frac{1}{12}\cdot\frac{3}{2}\cdot\frac{1}{2}-\frac{1}{12}\cdot3\right)=\frac{11}{16}\cdot\frac{1-\alpha}{(j+1)^\alpha}.
$$
Inequality (\ref{url2_4}) is proved. Let us prove inequality
(\ref{url2_5}).
$$
c_0^{(\alpha)}-c_2^{(\alpha)}\geq
a_0^{(\alpha)}-a_2^{(\alpha)}+b_0^{(\alpha)}-
b_2^{(\alpha)}+b_1^{(\alpha)}-b_3^{(\alpha)}>0.
$$
For $j \geq 5$, $2\leq s\leq j-3$ we have
$$
c_{s}^{(\alpha)}-c_{s+1}^{(\alpha)}=a_{s}^{(\alpha)}-a_{s+1}^{(\alpha)}-b_{s-1}^{(\alpha)}+2b_{s}^{(\alpha)}-b_{s+1}^{(\alpha)}
$$
$$
=\frac{1}{2-\alpha}\left(-(s+2)^{2-\alpha}+3(s+1)^{2-\alpha}-3s^{2-\alpha}+(s-1)^{2-\alpha}\right)
$$
$$
-\frac{1}{2}\left(-(s+2)^{1-\alpha}+3(s+1)^{1-\alpha}-3s^{1-\alpha}+(s-1)^{1-\alpha}\right)
$$
$$
=\alpha(1-\alpha)\int\limits_{0}^{1}dz_1\int\limits_{0}^{1}dz_2\int\limits_{0}^{1}\frac{dz_3}{(s-1+z_1+z_2+z_3)^{\alpha+1}}
$$
$$
-\frac{\alpha(1-\alpha)(1+\alpha)}{2}\int\limits_{0}^{1}dz_1\int\limits_{0}^{1}dz_2\int\limits_{0}^{1}\frac{dz_3}{(s-1+z_1+z_2+z_3)^{\alpha+2}}
$$
$$
=\alpha(1-\alpha)\int\limits_{0}^{1}dz_1\int\limits_{0}^{1}dz_2\int\limits_{0}^{1}
\frac{\left(1-\frac{1+\alpha}{2}\cdot\frac{1}{s-1+z_1+z_2+z_3}\right)}{(s-1+z_1+z_2+z_3)^{\alpha+1}}dz_3
$$
$$
>\frac{\alpha(1-\alpha)}{(s+2)^{\alpha+1}}\left(1-\frac{1+\alpha}{2}\int\limits_{0}^{1}dz_1\int\limits_{0}^{1}dz_2\int\limits_{0}^{1}
\frac{dz_3}{1+z_1+z_2+z_3}\right).
$$
Since
$$
\int\limits_{0}^{1}dz_1\int\limits_{0}^{1}dz_2\int\limits_{0}^{1}
\frac{dz_3}{1+z_1+z_2+z_3}=\frac{1}{2}\left(44\ln{2}-27\ln{3}\right)<\frac{1}{2},
$$
$$
c_{s}^{(\alpha)}-c_{s+1}^{(\alpha)}>\frac{\alpha(1-\alpha)}{(s+2)^{\alpha+1}}\left(1-\frac{1+\alpha}{4}\right)>\frac{\alpha(1-\alpha)}{2(s+2)^{\alpha+1}}>0.
$$
For $j \geq 4$ we get
$$
c_{j-2}^{(\alpha)}-c_{j-1}^{(\alpha)}=a_{j-2}^{(\alpha)}-a_{j-1}^{(\alpha)}-b_{j-3}^{(\alpha)}+2b_{j-2}^{(\alpha)}-b_{j-1}^{(\alpha)}-b_{j}^{(\alpha)}
$$
$$
>\frac{\alpha(1-\alpha)}{2j^{\alpha+1}}-b_{j}^{(\alpha)}>\frac{\alpha(1-\alpha)}{2j^{\alpha+1}}-\frac{\alpha(1-\alpha)}{12j^{\alpha+1}}=\frac{5\alpha(1-\alpha)}{12j^{\alpha+1}}>0.
$$
For $j \geq 3$ we have
$$
c_{j-1}^{(\alpha)}-c_{j}^{(\alpha)}=a_{j-1}^{(\alpha)}-a_{j}^{(\alpha)}-b_{j-2}^{(\alpha)}+2b_{j-1}^{(\alpha)}+2b_{j}^{(\alpha)}
$$
$$
>a_{j-1}^{(\alpha)}-a_{j}^{(\alpha)}-b_{j-2}^{(\alpha)}+2b_{j-1}^{(\alpha)}-b_{j}^{(\alpha)}>\frac{\alpha(1-\alpha)}{2(j+1)^{\alpha+1}}>0.
$$
Inequality (\ref{url2_5}) is proved.
For $j=2$ we get
$$
c_{0}^{(\alpha)}+3c_{1}^{(\alpha)}-4c_{2}^{(\alpha)}=a_{0}^{(\alpha)}+3a_{1}^{(\alpha)}-4a_{2}^{(\alpha)}-2b_{0}^{(\alpha)}+7b_{1}^{(\alpha)}+7b_{2}^{(\alpha)}
$$
$$
>
a_{0}^{(\alpha)}-a_{1}^{(\alpha)}-2b_{0}^{(\alpha)}=3-2^{1-\alpha}-\frac{2}{2-\alpha}.
$$
The function $f(\alpha)=3-2^{1-\alpha}-\frac{2}{2-\alpha}$ satisfies $f(0)=f(1)=0$ and
$f''(\alpha)<0$ for all $\alpha\in(0,1)$. Since any function $f(x)\in C^2[0,1]$ with $f(0)=0$, $f(1)=0$
and $f''(x)<0$ on $(0,1)$ is positive on $(0,1)$, we have
$$
c_{0}^{(\alpha)}+3c_{1}^{(\alpha)}-4c_{2}^{(\alpha)}>f(\alpha) = 3-2^{1-\alpha}-\frac{2}{2-\alpha}>0
\quad \text{for all}\quad \alpha\in(0,1).
$$
For $j=3$ we get
$$
c_{0}^{(\alpha)}+3c_{1}^{(\alpha)}-4c_{2}^{(\alpha)}=a_{0}^{(\alpha)}+3a_{1}^{(\alpha)}-4a_{2}^{(\alpha)}-2b_{0}^{(\alpha)}+7b_{1}^{(\alpha)}-4b_{2}^{(\alpha)}-4b_{3}^{(\alpha)}
$$
$$
>a_{0}^{(\alpha)}-a_{1}^{(\alpha)}-2b_{0}^{(\alpha)}+4(a_{1}^{(\alpha)}-a_{2}^{(\alpha)}-b_{3}^{(\alpha)})>3-2^{1-\alpha}-\frac{2}{2-\alpha}>0.
$$
\end{proof}
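The three inequalities of the lemma are also easy to sanity-check numerically. The sketch below (helper names are ours) re-implements $a_l^{(\alpha)}$, $b_l^{(\alpha)}$ and $c_s^{(\alpha)}$ and verifies (\ref{url2_4})--(\ref{url2_6}) over a range of $\alpha$ and $j$.

```python
def a_coef(l, alpha):
    return (l + 1) ** (1 - alpha) - l ** (1 - alpha)

def b_coef(l, alpha):
    return ((l + 1) ** (2 - alpha) - l ** (2 - alpha)) / (2 - alpha) \
           - ((l + 1) ** (1 - alpha) + l ** (1 - alpha)) / 2.0

def c_coeffs(j, alpha):
    """Coefficients c_s^{(alpha)} for j >= 2, cf. (ur0.995.2)-(ur0.995.3)."""
    c = [a_coef(0, alpha) + b_coef(0, alpha)]
    for s in range(1, j - 1):
        c.append(a_coef(s, alpha) + b_coef(s, alpha) - b_coef(s - 1, alpha))
    c.append(a_coef(j - 1, alpha) + b_coef(j - 1, alpha)
             + b_coef(j, alpha) - b_coef(j - 2, alpha))
    c.append(a_coef(j, alpha) - b_coef(j, alpha) - b_coef(j - 1, alpha))
    return c

def check_coefficient_properties(alpha, j):
    """True iff (url2_4)-(url2_6) hold for the given alpha and j >= 2."""
    c = c_coeffs(j, alpha)
    bounds = (11.0 / 16.0 * (1 - alpha) / (j + 1) ** alpha < c[j]
              < (1 - alpha) / j ** alpha)
    chain = [c[0]] + c[2:]                      # c_0 > c_2 > ... > c_j
    monotone = all(x > y for x, y in zip(chain, chain[1:]))
    positive_combo = c[0] + 3 * c[1] - 4 * c[2] > 0
    return bounds and monotone and positive_combo
```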
\begin{lemma} \label{lm_ineq} For any real constants $c_0, c_1$ such that
$c_0\geq\max\{c_1,-3c_1\}$ and any sequence $\{v_j\}_{j=0}^{M}$, the following
inequality holds
\begin{equation}
v_{j+1}\left(c_0v_{j+1}-(c_0-c_1)v_{j}-c_1v_{j-1}\right)\geq
E_{j+1}-E_{j}, \quad j=1, \ldots, M-1, \label{url5}
\end{equation}
where
$$
E_{j}=\left(\frac{1}{2}\sqrt{\frac{c_0-c_1}{2}}+\frac{1}{2}\sqrt{\frac{c_0+3c_1}{2}}\right)^2v_{j}^{2}
+\left(\sqrt{\frac{c_0-c_1}{2}}v_{j}-\left(\frac{1}{2}\sqrt{\frac{c_0-c_1}{2}}+\frac{1}{2}\sqrt{\frac{c_0+3c_1}{2}}\right)v_{j-1}\right)^2,
\quad j=1, 2, \ldots, M.
$$
\end{lemma}
\begin{proof} The proof of Lemma \ref{lm_ineq} immediately follows from the next
equality
$$
v_{j+1}\left(c_0v_{j+1}-(c_0-c_1)v_{j}-c_1v_{j-1}\right)-
E_{j+1}+E_{j}
$$
$$
=\left(\left(\frac{1}{2}\sqrt{\frac{c_0-c_1}{2}}-\frac{1}{2}\sqrt{\frac{c_0+3c_1}{2}}\right)v_{j+1}-\sqrt{\frac{c_0-c_1}{2}}v_{j}+\left(\frac{1}{2}\sqrt{\frac{c_0-c_1}{2}}+\frac{1}{2}\sqrt{\frac{c_0+3c_1}{2}}\right)v_{j-1}\right)^2\geq0.
$$
\end{proof}
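Since (\ref{url5}) is, by the proof, an identity up to a perfect square, it can be checked numerically on arbitrary data. The sketch below (function names are ours) evaluates both sides on a random sequence for several pairs $(c_0,c_1)$ satisfying the hypothesis $c_0\geq\max\{c_1,-3c_1\}$.

```python
import math
import random

def energy(c0, c1, vj, vjm1):
    """E_j from Lemma (lm_ineq): a quadratic form in (v_j, v_{j-1})."""
    p = 0.5 * math.sqrt((c0 - c1) / 2.0) + 0.5 * math.sqrt((c0 + 3.0 * c1) / 2.0)
    return p ** 2 * vj ** 2 + (math.sqrt((c0 - c1) / 2.0) * vj - p * vjm1) ** 2

def min_gap(c0, c1, v):
    """Minimum over j of LHS - (E_{j+1} - E_j); nonnegative iff (url5) holds."""
    gaps = []
    for j in range(1, len(v) - 1):
        lhs = v[j + 1] * (c0 * v[j + 1] - (c0 - c1) * v[j] - c1 * v[j - 1])
        gaps.append(lhs - energy(c0, c1, v[j + 1], v[j])
                        + energy(c0, c1, v[j], v[j - 1]))
    return min(gaps)
```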
\begin{lemma} \cite{AlikhanovJCP} If
$g_{j}^{j+1}\geq g_{j-1}^{j+1}\geq \ldots\geq g_{0}^{j+1}>0$, $j=0,1,\ldots,M-1$,
then for any function $v(t)$ defined on the grid $\overline
\omega_{\tau}$ one has the inequalities
\begin{equation}\label{ur04}
v^{j+1}{_g}\Delta_{0t_{j+1}}^{\alpha}v\geq
\frac{1}{2}{_g}\Delta_{0t_{j+1}}^{\alpha}(v^2)+\frac{1}{2g^{j+1}_j}\left({_g}\Delta_{0t_{j+1}}^{\alpha}v\right)^2,
\end{equation}
where
$$
{_g}\Delta_{0t_{j+1}}^{\alpha}y_i=\sum\limits_{s=0}^{j}\left(y_i^{s+1}-y_i^s\right)g_{s}^{j+1},
$$
is a difference analog of the Caputo fractional derivative
of the order $\alpha$ ($0<\alpha<1$).
\end{lemma}
\begin{lemma}\label{lem16} For any function $v(t)$ defined on the grid
$\overline \omega_{\tau}$ one has the inequality
\begin{equation}\label{url6}
v_{j+1}\Delta_{0t_{j+1}}^{\alpha}v\geq\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}\left(E_{j+1}-E_{j}\right)+\frac{1}{2}\bar{\Delta}_{0t_{j+1}}^{\alpha}v^2 =
\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}\left(\mathcal{E}_{j+1}-\mathcal{E}_{j}\right) - \frac{\tau^{-\alpha}}{2\Gamma(2-\alpha)}\bar c^{(\alpha)}_j v_0^2,
\end{equation}
where
$$
\bar{\Delta}_{0t_{j+1}}^{\alpha}v=\frac{\tau^{-\alpha}}{\Gamma{(2-\alpha)}}\sum\limits_{s=0}^{j}\bar{c}_{j-s}^{(\alpha)}(v_{s+1}-v_{s}),\quad
j=1, 2, \ldots, M,
$$
$$
\bar{c}_{0}^{(\alpha)}={c}_{2}^{(\alpha)},\quad
\bar{c}_{1}^{(\alpha)}={c}_{2}^{(\alpha)},\quad
\bar{c}_{s}^{(\alpha)}={c}_{s}^{(\alpha)},\quad s=2,3,\ldots,j,
$$
$$
\text{for}\quad j=1, 2, 3, \ldots, M, \quad
E_j=E_j({c}_{0}^{(\alpha)}-{c}_{2}^{(\alpha)},{c}_{1}^{(\alpha)}-{c}_{2}^{(\alpha)}),
$$
$$
\mathcal{E}_{j} = E_{j} + \frac{1}{2}\sum\limits_{s=0}^{j-1}\bar{c}_{j-1-s}^{(\alpha)}v_{s+1}^2.
$$
\end{lemma}
\begin{proof} For $j=1$ we have
$$
v_{2}\Delta_{0t_{2}}^{\alpha}v=\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}v_2\left(c_{0}^{(\alpha)}(v_2-v_1)+c_{1}^{(\alpha)}(v_1-v_0)\right)
$$
$$
=\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}v_2\left((c_{0}^{(\alpha)}-c_{2}^{(\alpha)})v_2-(c_{0}^{(\alpha)}-c_{1}^{(\alpha)})v_1-(c_{1}^{(\alpha)}-c_{2}^{(\alpha)})v_0\right)
$$
$$
+
\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}c_{2}^{(\alpha)}\left(v_2^2-v_2v_0\right)
$$
$$
\geq\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}\left(E_{2}-E_{1}\right)+\frac{\tau^{-\alpha}}{2\Gamma(2-\alpha)}c_{2}^{(\alpha)}\left(v_2^2-v_0^2\right)
$$
$$
=\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}\left(E_{2}-E_{1}\right)+\frac{1}{2}\bar{\Delta}_{0t_{2}}^{\alpha}v^2.
$$
For $j=2, 3, \ldots, M-1$ we have
$$
v_{j+1}\Delta_{0t_{j+1}}^{\alpha}v=\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}v_{j+1}\sum\limits_{s=0}^{j}{c}_{j-s}^{(\alpha)}(v_{s+1}-v_{s})
$$
$$
=\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}v_{j+1}\left((c_{0}^{(\alpha)}-c_{2}^{(\alpha)})(v_{j+1}-v_{j})+(c_{1}^{(\alpha)}-c_{2}^{(\alpha)})(v_{j}-v_{j-1})\right)+v_{j+1}\bar{\Delta}_{0t_{j+1}}^{\alpha}v
$$
$$
=\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}v_{j+1}\left((c_{0}^{(\alpha)}-c_{2}^{(\alpha)})v_{j+1}-(c_{0}^{(\alpha)}-c_{1}^{(\alpha)})v_{j}-(c_{1}^{(\alpha)}-c_{2}^{(\alpha)})v_{j-1}\right)+v_{j+1}\bar{\Delta}_{0t_{j+1}}^{\alpha}v
$$
$$
\geq\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}\left(E_{j+1}-E_{j}\right)+\frac{1}{2}\bar{\Delta}_{0t_{j+1}}^{\alpha}v^2.
$$
In addition, the following equality holds
$$
\bar{\Delta}_{0t_{j+1}}^{\alpha}v^2 =
\frac{\tau^{-\alpha}}{\Gamma{(2-\alpha)}}\sum\limits_{s=0}^{j}\bar{c}_{j-s}^{(\alpha)}(v_{s+1}^2-v_{s}^2) =
\frac{\tau^{-\alpha}}{\Gamma{(2-\alpha)}}\left(
\sum\limits_{s=0}^{j}\bar{c}_{j-s}^{(\alpha)}v_{s+1}^2 - \sum\limits_{s=0}^{j-1}\bar{c}_{j-1-s}^{(\alpha)}v_{s+1}^2 -
\bar{c}_{j}^{(\alpha)}v_{0}^2\right).
$$
\end{proof}
\section{A difference scheme for the time fractional diffusion equation}
In this section a
difference scheme with the approximation order
$\mathcal{O}(h^2+\tau^{3-\alpha})$ is constructed for problem (\ref{ur01})--(\ref{ur02}). The stability of the
constructed difference scheme, as well as its convergence in the grid
$L_2$-norm with a rate equal to the order of the approximation
error, is proved. The obtained results are supported by numerical
calculations carried out for a test example.
\subsection{Derivation of the difference scheme}
\begin{lemma} \cite{AlikhanovJCP} For any functions $k(x)\in \mathcal{C}_{x}^{3}$
and $v(x)\in\mathcal{C}_{x}^{4}$ the following equality holds true:
\begin{equation}\label{ur16.1}
\left.\frac{d}{dx}\left(k(x)\frac{d}{dx}v(x)\right)\right|_{x=x_i}=\frac{k(x_{i+1/2})v(x_{i+1})-(k(x_{i+1/2})+k(x_{i-1/2}))v(x_{i})+k(x_{i-1/2})v(x_{i-1})}{h^2}+\mathcal{O}(h^2).
\end{equation}
\label{lemh2}
\end{lemma}
Let $u(x,t)\in \mathcal{C}_{x,t}^{4,3}$ be a solution of problem
(\ref{ur01})--(\ref{ur02}). Then we consider equation (\ref{ur01})
for $(x,t)=(x_i,t_{j+1})\in\overline Q_T$,\,
$i=1,2,\ldots,N-1$,\, $j=1,2,\ldots,M-1$:
\begin{equation}\label{ur17}
\partial_{0t_{j+1}}^{\alpha} u=\left.\mathcal{L}u(x,t)\right|_{(x_i,t_{j+1})}+f(x_i,t_{j+1}).
\end{equation}
On the basis of Lemmas \ref{lem1} and \ref{lemh2} we have
$$
\partial_{0t_{j+1}}^{\alpha} u= \Delta_{0t_{j+1}}^{\alpha}u + \mathcal{O}(\tau^{3-\alpha})
$$
$$
\left.\mathcal{L}u(x,t)\right|_{(x_i,t_{j+1})}=\Lambda
u(x_i,t_{j+1})+\mathcal{O}(h^2),
$$
where the difference operator $\Lambda$ is defined as follows
$$
(\Lambda y)_i=\left((ay_{\bar
x})_x-dy\right)_i=\frac{a_{i+1}y_{i+1}-(a_{i+1}+a_i)y_i+a_iy_{i-1}}{h^2}-d_iy_i,
$$
$$
y_{\bar x,i}=\frac{y_i-y_{i-1}}{h},\quad y_{x,i}=\frac{y_{i+1}-y_{i}}{h},
$$
with the coefficients
$a_i^{j+1}=k(x_{i-1/2},t_{j+1})$,\,
$d_i^{j+1}=q(x_{i},t_{j+1})$. Let
$\varphi_i^{j+1}=f(x_i,t_{j+1})$, then
we get the difference scheme with the approximation order
$\mathcal{O}(h^2+\tau^{3-\alpha})$:
\begin{equation}\label{ur18}
\Delta_{0t_{j+1}}^{\alpha}y_i=\Lambda y^{j+1}_i
+\varphi_i^{j+1}, \quad i=1,2,\ldots,N-1,\quad j=1,2,\ldots,M-1,
\end{equation}
\begin{equation}
y(0,t)=0,\quad y(l,t)=0,\quad t\in \overline \omega_{\tau}, \quad
y(x,0)=u_0(x),\quad x\in \overline \omega_{h}.\label{ur19}
\end{equation}
\textbf{Remark.} We assume that the solution $y^1_i$ is
found with the order of accuracy $\mathcal{O}(h^4+\tau^{3-\alpha})$. For
example, we can use the $L1$ formula and solve problem (\ref{ur01})--(\ref{ur02}) on the
time layer $[0,\tau]$ with step $\tau_1=\mathcal{O}(\tau^{\frac{3-\alpha}{2-\alpha}})$.
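The scheme (\ref{ur18})--(\ref{ur19}) can be sketched end to end on a manufactured solution. In the sketch below (all names are ours, not the authors' code) we take $k\equiv1$, $q\equiv0$, $l=T=1$ and the exact solution $u=t^3\sin(\pi x)$, computing $f$ accordingly; for simplicity $y^1$ is taken from the exact solution rather than computed as in the Remark. Each time layer requires one tridiagonal solve.

```python
import math

def a_coef(l, alpha):
    return (l + 1) ** (1 - alpha) - l ** (1 - alpha)

def b_coef(l, alpha):
    return ((l + 1) ** (2 - alpha) - l ** (2 - alpha)) / (2 - alpha) \
           - ((l + 1) ** (1 - alpha) + l ** (1 - alpha)) / 2.0

def c_coeffs(j, alpha):
    if j == 1:
        return [a_coef(0, alpha) + b_coef(0, alpha) + b_coef(1, alpha),
                a_coef(1, alpha) - b_coef(1, alpha) - b_coef(0, alpha)]
    c = [a_coef(0, alpha) + b_coef(0, alpha)]
    for s in range(1, j - 1):
        c.append(a_coef(s, alpha) + b_coef(s, alpha) - b_coef(s - 1, alpha))
    c.append(a_coef(j - 1, alpha) + b_coef(j - 1, alpha)
             + b_coef(j, alpha) - b_coef(j - 2, alpha))
    c.append(a_coef(j, alpha) - b_coef(j, alpha) - b_coef(j - 1, alpha))
    return c

def solve_subdiffusion(alpha, N, M):
    """Scheme (ur18)-(ur19) with k = 1, q = 0 for the manufactured solution
    u = t^3 sin(pi x) on [0,1]x[0,1]; returns the max nodal error at t = 1."""
    h, tau = 1.0 / N, 1.0 / M
    x = [i * h for i in range(N + 1)]
    u_ex = lambda t: [t ** 3 * math.sin(math.pi * xi) for xi in x]
    # f = Caputo_t^alpha u - u_xx for the chosen exact solution
    f_ex = lambda t: [(6.0 * t ** (3 - alpha) / math.gamma(4 - alpha)
                       + math.pi ** 2 * t ** 3) * math.sin(math.pi * xi)
                      for xi in x]
    g = tau ** (-alpha) / math.gamma(2 - alpha)
    y = [u_ex(0.0), u_ex(tau)]        # y^1 taken from the exact solution
    for j in range(1, M):
        c = c_coeffs(j, alpha)
        fv = f_ex((j + 1) * tau)
        # move the history terms of Delta^alpha to the right-hand side
        rhs = [fv[i] + g * c[0] * y[j][i]
               - g * sum(c[j - s] * (y[s + 1][i] - y[s][i]) for s in range(j))
               for i in range(1, N)]
        # Thomas algorithm for (g c_0 I - Lambda) y^{j+1} = rhs, zero BCs
        d, off = g * c[0] + 2.0 / h ** 2, -1.0 / h ** 2
        cp, dp = [0.0] * (N - 1), [0.0] * (N - 1)
        cp[0], dp[0] = off / d, rhs[0] / d
        for i in range(1, N - 1):
            m = d - off * cp[i - 1]
            cp[i] = off / m
            dp[i] = (rhs[i] - off * dp[i - 1]) / m
        ynew = [0.0] * (N + 1)
        for i in range(N - 2, -1, -1):
            ynew[i + 1] = dp[i] - cp[i] * ynew[i + 2]
        y.append(ynew)
    ue = u_ex(1.0)
    return max(abs(y[M][i] - ue[i]) for i in range(N + 1))
```

Refining the grid should visibly reduce the error, in agreement with the $\mathcal{O}(h^2+\tau^{3-\alpha})$ accuracy proved below.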
\subsection{Stability and convergence}
\begin{theorem}\label{thm_1} The difference scheme (\ref{ur18})--(\ref{ur19})
is unconditionally stable and its solution satisfies the following a
priori estimate:
\begin{equation}\label{ur20}
\sum\limits_{j=1}^{M-1}\left(\|y^{j+1}\|_0^2 + \|y_{\bar x}^{j+1}]|_0^2\right)\tau \leq M_1\left(\|y^1\|_0^2 +\|y^0\|_0^2+\sum\limits_{j=1}^{M-1}\|\varphi^{j+1}\|_0^2\tau\right),
\end{equation}
where $\|y]|_0^2=\sum\limits_{i=1}^{N}y_i^2h$, $M_1 > 0$ is a known number independent of $h$ and $\tau$.
\end{theorem}
\begin{proof} Taking the inner product of the equation
(\ref{ur18}) with $y^{j+1}$, we have
\begin{equation}\label{ur18_1}
\left(y^{{j+1}},\Delta_{0t_{j+1}}^\alpha y\right)-\left(y^{{j+1}},\Lambda
y^{{j+1}}\right)=\left(y^{{j+1}},\varphi^{j+1}\right).
\end{equation}
Using Lemma \ref{lem16}, we obtain
$$
\left(y^{{j+1}},\Delta_{0t_{j+1}}^\alpha y\right)\geq \frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}\left(E_{j+1}-E_{j}\right)+\frac{1}{2}\bar{\Delta}_{0t_{j+1}}^{\alpha}\|y\|_0^2
$$
$$
=\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}\left(\mathcal{E}_{j+1}-\mathcal{E}_{j}\right) - \frac{\tau^{-\alpha}}{2\Gamma(2-\alpha)}\bar c^{(\alpha)}_j \|y^0\|_0^2,\quad j=1, 2, \ldots, M-1,
$$
where
$$
E_{j}=\left(\frac{1}{2}\sqrt{\frac{c_0^{(\alpha)}-c_1^{(\alpha)}}{2}}+\frac{1}{2}\sqrt{\frac{c_0^{(\alpha)}+3c_1^{(\alpha)}-4c_2^{(\alpha)}}{2}}\right)^2\|y^{j}\|_0^{2}
$$
$$
+\left\|\sqrt{\frac{c_0^{(\alpha)}-c_1^{(\alpha)}}{2}}y^{j}-\left(\frac{1}{2}\sqrt{\frac{c_0^{(\alpha)}-c_1^{(\alpha)}}{2}}+\frac{1}{2}\sqrt{\frac{c_0^{(\alpha)}+3c_1^{(\alpha)}-4c_2^{(\alpha)}}{2}}\right)y^{j-1}\right\|_0^2.
$$
$$
\mathcal{E}_{j} = E_{j} + \frac{1}{2}\sum\limits_{s=0}^{j-1}\bar{c}_{j-1-s}^{(\alpha)}\|y^{s+1}\|_0^2.
$$
For the difference operator $\Lambda$ using
Green's first difference formula for the functions vanishing at $x=0$ and $x=l$, we
get $(-\Lambda y,y)\geq c_1\|y_{\bar x}]|_0^2$.
From (\ref{ur18_1}), using that
$$
\left(y^{{j+1}},\varphi^{j+1}\right)\leq\frac{c_1}{l^2} \|y^{j+1}\|_0^2+\frac{l^2}{4c_1}\|\varphi^{j+1}\|_0^2\leq\frac{c_1}{2} \|y_{\bar x}^{j+1}]|_0^2+\frac{l^2}{4c_1}\|\varphi^{j+1}\|_0^2,
$$ one obtains the inequality
\begin{equation}\label{ur18_2}
\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}\left(\mathcal{E}_{j+1}-\mathcal{E}_{j}\right) + \frac{c_1}{2}\|y_{\bar x}^{j+1}]|_0^2\leq \frac{l^2}{4c_1}\|\varphi^{j+1}\|_0^2 + \frac{\tau^{-\alpha}}{2\Gamma(2-\alpha)}\bar c^{(\alpha)}_j \|y^0\|_0^2.
\end{equation}
Multiplying inequality (\ref{ur18_2}) by $\tau$, summing the resulting relation over $j$ from $1$ to $M-1$, and taking into account inequality (\ref{url2_4}), one obtains the a priori estimate (\ref{ur20}).
The stability and convergence
of the difference scheme (\ref{ur18})--(\ref{ur19}) follow from the a priori estimate (\ref{ur20}).
\end{proof}
\subsection{Numerical results}
Numerical computations are performed for a test problem in which the
function
$$u(x,t)=\sin(\pi x)\left(t^{3+\alpha}+t^2+1\right)$$
is the exact solution of problem (\ref{ur01})--(\ref{ur02}) with
the coefficients $k(x,t)=2-\cos(xt)$, $q(x,t)=1-\sin(xt)$ and $l=1$,
$T=1$.
The errors ($z=y-u$) and convergence order (CO) in the norms
$\|\cdot\|_0$ and $\|\cdot\|_{\mathcal{C}(\bar\omega_{h\tau})}$,
where
$\|y\|_{\mathcal{C}(\bar\omega_{h\tau})}=\max\limits_{(x_i,t_j)\in\bar\omega_{h\tau}}|y|$,
are given in Table 1.
\textbf{Table 1} demonstrates that, as the number of spatial
subintervals and time steps increases with $h^2=\tau^{3-\alpha}$,
the maximum error decreases as expected, and the
convergence order of the approximate scheme is
$\mathcal{O}(h^2)=\mathcal{O}(\tau^{3-\alpha})$, where the convergence order
in time is given by the formula
CO$=\log_{\frac{\tau_1}{\tau_2}}{\frac{\|z_1\|}{\|z_2\|}}$ ($z_{i}$ is the
error corresponding to $\tau_{i}$).
\textbf{Table 2} shows that, for fixed $h=1/50000$, as the number of
time steps of our approximate scheme increases,
the maximum error decreases as expected, and the convergence order
in time is $\mathcal{O}(\tau^{3-\alpha})$, where the convergence order is
given by the formula
CO$=\log_{\frac{\tau_1}{\tau_2}}{\frac{\|z_1\|}{\|z_2\|}}$.
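The CO values in the tables can be reproduced from any two consecutive rows. A small helper (our own naming, in Python rather than the authors' Julia), assuming errors $e_i$ measured at step sizes $s_i$:

```python
import math

def convergence_order(e1, e2, s1, s2):
    """Observed convergence order CO = log_{s1/s2}(e1/e2)
    for errors e1, e2 obtained with step sizes s1 > s2."""
    return math.log(e1 / e2) / math.log(s1 / s2)
```

For instance, an error behaving exactly like $s^2$ gives CO $=2$, and feeding in the two errors of a halved-step pair from the tables recovers the tabulated CO up to rounding.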
\begin{table}[h!]
\begin{center}
\caption{The error and the convergence order in the norms $\|\cdot\|_{0}$
and $\|\cdot\|_{C(\bar{\omega}_{h\tau})}$ when
decreasing time-grid size for different values of $\alpha=0.1; 0.5; 0.9$, $\tau^{3-\alpha}=h^2.$}
\label{tab:table1}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\, \textbf{$\alpha$}\, & \,\textbf{$\tau$}\, & \, \textbf{$h$} \, & \, \textbf{$\max\limits_{0\leq j\leq M}\|z^j\|_{0}$} \, & \, \textbf{ CO } \, &\, \textbf{$\max\limits_{0\leq j\leq M}\|z^j\|_{{C(\bar{\omega}_{h\tau})}}$}\, & \,\textbf{ CO} \, &\, \textbf{$\max\limits_{0\leq j\leq M}\|z^j_{\bar x}]|_{0}$} \,& \, \textbf{ CO } \,\\
\hline
0.1 & 1/10 & 1/29 & 1.694597e-3 & & 2.387728e-3 & & 5.341255e-2 & \\
& 1/20 & 1/78 & 2.343539e-4 & 2.8541 & 3.304157e-4 & 2.8533 & 7.389758e-4 & 2.8536 \\
& 1/40 & 1/211 & 3.204204e-5 & 2.8707 & 4.517782e-5 & 2.8706 & 1.010417e-4 & 2.8706 \\
& 1/80 & 1/575 & 4.316975e-6 & 2.8918 & 6.086836e-6 & 2.8919 & 1.361321e-5 & 2.8919 \\
& 1/160 & 1/1571 & 5.786215e-7 & 2.8993 & 8.158422e-7 & 2.8993 & 1.824621e-6 & 2.8993 \\
\hline
0.5 & 1/10 & 1/18 & 4.556026e-3 & & 6.401088e-3 & & 1.434106e-2 & \\
& 1/20 & 1/43 & 8.011052e-4 & 2.5077 & 1.129064e-3 & 2.5032 & 2.524196e-3 & 2.5063 \\
& 1/40 & 1/101 & 1.452643e-4 & 2.4633 & 2.047995e-4 & 2.4628 & 4.577935e-4 & 2.4630 \\
& 1/80 & 1/240 & 2.575571e-5 & 2.4957 & 3.631166e-5 & 2.4957 & 8.116952e-5 & 2.4956 \\
& 1/160 & 1/570 & 4.568945e-6 & 2.4950 & 6.441587e-6 & 2.4949 & 1.439907e-5 & 2.4950 \\
\hline
0.9 & 1/10 & 1/12 & 1.181474e-2 & & 1.662948e-2 & & 3.707516e-2 & \\
& 1/20 & 1/24 & 2.931153e-3 & 2.0110 & 4.125467e-3 & 2.0111 & 9.218339e-3 & 2.0078 \\
& 1/40 & 1/49 & 7.018065e-4 & 2.0623 & 9.891705e-4 & 2.0603 & 2.208378e-3 & 2.0615 \\
& 1/80 & 1/100 & 1.678681e-4 & 2.0637 & 2.367034e-4 & 2.0631 & 5.283157e-4 & 2.0635 \\
& 1/160 & 1/207 & 3.921292e-5 & 2.0979 & 5.529153e-5 & 2.0979 & 1.234141e-4 & 2.0979 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[h!]
\begin{center}
\caption{The error and the convergence order in the norms $\|\cdot\|_{0}$
and $\|\cdot\|_{C(\bar{\omega}_{h\tau})}$ when
decreasing time-grid size for different values of $\alpha=0.3; 0.5; 0.7$, $h=1/50000.$}
\label{tab:table2}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\, \textbf{$\alpha$}\, & \,\textbf{$\tau$}\, & \, \textbf{$\max\limits_{0\leq j\leq M}\|z^j\|_{0}$} \, & \, \textbf{ CO } \, &\, \textbf{$\max\limits_{0\leq j\leq M}\|z^j\|_{{C(\bar{\omega}_{h\tau})}}$}\, & \,\textbf{ CO} \, &\, \textbf{$\max\limits_{0\leq j\leq M}\|z^j_{\bar x}]|_{0}$} \,& \, \textbf{ CO } \,\\
\hline
0.3 & 1/10 & 7.281556e-5 & & 1.036431e-4 & & 2.293180e-4 & \\
& 1/20 & 1.202886e-5 & 2.5977 & 1.712493e-5 & 2.5974 & 3.787942e-5 & 2.5978 \\
& 1/40 & 1.881330e-6 & 2.6766 & 2.674734e-6 & 2.6786 & 5.928309e-6 & 2.6757 \\
& 1/80 & 2.908398e-7 & 2.6934 & 4.140875e-7 & 2.6914 & 9.159351e-7 & 2.6943 \\
\hline
0.5 & 1/10 & 2.726395e-4 & & 3.880588e-4 & & 8.586014e-4 & \\
& 1/20 & 5.051848e-5 & 2.4321 & 7.190513e-5 & 2.4321 & 1.590939e-4 & 2.4321 \\
& 1/40 & 9.152847e-6 & 2.4645 & 1.302443e-5 & 2.4648 & 2.882759e-5 & 2.4643 \\
& 1/80 & 1.623335e-6 & 2.4952 & 2.310709e-6 & 2.4948 & 5.112271e-6 & 2.4954 \\
\hline
0.7 & 1/10 & 8.556143e-4 & & 1.217803e-3 & & 2.694425e-3 & \\
& 1/20 & 1.810137e-4 & 2.2408 & 2.576392e-4 & 2.2408 & 5.700338e-4 & 2.2408 \\
& 1/40 & 3.759528e-5 & 2.2674 & 5.351332e-5 & 2.2673 & 1.183890e-4 & 2.2675 \\
& 1/80 & 7.685019e-6 & 2.2904 & 1.093830e-5 & 2.2905 & 2.420107e-5 & 2.2903 \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{A compact difference scheme for the time
fractional diffusion equation}
In this section, for problem (\ref{ur01})--(\ref{ur02}), we construct
a compact difference scheme with the approximation order
$\mathcal{O}(h^4+\tau^{3-\alpha})$ in the case when $k=k(t)$ and $q=q(t)$.
The stability and convergence of the constructed difference scheme
in the grid $L_2$-norm, with the rate equal to the order of the
approximation error, are proved. The results are
supported by numerical calculations implemented for a test
example.
\subsection{Derivation of the difference scheme}
We associate the following difference scheme with differential problem
(\ref{ur01})--(\ref{ur02}) in the case when $k=k(t)$ and $q=q(t)$:
\begin{equation}\label{ur21}
\Delta_{0t_{j+1}}^{\alpha}\mathcal{H}_hy_i=a^{j+1}y_{\bar
xx,i}^{j+1}
-d^{j+1}\mathcal{H}_hy_i^{j+1}+\mathcal{H}_h\varphi_i^{j+1}, \,
i=1,\ldots,N-1,\, j=0,1,\ldots,M-1,
\end{equation}
\begin{equation}
y(0,t)=0,\quad y(l,t)=0,\quad t\in \overline \omega_{\tau}, \quad
y(x,0)=u_0(x),\quad x\in \overline \omega_{h},\label{ur22}
\end{equation}
where $\mathcal{H}_hv_i=v_i+h^2v_{\bar xx,i}/12$, $i=1,\ldots,N-1$,
$a^{j+1}=k(t_{j+1})$, $d^{j+1}=q(t_{j+1})$,
$\varphi_i^{j+1}=f(x_i,t_{j+1})$.
From Lemma \ref{lem1} it follows that if $u\in
\mathcal{C}_{x,t}^{6,3}$, then the difference scheme has the
approximation order $\mathcal{O}(\tau^{3-\alpha}+h^4)$.
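The averaging operator $\mathcal{H}_h$ admits the equivalent stencil form $\mathcal{H}_hv_i=(v_{i-1}+10v_i+v_{i+1})/12$ (used later in the proof), since $h^2$ cancels in $v_i+h^2v_{\bar xx,i}/12$. A vectorized sketch (our notation, in Python rather than the authors' Julia):

```python
import numpy as np

def apply_Hh(v):
    """Compact operator H_h v_i = v_i + h^2 * v_{xbar x, i} / 12
    = (v_{i-1} + 10 v_i + v_{i+1}) / 12 on interior nodes;
    boundary values are returned unchanged."""
    w = np.asarray(v, dtype=float).copy()
    w[1:-1] = (w[:-2] + 10.0 * w[1:-1] + w[2:]) / 12.0
    return w
```

Note that $\mathcal{H}_h$ leaves linear grid functions unchanged on interior nodes, since the second difference of a linear function vanishes.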
\subsection{Stability and convergence}
\begin{theorem} The difference scheme (\ref{ur21})--(\ref{ur22})
is unconditionally stable and its solution satisfies the following a
priori estimate:
\begin{equation}\label{ur23}
\sum\limits_{j=1}^{M-1}\left(\|\mathcal{H}_hy^{j+1}\|_0^2 + \|y_{\bar x}^{j+1}]|_0^2\right)\tau \leq M_2\left(\|\mathcal{H}_h y^1\|_0^2+\|\mathcal{H}_h y^0\|_0^2+\sum\limits_{j=1}^{M-1}\|\mathcal{H}_h\varphi^{j+1}\|_0^2\tau\right),
\end{equation}
where $M_2>0$ is a known number independent of $h$ and $\tau$.
\end{theorem}
\begin{proof} Taking the inner product of the equation
(\ref{ur21}) with
$\mathcal{H}_hy^{j+1}=(\mathcal{H}_hy)^{j+1}$, we have
$$
(\mathcal{H}_hy^{j+1},\Delta_{0t_{j+1}}^{\alpha}\mathcal{H}_hy)-a^{j+1}(\mathcal{H}_hy^{j+1},y_{\bar
xx}^{j+1})
$$
\begin{equation}\label{ur24}
+d^{j+1}(\mathcal{H}_hy^{j+1},\mathcal{H}_hy^{j+1})=(\mathcal{H}_hy^{j+1},\mathcal{H}_h\varphi^{j+1}).
\end{equation}
We transform the terms in identity (\ref{ur24}) as
$$
(\mathcal{H}_hy^{j+1},\Delta_{0t_{j+1}}^{\alpha}\mathcal{H}_hy)\geq
\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}\left(E_{j+1}-E_{j}\right)+
\frac{1}{2}\bar\Delta_{0t_{j+1}}^{\alpha}\|\mathcal{H}_hy\|_0^2=
$$
$$
=\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}\left(\mathcal{E}_{j+1}-\mathcal{E}_{j}\right) - \frac{\tau^{-\alpha}}{2\Gamma(2-\alpha)}\bar c^{(\alpha)}_j \|\mathcal{H}_hy^0\|_0^2,\quad j=1, 2, \ldots, M-1,
$$
where
$$
E_{j}=\left(\frac{1}{2}\sqrt{\frac{c_0^{(\alpha)}-c_1^{(\alpha)}}{2}}+\frac{1}{2}\sqrt{\frac{c_0^{(\alpha)}+3c_1^{(\alpha)}-4c_2^{(\alpha)}}{2}}\right)^2\|\mathcal{H}_h y^{j}\|_0^{2}
$$
$$
+\left\|\sqrt{\frac{c_0^{(\alpha)}-c_1^{(\alpha)}}{2}}\mathcal{H}_hy^{j}-\left(\frac{1}{2}\sqrt{\frac{c_0^{(\alpha)}-c_1^{(\alpha)}}{2}}+\frac{1}{2}\sqrt{\frac{c_0^{(\alpha)}+3c_1^{(\alpha)}-4c_2^{(\alpha)}}{2}}\right)\mathcal{H}_hy^{j-1}\right\|_0^2,
$$
$$
\mathcal{E}_{j} = E_{j} + \frac{1}{2}\sum\limits_{s=0}^{j-1}\bar{c}_{j-1-s}^{(\alpha)}\|\mathcal{H}_h y^{s+1}\|_0^2.
$$
Further,
$$
-(\mathcal{H}_hy^{j+1},y_{\bar
xx}^{j+1})=-(y^{j+1},y_{\bar
xx}^{j+1})-\frac{h^2}{12}\|y_{\bar
xx}^{j+1}\|_0^2=\|y_{\bar
x}^{j+1}]|_0^2-\frac{1}{12}\sum\limits_{i=1}^{N-1}(y_{\bar
x,i+1}^{j+1}-y_{\bar x,i}^{j+1})^2h
$$
$$
\geq\|y_{\bar x}^{j+1}]|_0^2-\frac{1}{3}\|y_{\bar
x}^{j+1}]|_0^2=\frac{2}{3}\|y_{\bar
x}^{j+1}]|_0^2,
$$
$$
(\mathcal{H}_hy^{j+1},\mathcal{H}_h\varphi^{j+1})\leq\varepsilon\|\mathcal{H}_hy^{j+1}\|_0^2+
\frac{1}{4\varepsilon}\|\mathcal{H}_h\varphi^{j+1}\|_0^2
$$
$$
=\varepsilon\sum\limits_{i=1}^{N-1}\left(\frac{y_{i-1}^{j+1}+10y_{i}^{j+1}+y_{i+1}^{j+1}}{12}\right)^2h+
\frac{1}{4\varepsilon}\|\mathcal{H}_h\varphi^{j+1}\|_0^2
$$
$$
\leq\varepsilon\|y^{j+1}\|_0^2+\frac{1}{4\varepsilon}\|\mathcal{H}_h\varphi^{j+1}\|_0^2\leq \frac{\varepsilon l^2}{2}\|y_{\bar x}^{j+1}]|_0^2+\frac{1}{4\varepsilon}\|\mathcal{H}_h\varphi^{j+1}\|_0^2.
$$
In view of the above transformations, from
identity (\ref{ur24}) with $\varepsilon=\frac{c_1}{3l^2}$ we
obtain the inequality
$$
\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}\left(\mathcal{E}_{j+1}-\mathcal{E}_{j}\right) + \frac{c_1}{2}\|y_{\bar x}^{j+1}]|_0^2\leq\frac{3l^2}{4c_1}\|\mathcal{H}_h\varphi^{j+1}\|_0^2+\frac{\tau^{-\alpha}}{2\Gamma(2-\alpha)}\bar c^{(\alpha)}_j \|\mathcal{H}_hy^0\|_0^2.
$$
The rest of the proof is similar to that of Theorem \ref{thm_1} and is
omitted.
\end{proof}
The norm $\|\mathcal{H}_hy\|_0$ is equivalent to the norm $\|y\|_0$, which follows from the inequalities
$$
\frac{5}{12}\|y\|_0^2\leq\|\mathcal{H}_hy\|_0^2\leq\|y\|_0^2.
$$
Using a
priori estimate (\ref{ur23}), we obtain the convergence result.
\begin{theorem}
Let $u(x,t)\in\mathcal{C}_{x,t}^{6,3}$
be the solution of problem (\ref{ur01})--(\ref{ur02}) in the
case $k=k(t)$, $q=q(t)$, and let $\{y_i^j \,|\, 0\leq i\leq N, \,
1\leq j\leq M\}$ be the solution of difference scheme
(\ref{ur21})--(\ref{ur22}). Then it holds true that
$$
\sqrt{\sum\limits_{j=1}^{M-1}\left(\|z^{j+1}\|_0^2 + \|z_{\bar x}^{j+1}]|_0^2\right)\tau}\leq C_R\left(\tau^{3-\alpha}+h^4\right),
$$
where $z_i^j = u(x_i,t_j)-y_i^j$ and $C_R$ is a positive constant independent of $\tau$ and $h$.
\end{theorem}
\subsection{Numerical results}
Numerical calculations are performed for a test problem in which the
function
$$u(x,t)=\sin(\pi x)\left(t^{3+\alpha}+t^2+1\right)$$
is the exact solution of the problem (\ref{ur01})--(\ref{ur02}) with
the coefficients $k(t)=2-\cos(t)$, $q(t)=1-\sin(t)$ and $l=1$,
$T=1$.
The errors ($z=y-u$) and convergence order (CO) in the norms
$\|\cdot\|_0$ and $\|\cdot\|_{\mathcal{C}(\bar\omega_{h\tau})}$,
where
$\|y\|_{\mathcal{C}(\bar\omega_{h\tau})}=\max\limits_{(x_i,t_j)\in\bar\omega_{h\tau}}|y|$,
are given in Table 3.
\textbf{Table 3} shows that, as the number of spatial
subintervals and time steps increases with $h^4=\tau^{3-\alpha}$,
the maximum error decreases as expected, and the
convergence order of the compact difference scheme is
$\mathcal{O}(h^4)=\mathcal{O}(\tau^{3-\alpha})$, where the convergence order
in time is given by the formula
CO$=\log_{\frac{\tau_1}{\tau_2}}{\frac{\|z_1\|}{\|z_2\|}}$ ($z_{i}$ is the
error corresponding to $\tau_{i}$).
\textbf{Table 4} demonstrates that, for fixed $h=1/2000$, as the number of
time steps of our approximate scheme increases,
the maximum error decreases as expected, and the convergence order
in time is $\mathcal{O}(\tau^{3-\alpha})$, where the convergence order is
given by the formula
CO$=\log_{\frac{\tau_1}{\tau_2}}{\frac{\|z_1\|}{\|z_2\|}}$.
\begin{table}[h!]
\begin{center}
\caption{The error and the convergence order in the norms $\|\cdot\|_{0}$
and $\|\cdot\|_{C(\bar{\omega}_{h\tau})}$ when
decreasing time-grid size for different values of $\alpha=0.1; 0.5; 0.9$, $\tau^{3-\alpha}=(h/2)^4.$}
\label{tab:table3}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\, \textbf{$\alpha$}\, & \,\textbf{$\tau$}\, & \, \textbf{$h$} \, & \, \textbf{$\max\limits_{0\leq j\leq M}\|z^j\|_{0}$} \, & \, \textbf{ CO } \, &\, \textbf{$\max\limits_{0\leq j\leq M}\|z^j\|_{{C(\bar{\omega}_{h\tau})}}$}\, & \,\textbf{ CO} \, &\, \textbf{$\max\limits_{0\leq j\leq M}\|z^j_{\bar x}]|_{0}$} \,& \, \textbf{ CO } \,\\
\hline
0.1 & 1/40 & 1/29 & 1.321499e-6 & & 1.866140e-6 & & 4.149581e-6 & \\
& 1/80 & 1/47 & 1.912169e-7 & 2.7889 & 2.702706e-7 & 2.7875 & 6.006140e-7 & 2.7884 \\
& 1/160 & 1/79 & 2.443382e-8 & 2.9683 & 3.454781e-8 & 2.9677 & 7.675607e-8 & 2.9680 \\
& 1/320 & 1/131 & 3.267337e-9 & 2.9027 & 4.620380e-9 & 2.9025 & 1.026439e-8 & 2.9026 \\
& 1/640 & 1/217 & 4.395804e-10 & 2.8939 & 6.216471e-10 & 2.8938 & 1.380970e-9 & 2.8939 \\
\hline
0.5 & 1/40 & 1/21 & 1.178052e-5 & & 1.661359e-5 & & 3.697512e-5 & \\
& 1/80 & 1/31 & 2.241843e-6 & 2.3936 & 3.166375e-6 & 2.3914 & 7.039944e-6 & 2.3929 \\
& 1/160 & 1/47 & 4.096169e-7 & 2.4523 & 5.789623e-7 & 2.4512 & 1.286610e-6 & 2.4519 \\
& 1/320 & 1/73 & 7.195323e-8 & 2.5091 & 1.017336e-7 & 2.5086 & 2.260303e-7 & 2.5089 \\
& 1/640 & 1/113 & 1.268803e-8 & 2.5036 & 1.794185e-8 & 2.5034 & 3.985934e-8 & 2.5035 \\
\hline
0.9 & 1/40 & 1/13 & 1.470087e-4 & & 2.063859e-4 & & 4.607185e-4 & \\
& 1/80 & 1/19 & 3.419750e-5 & 2.1039 & 4.819739e-5 & 2.0983 & 1.073122e-4 & 2.1020 \\
& 1/160 & 1/29 & 7.726281e-6 & 2.1460 & 1.091058e-5 & 2.1432 & 2.426096e-5 & 2.1451 \\
& 1/320 & 1/41 & 1.824394e-6 & 2.0823 & 2.578190e-6 & 2.0812 & 5.730103e-6 & 2.0820 \\
& 1/640 & 1/59 & 4.260141e-7 & 2.0984 & 6.022614e-7 & 2.0978 & 1.338204e-6 & 2.0982 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[h!]
\begin{center}
\caption{The error and the convergence order in the norms $\|\cdot\|_{0}$
and $\|\cdot\|_{C(\bar{\omega}_{h\tau})}$ when
decreasing time-grid size for different values of $\alpha=0.3; 0.5; 0.7$, $h=1/1000.$}
\label{tab:table4}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\, \textbf{$\alpha$}\, & \,\textbf{$\tau$}\, & \, \textbf{$\max\limits_{0\leq j\leq M}\|z^j\|_{0}$} \, & \, \textbf{ CO } \, &\, \textbf{$\max\limits_{0\leq j\leq M}\|z^j\|_{{C(\bar{\omega}_{h\tau})}}$}\, & \,\textbf{ CO} \, &\, \textbf{$\max\limits_{0\leq j\leq M}\|z^j_{\bar x}]|_{0}$} \,& \, \textbf{ CO } \,\\
\hline
0.3 & 1/10 & 6.155178e-5 & & 8.704736e-5 & & 1.933705e-4 & \\
& 1/20 & 1.016170e-5 & 2.5986 & 1.437081e-5 & 2.5986 & 3.192392e-5 & 2.5986 \\
& 1/40 & 1.642526e-6 & 2.6291 & 2.322883e-6 & 2.6291 & 5.160147e-6 & 2.6291 \\
& 1/80 & 2.620773e-7 & 2.6478 & 3.706331e-7 & 2.6478 & 8.233399e-7 & 2.6478 \\
& 1/160 & 4.147475e-8 & 2.6596 & 5.865421e-8 & 2.6596 & 1.302967e-7 & 2.6596 \\
\hline
0.5 & 1/10 & 2.308509e-4 & & 3.264725e-4 & & 7.252393e-4 & \\
& 1/20 & 4.277465e-5 & 2.4321 & 6.049249e-5 & 2.4321 & 1.343804e-4 & 2.4321 \\
& 1/40 & 7.775493e-6 & 2.4597 & 1.099620e-5 & 2.4597 & 2.442742e-5 & 2.4597 \\
& 1/80 & 1.398769e-6 & 2.4747 & 1.978159e-6 & 2.4747 & 4.394362e-6 & 2.4747 \\
& 1/160 & 2.500909e-7 & 2.4836 & 3.536819e-7 & 2.4836 & 7.856834e-7 & 2.4836 \\
\hline
0.7 & 1/10 & 7.275485e-4 & & 1.028909e-3 & & 2.285660e-3 & \\
& 1/20 & 1.539001e-4 & 2.2410 & 2.176477e-4 & 2.2410 & 4.834914e-4 & 2.2410 \\
& 1/40 & 3.192728e-5 & 2.2691 & 4.515200e-5 & 2.2691 & 1.003024e-4 & 2.2691 \\
& 1/80 & 6.558744e-6 & 2.2832 & 9.275465e-6 & 2.2832 & 2.060489e-5 & 2.2832 \\
& 1/160 & 1.340380e-6 & 2.2907 & 1.895584e-6 & 2.2907 & 4.210928e-6 & 2.2907 \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Conclusion}
In this paper we have constructed an $L2$-type difference approximation of the Caputo fractional derivative
with the approximation order $\mathcal{O}(\tau^{3-\alpha})$ and studied
the fundamental properties of this difference operator.
New difference schemes of the second and fourth
approximation order in space and of the $(3-\alpha)$-th approximation order in
time for the time fractional diffusion equation with variable
coefficients are also constructed. The stability and convergence
of these schemes, with the rate equal to the
order of the approximation error, are proved. The method can readily
be extended to other time fractional partial differential
equations with other boundary conditions.
Numerical tests fully corroborating the theoretical results are presented.
All calculations were performed in Julia v1.5.1.
\section*{Abstract}
The modeling of damage processes in materials constitutes an ill-posed mathematical problem which manifests in mesh-dependent finite element results. The loss of ellipticity of the discrete system of equations is counteracted by regularization schemes, of which the gradient enhancement of the strain energy density is often used. In this contribution, we present an extension of the efficient numerical treatment, which has been proposed in \cite{junker2019fast}, to materials that are subjected to large deformations. Along with the model derivation, we present a technique for element erosion in the case of severely damaged materials. Efficiency and robustness of our approach are demonstrated by two numerical examples.
\section{Introduction}
The modeling of damage at finite strains dates back to the 1980s, when the first phenomenological models related to material degradation were developed. These models were mainly constructed based on the original ideas for small strains, where damage was considered as a reduction of the effective cross-section area on which the load acts, resulting from the nucleation and evolution of cavities. The reduced cross section thus softens the material, which gives rise to stress- and strain-softening effects. On the material point level, this means that the stress decreases with increasing strain after a critical threshold value is reached. This phenomenological approach of describing the complex microstructural processes during material failure, which might be accompanied by the evolution of dislocations, micro-cracks and delamination effects, provides a convincing method for material modeling. Unfortunately, softening may also be accompanied by a loss of ellipticity of the system of governing equations once a certain amount of damage has evolved, which renders boundary value problems ill-posed. An obvious indicator for such problems is the severe dependence of the finite element results on the individual mesh discretization: the smaller the finite element size is chosen, the more localized the resulting damage bands will be. Furthermore, and even worse, the numerical solution of such problems does not converge and might be unstable with respect to small perturbations of the boundary conditions, i.e., it lacks numerical robustness. These effects are observed not only in the spatial distribution of the damage variable but also in the global behavior in terms of reaction forces \cite{bavzant2002nonlocal,peerlings2001critical}. Consequently, a mathematical correction of the problem is inevitable. This process is referred to as regularization, which might be performed in time and space.
Overview articles may be found, e.g., in \cite{forest2004localization} and~\cite{yang2012micromechanics}.
Alternatively, a relaxed variational formulation might be employed
which relies on a careful analysis of the convexity properties of the underlying energy.
This energetic analysis demands a direct formulation of the damage variable as pure function of the deformation state such that the condensed energy (energy plus time-integrated dissipation) may be relaxed. It is thus convenient to formulate the model in a time-incremental way, cf.~\cite{ortiz1999variational,mielke2003energetic,hackl2003dissipation,miehe2002strain}. Thereby, a dissipative material behavior may be described by an incrementally hyperelastic representation, which is thus open for analysis with respect to generalized convexity conditions. This enables the construction of convex hulls (see \cite{gurses2011evolving} for small strains and \cite{balzani2012relaxed} and \cite{schmidt2016relaxed} for finite strains) which on the one hand ensures the recovery of ellipticity and on the other hand guarantees the existence of minimizers.
In contrast to spatial regularization approaches, no internal length scale parameter has to be chosen, since the micromechanical parameters are obtained as a result of the convexification procedure. On the other hand, the effort at the material point level may be increased if a nonconvex minimization problem is solved as part of the convexification at each integration point, which yields an increased expense in the assembling procedure. Another drawback may be seen in the fact that, although the stress softening hysteresis as explained above may in principle be captured, a softening in the sense of negative tangents in stress-strain curves cannot yet be modeled at the material point. One exception so far is the approach of Schwarz~\textit{et al.} in \cite{schwarz2020variational} for small strains, where a simplified microstructure consisting of damaged, linear elastic materials is considered, whose resulting convexified energy indeed enables the description of strain-softening.
In the context of time-wise regularization, viscous effects are included into the model. Examples for such models may be found in \cite{faria1998strain,Needleman1988,suffis2003damage,forest2004localization,Junker2017relaxation-damage,langenfeld2018quasi}. These models introduce a rate-dependence of the damage evolution which damps the local reduction of stiffness. Consequently, damage evolves also at neighbored material points to keep the amount of dissipated energy constant. A huge benefit of such models (as well as of relaxed incremental formulations) is their implementability in any finite element framework using solely the material law interface. However, viscous damage models possess a remarkable drawback: the viscosity associated to damage evolution has to be chosen quite high, reaching values that are unphysical, in order to obtain the desired regularization. Furthermore, the effectiveness of viscous regularization strongly depends on the applied loading velocity which does not necessarily need to agree with experimental velocities. Finally, the interplay of viscosity and loading velocity has to be set and thus well-posedness has to be proven for each individual boundary value problem rendering such approaches rather impractical for application.
Spatial regularization ``homogenizes'' the damage evolution on a local level. To this end, either integral formulations are introduced or a gradient enhancement of the strain energy is employed. A major benefit of such models compared to viscous regularization is that they can be shown to be inherently well-posed. Examples for spatially regularized damage models may be found, e.g., in \cite{bavzant2002nonlocal,peerlings2001critical,borst2001some,PeeBorBreVre:1996:ged,PeeMasGee:2003:atm,dimitrijevic2008method,gross2017fracture,kiefer2018gradient}.
Also for the finite-strain regime, they have been used, cf. \cite{miehe2016phase,ambati2016phase,carollo20173d,borden2016phase,gultekin2016phase,fathi2017finite,WafPolMenBla:2013:age,brepols2017gradient}. Such models may be applied to arbitrary loading velocities since any dependence of the material behavior on this boundary condition is included in a physically sound manner. Furthermore, they might be coupled to other physical processes as for instance plasticity and temperature dependence without unphysical parameters being present in the complete system of model equations.
For gradient regularization, we refer to the prototype approach of \cite{PeeBorBreVre:1996:ged,PeeMasGee:2003:atm} and similarly \cite{dimitrijevic2008method}, and its
finite-strain extension in \cite{WafPolMenBla:2013:age}, where a nonlocal function carries information on the damage state. This procedure transforms the associated equation for the damage state from a local evolution equation at the material point (as is the case for viscous regularization) to a field equation (integral equation for integral regularization or partial differential equation for gradient regularization) which has to be solved along with the balance of linear momentum for the displacements. Consequently, the number of unknowns increases during numerical treatment, e.g., the number of nodal variables increases from three (displacements) to four (displacements + damage state variable) in a mixed finite element implementation.
While thermodynamic consistency and mesh-independent simulations can be ensured with these approaches, unfortunately, the increased number of degrees of freedom in the discrete global system of equations significantly increases computational costs.
To limit the usual numerical deficiency of gradient regularization, we present in this contribution a novel finite strain gradient damage formulation.
A similar approach for the small strain regime has been introduced in \cite{junker2019fast} (see also \cite{vogel2020adaptive} for a detailed numerical analysis including mesh adaptivity).
We begin with the variational derivation of the damage model that is formulated in terms of a partial differential inequality which complements the partial differential equation describing the balance of linear momentum. Then, we apply the neighbored element method which has proven to be beneficial for a robust and time-efficient numerical evaluation of the set of governing equations given by the balance of linear momentum and an additional field equation. Examples have been given in the context of small-strain damage \cite{junker2019fast,vogel2020adaptive} and small-strain \cite{jantos2019accurate} and finite-strain topology optimization \cite{junker2020new}. The neighbored element method combines a finite element approach for the balance of linear momentum and a finite difference approach for the additional field equation along with operator splitting techniques. Thereby, the neighbored element method conserves the size of the global system of equations resulting from the finite element formulation for the displacements: to be more precise, the number of nodal unknowns for the finite elements remains unchanged from a corresponding purely elastic formulation. The partial differential inequality for the damage state, however, is solved through a correspondingly modified internal update procedure. Furthermore, a novel stabilization technique based on element erosion methods enables us to obtain numerical results even for substantial deformation states including regions of highly damaged material for which the model is close to the loss of its validity. The numerical results show mesh-independence while requiring minimal extra computation time compared to purely hyperelastic simulations.
\section{A gradient-enhanced damage model at finite strains}
There exist various strategies for the derivation of fundamental material models of which an extended Hamilton principle offers a variational approach. Its stationarity conditions agree with the 2$^\text{nd}$ Law of Thermodynamics and Onsager's principle by construction if physically reasonable ansatzes for the strain energy density and the dissipation function are chosen. For details on the extended Hamilton principle and its relation to thermodynamics and other modeling strategies, such as the principle of virtual work or the principle of the minimum of the dissipation potential, we refer to \cite{JuBa20-Hamilton}.
The extended Hamilton principle is related to the stationarity of the action functional. It can be shown, cf.~\cite{JuBa20-Hamilton}, that it amounts to the following condition: in the quasi-static case, the sum of the Gibbs energy $\mathcal{G}$, which is also referred to as the total potential, and the work due to dissipative processes $\mathcal{D}$ becomes stationary:
\begin{equation}
\label{eq:Hamilton1}
\mathcal{H}[\boldface{u},\alpha] := \mathcal{G}[\boldface{u},\alpha] + \mathcal{D}[\alpha] \rightarrow \underset{\boldface{u},\alpha}{\text{stat}} \ .
\end{equation}
A detailed investigation of the extended Hamilton principle as unifying theory for coupled problems, i.e. thermo-mechanical processes, and dissipative microstructure evolution has been presented in \cite{JuBa20-Hamilton}.
In \eqref{eq:Hamilton1}, the displacements are denoted by $\boldface{u}$ and the state of microstructure is expressed in terms of the internal variable $\alpha$. The Gibbs energy reads
\begin{equation}
\mathcal{G}[\boldface{u},\alpha] = \int_\Omega \Psi(\boldface{C},\alpha) \ \mathrm{d} V - \int_\Omega \boldface{b}^\star \cdot \boldface{u} \ \mathrm{d} V - \int_{\partial\Omega} \boldface{t}^\star\cdot\boldface{u} \ \mathrm{d} A
\end{equation}
with the strain energy density $\Psi$, the prescribed body forces $\boldface{b}^\star$, and the traction vector $\boldface{t}^\star$. The integrals are evaluated for the body's volume $\Omega$ and its surface $\partial\Omega$ in its reference configuration with the position vector $\boldface{X}$. The deformation is measured by the right Cauchy--Green tensor $\boldface{C} := \boldface{F}^\mathrm{T}\boldface{F}$ with the deformation gradient $\boldface{F} = \partial\boldface{x}/\partial\boldface{X}=\boldface{I} + \boldface{u}\otimes\nabla$ and the spatial coordinate $\boldface{x}=\boldface{X}+\boldface{u}$ in the current configuration. Throughout this contribution, the nabla operator is computed with respect to the reference configuration, i.e., $\nabla \equiv \nabla_{\boldface{X}}$.
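For concreteness, the kinematic quantities above can be evaluated as follows. This is a minimal Python/NumPy sketch, not the paper's implementation; the function name and the choice of passing the displacement gradient are our assumptions.

```python
import numpy as np

def right_cauchy_green(grad_u):
    """Right Cauchy-Green tensor C = F^T F with the deformation
    gradient F = I + du/dX (grad_u is the 3x3 displacement gradient)."""
    F = np.eye(3) + np.asarray(grad_u, dtype=float)
    return F.T @ F
```

For a vanishing displacement gradient this returns the identity, and a $10\%$ uniaxial stretch gives $C_{11}=1.1^2=1.21$.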
The work due to dissipative processes is given by
\begin{equation}
\mathcal{D}[\alpha] := \int_\Omega p^\text{diss} \, \alpha\ \mathrm{d} V
\end{equation}
when the non-conservative force $p^\text{diss}$ performs work ``along'' the microstructure state which is described in terms of the internal variable $\alpha$. The stationarity condition of \eqref{eq:Hamilton1} thus reads
\begin{equation}
\delta\mathcal{H}[\boldface{u},\alpha](\delta\boldface{u},\delta\alpha) = \delta\mathcal{G}[\boldface{u},\alpha](\delta\boldface{u},\delta\alpha) + \delta\mathcal{D}[\alpha](\delta\alpha) = 0 \qquad \forall \ \delta\boldface{u}, \delta\alpha
\end{equation}
with
\begin{equation}
\delta\mathcal{G}[\boldface{u},\alpha](\delta\boldface{u},\delta\alpha) = \int_\Omega \Big( \delta_{\boldface{u}} \Psi(\boldface{C},\alpha) + \delta_{\alpha} \Psi(\boldface{C},\alpha) \Big) \ \mathrm{d} V - \int_\Omega \boldface{b}^\star \cdot \delta\boldface{u} \ \mathrm{d} V - \int_{\partial\Omega} \boldface{t}^\star\cdot\delta\boldface{u} \ \mathrm{d} A \ .
\end{equation}
The non-conservative force is assumed to be a material-specific quantity which is determined once the process conditions are set, similarly to the external forces $\boldface{b}^\star$ and $\boldface{t}^\star$. Consequently, it does not follow from a stationarity condition but needs to be modeled. This implies $\delta_\alpha p^\text{diss}=0$ such that the variation of the dissipated energy reads
\begin{equation}
\delta\mathcal{D}[\alpha](\delta\alpha) = \int_\Omega p^\text{diss} \, \delta\alpha\ \mathrm{d} V \ .
\end{equation}
Since the variations for displacements and microstructural state are independent, we obtain
\begin{equation}
\label{eq:stationarity1}
\begin{cases}
\displaystyle \int_\Omega \delta_{\boldface{u}} \Psi(\boldface{C},\alpha) \ \mathrm{d} V - \int_\Omega \boldface{b}^\star \cdot \delta\boldface{u} \ \mathrm{d} V - \int_{\partial\Omega} \boldface{t}^\star\cdot\delta\boldface{u} \ \mathrm{d} A &= 0 \qquad \forall \ \delta\boldface{u} \\[5mm]
\displaystyle \int_\Omega \delta_{\alpha} \Psi(\boldface{C},\alpha) \ \mathrm{d} V + \int_\Omega p^\text{diss} \, \delta\alpha \ \mathrm{d} V &= 0 \qquad \ \forall\delta\alpha
\end{cases} \ .
\end{equation}
Specifications of the strain energy density $\Psi$ and the non-conservative force $p^\text{diss}$ allow the application of the latter equations for the derivation of models for various materials. Based thereon, let us provide specific formulas for a gradient-enhanced damage model by first setting the internal variable $\alpha$ as damage variable. Then, we define the strain energy density by
\begin{equation}
\Psi(\boldface{C},\alpha) = \big(1-D(\alpha)\big)\, \Psi_0(\boldface{C}) + \frac{1}{2} \beta ||\nabla f||^2
\end{equation}
with the damage function $D(\alpha):=1-f(\alpha)$, the damage variable $\alpha$ and $f(\alpha)=\exp(-\alpha)$. The parameter $\beta$ serves as regularization parameter and thus controls the thickness of the damage zone. The formulation of the strain energy density is the same approach as in \cite{junker2019fast}; however, in contrast to the original work which was restricted to small deformations, we now make use of a hyperelastic strain energy density of the undamaged material $\Psi_0=\Psi_0(\boldface{C})$.
The non-conservative force is usually derived from a dissipation function $\Pi^\text{diss}$ such that
\begin{equation}
\label{eq:pDissPartial}
p^\text{diss} = \pf{\Pi^\text{diss}}{\dot{\alpha}}
\end{equation}
holds. For rate-independent microstructural evolution, which is the case for the present model for damage evolution, dissipation functions that are positively homogeneous of degree one have to be used. Hence, the dissipation function
\begin{equation}
\label{eq:DefDiss}
\Pi^\text{diss} := r |\dot{\alpha}|
\end{equation}
is used \cite{junker2019fast} from which
\begin{equation}
p^\text{diss} = \partial\Pi^\text{diss} := \begin{cases} \displaystyle r \frac{\dot{\alpha}}{|\dot{\alpha}|} & \text{for } \dot{\alpha}\not= 0 \\ \displaystyle [-r,r] & \text{for } \dot{\alpha}= 0 \end{cases}
\end{equation}
follows. The parameter $r$ is referred to as dissipation parameter, and it will become obvious that it represents an energetic threshold value for the onset and evolution of damage. It is worth mentioning that the partial derivative in~\eqref{eq:pDissPartial} turns into a subdifferential in our case since $\partial\Pi^\text{diss}/\partial\dot{\alpha}$ is not unique at $\dot{\alpha}=0$ for our rate-independent ansatz for $\Pi^\text{diss}$ in \eqref{eq:DefDiss}. The stationarity condition in \eqref{eq:stationarity1}$_1$ then results in the weak form of the balance of linear momentum
\begin{equation}
\int_\Omega \boldface{S}: \frac{1}{2}\delta\boldface{C} \ \mathrm{d} V - \int_\Omega \boldface{b}^\star \cdot \delta\boldface{u} \ \mathrm{d} V - \int_{\partial\Omega} \boldface{t}^\star\cdot\delta\boldface{u} \ \mathrm{d} A = 0 \qquad \forall \ \delta\boldface{u} \ ,
\end{equation}
where $\boldface{S}=2\,\partial\Psi(\boldface{C},\alpha)/\partial\boldface{C}$ denotes the second Piola--Kirchhoff stress tensor. The stationarity condition in \eqref{eq:stationarity1}$_2$ reads
\begin{equation}
\label{eq:StatF}
\int_\Omega f^\prime \Psi_0 \, \delta\alpha \ \mathrm{d} V + \int_\Omega \beta \nabla f \cdot \nabla( f^\prime \delta \alpha) \ \mathrm{d} V + \int_\Omega \partial\Pi^\text{diss} \delta\alpha \ \mathrm{d} V = 0
\end{equation}
where we considered $\delta_\alpha f = f^\prime \delta\alpha$. Integration by parts of the second term results in
\begin{equation}
\int_\Omega \beta f^\prime \nabla f \cdot \nabla \delta \alpha \ \mathrm{d} V = \int_{\partial\Omega} \beta f^\prime \boldface{n}_0 \cdot \nabla f \, \delta\alpha \ \mathrm{d} A - \int_\Omega \beta f^\prime \mathop{}\!\mathbin\bigtriangleup f \, \delta\alpha \ \mathrm{d} V
\end{equation}
and thus, \eqref{eq:StatF} transforms to
\begin{equation}
\label{eq:StatF2}
- \int_\Omega \left(f \Psi_0 - \beta f \mathop{}\!\mathbin\bigtriangleup f - \partial\Pi^\text{diss} \right) \delta\alpha \ \mathrm{d} V - \int_{\partial\Omega} \beta f \boldface{n}_0 \cdot \nabla f \, \delta\alpha \ \mathrm{d} A = 0
\end{equation}
due to $f^\prime=-f$. The local evaluation of the first integral in \eqref{eq:StatF2} results in the differential inclusion
\begin{equation}
f \Psi_0 - \beta f \mathop{}\!\mathbin\bigtriangleup f - \partial\Pi^\text{diss} \ni 0 \quad \forall \boldface{X} \in \Omega
\end{equation}
due to the set-valued character of the subdifferential $\partial\Pi^\text{diss}$. Therefore, it is convenient to perform a Legendre transformation such that we transform the dissipation function, which is formulated in terms of the thermodynamic flux $\dot{\alpha}$, into a function $\tilde\Pi^{\mathrm{diss}}$ which depends on the thermodynamic force $p:=f \Psi_0 - \beta f \mathop{}\!\mathbin\bigtriangleup f$. Consequently, we obtain
\begin{equation}
\tilde\Pi^{\mathrm{diss}} = \underset{\dot{\alpha}}{\mathrm{sup}} \Big\{ p \dot{\alpha} - \Pi^\text{diss} \Big\} = \underset{\dot{\alpha}}{\mathrm{sup}} \Big\{ |\dot{\alpha}|(p \,\mathrm{sgn}\dot{\alpha} - r) \Big\} \ .
\end{equation}
Healing in the sense of an increasing stiffness contradicts our motivation to model damage. Thus, the sign of the rate of the damage variable is restricted to $\mathrm{sgn}\,\dot{\alpha}\in\{0,1\}$ and the Legendre transform reads
\begin{equation}
\tilde\Pi^{\mathrm{diss}} = \begin{cases} 0 & \text{if } p - r \le 0 \\ \infty & \text{else} \end{cases} \ .
\end{equation}
From the first case, we identify
\begin{equation}
\begin{cases}
p<r : & \dot{\alpha} = 0 \\
p=r : & \dot{\alpha} > 0
\end{cases} \ .
\end{equation}
It is thus convenient to introduce the indicator function $\Phi:= p -r $ which separates stationarity of the damage state for $\Phi<0$ and evolution of the damage state for $\Phi=0$. The indicator function fulfills a similar purpose as yield functions in models for elasto-plasticity. The parameter $r$ has a similar meaning as a scalar-valued yield stress. However, it is an energetic threshold value for the present case of damage. Finally, we can collect the governing equations for damage evolution in the following form
\begin{equation}
\label{eq:KKT}
\dot{\alpha} \ge 0 \ , \qquad \Phi := f \Psi_0 - \beta f \mathop{}\!\mathbin\bigtriangleup f - r \le 0 \ , \qquad \Phi \,\dot{\alpha} = 0 \qquad \forall \ \boldface{X} \in\Omega
\end{equation}
which are identified as Karush--Kuhn--Tucker conditions. The surface integral in \eqref{eq:StatF2} yields the Neumann condition
\begin{equation}
\label{eq:Neumann}
\boldface{n}_0\cdot\nabla f = 0 \qquad \forall \ \boldface{X} \in \partial\Omega
\end{equation}
for $f$. Here, $\boldface{n}_0$ denotes the normal vector on the surface $\partial\Omega$ in the reference configuration. More details on the fundamentals of the damage model can be found in the original publication \cite{junker2019fast}.
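To make the role of the threshold $r$ concrete, the following minimal Python sketch (our own illustration, not part of the model implementation; all parameter values are invented) evaluates the indicator function $\Phi$ of \eqref{eq:KKT} pointwise and decides whether the damage state may evolve:

```python
# Illustrative sketch: pointwise check of the Karush-Kuhn-Tucker conditions.
# psi0 is the undamaged strain energy density, lap_f the Laplacian of f;
# all numbers below are invented example values.

def indicator(f, psi0, lap_f, beta, r):
    """Phi = f*Psi_0 - beta*f*Laplace(f) - r; damage may only evolve once
    Phi reaches zero (Phi > 0 flags a violated constraint to be relaxed)."""
    return f * psi0 - beta * f * lap_f - r

# low stored energy: Phi < 0, the damage state stays frozen (alpha_dot = 0)
assert indicator(f=1.0, psi0=0.1, lap_f=0.0, beta=1e-3, r=0.5) < 0.0

# stored energy above the threshold r: Phi > 0, damage must evolve until Phi <= 0
assert indicator(f=1.0, psi0=0.8, lap_f=0.0, beta=1e-3, r=0.5) > 0.0
```

The second case is exactly the situation relaxed by the numerical update scheme of the next section.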
\section{Numerical treatment}
\label{sec:Numerical}
The model above consists of two unknowns: the displacement field $\boldface{u}$ and the damage variable $\alpha$. They can be determined by solving
\begin{equation}
\label{eq:System}
\begin{cases}
\displaystyle \int_\Omega \boldface{S}: \frac{1}{2}\delta\boldface{C} \ \mathrm{d} V - \int_\Omega \boldface{b}^\star \cdot \delta\boldface{u} \ \mathrm{d} V &\displaystyle - \int_{\partial\Omega} \boldface{t}^\star\cdot\delta\boldface{u} \ \mathrm{d} A = 0 \qquad \forall \ \delta\boldface{u} \\[4mm]
& \displaystyle \ f \Psi_0 - \beta f \mathop{}\!\mathbin\bigtriangleup f - r \le 0 \qquad \forall \ \boldface{X}\in\Omega
\end{cases}
\end{equation}
with the Dirichlet and Neumann boundary conditions $\boldface{u}=\boldface{u}^\star \ \forall \boldface{X}\in\partial\Omega_u$ and $\boldface{F}\boldface{S}\boldface{n}_0=\boldface{t}^\star \ \forall\boldface{X}\in\partial\Omega_\sigma$ with $\partial\Omega=\partial\Omega_u\cup\partial\Omega_\sigma$ and $\partial\Omega_u\cap\partial\Omega_\sigma=\emptyset$ for \eqref{eq:System}${}_1$. For \eqref{eq:System}${}_2$, the initial condition $f(t=0)=1 \ \forall\boldface{X}\in\Omega$ and the Neumann condition $\boldface{n}_0\cdot\nabla f=0 \ \forall\boldface{X}\in\partial\Omega$ apply.
From \eqref{eq:System}${}_2$, we recognize that a partial differential inequality has to be solved for the primal variable $f$. Once $f$ is determined, the value of the damage variable could be computed subsequently for each material point $\boldface{X}$ by $\alpha=-\log[f(\boldface{X})]$; however, this information is of minor interest. More importantly, the damage function $D=1-f$ can be computed which governs the stresses by
\begin{equation}
\label{eq:2ndPKdamage}
\boldface{S} = (1-D) \boldface{S}_0 \qquad \text{with} \qquad \boldface{S}_0 := 2\,\pf{\Psi_0}{\boldface{C}} \ .
\end{equation}
The partial differential inequality in \eqref{eq:System}$_2$ is rather untypical in this context, and a standard finite element treatment would be complex due to the subdifferential in its weak form, cf.~\eqref{eq:StatF}. Furthermore, additional degrees of freedom at the nodes would be required in the numerical solution scheme. To avoid both drawbacks, we follow \cite{junker2019fast} and make use of the neighbored element method: standard finite element approaches are applied to the weak form of the balance of linear momentum, whereas a finite difference approach for unstructured grids is used for discretizing the strong form of the evolution equation for $f$. The neighbored element method is completed by utilization of an operator split, i.e., the discretized equations are solved in a staggered manner. This procedure of combining staggered FEM and FDM has also been proven advantageous for the small-strain regime in \cite{junker2019fast}. In \cite{vogel2020adaptive}, it has been demonstrated that convergence is achieved, which justifies the operator split. The finite element treatment of the nonlinear weak form of the balance of linear momentum is standard such that we dispense with a detailed presentation. More details on the finite element method can be found in standard textbooks, e.g., \cite{wriggers2008nonlinear}. The only modification is, of course, that both the stress and the stiffness are scaled by the damage state, cf.~\eqref{eq:2ndPKdamage}.
For the derivation of an appropriate finite difference method that also operates on unstructured grids, we employ a Taylor series expansion up to order two in order to approximate $f$. Usually, the value of the function, i.e., its zeroth derivative, along with the derivatives of higher order are known when a Taylor series expansion is used. Then, for a chosen spatial increment, the value of the function can be approximated at a neighbored spatial point. This operation is inverted here: the values of the damage function at specific points are all known. These points are the centers of gravity of all finite elements. The spatial increments in the Taylor series expansion are then given by the spatial increments between the centers of gravity; they remain fixed if no mesh adaptation is employed during the computation. Consequently, the only unknowns in the Taylor series expansion are the (mixed) derivatives of order $\ge 1$. Collecting an appropriate set of neighbored elements, a linear system of algebraic equations can be constructed to compute the individual (partial) derivatives, from which the local value of the Laplace operator follows by simple summation of the unmixed derivatives of order two.
Our finite difference approach on unstructured meshes can be recast in mathematical formulas by first introducing the Taylor series expansion as
\begin{equation}
\label{eq:Taylor1}
f^{(\mathcal{N}^\comp{e}_k)} = f^\comp{e} + \sum_o A^{\comp{k}}_o f^\comp{e}_{\partial,o}
\end{equation}
where $f^\comp{e}_{\partial,o}$ stores the values of the partial derivatives of varying order $o\ge1$, such that a matrix including all derivatives can be defined as
\begin{equation}
\label{eq:fpartial}
\der{\boldface{f}}^\comp{e} := \left( \begin{matrix}
\displaystyle \frac{\partial f^\comp{e}}{\partial X} & \displaystyle \frac{\partial f^\comp{e}}{\partial Y} & \displaystyle \frac{\partial f^\comp{e}}{\partial Z} & \displaystyle\frac{\partial^2 f^\comp{e}}{\partial X \partial Y} & \displaystyle\frac{\partial^2 f^\comp{e}}{\partial Y \partial Z} & \displaystyle\frac{\partial^2 f^\comp{e}}{\partial X \partial Z} & \displaystyle\frac{\partial^2 f^\comp{e}}{\partial X^2} & \displaystyle\frac{\partial^2 f^\comp{e}}{\partial Y^2} & \displaystyle\frac{\partial^2 f^\comp{e}}{\partial Z^2}
\end{matrix} \right) \ .
\end{equation}
The quantity $A^{\comp{k}}_o$ comprises all spatial increments with varying power such that again the associated column matrix is defined as
\begin{eqnarray}
A^\comp{k} & := & \Bigg(
\ink{X}^\comp{\calN^\eli_k} \quad \ink{Y}^\comp{\calN^\eli_k} \quad \ink{Z}^\comp{\calN^\eli_k} \quad
\ink{X}^\comp{\calN^\eli_k} \ink{Y}^\comp{\calN^\eli_k} \quad \ink{Y}^\comp{\calN^\eli_k} \ink{Z}^\comp{\calN^\eli_k} \quad \ink{X}^\comp{\calN^\eli_k}\ink{Z}^\comp{\calN^\eli_k} \\
& & \qquad\qquad\qquad\qquad\quad \frac{1}{2}\Big( \ink{X}^\comp{\calN^\eli_k}\Big)^2 \quad \frac{1}{2}\Big( \ink{Y}^\comp{\calN^\eli_k}\Big)^2 \quad \frac{1}{2}\Big( \ink{Z}^\comp{\calN^\eli_k}\Big)^2
\Bigg) \notag
\end{eqnarray}
with
\begin{align*}
\ink{X}^\comp{\calN^\eli_k} := X^\comp{\calN^\eli_k}-X^\comp{e} \ , \quad
\ink{Y}^\comp{\calN^\eli_k} := Y^\comp{\calN^\eli_k}-Y^\comp{e} \ , \quad
\ink{Z}^\comp{\calN^\eli_k} := Z^\comp{\calN^\eli_k}-Z^\comp{e} \ .
\end{align*}
To ensure that only the required number of elements and thus extrapolation points is used, the quantity $\calN^\eli_k$ has been introduced: $\mathcal{N}^\comp{e}$ is the set of neighbored elements around element $\comp{e}$ and, hence, $\calN^\eli_k$ returns the global element number of the $k^\text{th}$ neighbor. All column matrices $A^\comp{k}$ in the neighborhood $\mathcal{N}^\comp{e}$ of the element of interest $\comp{e}$ are collected in the matrix $\boldface{A}^\comp{e}$. Then, by defining
\begin{equation}
f^\comp{e}_{\Delta,k} := f^{(\mathcal{N}^\comp{e}_k)}-f^\comp{e} \ ,
\end{equation}
we can shortly write for the Taylor series expansion \eqref{eq:Taylor1}
\begin{equation}
\label{eq:TaylorSystem}
\ink{\boldface{f}}^\comp{e} = \boldface{A}^\comp{e} \der{\boldface{f}}^\comp{e}
\end{equation}
such that the unknown vector of derivatives follows from
\begin{equation}
\der{\boldface{f}}^\comp{e} = \boldface{A}^{\comp{e}-1} \ink{\boldface{f}}^\comp{e} \ .
\end{equation}
The Laplace operator can be computed by aid of $\der{\boldface{f}}^\comp{e}$ according to
\begin{equation}
\mathop{}\!\mathbin\bigtriangleup f^\comp{e} = \frac{\partial^2 f^\comp{e}}{\partial X^2} + \frac{\partial^2 f^\comp{e}}{\partial Y^2} + \frac{\partial^2 f^\comp{e}}{\partial Z^2} = f^\comp{e}_{\partial, 7} + f^\comp{e}_{\partial, 8} + f^\comp{e}_{\partial, 9} \ ,
\end{equation}
cf. \eqref{eq:fpartial}. Consequently, only some components of $\der{\boldface{f}}^\comp{e}$ are of interest. Introducing
\begin{equation}
\boldface{a} := \left(\begin{matrix} 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 \end{matrix} \right) \ ,
\end{equation}
the Laplace operator is simply given by
\begin{equation}
\label{eq:lap}
\mathop{}\!\mathbin\bigtriangleup f^\comp{e} = \boldface{l}^\comp{e} \cdot \ink{\boldface{f}}^\comp{e}
\end{equation}
where the vector
\begin{equation}
\label{eq:lapM}
\boldface{l}^\comp{e} := \boldface{a}^\mathrm{T} \boldface{A}^{\comp{e}\,-1}
\end{equation}
can be computed once for each element $\comp{e}$ in advance of the actual solution of the boundary value problem. In contrast, the evaluation of \eqref{eq:lap} has to be repeated during computational evaluation of \eqref{eq:System}$_2$ to find the (current) field $f$; this evaluation, however, is computationally very cheap. It is worth mentioning that the dimension of $\boldface{A}^\comp{e}$ depends on the cardinality of the set of neighbored elements $\mathcal{N}^\comp{e}$: to close the system of equations in \eqref{eq:TaylorSystem}, i.e., to ensure that $\boldface{A}^\comp{e}$ is a regular matrix, the cardinality of the set of neighbored elements must at least equal the number of unknowns in $\der{\boldface{f}}^\comp{e}$, i.e., nine in the considered three-dimensional case. For boundary elements, however, we usually find a smaller cardinality of $\mathcal{N}^\comp{e}$, i.e., less than nine neighbored elements can be identified. To circumvent this problem, we introduce ghost elements at the boundary which mirror the value of the damage field inside of $\Omega$ to the outer vicinity of the boundary $\partial\Omega$, which equals the usual treatment in the finite difference method. It is worth mentioning that \eqref{eq:Neumann} is fulfilled identically by usage of ghost elements. For the elements inside of $\Omega$, the opposite case is present: considering all neighbored elements around $\comp{e}$, i.e., the stencil of six face elements plus eight corner elements plus twelve edge elements, results in an overdetermined system of equations for \eqref{eq:TaylorSystem}. Here, two possibilities can be employed: the first option is to take into account element $\comp{e}$ and all 26 neighbored elements and compute the least-squares pseudo-inverse of the (then tall) matrix $\boldface{A}^\comp{e}$ as
\begin{equation}
\boldface{A}^{\comp{e}\,-1} = \left(\boldface{A}^{\comp{e}\,\text{T}}\boldface{A}^\comp{e}\right)^{-1}\boldface{A}^{\comp{e}\,\text{T}} \ ,
\end{equation}
cf.~\cite{vogel2020adaptive}. The second option is to use the six stencil elements plus the three diagonal elements with closest distance to the center element $\comp{e}$ as neighborhood. For sufficiently regular meshes, $\boldface{A}^\comp{e}$ then has full rank and is regular, cf.~\cite{vogel2020adaptive}. In all computational results we present later, we made use of the latter option. We refer to \cite{junker2019fast,jantos2019accurate,vogel2020adaptive} for more details on the neighbored element method.
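The stencil construction can be illustrated by the following Python sketch (our own toy example with numpy, not the implementation of the cited works). It assumes a structured grid with unit element spacing and the full 26-element neighborhood, and it recovers the derivative vector with the Moore--Penrose (least-squares) pseudo-inverse $(\boldface{A}^\text{T}\boldface{A})^{-1}\boldface{A}^\text{T}$ of the tall $26\times 9$ matrix, which reproduces the Laplacian exactly for a quadratic field:

```python
import numpy as np

# Toy construction of the Laplacian stencil vector l^e on a unit structured
# grid with the full 26-element neighborhood (illustration only).

def taylor_row(dX, dY, dZ):
    # one row of A^e: spatial increments with varying power
    return [dX, dY, dZ, dX * dY, dY * dZ, dX * dZ,
            0.5 * dX**2, 0.5 * dY**2, 0.5 * dZ**2]

offsets = [(i, j, k) for i in (-1, 0, 1) for j in (-1, 0, 1)
           for k in (-1, 0, 1) if (i, j, k) != (0, 0, 0)]
A = np.array([taylor_row(*d) for d in offsets])         # 26 x 9 (tall)

A_pinv = np.linalg.inv(A.T @ A) @ A.T                   # 9 x 26 pseudo-inverse
a = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1], dtype=float)  # selects unmixed 2nd derivatives
l = a @ A_pinv                                          # stencil vector l^e

# verification on f(X,Y,Z) = X^2 + 2 Y^2 + 3 Z^2 with exact Laplacian 2+4+6 = 12
f = lambda X, Y, Z: X**2 + 2 * Y**2 + 3 * Z**2
delta_f = np.array([f(*d) - f(0, 0, 0) for d in offsets])  # increments f^e_{Delta,k}
lap = float(l @ delta_f)
assert abs(lap - 12.0) < 1e-8
```

Since the monomials up to order two are linearly independent on the 26 stencil points, $\boldface{A}^\text{T}\boldface{A}$ is regular and the least-squares solution is exact for quadratic fields.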
Finally, it is mandatory to choose a strategy for solving the discretized form of \eqref{eq:System}$_2$ given by
\begin{equation}
\label{eq:IndicatorElement}
\Phi^\comp{e}=f^\comp{e} \bar{\Psi}_0^\comp{e} - \beta f^\comp{e} \ \boldface{l}^\comp{e} \cdot \ink{\boldface{f}}^\comp{e} - r \le 0
\end{equation}
where we introduced the strain energy density averaged for each finite element as
\begin{equation}
\bar{\Psi}^\comp{e}_0 := \frac{1}{\Omega^\comp{e}} \int_{\Omega^\comp{e}} \Psi_0 \ \mathrm{d} V \ .
\end{equation}
The Laplace operator turns \eqref{eq:IndicatorElement} into a system of coupled inequalities which could be solved in a monolithic manner. Although this is the obvious strategy, the coupling demands the usage of (external) solver routines. In combination with the inequalities, this renders such a monolithic strategy inappropriate for implementation into an arbitrary finite element program. Therefore, a simple yet successful numerical solution procedure has been proposed in \cite{junker2019fast}: the system of coupled inequalities is solved by means of a Jacobi method. To be more precise, the coupling is ignored at first glance and each inequality is checked individually such that an update of $f$ is performed for all elements with $\Phi^\comp{e}>0$. Of course, a modification of $f^\comp{e}$ may have an impact on the indicator function in a different element $\comp{k}$ due to the Laplace operator which ``transports'' the information. Therefore, the update has to be repeated until $\Phi^\comp{e}\le 0 \; \forall \comp{e}$. Usually, such an update scheme is known to be rather time consuming since the number of repetitions needed for convergence depends on the number of unknowns, which equals the number of finite elements in our case. This would seem to make this strategy more expensive than monolithic solution schemes. However, the damage evolution is limited to only a small number of elements as compared to their total number and thus, an update is mandatory only for a negligible fraction of inequalities. As will be shown by our numerical experiments, the staggered solution scheme is still very fast and simulations involving damage evolution consume hardly more computation time. A similar behavior has already been demonstrated for the small deformation setting of damage modeling in \cite{junker2019fast} and even for an evolutionary method for topology optimization both at small strains in \cite{jantos2019accurate} and at finite strains in \cite{junker2020new}.
After the damage function has been updated, the next time step is investigated without checking the impact of the updated damage field on the displacements for the current time step. This might be interpreted as an (additional) operator split, and it may seem questionable whether a reliable result is obtained by this method. However, time steps with vanishing increment are mandatory anyway for an appropriate resolution, e.g., of the force/displacement diagram, and for this case the method of operator splits converges. It has been numerically shown in \cite{junker2019fast} that the artificially included pseudo-viscous behavior due to the operator split does not influence the final result on a relevant scale. Furthermore, it is worth mentioning that a detailed numerical study on the numerical behavior of the damage model for the small deformation setting in \cite{vogel2020adaptive} revealed that almost identical results are obtained when both fields (displacements and damage) are repeatedly updated until a common convergence is achieved. The update of each inequality is performed by a one-step Newton procedure as
\begin{equation}
f^\comp{e} \leftarrow f^\comp{e} - \frac{\Phi^\comp{e}}{\mathrm{d}\Phi^\comp{e}}
\end{equation}
with
\begin{equation}
\label{eq:df}
\mathrm{d}\Phi^\comp{e} = \bar{\Psi}_0^\comp{e} - \beta \boldface{l}^\comp{e}\cdot\ink{\boldface{f}}^\comp{e} + \beta f^\comp{e} \ \boldface{l}^\comp{e}\cdot\boldsymbol{1}
\end{equation}
where the length of the vector $\boldsymbol{1}$ equals the cardinality of the set $\mathcal{N}^\comp{e}$ (nine for our chosen method for defining the neighborhood) and all entries of $\boldsymbol{1}$ are set to one. The one-step Newton method has proven beneficial due to its smooth convergence behavior for neighbored elements caused by the undershooting. Additionally, it ensures a monotonic decrease of the damage function and thus agrees with \eqref{eq:KKT}$_1$: the indicator function $\Phi^\comp{e}$ is a convex quadratic function in $f^\comp{e}$, and since the update for $f^\comp{e}$ is only performed for $\Phi^\comp{e}>0$, it holds for the derivative that $\mathrm{d}\Phi^\comp{e}>0$. The undershooting ensures that the solution converges from above, i.e., $\Phi^\comp{e}\rightarrow 0^+$, and thus $f^\comp{e}$ is monotonically decreasing, which implies $\dot{\alpha}\ge0$.
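The update scheme can be sketched in a one-dimensional toy setting (our own illustration, not the paper's implementation): three elements with unit spacing, ghost values mirroring the boundary, and invented parameters $\bar\Psi_0$, $\beta$, $r$. Only the element whose stored energy exceeds $r$ is driven below $f=1$ by repeated one-step Newton updates:

```python
import numpy as np

# Illustrative 1D sweep with one-step Newton updates; for brevity the
# updates are applied in place within each sweep.

def damage_sweep(f, psi0, beta, r, n_loop=200, tol=1e-6):
    n = len(f)
    for _ in range(n_loop):
        phi_max = 0.0
        for e in range(n):
            # 1D neighbored-element Laplacian: l^e = (1, 1) on the two neighbors
            fl = f[e - 1] if e > 0 else f[e]          # ghost element (mirror)
            fr = f[e + 1] if e < n - 1 else f[e]
            lap = (fl - f[e]) + (fr - f[e])
            phi = f[e] * psi0[e] - beta * f[e] * lap - r   # indicator function
            if phi > 0.0:
                dphi = psi0[e] - beta * lap + beta * f[e] * 2.0  # l . 1 = 2 in 1D
                f[e] -= phi / dphi                    # one-step Newton update
                phi_max = max(phi_max, phi)
        if phi_max < tol:                             # all inequalities hold
            break
    return f

psi0 = np.array([0.1, 0.9, 0.1])   # only the middle element is loaded above r
f = damage_sweep(np.ones(3), psi0, beta=1e-3, r=0.5)
assert f[0] == 1.0 and f[2] == 1.0  # elastic elements stay undamaged
assert 0.5 < f[1] < 0.6             # damage evolved until Phi <= 0
```

Starting from $f=1$, each Newton step stays above the root of the convex quadratic $\Phi$, so $f$ decreases monotonically toward roughly $r/\bar\Psi_0$ (up to the small gradient correction).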
\begin{algorithm}[htb]
\caption{Element erosion strategy}
\label{alg:stabilization}
\begin{algorithmic}[]
\State \For{$1,\dots,n_e$} \Comment{apply to all finite elements}
\State \If{$D^\comp{e} = D_\text{crit}$} \Comment{check damage criterion}\vspace{3mm}
\State set $\boldface{r}^\comp{e} = \boldsymbol{0}$ \Comment{eliminate element residual}\vspace{3mm}
\State set $\displaystyle\frac{\mathrm{d}\boldface{r}^\comp{e}}{\mathrm{d}\hat{\boldface{u}}^\comp{e}} = \Bigg( \frac{\mathrm{d}r^\comp{e}_i}{\mathrm{d} \hat{u}^\comp{e}_j} \Bigg)= s_\text{crit} (\delta_{ij}) $ \Comment{erode element $\comp{e}$}
\EndIf\vspace{3mm}
\EndFor
\end{algorithmic}
\end{algorithm}
The above-mentioned numerical procedure is analogous to the one used for the small deformation setting in \cite{junker2019fast}. However, a remarkable difference accompanies the large deformation setting: here, severe deformations that occur during damage evolution are correctly described. This, of course, is the primary intention of using the large deformation setting. Unfortunately, this sensitivity might result in numerical instabilities causing $\det \bar{\boldface{F}}^\comp{e}<0$, where $\bar{\boldface{F}}^\comp{e}:=\int_{\Omega^\comp{e}} \boldface{F} \ \mathrm{d}V / \Omega^\comp{e}$ is the averaged deformation gradient in element $\comp{e}$. To be more precise, deformation states are computed that do not correspond to a physically accurate material behavior: once the deformation state has reached some critical threshold, cracks will be present in a real material such that a description of the specimen as one continuous body in a continuum mechanical sense is not justified anymore. To circumvent this numerical artifact and also to improve physical accuracy, a stabilization technique in terms of element erosion is applied. To this end, we define a critical value $D_\text{crit}$ beyond which damage modeling is not suitable anymore but rather cracking needs to be considered. To correctly account for this physical behavior in our finite element simulations, we modify the element stiffness by setting it to a diagonal matrix with a virtual remaining stiffness value of $s_\text{crit}$, representing an eroded element, through which rank deficiency of the affected elements is avoided. Finally, we set the residual forces of this element to zero and no further evolution of the damage value is considered at this point.
Summarizing, we follow the element erosion technique in Alg.~\ref{alg:stabilization} and employ Alg.~\ref{alg:damage-update} for the damage update. The flowchart in Fig.~\ref{fig:Flow} visualizes the neighbored element approach including an interplay between a finite element and finite difference update for the displacements and $f$, respectively.
\begin{algorithm}[htb]
\caption{Damage update}
\label{alg:damage-update}
\begin{algorithmic}[]
\State input $\bar{\Psi}_{0,n+1}^\comp{e} = \int_{\Omega^\comp{e}} \Psi_{0,n+1} \ \mathrm{d}V / \Omega^\comp{e}$ \Comment{averaged energy for each finite element $\comp{e}$}\vspace{2mm}
\State \hphantom{input} $\bar{\boldface{F}}^\comp{e} = \int_{\Omega^\comp{e}} \boldface{F} \ \mathrm{d}V / \Omega^\comp{e}$ \Comment{averaged deformation gradient for each element $\comp{e}$}
\State \For{$1,\dots,n_\text{loop}$} \Comment{Jacobi method: repeat $n_\text{loop}$ times}
\State \State compute $\mathop{}\!\mathbin\bigtriangleup f^\comp{e}$ according to \eqref{eq:lap} \Comment{Laplace operator}
\State \State compute $\Phi^\comp{e}$ according to \eqref{eq:IndicatorElement} \Comment{indicator function}
\State \If{$\Phi^\comp{e}>0$ \textbf{and} $\det \bar{\boldface{F}}^\comp{e}>1$}\vspace{2mm}
\State update $f^\comp{e} \leftarrow f^\comp{e} - \frac{\Phi^\comp{e}}{\mathrm{d}\Phi^\comp{e}}$ according to \eqref{eq:df} \Comment{new damage value for inelastic $\Phi^\comp{e}$}
\State \If{$D^\comp{e}=1-f^\comp{e} > D_\text{crit}$}
\State set $D^\comp{e} = D_\text{crit}$ and $f^\comp{e}=1-D_\text{crit}$ \Comment{activation of element erosion}
\EndIf \vspace{3mm}
\Else
\State set $\Phi^\comp{e}=0$ \Comment{elimination of elastic $\Phi^\comp{e}$}
\EndIf
\State \If{$\max\{\Phi^\comp{e}\}< 10^{-6}$}
\State exit \Comment{terminate Jacobi method}
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
\begin{figure}[ht]
\centering
\tikzstyle{decision} = [diamond, draw,
text width=6em, text badly centered, inner sep=0pt,fill=lightgray]
\tikzstyle{abort} = [rectangle, draw,
text width=7.0em, text centered, rounded corners, minimum height=2em,fill=lightgray]
\tikzstyle{exit} = [rectangle, draw,
text width=3.0em, text centered, rounded corners, minimum height=2em,fill=lightgray]
\tikzstyle{blind} = [circle,
text width=0em, text centered, minimum height=0em]
\tikzstyle{line} = [draw, -latex']
\tikzstyle{ini2} = [rectangle, draw,
text width=30em, text centered, rounded corners, minimum height=2em,fill=lightgray]
\tikzstyle{ini22} = [rectangle, draw,
text width=30em, rounded corners, minimum height=2em,fill=lightgray]
\begin{tikzpicture}[node distance = 2.5cm, auto]
\node [ini2] (init2) {\textbf{\textcolor{blau}{initialization}} \vspace{-3mm}
\begin{flushleft}
\begin{tabular}{ll}
~~\llap{\textbullet}~~ damage function: & $f = 1 \quad \forall \boldface{x} \in \Omega$ \\
~~\llap{\textbullet}~~ boundary conditions: & $\boldface{u}_0^\star, \boldface{b}_0^\star, \boldface{t}_0^\star$
\end{tabular}
\end{flushleft}
};
\node[ini2, below of=init2, node distance=2.40cm](FE2) { \textbf{\textcolor{blau}{ finite element analysis}}\vspace{-4mm}
\begin{flushleft}
~~\llap{\textbullet}~~ update boundary conditions: $\boldface{u}_{n+1}^\star, \boldface{b}_{n+1}^\star, \boldface{t}_{n+1}^\star$\\
~~\llap{\textbullet}~~ employ element erosion technique for severe damage, see Alg.~\ref{alg:stabilization}\\
~~\llap{\textbullet}~~ solve nonlinear equilibrium equation for $\hat\boldface{u}$ with fixated field $f^\comp{e}$
\end{flushleft}
};
\node [decision, below of=FE2, node distance=3.0cm] (converged) {converged?};
\node [abort, left of=converged, node distance=4.1cm] (stop) {abort \& \\ error report};
\node [blind, below of=converged, node distance=2.0cm] (blind) {};
\node [ini22, below of=converged, node distance=2.7cm](update1){ ~~\llap{\textbullet}~~ collect $\bar{\Psi}_{0,n+1}^\comp{e} =\frac{1}{\Omega^\comp{e}} \int_{\Omega^\comp{e}} \Psi_0(\boldface{C}_{n+1}) \ \mathrm{d}V$, $ \Omega^\comp{e} \ \forall\; e=1,\dots,n_\text{e}$ };
\node [ini2, below of=update1, node distance=2.0cm] (update2) { \textbf{\textcolor{blau}{update of the damage function}}\vspace{-4mm}
\begin{flushleft}
~~\llap{\textbullet}~~ update damage function according to Alg.~\ref{alg:damage-update}\\
~~\llap{\textbullet}~~ initialize next time step: $t_n\leftarrow t_{n+1}$
\end{flushleft}
};
\node [blind, right of=FE2,node distance=7cm] (blind3) {};
\node [decision, right of=converged,node distance=7cm] (converged2) {$t_{n+1}>t_\text{max}?$};
\node [exit, left of=converged2, node distance=3cm](exit){exit};
\path [line] (init2) -- (FE2);
\path [line] (FE2) -- node {}(converged);
\path [line] (converged) -- node {no} (stop);
\path [line] (converged) -- node {yes} (update1);
\path [line] (update1) -- node {}(update2);
\path [line] (update2.east) -| (converged2);
\path [line] (converged2) |- (FE2.east);
\path [line] (converged2) -- node{yes}(exit);
\end{tikzpicture}
\caption{Flowchart for the proposed damage model.}
\label{fig:Flow}
\end{figure}
\section{Numerical Tests \label{s:numerical_tests}}
In this section, the introduced approach is numerically tested in a 3D finite strain setting. To this end, corresponding 8-node hexahedral finite elements were implemented and evaluated within the finite element program FEAP, cf.~\cite{FEAP}. For the strain energy corresponding to the fictively undamaged state, the Neo-Hooke energy given as
\begin{align}
&\Psi_0 =
\frac{\mu}{2}(I-3) + g(J)
\label{e:neo_hooke}
\end{align}
with $g(J) = \frac{\lambda}{4}\left(J^2-1\right)-\left(\frac{\lambda}{4}+\mu\right) \ln J$ and $I=\tr{\boldface{C}}$, $J=\det{\boldface{F}}$ is employed (see also \cite{wriggers2008nonlinear}).
The Lam\'e constants $\lambda=E\nu/((1+\nu)(1-2\nu))$ and $\mu=E/(2(1+\nu))$ are computed from the Young's modulus $E$ and the Poisson ratio $\nu$.
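As a sanity check, the Lam\'e constants and the Neo-Hooke energy above can be evaluated in a few lines of Python with the plate-with-hole values $E=500\,$MPa and $\nu=0.3$ from Tab.~\ref{t:parameters}; this is an illustration only, independent of the FEAP implementation, and the function names are chosen freely:

```python
import math

def lame_constants(E, nu):
    """Lame constants from Young's modulus E and Poisson ratio nu."""
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))
    mu = E / (2 * (1 + nu))
    return lam, mu

def neo_hooke_energy(I, J, E, nu):
    """Psi_0 = mu/2 (I - 3) + lambda/4 (J^2 - 1) - (lambda/4 + mu) ln J."""
    lam, mu = lame_constants(E, nu)
    return mu / 2 * (I - 3) + lam / 4 * (J**2 - 1) - (lam / 4 + mu) * math.log(J)

# Plate-with-hole parameters: E = 500 MPa, nu = 0.3
lam, mu = lame_constants(500.0, 0.3)

# Undeformed state (F = identity): I = 3, J = 1, so the energy vanishes
assert abs(neo_hooke_energy(3.0, 1.0, 500.0, 0.3)) < 1e-12
```

In the stress-free reference configuration ($I=3$, $J=1$) the energy is zero, as expected.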
Throughout the numerical tests, an element erosion approach is incorporated as discussed in Sec.~\ref{sec:Numerical}: once the damage value of an element exceeds a critical value, $D^\comp{e}>D_{\text{crit}}$, the element stiffness is replaced by a virtual remaining stiffness $s_\text{crit}$ and residual forces are eliminated, cf. Alg.~\ref{alg:stabilization}.
Moreover, to avoid damage evolution under compression, we add the condition $\det \bar\boldface{F}^\comp{e}>1$
to the damage update algorithm in Alg.~\ref{alg:damage-update}.
For the solution procedure, uniform incremental loading is applied, and the solution of the linearized system corresponding to the Newton-Raphson iterations is performed with the PARDISO solver. The parameters for the element erosion technique are collected in~Tab.~\ref{t:parameters}.
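The erosion criterion can be sketched as follows; the dictionary-based element representation is a hypothetical stand-in for the element arrays of the actual FE code, and the numerical values are the plate-with-hole settings from Tab.~\ref{t:parameters}:

```python
D_CRIT = 0.95   # critical damage value (plate-with-hole setting)
S_CRIT = 1e-8   # virtual remaining stiffness in N/mm

def erode(elements):
    """Replace the stiffness of severely damaged elements by a virtual
    remaining stiffness and eliminate their residual force contribution."""
    for e in elements:
        if e["D"] > D_CRIT:                 # damage exceeds critical value
            e["stiffness_scale"] = S_CRIT   # keep the system matrix regular
            e["residual"] = 0.0             # eliminate residual forces
    return elements

els = erode([{"D": 0.99, "stiffness_scale": 1.0, "residual": 2.5},
             {"D": 0.10, "stiffness_scale": 1.0, "residual": 0.3}])
```

Only the first (severely damaged) element is affected; intact elements keep their stiffness and residual.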
\begin{table}
\caption{Material and boundary parameters used throughout the numerical tests.}
\begin{center}
\begin{tabular}{lllll} \toprule
Description & Symbol & \multicolumn{2}{c}{Value} & Unit \\ \cmidrule(lr){3-4}
& & Plate with hole & U-shape & \\ \midrule
E-modulus &$E $ & $500$ & $1000$ & MPa \\
Poisson ratio &$\nu$ & $0.3$ & $0.3$ & - \\
dissipation parameter &$r$ & $5.0$ & $0.5$ & MPa \\
nonlocal parameter &$\beta$ & $\in\{10,100,1000\}$ & $100$ & N \\
critical damage value &$D_{\mathrm{crit}}$ & $0.95$ & $0.995$ & - \\
critical stiffness & $s_\text{crit}$ & $10^{-8}$ & $10^{-8}$ & N/mm \\
load increment & - & $\{0.25,0.1,0.025,0.01\}$ & $0.2$ & mm \\ \bottomrule
\end{tabular}
\end{center}
\label{t:parameters}
\end{table}
\subsection{Plate with hole benchmark test \label{ss:plate_with_hole}}
\begin{figure}
\unitlength1cm
\begin{minipage}{0.38\textwidth}
(a)
\def0.9\textwidth{\textwidth}
\input{fig/pwh_geometry.pdf_tex}
\end{minipage}
\begin{minipage}{0.6\textwidth}
\begin{picture}(0,7)
\put(0,0.5){
\includegraphics[width=0.5\textwidth]{fig/pwh_r1_mesh.eps}
\includegraphics[width=0.5\textwidth]{fig/pwh_r4_mesh.eps}
}
\end{picture}
(b)
\end{minipage}
\caption{(a) Description of the geometry of the plate with hole benchmark problem.
(b) Example finite element meshes (400 and 6250 elements).}
\label{f:pwh_description}
\end{figure}
As a first example, we consider a plate with a circular hole which is subjected to the prescribed displacement ${\boldface{u}}^\star=(0,{u}^\star,0)^T$ with ${u}^\star=25\, \text{mm}$ at its upper boundary ($Y=L$, with $L=100\, \text{mm}$).
The geometry of the considered domain is depicted in Fig.~\ref{f:pwh_description}(a).
Due to the symmetry of the problem, the computational domain amounts only to the upper right quarter of the geometry.
Moreover, as a consequence of the symmetry, the displacement is fixed in X-direction with $u_X=0$ at the left edge $X=0$ and in Y-direction with $u_Y=0$ at the lower edge $Y=0$.
The radius of the circular hole as shown in Fig.~\ref{f:pwh_description}(a) is given with $R=50\, \text{mm}$ and the thickness is given with $H=10\, \text{mm}$.
An overview of the used material and load parameters is collected in Tab.~\ref{t:parameters}.
\subsubsection{Convergence study}
\begin{figure}
\unitlength1cm
\begin{center}
\begin{minipage}{0.48\textwidth}
\includegraphics[width=\textwidth]{fig/pwh_F_over_u_beta_1e2_alpha_5e-2_three_dots.pdf}
(a)
\end{minipage}
\begin{minipage}{0.48\textwidth}
\includegraphics[width=\textwidth]{fig/pwh_F_over_u_beta_1e2_alpha_5e-2_three_dots_enlarged.pdf}
(b)
\end{minipage}
\begin{picture}(12,6.8)
\put(-2,3.3){
\begin{minipage}{0.30\textwidth}
\includegraphics[width=\textwidth]{fig/pwh_r4_beta_1e2_alpha_5e-2_327_of_1000.pdf}
\end{minipage}
\hspace{-1cm}
\begin{minipage}{0.30\textwidth}
\includegraphics[width=\textwidth]{fig/pwh_r4_beta_1e2_alpha_5e-2_332_of_1000.pdf}
\end{minipage}
\hspace{-1cm}
\begin{minipage}{0.30\textwidth}
\includegraphics[width=\textwidth]{fig/pwh_r4_beta_1e2_alpha_5e-2_340_of_1000.pdf}
\begin{picture}(0,0)
\put(5.5,1.2){
\includegraphics[width=1.4cm]{fig/d_scale_alpha_5e-2.eps}
}
\put(5.6,6.9){{\small $D$}}
\put(6.15,1.1){{\small $0.00$}}
\put(6.15,3.85){{\small $0.46$}}
\put(6.15,6.6){{\small $0.95=D_\text{crit}$}}
\end{picture}
\end{minipage}
}
\put(-2,0){(c)}
\end{picture}
\end{center}
\caption{(a) Force/displacement curves (higher resolution in (b)) and (c) corresponding contourplots depicting the damage evolution for the 6250 element mesh in three different load steps which are depicted as bullets in (a).}
\label{f:pwh_beta_1e2_f_u_and_contourplots}
\end{figure}
In Fig.~\ref{f:pwh_beta_1e2_f_u_and_contourplots}(a), force/displacement curves corresponding to various mesh refinements (see, e.g., Fig.~\ref{f:pwh_description}(b)) are shown.
The value of the prescribed displacement ${u}^\star$ is depicted on the abscissa whereas the value of the reaction force $F$ in Y-direction, which is recovered from the nodes at the upper surface $Y=L$, is plotted on the ordinate.
Here, the nonlocal parameter has the value $\beta=100\,\text{N}$ and the prescribed displacement is applied in increments of $0.025$ mm, which is equivalent to $1000$ load steps.
For the given plot resolution in Fig.~\ref{f:pwh_beta_1e2_f_u_and_contourplots}(a), it can be observed that the force/displacement curve of the lowest refinement stage (400 elements) almost coincides with the curves corresponding to the higher refinement stages.
To illustrate the convergence of the curves as the number of elements increases, an enlarged resolution of the plot is given in Fig.~\ref{f:pwh_beta_1e2_f_u_and_contourplots}(b).
The area of the enlargement is the transition regime at which material softening occurs (marked with gray dashed lines in Fig.~\ref{f:pwh_beta_1e2_f_u_and_contourplots}(a)).
The evolution of the material softening is also illustrated through the contourplots of the damage function $D$ in Fig.~\ref{f:pwh_beta_1e2_f_u_and_contourplots}(c).
The contourplots correspond to the computation with the 6250 element mesh and the depicted stages of damage evolution are marked with bullets in Fig.~\ref{f:pwh_beta_1e2_f_u_and_contourplots}(a) and (b).
It can be observed that already at low damage propagation states, as depicted in the left contourplot, the damage value $D$ of several elements reaches the critical damage value $D_{\mathrm{crit}}$.
For all computations, convergence of the iterative solution procedure was obtained.
\subsubsection{Influence of the nonlocal parameter}
\begin{figure}
\unitlength1cm
\begin{center}
\begin{minipage}{0.48\textwidth}
\begin{picture}(0,0)
\put(0,0.5){(a)}
\end{picture}
\includegraphics[width=\textwidth]{fig/pwh_F_over_u_beta_1e1_alpha_5e-2.pdf}
\end{minipage}
\begin{minipage}{0.48\textwidth}
\includegraphics[width=\textwidth]{fig/pwh_F_over_u_beta_1e1_alpha_5e-2_enlarged.pdf}
\end{minipage}
\begin{minipage}{0.48\textwidth}
\begin{picture}(0,0)
\put(0,0.5){(b)}
\end{picture}
\includegraphics[width=\textwidth]{fig/pwh_F_over_u_beta_1e2_alpha_5e-2.pdf}
\end{minipage}
\begin{minipage}{0.48\textwidth}
\includegraphics[width=\textwidth]{fig/pwh_F_over_u_beta_1e2_alpha_5e-2_enlarged.pdf}
\end{minipage}
\begin{minipage}{0.48\textwidth}
\begin{picture}(0,0)
\put(0,0.5){(c)}
\end{picture}
\includegraphics[width=\textwidth]{fig/pwh_F_over_u_beta_1e3_alpha_5e-2.pdf}
\end{minipage}
\begin{minipage}{0.48\textwidth}
\includegraphics[width=\textwidth]{fig/pwh_F_over_u_beta_1e3_alpha_5e-2_enlarged.pdf}
\end{minipage}
\end{center}
\caption{Influence of the nonlocal parameter: force/displacement curves for nonlocal parameters with enlarged resolution on the right hand side.
Figures (a), (b) and (c) correspond to parameters $\beta=\{10,100,1000\}\,\text{N}$, respectively.}
\label{f:pwh_F_over_u_varbeta}
\end{figure}
\begin{figure}
\unitlength1cm
\begin{center}
\begin{picture}(12,6.8)
\put(-2.1,3.3){
\begin{minipage}{0.30\textwidth}
\includegraphics[width=\textwidth]{fig/pwh_r4_beta_1e1_alpha_5e-2_303_of_1000.pdf}
\end{minipage}
\hspace{-0.8cm}
\begin{minipage}{0.30\textwidth}
\includegraphics[width=\textwidth]{fig/pwh_r4_beta_1e2_alpha_5e-2_332_of_1000.pdf}
\end{minipage}
\hspace{-0.8cm}
\begin{minipage}{0.30\textwidth}
\includegraphics[width=\textwidth]{fig/pwh_r4_beta_1e3_alpha_5e-2_418_of_1000.pdf}
\begin{picture}(0,0)
\put(5.1,1.){
\includegraphics[width=1.4cm]{fig/d_scale_alpha_5e-2.eps}
}
\put(5.2,6.7){{\small $D$}}
\put(5.75,0.9){{\small $0.00$}}
\put(5.75,3.65){{\small $0.46$}}
\put(5.75,6.4){{\small $0.95=D_\text{crit}$}}
\end{picture}
\end{minipage}
}
\end{picture}
\end{center}
\caption{Influence of the nonlocal parameter: contourplots for varying values of $\beta$ (from left to right) corresponding to the bullet marks in Fig.~\ref{f:pwh_F_over_u_varbeta}(a)-(c).
Left: $\beta=10\,\text{N}$, middle: $\beta=100\,\text{N}$, right: $\beta=1000\,\text{N}$.
}
\label{f:pwh_contourplots_varbeta}
\end{figure}
In this subsection, the influence of the value of the nonlocal parameter $\beta$ on the convergence behavior and the damage evolution is investigated.
To this end, Fig.~\ref{f:pwh_F_over_u_varbeta} shows the corresponding force/dis\-place\-ment curves for the values $\beta=10$, $100$ and $1000\,\text{N}$.
The same load increment of $0.025\,\text{mm}$ as in the previous subsection is used.
From the plots of Fig.~\ref{f:pwh_F_over_u_varbeta}, it can be observed that the curves converge more rapidly under mesh refinement as $\beta$ increases.
However, increasing $\beta$ also shifts the starting point of damage initialization to higher load values.
This goes along with the increased nonlocality of the distribution of the damage function $D$ as depicted in the contourplots of Fig.~\ref{f:pwh_contourplots_varbeta}:
while in the left contourplot ($\beta=10\,\text{N}$) the damage distribution is mostly concentrated to the elements adjacent to the lower edge, in the right contourplot ($\beta=1000\,\text{N}$), a more smeared damage distribution can be observed.
\subsubsection{Influence of the load increment and computing efficiency}
\begin{figure}
\unitlength1cm
\begin{minipage}{0.48\textwidth}
\includegraphics[width=\textwidth]{fig/pwh_F_over_u_r1_beta_1e3_vary_steps.pdf}
(a)
\end{minipage}
\begin{minipage}{0.48\textwidth}
\includegraphics[width=\textwidth]{fig/pwh_r1_beta_1e3_1000_steps_computing_time.pdf}
(b)
\end{minipage}
\caption{
(a) Influence of the load increment on the solution:
force/displacement curves for varying load increments for a 400 element mesh and $\beta=1000\, \text{N}$.
(b) GD: computing time until reaching the marked point at $u^\star=10.85\,\text{mm}$ in (a) with the load increment $0.025\,\text{mm}$.
EL: computing time of a purely elastic 8-node hexahedral reference element until the same load of $u^\star=10.85\,\text{mm}$.}
\label{f:pwh_var_load_increment}
\end{figure}
To investigate the influence of the load increment on the solution, we show the convergence of the force/displacement curves for the specific displacement increments $\{0.25,0.1,0.025,0.01\}\, \text{mm}$,
while the mesh refinement stage (400 elements) and nonlocal parameter ($\beta=1000\, \text{N}$) remain fixed.
For the chosen parameters, the corresponding curves are depicted in Fig.~\ref{f:pwh_var_load_increment}(a) and show the tendency to converge as the value of the load increment decreases and, consequently, the number of steps increases.
Furthermore, Fig.~\ref{f:pwh_var_load_increment}(b) shows the computing time for the gradient damage model (labeled GD) and a comparative purely elastic simulation (labeled EL). The time is measured until the crack has completely evolved, which corresponds to a load of $u^\star=10.85\,\text{mm}$. Considering also later loading steps would overestimate the computation time of the elastic simulation: for the elastic material behavior, a nonlinear finite element simulation has to be performed which requires more than two iteration steps for convergence, whereas the damage model only has to resolve the rigid body movement due to the completely evolved horizontal crack that separated the plate. Consequently, only two iteration steps are required for convergence, yielding a total computing time that is larger for the elastic problem than for the damage model. The elastic problem has been solved by deactivating the update subroutine for the damage field. This implies that the same effort for writing the results to text files was needed and no unfair acceleration of either simulation has been employed. For reaching the critical load of $u^\star=10.85\,\text{mm}$, the elastic simulation required $t=154.9\,$s while the damage simulation finished after $t=159.8\,$s. This computing time increase of $3.2\, \%$ of the proposed approach compared to the purely elastic reference simulation shows that the damage update represents only a small portion of the overall computing time. If instead a full finite element scheme for the balance of linear momentum as well as for the microstructure evolution, leading to four nodal unknowns, were employed, the computational cost could easily exceed 10 times the computing time required for the purely hyperelastic problem. A comparable observation has also been made for the evaluation of the damage model in the context of small deformations in \cite{junker2019fast}.
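The reported overhead follows directly from the two timings:

```python
# Computing times in seconds until reaching u* = 10.85 mm
t_elastic, t_damage = 154.9, 159.8

# Relative overhead of the damage model over the elastic reference
overhead = (t_damage - t_elastic) / t_elastic
print(f"{overhead:.1%}")  # → 3.2%
```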
\subsection{U-shaped geometry}
\begin{figure}
\unitlength1cm
\begin{center}
\begin{minipage}{0.48\textwidth}
\begin{picture}(7.5,17)
\put(0,12.6){
\def0.9\textwidth{0.9\textwidth}
\input{fig/u_shape_geometry.pdf_tex}}
\put(0,12.6){(a)}
\put(0,5.5){\includegraphics[width=\textwidth]{fig/u_shape_F_over_u.pdf}}
\put(0,0){\includegraphics[width=\textwidth]{fig/u_shape_F_over_u_enlarged.pdf}}
\put(2.3,7.1){{\small $1$}}
\put(6.3,7.9){{\small $2$}}
\put(6.8,9.35){{\small $3$}}
\put(6.9,9.9){{\small $4$}}
\put(7.6,10.5){{\small $5$}}
\put(7.6,9.2){{\small $6$}}
\put(7.00,6.8){{\small $7$}}
\put(3.0,4.5){{\small $5$}}
\put(3.3,3.75){{\small $6$}}
\put(4.3,1.35){{\small $7$}}
\put(0,0){(b)}
\end{picture}
\end{minipage}
\begin{minipage}{0.48\textwidth}
\begin{picture}(7.5,17)
\put(0,13){\includegraphics[scale=0.28]{fig/u_shape_r2_200_of_500.pdf}}
\put(7,13){\includegraphics[scale=0.28]{fig/d_scale_alpha_5e-3.eps}}
\put(6.95,18.8){{\small$D$} }
\put(7.5,12.9){{\small $0.000$}}
\put(7.5,15.7){{\small $0.500$}}
\put(7.5,18.5){{\small $0.995=D_\text{crit}$}}
\put(0,11){\includegraphics[scale=0.28]{fig/u_shape_r2_400_of_500.pdf}}
\put(0,9){\includegraphics[scale=0.28]{fig/u_shape_r2_440_of_500.pdf}}
\put(0,7){\includegraphics[scale=0.28]{fig/u_shape_r2_444_of_500.pdf}}
\put(0,5){\includegraphics[scale=0.28]{fig/u_shape_r2_450_of_500.pdf}}
\put(0,3){\includegraphics[scale=0.3]{fig/u_shape_r2_451_of_500.pdf}}
\put(0,0){\includegraphics[scale=0.28]{fig/u_shape_r2_452_of_500.pdf}}
\put(0,14){{\small $1)$}}
\put(0,12){{\small $2)$}}
\put(0,9){{\small $3)$}}
\put(0,6.6){{\small $4)$}}
\put(0,4.6){{\small $5)$}}
\put(0,2.9){{\small $6)$}}
\put(0,0.8){{\small $7)$}}
\put(0,0){(c)}
\end{picture}
\end{minipage}
\end{center}
\caption{(a) Description of the geometry of the u-shaped boundary value problem.
(b) Force/displacement curves for various mesh refinement stages (enlarged resolution in the bottom).
(c) Contourplots of the u-shaped problem visualizing the damage evolution corresponding to the marked load stages depicting the damage evolution.
}
\label{f:u_shape}
\end{figure}
As a second numerical test, a u-shaped geometry is analyzed in which, due to its geometry and boundary conditions, large deformations are present (cf. Fig.~\ref{f:u_shape}(a)), although the strains may be comparatively moderate.
Due to its symmetry, only the left half of the domain is considered and the displacement $u_X=0$ is fixed at the symmetry plane $X=0$.
The dimensions (cf. Fig.~\ref{f:u_shape}(a)) are given with $R=50\,\text{mm}$ and $S=10\,\text{mm}$.
The prescribed displacement ${\boldface{u}}^\star=({u}^\star,0,0)$ with ${u}^\star=100\,\text{mm}$ is imposed on the upper right edge $(X,Y)=(-50,50)\,\text{mm}$.
The used material and boundary parameters are shown in Tab.~\ref{t:parameters}.
From the force/displacement curves shown in the upper plot in Fig.~\ref{f:u_shape}(b), and with refined resolution in the lower plot, a tendency to converge can be observed.
Due to the relatively large displacement, the additional material nonlinearity becomes visible in the force/displacement curves (pile-up of the reaction force close to failure).
The contourplots in Fig.~\ref{f:u_shape}(c) illustrate the damage propagation at the load stages marked with bullets in the force/displacement plots.
Here, to visualize the crack formation in the entirely damaged area of the domain, the elements in which the damage value has reached the upper bound $D^\comp{e}=D_{\text{crit}}=0.995$ were made invisible. A remarkable elastic snapback is observed once the geometry fails, cf. Fig.~\ref{f:u_shape}(c)~7).
Despite the relatively coarse time discretization, convergence of the iterative solution was obtained at all refinement stages. It is worth mentioning that the snapback phenomenon constitutes a non-trivial process in a numerical treatment. Since we also observe convergence for this regime, the u-shaped boundary value problem empirically shows the numerical robustness of our approach to damage modeling for large deformations.
\section{Conclusions}
We presented a novel gradient damage model for hyperelastic structures undergoing large deformation along with an efficient and stable numerical update scheme. Numerical results showed convergence for varying mesh size and increasing regularization parameter $\beta$. For a smooth iterative solution scheme, it was necessary to develop a stabilization technique that allows for erosion of finite elements with damage exceeding a critical damage state without the need of remeshing. Concluding, we obtained an efficient numerical approach for damage processes to be described in the large deformation setting. In future investigations, important physical aspects like plasticity and hardening will be included.
\section*{Acknowledgment}
The authors J. Riesselmann and D. Balzani greatly appreciate funding by the German Science Foundation (Deutsche Forschungsgemeinschaft, DFG) as part of the Priority Program SPP 1748 ``Reliable simulation techniques in solid mechanics. Development of non-standard discretization methods, mechanical and mathematical analysis'', project ID BA2823/15-1.
\addcontentsline{toc}{chapter}{Bibliography}
\bibliographystyle{unsrt}
\section{Introduction}
Process mining tools provide unique capabilities to diagnose business processes existing within organizations (e.g., in transaction logs or audit trails) including discovering the running processes, as well as deviations and bottlenecks that occur or exist in the current state of the processes \cite{DBLP:books/sp/Aalst16}.
In all of the proposed tools for simulation in process mining, interaction with the user and user knowledge is an undeniable requirement for designing and running the simulation models.
Moreover, most of the approaches are dependent on external simulation tools.
For instance, in \cite{DBLP:conf/bpm/CamargoDR19}, the proposed business process simulation technique is based on the BPMN model. All the simulation parameters together with the BPMN model are put into a simulation tool such as BIMP for the simulation step. \cite{SPN} provides a comprehensive platform for modeling stochastic Petri nets; however, the connection to process mining is missing. In \cite{DBLP:journals/is/RozinatMSA09}, the created simulation model is based on the CPN tool, which requires users to have knowledge of discrete event simulation as well as \emph{Standard ML} (SML) to define functions and capture the output as an event log \cite{ratzer2003cpn}.
In \cite{howcloseGawin}, an external tool, i.e., ADONIS, is used for simulating the discovered model and parameters.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth, height=0.15\textheight]{MainFrameworkTool.pdf}
\caption{The general framework for discrete event simulation in process mining. The automatic generation of simulation models and the corresponding simulated event logs is possible by starting with an event log, extracting the process model and the performance information, generating random cases, and finally converting the processed activities in the form of events. The user annotations indicate the options for the user to simulate the process with user-defined parameters.}
\label{fig:Main_Framework_PMSIM}
\vspace{-8mm}
\end{figure}
It should be noted that oftentimes the user does not need to have in-depth knowledge of the process in order to simulate it, which holds for most commercial tools such as \emph{Protos}, \emph{AnyLogic} and \emph{ARENA}. For instance, the user may only need to know how the process will behave if the average inter-arrival time changes to 5 minutes, i.e., every 5 minutes a new case arrives.
In process mining, the above-mentioned requirements can be addressed by the concept of Discrete Event Simulation (DES) \cite{DBLP:conf/scsc/Aalst18}. DES for business processes has been developed in \emph{Java} as a plugin of \emph{ProM} \cite{VanDongen2005}. However, custom options such as the ability to change the duration of activities for future performance analyses are missing.
Approaches such as \cite{DBLP:journals/is/Rogge-SoltiW15} use the same idea in Java, but with some drawbacks, e.g., a fixed duration for the case generation step. The generated cases do not have any time overlap, which is not the case in reality.
Work such as \cite{DBLP:conf/bpm/PufahlW17} tries to generate a business simulation model for business processes which relies on the user domain knowledge.
\cite{MsimModelConstructionMartinDC16} describes a range of modeling tasks that should be considered when creating a realistic business process simulation model.
Existing process mining tools provide users with a visual representation of process discovery and performance analyses using event data in the form of event logs.
Therefore, an approach is needed to play out reality and generate the exact behavior, which makes further process mining analyses possible.
Moreover, since the tool is open source, the library can easily be extended; user options to add capacity to the activities and to extend the case production for different times of the day and week can be implemented.
Research work such as \cite{DBLP:conf/otm/PourbafraniZA19} and \cite{MahsaBIS} use aggregated simulation which is useful for what-if analyses in a high-level decision-making scenario \cite{DBLP:conf/ihsi/PourbafraniZA20}. The \emph{PMSD} tool represents the aggregated approach and generates a simulation model at a higher level of detail \cite{mahsaToolPMSD}.
In this paper, we introduce an easy-to-use open-source Python-based application that connects the process mining environment in Python, \emph{PM4Py} \cite{DBLP:pm4py}, to the general simulation techniques in Python provided by \emph{SimPy}\footnote{https://simpy.readthedocs.io}.
The latter library is used for discrete event simulation and handles the required system clock in DES. The automatically designed simulation model can be configured with user-defined duration for the activities and arrival rate. The final output is an event log based on the given number of cases that can be used further for process mining analyses.
The designed framework of the tool is shown in Fig.~\ref{fig:Main_Framework_PMSIM}. It is designed on the basis of three main modules: process mining, simulation, and transformation of the generated events into an event log.
\section{PNSIM}
Event logs comprise events where each event refers to a \emph{case} (process instance), an \emph{activity}, a \emph{timestamp}, and any number of additional attributes (e.g., costs, resources, etc.). A set of events forms an event log which can be used in process mining analyses. As shown in Fig. \ref{fig:Main_Framework_PMSIM}, our approach starts with applying process mining techniques on the original event log. Therewith a process model is discovered in the form of a Petri net which presents possible flows of activities for the cases. Subsequently, performance analyses provide the case arrival rate including the business hours and the average duration of the activities. This information makes the automatic generation of process instances based on the past executions of processes possible.
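The arrival rate extraction described above boils down to averaging the inter-arrival times of the case start events; the following is a minimal sketch with invented timestamps (the tool itself extracts them from a real event log via PM4Py):

```python
from datetime import datetime, timedelta

# Hypothetical case arrival timestamps as extracted from an event log
arrivals = [datetime(2021, 1, 4, 9, 0) + timedelta(minutes=5 * i)
            for i in range(4)]

# Average inter-arrival time drives the case generator of the simulation
gaps = [(b - a).total_seconds() for a, b in zip(arrivals, arrivals[1:])]
avg_inter_arrival = sum(gaps) / len(gaps)   # 300.0 s, i.e. one case per 5 min
```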
\begin{wrapfigure}{r}{0.495\textwidth}
\centering
\vspace{-10 mm}
\includegraphics[width=0.5\textwidth,height=0.2\textheight ]{FlowChartHorizenSmall.pdf}
\caption{The flowchart of the integrated discrete event simulation of the processes using process mining. Each activity runs if it is available and the clock of the simulation gets updated for every new event. New events are a newly arrived case, the end of processing an activity for a case, or the start of the processing of an activity for a case.}
\label{fig:flowchart}
\vspace{-9 mm}
\end{wrapfigure}
We aim to provide a simulation model and the corresponding simulated event log as close to reality as possible. To do so, we perform the following preprocessing steps in the process mining module:
\begin{itemize}
\item Process discovery:
\begin{itemize}
\item \emph{Maximum length of traces}: The presence of loops in the process models (Petri nets) makes the generation of long unrealistic traces possible. By identifying and replacing the maximum length of traces, we limit the possibility of the execution of unrealistic loops for the simulated cases.
\end{itemize}
\item Performance analyses:
\begin{itemize}
\item \emph{Arrival rate calculation}: The business hours are considered by default in calculating the average arrival rate. Moreover, we learn the inter-arrival time distribution from the actual arrival times.
The detected distribution is used in the simulation step.
\item \emph{Activity duration}: By removing outliers from the set of duration for each activity, we provide more robust values for the duration of activities.
\end{itemize}
\end{itemize}
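The duration preprocessing in the last item can be sketched as follows; the $1.5\cdot$IQR criterion is an assumption made for illustration, since the concrete outlier rule is not specified:

```python
from statistics import mean, quantiles

def robust_mean_duration(durations):
    """Average activity duration after discarding outliers.

    The 1.5*IQR criterion is an assumption for illustration; the concrete
    outlier rule of the tool is not specified in the text."""
    q1, _, q3 = quantiles(durations, n=4)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return mean(d for d in durations if lo <= d <= hi)

# One extreme duration (300 s) is dropped before averaging
robust_mean_duration([10, 11, 9, 10, 12, 300])   # -> 10.4
```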
Using the distribution of activities’ duration, we implicitly consider the average duration of resources’ time without extracting the resource pool. This aggregated calculation includes the behavior of resources for handling each activity.
Next is the simulation module in which we generate new cases. In extracting the arrival rate of cases, i.e., the duration of time for a new case to arrive, we include the business hours in the calculation of the arrival rate to obtain an accurate value. The next step is to discover how the cases are handled in the process w.r.t. the service time of each activity and the possible flow of activities that each case can take. Based on the presence of the start and complete timestamps, the value of the average duration of each activity is captured. The discovered Petri net also is used for generating a possible flow of activities.
The provided user options to interact with and modify the simulation process are the following functions:
\begin{itemize}
\item \emph{Activity duration} generates the random values based on the extracted values for each activity and the corresponding distribution. The user is able to change the parameters of the distribution.
\item \emph{Arrival rate} uses a normal distribution for generating new cases and the user is able to change the average arrival rate for the simulated log.
\item \emph{Case generator} produces random cases based on the provided number of cases by the user. It determines the terminating point of the simulation.
\end{itemize}
The final module is designed to transform the simulated events for the generated cases into event logs. The discrete event simulation clock is converted to the real timestamp and each activity is recorded for the cases in the real timestamp.
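This clock conversion amounts to adding the elapsed simulated time to a reference start time; a minimal sketch, where the start time is an arbitrary example value:

```python
from datetime import datetime, timedelta

SIM_START = datetime(2021, 2, 1, 8, 0, 0)   # arbitrary example start time

def clock_to_timestamp(sim_clock_seconds):
    """Map the DES clock (seconds since simulation start) to a real timestamp."""
    return SIM_START + timedelta(seconds=sim_clock_seconds)

# A simulated event recorded 5 minutes into the run
row = {"case id": 1, "activity": "register request",
       "timestamp": clock_to_timestamp(300.0)}
```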
The flow chart of the simulation module of our tool is shown in Fig. \ref{fig:flowchart}.
After each new generated case, it checks the condition whether the number of cases provided by the user is met. Accordingly, it follows up with processing the picked marking from the Petri net. Either the provided outputs by the process mining module or user parameters are used to start the simulation. By selecting the available activity from the Petri net, the simulation module checks whether the previous process of the activity has finished. In the last step, after performing each possible event (generating a new case or processing of an activity) the simulation clock gets updated and the data is captured. Since the simulation technique considers the capacity of each activity, the concept of queuing is implicitly covered in the simulated event log. When an activity with full capacity, i.e., processing other cases, is selected for the current case, the case is in the waiting state which is shown in the performance analyses of the event log.
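The clock-update loop of the flowchart, including the implicit queueing at a busy activity, can be illustrated with a hand-rolled event queue for a single activity of capacity one; the actual tool builds on SimPy and the discovered Petri net, so this is only a library-free sketch of the behavior:

```python
import heapq

def simulate(n_cases, inter_arrival, duration):
    """Minimal event-queue illustration of the simulation loop: one activity
    with capacity one, a global clock advanced per event, and implicit
    queueing when the activity is busy."""
    events = []       # entries: (time, tie-breaker, kind, case id)
    completions = {}  # case id -> completion time on the simulation clock
    for c in range(n_cases):
        heapq.heappush(events, (c * inter_arrival, c, "arrive", c))
    busy_until = 0.0
    order = n_cases   # unique tie-breaker for newly scheduled events
    while events:
        clock, _, kind, case = heapq.heappop(events)  # advance the clock
        if kind == "arrive":
            start = max(clock, busy_until)  # case waits while activity is busy
            busy_until = start + duration
            heapq.heappush(events, (busy_until, order, "complete", case))
            order += 1
        else:
            completions[case] = clock
    return completions

done = simulate(n_cases=3, inter_arrival=2.0, duration=5.0)
```

With cases arriving every 2 time units and a service time of 5, the second and third cases queue behind the first, completing at times 5, 10, and 15.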
\section{Tool Maturity}
The source code of our tool, a tutorial, and a screen-cast are publicly available.\footnote{https://github.com/mbafrani/AutomaticProcessSimulation}
The tool has been used in multiple academic projects to simulate a process model in different situations and generate different event logs.
\begin{figure}[bt]
\vspace{-4mm}
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth,height=0.25\textheight]{originalPMVertical.pdf}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth,height=0.2\textheight]{SimulatedModel.pdf}
\end{subfigure}
\caption{The discovered process model of the example event log using Petri net notation. It includes 8 unique activities and represents the process of handling requests in an organization (a). The discovered process model of the simulated event log using Petri net notation. Our tool generates the simulated event log directly from the original event log, which captures both time and activity flow features of the original process (b).}
\label{fig:PNsimReal}
\vspace{ -8 mm}
\end{figure}
For instance, for the purpose of time series analyses, different arrival rates for the same process have been selected and the corresponding event logs have been generated with the tool. We use a sample case study to demonstrate the steps and usability of our tool.
\begin{wrapfigure}{r}{0.5\textwidth}
\vspace{-2 mm}
\centering
\includegraphics[width=0.41\textwidth,height=0.12\textheight]{SampleSimLog.jpg}
\caption{Part of the simulated event log for the example event log which is generated in the \emph{.csv} format. It includes the main attributes of an event log, case id, activity, and timestamp. }
\label{fig:simulated_event_log}
\vspace{ -8 mm}
\end{wrapfigure}
Figure \ref{fig:PNsimReal}(a) shows a sample process model of the example event log in the form of a Petri net. We use the sample event log and simulate the process for 1000 cases. Applying the same process discovery algorithm to the simulated event log results in the same model, including the concurrent behavior, as shown in Fig.~\ref{fig:PNsimReal}(b). The discovered model shows that our tool is able to mimic the process and simulate the model including the time aspects of the process.
Part of the simulated log is shown in Fig. \ref{fig:simulated_event_log}. The simulated event log has the main attributes of an event log. It captures the \emph{case id} which is increased incrementally to the defined number by the user, \emph{activity} names, and the corresponding complete time as \emph{timestamp}.
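To illustrate the log format described above, the following is a minimal, hypothetical Python sketch that writes rows in the same shape (case id, activity, timestamp). The activity sequence and the exponential timing model are illustrative assumptions, not the tool's actual simulation logic.

```python
import random
from datetime import datetime, timedelta

# Hypothetical activity flow; the real tool derives this from the event log.
ACTIVITIES = ["register request", "check ticket", "decide", "send result"]

def simulate_log(num_cases, seed=0):
    """Return event rows (case id, activity, timestamp) for `num_cases` cases,
    with exponential inter-arrival and service times (illustrative values)."""
    rng = random.Random(seed)
    arrival = datetime(2021, 1, 1)
    rows = []
    for case_id in range(1, num_cases + 1):
        arrival += timedelta(minutes=rng.expovariate(1 / 30))  # case arrival
        t = arrival
        for activity in ACTIVITIES:
            t += timedelta(minutes=rng.expovariate(1 / 10))    # service time
            rows.append((case_id, activity, t.isoformat()))
    return rows

log = simulate_log(1000)
```

Case ids increase incrementally up to the user-defined number of cases, and each event carries the activity name and its completion timestamp, mirroring the attributes listed above.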
\section{Conclusion}
Techniques for past analyses of processes in organizations are well supported in existing academic and commercial process mining tools. However, future analyses of business processes are not fully covered by current tools. Commonly used options either require knowledge of simulation techniques and modeling, demand high user interaction, or are not accurate enough, since they are not supported by real event data. In this paper, we presented a tool that directly uses the event data of a process in the form of an event log and simulates the process with automatically extracted values as well as user-defined input. The tool is designed to simulate processes in different scenarios. Since the simulation module is based on the discrete event simulation technique, the simulated event log exhibits the same behavior as the real event log.
\bibliographystyle{ieeetr}
\section{Introduction}
\label{sec:introduction}
\input{sections/sec_introduction}
\section{Preliminaries}
\label{sec:preliminaries}
\input{sections/sec_preliminaries}
\section{Analysis}
\label{sec:analysis}
\input{sections/sec_analysis}
\section{Related work}
\label{sec:related_work}
\input{sections/sec_related_work}
\section{Experiments}
\label{sec:experiments}
\input{sections/sec_experiments}
\section{Discussion}
\label{sec:discussion}
\input{sections/sec_discussion}
\subsection*{Acknowledgements}
This research was supported in part by the
Austrian Science Fund (FWF): project FWF P31799-N38
and the Land Salzburg (WISS 2025) under project numbers 20102-F1901166-KZP and 20204-WISS/225/197-2019. We would also like to
thank the anonymous reviewers for their constructive feedback during the review process.
\subsection*{Source Code}
Source code to reproduce experiments is publicly available: \url{https://github.com/plus-rkwitt/py_supcon_vs_ce}
\subsection{Cross-Entropy Loss}
\label{subsection:analysis_ce}
We start by providing a lower bound, in Theorem~\ref{thm:ce_bound_frob}, on the CE~loss, under the constraint of norm-bounded points.
\vskip1ex
\begin{restatable}{thm}{rest@thm@ce@bound@frob}
\label{thm:ce_bound_frob}
Let $\rho_\mathcal{Z}>0$, $\mathcal{Z} = \set{z\in \mathbb{R}^h:~ \norm{z}\le \rho_\mathcal{Z}}$.
Further, let $Z=(z_1,\ldots,z_N) \in \mathcal{Z}^N$ be an $N$ point configuration with labels $Y=(y_1,\ldots,y_N) \in [K]^N$ and let $W \in \mathbb{R}^{K \times h}$ be the weight matrix of the linear classifier from Definition~\ref{def:ce}.
If the label configuration $Y$ is balanced,
\begin{align*}
& \mathcal{L}_{\operatorname{CE}}(Z, W;\,Y)
\ge \\
& \log \left(
1 + (K-1) \exp \left(
- \rho_\mathcal{Z} \frac {\sqrt{K}}{K-1}
\|W\|_F
\right)
\right)
\enspace,
\end{align*}
holds, with equality if and only if there are $\zeta_1, \dots, \zeta_K \in \mathbb{R}^h$ such that:
\begin{enumerate}[labelindent=8pt,leftmargin=!,label=(C\arabic*),labelwidth=\widthof{\ref{last-item}}]
\item\label{thm:ce_bound_frob:c1}
$\forall n \in [N]: z_n = \zeta_{y_n}$
\item\label{thm:ce_bound_frob:c2}
$\{\zeta_y\}_{y}$ form a $\rho_{\mathcal{Z}}$-sphere-inscribed regular simplex
\item\label{thm:ce_bound_frob:c3}
$\exists \rho_{\mathcal{W}} > 0: \forall y \in \mathcal{Y}: w_{y} = \frac{\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}} \zeta_{y}$
\end{enumerate}
\end{restatable}
Importantly, Theorem~\ref{thm:ce_bound_frob} states that the bound is tight, if and only if all instances with the same label collapse to points and these points form the vertices of a regular simplex, inscribed in a hypersphere of radius $\rho_{\mathcal{Z}}$.
Additionally, all weights, $w_y$, have to attain equal norm and have to be scalar multiples of the simplex vertices, thus also forming a regular simplex (inscribed in a hypersphere of radius $\rho_\mathcal{W}$).
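As a quick numerical sanity check (not part of the original derivation), the bound and its tightness conditions can be verified for a hand-coded collapsed configuration with $K=3$, $\rho_\mathcal{Z}=1$, and an arbitrarily chosen $\rho_\mathcal{W}=2$:

```python
import math

# K = 3 collapsed classes at the vertices of a regular simplex inscribed in
# the unit circle (rho_Z = 1); weights are scaled copies of the vertices
# (rho_W = 2, chosen for illustration), per conditions (C1)-(C3).
K, rho_z, rho_w = 3, 1.0, 2.0
zeta = [(0.0, 1.0),
        (-math.sqrt(3) / 2, -0.5),
        (math.sqrt(3) / 2, -0.5)]
W = [(rho_w * x, rho_w * y) for (x, y) in zeta]   # w_y = (rho_W/rho_Z) zeta_y

def ce_instance_loss(z, y):
    """Per-instance CE loss: log-sum-exp of logits minus the true logit."""
    logits = [z[0] * w[0] + z[1] * w[1] for w in W]
    return math.log(sum(math.exp(t) for t in logits)) - logits[y]

# Balanced configuration: one (collapsed) point per class.
loss = sum(ce_instance_loss(zeta[y], y) for y in range(K)) / K
frob = math.sqrt(sum(wx ** 2 + wy ** 2 for wx, wy in W))  # = sqrt(K) * rho_W
bound = math.log(1 + (K - 1) * math.exp(-rho_z * math.sqrt(K) / (K - 1) * frob))
```

At this configuration the empirical loss and the lower bound coincide (both equal $\log(1 + 2e^{-3})$ here), matching the equality characterization of the theorem.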
\vskip1ex
\begin{rembold}
\label{rem:papayan}
Our result complements recent work by \citet{Papyan20a}, where it is empirically observed that training neural predictors as in Eq.~\eqref{eqn:predictor}\footnote{also including bias terms, i.e., $Wx +b$} leads to a within-class covariance collapse of the representations as we continue to minimize the CE~loss beyond zero training error.
By assuming representations to be Gaussian distributed around each class mean and taking the covariance collapse into account, the regular simplex arrangements of Theorem~\ref{thm:ce_bound_frob} arise.
Specifically, this is the optimal configuration from the perspective of recovering the correct class labels.
While the analysis in \cite{Papyan20a} is decoupled from the loss function and hinges on a probabilistic argument, we study what happens as the CE~loss attains its lower bound;
our result, in fact, implies the covariance collapse.
\end{rembold}
\vskip1ex
\begin{restatable}{cor}{rest@cor@ce@bound@r}
\label{cor:ce_bound_r}
Let $Z, Y, W$ be defined as in Theorem~\ref{thm:ce_bound_frob}. Upon requiring that $\forall y \in [K]: \|w_y\| \leq r_{\mathcal{W}}$, it holds that
\begin{align*}
\mathcal{L}_{\operatorname{CE}}(Z, W;\,Y)
\ge
\log \left(
1 + (K-1) \exp \left(
- \frac {K\,\rho_{\mathcal{Z}}\,r_{\mathcal{W}}}{K-1}
\right)
\right)
\end{align*}
with equality if and only if \ref{thm:ce_bound_frob:c1} and \ref{thm:ce_bound_frob:c2} from Theorem~\ref{thm:ce_bound_frob} are satisfied and condition \ref{thm:ce_bound_frob:c3} changes to
\begin{enumerate}[label=(C\arabic*r), labelindent=13pt,leftmargin=!,labelwidth=\widthof{\ref{last-item}}, start=3]
\item
\label{cor_ce_bound_r:c3}
$\forall y \in \mathcal{Y}: w_y =\frac{r_{\mathcal{W}}}{\rho_{\mathcal{Z}}}\zeta_y$\enspace.
\end{enumerate}
\end{restatable}
Notably, a special case of Corollary~\ref{cor:ce_bound_r} appears in Proposition 2 of \citet{Wang17a}, covering the case where $\forall n: z_n = w_{y_n}$ and $\forall y \in \mathcal{Y}: \|w_y\|=l$, i.e., equinorm weights and already collapsed classes.
Corollary~\ref{cor:ce_bound_r} obviates these constraints and provides a more general result, only assuming that $\forall n: \|z_n\|\leq \rho_{\mathcal{Z}}$ and $\forall y: \|w_y\|\leq r_{\mathcal{W}}$.
However, constraining the norm of the weights seems artificial as, in practice, the weights are typically subject to an additional $L_2$ penalty.
Corollary~\ref{cor:ce_bound_wd} directly addresses this connection, showing that applying an $L_2$ penalty of the form $\lambda \|W\|_F^2$ eliminates the necessity of an explicit norm constraint.
\vskip2ex
\begin{restatable}{cor}{rest@cor@ce@bound@wd}
\label{cor:ce_bound_wd}
Let $Z, Y, W$ be defined as in Theorem~\ref{thm:ce_bound_frob}.
For the $L_2$-regularized objective $\mathcal{L}_{\operatorname{CE}}(Z, W;\,Y) + \lambda \|W\|_F^2$ with $\lambda>0$, it holds that
\begin{align*}
& \mathcal{L}_{\operatorname{CE}}(Z, W;\,Y) + \lambda \|W\|_F^2 \\
& \ge
\log \left(
1 + (K-1) \exp \left(
- \rho_\mathcal{Z} \frac {{K}}{K-1}
r_\mathcal{W}(\rho_{\mathcal{Z}}, \lambda)
\right)
\right) \\
& ~~~~+
\lambda K r_\mathcal{W}(\rho_{\mathcal{Z}}, \lambda)^2\enspace,
\end{align*}
where $r_\mathcal{W}(\rho_\mathcal{Z},\lambda)>0$ denotes the unique solution, in $x$, of
\begin{equation*}
0 =
K\left(2 \lambda x-\frac{\rho_\mathcal{Z} }{\exp(\frac{K \rho_\mathcal{Z} x}{K-1})+K-1}\right)
\enspace.
\end{equation*}
Equality is attained in the bound if and only if \ref{thm:ce_bound_frob:c1} and \ref{thm:ce_bound_frob:c2} from Theorem~\ref{thm:ce_bound_frob} are satisfied and \ref{thm:ce_bound_frob:c3} changes to
\begin{enumerate}[label=(C\arabic*wd), labelindent=20pt,leftmargin=!,labelwidth=\widthof{\ref{last-item}}, start=3]
\item
\label{cor_ce_bound_wd:c3}
$\forall y \in \mathcal{Y}: w_y = \frac{r_\mathcal{W}(\rho_{\mathcal{Z}},\lambda)}{\rho_{\mathcal{Z}}}\zeta_y \enspace.$
\end{enumerate}
\end{restatable}
Corollary~\ref{cor:ce_bound_wd} differs from Corollary~\ref{cor:ce_bound_r} in that the characterization of $w_y$ depends on $r_{\mathcal{W}}(\rho_{\mathcal{Z}}, \lambda)$, i.e., a function of the norm constraint, $\rho_\mathcal{Z}$, on the points and the regularization strength $\lambda$.
While $r_{\mathcal{W}}(\rho_{\mathcal{Z}}, \lambda)$ has, to the best of our knowledge, no closed-form solution, it can be solved numerically.
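For example, $r_{\mathcal{W}}(\rho_{\mathcal{Z}},\lambda)$ can be obtained by simple bisection, since the left-hand side of the defining equation is strictly increasing in $x$ (a minimal sketch; the overall factor $K$ in the corollary's equation does not affect the root):

```python
import math

def r_w(rho_z, lam, K, tol=1e-12):
    """Solve 0 = 2*lam*x - rho_z / (exp(K*rho_z*x/(K-1)) + K - 1) for x > 0.
    f(0) = -rho_z/K < 0 and f is strictly increasing, so bisection applies."""
    c = K * rho_z / (K - 1)
    f = lambda x: 2 * lam * x - rho_z / (math.exp(c * x) + K - 1)
    lo, hi = 0.0, 1.0
    while f(hi) < 0:                      # grow bracket until f changes sign
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)
```

Plugging the returned value into the bound of Corollary~\ref{cor:ce_bound_wd} then gives the theoretical optimum for a given $\rho_\mathcal{Z}$ and $\lambda$.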
Fig.~\ref{fig:toy_sc_vs_ce} illustrates the attained regular simplex configuration, on a toy example, in case of added $L_2$ regularization.
It is important to note that the assumed norm-constraint on points in $\mathcal{Z}$ is not purely theoretical.
In fact, such a constraint often arises\footnote{although it might not be explicitly enforced}, e.g., via batch normalization \cite{Ioffe15a} at the last layer of a network implementing $\varphi_\theta$.
While one could, in principle, derive a normalization dependent bound for the CE loss, it is unclear (to the best of our knowledge) if a regular simplex solution satisfying the corresponding equality conditions always exists.
\textbf{Numerical Simulation}
To empirically assess our theoretical results, we take the toy example from Fig.~\ref{fig:toy_sc_vs_ce}, where we minimize (via gradient descent) the $L_2$ regularized CE~loss over $W$ and $Z$ with $\forall n: \|z_n\|\leq 1$. This setting corresponds to having an \emph{ideal} encoder, $\varphi$, that can realize any configuration of points and matches the assumptions of Corollary~\ref{cor:ce_bound_wd}.
Fig.~\ref{fig:ce_cor2_assessment} (\emph{right}) shows that the lower bound, for varying values of the regularization strength $\lambda$, closely matches the empirical loss.
Additionally, Fig.~\ref{fig:ce_cor2_assessment} (\emph{left}) shows a direct comparison of the empirical weight average, $\|\overline{w}\|$, \emph{vs.} the corresponding theoretical value of $\|w_y\|$ (which is equal for all $y$ in case of minimal loss).
These experiments empirically confirm that conditions \ref{thm:ce_bound_frob:c1} and \ref{thm:ce_bound_frob:c2}, as well as the adapted condition \ref{cor_ce_bound_wd:c3} from Corollary~\ref{cor:ce_bound_wd} are satisfied.
\begin{figure}[t!]
\centering{
\includegraphics[width=0.49\columnwidth]{figures/cor2/cor2_toy_w.png}\hfill
\includegraphics[width=0.49\columnwidth]{figures/cor2/cor2_toy_L.png}}
\caption{Numerical simulation for Corollary~\ref{cor:ce_bound_wd} (on the toy data of Fig.~\ref{fig:toy_sc_vs_ce}), as a function of the $L_2$ regularization strength $\lambda$. The \emph{left} plot shows the theoretical norm of $w_y$ (which is equal for all $y$ at minimal loss) vs. the observed mean norm of the three weights. The \emph{right} plot shows the theoretical bound vs. the empirical $L_2$ regularized CE~loss.\label{fig:ce_cor2_assessment}}
\end{figure}
In \S\ref{sec:experiments}, we will see that the sought-for regular simplex configurations actually arise (with varying quality) when minimizing the $L_2$ regularized CE~loss for a ResNet-18 trained on popular vision benchmarks.
\subsection{Supervised Contrastive Loss}
\label{subsection:analysis_sc}
An analysis of the SC~loss, similar to \S\ref{subsection:analysis_ce}, is less straightforward. In fact, as the loss is defined over \emph{batches}, we cannot simply sum up per-instance losses to characterize the ideal $N$ point configuration. Instead, we need to consider \emph{all} batch configurations of a specific size $b \in \mathbb{N}$.
We next state our lower bound for the SC~loss with the corresponding equality conditions.
\begin{restatable}{thm}{thm@supcon}
\label{thm:supcon}
Let $\rho_\mathcal{Z}>0$ and let $\mathcal{Z} = \mathbb{S}_{\rho_\mathcal{Z}}^{h-1}$.
Further, let $Z=(z_1,\ldots,z_N) \in \mathcal{Z}^N$ be an $N$ point configuration with labels $Y=(y_1,\ldots,y_N) \in [K]^N$. If the label configuration $Y$ is balanced, it holds that
\begin{align*}
& \mathcal{L}_{\operatorname{SC}}(Z;Y) \\
& \ge
\sum_{l=2}^{b}
l\, M_l
\log
\left(
l - 1 + (b-l)
\exp \left(
- \frac{K\rho_{\mathcal{Z}}^2}{K-1}
\right)
\right)\enspace,
\end{align*}
where
\[
M_l = \sum_{y\in\mathcal{Y}} |\set{ {B \in \mathcal{B}}:~ |B_y|=l }|\enspace.
\]
Equality is attained if and only if the following conditions are satisfied. There are $\zeta_1, \dots, \zeta_{K} \in \mathbb{R}^h$ such that:
\begin{enumerate}[labelindent=8pt,leftmargin=!,label=(C\arabic*),labelwidth=\widthof{\ref{last-item}}]
\item\label{con:supcon:1}
$\forall n \in [N]: z_n = \zeta_{y_n}$
\item\label{con:supcon:2}
$\{\zeta_y\}_y$ form a $\rho_{\mathcal{Z}}$-sphere-inscribed regular simplex
\end{enumerate}
\end{restatable}
Theorem~\ref{thm:supcon} characterizes the geometric configuration of points in $Z$ at minimal loss.
We see that the equality conditions \ref{thm:ce_bound_frob:c1} and \ref{thm:ce_bound_frob:c2} from Theorem~\ref{thm:ce_bound_frob} equally appear in Theorem~\ref{thm:supcon}.
This implies that, at minimal loss, each class collapses to a point and these points form a regular simplex.
Considering the guiding principle of the SC~loss, i.e., separating instances from distinct classes and attracting instances from the same class, it seems plausible that constraining instances to the hypersphere would yield an evenly distributed arrangement of classes.
However, a closer look at the SC~loss reveals that this is not obvious by any means.
In contrast to the physical (electrostatic) intuition, the involved attraction and repulsion forces are not pairwise, but depend on groups of samples, i.e., batches.
Naively, one could try to characterize the loss minimizing configuration of points for each batch separately.
Yet, this is destined to fail, as the minimizing arrangement of points in each batch depends on the label configuration; an example is visualized in Fig.~\ref{fig:minimizers}.
Hence, there is no simultaneous minimizer for all batch-wise losses.
It is therefore crucial to understand the interaction of the attraction and repulsion forces across different batches.
We sketch the argument of the proof below and refer to the supplementary material for details.
\begin{figure}[t!]
\centering{
\includegraphics[width=\columnwidth]{figures/minimizers/minimizers_rotated.pdf}}
\vspace{-0.68cm}
\caption{Illustration of \emph{loss minimizing} point configurations of the batch-wise SC~loss for varying label configurations and a batch size $b=9$.
Colored numbers indicate the \emph{multiplicity} of each class in the batch.}
\label{fig:minimizers}
\vspace{-8pt}
\end{figure}
\textbf{Proof Idea for Theorem~\ref{thm:supcon}}
The key idea is to decouple the attraction and repulsion effects from the batch-wise formulation of the loss.
Since each batch-wise loss contribution is actually a sum of label-wise contributions, the supervised contrastive loss can be considered as a sum over the Cartesian product of the set of all batches with the set of all labels.
We partition this Cartesian product into appropriately constructed subsets, i.e., by label multiplicity. This allows us to apply Jensen's inequality to each sum over such a subset.
In the resulting lower bound, the repulsion and attraction effects are still allocated to the batches, but encoded more tangibly, i.e., linearly, as sums of inner products.
Therefore, their interactions can be analyzed by a combinatorial argument which hinges on the balanced class label assumption.
Minimality of the respective sums is attained if and only if (1) all classes are collapsed and (2) the mean of all instances (i.e., ignoring the class label) is zero.
The simplex arrangement arises as a consequence of (1) \& (2) and, additionally, the equality conditions yielded by the previous application of Jensen's inequality, i.e., all intra-class and inter-class inner products are equal.
\textbf{Numerical Simulation}
For a large number of points, numerical computation of the bound in Theorem~\ref{thm:supcon} is infeasible due to the combinatorial growth of the number of batches (even for the toy-example of Fig.~\ref{fig:toy_sc_vs_ce} with 300 points). Hence, we consider a smaller setup. In particular, we take $K=3$ classes, each consisting of $4$ points on the unit circle $\mathbb{S}^{1}$, i.e., $Z=(z_1,\ldots,z_{12})$, $h=2$ and $\rho = 1$.
For a batch size of $b=9$, this setup yields a total of 167,960 batches, i.e., the number of combinations with replacement.
We initialize the $z_i$ as the projection of points sampled from a standard Gaussian distribution and
then minimize the SC~loss (by stochastic gradient descent for 100k iterations) over the points in $Z$. Fig.~\ref{fig:sc_eval} (\emph{left}) shows that, at convergence,
the lower bound on $\mathcal{L}_{\operatorname{SC}}(Z;Y)$ closely matches the empirical loss. Fig.~\ref{fig:sc_eval} (\emph{right}) shows the SC~loss over \emph{all} batches, highlighting the different loss levels depending on the label configuration in the batch (cf. Fig.~\ref{fig:minimizers}).
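The quoted number of batches can be reproduced directly (a small sketch of the counting step only):

```python
import math
from itertools import combinations_with_replacement

# N = 12 points, batch size b = 9: batches are index multi-sets, i.e.,
# combinations with replacement, of which there are C(N + b - 1, b).
N, b = 12, 9
count = sum(1 for _ in combinations_with_replacement(range(N), b))
```

The combinatorial growth of this count is precisely what makes evaluating the bound infeasible for larger point sets.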
\begin{figure}[t!]
\begin{minipage}{\columnwidth}
\begin{minipage}[b]{0.41\textwidth}
\centering
\begin{small}
\raisebox{0.8cm}{
\begin{tabular}{r|c}
\hline
& $\mathcal{L}_{\operatorname{SC}}(Z;Y)$\\
\hline
Empirical & 12.12016 \\
\textbf{Theory} & 12.12015\\
\hline
\end{tabular}}
\end{small}
\end{minipage}
\hfill
\begin{minipage}[b]{0.52\textwidth}
\centering
\includegraphics[width=0.99\textwidth]{figures/sc_eval/hist.pdf}
\end{minipage}
\end{minipage}
\vspace{-0.08cm}
\caption{
Numerical optimization of the SC~loss for toy data on $\mathbb{S}^1$.
\emph{Left}: Comparison of mean batch-wise loss with the lower bound from Theorem~\ref{thm:supcon}.
\emph{Right}: Histogram (over all $\approx$170k batches) of the batch-wise loss values at convergence,
showing the inhomogeneity of minimal loss values across batch configurations (cf. Fig.~\ref{fig:minimizers}).
\label{fig:sc_eval}}
\end{figure}
\subsection{Setup}
As our choice of $\varphi_\theta$, we select a ResNet-18 \cite{He16a} model, i.e., all layers up to the linear classifier.
Experiments are conducted on CIFAR10/100, for which this choice yields $512$-dim. representations (and $K\leq h+1$ holds in all cases).
We either compose $\varphi_\theta$ with a linear classifier and train with the CE~loss function (denoted as \textbf{CE}), or we directly optimize $\varphi_\theta$ via the SC~loss function, then freeze the encoder parameters and train a linear classifier on top (denoted \textbf{SC}).
In case of the latter, outputs of $\varphi_\theta$ are always projected onto a hypersphere of radius $\rho=\nicefrac{1}{\sqrt{\tau}}$ (with $\tau=0.1$), which accounts for scaling the inner-products by the temperature parameter $\nicefrac{1}{\tau}$ in the original formulation of \citet{Koshla20a}.
\textcolor{black}
{We want to stress that while Theorem \ref{thm:supcon} holds for every $\rho>0$, the temperature crucially influences the optimization dynamics and needs to be tuned appropriately.}
For comparison, we also compose $\varphi_\theta$ with a \emph{fixed} linear classifier, in particular, a classifier with weights a-priori optimized towards a regular simplex arrangement. This is similar to \cite{Mettes20a}, except that we minimize the CE~loss (denoted as \textbf{CE-fix}) to learn predictors $W \circ \varphi_\theta$, as opposed to pulling outputs of $\varphi_\theta$ towards the fixed prototypes/weights.
Optimization is done via (mini-batch) stochastic gradient descent with $L_2$ regularization ($10^{-4}$) and momentum (0.9) for 100k iterations. The batch-size is fixed to 256 and the learning rate is annealed exponentially, starting from 0.1.
When using data augmentation, we apply random cropping and random horizontal flipping, each with probability $\nicefrac{1}{2}$.
\subsection{Theory vs. Practice}
\label{subsection:theory_vs_practice}
To provide a first impression of the extent to which the representations of the training data achieve the loss-minimizing geometric arrangement, we compare the empirical \textbf{CE} and \textbf{SC} loss values to the optima derived in \S\ref{sec:analysis}, using ResNet-18 models trained on CIFAR10 (with data augmentation).
The \textcolor{tabred}{theoretical}/\textcolor{tabblue}{empirical} losses are (1) \textcolor{tabred}{7.64e-5} vs. \textcolor{tabblue}{2.48e-4} (for \textbf{CE}) and (2) \textcolor{tabred}{824.487} vs. \textcolor{tabblue}{824.731} (for \textbf{SC}), where we estimate the empirical \textbf{SC} loss over 1k training batches. Notably, when optimizing for 500k iterations (instead of 100k; see supplementary material), the loss values continue to move closer to the optimum, but at very low speed. In particular, the loss values change to \textcolor{tabblue}{2.27e-4} (for \textbf{CE}) and \textcolor{tabblue}{824.523} (for \textbf{SC}), respectively. Overall, this suggests that out of these models, representations learned by minimizing the \textbf{SC} loss might arrange closer to the theoretically optimal configuration. As our results only cover the \emph{loss minimizers} and not (close to optima) level sets, the latter hypothesis is more of a first guess and not predicted by the theory.
For a closer look at the geometric arrangement of the representations (and classifier weights), we compute three statistics, all based on the cosine similarity $\gamma: \mathbb{R}^h \times \mathbb{R}^h \to [0,1]$, defined as
\begin{equation}
(x,y)\mapsto 1-\cos^{-1}\left(
\inprod{\nicefrac{x}{\|x\|}}{\nicefrac{y}{\|y\|}}\right)
/\pi\enspace.\label{eqn:cs}
\end{equation}
First, we measure the separation of the class representations via the cosine similarity among the class means, $\mu_1, \dots, \mu_K$, i.e., $\gamma(\mu_i,\mu_j)$ for $i \neq j$.
Second, for the CE~loss function, we compute the cosine similarity across the classifier weights, i.e., $\gamma(w_i,w_j), i\neq j$, quantifying their separation.
Third, to quantify class collapse, we compute the cosine similarity among all representations and their respective class means, i.e.,
$\gamma(\varphi(x_n),\mu_{y_n})$.
Note that our theoretical results imply that the classes should collapse and the pairwise similarities, as mentioned above, should be equal.
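The similarity statistic of Eq.~\eqref{eqn:cs} can be sketched as follows (pure-Python, with a round-off guard before the arccosine):

```python
import math

def gamma(x, y):
    """Similarity from Eq. (eqn:cs): (x, y) -> 1 - arccos(<x/|x|, y/|y|>)/pi."""
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(a * a for a in y))
    c = sum(a * b for a, b in zip(x, y)) / (nx * ny)
    c = max(-1.0, min(1.0, c))        # guard against floating-point round-off
    return 1.0 - math.acos(c) / math.pi
```

Note that identical directions give $\gamma=1$, orthogonal vectors give $\gamma=0.5$, and opposite directions give $\gamma=0$; for a unit-sphere-inscribed regular simplex with $K=3$ vertices, the pairwise inner product $-1/(K-1)=-1/2$ corresponds to $\gamma = 1 - \arccos(-1/2)/\pi = 1/3$.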
Fig.~\ref{fig:geometry} illustrates the distribution of the cosine similarities for the ResNet-18 model trained with different loss functions (and using data augmentation). We observe that the SC~loss leads to (1) an arrangement of the class means much closer to the ideal simplex configuration and (2) a tighter concentration of training representations around their class means. Furthermore, in case of the CE~loss, the weight arrangement reaches, on average, a regular simplex configuration, while the representations slightly deviate. When using a-priori fixed weights in a simplex configuration, i.e., \textbf{CE-fix}, the situation is similar, but the within-class spread is smaller. In general, the statistics are comparable between CIFAR10 and CIFAR100, only that the distribution of all computed statistics widens for models trained with \textbf{CE} on CIFAR100. We conjecture that the increase in the number of classes, combined with the joint optimization of $\varphi_\theta$ and $W$ complicates convergence to the loss minimizing state. Fig.~\ref{fig:geometry} (\emph{right}) further suggests that approaching this state positively correlates with generalization performance. Whether the latter is a general phenomenon, or may even have a theoretical foundation, is an interesting question for future work.
Finally, we draw attention to the comparatively large gap between the cosine similarities \emph{across} the class means and their theoretical prediction in case of models trained on CIFAR10 (Fig. \ref{fig:toy_sc_vs_ce}, \emph{top left}). The aforementioned gap indicates that the chosen encoder might not be powerful enough to arrange the representations on a sphere-inscribed regular simplex. In fact, a standard ResNet \cite{He16a} utilizes a ReLU activation function after each block, including the last block before the linear classifier. Therefore, the coordinates of representations obtained by the encoder part of a standard ResNet are always \emph{non-negative}, and so are the coordinates of the class means. Consequently, their inner products are non-negative as well, which corresponds to a minimal cosine similarity of 0.5 across the class means. Since the scalar products of vertices (considered as position vectors) of a unit sphere inscribed regular simplex with $K$ vertices are $- 1/(K-1)$, the deviation from the optimal class separation, resulting from the choice of encoder, is unnoticeable for models trained on CIFAR100 due to the large number of classes, i.e., $K=100$, but becomes apparent in case of CIFAR10 where $K=10$.
We suspect that architectures which do not implement the aforementioned non-negativity constraint in the encoder, e.g., the \emph{pre-activation} variants of ResNets \cite{He16b}, are capable of separating the classes to a larger extent and thus match the theoretical prediction more closely when trained on data with a small number of classes.
\subsection{Random Label Experiments}
\label{subsection:random_labels}
Despite the similarity of the loss minimizing geometric arrangements at the output of $\varphi_\theta$, for both (CE, SC) losses, we have seen (in Fig.~\ref{fig:geometry}) that the extent to which this ``optimal'' state is achieved differs.
These differences likely arise as a result of the underlying optimization dynamics, driven by the loss contribution of each batch. Notably, while the CE~loss decomposes into independent instance-wise contributions, the SC~loss does not (due to the interaction terms).
One way to explore this in greater detail is to study optimization behavior as a function of label corruption. Specifically, as label corruption (i.e., the fraction of randomly flipped labels) increases, it is interesting to track the number of iterations (\emph{time to fit}) needed to reach zero training error \cite{CZhang2017a}, as illustrated in Fig.~\ref{fig:ttf_exp}.
On both datasets, \textbf{CE} and \textbf{CE-fix} show an approximately linear growth, while \textbf{SC} shows a remarkably \emph{superlinear} growth. We argue that the latter primarily results from the profound interaction among instances in a batch. Intuitively, as the number of attraction terms for the SC~loss function scales quadratically with the number of samples per class, increasing the number of semantically confounding labels equally increases the complexity of the optimization problem. In contrast, for \textbf{CE} and \textbf{CE-fix}, semantically confounding labels only impose per-instance constraints instead.
\begin{figure}[t!]
\centering{
\includegraphics[width=0.49\columnwidth]{figures/overfit_data/tto_cifar10.pdf}\hfill
\includegraphics[width=0.49\columnwidth]{figures/overfit_data/tto_cifar100.pdf}}
\vspace{-0.1cm}
\caption{Time to fit for models of the form $W \circ \varphi_\theta$, based on ResNet-18 encoders, optimized under different loss functions.\label{fig:ttf_exp}}
\end{figure}
This equally explains why, on CIFAR10, \textbf{SC} cannot achieve zero error beyond 80\% corruption: fewer training instances per class (500 vs. 5,000) yield fewer pairwise intra-class constraints to be met.
\subsection{Definitions}
\label{subsection:definitions}
For our purposes, we define the CE~and SC~losses, respectively, as the loss over all $N$ instances in $Z$. In case of the CE~loss, this is the average over all instance losses; in case of the SC~loss, we sum over \emph{all} batches of size $b \in \mathbb{N}$.
While the normalizing constant is irrelevant for our results, we point out that normalizing the SC~loss would depend on the cardinality of \emph{all} multi-sets of size $b$.
\vskip2ex
\begin{restatable}[\textbf{Cross-entropy loss}]{defi}{rest@def@ce}
\label{def:ce}
Let $\mathcal{Z} \subseteq \mathbb{R}^h$ and let $Z$ be an $N$ point configuration, $Z = (z_1,\ldots,z_N) \in \mathcal{Z}^N$, with labels $Y=(y_1,\ldots,y_N) \in [K]^N$; let $w_y$ be the $y$-th row of the linear classifier's weight matrix $W \in \mathbb{R}^{K \times h}$. The cross-entropy loss $\mathcal{L}_{\operatorname{CE}}(\cdot, W;\,Y): \mathcal{Z}^N \to \mathbb{R}$ is defined as
\begin{equation}
\ Z \mapsto \frac{1}{N}\sum\limits_{n=1}^N \ell_{\operatorname{CE}}(Z, W; Y, n)
\label{def:lce}
\end{equation}
with $\ell_{\operatorname{CE}}(\cdot, W; Y, n): \mathcal{Z}^N \to \mathbb{R}$ given by
\begin{equation}
\ell_{\operatorname{CE}}(Z, W; Y, n) =
-
\log
\left(
\frac
{
\exp(
\dm{z_n}{w_{y_n}})
}
{
\sum\limits_{l=1}^K
\exp(
\dm{z_n}{w_l})
}
\right)
\enspace.
\label{def:llce}
\end{equation}
\end{restatable}
\begin{restatable}[\textbf{Supervised contrastive loss}]{defi}{rest@def@sc}
\label{def:sc}
Let $\mathcal{Z} = \mathbb{S}^{h-1}_{\rho_\mathcal{Z}} \subseteq \mathbb{R}^h$ and let $Z$ be an $N$ point configuration, $Z = (z_1,\ldots,z_N) \in \mathcal{Z}^N$, with labels $Y=(y_1,\ldots,y_N) \in [K]^N$.
For a fixed batch size $b\in \mathbb{N}$, we define
\begin{equation}
\mathcal{B} = \{\{\mskip-5mu\{ n_1, \dots, n_b \}\mskip-5mu\}: n_1, \dots, n_b \in [N]\}
\label{def:index_multi_set}
\end{equation}
as the set of index multi-sets of size $b$.
The supervised contrastive loss $\mathcal{L}_{\operatorname{SC}}(\cdot\ ; Y): \mathcal{Z}^N \to \mathbb{R}$ is defined as
\begin{equation}
Z \mapsto \sum\limits_{B \in \mathcal{B}}
\ell_{\operatorname{SC}}(Z; Y, B)
\label{def:lsc}
\end{equation}
with $\ell_{\operatorname{SC}}(\cdot\ ; Y,B):\mathcal{Z}^N \to \mathbb{R}$ given by\footnote{for notational reasons, we set $\frac{0}{0}=0$ when $|B_y|=1$}
\begin{equation}
-\sum\limits_{i \in B}
\frac{\bb1_{\set{|B_{y_i}|>1}}}{|B_{y_i}|-1}\hspace{-0.1cm}
\sum\limits_{j \in B_{y_i}\setminus \{\mskip-5mu\{ i \}\mskip-5mu\}} \hspace{-0.2cm}
\log
\left(
\frac
{\exp\big(\dm{z_i}{z_j}\big)}
{
\sum\limits_{k \in B \setminus \{\mskip-5mu\{ i \}\mskip-5mu\}}\hspace{-0.2cm}
\exp\big(\dm{z_i}{z_k}\big)
}
\right)
\label{def:llsc}
\end{equation}
where $B_{y_i} = \{\mskip-5mu\{ j \in B: y_{j} = y_i \}\mskip-5mu\}$ denotes the multi-set of indices in a batch $B \in \mathcal{B}$ with label equal to $y_i$.
\footnote{
{Definition~\ref{def:sc} differs from the original definition by \citet{Koshla20a} in the following aspects:
First, we do not explicitly duplicate batches (e.g., by augmenting each instance).
For fixed index $n$, this does not guarantee that at least one other instance with label equal to $y_n$ exists.
However, this is formally irrelevant, as the contribution to the summation is zero in that case.
Nevertheless, batch duplication is subsumed in our definition.
Second, we adapt the definition to multi-sets, allowing for instances to occur more than once.
If batches are drawn with replacement, this could indeed happen in practice.
Third, we omit scaling the inner products $\dm{\cdot}{\cdot}$ in Eq.~\eqref{def:llsc} by a temperature parameter $1/\tau, \tau >0$, as this complicates the notation. Instead we implicitly subsume this scaling into the radius $\rho_{\mathcal{Z}}$ of~ $\mathbb{S}^{h-1}_{\rho_{\mathcal{Z}}}$.}
}
\end{restatable}
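The batch-wise loss of Eq.~\eqref{def:llsc} can be transcribed directly (a minimal sketch; batch \emph{positions}, not index values, are excluded from the sums, so repeated indices in a multi-set are handled correctly):

```python
import math

def sc_batch_loss(Z, Y, B):
    """l_SC(Z; Y, B) from Eq. (def:llsc). Z: points, Y: labels,
    B: a list of (possibly repeated) indices into Z."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    loss = 0.0
    for pos, i in enumerate(B):
        # B_{y_i} \ {{i}}: same-label occurrences, excluding this position.
        same = [j for q, j in enumerate(B) if q != pos and Y[j] == Y[i]]
        if not same:                       # |B_{y_i}| = 1 contributes zero
            continue
        others = [j for q, j in enumerate(B) if q != pos]
        denom = sum(math.exp(dot(Z[i], Z[k])) for k in others)
        for j in same:                     # len(same) = |B_{y_i}| - 1
            loss -= math.log(math.exp(dot(Z[i], Z[j])) / denom) / len(same)
    return loss
```

Summing `sc_batch_loss` over all index multi-sets of size $b$ yields $\mathcal{L}_{\operatorname{SC}}(Z;Y)$ as in Eq.~\eqref{def:lsc}.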
As the regular simplex, inscribed in a hypersphere, will play a key role in our results, we formally define this object next:
\vskip1ex
\begin{restatable}[\textbf{$\rho$-Sphere-inscribed regular simplex}]{defi}{rest@def@simplex}
\label{def:simplex}
Let $h,K \in \mathbb{N}$ with $K\le h+1$.
We say that $\zeta_1, \dots, \zeta_K \in \mathbb{R}^h$ form the vertices of a regular simplex inscribed in the hypersphere of radius $\rho>0$, if and only if the following conditions hold:
\vspace{-0.2cm}
\begin{enumerate}
[labelindent=7pt,leftmargin=!,label=(S\arabic*),labelwidth=\widthof{\ref{last-item}}]
\item\label{def:simplex:s1} $\sum_{i \in [K]} \zeta_i = 0$
\item\label{def:simplex:s2} $\| \zeta_i \| = \rho$\, for $i \in [K]$
\item\label{def:simplex:s3} $\exists d \in \mathbb{R}: d = \inprod{\zeta_i}{\zeta_j}$\, for $1 \leq i < j \leq K$
\end{enumerate}
\end{restatable}
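An explicit configuration satisfying Definition~\ref{def:simplex} can be obtained from the centered standard basis of $\mathbb{R}^K$. The following NumPy sketch (our illustration, not part of the paper; the helper name `regular_simplex` is ours) builds the vertices and checks (S1)-(S3) numerically:

```python
import numpy as np

def regular_simplex(K, h, rho):
    """Vertices of a regular simplex inscribed in the rho-sphere of R^h (K <= h+1)."""
    assert K <= h + 1
    E = np.eye(K) - 1.0 / K                    # centered standard basis, rank K-1
    _, _, Vt = np.linalg.svd(E)
    V = E @ Vt[:K - 1].T                       # isometric coordinates in R^{K-1}
    V = np.pad(V, ((0, 0), (0, h - (K - 1))))  # embed into R^h
    return V * (rho / np.linalg.norm(V[0]))    # rescale all vertices to radius rho

Z = regular_simplex(4, 3, 1.0)
assert np.allclose(Z.sum(0), 0)                     # (S1)
assert np.allclose(np.linalg.norm(Z, axis=1), 1.0)  # (S2)
G = Z @ Z.T                                         # (S3): equal off-diagonal entries
assert np.allclose(G[~np.eye(4, dtype=bool)], -1.0 / 3.0)
```

The off-diagonal Gram entries equal $-\rho^2/(K-1)$, which is the constant $d$ in (S3).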
Fig.~\ref{fig:toy_simplex} shows such configurations (for $K=2,3,4$) on $\mathbb{S}^2$.
\begin{figure}[h!]
\centering{
\includegraphics[scale=1.0]{figures/simplex/figure-crop}
}
\caption{Regular simplices inscribed in $\mathbb{S}^2$.\label{fig:toy_simplex}}
\end{figure}
\begin{rembold}
The assumption $K\le h+1$ is crucial, as it is a necessary and sufficient condition for the existence of the regular simplex.
In our context, $K$ denotes the number of classes and $K\leq h+1$ is typically satisfied, as the output spaces of encoders in contemporary neural networks are high-dimensional, e.g., 512-dimensional for a ResNet-18 on CIFAR10/100.
If it is violated, then the bounds derived in \S\ref{sec:analysis} still hold, but are not tight. Studying the loss minimizing configurations in this regime is much harder. Even for the related and more studied Thomson problem of minimizing the potential energy of $K$ equally charged particles on the 2-dimensional sphere, the minimizers are only known for $K\in \set{2,3,4,5,6,12}$ \cite{Borodachov19a}.
\end{rembold}
\section{Proofs for Section~\ref{subsection:analysis_ce}}
In this section, we will prove Theorem~\ref{thm:ce_bound_frob} of the main manuscript and its corollaries. First, we recall the main definitions of the paper and introduce an auxiliary function. \\
Throughout this section, the following objects will appear repeatedly and are thus introduced once:
\begin{itemize}
\item $h, N, K \in \mathbb{N}$
\item $\mathcal{Z} = \mathbb{R}^h$
\item $\mathcal{Y} = \{1, \dots, K\} = [K]$
\end{itemize}
We additionally assume $|\mathcal{Y}| = K \leq h +1$.
\textcolor{restated}{\rest@def@ce*}
\textcolor{restated}{\rest@def@simplex*}
\begin{Sdefi}[Auxiliary function $S$]
\label{def:ce_aux_s}
Let $\mathcal{Z} = \mathbb{R}^h$, then we define
\begin{align*}
S(\,\cdot\,,\,\cdot\,;\,Y): \mathcal{Z}^N \times \mathcal{Z}^K &\to \mathbb{R} \\
(Z,W) &\mapsto
\frac 1 N \frac K {K-1}
\sum_{n \in [N]}
\inprod{z_n}{\bar w - w_{y_n}}
\enspace,
\end{align*}
where $\bar{w} = \frac 1 {|\mathcal{Y}|} \sum_{y\in \mathcal{Y}} w_y$.
\end{Sdefi}
\begin{Slem}
\label{lem:ce_bound_1}
Let $h, K, N\in \mathbb{N}$, $\mathcal{Z} = \mathbb{R}^h$. Further, let
\begin{align*}
Z &= (z_n)_{n=1}^N \in \mathcal{Z}^N,\\
W &= (w_y)_{y=1}^K \in \mathcal{Z}^K,\\
Y &= (y_n)_{n=1}^N \in \mathcal{Y}^N\enspace.
\end{align*}
It holds that
\[
\mathcal{L}_{\operatorname{CE}}(Z,W;Y)
\ge \log \Big(
1 + (K-1) \exp \big(
S(Z, W;\,Y)
\big)
\Big)\enspace,
\]
where $S$ is as in Definition~\ref{def:ce_aux_s}.
Equality is attained if and only if the following conditions hold:
\begin{enumerate}[label={(P\arabic*)},labelindent=10pt,leftmargin=!,labelwidth=\widthof{\ref{last-item}}]
\item $\forall n \in [N]$ $\exists M_n$ such that $\forall y\in \mathcal{Y}\setminus\set{y_n}$ all inner products $\inprod{z_n}{w_y} = M_n$ are equal.
\label{con:ce_bound:P1}
\item $\exists M\in \mathbb{R}$ such that $\forall n\in [N]$ it holds that $\sum_{\substack{y \in \mathcal{Y} \\ y \neq y_n}} (\inprod{z_n}{w_y} - \inprod{z_n}{w_{y_n}}) = M$\enspace.
\label{con:ce_bound:P2}
\end{enumerate}
\end{Slem}
\begin{proof}
Using the identities $\log(t) = - \log(1/t)$ and $\exp(a)/\exp(b)= \exp (a -b)$, rewrite the cross-entropy loss in the equivalent form
\begin{equation}
\mathcal{L}_{\operatorname{CE}}(Z,W;Y)
= \frac 1 N \sum_{n \in [N]} \log \left(
1 + \sum_{\substack{y \in \mathcal{Y} \\ y \neq y_n}}
\exp \left(
\inprod{z_n}{w_y} - \inprod{z_n}{w_{y_n}}
\right)
\right)
\enspace.
\end{equation}
In order to bound $\mathcal{L}_{\operatorname{CE}}$ from below, we apply Jensen's inequality twice; first for the convex function $t \mapsto \exp(t)$ and then for the convex function $t \mapsto \log(1+\exp(t))$:
\begin{align}
\mathcal{L}_{\operatorname{CE}}(Z,W;Y)
&\stackrel{\ref{con:ce_bound:P1}}{\ge}
\frac 1 N \sum_{n \in [N]} \log \left(
1 + (K-1) \exp \left(
\frac {1}{K-1}
\sum_{\substack{y \in \mathcal{Y} \\ y \neq y_n}}
\left(\inprod{z_n}{w_y} - \inprod{z_n}{w_{y_n}}\right)
\right)
\right)
\\
&\stackrel{\ref{con:ce_bound:P2}}{\ge}
\log \left(
1 + (K-1) \exp \left(\frac 1 N \frac 1 {K-1} \sum_{n \in [N]}\sum_{\substack{y \in \mathcal{Y} \\ y \neq y_n}}
\left( \inprod{z_n}{w_y} - \inprod{z_n}{w_{y_n}} \right)
\right)
\right)
\enspace.
\label{lem:ce_bound_1:eq1}
\end{align}
By the linearity of the inner product and as the addend for $y = y_n$ equals zero, the exponent in Eq.~\eqref{lem:ce_bound_1:eq1} is simply $S(Z,W;Y)$, which proves the bound.
The equality condition is obtained from the combination of the equality cases in both applications of Jensen's inequality. These are:
\begin{enumerate}
[label={(P\arabic*)},labelindent=10pt,leftmargin=!,labelwidth=\widthof{\ref{last-item}}]
\item[\ref{con:ce_bound:P1}] $\forall n \in [N]$ $\exists M_n$ such that $\forall y\in \mathcal{Y}\setminus\set{y_n}$ all inner products $\inprod{z_n}{w_y} = M_n$ are equal.
\item[\ref{con:ce_bound:P2}] $\exists M\in \mathbb{R}$ such that $\forall n\in [N]$ it holds that $\sum_{\substack{y \in \mathcal{Y} \\ y \neq y_n}} (\inprod{z_n}{w_y} - \inprod{z_n}{w_{y_n}} )= M$.
\end{enumerate}
\end{proof}
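As a quick numerical sanity check (ours, not part of the manuscript), the bound of the lemma above can be evaluated on random, not necessarily balanced, data; all names below are ad hoc:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, h = 12, 4, 6
Z = rng.normal(size=(N, h))          # representations (rows z_n)
W = rng.normal(size=(K, h))          # classifier vectors (rows w_y)
Y = rng.integers(K, size=N)          # arbitrary labels

logits = Z @ W.T                     # entries <z_n, w_y>
L_ce = -(logits[np.arange(N), Y]
         - np.logaddexp.reduce(logits, axis=1)).mean()

wbar = W.mean(axis=0)
S = (K / (K - 1)) * np.mean(np.sum(Z * (wbar - W[Y]), axis=1))
bound = np.log1p((K - 1) * np.exp(S))
assert L_ce >= bound - 1e-12         # L_CE >= log(1 + (K-1) exp(S))
```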
\begin{Slem}
\label{lem:ce_bound_2}
Let $h, K, N\in \mathbb{N}$, $\rho_{\mathcal{Z}}>0$ and $\mathcal{Z} = \{z \in \mathbb{R}^h: \|z\| \leq \rho_{\mathcal{Z}}\}$. Further, let
\begin{align*}
Z &= (z_n)_{n=1}^N \in \mathcal{Z}^N\enspace,\\
W &= (w_y)_{y=1}^K \in (\mathbb{R}^h)^K\enspace,\\
Y &= (y_n)_{n=1}^N \in \mathcal{Y}^N\enspace.
\end{align*}
If the class configuration $Y$ is balanced, i.e., for all ${y \in \mathcal{Y}}$,
$N_y = \big|\{ i \in [N]: y_i = y\}\big| = \nicefrac{N}{K}$,
then
\begin{equation*}
S(Z, W;\,Y)
\geq
- \rho_\mathcal{Z} \frac{\sqrt{K}}{K-1} \|W\|_F\enspace,
\end{equation*}
where $\norm{\cdot}_F$ denotes the Frobenius norm.
We get equality if and only if the following conditions hold:
\begin{enumerate}[label={(P\arabic*)},labelindent=10pt,leftmargin=!,labelwidth=\widthof{\ref{last-item}}]
\setcounter{enumi}{2}
\item\label{con:ce_bound:P3} $\forall n\in [N]$ $\exists \lambda_n \le 0$ such that $z_n = \lambda_n (\bar w - w_{y_n})$
\item\label{con:ce_bound:P4} $\forall n: \norm{z_n} = \rho_\mathcal{Z}$
\item\label{con:ce_bound:P5} $\forall y \in \mathcal{Y}$ the terms $\norm{\bar w}^2+ \norm{w_y}^2 - 2 \inprod{\bar w}{w_y}$ are equal
\item\label{con:ce_bound:P6} $\bar w = 0$
\end{enumerate}
\end{Slem}
\vfill
\pagebreak
\begin{proof}
We will bound the function $S$ from Definition~\ref{def:ce_aux_s}, using the norm constraint on each $z_n \in \mathcal{Z}$. In particular,
\begin{align*}
S(Z,W;Y)
& = \frac 1 N \frac K {K-1}
\sum_{n \in [N]}
\inprod{z_n}{ \bar w - w_{y_n}}
\\
&\stackrel{\ref{con:ce_bound:P3}}{\ge}
- \frac 1 N \frac K {K-1}
\sum_{n \in[N]}
\norm{z_n}\norm{\bar w - w_{y_n}}
\\
&\stackrel{\ref{con:ce_bound:P4}}{\geq}
- \frac 1 N \frac K {K-1}
\sum_{n \in [N]}
\rho_\mathcal{Z} \norm{\bar w - w_{y_n}}
\\
& = - \frac 1 N \frac K {K-1} \rho_\mathcal{Z}
\sum_{y \in \mathcal{Y}}
\norm{\bar w - w_{y}}
\left(
\sum_{\substack{n \in [N] \\ y_n= y}}
1
\right)
\\
& = - \frac 1 N \frac K {K-1} \rho_\mathcal{Z}
\sum_{y \in \mathcal{Y}}
\norm{\bar w - w_{y}}
N_y
\\
&= - \frac 1 N \frac K {K-1} \rho_\mathcal{Z}
\frac N K
\sum_{y \in \mathcal{Y}}
\norm{\bar w - w_{y}}
\tag{by assumption $N_y = \frac N K$}
\\
&= - \frac 1 {K-1} \rho_\mathcal{Z}
\sum_{y\in\mathcal{Y}}
\sqrt{
\norm{\bar w}^2 + \norm{w_y}^2 - 2 \inprod{\bar w}{w_y}
}
\\
&\stackrel{\ref{con:ce_bound:P5}}{\ge}
- \rho_\mathcal{Z} \frac K {K-1}
\sqrt{ \frac 1 K
\sum_{y \in \mathcal{Y}} \left(
\norm{\bar w}^2 + \norm{w_y}^2 - 2 \inprod{\bar w}{w_y}
\right)
}
\\
&= - \rho_\mathcal{Z} \frac K {K-1}
\sqrt{ \frac 1 K \left(
K \norm{\bar w}^2 +\sum_{y \in \mathcal{Y}} \norm{w_y}^2 - 2 \inprod{\bar w}{\sum_{y \in \mathcal{Y}}w_y}
\right)
}
\\
&=
- \rho_\mathcal{Z} \frac K {K-1}
\sqrt{ \frac 1 K
\sum_{y \in \mathcal{Y}} \norm{w_y}^2 - \norm{\bar w}^2
}
\\
& \stackrel{\ref{con:ce_bound:P6}}{\ge}
- \rho_\mathcal{Z} \frac K {K-1}
\sqrt{ \frac 1 K
\sum_{y \in \mathcal{Y}} \norm{w_y}^2
}
\\
&=
- \rho_\mathcal{Z} \frac{\sqrt{K}}{K-1} \|W\|_F
\enspace,
\end{align*}
where
\begin{itemize}[label={(P\arabic*)},labelindent=10pt,leftmargin=!,labelwidth=\widthof{\ref{last-item}}]
\setcounter{enumi}{2}
\item[\ref{con:ce_bound:P3}] follows from the Cauchy-Schwarz inequality with equality if and only if $\forall n\in [N]$ $\exists \lambda_n \le 0$ such that $z_n = \lambda_n (\bar w - w_{y_n})$,
\item[\ref{con:ce_bound:P4}] follows from the assumption on the space $\mathcal{Z}$, with equality if and only if $\forall n$, $\norm{z_n} = \rho_\mathcal{Z}$ is maximal,
\item[\ref{con:ce_bound:P5}] follows from Jensen's inequality for the convex function $t\mapsto - \sqrt t$ with equality if and only if $\forall y \in \mathcal{Y}$ the terms $\norm{\bar w}^2+ \norm{w_y}^2 - 2 \inprod{\bar w}{w_y}$ are equal,
\item[\ref{con:ce_bound:P6}] follows from the positivity of the norm, with equality if and only if $\bar w = 0$, i.e., $W$ is centered at the origin.
\end{itemize}
\end{proof}
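The inequality of this lemma can likewise be checked numerically for balanced labels and norm-bounded representations (a sketch of ours, under ad-hoc parameter choices):

```python
import numpy as np

rng = np.random.default_rng(1)
h, K, N, rho = 5, 3, 9, 2.0
Y = np.repeat(np.arange(K), N // K)            # balanced: N_y = N/K
Z = rng.normal(size=(N, h))
# project onto the rho-ball so that ||z_n|| <= rho
Z *= rho / np.maximum(np.linalg.norm(Z, axis=1, keepdims=True), rho)
W = rng.normal(size=(K, h))

wbar = W.mean(axis=0)
S = (K / (K - 1)) * np.mean(np.sum(Z * (wbar - W[Y]), axis=1))
assert S >= -rho * np.sqrt(K) / (K - 1) * np.linalg.norm(W)  # Frobenius norm
```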
\clearpage
\textcolor{restated}{\rest@thm@ce@bound@frob*}
\vspace{-0.5cm}
\begin{proof}
To prove the bound, we consecutively leverage Lemma~\ref{lem:ce_bound_1} and Lemma~\ref{lem:ce_bound_2}.
\begin{align*}
\mathcal{L}_{\operatorname{CE}}(Z, W;\,Y)
&\stackrel{\text{Lem.}~\ref{lem:ce_bound_1}}{\ge}
\log
\Big(
1 + (K-1) \exp
\big( S(Z, W;\,Y)\big)
\Big)\\
&\stackrel{\text{Lem.}~\ref{lem:ce_bound_2}}{\ge}
\log
\left(
1 + (K-1) \exp
\left(
- \rho_\mathcal{Z} \frac{\sqrt{K}}{K-1} \|W\|_F
\right)
\right)
\enspace.
\end{align*}
The application of Lemma~\ref{lem:ce_bound_1} and \ref{lem:ce_bound_2} above also yields the sufficient and necessary conditions for equality, which are
\ref{con:ce_bound:P1}, \ref{con:ce_bound:P2}, \ref{con:ce_bound:P3}, \ref{con:ce_bound:P4}, \ref{con:ce_bound:P5} and \ref{con:ce_bound:P6}.
It remains to prove that those conditions are equivalent to \ref{thm:ce_bound_frob:c1}, \ref{thm:ce_bound_frob:c2}, \ref{thm:ce_bound_frob:c3}.
That is, we need to show that
\begin{enumerate}[label={(P\arabic*)},labelindent=10pt,leftmargin=!,labelwidth=\widthof{\ref{last-item}}]
\item[\ref{con:ce_bound:P1}] $\forall n \in [N]$ $\exists M_n$ such that $\forall y\in \mathcal{Y}\setminus\set{y_n}$ all inner products $\inprod{z_n}{w_y} = M_n$ are equal,
\item[\ref{con:ce_bound:P2}] $\exists M\in \mathbb{R}$ such that $\forall n\in [N]$ it holds that $\sum_{\substack{y \in \mathcal{Y} \\ y \neq y_n}} (\inprod{z_n}{w_y} - \inprod{z_n}{w_{y_n}}) = M$,
\item[\ref{con:ce_bound:P3}] $\forall n\in [N]$ $\exists \lambda_n \leq 0$ such that $z_n = \lambda_n (\bar w - w_{y_n})$,
\item[\ref{con:ce_bound:P4}] $\forall n: \norm{z_n} = \rho_\mathcal{Z}$,
\item[\ref{con:ce_bound:P5}] $\forall y \in \mathcal{Y}$ the terms $\norm{\bar w}^2+ \norm{w_y}^2 - 2 \inprod{\bar w}{w_y}$ are equal,
\item[\ref{con:ce_bound:P6}] $\bar w = 0$
\end{enumerate}
are equivalent to the existence of $\zeta_1,\dots,\zeta_{|\mathcal{Y}|}\in \mathbb{R}^h$ such that
\begin{itemize}[labelindent=10pt,leftmargin=!,labelwidth=\widthof{\ref{last-item}}]
\item[\ref{thm:ce_bound_frob:c1}] $\forall n \in [N]: z_n = \zeta_{y_n}$
\item[\ref{thm:ce_bound_frob:c2}] $\{\zeta_y\}_{y\in \mathcal{Y}}$ form a $\rho_{\mathcal{Z}}$-sphere-inscribed regular simplex, i.e., it holds that
\begin{itemize}
\item[\ref{def:simplex:s1}] $\sum_{y \in \mathcal{Y}} \zeta_y = 0$\enspace,
\item[\ref{def:simplex:s2}] $\| \zeta_y \| = \rho_{\mathcal{Z}}$ for $y \in \mathcal{Y}$\enspace,
\item[\ref{def:simplex:s3}] $\exists d \in \mathbb{R}: d = \inprod{\zeta_y}{\zeta_{y'}}$ for $1 \leq y < y' \leq K$\enspace.
\end{itemize}
\item[\ref{thm:ce_bound_frob:c3}] $\exists \rho_{\mathcal{W}} > 0: \forall y \in \mathcal{Y}: w_{y} = \frac{\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}} \zeta_{y}$\enspace.
\end{itemize}
The arguments for the equivalences are given below:
First, we show (P1) - (P6) $\implies$ (C1) - (C3):
\underline{Ad \ref{thm:ce_bound_frob:c1}}: We need to show that $\forall n \in [N]:~z_n=\zeta_{y_n}$.
Let $n\in [N]$. Conditions \ref{con:ce_bound:P3} and \ref{con:ce_bound:P6} yield $ z_n = - \lambda_n w_{y_n}$ where $\lambda_n \leq 0$.
If $w_{y_n}=0$, this immediately implies $\ref{thm:ce_bound_frob:c1}$ with $\zeta_{y_n} = 0$. If $w_{y_n}\neq 0$, we have $|\lambda_n| = \norm{z_n}/\norm{w_{y_n}}$, and thus
by \ref{con:ce_bound:P4}
\begin{equation}
z_n = -
\left(-\frac{\|z_n\|}{\|w_{y_n}\|}\right)
w_{y_n}
\stackrel{\ref{con:ce_bound:P4}}{=}
\frac{\rho_{\mathcal{Z}}}{\|w_{y_n}\|} w_{y_n}
\enspace.
\label{eq:thm1:1}
\end{equation}
Consequently, condition $\ref{thm:ce_bound_frob:c1}$ is fulfilled with $\zeta_{y_n} = \rho_\mathcal{Z} \frac{w_{y_n}}{\norm{w_{y_n}}}$.
\underline{Ad \ref{thm:ce_bound_frob:c3}}: We need to show that $\exists \rho_{\mathcal{W}} > 0$ such that $\forall y \in \mathcal{Y}$ we have $w_{y} = \frac{\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}} \zeta_{y}$\enspace.
Since $Y$ is balanced, for every label $y\in \mathcal{Y}$ we have that $N_y = N/K>0$ and so there exists $n\in [N]$ with $y_n = y$.
Thus Eq.~\eqref{eq:thm1:1} implies for every $y\in \mathcal{Y}$ that $\zeta_y = \rho_\mathcal{Z} \frac{w_{y}}{\norm{w_{y}}}$. Hence, condition \ref{thm:ce_bound_frob:c3} is fulfilled with $\rho_\mathcal{W} = \norm{w_y}$ if all such norms $\norm{w_y}$ agree.
Indeed, by condition \ref{con:ce_bound:P5}, there is $C\in \mathbb{R}$ such that for each $y\in \mathcal{Y}$
\begin{equation*}
C
\stackrel{\ref{con:ce_bound:P5}}{=}
\norm{\bar w}^2 + \norm{w_y}^2 - 2 \inprod{\bar w}{w_y}
\stackrel{\ref{con:ce_bound:P6}}{=}
0 + \norm{w_y}^2 - 2 \cdot 0
=
\norm{w_y}^2
\enspace.
\end{equation*}
\underline{Ad \ref{thm:ce_bound_frob:c2}}: We need to show that $\{\zeta_y\}_{y \in \mathcal{Y}}$ fulfill the requirements \ref{def:simplex:s1}, \ref{def:simplex:s2} and \ref{def:simplex:s3} of the regular simplex from Definition~\ref{def:simplex}.
From condition~\ref{thm:ce_bound_frob:c1} and condition~\ref{thm:ce_bound_frob:c3}, we already know that
\begin{equation}
\label{thm:ce_loss_frob:eq1}
\frac{\rho_{\mathcal{Z}}}{\rho_{\mathcal{W}}} \cdot w_{y} = \zeta_{y}
\text{ for } y \in \mathcal{Y}
\enspace,
\end{equation}
which we will use in the following.
\underline{Ad \ref{def:simplex:s1}}: We need to show that $\sum_{y \in \mathcal{Y}} \zeta_y = 0$.
This follows directly from Eq.~\eqref{thm:ce_loss_frob:eq1} and condition \ref{con:ce_bound:P6}, because
\begin{equation}
\sum\limits_{y \in \mathcal{Y}}
\zeta_y
\stackrel{\text{Eq.~}\eqref{thm:ce_loss_frob:eq1}}{=}
\frac{\rho_{\mathcal{Z}}}{\rho_{\mathcal{W}}}
\sum\limits_{y \in \mathcal{Y}}
w_{y}
\stackrel{\ref{con:ce_bound:P6}}{=} 0
\enspace.
\end{equation}
\underline{Ad \ref{def:simplex:s2}}: We need to show for every $y\in \mathcal{Y}$ that $\| \zeta_y \| = \rho_{\mathcal{Z}}$.
This follows directly from Eq.~\eqref{thm:ce_loss_frob:eq1} and the already proven condition \ref{thm:ce_bound_frob:c3}, because
\begin{equation}
\|\zeta_y\|
\stackrel{\text{Eq.~}\eqref{thm:ce_loss_frob:eq1}}{=}
\|\frac{\rho_{\mathcal{Z}}}{\rho_{\mathcal{W}}} \cdot w_{y} \|
\stackrel{\ref{thm:ce_bound_frob:c3}}{=}
\frac{\rho_{\mathcal{Z}}}{\rho_{\mathcal{W}}} \cdot \rho_{\mathcal{W}}
= \rho_{\mathcal{Z}}
\enspace.
\end{equation}
\underline{Ad \ref{def:simplex:s3}}: We need to show that $\exists d \in \mathbb{R}$ such that $\inprod{\zeta_y}{\zeta_{y'}} = d$ for all $y,y'\in \mathcal{Y}$ with $y\neq y'$\enspace.
Let $y,y' \in \mathcal{Y}$ with $y\neq y'$.
Since $Y$ is balanced, we have that $N_{y'} = \nicefrac{N}{K}>0$. Hence, there exists $n\in [N]$ with $y' = y_n$ and so
\begin{equation}
\label{eq:s3:1}
\frac{\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}} \inprod{ \zeta_{y'}}{\zeta_y}
=
\inprod{ \zeta_{y_n}}{\frac{\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}}\zeta_y}
\stackrel{\text{Eq.~}\eqref{thm:ce_loss_frob:eq1}}{=}
\inprod{\zeta_{y_n}}{w_y}
\stackrel{\ref{thm:ce_bound_frob:c1}}{=}
\inprod{z_n}{w_y}
\stackrel{\ref{con:ce_bound:P1}}{=} M_n\enspace.
\end{equation}
Similarly,
\begin{equation}
\label{eq:s3:2}
\inprod{z_n}{w_{y_n}}
\stackrel{\ref{thm:ce_bound_frob:c1}}{=}
\inprod{\zeta_{y_n}}{w_{y_n}}
\stackrel{\text{Eq.~}\eqref{thm:ce_loss_frob:eq1}}{=}
\inprod{ \zeta_{y_n}}{\frac{\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}}\zeta_{y_n}}
=
\frac{\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}} \|\zeta_{y_n}\|^2
\stackrel{\ref{def:simplex:s2}}{=}
{\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}}
\enspace.
\end{equation}
We leverage condition~\ref{con:ce_bound:P1} and condition~\ref{con:ce_bound:P2} to get that there exists $M\in \mathbb{R}$ such that
\begin{align}
M &
\stackrel{\ref{con:ce_bound:P2}}{=}
\sum_{\substack{\hat y \in \mathcal{Y} \\ \hat y \neq y_n}}( \inprod{z_n}{w_{\hat y}} - \inprod{z_n}{w_{y_n}}) \\
&\stackrel{\eqref{eq:s3:2}}{=}
\Big( \sum_{\substack{\hat y \in \mathcal{Y} \\ \hat y \neq y_n}} \inprod{z_n}{w_{\hat y}} \Big)
- (K-1){\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}} \\
&\stackrel{\ref{con:ce_bound:P1}}{=}
(K-1) (M_n - {\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}})\\
&\stackrel{\eqref{eq:s3:1}}{=}
(K-1) (\frac{\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}}\inprod{ \zeta_{y'}}{\zeta_y} - {\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}})
\enspace.
\end{align}
Thus $\inprod{ \zeta_{y'}}{\zeta_y}=d$ is constant, and $d$ can be calculated by rearranging the equation above.
Next, we show (C1) - (C3) $\implies$ (P1) - (P6):
We assume that there exist $\zeta_1, \dots, \zeta_K \in \mathbb{R}^h$ such that conditions (C1) - (C3) are fulfilled.
\underline{Ad \ref{con:ce_bound:P1}}: We need to show that $\forall n \in [N]$ $\exists M_n$ such that $\forall y\in \mathcal{Y}\setminus\set{y_n}$ all inner products $\inprod{z_n}{w_y} = M_n$ are equal.
Let $n\in [N]$ and $y \in \mathcal{Y} \setminus\{y_n\}$, then
\begin{equation}
\label{eq:thm1:2}
\inprod{z_n}{w_y}
\stackrel{ \ref{thm:ce_bound_frob:c1}}{=}
\inprod{\zeta_{y_n}}{w_y}
\stackrel{\ref{thm:ce_bound_frob:c3}}{=}
\inprod{\zeta_{y_n}}{\frac{\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}} \zeta_{y}}
\stackrel{\ref{def:simplex:s3}}{=}
\frac{\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}} d
\enspace,
\end{equation}
so condition \ref{con:ce_bound:P1} is fulfilled with $M_n = \frac{\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}} d$.
\underline{Ad \ref{con:ce_bound:P2}}: We need to show that $\exists M$ such that $\forall n\in [N]$ it holds that $\sum_{\substack{y \in \mathcal{Y} \\ y \neq y_n}} ( \inprod{z_n}{w_y} - \inprod{z_n}{w_{y_n}}) = M$.
Let $n\in [N]$. From Eq.~\eqref{eq:thm1:2}, we already know that for $y\in \mathcal{Y}\setminus\set{y_n}$ it holds that $\inprod{z_n}{w_y}= \frac{\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}} d$.
Similarly,
\begin{equation}
\inprod{z_n}{w_{y_n}}
\stackrel{ \ref{thm:ce_bound_frob:c1}}{=}
\inprod{\zeta_{y_n}}{w_{y_n}}
\stackrel{\ref{thm:ce_bound_frob:c3}}{=}
\inprod{\zeta_{y_n}}{\frac{\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}} \zeta_{y_n}}
\stackrel{\ref{def:simplex:s2}}{=}
{\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}}
\enspace.
\end{equation}
Therefore,
\begin{equation}
\sum_{y \in \mathcal{Y}\setminus\set{y_n}} \left( \inprod{z_n}{w_y} - \inprod{z_n}{w_{y_n}} \right)
= (K-1) \left( \frac{\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}} d - {\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}} \right)
\end{equation}
and condition~\ref{con:ce_bound:P2} is fulfilled with $M = (K-1) \left( \frac{\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}} d - {\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}} \right)$.
\underline{Ad \ref{con:ce_bound:P4}}: We need to show that $\forall n: \norm{z_n} = \rho_\mathcal{Z}$.
This follows immediately from condition~\ref{thm:ce_bound_frob:c1} and condition~\ref{def:simplex:s2}:
\begin{equation}
\norm{z_n}
\stackrel{\ref{thm:ce_bound_frob:c1}}{=}
\norm{\zeta_{y_n}}
\stackrel{\ref{def:simplex:s2}}{=}
\rho_{\mathcal{Z}}
\enspace.
\end{equation}
\underline{Ad \ref{con:ce_bound:P6}}: We need to show that $\bar w = 0$.
This follows immediately from condition~\ref{thm:ce_bound_frob:c3} and condition~\ref{def:simplex:s1}:
\begin{equation}
\bar{w}=\frac{1}{K} \sum\limits_{y \in \mathcal{Y}} w_y
\stackrel{\ref{thm:ce_bound_frob:c3}}{=}
\frac{1}{K} \frac{\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}} \sum\limits_{y \in \mathcal{Y}} \zeta_{y}
\stackrel{\ref{def:simplex:s1}}{=}
0
\enspace.
\end{equation}
\underline{Ad \ref{con:ce_bound:P5}}: We need to show that $\forall y \in \mathcal{Y}$ the terms $\norm{\bar w}^2+ \norm{w_y}^2 - 2 \inprod{\bar w}{w_y}$ are equal.
This follows from conditions~\ref{thm:ce_bound_frob:c3} and \ref{def:simplex:s2}, as well as the already proven condition~\ref{con:ce_bound:P6}.
Let $y \in \mathcal{Y}$ then
\begin{equation}
\norm{\bar w}^2+ \norm{w_y}^2 - 2 \inprod{\bar w}{w_y}
\stackrel{\ref{con:ce_bound:P6}}{=}
0 + \norm{w_y}^2 - 2\cdot 0
\stackrel{\ref{thm:ce_bound_frob:c3}}{=}
\| \frac{\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}} \zeta_{y}\|^2
\stackrel{\ref{def:simplex:s2}}{=}
\rho_{\mathcal{W}}^2
\enspace,
\end{equation}
which, indeed, does not depend on $y$.
\underline{Ad \ref{con:ce_bound:P3}}: We need to show that $\forall n\in [N]$ $\exists \lambda_n \leq 0$ such that $z_n = \lambda_n (\bar w - w_{y_n})$.
Let $n \in [N]$ and consider
\begin{equation}
z_n
\stackrel{\ref{thm:ce_bound_frob:c1}}{=}
\zeta_{y_n}
\stackrel{\ref{thm:ce_bound_frob:c3}}{=}
\frac{\rho_{\mathcal{Z}}}{\rho_{\mathcal{W}}} w_{y_n}\enspace.
\end{equation}
Thus, from the already proven condition~\ref{con:ce_bound:P6}, it follows that
\begin{equation}
\bar w - w_{y_n}
\stackrel{\ref{con:ce_bound:P6}}{=} - w_{y_n}
= - \frac{\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}} z_n
\end{equation}
and condition~\ref{con:ce_bound:P3} is fulfilled with $\lambda_n = - \frac{\rho_{\mathcal{Z}}}{\rho_{\mathcal{W}}} \le 0$.
\end{proof}
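To illustrate the equality case (our addition, not part of the paper), one can instantiate conditions (C1)-(C3) with an explicit simplex and confirm that the cross-entropy loss meets the bound exactly; the simplex below is the standard centered-basis construction:

```python
import numpy as np

K, h, N = 4, 6, 8                        # balanced: N/K = 2 per class
rho_z, rho_w = 1.5, 0.7

# rho_z-sphere-inscribed regular simplex via the centered standard basis
E = np.eye(K) - 1.0 / K
_, _, Vt = np.linalg.svd(E)
zeta = np.pad(E @ Vt[:K - 1].T, ((0, 0), (0, h - (K - 1))))
zeta *= rho_z / np.linalg.norm(zeta[0])

Y = np.repeat(np.arange(K), N // K)      # balanced labels
Z = zeta[Y]                              # (C1): z_n = zeta_{y_n}
W = (rho_w / rho_z) * zeta               # (C3): w_y = (rho_W/rho_Z) zeta_y

logits = Z @ W.T
L_ce = -(logits[np.arange(N), Y]
         - np.logaddexp.reduce(logits, axis=1)).mean()
bound = np.log1p((K - 1) * np.exp(-rho_z * np.sqrt(K) / (K - 1)
                                  * np.linalg.norm(W)))
assert np.isclose(L_ce, bound)           # the bound is attained
```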
\clearpage
\textcolor{restated}{\rest@cor@ce@bound@r*}
\begin{proof}
By leveraging Theorem~\ref{thm:ce_bound_frob},
we get
\begin{align}
\mathcal{L}_{\operatorname{CE}}(Z, W;\,Y)
&\stackrel{\text{Thm.~}\ref{thm:ce_bound_frob}}{\ge}
\log \left(
1 + (K-1) \exp \left(
- \rho_\mathcal{Z} \frac {\sqrt{K}}{K-1}
\|W\|_F
\right)
\right)\\
&=
\log \left(
1 + (K-1) \exp \left(
- \rho_\mathcal{Z} \frac {\sqrt{K}}{K-1}
\sqrt{\sum\limits_{y\in \mathcal{Y}} \|w_y\|^2}
\right)
\right)\\
&\ge
\log \left(
1 + (K-1) \exp \left(
- \rho_\mathcal{Z} \frac {\sqrt{K}}{K-1}
\sqrt{\sum\limits_{y\in \mathcal{Y}} r_{\mathcal{W}}^2}
\right)
\right)\\
&=
\log \left(
1 + (K-1) \exp \left(
- \rho_\mathcal{Z} \frac {K}{K-1}
r_{\mathcal{W}}
\right)
\right)
\enspace,
\end{align}
where equality is attained if and only if the bound from Theorem~\ref{thm:ce_bound_frob} is tight, i.e., conditions~\ref{thm:ce_bound_frob:c1}, \ref{thm:ce_bound_frob:c2}, \ref{thm:ce_bound_frob:c3} are fulfilled and, additionally,
\begin{equation}
\label{cor:ce_bound_r:eq1}
r_{\mathcal{W}} = \|w_y\| \text{ for }y \in \mathcal{Y}
\enspace.
\end{equation}
It remains to show that if conditions \ref{thm:ce_bound_frob:c1} and \ref{thm:ce_bound_frob:c2} are fulfilled, then the following equivalence holds:
\begin{equation}
\big(
r_{\mathcal{W}} = \|w_y\| \text{ for }y \in \mathcal{Y}
\;\land
\; \ref{thm:ce_bound_frob:c3}
\big)
\Longleftrightarrow
\ref{cor_ce_bound_r:c3}
\enspace.
\end{equation}
``$\Longrightarrow$'': We need to show that $\forall y \in \mathcal{Y}: w_y =\frac{r_{\mathcal{W}}}{\rho_{\mathcal{Z}}}\zeta_y$.
So, let $y \in \mathcal{Y}$.
By condition~\ref{thm:ce_bound_frob:c3}, there is $\rho_{\mathcal{W}} > 0$ such that
\begin{equation}
\label{cor:ce_bound_r:eq2}
w_{y} = \frac{\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}} \zeta_{y}
\enspace.
\end{equation}
Thus \ref{cor_ce_bound_r:c3} holds if $\rho_\mathcal{W} = r_\mathcal{W}$.
Indeed,
\begin{equation}
r_{\mathcal{W}}
\stackrel{\eqref{cor:ce_bound_r:eq1}}{=}
\|w_y\|
\stackrel{\eqref{cor:ce_bound_r:eq2}}{=}
\frac{\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}} \|\zeta_y\|
\stackrel{\ref{thm:ce_bound_frob:c2}}{=}
\frac{\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}} \rho_{\mathcal{Z}}
=
\rho_{\mathcal{W}}
\enspace.
\end{equation}
``$\Longleftarrow$'': Follows immediately as we can choose $\rho_{\mathcal{W}}=r_{\mathcal{W}}>0$.
\end{proof}
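A numerical spot check (ours, under ad-hoc parameter choices) of the corollary above, with classifier norms clipped to $r_\mathcal{W}$ and representations on the $\rho_\mathcal{Z}$-sphere:

```python
import numpy as np

rng = np.random.default_rng(3)
h, K, N, rho_z, r_w = 5, 3, 6, 1.0, 2.0
Y = np.repeat(np.arange(K), N // K)                            # balanced labels
Z = rng.normal(size=(N, h))
Z *= rho_z / np.linalg.norm(Z, axis=1, keepdims=True)          # ||z_n|| = rho_z
W = rng.normal(size=(K, h))
W *= r_w / np.maximum(np.linalg.norm(W, axis=1, keepdims=True), r_w)  # ||w_y|| <= r_w

logits = Z @ W.T
L_ce = -(logits[np.arange(N), Y]
         - np.logaddexp.reduce(logits, axis=1)).mean()
assert L_ce >= np.log1p((K - 1) * np.exp(-rho_z * K / (K - 1) * r_w))
```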
\clearpage
\begin{Slem}
\label{lem:ce_bound_3}
Let $\lambda$, $\rho_{\mathcal{Z}} > 0$, $K, h \in \mathbb{N}$ and $W \in (\mathbb{R}^h)^K$. The function
\begin{equation}
\label{lem:ce_bound_3:eq_1}
f(x) =
\log
\left(
1 + (K-1)
\exp
\left(
- \rho_{\mathcal{Z}} \frac{{K}}{K-1}x
\right)
\right)
+ \lambda K x^2
\end{equation}
is minimized by $x_0 = r_\mathcal{W}(\rho_\mathcal{Z},\lambda)>0$, i.e., the \emph{unique} solution to
\begin{equation}
0 =
K \left(2 \lambda x-\frac{\rho_\mathcal{Z} }{e^{\frac{K \rho_\mathcal{Z} x}{K-1}}+K-1}\right)
\enspace.
\end{equation}
\end{Slem}
\begin{proof}
The first derivative of $f$ is given by
\begin{equation}
f'(x) =
K \left(2 \lambda x-\frac{\rho_\mathcal{Z} }{e^{\frac{K \rho_\mathcal{Z} x}{K-1}}+K-1}\right)
\enspace.
\end{equation}
Note that $f'$ is strictly increasing.
Thus $f$ is strictly convex and has a unique minimum at the point $x_0$ where $f'(x_0) = 0$.
As $f'$ is continuous on $(0,\infty)$ with
\begin{equation}
f'(0) = - \rho_\mathcal{Z}
< 0
\end{equation}
and
\begin{equation}
\lim\limits_{x \rightarrow \infty} f'(x)= \infty \enspace,
\end{equation}
the intermediate value theorem implies $0 < x_0 = r_\mathcal{W}(\rho_{\mathcal{Z}}, \lambda) < \infty$.
\end{proof}
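In practice $r_\mathcal{W}(\rho_\mathcal{Z}, \lambda)$ has no closed form, but since $f'$ is strictly increasing it can be found by bisection; a minimal sketch (the helper name `r_w` is ours):

```python
import math

def r_w(rho_z, lam, K, tol=1e-12):
    # f'(x) = K*(2*lam*x - rho_z/(exp(K*rho_z*x/(K-1)) + K - 1)) is strictly
    # increasing with f'(0) = -rho_z < 0, so bisection finds the unique root.
    fp = lambda x: K * (2.0 * lam * x
                        - rho_z / (math.exp(K * rho_z * x / (K - 1)) + K - 1))
    lo, hi = 0.0, 1.0
    while fp(hi) < 0:          # bracket the root
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if fp(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

x0 = r_w(rho_z=1.0, lam=0.01, K=10)
assert x0 > 0
```

Larger weight decay $\lambda$ shifts $f'$ upward pointwise, so the minimizing norm $r_\mathcal{W}(\rho_\mathcal{Z},\lambda)$ shrinks as $\lambda$ grows.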
\textcolor{restated}{\rest@cor@ce@bound@wd*}
\begin{proof}
By leveraging Theorem~\ref{thm:ce_bound_frob} and Lemma~\ref{lem:ce_bound_3} (with $x = {\norm{W}_F}/{\sqrt{K}}$),
we get
\begin{align*}
&\mathcal{L}_{\operatorname{CE}}(Z, W;\,Y) + \lambda \norm{W}_F^2\\
&\stackrel{\text{Thm.~}\ref{thm:ce_bound_frob}}{\ge}
\log \left(
1 + (K-1) \exp \left(
- \rho_\mathcal{Z} \frac {\sqrt{K}}{K-1}
\|W\|_F
\right)
\right)
+
{\lambda} \norm{W}_F^2\\
& \stackrel{\phantom{\text{Lem.~1}}}{=}
\log \left(
1 + (K-1) \exp \left(
- \rho_\mathcal{Z} \frac {K}{K-1}
x
\right)
\right)
+
{\lambda K} x^2
\tag{by setting $x = {\norm{W}_F}/{\sqrt{K}}$}\\
&\stackrel{\text{Lem.~}\ref{lem:ce_bound_3}}{\ge}
\log \left(
1 + (K-1) \exp \left(
- \rho_\mathcal{Z} \frac {{K}}{K-1}
r_\mathcal{W}(\rho_{\mathcal{Z}}, \lambda)
\right)
\right)
+
\lambda K r_\mathcal{W}(\rho_{\mathcal{Z}}, \lambda)^2
\enspace,
\end{align*}
where equality is attained if and only if
the bound from Theorem~\ref{thm:ce_bound_frob} is tight, i.e., conditions~\ref{thm:ce_bound_frob:c1}, \ref{thm:ce_bound_frob:c2}, \ref{thm:ce_bound_frob:c3} are fulfilled and, additionally,
\begin{equation}
\label{cor:ce_bound_wd:eq1}
\|W\|_F/\sqrt K = r_\mathcal{W}(\rho_{\mathcal{Z}}, \lambda)
\enspace.
\end{equation}
It remains to show that if \ref{thm:ce_bound_frob:c1} and \ref{thm:ce_bound_frob:c2} are fulfilled, it holds that
\begin{equation}
\big(
\|W\|_F/\sqrt K = r_\mathcal{W}(\rho_{\mathcal{Z}}, \lambda)\;\land\; \ref{thm:ce_bound_frob:c3}
\big)
\Longleftrightarrow
\ref{cor_ce_bound_wd:c3}
\enspace.
\end{equation}
``$\Longrightarrow$'': We need to show for every $y \in \mathcal{Y}$ that $w_y = \frac{r_\mathcal{W}(\rho_{\mathcal{Z}},\lambda)}{\rho_{\mathcal{Z}}}\zeta_y$.
So let $y \in \mathcal{Y}$. By condition~\ref{thm:ce_bound_frob:c3},
there exists $\rho_\mathcal{W}>0$ such that $w_y = \nicefrac{\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}} \zeta_y$.
Thus, condition~\ref{cor_ce_bound_wd:c3} is fulfilled if $\rho_\mathcal{W} = r_\mathcal{W}(\rho_{\mathcal{Z}},\lambda)$.
Indeed,
\begin{align}
r_\mathcal{W}(\rho_{\mathcal{Z}}, \lambda)
&\stackrel{\eqref{cor:ce_bound_wd:eq1}}{=}
\frac{\|W\|_F}{\sqrt{K}}
=
\sqrt{\frac 1 K \sum\limits_{y\in \mathcal{Y}} \|w_y\|^2}
\stackrel{\ref{thm:ce_bound_frob:c3}}{=}
\sqrt{\frac 1 K \sum\limits_{y\in \mathcal{Y}} \|\frac{\rho_{\mathcal{W}}}{\rho_{\mathcal{Z}}} \zeta_{y}\|^2}\\
&\stackrel{\ref{thm:ce_bound_frob:c2}}{=}
\sqrt{\frac 1 K \sum\limits_{y \in \mathcal{Y}} \rho_{\mathcal{W}}^2}
= \rho_{\mathcal{W}}
\enspace.
\end{align}
``$\Longleftarrow$'':
Condition~\ref{thm:ce_bound_frob:c3} is fulfilled as we can choose
\begin{equation}
\rho_\mathcal{W} = r_\mathcal{W}(\rho_\mathcal{Z},\lambda) \stackrel{\text{Lem.~}\ref{lem:ce_bound_3}}{>} 0
\enspace.
\end{equation}
Finally, $r_\mathcal{W}(\rho_\mathcal{Z},\lambda) = \norm{W}_F / \sqrt{K}$, as
\begin{align*}
\|W\|_F
&=
\sqrt{\sum\limits_{y\in \mathcal{Y}} \|w_y\|^2}
\stackrel{\ref{cor_ce_bound_wd:c3}}{=}
\sqrt{\sum\limits_{y\in \mathcal{Y}} \|\frac{r_\mathcal{W}(\rho_{\mathcal{Z}}, \lambda)}{\rho_{\mathcal{Z}}} \zeta_{y}\|^2}\\
&\stackrel{\ref{thm:ce_bound_frob:c2}}{=}
\sqrt{\sum\limits_{y\in \mathcal{Y}} r_\mathcal{W}(\rho_{\mathcal{Z}}, \lambda)^2}
=
\sqrt{K} r_\mathcal{W}(\rho_\mathcal{Z},\lambda)
\enspace.
\end{align*}
\end{proof}
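Combining the bisection root of $f'$ with random balanced data gives a numerical check (ours, with ad-hoc parameters) of the weight-decay bound above:

```python
import numpy as np

def r_w(rho_z, lam, K):
    # Bisection for the unique positive root of
    # f'(x) = K*(2*lam*x - rho_z/(exp(K*rho_z*x/(K-1)) + K - 1)).
    fp = lambda x: K * (2.0 * lam * x
                        - rho_z / (np.exp(K * rho_z * x / (K - 1)) + K - 1))
    lo, hi = 0.0, 1.0
    while fp(hi) < 0:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if fp(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

rng = np.random.default_rng(2)
h, K, N, rho_z, lam = 6, 4, 8, 1.0, 0.05
Y = np.repeat(np.arange(K), N // K)                      # balanced labels
Z = rng.normal(size=(N, h))
Z *= rho_z / np.linalg.norm(Z, axis=1, keepdims=True)    # ||z_n|| = rho_z
W = rng.normal(size=(K, h))

logits = Z @ W.T
L_reg = -(logits[np.arange(N), Y]
          - np.logaddexp.reduce(logits, axis=1)).mean() + lam * np.linalg.norm(W) ** 2
x0 = r_w(rho_z, lam, K)
bound = np.log1p((K - 1) * np.exp(-rho_z * K / (K - 1) * x0)) + lam * K * x0 ** 2
assert L_reg >= bound
```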
\section{Additional Experiments}
\label{sec:suppmat_exp}
The experiments in \S\ref{subsection:theory_vs_practice} suggest that representations learned by minimizing the \textbf{SC} loss might arrange closer to the (theoretically optimal) simplex configuration, compared to representations learned by minimizing the \textbf{CE} loss.
To corroborate that this disparity is due to differing optimization dynamics of the loss functions, i.e., differing trajectories in the parameter space, and not an artifact of terminating the loss minimization too early, we repeat\footnote{
i.e., the same setup and hyperparameters as in §\ref{subsection:theory_vs_practice}, except for the \emph{number of training iterations}
}
these experiments, optimizing over \textbf{500k} SGD iterations instead of 100k. After every 10k iterations, we freeze the model, compute the class means of the representations of the training data and evaluate two geometric properties on all of the training data: (1) the \emph{cosine similarity \textbf{across} class means} and (2) the \emph{cosine similarity \textbf{to} class means}\footnote{We omit the \emph{cosine similarity across weights}, as, for \textbf{SC}, this requires training an additional linear classifier each time.}, as illustrated in Figs.~\ref{fig:suppmat_experiments_cifar10} and \ref{fig:suppmat_experiments_cifar100}.
\begin{figure}[h!]
\captionsetup{width=.83\linewidth}
\centering
\begin{subfigure}[b]{0.85\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{figures/simplex_convergence/convergence_cifar10_noaugment_cropped.pdf}
\caption{\textbf{CIFAR10} (\underline{without} augmentation)}
\label{appendix:figure:experiment_cifar10_woaug}
\end{subfigure}
\vskip4ex
\begin{subfigure}[b]{0.85\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{figures/simplex_convergence/convergence_cifar10_augment_cropped.pdf}
\caption{\textbf{CIFAR10} (\underline{with} augmentation)}
\label{appendix:figure:experiment_cifar10_waug}
\end{subfigure}
\caption{
Distribution of geometric properties of representations, $\varphi_\theta(x_n)$, tracked during training. Representations are obtained from a ResNet-18 model trained (\subref{appendix:figure:experiment_cifar10_waug}) \underline{with} and (\subref{appendix:figure:experiment_cifar10_woaug}) \underline{without} data augmentation on \textbf{CIFAR10}, with \textbf{CE} and \textbf{SC}, respectively. \textcolor{tabblue}{Blue} and \textcolor{tabgreen}{green} lines indicate the evolution of the medians over the iterations;
\textcolor{tabred}{Red} lines indicate the \emph{sought-for} value at a regular simplex configuration.
\label{fig:suppmat_experiments_cifar10}
}
\end{figure}
The results reveal that (1) optimizing for 500k iterations improves convergence to the optimal state, yet at a very low speed, and (2) minimizing \textbf{SC} still yields representations closer to the simplex, compared to \textbf{CE}. The latter not only holds at the terminal stage of training, but at (almost) every evaluation step.
Interestingly, on both datasets, the distributions of the computed properties obtained from the model trained via \textbf{CE} have notably more spread than the ones obtained from the model trained with \textbf{SC}.
Finally, we compare the geometric properties after training for 500k iterations with the ones from training over 100k iterations, i.e., Fig.~\ref{fig:geometry} in \S\ref{subsection:theory_vs_practice}. In the case of \textbf{SC}, the distributions are roughly the same, whereas for \textbf{CE}, the distributions after 500k iterations are notably closer to the theoretical optimum than the ones after 100k iterations, particularly on the more complex CIFAR100 dataset. Once more, this highlights the faster convergence to the simplex arrangement via minimizing \textbf{SC}.
\vspace{0.2cm}
\begin{figure}[H]
\captionsetup{width=.83\linewidth}
\centering
\begin{subfigure}[b]{0.85\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{figures/simplex_convergence/convergence_cifar100_noaugment_cropped.pdf}
\caption{\textbf{CIFAR100} (\underline{without} augmentation)}
\label{appendix:figure:experiment_cifar100_woaug}
\end{subfigure}
\vskip4ex
\begin{subfigure}[b]{0.85\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{figures/simplex_convergence/convergence_cifar100_augment_cropped.pdf}
\caption{\textbf{CIFAR100} (\underline{with} augmentation)}
\label{appendix:figure:experiment_cifar100_waug}
\end{subfigure}
\caption{
Distribution of geometric properties of representations, $\varphi_\theta(x_n)$, tracked during training. Representations are obtained from a ResNet-18 model trained (\subref{appendix:figure:experiment_cifar100_waug}) \underline{with} and (\subref{appendix:figure:experiment_cifar100_woaug}) \underline{without} data augmentation on \textbf{CIFAR100}, with \textbf{CE} and \textbf{SC}, respectively. \textcolor{tabblue}{Blue} and \textcolor{tabgreen}{green} lines indicate the evolution of the medians over the iterations;
\textcolor{tabred}{Red} lines indicate the \emph{sought-for} value at a regular simplex configuration.
\label{fig:suppmat_experiments_cifar100}}
\end{figure}
\section{Proofs for Section~\ref{subsection:analysis_sc}}
In this section, we will prove Theorem~\ref{thm:supcon} of the main manuscript (restated below).
Throughout this section, the following objects will appear repeatedly and are therefore introduced once, up front:
\begin{itemize}
\item $h, N, K \in \mathbb{N}$
\item $\rho_{\mathcal{Z}}>0$
\item $\mathcal{Z} = \mathbb{S}_{\rho_\mathcal{Z}}^{h-1}$
\item $\mathcal{Y} = \{1, \dots, K\} = [K]$
\end{itemize}
Further, we will consider batches $B\in \mathcal{B}$ of an arbitrary but fixed size $b\ge3$. We additionally assume $|\mathcal{Y}| = K \leq h +1$.
\textcolor{restated}{\thm@supcon*}
\subsection{Definitions}
First we will recall the definition of the supervised contrastive (SC) loss and introduce some necessary auxiliary definitions.
The SC~loss is given by
\begin{equation}
\mathcal{L}_{\operatorname{SC}}(Z;Y) = \sum_ {B \in \mathcal{B}} \ell_{\operatorname{SC}}(Z;Y,B) \enspace,
\end{equation}
where $\ell_{\operatorname{SC}}(Z;Y,B)$ is the \emph{batch-wise loss}
\begin{equation}
\ell_{\operatorname{SC}}(Z;Y,B) =
-\sum\limits_{\substack{i \in B\\ |B_{y_i}|>1}}
\frac{1}{|B_{y_i}|-1}
\sum\limits_{j \in B_{y_i}\setminus \{\mskip-5mu\{ i \}\mskip-5mu\}}
\log
\left(
\frac
{\exp\big(\dm{z_i}{z_j}\big)}
{
\sum\limits_{k \in B \setminus \{\mskip-5mu\{ i \}\mskip-5mu\}}
\exp\big(\dm{z_i}{z_k}\big)
}
\right)
\enspace.
\end{equation}
We next introduce the \emph{class-specific batch-wise loss}
\begin{equation}
\ell_{\operatorname{SC}}(Z;Y,B,y)
=
\begin{cases}
-
\sum\limits_{i \in B_y}
\frac{1}{|B_{y_i}|-1}
\sum\limits_{j \in B_{y_i}\setminus \{\mskip-5mu\{ i \}\mskip-5mu\}}
\log
\left(
\frac
{\exp\big(\dm{z_i}{z_j}\big)}
{
\sum\limits_{k \in B \setminus \{\mskip-5mu\{ i \}\mskip-5mu\}}
\exp\big(\dm{z_i}{z_k}\big)
}
\right)
& \text{ if } |B_{y}|>1
\\
0 & \text{ else}
\end{cases}
\enspace.
\end{equation}
This allows us to write
\begin{align}
\ell_{\operatorname{SC}}(Z;Y,B)
& =
-\sum\limits_{\substack{i \in B\\|B_{y_i}|>1}}
\frac{1}{|B_{y_i}|-1}
\sum\limits_{j \in B_{y_i}\setminus \{\mskip-5mu\{ i \}\mskip-5mu\}}
\log
\left(
\frac
{\exp\big(\dm{z_i}{z_j}\big)}
{
\sum\limits_{k \in B \setminus \{\mskip-5mu\{ i \}\mskip-5mu\}}
\exp\big(\dm{z_i}{z_k}\big)
}
\right)
\\
& =
-
\sum_{\substack{y\in \mathcal{Y}\\|B_y|>1}}
\sum\limits_{i \in B_y}
\frac{1}{|B_{y_i}|-1}
\sum\limits_{j \in B_{y_i}\setminus \{\mskip-5mu\{ i \}\mskip-5mu\}}
\log
\left(
\frac
{\exp\big(\dm{z_i}{z_j}\big)}
{
\sum\limits_{k \in B \setminus \{\mskip-5mu\{ i \}\mskip-5mu\}}
\exp\big(\dm{z_i}{z_k}\big)
}
\right)
\\
& =
\sum_{y\in \mathcal{Y}}
\ell_{\operatorname{SC}}(Z;Y,B,y) \enspace,
\end{align}
and so
\begin{equation}
\mathcal{L}_{\operatorname{SC}}(Z;Y) = \sum_{y \in \mathcal{Y}} \sum_{B \in \mathcal{B}} \ell_{\operatorname{SC}}(Z;Y,B,y)
\enspace.
\end{equation}
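The regrouping of $\mathcal{L}_{\operatorname{SC}}$ into class-specific batch-wise losses can be verified numerically. The following is a minimal sketch (not the implementation used in the experiments), assuming the similarity $\dm{\cdot}{\cdot}$ is the plain inner product and batches are given as lists of distinct indices:

```python
import math
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def ell_sc(Z, Y, B):
    """Batch-wise SC loss; indices i with |B_{y_i}| = 1 do not contribute."""
    loss = 0.0
    for i in B:
        pos = [j for j in B if Y[j] == Y[i] and j != i]
        if not pos:
            continue
        for j in pos:
            denom = sum(math.exp(dot(Z[i], Z[k])) for k in B if k != i)
            loss -= math.log(math.exp(dot(Z[i], Z[j])) / denom) / len(pos)
    return loss

def ell_sc_class(Z, Y, B, y):
    """Class-specific batch-wise loss: the same sum restricted to i in B_y."""
    By = [i for i in B if Y[i] == y]
    if len(By) <= 1:
        return 0.0
    loss = 0.0
    for i in By:
        for j in (k for k in By if k != i):
            denom = sum(math.exp(dot(Z[i], Z[k])) for k in B if k != i)
            loss -= math.log(math.exp(dot(Z[i], Z[j])) / denom) / (len(By) - 1)
    return loss

random.seed(0)
Z = [[random.gauss(0, 1) for _ in range(3)] for _ in range(6)]
Y = [0, 0, 1, 1, 2, 2]
B = [0, 1, 2, 3, 4]
lhs = ell_sc(Z, Y, B)
rhs = sum(ell_sc_class(Z, Y, B, y) for y in set(Y))
assert abs(lhs - rhs) < 1e-9  # batch-wise loss equals the sum over classes
```

The identity holds by construction, since summing over $i\in B$ and grouping by the label $y_i$ is exactly the regrouping performed above.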
We use the following notation:
the multiplicity of an element $x$ in the multiset $M$ is denoted by $\mult{M}(x)$.
Furthermore, we introduce the label configuration of a batch, i.e.,
\begin{equation}
\Upsilon(B) = \mset{y_i: ~ i\in B }
\enspace,
\end{equation}
thus $\mult{\Upsilon(B)}(y) = |{\By{y}{Y}{B}}|$.
For example, if $\mathcal{Y} = \set{a,b}$, $B = \mset{1,2,2,5,10}$ and $a = y_1=y_2$, $b=y_5=y_{10} $, then $\mult{B}(2)=2$, $\Upsilon(B) = \mset{a,a,a,b,b}$ and $\mult{\Upsilon(B)}(a) = 3 = |\mset{1,2,2}| = |B_a|$.
We will slightly abuse notation ($Y$ is a tuple, not a multiset) and write $\mult{Y}(y) = \mult{\Upsilon([N])}(y) = |\mset{n \in [N]: y_n = y}|$.
For every batch $B\in \mathcal{B}$ and label $y\in \mathcal{Y}$, we will also write ${B_y}^C:=\mset{i\in B:~y_i\neq y}$ for the complement of ${B_y}:=\mset{i\in B:~y_i = y}$, which was already introduced in Definition~\ref{def:sc} of the supervised contrastive loss.
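The multiset notation translates directly into code. A small sketch, mirroring the worked example above and using Python's `collections.Counter` as a multiset (an implementation choice, not part of the formal development):

```python
from collections import Counter

# Worked example from the text: B = {{1, 2, 2, 5, 10}},
# with y_1 = y_2 = 'a' and y_5 = y_10 = 'b'.
labels = {1: 'a', 2: 'a', 5: 'b', 10: 'b'}
B = Counter([1, 2, 2, 5, 10])                       # mult_B(2) = 2
Upsilon = Counter(labels[i] for i in B.elements())  # label configuration of B

assert B[2] == 2
assert Upsilon == Counter({'a': 3, 'b': 2})
assert Upsilon['a'] == 3                            # = |B_a|
```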
\begin{defi}[Auxiliary functions $S$, $S_{\text{rep}}$, $S_{\text{att}}$]
Let $h, N \in \mathbb{N}$, $\rho_{\mathcal{Z}}>0$ and $\mathcal{Z} = \mathbb{S}_{\rho_\mathcal{Z}}^{h-1}$.
For fixed label configuration $Y\in \mathcal{Y}^N$, batch $B\in \mathcal{B}$ and label $y\in \mathcal{Y}$ with $\mult{\Upsilon(B)}(y)>1$, we define
\begin{align}
S(\,\cdot\,;Y,B,y): \mathcal{Z}^N &\to \mathbb{R} \\
Z &\mapsto
S_{\text{att}}(Z;Y,B,y) + S_{\text{rep}}(Z;Y,B,y)
\enspace,
\end{align}
where
\begin{align}
\label{eq:def_s_att}
S_{\text{att}}(Z;Y,B,y)
& = -
\frac{1}{|{\By{y}{Y}{B}}|\,(|{\By{y}{Y}{B}}|-1)}
\sum_{i \in {\By{y}{Y}{B}}}
\sum_{j \in {\By{y}{Y}{B}}\setminus\mset{i}}
\inprod{z_i}{z_j}
\\
\label{eq:def_s_rep}
S_{\text{rep}}(Z;Y,B,y)
&=
\begin{cases}
\frac{1}{|{\By{y}{Y}{B}}|\,|{\By{y}{Y}{B}}^C|}
\sum_{i \in {\By{y}{Y}{B}}}
\sum_{j \in {\By{y}{Y}{B}}^C}
\inprod{z_i}{z_j}
\qquad &\text{if } \mult{\Upsilon(B)}(y)\neq b
\\
0 &\text{if } \mult{\Upsilon(B)}(y) = b
\end{cases}
\enspace.
\end{align}
\end{defi}
\begin{defi}[Auxiliary partition of $\mathcal{B}$]
For every $y\in \mathcal{Y}$ and every $l\in \set{0,\dots,b}$, we define
\begin{equation}
\mathcal{B}_{y,l} : = \set{ {B \in \mathcal{B}}:~ \mult{\Upsilon(B)}(y) = l}
\enspace.
\end{equation}
\end{defi}
\subsection{Proof of Theorem~\ref{thm:supcon}}
\begin{proof}[]
We first present the main steps of the proof of Theorem~\ref{thm:supcon} and refer to subsequent technical lemmas when needed.
\begin{enumerate}[itemsep=2ex,labelindent=23pt,leftmargin=!,label=(\textbf{Step \arabic*}),labelwidth=\widthof{\ref{last-item}}]
\item
\label{thm:supcon:proof:step-1}
For each class $y\in \mathcal{Y}$ and each batch ${B \in \mathcal{B}}$ with $\mult{\Upsilon(B)}(y)>1$, the class-specific batch-wise loss $\ell_{\operatorname{SC}}(Z;Y,B,y)$ (see Lemma \ref{lem:supcon_S}) is bounded from below by
\begin{equation}
\ell_{\operatorname{SC}}(Z;Y,B,y) \ge
|{\By{y}{Y}{B}}|
\log
\left(
|{\By{y}{Y}{B}}| - 1 + |{\By{y}{Y}{B}}^C|
\exp \left(S(Z;Y,B,y)\right)
\right)
\enspace.
\end{equation}
\item
\label{thm:supcon:proof:step-2}
We regroup the addends of the sum $\mathcal{L}_{\operatorname{SC}}(Z;Y)$, i.e.,
\begin{equation}
\mathcal{L}_{\operatorname{SC}}(Z;Y) = \sum_ {B \in \mathcal{B}} \sum_ {y \in \mathcal{Y}} \ell_{\operatorname{SC}}(Z;Y,B,y) \enspace,
\end{equation}
such that each group is defined by addends requiring
$B\in \mathcal{B}_{y,l}=\set{ {B \in \mathcal{B}}:~ \mult{\Upsilon(B)}(y) = l}$.
As a result, we can leverage the bound of \ref{thm:supcon:proof:step-1} on each group, i.e.,
\begin{align}
\mathcal{L}_{\operatorname{SC}}(Z;Y)
&= \sum_{B \in \mathcal{B}} \sum_{y \in \mathcal{Y}} \ell_{\operatorname{SC}}(Z;Y,B,y) \\
& = \sum_{l=0}^b \sum_{y \in \mathcal{Y}} \sum_{B \in \mathcal{B}_{y,l}} \ell_{\operatorname{SC}}(Z;Y,B,y)
\\
& \ge
\sum_{l=2}^{b} \sum_{y \in \mathcal{Y}} \sum_{B \in \mathcal{B}_{y,l}}
l
\log
\left(
l - 1 + (b-l)
\exp \left( S(Z;Y,B,y)
\right)
\right)
\enspace.
\end{align}
Here the sum over $l$ starts at $l=2$, because $\ell_{\operatorname{SC}}(Z;Y,B,y)$ vanishes for batches $B\in \mathcal{B}_{y,0}\cup\mathcal{B}_{y,1}$.
\item
\label{thm:supcon:proof:step-3}
Applying Jensen's inequality (see Lemma~\ref{lem:supcon_jensen}) then yields
\begin{equation}
\mathcal{L}_{\operatorname{SC}}(Z;Y)
\ge
\sum_{l = 2}^{b} l M_l
\log
\left(
l - 1 + (b-l)
\exp \left(
\frac {1} {M_l}
\sum_{y \in \mathcal{Y}}
\sum_{B \in \mathcal{B}_{y,l}}
S(Z;Y,B,y)
\right)
\right)
\enspace,
\end{equation}
where
$M_l = \sum_ {y \in \mathcal{Y}} |\mathcal{B}_{y,l}|$.
\item
\label{thm:supcon:proof:step-4}
In Lemma~\ref{lem:supcon_outer_bound}, we characterize the equality case of the bound above.
It is achieved
if and only if all intra-class and inter-class inner products agree, respectively.
\end{enumerate}
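The bound from \ref{thm:supcon:proof:step-1} can be checked numerically for a single batch. The following sketch assumes $\dm{\cdot}{\cdot}$ is the inner product and samples representations on a hypersphere (hypothetical test values, not the paper's experimental setup):

```python
import math
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sphere_point(h, rho, rng):
    # uniform-direction point on the sphere of radius rho in R^h
    v = [rng.gauss(0, 1) for _ in range(h)]
    n = math.sqrt(sum(x * x for x in v))
    return [rho * x / n for x in v]

def ell_sc_class(Z, Y, B, y):
    """Class-specific batch-wise SC loss (inner product as similarity)."""
    By = [i for i in B if Y[i] == y]
    if len(By) <= 1:
        return 0.0
    loss = 0.0
    for i in By:
        for j in (k for k in By if k != i):
            denom = sum(math.exp(dot(Z[i], Z[k])) for k in B if k != i)
            loss -= math.log(math.exp(dot(Z[i], Z[j])) / denom) / (len(By) - 1)
    return loss

def S(Z, Y, B, y):
    """S = S_att + S_rep, per the definitions of the auxiliary functions."""
    By = [i for i in B if Y[i] == y]
    Byc = [i for i in B if Y[i] != y]
    s_att = -sum(dot(Z[i], Z[j]) for i in By for j in By if j != i) \
        / (len(By) * (len(By) - 1))
    s_rep = sum(dot(Z[i], Z[j]) for i in By for j in Byc) \
        / (len(By) * len(Byc)) if Byc else 0.0
    return s_att + s_rep

rng = random.Random(3)
Z = [sphere_point(4, 1.0, rng) for _ in range(6)]
Y = [0, 0, 0, 1, 1, 2]
B = list(range(6))                    # a single batch of size b = 6
y, b = 0, len(B)
k = sum(1 for i in B if Y[i] == y)    # |B_y| = 3
lower = k * math.log(k - 1 + (b - k) * math.exp(S(Z, Y, B, y)))
assert ell_sc_class(Z, Y, B, y) >= lower - 1e-9
```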
The next steps investigate, for each $l\in \set{2,\dots,b-1}$, the sum
\begin{equation}
\sum_{y \in \mathcal{Y}} \sum_ {B \in \mathcal{B}_{y,l}}
S(Z;Y,B,y)
=
\left(
\sum_{y \in \mathcal{Y}} \sum_ {B \in \mathcal{B}_{y,l}}
S_{\text{att}}(Z;Y,B,y)
\right)
+
\left(
\sum_{y \in \mathcal{Y}} \sum_ {B \in \mathcal{B}_{y,l}}
S_{\text{rep}}(Z;Y,B,y)
\right)
\enspace.
\end{equation}
\begin{enumerate}[itemsep=2ex,labelindent=23pt,leftmargin=!,label=(\textbf{Step \arabic*}),labelwidth=\widthof{\ref{last-item}}]
\setcounter{enumi}{4}
\item The sum of the attraction terms, $S_{\text{att}}$, is maximal if and only if all intra-class inner products are maximal, i.e., they are equal to ${\rho_\mathcal{Z}}^2$ (see Lemma~\ref{lem:supcon_att}). This implies
\begin{equation}
\sum_{y \in \mathcal{Y}} \sum_{B \in \mathcal{B}_{y,l}}S_{\text{att}}(Z;Y,B,y)
\ge
-M_l \, {\rho_{\mathcal{Z}}}^2
\enspace.
\end{equation}
\item If $|\mathcal{Y}|>2$, then the trivial bound on the repulsion term (inner products $= - {\rho_\mathcal{Z}}^2$) is not tight as this could only be achieved if the classes were on opposite poles of the sphere.
Thus we need an additional step: instead of summing the repulsion terms, $S_{\text{rep}}(Z;Y,B,y)$, over all labels and batches, as done in \ref{thm:supcon:proof:step-4}, we re-write this summation as a sum over pairs of indices $(n,m) \in [N]^2$ of different classes $y_n\neq y_m$ (see Lemma~\ref{lem:supcon_batches_to_indices}). That is,
\begin{equation}
\sum_{y \in \mathcal{Y}} \sum_{B \in \mathcal{B}_{y,l}}S_{\text{rep}}(Z;Y,B,y)
=
\sum_ {y \in \mathcal{Y}}
\sum_ {\substack{n \in [N] \\ y_n = y}}
\sum_ {\substack{m \in [N] \\ y_m \neq y}}
K_{n,m}(y,l)
\inprod{z_n}{z_m}
\enspace,
\end{equation}
where
$K_{n,m}(y,l) :=
\frac{1}{l(b-l)}
\sum_{B \in \mathcal{B}_{y,l}}
\mult{ B_y }(n)
\mult{ {B_y}^C }(m)$\enspace.
\item As we assume the label configuration $Y$ to be balanced, we get that
\begin{equation}
K_{n,m}(y,l) = \frac{M_l}{N^2} \frac{|\mathcal{Y}|}{|\mathcal{Y}|-1} \enspace,
\end{equation}
which only depends on $l$ (and not on $y$), see Lemma~\ref{lem:supcon_combinatorics} and Eq.~\eqref{eq:supcon:K_balanced}. Thus, it suffices to bound
\begin{equation}
\sum_{y\in \mathcal{Y}}
\sum_{\substack{n\in [N]\\y_n = y}}
\sum_{\substack{m\in [N]\\y_m \neq y}}
\inprod{z_n}{z_m}
\ge
- {\rho_\mathcal{Z}}^2
\sum_{y\in \mathcal{Y}}
\mult{Y}(y)^2
= - \frac{N^2}{|\mathcal{Y}|} {\rho_\mathcal{Z}}^2
\enspace,
\end{equation}
where equality is attained if and only if (a) $\sum_ {n \in [N]} z_n = 0$, and (b) $y_n=y_m$ $\Rightarrow$ $z_n = z_m$ (see Lemma~\ref{lem:supcon_rep_sum}).
\item Finally, we combine all results from Steps 1--7 and obtain the asserted lower bound
(see Lemma~\ref{lem:supcon_final}), i.e.,
\begin{align}
\mathcal{L}_{\operatorname{SC}}(Z;Y)
&\ge
\sum_{l = 2}^{b} M_l
l \log
\left(
l - 1 + (b-l)
\exp \left(- \frac{1}{M_l}
\left(
\frac{M_l}{N^2} \frac{|\mathcal{Y}|}{|\mathcal{Y}|-1}
\frac{N^2}{|\mathcal{Y}|}
+ M_l
\right)
{\rho_{\mathcal{Z}}}^2
\right)
\right)\\
&=
\sum_{l = 2}^{b} M_l
l \log
\left(
l - 1 + (b-l)
\exp \left(- \frac{|\mathcal{Y}|}{|\mathcal{Y}|-1}
{\rho_{\mathcal{Z}}}^2
\right)
\right)
\enspace,
\end{align}
where equality is attained if and only if all instances $z_n$ with equal label $y_n$ collapse to a vertex $\zeta_{y_n}$ of a regular simplex, inscribed in a hypersphere of radius ${\rho_{\mathcal{Z}}}$, i.e., conditions \ref{con:supcon:1} and \ref{con:supcon:2} in Theorem~\ref{thm:supcon}.
\end{enumerate}
\end{proof}
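The equality case in Step~7 can be verified numerically: at a regular simplex configuration, the inter-class inner-product sum attains the value $-N^2{\rho_\mathcal{Z}}^2/|\mathcal{Y}|$ exactly. A small sketch (the simplex is constructed from centered standard basis vectors, a standard construction; pairwise inner products between distinct vertices equal $-\rho^2/(K-1)$):

```python
import math

def simplex_vertices(K, rho):
    """K vertices of a regular simplex on the radius-rho sphere in R^K,
    obtained by centering the standard basis and rescaling."""
    verts = []
    for k in range(K):
        v = [-1.0 / K] * K
        v[k] += 1.0
        norm = math.sqrt(sum(x * x for x in v))
        verts.append([rho * x / norm for x in v])
    return verts

K, rho, n_per_class = 3, 2.0, 4
Z, Y = [], []
for k, v in enumerate(simplex_vertices(K, rho)):
    Z += [v] * n_per_class               # all points of class k collapse to v
    Y += [k] * n_per_class
N = len(Y)

# sum of inner products over ordered pairs with different labels
inter = sum(
    sum(zn[d] * zm[d] for d in range(K))
    for n, zn in enumerate(Z) for m, zm in enumerate(Z) if Y[n] != Y[m]
)
bound = -N * N / K * rho ** 2
assert abs(inter - bound) < 1e-9         # equality at the regular simplex
```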
\clearpage
\subsection{Technical lemmas}
In the following, we provide proofs for all technical lemmas invoked throughout Steps 1--8 in the proof of Theorem~\ref{thm:supcon}.
\begin{Slem}
\label{lem:supcon_S}
Fix a class ${y \in \mathcal{Y}}$ and a batch ${B \in \mathcal{B}}$ with $\mult{\Upsilon(B)}(y) \in \set{2,\dots,b}$. For every $Z\in \mathcal{Z}^N$ and every $Y\in \mathcal{Y}^N$, the class-specific batch-wise loss $\ell_{\operatorname{SC}}(Z; Y,B,y)$ is bounded from below by
\begin{equation}
\ell_{\operatorname{SC}}(Z;Y,B,y) \ge
\mult{\Upsilon(B)}(y)
\log
\left(
\mult{\Upsilon(B)}(y) - 1 + (b-\mult{\Upsilon(B)}(y))
\exp \left( S(Z;Y,B,y)
\right)
\right)
\enspace,
\end{equation}
where equality is attained if and only if all of the following hold:
\begin{enumerate}[label={(Q\arabic*)},labelindent=10pt,leftmargin=!,labelwidth=\widthof{\ref{last-item}}]
\item \label{con:supcon_batch_class:att}
$\forall i \in B$ there is a $C_i(B,y)$ such that $\forall j\in {\By{y}{Y}{B}} \setminus \mset{i}$ all inner products $\inprod{z_i}{z_j}= C_i(B,y)$ are equal.
\item \label{con:supcon_batch_class:rep}
$\forall i \in B$ there is a $D_i(B,y)$ such that $\forall j\in {\By{y}{Y}{B}}^C$ all inner products $\inprod{z_i}{z_j}= D_i(B,y)$ are equal.
\end{enumerate}
\end{Slem}
\begin{proof}
The lemma follows from an application of Jensen's inequality.
In particular, we first need to bring the class-specific batch-wise loss into a form amenable to Jensen's inequality.
Since $\mult{\Upsilon(B)}(y)>1$, we have that
\begin{align}
\ell_{\operatorname{SC}}(Z;Y,B,y)
& =
-
\sum\limits_{i \in B_y}
\frac{1}{|B_{y_i}|-1}
\sum\limits_{j \in B_{y_i}\setminus \{\mskip-5mu\{ i \}\mskip-5mu\}}
\log
\left(
\frac
{\exp\big(\dm{z_i}{z_j}\big)}
{
\sum\limits_{k \in B \setminus \{\mskip-5mu\{ i \}\mskip-5mu\}}
\exp\big(\dm{z_i}{z_k}\big)
}
\right)
\\
&=
\sum\limits_{i \in B_y}
\log
\left(
\frac
{
\sum\limits_{k \in B \setminus \{\mskip-5mu\{ i \}\mskip-5mu\}}
\exp\big(\dm{z_i}{z_k}\big)
}
{\prod\limits_{j \in B_{y_i}\setminus \{\mskip-5mu\{ i \}\mskip-5mu\}} \exp\big(\dm{z_i}{z_j}\big)^{\nicefrac{1}{|B_{y_i}|-1}}}
\right)
\\
& = \label{lem:supcon_S:eqn:1}
\sum\limits_{i \in B_y}
\log
\left(
\frac
{
\sum\limits_{k \in B \setminus \{\mskip-5mu\{ i \}\mskip-5mu\}}
\exp\big(\dm{z_i}{z_k}\big)
}
{\exp\big(\frac{1}{|B_y\setminus\mset{i}|}\sum\limits_{j \in B_{y_i}\setminus \{\mskip-5mu\{ i \}\mskip-5mu\}}\dm{z_i}{z_j}\big)}
\right)
\enspace.
\end{align}
We will focus on the sum in the numerator:
For every $i\in \By{y}{Y}{B}$, we write
\begin{equation}
\label{lem:supcon_S:eqn3}
{\sum_{k\in B\setminus \mset{i}} \exp (\inprod{z_i}{z_k})}
=
{\sum_{k\in { \By{y}{Y}{B}}\setminus \mset{i} } \exp (\inprod{z_i}{z_k})}
+ {\sum_{k\in { \By{y}{Y}{B}}^C } \exp (\inprod{z_i}{z_k})}
\enspace.
\end{equation}
First, assume that $\mult{\Upsilon(B)}(y) \neq b$.
As the exponential function is convex, we can leverage Jensen's inequality on both sums, resulting in
\begin{align}
{\sum_{k\in { \By{y}{Y}{B}}\setminus \mset{i} } \exp (\inprod{z_i}{z_k})}
& \stackrel{\ref{con:supcon_batch_class:att}}\ge
|{ \By{y}{Y}{B}}\setminus \mset{i}|
\exp \left(
\frac{1}{|{ \By{y}{Y}{B}}\setminus \mset{i}|}
\sum_{k \in { \By{y}{Y}{B}} \setminus \mset{i}}\inprod{z_i}{z_k}
\right)
\\
{\sum_{k\in { \By{y}{Y}{B}}^C } \exp (\inprod{z_i}{z_k})}
& \stackrel{\ref{con:supcon_batch_class:rep}}\ge
|{\By{y}{Y}{B}}^C|
\exp \left(
\frac{1}{|{\By{y}{Y}{B}}^C|}
\sum_{k \in { \By{y}{Y}{B}}^C }\inprod{z_i}{z_k}
\right)
\enspace.
\end{align}
Herein, equality is attained if and only if
\begin{enumerate}[label={(Q\arabic*)},labelindent=10pt,leftmargin=!,labelwidth=\widthof{\ref{last-item}}]
\item[\ref{con:supcon_batch_class:att}]
There is a $C_i(B,y)$ such that $\forall j\in {\By{y}{Y}{B}} \setminus \mset{i}$ all inner products $\inprod{z_i}{z_j}= C_i(B,y)$ are equal.
\item[\ref{con:supcon_batch_class:rep}]
There is a $D_i(B,y)$ such that $\forall j\in {\By{y}{Y}{B}}^C$ all inner products $\inprod{z_i}{z_j}= D_i(B,y)$ are equal.
\end{enumerate}
Thus, using $\exp(a)/\exp(b)= \exp(a-b)$, we bound the argument of the $\log$ in Eq.~\eqref{lem:supcon_S:eqn:1} by
\begin{align}
& \frac
{\sum_{k\in B\setminus \mset{i}} \exp (\inprod{z_i}{z_k})}
{\exp \left(\frac {1} {|{\By{y}{Y}{B}} \setminus \mset{i}|}\sum_{j \in {\By{y}{Y}{B}}\setminus \mset i}\inprod{z_i}{z_j}\right)}
\\
\ge &
\label{lem:supcon_S:eqn2}
|{ \By{y}{Y}{B}}\setminus \mset{i}| + |{\By{y}{Y}{B}}^C|
\exp \left(
\underbrace{
\frac{1}{|{\By{y}{Y}{B}}^C|}
\sum_{k \in { \By{y}{Y}{B}}^C }\inprod{z_i}{z_k}
-
\frac{1}{|{ \By{y}{Y}{B}}\setminus \mset{i}|}
\sum_{j \in { \By{y}{Y}{B}} \setminus \mset{i}}\inprod{z_i}{z_j}
}_{S(Z;Y,B,y)}
\right)
\enspace.
\end{align}
Hence, using $|{\By{y}{Y}{B}}| = \mult{\Upsilon(B)}(y)$ and the definition of $S(Z;Y,B,y)$, we obtain the claimed bound
\begin{equation}
\ell_{\operatorname{SC}}(Z;Y,B,y) \ge
\mult{\Upsilon(B)}(y)
\log
\left(
\mult{\Upsilon(B)}(y) - 1 + (b-\mult{\Upsilon(B)}(y))
\exp \left( S(Z;Y,B,y)
\right)
\right)
\enspace.
\end{equation}
Note that equality is attained if and only if the above conditions hold for every $i\in \By{y}{Y}{B}$. Also, note that the respective constants, $C_i(B,y)$ and $D_i(B,y)$, indeed depend on the batch $B$ and the label $y$.
The case of $\mult{\Upsilon(B)}(y) = b$ follows from an analogous argument starting from Eq.~\eqref{lem:supcon_S:eqn2} under the observation that, in this case, $B_y=B$ and ${B_y}^C= \emptyset$. This leads to the inequality
\begin{equation}
\ell_{\operatorname{SC}}(Z;Y,B,y) \ge b
\log
\left(
b - 1
\right)
\enspace,
\end{equation}
with equality condition~\ref{con:supcon_batch_class:att}. Note that the statement of the lemma is phrased such that the results from this case are automatically included.
\end{proof}
\begin{Slem}
\label{lem:supcon_jensen}
Let $l\in \set{2,\dots,b}$. For every $Y \in \mathcal{Y}^N$ and every $Z\in \mathcal{Z}^N$, we have that
\begin{align}
&\frac{1}{M_l}
\sum_{y \in \mathcal{Y}} \sum_ {\substack{ B \in \mathcal{B}_{y,l}}}
\log
\left(
l - 1 + (b-l)
\exp \left( S(Z;Y,B,y)
\right)
\right)
\\
\ge
&\log
\left(
l - 1 + (b-l)
\exp \left(
\frac{1}{M_l}
\sum_{y \in \mathcal{Y}} \sum_ {B \in \mathcal{B}_{y,l}}
S(Z;Y,B,y)
\right)
\right)
\enspace,
\nonumber
\end{align}
where $M_l =\sum_ {y \in \mathcal{Y}} |\mathcal{B}_{y,l}|$ and equality is attained if and only if:
\begin{enumerate}[label={(Q\arabic*)},labelindent=10pt,leftmargin=!,labelwidth=\widthof{\ref{last-item}}]
\setcounter{enumi}{2}
\item \label{con:supcon_jensen}
$l=b$ or there is a constant $D(l)$ such that for every $y\in\mathcal{Y}$ and for every $B\in \mathcal{B}_{y,l}$ the values of $S(Z;Y,B,y) = D(l)$ agree.
\end{enumerate}
\end{Slem}
\begin{proof}
Let $\alpha,\beta>0$ and $f:\mathbb{R} \to \mathbb{R}$, $x\mapsto \log(\alpha +\beta \exp(x))$. The function $f$ is smooth with second derivative
$f''(x) = \frac{\alpha \beta e^x}{\left(\alpha +\beta e^x\right)^2}>0$ and therefore it is convex.
Thus, by Jensen's inequality, for every finite sequence $(x_{B,y})_{ ({y \in \mathcal{Y}},{B \in \mathcal{B}_{y,l}})}$ it holds that
\begin{equation}
\frac{1}{\sum_{{y \in \mathcal{Y}}}|\mathcal{B}_{y,l}|}
\sum_{y \in \mathcal{Y}} \sum_ {\substack{ B \in \mathcal{B}_{y,l}}} f(x_{B,y})
\stackrel{\ref{con:supcon_jensen}}{\ge}
f \left(
\frac{1}{\sum_{{y \in \mathcal{Y}}}|\mathcal{B}_{y,l}|}
\sum_{y \in \mathcal{Y}} \sum_ {\substack{ B \in \mathcal{B}_{y,l}}}
x_{B,y}
\right)
\enspace.
\end{equation}
Setting $\alpha = l-1$, $\beta = b-l$ and $x_{B,y} = S(Z;Y,B,y)$, we obtain the bound from the statement of the lemma. Furthermore, equality is attained if and only if:
\begin{enumerate}[label={(Q\arabic*)},labelindent=10pt,leftmargin=!,labelwidth=\widthof{\ref{last-item}}]
\item[\ref{con:supcon_jensen}]
There is a constant $D(l)$ such that for every $y\in\mathcal{Y}$ and for every $B\in \mathcal{B}_{y,l}$ the values of $S(Z;Y,B,y) = D(l)$ agree.
\end{enumerate}
\end{proof}
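The convexity underlying Lemma~\ref{lem:supcon_jensen} is easy to probe numerically. A minimal sketch of Jensen's inequality for $f(x)=\log(\alpha+\beta e^x)$ with $\alpha=l-1$, $\beta=b-l$ (the values $l=2$, $b=4$ below are arbitrary test choices):

```python
import math
import random

def f(x, alpha, beta):
    # f(x) = log(alpha + beta * exp(x)); convex for alpha, beta > 0
    return math.log(alpha + beta * math.exp(x))

rng = random.Random(1)
alpha, beta = 1.0, 2.0                   # alpha = l - 1, beta = b - l (l = 2, b = 4)
xs = [rng.uniform(-2.0, 2.0) for _ in range(200)]
mean_x = sum(xs) / len(xs)
mean_f = sum(f(x, alpha, beta) for x in xs) / len(xs)
assert mean_f >= f(mean_x, alpha, beta)  # Jensen: mean of f >= f of mean
```

Note that for $l=b$ we have $\beta=0$, so $f$ degenerates to a constant and equality holds trivially, matching the "$l=b$ or" clause in condition~\ref{con:supcon_jensen}.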
\clearpage
Next, we combine Lemma~\ref{lem:supcon_S} with Lemma~\ref{lem:supcon_jensen}, which implies a bound with more tangible equality conditions.
\begin{Slem}
\label{lem:supcon_outer_bound}
For every $Y\in \mathcal{Y}^N$ and every $Z\in \mathcal{Z}^N$
the supervised contrastive loss $\mathcal{L}_{\operatorname{SC}}$ is bounded from below by
\begin{equation}
\label{eq:supcon_outer_bound}
\mathcal{L}_{\operatorname{SC}}(Z;Y)
\ge
\sum_{l = 2}^{b} l \, M_l
\log
\left(
l - 1 + (b-l)
\exp \left(
\frac {1} {M_l}
\sum_{y \in \mathcal{Y}} \sum_ {B \in \mathcal{B}_{y,l}}
S(Z;Y,B,y)
\right)
\right)
\enspace,
\end{equation}
where $M_l$ is defined as in Lemma~\ref{lem:supcon_jensen} and equality is attained if and only if:
\begin{enumerate}[label={(A\arabic*)},labelindent=10pt,leftmargin=!,labelwidth=\widthof{\ref{last-item}}]
\item \label{con:supcon_outer_bound:intra}
There exists a constant $\alpha$, such that $\forall n,m\in [N]$, $y_n = y_m$ implies $\inprod{z_n}{z_m} = \alpha$\enspace.
\item \label{con:supcon_outer_bound:inter}
There exists a constant $\beta$, such that $\forall n,m\in [N]$, $y_n \neq y_m$ implies $\inprod{z_n}{z_m} = \beta$\enspace.
\end{enumerate}
\end{Slem}
\begin{proof}
First, observe that $\ell_{\operatorname{SC}}(Z;Y,B,y)= 0$ if $B\in \mathcal{B}_{y,0} \cup \mathcal{B}_{y,1}$. Leveraging Lemma~\ref{lem:supcon_S} and Lemma~\ref{lem:supcon_jensen}, we get
\begin{align}
\mathcal{L}_{\operatorname{SC}}(Z;Y)
&= \sum_ {B \in \mathcal{B}} \sum_ {y \in \mathcal{Y}} \ell_{\operatorname{SC}}(Z;Y,B,y) \\
& = \sum_{l =2}^{b} \sum_ {y \in \mathcal{Y}} \sum_{B\in \mathcal{B}_{y,l}} \ell_{\operatorname{SC}}(Z;Y,B,y)
\\
& \stackrel{\text{Lem.~\ref{lem:supcon_S}}}{\ge}
\sum_{l =2}^{b} \sum_{y \in \mathcal{Y}} \sum_{B \in \mathcal{B}_{y,l}}
l \,
\log
\left(
l - 1 + (b-l)
\exp \left( S(Z;Y,B,y)
\right)
\right)
\\
&
\label{lem:supcon_outer_bound:eq1}
\stackrel{\text{Lem.~\ref{lem:supcon_jensen}}}{\ge}
\sum_{l =2}^{b}l \, M_l
\log
\left(
l - 1 + (b-l)
\exp \left(
\frac {1} {M_l}
\sum_{y \in \mathcal{Y}} \sum_ {B \in \mathcal{B}_{y,l}}
S(Z;Y,B,y)
\right)
\right)
\enspace.
\end{align}
Here equality is attained if and only if all of the following conditions hold:
\begin{enumerate}[labelindent=10pt,leftmargin=!,labelwidth=\widthof{\ref{last-item}}]
\item[\ref{con:supcon_batch_class:att}]
$\forall l \in \set{2,\dots,b}$, $\forall y \in \mathcal{Y}$, $\forall B\in \mathcal{B}_{y,l}$ and $\forall i \in B$ there is a $C_i(B,y)$ such that $\forall j\in {\By{y}{Y}{B}} \setminus \mset{i}$ all inner products $\inprod{z_i}{z_j}= C_i(B,y)$ are equal.
\item[\ref{con:supcon_batch_class:rep}]
$\forall l \in \set{2,\dots,b}$, $\forall y \in \mathcal{Y}$, $\forall B\in \mathcal{B}_{y,l}$ and $\forall i \in B$ there is a $D_i(B,y)$ such that $\forall j\in {\By{y}{Y}{B}}^C$ all inner products $\inprod{z_i}{z_j}= D_i(B,y)$ are equal.
\item[\ref{con:supcon_jensen}]
$\forall l \in \set{2,\dots,b-1}$, there is a constant $D(l)$ such that for every $y\in\mathcal{Y}$ and for every $B\in \mathcal{B}_{y,l}$ the values of $S(Z;Y,B,y) = D(l)$ agree.
\end{enumerate}
It remains to show that \ref{con:supcon_batch_class:att} \& \ref{con:supcon_batch_class:rep} \& \ref{con:supcon_jensen} is equivalent to:
\begin{enumerate}[labelindent=10pt,leftmargin=!,labelwidth=\widthof{\ref{last-item}}]
\item [\ref{con:supcon_outer_bound:intra}]
There exists a constant $\alpha$, such that $\forall n,m\in [N]$, $y_n = y_m$ implies $\inprod{z_n}{z_m} = \alpha$\enspace.
\item [\ref{con:supcon_outer_bound:inter}]
There exists a constant $\beta$, such that $\forall n,m\in [N]$, $y_n \neq y_m$ implies $\inprod{z_n}{z_m} = \beta$\enspace.
\end{enumerate}
Recall the definition of the auxiliary function $S$, i.e.,
\begin{align}
S(Z;Y,B,y)&= S_{\text{att}}(Z;Y,B,y) + S_{\text{rep}}(Z;Y,B,y) \enspace, \text{where}\\
S_{\text{att}}(Z;Y,B,y)
& = -
\frac{1}{|{\By{y}{Y}{B}}|\,(|{\By{y}{Y}{B}}|-1)}
\sum_{i \in {\By{y}{Y}{B}}}
\sum_{j \in {\By{y}{Y}{B}}\setminus\mset{i}}
\inprod{z_i}{z_j}
\\
S_{\text{rep}}(Z;Y,B,y)
&=
\frac{1}{|{\By{y}{Y}{B}}|\,|{\By{y}{Y}{B}}^C|}
\sum_{i \in {\By{y}{Y}{B}}}
\sum_{j \in {\By{y}{Y}{B}}^C}
\inprod{z_i}{z_j}
\enspace.
\end{align}
We start with the direction \ref{con:supcon_outer_bound:intra} \& \ref{con:supcon_outer_bound:inter} $\implies$ \ref{con:supcon_batch_class:att} \& \ref{con:supcon_batch_class:rep} \& \ref{con:supcon_jensen}.
\begin{enumerate}[labelindent=10pt,leftmargin=!,labelwidth=\widthof{\ref{last-item}}]
\item[\ref{con:supcon_batch_class:att}]
Fix $l\in\set{2,\dots,b}$, $y\in Y$, $B\in \mathcal{B}_{y,l}$ and $i\in B$.
Let $j\in {\By{y}{Y}{B}} \setminus \mset{i}$, i.e., $y_j =y = y_i$. Therefore condition \ref{con:supcon_outer_bound:intra} implies $\inprod{z_i}{z_j} = \alpha$, i.e., condition \ref{con:supcon_batch_class:att} is fulfilled with $C_i(B,y) = \alpha$.
\item[\ref{con:supcon_batch_class:rep}]
Fix $l\in\set{2,\dots,b}$, $y\in Y$, $B\in \mathcal{B}_{y,l}$ and $i\in B$.
Let $j\in {\By{y}{Y}{B}}^C$, i.e., $y_j \neq y = y_i$. Therefore condition \ref{con:supcon_outer_bound:inter} implies $\inprod{z_i}{z_j} = \beta$, i.e. condition \ref{con:supcon_batch_class:rep} is fulfilled with $D_i(B,y) = \beta$.
\item[\ref{con:supcon_jensen}]
Fix $l\in \set{2,\dots,b-1}$. Let $y\in \mathcal{Y}$ and $B\in \mathcal{B}_{y,l}$. By condition \ref{con:supcon_outer_bound:intra},
$S_{\text{att}}(Z;Y,B,y) = -\alpha$ and by condition \ref{con:supcon_outer_bound:inter},
$S_{\text{rep}}(Z;Y,B,y) = \beta$, so $S(Z;Y,B,y) = S_{\text{rep}}(Z;Y,B,y) + S_{\text{att}}(Z;Y,B,y) = \beta - \alpha$ and condition \ref{con:supcon_jensen} is fulfilled with $D(l)=\beta-\alpha$.
\end{enumerate}
Next, we show \ref{con:supcon_batch_class:att} \& \ref{con:supcon_batch_class:rep} \& \ref{con:supcon_jensen} $\implies$ \ref{con:supcon_outer_bound:intra} \& \ref{con:supcon_outer_bound:inter}.
Let $y,y'\in \mathcal{Y}$ and $n,m,n',m'\in [N]$ with $y_n=y_m = y$ and $y_{n'} = y_{m'} = y'$.
For brevity, we write multisets such that the multiplicity of each element is denoted as a superscript, i.e., $\{\mskip-5mu\{ n^b \}\mskip-5mu\}$ denotes the multiset $\{\mskip-5mu\{ n, \dots, n \}\mskip-5mu\}$ which contains $n$ exactly $b$ times.
Recall that we assume $b\ge 3$.
\begin{enumerate}[labelindent=10pt,leftmargin=!,labelwidth=\widthof{\ref{last-item}}]
\item [\ref{con:supcon_outer_bound:intra}]
We need to show that $\inprod{z_n}{z_m} = \inprod{z_{n'}}{z_{m'}}$.
There are two cases: $y = y'$ and $y\neq y'$.
First, assume $y \neq y'$.\\
Choose $l=2$ and pick the batch $B_1 = \mset{n,m,(n')^{b-2}}\in \mathcal{B}_{y,2}$.
Then
$$S(Z; Y, B_1,y) = S_{\text{att}}(Z;Y,B_1,y) + S_{\text{rep}}(Z;Y,B_1,y) = - \inprod{z_n}{z_m} + \frac{1}{2}\inprod{z_n}{z_{n'}}+\frac{1}{2} \inprod{z_m}{z_{n'}}\enspace.$$
Condition~\ref{con:supcon_batch_class:rep} implies that $\inprod{z_n}{z_{n'}} = \inprod{z_m}{z_{n'}}$, and so the above simplifies to $S(Z; Y, B_1,y) = - \inprod{z_n}{z_m} + \inprod{z_n}{z_{n'}}$.
An analogous argument for the batch $B_2= \mset{n',m',n^{b-2}}\in \mathcal{B}_{y',2}$ implies that $S(Z; Y, B_2,y') = - \inprod{z_{n'}}{z_{m'}} + \inprod{z_n}{z_{n'}}$.
Finally, by condition~\ref{con:supcon_jensen}, we have that $S(Z; Y, B_1,y) = S(Z; Y, B_2,y')$ and thus $\inprod{z_{n}}{z_{m}} = \inprod{z_{n'}}{z_{m'}}$.
\vskip2ex
Now, assume $y=y'$.\\
Let $p\in [N]$ such that $y_p \neq y$.
Again, choose $l=2$ and pick batches $B_1 = \mset{n,m,p^{b-2}}\in \mathcal{B}_{y,2}$ and $B_2 =\mset{n',m',p^{b-2}}\in \mathcal{B}_{y,2}$. By the same argument as in the preceding case of $y\neq y'$, we have that
$S(Z; Y, B_1,y) = - \inprod{z_n}{z_m} + \inprod{z_n}{z_{p}}$
and $S(Z; Y, B_2,y) = - \inprod{z_{n'}}{z_{m'}} + \inprod{z_{n'}}{z_{p}}$.
Therefore, condition~\ref{con:supcon_jensen} implies that
$$- \inprod{z_n}{z_m} + \inprod{z_n}{z_{p}} = - \inprod{z_{n'}}{z_{m'}} + \inprod{z_{n'}}{z_{p}} \enspace.$$
Now, pick the batch $B_3 = \mset{n,m,p^{b-2}}$. From condition~\ref{con:supcon_batch_class:rep} it follows that $\inprod{z_n}{z_p} = \inprod{z_m}{z_p}$ and thus $\inprod{z_{n'}}{z_{m'}} = \inprod{z_{n}}{z_{m}}$.
\item [\ref{con:supcon_outer_bound:inter}]
We need to show that $\inprod{z_n}{z_{n'}} = \inprod{z_{m}}{z_{m'}}$ if $y\neq y'$.
Choose $l=2$ and pick the batches $B_1 = \mset{n^2,(n')^{b-2}} \in \mathcal{B}_{y,2}$ and $B_2 = \mset{m^2,(m')^{b-2}} \in \mathcal{B}_{y,2}$.
We can already assume that condition \ref{con:supcon_outer_bound:intra} holds, so for every batch $B \in \mathcal{B}_{y,2}$, we have that $S_{\text{att}}(Z;Y,B,y) = - \alpha$ and thus
\begin{equation}
S(Z;Y,B,y)
=
-\alpha + S_{\text{rep}}(Z;Y,B,y)
=
- \alpha +
\frac{1}{2(b-2)}
\sum_{i \in {\By{y}{Y}{B}}}
\sum_{j \in {\By{y}{Y}{B}}^C}
\inprod{z_i}{z_j}
\enspace.
\end{equation}
Therefore, $S(Z;Y,B_1,y) = \inprod{z_n}{z_{n'}} - \alpha$ and $S(Z;Y,B_2,y) = \inprod{z_m}{z_{m'}} - \alpha$.
By condition \ref{con:supcon_jensen}, we have that $S(Z;Y,B_1,y) = S(Z;Y,B_2,y)$ and so $\inprod{z_n}{z_{n'}} = \inprod{z_m}{z_{m'}}$.
\end{enumerate}
\end{proof}
In the following, we address the two parts of the sum in the exponent in Eq. \eqref{eq:supcon_outer_bound}, i.e.,
\begin{equation}
\sum_{y \in \mathcal{Y}} \sum_ {B \in \mathcal{B}_{y,l}}
S(Z;Y,B,y)
=
\underbrace{
\left(
\sum_{y \in \mathcal{Y}} \sum_ {B \in \mathcal{B}_{y,l}}
S_{\text{att}}(Z;Y,B,y)
\right)
}_{\text{Lem.~}\ref{lem:supcon_att}}
+
\underbrace{
\left(
\sum_{y \in \mathcal{Y}} \sum_ {B \in \mathcal{B}_{y,l}}
S_{\text{rep}}(Z;Y,B,y)
\right)
}_{\text{Lem.~}\ref{lem:supcon_rep}}
\enspace.
\end{equation}
While the first summand is handled easily via Lemma~\ref{lem:supcon_att}, handling the second summand needs further considerations, encapsulated in Lemmas~\ref{lem:supcon_batches_to_indices},
\ref{lem:combinatorics}, \ref{lem:supcon_combinatorics}, \ref{lem:supcon_rep_sum} and finally combined into Lemma~\ref{lem:supcon_rep}.
\begin{Slem}[Sum of attraction terms]
\label{lem:supcon_att}
Let $l\in \set{2,\dots,b}$ and let $\mathcal{Z} = \mathbb{S}_{\rho_\mathcal{Z}}^{h-1}$.
For every $Y\in \mathcal{Y}^N$ and every $Z\in \mathcal{Z}^N$, it holds that
\begin{equation}
\sum_{y \in \mathcal{Y}}\sum_{B \in \mathcal{B}_{y,l}}
S_{\text{att}}(Z;Y,B,y)
\ge
- \left(\sum_{y\in\mathcal{Y}} |\mathcal{B}_{y,l}|\right)
{\rho_{\mathcal{Z}}}^2
\enspace,
\end{equation}
where equality is attained if and only if:
\begin{enumerate}[label={(Q\arabic*)}, start=4,labelindent=10pt,leftmargin=!,labelwidth=\widthof{\ref{last-item}}]
\item \label{con:supcon_att}
For every $n,m\in [N]$, $y_n = y_m$ implies $z_n = z_m$\enspace.
\end{enumerate}
\end{Slem}
\begin{proof}
Recall the definition of $S_{\text{att}}(Z;Y,B,y)$ from Eq.~\eqref{eq:def_s_att}:
\begin{equation}
S_{\text{att}}(Z;Y,B,y)
= -
\frac{1}{|{\By{y}{Y}{B}}|\,(|{\By{y}{Y}{B}}|-1)}
\sum_{i \in {\By{y}{Y}{B}}}
\sum_{j \in {\By{y}{Y}{B}}\setminus\mset{i}}
\inprod{z_i}{z_j}
\enspace.
\end{equation}
Using the Cauchy-Schwarz inequality and the assumption that $\mathcal{Z}$ is a hypersphere of radius $\rho_\mathcal{Z}$, $S_{\text{att}}(Z;Y,B,y)$ is bounded from below by
\begin{equation}
S_{\text{att}}(Z;Y,B,y)
\stackrel{\ref{con:supcon_att}}\ge -
\frac{1}{|{\By{y}{Y}{B}}|\,(|{\By{y}{Y}{B}}|-1)}
\sum_{i \in {\By{y}{Y}{B}}}
\sum_{j \in {\By{y}{Y}{B}}\setminus\mset{i}}
\norm{z_i} \norm{z_j}
=
- {\rho_{\mathcal{Z}}}^2
\enspace,
\end{equation}
which already implies the bound in the statement of the lemma.
For fixed $l\in\set{2,\dots,b}$, equality is attained if and only if there is equality in the Cauchy-Schwarz inequality.
This means, that for every $y\in\mathcal{Y}$, for every $B \in \mathcal{B}_{y,l}$ and for every $i,j \in \By{y}{Y}{B}$ there exists $\lambda\ge0$, such that $z_i = \lambda z_j$.
Since the $z_i$ and $z_j$ are on a hypersphere, this is equivalent to $z_i = z_j$.
Furthermore,
for each pair of indices $n,m \in [N]$ with equal class $y_n = y_m =y$, there exists a batch $B \in \mathcal{B}_{y,l}$ containing both indices. Hence the equality condition is equivalent to
\begin{enumerate}[labelindent=10pt,leftmargin=!,labelwidth=\widthof{\ref{last-item}}]
\item [\ref{con:supcon_att}] For every $n,m\in [N]$, $y_n = y_m$ implies $z_n =z_m$.
\end{enumerate}
\end{proof}
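The bound of Lemma~\ref{lem:supcon_att} is easy to check numerically. The following Python sketch (the radius, batch size and dimension are hypothetical choices, not part of the formal argument) samples points on a sphere, evaluates the attraction term of a single batch, and confirms both the inequality and the equality case \ref{con:supcon_att}:

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 1.5  # sphere radius (arbitrary choice for this check)

def s_att(Z):
    """Attraction term of one batch: negative mean of off-diagonal inner products."""
    k = len(Z)
    G = Z @ Z.T
    return -(G.sum() - np.trace(G)) / (k * (k - 1))

# random points on the sphere of radius rho
Z = rng.normal(size=(6, 4))
Z = rho * Z / np.linalg.norm(Z, axis=1, keepdims=True)

assert s_att(Z) >= -rho**2 - 1e-12            # lower bound of the lemma
Z_collapsed = np.repeat(Z[:1], 6, axis=0)     # all points equal, cf. (Q4)
assert abs(s_att(Z_collapsed) + rho**2) < 1e-9  # equality case
```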
Next, we consider the repulsion component.
Recall the definition of $S_{\text{rep}}(Z;Y,B,y)$ from Eq.~\eqref{eq:def_s_rep}. We want to bound
\begin{equation}
\sum_{y \in \mathcal{Y}}\sum_{B \in \mathcal{B}_{y,l}}
S_{\text{rep}}(Z;Y,B,y)
=
\sum_{y \in \mathcal{Y}}\sum_{B \in \mathcal{B}_{y,l}}
\frac{1}{|{\By{y}{Y}{B}}|\,|{\By{y}{Y}{B}}^C|}
\sum_{i \in {\By{y}{Y}{B}}}
\sum_{j \in {\By{y}{Y}{B}}^C}
\inprod{z_i}{z_j}
\enspace.
\end{equation}
Similarly to the sum of the attraction terms in Lemma~\ref{lem:supcon_att}, we could bound each inner product by $\inprod{z_i}{z_j}\ge -{\rho_{\mathcal{Z}}}^2$. However, the resulting inequality would not be tight and thus useless for identifying the minimizer of the sum. This is because, in this case,
equality would be attained if and only if all points $z_n,z_m\in \mathcal{Z}$ of different classes $y_n \neq y_m$ were at opposite poles of the sphere. Yet, this is impossible for $|\mathcal{Y}|>2$, i.e., if there are more than two classes.
Therefore, the argument is slightly more involved and we split it into a sequence of lemmas.
\begin{Slem}
\label{lem:supcon_batches_to_indices}
Let $l \in \set{2,\dots,b-1}$ and let $y \in \mathcal{Y}$.
For every $Y\in\mathcal{Y}^N$ and every $Z\in \mathcal{Z}^N$ the following identity holds:
\begin{equation}
\sum_{B \in \mathcal{B}_{y,l}}
S_{\text{rep}}(Z;Y,B,y)
=
\sum_{\substack{n\in [N]\\ y_n = y}}
\sum_{\substack{m\in [N] \\ y_m \neq y}}
K_{n,m}(y,l) \,
\inprod{z_n}{z_m}
\enspace,
\end{equation}
where for each $n,m\in [N]$ with $y_n = y$ and $y_m \neq y$ the combinatorial factor $K_{n,m}(y,l)$ is defined by
\begin{equation}
K_{n,m}(y,l) =
\frac{1}{l(b-l)}
\sum_{B \in \mathcal{B}_{y,l}}
\mult{ B_y }(n)
\mult{ {B_y}^C }(m)
\enspace.
\end{equation}
\end{Slem}
\begin{proof}
The lemma follows from appropriately partitioning the sum:
\begin{align}
\sum_{B \in \mathcal{B}_{y,l}}
S_{\text{rep}}(Z;Y,B,y)
& =
\sum_{B \in \mathcal{B}_{y,l}}
\frac{1}{|{\By{y}{Y}{B}}|\,|{\By{y}{Y}{B}}^C|}
\sum_{i \in {\By{y}{Y}{B}}}
\sum_{j \in {\By{y}{Y}{B}}^C}
\inprod{z_i}{z_j}
\\
& =
\sum_{n\in [N]} \sum_{m\in [N]}
\frac{1}{l(b-l)}
\sum_{B \in \mathcal{B}_{y,l}}
\sum_{\substack{i \in {\By{y}{Y}{B}} \\ i = n}}
\sum_{\substack{j \in {\By{y}{Y}{B}}^C \\ j = m}}
\inprod{z_i}{z_j}
\\
& =
\sum_{\substack{n\in [N]\\ y_n = y}}
\sum_{\substack{m\in [N] \\ y_m \neq y}}
\inprod{z_n}{z_m}
\frac{1}{l(b-l)}
\sum_{B \in \mathcal{B}_{y,l}}
\left(\sum_{\substack{i \in {\By{y}{Y}{B}} \\ i = n}} 1 \right)
\left(\sum_{\substack{j \in {\By{y}{Y}{B}}^C \\ j = m}} 1 \right)
\\
& =
\sum_{\substack{n\in [N]\\ y_n = y}}
\sum_{\substack{m\in [N] \\ y_m \neq y}}
\inprod{z_n}{z_m}
\frac{1}{l(b-l)}
\sum_{B \in \mathcal{B}_{y,l}}
\mult{ \By{y}{Y}{B} }(n)
\mult{ \By{y}{Y}{B}^C }(m)
\\
& =
\sum_{\substack{n\in [N]\\ y_n = y}}
\sum_{\substack{m\in [N] \\ y_m \neq y}}
\inprod{z_n}{z_m} \,
K_{n,m}(y,l)
\enspace.
\end{align}
\end{proof}
In order to address the quantities $K_{n,m}(y, l)$, we will need the combinatorial identities of the subsequent Lemma~\ref{lem:combinatorics}.
\begin{Slem}
\label{lem:combinatorics}
Let $n,m \in \mathbb{N}$.
\begin{enumerate}
\item The number of $m$-multisets over $[n]$ is
\begin{equation} \label{eq:mset_card}
\msetch{n}{m} = \binom{n + m -1}{m}
\enspace.
\end{equation}
\item
\begin{equation}
\sum_{k=0}^m \msetch{n}{k} = \msetch{n+1}{m}
\label{eq:comb_sum}
\end{equation}
\item
\begin{equation}
\msetch{n+1}{m} = \msetch{n}{m} + \msetch{n+1}{m-1}
\label{eq:mset_rec_rel}
\end{equation}
\item Let $m\ge 1$, then
\begin{equation} \label{eq:mset_sum}
\sum_{k\in [m]} k \msetch{n-1}{m-k} = \msetch{n+1}{m-1}= \frac m n \msetch{n}{m}
\end{equation}
\end{enumerate}
\end{Slem}
\begin{proof}
The first three identities are well known and imply the last one.
\begin{enumerate}[label={\underline{Ad (\arabic*):}}, labelindent=19pt,leftmargin=!,labelwidth=\widthof{\ref{last-item}}]
\item Follows from the stars and bars representation of multisets. Therein, every $m$-multiset over $[n]$ is uniquely determined by the positions of the $m$ stars in an $(n+m-1)$-tuple of stars and bars. Hence, the cardinality of the set of such multisets is the number of $m$-element subsets of an $(n+m-1)$-element set, which
is given by the binomial coefficient in Eq.~\eqref{eq:mset_card}. More precisely, the multiplicity of a number $k\in [n]$ in the multiset is encoded by the number of stars between the $(k-1)$-th and the $k$-th bar.
For example, for $n=5$ and $m=4$ the multiset $\{1,3,4,4\}$ is represented by $(*||*|**|)$.
\item Denote by $\mathcal{P}_{\{\{\,\}\}}(n,m)$ the set of all $m$-multisets over $[n]$. Then
\begin{equation}
\mathcal{P}_{\{\{\,\}\}}(n+1,m) = \bigsqcup_{k = 0}^m \set{M \in \mathcal{P}_{\{\{\,\}\}}(n+1,m)|~ \mult{M}(n+1)=k}
\enspace.
\end{equation}
Thinking of an $m$-multiset over $[n+1]$ that contains the element $(n+1)$ exactly $k$ times as an $(m-k)$-multiset over $[n]$, we get from Eq.~\eqref{eq:mset_card}
\begin{align}
\msetch{n+1}{m}
&= |\mathcal{P}_{\{\{\,\}\}}(n+1,m)| \\
&= \sum_{k=0}^m
|\set{M \in \mathcal{P}_{\{\{\,\}\}}(n+1,m)| \mult{M}(n+1)=k}|
\\
& =
\sum_{k=0}^m \msetch{n}{m-k}
= \sum_{k=0}^m \msetch{n}{k}
\enspace.
\end{align}
\item Follows directly from the previous argument. In particular,
\begin{equation}
\msetch{n+1}{m}
\stackrel{\eqref{eq:comb_sum}}{=}
\sum_{k=0}^m \msetch{n}{k}
= \sum_{k=0}^{m-1} \msetch{n}{k} + \msetch{n}{m}
\stackrel{\eqref{eq:comb_sum}}{=}
\msetch{n+1}{m-1} + \msetch{n}{m}
\enspace.
\end{equation}
\item The second equality is obvious once both sides are expanded to the level of factorials.
For the first equality, we prove by induction the equivalent formula
\begin{equation}
\sum_{k=0}^{m} (m-k) \msetch{n-1}{k}
=
\msetch{n+1}{m-1}
\enspace.
\label{eq:comb_equi}
\end{equation}
First, consider the case $m=1$. Then both
\begin{equation}
\sum_{k=0}^{1} (1-k) \msetch{n-1}{k} = (1-0) \msetch{n-1}{0}= 1
~~ \text{and} ~~
\msetch{n+1}{m-1} = \msetch{n+1}{0} = 1
\enspace.
\end{equation}
Secondly, assume that Eq.~\eqref{eq:comb_equi} holds for $m$. We show that it then also holds for $m+1$, i.e.
\begin{equation}
\sum_{k = 0}^{m+1} (m+1-k) \msetch{n-1}{k} = \msetch{n+1}{m} \enspace.
\end{equation}
The proof is a simple application of the previously derived summation identities:
\begin{align}
\sum_{k=0}^{m+1} (m+1-k) \msetch{n-1}{k}
& =
\underbrace{\sum_{k=0}^m (m-k) \msetch{n-1}{k}}_{\eqref{eq:comb_equi}}
+ \underbrace{\sum_{k=0}^m \msetch{n-1}{k} }_{\eqref{eq:comb_sum}}
\\
& = \msetch{n+1}{m-1} + \msetch{n}{m} \\
& \stackrel{\eqref{eq:mset_rec_rel}}{=} \msetch{n+1}{m}
\enspace.
\end{align}
\end{enumerate}
\end{proof}
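The four identities of Lemma~\ref{lem:combinatorics} can also be verified numerically; a short Python check (the ranges are arbitrary) is:

```python
from math import comb
from fractions import Fraction

def msetch(n, m):
    """Number of m-multisets over [n], i.e. the multiset coefficient C(n+m-1, m)."""
    return comb(n + m - 1, m)

for n in range(2, 8):
    for m in range(1, 8):
        # Eq. (comb_sum): partial sums of multiset coefficients
        assert sum(msetch(n, k) for k in range(m + 1)) == msetch(n + 1, m)
        # Eq. (mset_rec_rel): recurrence relation
        assert msetch(n + 1, m) == msetch(n, m) + msetch(n + 1, m - 1)
        # Eq. (mset_sum): weighted sum identity
        lhs = sum(k * msetch(n - 1, m - k) for k in range(1, m + 1))
        assert lhs == msetch(n + 1, m - 1)
        assert Fraction(m, n) * msetch(n, m) == lhs
```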
\clearpage
\begin{Slem}
\label{lem:supcon_combinatorics}
Let $l \in \set{1,\dots,b-1}$, $Y\in \mathcal{Y}^N$ and $y\in \mathcal{Y}$.
For every $n,m \in [N]$, the combinatorial factor $K_{n,m}(y,l)$ has value
\begin{equation}
K_{n,m}(y,l) = \frac{|\mathcal{B}_{y,l}|}{\mult{Y}(y)(N - \mult{Y}(y))}
\enspace.
\end{equation}
\end{Slem}
\begin{proof}
We have
\begin{align}
K_{n,m}(y,l)
& =
\frac{1}{l(b-l)}
\sum_{B \in \mathcal{B}_{y,l}}
\mult{ \By{y}{Y}{B} }(n)
\mult{ \By{y}{Y}{B}^C }(m)
=
\frac{1}{l(b-l)}
\sum_{p=1}^l
\sum_{q=1}^{b-l}
\sum_{\substack{B \in \mathcal{B}_{y,l} \\ \mult{B_y}(n) = p \\ \mult{{B_y}^C}(m) = q}}
\mult{ \By{y}{Y}{B} }(n)
\mult{ \By{y}{Y}{B}^C }(m)
\\
&=
\frac{1}{l(b-l)}
\sum_{p=1}^l p
\sum_{q=1}^{b-l} q
\sum_{\substack{B \in \mathcal{B}_{y,l} \\ \mult{B_y}(n) = p \\ \mult{{B_y}^C}(m) = q}} 1
\label{lem:supcon_combinatorics:eq1}
\enspace.
\end{align}
Therefore, it is crucial to calculate the cardinality $|\set{B \in \mathcal{B}_{y,l}: \mult{B_y}(n) = p,~ \mult{{B_y}^C}(m) = q}|$.
We can think of each batch $B\in \mathcal{B}$ satisfying the condition
$$B \in \set{B \in \mathcal{B}_{y,l},~ \mult{B_y}(n) = p,~ \mult{{B_y}^C}(m) = q}$$
as a disjoint union of multisets $B = C_n \sqcup C_m \sqcup C_y \sqcup C_{y^C}$,
where
\begin{itemize}
\item $C_n$ is a $p$-multiset over the singleton $\set{n}$,
\item $C_m$ is a $q$-multiset over the singleton $\set{m}$,
\item $C_y$ is an $(l-p)$-multiset over the set $\set{i \in [N]\setminus\set{n}|~ y_i = y}$ of cardinality $\mult{Y}(y)-1$ and
\item $C_{y^C}$ is a $(b-l-q)$-multiset over the set $\set{j \in [N]\setminus\set{m}|~ y_j \neq y}$ of cardinality $N - \mult{Y}(y)-1$.
\end{itemize}
We write $\mathcal{C}_n, \mathcal{C}_m, \mathcal{C}_y$ and $\mathcal{C}_{y^C}$ for the respective sets of multisets. These sets are of cardinalities (see Eq. \eqref{eq:mset_card})
\begin{alignat*}{2}
|\mathcal{C}_{n}|&= \msetch{1}{p} = 1, \qquad
|\mathcal{C}_{y}|&&= \msetch{\mult{Y}(y)-1}{l-p}, \\
|\mathcal{C}_{m}|&= \msetch{1}{q} = 1, \qquad
|\mathcal{C}_{y^C}|&&= \msetch{N - \mult{Y}(y) - 1}{b-l-q},
\end{alignat*}
and so
\begin{equation}
\begin{split}
|\set{B \in \mathcal{B}_{y,l}: \mult{B_y}(n) = p,~ \mult{{B_y}^C}(m) = q}|
= |\mathcal{C}_{n}| \, |\mathcal{C}_{m}|\, |\mathcal{C}_{y}| \, |\mathcal{C}_{y^C}|\\
= \underbrace{\msetch{1}{p} \msetch{1}{q}}_{=1}
\msetch{\mult{Y}(y)-1}{l-p} \msetch{N - \mult{Y}(y) - 1}{b-l-q}
\enspace.
\end{split}
\end{equation}
By a similar argument,
\begin{align}
|\set{B \in \mathcal{B}_{y,l} }|
&= \msetch{\mult{Y}(y)}{l} \msetch{N - \mult{Y}(y) }{b-l}
\enspace.
\end{align}
Therefore, the sum from Eq.~\eqref{lem:supcon_combinatorics:eq1} simplifies to
\begin{align}
K_{n,m}(y,l)
&=
\frac{1}{l(b-l)}
\sum_{p=1}^l p
\sum_{q=1}^{b-l} q|\set{B \in \mathcal{B}_{y,l},~ \mult{B_y}(n) = p,~ \mult{{B_y}^C}(m) = q}|\\
& =
\frac{1}{l(b-l)}
\sum_{p=1}^l p \msetch{\mult{Y}(y)-1}{l-p}
\sum_{q=1}^{b-l} q \msetch{N - \mult{Y}(y) - 1}{b-l-q}
\enspace.
\end{align}
Leveraging Eq.~\eqref{eq:mset_sum}, we get the claimed result
\begin{align}
K_{n,m}(y,l)
& =
\frac{1}{l(b-l)}
\frac{l}{\mult{Y}(y)}
\msetch{\mult{Y}(y)}{l}
\frac{b-l}{N - \mult{Y}(y)}
\msetch{N - \mult{Y}(y)}{b-l}
\\
& =
\frac{|\mathcal{B}_{y,l}|}{\mult{Y}(y)(N - \mult{Y}(y))}
\enspace.
\end{align}
\end{proof}
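Lemma~\ref{lem:supcon_combinatorics} can be confirmed by brute force on a small instance, enumerating all $b$-multisets of indices directly. In the following sketch, the values $N=5$, $b=3$ and the label vector are hypothetical and chosen only to keep the enumeration small:

```python
from itertools import combinations_with_replacement
from fractions import Fraction

# small hypothetical instance: N=5 samples with classes Y, batch size b=3
Y = [0, 0, 1, 1, 1]
N, b = len(Y), 3

def K(n, m, y, l):
    """Brute-force K_{n,m}(y,l) over all b-multisets ('batches') of indices.

    Returns the value of K_{n,m}(y,l) and the number of batches |B_{y,l}|.
    """
    total, n_batches = Fraction(0), 0
    for B in combinations_with_replacement(range(N), b):
        l_B = sum(1 for i in B if Y[i] == y)  # |B_y|, counted with multiplicity
        if l_B != l:
            continue
        n_batches += 1
        total += B.count(n) * B.count(m)      # mult_{B_y}(n) * mult_{B_y^C}(m)
    return total / (l * (b - l)), n_batches

y, l = 0, 2
Ny = Y.count(y)
for n in [0, 1]:          # indices with y_n = y
    for m in [2, 3, 4]:   # indices with y_m != y
        k_val, n_batches = K(n, m, y, l)
        # claimed closed form: |B_{y,l}| / (mult_Y(y) (N - mult_Y(y)))
        assert k_val == Fraction(n_batches, Ny * (N - Ny))
```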
\begin{Slem}
\label{lem:supcon_rep_sum}
Let $\mathcal{Z} = \mathbb{S}_{\rho_\mathcal{Z}}$.
For every $Z\in \mathcal{Z}^N$ and every $Y\in \mathcal{Y}^N$, we have that
\begin{equation}
\sum_{y\in \mathcal{Y}}
\sum_{\substack{n\in [N]\\y_n = y}}
\sum_{\substack{m\in [N]\\y_m \neq y}}
\inprod{z_n}{z_m}
\ge
- {\rho_\mathcal{Z}}^2
\sum_{y\in \mathcal{Y}}
\mult{Y}(y)^2
\enspace,
\end{equation}
where equality is attained if and only if the following conditions hold:
\begin{enumerate}[label={(Q\arabic*)}, start=5,labelindent=10pt,leftmargin=!,labelwidth=\widthof{\ref{last-item}}]
\item \label{con:supcon_rep_sum:1}
$\sum_{n\in [N]} z_n = 0$\enspace.
\item \label{con:supcon_rep_sum:2}
For every $n,m\in [N]$, $y_n = y_m$ implies $z_n = z_m$\enspace.
\end{enumerate}
\end{Slem}
\begin{proof}
We first rewrite the sum as
\begin{align}
\sum_{y\in \mathcal{Y}}
\sum_{\substack{n\in [N]\\y_n = y}}
\sum_{\substack{m\in [N]\\y_m \neq y}}
\inprod{z_n}{z_m}
&=
\sum_{y\in \mathcal{Y}}
\sum_{\substack{y'\in \mathcal{Y} \\ y \neq y'}}
\sum_{\substack{n\in [N]\\y_n = y}}
\sum_{\substack{m\in [N]\\y_m = y'}}
\inprod{z_n}{z_m}
\\
& =
\sum_{y\in \mathcal{Y}}
\sum_{y'\in \mathcal{Y}}
\sum_{\substack{n\in [N]\\y_n = y}}
\sum_{\substack{m\in [N]\\y_m = y'}}
\inprod{z_n}{z_m}
-
\sum_{y\in \mathcal{Y}}
\sum_{\substack{n\in [N]\\y_n = y}}
\sum_{\substack{m\in [N]\\y_m = y}}
\inprod{z_n}{z_m}
\\
& =
\sum_{\substack{n\in [N]}}
\sum_{\substack{m\in [N]}}
\inprod{z_n}{z_m}
-
\sum_{y\in \mathcal{Y}}
\sum_{\substack{n\in [N]\\y_n = y}}
\sum_{\substack{m\in [N]\\y_m = y}}
\inprod{z_n}{z_m}
\\
& =
\inprod{\sum_{\substack{n\in [N]}} z_n}{\sum_{\substack{n\in [N]}}z_n}
-
\sum_{y\in \mathcal{Y}}
\sum_{\substack{n\in [N]\\y_n = y}}
\sum_{\substack{m\in [N]\\y_m = y}}
\inprod{z_n}{z_m}
\enspace,
\end{align}
where, for the last step, we used the linearity of the inner product.
Using the positive definiteness of the norm and applying the Cauchy-Schwarz inequality yields the following lower bound:
\begin{align}
\sum_{y\in \mathcal{Y}}
\sum_{\substack{n\in [N]\\y_n = y}}
\sum_{\substack{m\in [N]\\y_m \neq y}}
\inprod{z_n}{z_m}
& =
\norm{ \sum_{n\in [N]} z_n}^2
-
\sum_{y\in \mathcal{Y}}
\sum_{\substack{n\in [N]\\y_n = y}}
\sum_{\substack{m\in [N]\\y_m = y}}
\inprod{z_n}{z_m}
\\
& \stackrel{\ref{con:supcon_rep_sum:1}}{\ge}
0 -
\sum_{y\in \mathcal{Y}}
\sum_{\substack{n\in [N]\\y_n = y}}
\sum_{\substack{m\in [N]\\y_m = y}}
\inprod{z_n}{z_m}
\\
& \stackrel{\ref{con:supcon_rep_sum:2}}{\ge}
-
\sum_{y\in \mathcal{Y}}
\sum_{\substack{n\in [N]\\y_n = y}} \norm{z_n}
\sum_{\substack{m\in [N]\\y_m = y}}
\norm{z_m}
\\
& =
- \sum_{y\in \mathcal{Y}}
\left(
\mult{Y}(y) {\rho_\mathcal{Z}}
\right)^2
=
- {\rho_\mathcal{Z}}^2
\sum_{y\in \mathcal{Y}}
\mult{Y}(y)^2
\enspace.
\end{align}
Equality is attained if and only if the following conditions hold:
\begin{enumerate}[labelindent=10pt,leftmargin=!,labelwidth=\widthof{\ref{last-item}}]
\item [\ref{con:supcon_rep_sum:1}]$\sum_n z_n = 0$
\item [\ref{con:supcon_rep_sum:2}]
We have equality in all applications of the Cauchy-Schwarz inequality, i.e., for every $y\in \mathcal{Y}$ and every $n,m\in [N]$ with $y_n=y_m=y$ there exists $\lambda(y,n,m)\ge0$ such that $z_n = \lambda(y,n,m)\, z_m$.
Since $\mathcal{Z}$ is a sphere, $\lambda(y,n,m) = 1$, and so the above is equivalent to: $y_n=y_m$ implies $z_n = z_m$.
\end{enumerate}
\end{proof}
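The rewriting step and the resulting bound of Lemma~\ref{lem:supcon_rep_sum} can be sanity-checked numerically; in the following sketch the label vector, radius and dimension are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 1.0
Y = np.array([0, 0, 1, 1, 2, 2])  # arbitrary class labels
N = len(Y)
Z = rng.normal(size=(N, 3))
Z = rho * Z / np.linalg.norm(Z, axis=1, keepdims=True)  # points on the sphere

# left-hand side: sum of cross-class inner products
cross = sum(Z[n] @ Z[m] for n in range(N) for m in range(N) if Y[n] != Y[m])

# rewriting step of the proof: ||sum z||^2 minus the within-class part
total = np.linalg.norm(Z.sum(axis=0))**2
within = sum(np.linalg.norm(Z[Y == y].sum(axis=0))**2 for y in set(Y.tolist()))
assert abs(cross - (total - within)) < 1e-9

# lower bound of the lemma: -rho^2 * sum_y mult_Y(y)^2
bound = -rho**2 * sum(int((Y == y).sum())**2 for y in set(Y.tolist()))
assert cross >= bound - 1e-9
```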
\begin{Slem}[Sum of repulsion terms]
\label{lem:supcon_rep}
Let $l\in\set{2,\dots,b-1}$ and let $\mathcal{Z} = \mathbb{S}_{\rho_\mathcal{Z}}^{h-1}$.
For every $Z \in \mathcal{Z}^N$ and every balanced
$Y\in \mathcal{Y}^N$, we have that
\begin{equation}
\sum_{y \in \mathcal{Y}}\sum_{B \in \mathcal{B}_{y,l}}
S_{\text{rep}}(Z;Y,B,y)
\ge
-
|\mathcal{B}_{y,l}| \frac{|\mathcal{Y}|}{|\mathcal{Y}|-1}
{\rho_\mathcal{Z}}^2
\enspace,
\end{equation}
where equality is attained if and only if
the conditions \ref{con:supcon_rep_sum:1} \& \ref{con:supcon_rep_sum:2} from Lemma~\ref{lem:supcon_rep_sum} are fulfilled.
\end{Slem}
\begin{proof}
Recall from Lemma~\ref{lem:supcon_batches_to_indices} that
\begin{equation}
\sum_{B \in \mathcal{B}_{y,l}}
S_{\text{rep}}(Z;Y,B,y)
=
\sum_{\substack{n\in [N]\\ y_n = y}}
\sum_{\substack{m\in [N] \\ y_m \neq y}}
K_{n,m}(y,l) \,
\inprod{z_n}{z_m}
\enspace,
\end{equation}
and from Lemma~\ref{lem:supcon_combinatorics} that
\begin{equation}
K_{n,m}(y,l) = \frac{|\mathcal{B}_{y,l}|}{\mult{Y}(y)(N - \mult{Y}(y))} \enspace.
\end{equation}
Therefore,
\begin{equation}
\sum_{y \in \mathcal{Y}}\sum_{B \in \mathcal{B}_{y,l}}
S_{\text{rep}}(Z;Y,B,y)
\stackrel{\text{Lem.~}\ref{lem:supcon_batches_to_indices}}{=}
\sum_{y\in \mathcal{Y}}
\sum_{\substack{n\in [N]\\ y_n = y}}
\sum_{\substack{m\in [N] \\ y_m \neq y}}
\frac{|\mathcal{B}_{y,l}|}{\mult{Y}(y)(N - \mult{Y}(y))} \,
\inprod{z_n}{z_m}
\enspace.
\end{equation}
Since $Y$ is balanced, i.e. $\mult{Y}(y) = \nicefrac{N}{|\mathcal{Y}|}$ for every $y\in \mathcal{Y}$, the term
\begin{equation}
\frac{|\mathcal{B}_{y,l}|}{\mult{Y}(y)(N - \mult{Y}(y))}
= \frac{|\mathcal{B}_{y,l}|}{N^2} \frac{|\mathcal{Y}|^2}{|\mathcal{Y}|-1}
\label{eq:supcon:K_balanced}
\end{equation}
does not depend on the label $y$, since (1) $|\mathcal{B}_{y,l}|$ is independent of $y$ by symmetry and (2) so is $\mult{Y}(y)$.
For brevity, we will still write $|\mathcal{B}_{y,l}|$ in the following, but keep in mind that it is constant w.r.t.~$y$.
Furthermore, by Lemma~\ref{lem:supcon_rep_sum}
\begin{equation}
\sum_{y\in \mathcal{Y}}
\sum_{\substack{n\in [N]\\ y_n = y}}
\sum_{\substack{m\in [N] \\ y_m \neq y}}
\inprod{z_n}{z_m}
\ge
- {\rho_\mathcal{Z}}^2
\sum_{y\in \mathcal{Y}}
\mult{Y}(y)^2
= - \frac{N^2}{|\mathcal{Y}|} {\rho_\mathcal{Z}}^2
\enspace,
\end{equation}
where equality is attained if and only if the conditions \ref{con:supcon_rep_sum:1} \& \ref{con:supcon_rep_sum:2} are fulfilled.
Therefore, we obtain the claimed bound
\begin{align}
\sum_{y \in \mathcal{Y}}\sum_{B \in \mathcal{B}_{y,l}}
S_{\text{rep}}(Z;Y,B,y)
&=
\frac{|\mathcal{B}_{y,l}|}{N^2} \frac{|\mathcal{Y}|^2}{|\mathcal{Y}|-1}
\sum_{y\in \mathcal{Y}}
\sum_{\substack{n\in [N]\\ y_n = y}}
\sum_{\substack{m\in [N] \\ y_m \neq y}}
\inprod{z_n}{z_m}
\\
&\ge
-
\frac{|\mathcal{B}_{y,l}|}{N^2} \frac{|\mathcal{Y}|^2}{|\mathcal{Y}|-1}
\frac{N^2}{|\mathcal{Y}|} {\rho_\mathcal{Z}}^2
\\
& =
-
|\mathcal{B}_{y,l}| \frac{|\mathcal{Y}|}{|\mathcal{Y}|-1}
{\rho_\mathcal{Z}}^2
\enspace.
\end{align}
\end{proof}
\clearpage
As we have lower-bounded the attraction and repulsion components in Lemmas~\ref{lem:supcon_att} and \ref{lem:supcon_rep}, respectively, the following lemma, bounding the exponent in Eq.~\eqref{eq:supcon_outer_bound} of Lemma~\ref{lem:supcon_outer_bound}, is an immediate consequence.
\begin{Slem}
\label{lem:supcon_inner_bound}
Let $l\in \set{2,\dots,b-1}$ and let $\mathcal{Z} = \mathbb{S}_{\rho_\mathcal{Z}}$.
For every $Z \in \mathcal{Z}^N$ and every balanced $Y\in \mathcal{Y}^N$, we have that
\begin{equation}
\frac{1}{M_l}
\sum_{y \in \mathcal{Y}} \sum_ {B \in \mathcal{B}_{y,l}}
S(Z;Y,B,y)
\ge
- \frac{|\mathcal{Y}|}{|\mathcal{Y}|-1}{\rho_{\mathcal{Z}}}^2
\enspace,
\end{equation}
where equality is attained if and only if the following conditions hold:
\begin{enumerate}[label={(A\arabic*)},start=3, labelindent=10pt,leftmargin=!,labelwidth=\widthof{\ref{last-item}}]
\item \label{con:supcon_inner_bound:collapse}For every $n,m\in [N]$, $y_n = y_m$ implies $z_n = z_m$\enspace.
\item \label{con:supcon_inner_bound:mean}
$\sum_{n\in [N]} z_n = 0$\enspace.
\end{enumerate}
\end{Slem}
\begin{proof}
Since $Y$ is balanced, $|\mathcal{B}_{y,l}|$ does not depend on $y$, and so
\begin{equation}
M_l = {\sum_{y\in \mathcal{Y}} |\mathcal{B}_{y,l}|} = |\mathcal{Y}| |\mathcal{B}_{y,l}|
\enspace.
\end{equation}
Leveraging the bounds on the sums of the attraction terms $S_{\text{att}}(Z;Y,B,y)$ and of the repulsion terms $S_{\text{rep}}(Z;Y,B,y)$ from Lemma~\ref{lem:supcon_att} and Lemma~\ref{lem:supcon_rep}, respectively, we get
\begin{align}
\sum_{y \in \mathcal{Y}} \sum_ {B \in \mathcal{B}_{y,l}}
S(Z;Y,B,y)
& =
\left(
\sum_{y \in \mathcal{Y}} \sum_ {B \in \mathcal{B}_{y,l}}
S_{\text{att}}(Z;Y,B,y)
\right)
+
\left(
\sum_{y \in \mathcal{Y}} \sum_ {B \in \mathcal{B}_{y,l}}
S_{\text{rep}}(Z;Y,B,y)
\right)
\\
& \ge
- |\mathcal{Y}| |\mathcal{B}_{y,l}|
{\rho_{\mathcal{Z}}}^2
-
|\mathcal{B}_{y,l}| \frac{|\mathcal{Y}|}{|\mathcal{Y}|-1}
{\rho_\mathcal{Z}}^2
\\
& =
- |\mathcal{Y}| |\mathcal{B}_{y,l}|
{\rho_{\mathcal{Z}}}^2
\left(
1 + \frac{1}{|\mathcal{Y}|-1}
\right)
\\
& =
- |\mathcal{Y}| |\mathcal{B}_{y,l}|
{\rho_{\mathcal{Z}}}^2
\frac{|\mathcal{Y}|}{|\mathcal{Y}|-1}
\enspace.
\end{align}
Dividing by $M_l = |\mathcal{Y}|\,|\mathcal{B}_{y,l}|$ yields the bound stated in the lemma.
Herein, equality is attained if and only if equality is attained in Lemma~\ref{lem:supcon_att} and Lemma~\ref{lem:supcon_rep}.
Since conditions \ref{con:supcon_att} and \ref{con:supcon_rep_sum:2} are the same as condition \ref{con:supcon_inner_bound:collapse} and, additionally, since condition \ref{con:supcon_rep_sum:1} is the same as condition \ref{con:supcon_inner_bound:mean}, the lemma follows.
\end{proof}
\begin{Slem}
\label{lem:supcon_final}
Combining Lemma~\ref{lem:supcon_outer_bound} and Lemma~\ref{lem:supcon_inner_bound} implies that
the supervised contrastive loss $\mathcal{L}_{\operatorname{SC}}(Z;Y)$ is bounded from below by
\begin{equation}
\mathcal{L}_{\operatorname{SC}}(Z;Y)
\ge
\sum_{l =2}^{b}
l M_l
\log
\left(
l - 1 + (b-l)
\exp \left(
- \frac{|\mathcal{Y}|}{|\mathcal{Y}|-1} {\rho_{\mathcal{Z}}}^2
\right)
\right)
\enspace,
\end{equation}
where equality is attained if and only if there are $\zeta_1, \dots, \zeta_{|\mathcal{Y}|} \in \mathbb{R}^h$ such that the following conditions hold:
\begin{enumerate}[label={(C\arabic*)},labelindent=10pt,leftmargin=!,labelwidth=\widthof{\ref{last-item}}]
\item\label{con:supcon_final:1}
$\forall n \in [N]: z_n = \zeta_{y_n}$
\item\label{con:supcon_final:2}
$\{\zeta_y\}_{y\in \mathcal{Y}}$ form a $\rho_{\mathcal{Z}}$-sphere-inscribed regular simplex
\end{enumerate}
\end{Slem}
\begin{proof}
We have that
\begin{align}
\mathcal{L}_{\operatorname{SC}}(Z;Y)
& \stackrel{\text{Lem.}~\ref{lem:supcon_outer_bound}\phantom{0}}{\ge}
\sum_{l = 2}^b l M_l
\log
\left(
l - 1 + (b-l)
\exp \left(
\frac {1} {M_l}
\sum_{y \in \mathcal{Y}} \sum_ {\substack{ B \in \mathcal{B}\\\mult{\Upsilon(B)}(y) = l}}
S(Z;Y,B,y)
\right)
\right)
\label{eq:lem:supcon_final:1}
\\
&\stackrel{\text{Lem.}~\ref{lem:supcon_inner_bound}}{\ge}
\sum_{l = 2}^b l M_l
\log
\left(
l - 1 + (b-l)
\exp \left(
-\frac{|\mathcal{Y}|}{|\mathcal{Y}|-1}{\rho_{\mathcal{Z}}}^2
\right)
\right)
\enspace.
\end{align}
Equality holds if and only if the equality conditions of Lemma~\ref{lem:supcon_outer_bound} and Lemma~\ref{lem:supcon_inner_bound} are fulfilled, i.e. if and only if:
\begin{enumerate}[labelindent=10pt,leftmargin=!,labelwidth=\widthof{\ref{last-item}}]
\item [\ref{con:supcon_outer_bound:intra}]
There exists a constant $\alpha$, such that $\forall n,m\in [N]$, $y_n = y_m$ implies $\inprod{z_n}{z_m} = \alpha$\enspace.
\item [\ref{con:supcon_outer_bound:inter}]
There exists a constant $\beta$, such that $\forall n,m\in [N]$, $y_n \neq y_m$ implies $\inprod{z_n}{z_m} = \beta$\enspace.
\item[\ref{con:supcon_inner_bound:collapse}] For every $n,m\in [N]$, $y_n = y_m$ implies $z_n = z_m$\enspace.
\item [\ref{con:supcon_inner_bound:mean}]
$\sum_{n\in [N]} z_n = 0$
\end{enumerate}
\emph{
Note that Lemma~\ref{lem:supcon_inner_bound} does not hold for $l=b$, so the exponent in Eq.~\eqref{eq:lem:supcon_final:1} might differ in this case. However, this is irrelevant as, in this case, the factor $(b-l)$ in front of the exponential function vanishes.}
To finish the proof, we need to show, under the assumption $\mathcal{Z} = \mathbb{S}_{\rho_\mathcal{Z}}$, that these conditions are equivalent to the existence of $\zeta_1,\dots,\zeta_{|\mathcal{Y}|}$ such that
\begin{enumerate}[ labelindent=10pt,leftmargin=!,labelwidth=\widthof{\ref{last-item}}]
\item[\ref{con:supcon_final:1}]
$\forall n \in [N]: z_n = \zeta_{y_n}$ and
\item[\ref{con:supcon_final:2}]
$\{\zeta_y\}_{y\in \mathcal{Y}}$ form a $\rho_{\mathcal{Z}}$-sphere-inscribed regular simplex, i.e.,
\begin{enumerate}[label=(S\arabic*)]
\item\label{def:simplex:s1:sc} $\sum_{y \in \mathcal{Y}} \zeta_y = 0$,
\item\label{def:simplex:s2:sc} $\| \zeta_y \| = \rho_\mathcal{Z}$ for $y \in \mathcal{Y}$,
\item\label{def:simplex:s3:sc} $\exists d \in \mathbb{R}: d = \inprod{\zeta_y}{\zeta_{y'}}$ for $y,y'\in \mathcal{Y}$ with $y\neq y'$.
\end{enumerate}
\end{enumerate}
Obviously, \ref{con:supcon_inner_bound:collapse} $\Longleftrightarrow$ \ref{con:supcon_final:1},
\ref{def:simplex:s2:sc} holds by assumption,
\ref{con:supcon_inner_bound:mean} $\Longleftrightarrow$ \ref{def:simplex:s1:sc}
and
\ref{con:supcon_outer_bound:inter} $\implies$ \ref{def:simplex:s3:sc}.
Thus it remains only to show that \ref{con:supcon_final:1} \& \ref{con:supcon_final:2} $\implies$ \ref{con:supcon_outer_bound:intra}.
Let $n,m\in [N]$ such that $y=y_n = y_m$. By condition \ref{con:supcon_final:1}, $z_n = z_m = \zeta_y$, so by condition \ref{def:simplex:s2:sc}, $\inprod{z_n}{z_m} = \norm{\zeta_y}^2 = {\rho_\mathcal{Z}}^2$, which does not depend on $n$ and $m$.
\end{proof}
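For concreteness, a $\rho_{\mathcal{Z}}$-sphere-inscribed regular simplex satisfying \ref{def:simplex:s1:sc}-\ref{def:simplex:s3:sc} can be constructed explicitly by centering and rescaling the standard basis. The following Python sketch (with hypothetical values for $\rho_\mathcal{Z}$ and $|\mathcal{Y}|$) checks all three conditions and shows that the common inner product is $d = -{\rho_\mathcal{Z}}^2/(|\mathcal{Y}|-1)$:

```python
import numpy as np

rho, k = 2.0, 4  # radius rho_Z and number of classes |Y| (hypothetical values)

# center the standard basis of R^k to get k equidistant points, then rescale
E = np.eye(k)
V = E - E.mean(axis=0)                       # rows sum to zero
Z = rho * V / np.linalg.norm(V, axis=1, keepdims=True)

assert np.allclose(Z.sum(axis=0), 0)                 # (S1): zero mean
assert np.allclose(np.linalg.norm(Z, axis=1), rho)   # (S2): on the sphere
G = Z @ Z.T
off = G[~np.eye(k, dtype=bool)]
assert np.allclose(off, -rho**2 / (k - 1))           # (S3): d = -rho^2/(k-1)
```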
\section{Introduction}
Data symmetry has played a significant role in deep neural networks.
In particular, convolutional neural networks, which play an important part in the recent achievements of deep neural networks, have translation equivariance, preserving the symmetry of the translation group.
From the same point of view, many studies have aimed to incorporate various group symmetries into neural networks, especially into the convolution operation~\citep{cohen2019general,deepsphere_ml,finzi2020generalizing}.
As example applications, some works have introduced Hamiltonian dynamics to solve dynamics modeling problems~\citep{greydanus2019hamiltonian,toth2019hamiltonian,zhong2019symplectic}.
Similarly, \citet{quessard2020learning} estimated the action of a group by assuming symmetry in the latent space inferred by the neural network.
Incorporating the data structure (symmetries) into models as an inductive bias can reduce model complexity and improve generalization.
In terms of inductive bias, meta-learning, or learning to learn, provides a way to select an inductive bias from data.
Meta-learning uses past experiences to adapt quickly to a new task $\mathcal{T} \sim p(\mathcal{T})$ sampled from some task distribution $p(\mathcal{T})$.
Especially in supervised meta-learning, a task is described as predicting a set of unlabeled data (target points) given a set of labeled data (context points).
Various works have proposed the use of supervised meta-learning from different perspectives ~\citep{andrychowicz2016learning,ravi2016optimization,finn2017model,snell2017prototypical,santoro2016meta,rusu2018meta}.
In this study, we are interested in neural processes (NPs)~\citep{garnelo2018conditional,garnelo2018neural}, which are meta-learning models that have encoder-decoder architecture~\citep{xu2019metafun}.
The encoder is a permutation-invariant function on the context points that maps the contexts into a latent representation.
The decoder is a function that produces the conditional predictive distribution of targets given the latent representation.
The objective of NPs is to learn the encoder and the decoder, so that the predictive model generalizes well to new tasks by observing some points of the tasks.
To achieve the objective, an NP is required to learn the shared information between the training tasks $\mathcal{T}, \mathcal{T}^\prime \sim p(\mathcal{T})$: the data knowledge~\cite{lemke2015metalearning}.
Each task $\mathcal{T}$ is represented by one dataset, and multiple datasets are provided for training NPs to tackle a meta-task.
For example, consider a meta-task of completing the missing pixels in a given image.
Often, the images within each dataset are taken under the same conditions.
While the datasets contain identical subjects (e.g., cars or apples), the size and angle of the subjects in the images may differ; that is, the datasets exhibit group symmetries such as scaling and rotation.
Therefore, pre-constraining NPs to be group equivariant is expected to improve their performance on such datasets.
In this paper, we investigate the group equivariance of NPs.
Specifically, we try to answer the following two questions: (1) can NPs represent equivariant functions? (2) can we explicitly induce group equivariance into NPs?
In order to answer the questions, we introduce a new family of NPs, EquivCNP, and show that EquivCNP is a permutation-invariant and group-equivariant function theoretically and empirically.
Most relevant to EquivCNP, ConvCNP~\citep{gordon2019convolutional} shows, both theoretically and experimentally, that using a general convolution operation leads to translation equivariance; however, it does not consider incorporating other groups.
First, we introduce the decomposition theorem for permutation-invariant and group-equivariant maps.
The theorem suggests that the encoder maps the context points into a latent variable, which is a functional representation, in order to preserve the data symmetry.
Thereafter, we construct EquivCNP by following the theorem.
In this study, we adopt LieConv~\citep{finzi2020generalizing} to construct EquivCNP for practical implementation.
We tackle a 1D synthetic regression task~\citep{garnelo2018conditional,garnelo2018neural,kim2019attentive,gordon2019convolutional} to show that EquivCNP with translation equivariance is comparable to conventional NPs.
Furthermore, we design a 2D image completion task to investigate the potential of EquivCNP with several group equivariances.
As a result, we demonstrate that EquivCNP enables zero-shot generalization by incorporating not translation, but scaling equivariance.
\section{Related Work}
\subsection{Neural Networks with Group Equivariance}
Our works build upon the recent advances in group equivariant convolutional operation incorporated into deep neural networks.
The first approach is group convolution introduced in \citep{cohen2016group}, where standard convolutional kernels are used and their transformation or the output transformation is performed with respect to the group.
This group convolution induces exact equivariance, but only to the action of discrete groups.
In contrast, for exact equivariance to continuous groups, some works employ harmonic analysis so as to find the basis of equivariant functions, and then parameterize convolutional kernels in the basis \citep{weiler2019general}.
Although this approach can be applied to any type of general data~\citep{anderson2019cormorant,weiler2019general}, it is limited to local application to compact, unimodular groups.
To address these issues, LieConv~\citep{finzi2020generalizing} and other works~\citep{huang2017deep,bekkers2019b} use Lie groups.
Our EquivCNP chooses LieConv to manage group equivariance for simplicity of the implementation.
There are several works that study deep neural networks using data symmetry.
In some works, in order to solve machine learning problems such as sequence prediction or reinforcement learning, neural networks attempt to learn a data symmetry of physical systems from noisy observations directly~\citep{greydanus2019hamiltonian,toth2019hamiltonian,zhong2019symplectic,sanchez2019hamiltonian}.
While both these studies and EquivCNP can handle data symmetries, EquivCNP is not limited to specific domains such as physics.
Furthermore, \citet{quessard2020learning} endowed the latent space into which neural networks map data with group equivariance, and estimated the parameters of the data symmetries.
In terms of using group equivariance in the latent space, EquivCNP is similar to this study, but differs in being able to use various group equivariances.
\subsection{Family of neural processes}
NPs~\citep{garnelo2018conditional,garnelo2018neural} are deep generative models for regression functions that map an input $x_i \in \mathbb{R}^{d_x}$ into an output $y_i \in \mathbb{R}^{d_y}$.
In particular, given an arbitrary number of observed data points $(x_C, y_C) \coloneqq \{(x_i, y_i)\}_{i=1}^{C}$, NPs model the conditional distribution of the target value $y_T$ at some new, unobserved target data point $x_T$, where $(x_T, y_T)\coloneqq \{ (x_j, y_j) \}_{j=1}^{T}$.
Fundamentally, there are two NP variants: deterministic and probabilistic.
Deterministic NPs~\citep{garnelo2018conditional}, known as conditional NPs (CNPs), model the conditional distribution as:
\begin{align*}
p(y_T | x_T, x_C, y_C) \coloneqq p(y_T | x_T, r_C),
\end{align*}
where $r$ represents a function that maps data sets $(x_C, y_C)$ into a finite-dimensional vector space in a permutation-invariant way and $r_C \coloneqq r(x_C, y_C) \in \mathbb{R}^d$ is the feature vector.
The function $r$ can be implemented by DeepSets~\citep{zaheer2017deep}.
The likelihood $p(y_T|x_T, r_C)$ is modeled by a Gaussian distribution factorized across the targets $(x_j, y_j)$, whose mean and variance are obtained by passing $r_C$ and $x_j$ through an MLP.
The CNP is trained by maximizing the likelihood.
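The CNP structure described above can be sketched in a few lines of numpy. In the following sketch the weights are random (untrained) and the layer sizes are hypothetical, so it only illustrates the encoder-decoder factorization and the permutation invariance of $r_C$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 1, 8  # input dimension and feature width (hypothetical)

# random (untrained) weights for a one-hidden-layer encoder and decoder
W1, W2 = rng.normal(size=(d + 1, h)), rng.normal(size=(h, h))
W3, W4 = rng.normal(size=(h + d, h)), rng.normal(size=(h, 2))

def encoder(xc, yc):
    """DeepSets encoder: mean-pool pointwise features -> r_C (permutation invariant)."""
    phi = np.tanh(np.concatenate([xc, yc], axis=1) @ W1) @ W2
    return phi.mean(axis=0)

def decoder(r, xt):
    """Decoder: (r_C, x_t) -> Gaussian mean and variance for each target point."""
    inp = np.concatenate([np.tile(r, (len(xt), 1)), xt], axis=1)
    out = np.tanh(inp @ W3) @ W4
    return out[:, 0], np.exp(out[:, 1])  # mean, variance > 0

xc, yc = rng.normal(size=(5, d)), rng.normal(size=(5, 1))  # context points
xt = rng.normal(size=(3, d))                               # target inputs
r = encoder(xc, yc)
perm = rng.permutation(5)
assert np.allclose(r, encoder(xc[perm], yc[perm]))  # permutation invariance
mu, var = decoder(r, xt)
assert mu.shape == (3,) and np.all(var > 0)
```

In a real CNP the weights are of course trained by maximizing the factorized Gaussian likelihood over tasks.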
Probabilistic NPs include a latent variable $z$.
The NP infers $q(z|r_C)$ given an input $r_C$ using the reparametrization trick~\citep{kingma2013auto} and models such a conditional distribution as:
\begin{align*}
p(y_T|x_T, x_C, y_C) \coloneqq \int p(y_T| x_T, r_C, z) q(z|r_C)dz
\end{align*}
and it is trained by maximizing an ELBO: $\mathcal{L}(\phi, \theta) = \mathbb{E}_{z \sim q_\phi(z|x_T, y_T)}[ \log p_\theta(y_T|x_T)] - KL[q_\phi(z|x_T, y_T) \| p_\theta(z| x_C, y_C) ]$.
NPs have various useful properties: i) Scalability: the computational cost of NPs scales as $\mathcal{O}(n+m)$ with respect to $n$ contexts and $m$ targets, ii) Flexibility: NPs can define a conditional distribution of an arbitrary number of target points, conditioning on an arbitrary number of observations, iii) Permutation invariance: the encoder of NPs uses DeepSets~\citep{zaheer2017deep} to make the target prediction permutation invariant.
Thanks to these properties, \citet{galashov2019meta} replaced Gaussian processes with NPs in Bayesian optimization, contextual multi-armed bandit, and Sim2Real tasks.
While there are many NP variants~\citep{kim2019attentive,louizos2019functional,xu2019metafun} that improve the performance of NPs, they do not yet take group equivariance into account.
Most similar to EquivCNP, ConvCNP~\citep{gordon2019convolutional} incorporates only translation equivariance.
In contrast, EquivCNP can incorporate not only translation but also other groups such as rotation and scaling.
\section{Decomposition Theorem}
In this section, we consider group convolution.
We first prepare some definitions and terminology.
Let $\mathcal{X}$ and $\mathcal{Y}\subset \mathbb{R}$
be the input space and output space, respectively.
We define $\mathcal{Z}_{M}=(\mathcal{X} \times \mathcal{Y})^{M}$ as the collection of $M$ input-output pairs, $\mathcal{Z}_{\leq M}=\bigcup_{n=1}^{M} \mathcal{Z}_{n}$ as the collection of at most $M$ pairs, and $\mathcal{Z}=\bigcup_{m=1}^{\infty} \mathcal{Z}_{m}$ as the collection of finitely many pairs.
Let $[n]=\{1, \ldots, n\}$ for $n\in \mathbb{N}$, and let $\mathbb{S}_n$ be the permutation group on $[n]$.
The action of $\mathbb{S}_n$ on $\mathcal{Z}_{n}$ is defined as
\begin{align*}
\pi Z_n
:= ((\boldsymbol{x}_{\pi^{-1}(1)}, \boldsymbol{y}_{\pi^{-1}(1)}), \ldots,(\boldsymbol{x}_{\pi^{-1}(n)}, \boldsymbol{y}_{\pi^{-1}(n)})),
\end{align*}
where $\pi\in\mathbb{S}_n$ and $Z_n\in \mathcal{Z}_{n}$.
We define the multiplicity of $Z_n=((\boldsymbol{x}_{1}, \boldsymbol{y}_{1}), \ldots,(\boldsymbol{x}_{n}, \boldsymbol{y}_{n}))\in \mathcal{Z}_{n}$ by
\begin{align*}
\operatorname{mult}(Z_n)
:=\sup \left\{\left|\left\{i \in[n]: \boldsymbol{x}_{i}=\hat{\boldsymbol{x}}\right\}\right|: \hat{\boldsymbol{x}}=\boldsymbol{x}_{1}, \ldots, \boldsymbol{x}_{n}\right\}
\end{align*}
and the multiplicity of $\mathcal{Z}^{\prime} \subseteq \mathcal{Z}$ by $\operatorname{mult}(\mathcal{Z}^{\prime})
:=\sup_{Z_n \in \mathcal{Z}^{\prime}} \operatorname{mult}(Z_n).$
Then, a collection $\mathcal{Z}^{\prime} \subseteq \mathcal{Z}$ is said to have multiplicity $K$ if $\operatorname{mult}(\mathcal{Z}^{\prime})=K$.
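Concretely, the multiplicity of a finite data set counts the largest number of observations sharing the same input location. A minimal sketch, assuming hashable inputs (the function name \texttt{mult} is ours):

```python
from collections import Counter

def mult(Z):
    """Multiplicity of a data set Z = [(x_1, y_1), ..., (x_n, y_n)]:
    the largest number of pairs sharing the same input location x."""
    counts = Counter(x for x, _ in Z)
    return max(counts.values()) if counts else 0

# A set with two observations at x = 0.5 has multiplicity 2.
Z = [(0.5, 1.0), (0.5, -1.0), (1.2, 0.3)]
print(mult(Z))  # 2
```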
Mathematically, symmetry is described in terms of group action.
The following definition of group equivariant maps formalizes what it means to preserve the symmetry in data.
\begin{df}[Group Equivariance and Invariance]
Suppose that a group $G$ acts on sets $\mathcal{S}$ and $\mathcal{S}'$.
Then, a map $\Phi:\mathcal{S}\to\mathcal{S}'$ is called $G$-equivariant when $\Phi(g\cdot s)=g\cdot \Phi(s)$ holds for arbitrary $g\in G$ and $s\in\mathcal{S}$.
In particular, when $G$ acts on $\mathcal{S}'$ trivially (i.e., $g\cdot s'=s'$ for $g\in G$ and $s'\in\mathcal{S}'$), the $G$-equivariant map is said to be $G$-invariant: $\Phi(g\cdot s)= \Phi(s)$.
\end{df}
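These notions can be checked numerically. As a toy illustration of our own (not from the paper), an elementwise map is $\mathbb{S}_n$-equivariant, while the sum is $\mathbb{S}_n$-invariant:

```python
import numpy as np

s = np.array([1.0, -2.0, 3.0, 0.5])
pi = np.array([2, 0, 3, 1])  # a group element of S_4, acting by index permutation

# Elementwise maps are S_n-equivariant: Phi(pi . s) = pi . Phi(s)
Phi = lambda v: v ** 2
assert np.allclose(Phi(s[pi]), Phi(s)[pi])

# The sum is S_n-invariant (equivariant w.r.t. the trivial action on R)
assert np.isclose(s[pi].sum(), s.sum())
```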
Then, we can derive the following theorem, which decomposes a permutation-invariant and group-equivariant function into two tractable functions.
\begin{thm}[Decomposition Theorem]\label{G-representation}
Let $G$ be a group.
Let $\mathcal{Z}_{\leq M}^{\prime} \subseteq \mathcal{Z}_{\leq M}$ be topologically closed, permutation-invariant, and $G$-invariant with multiplicity $K$.
For a function $\Phi: \mathcal{Z}_{\leq M}^{\prime} \rightarrow C_{b}(\mathcal{X}, \mathcal{Y})$, the following conditions are equivalent:
\begin{itemize}
\item[(I)] $\Phi$ is continuous, permutation-invariant and $G$-equivariant.
\item[(II)] There exist a function space $\mathcal{H}$, a continuous $G$-equivariant function $\rho: \mathcal{H} \rightarrow C_{b}(\mathcal{X}, \mathcal{Y})$, and a continuous $G$-invariant interpolating kernel $\psi: \mathcal{X}^2 \rightarrow \mathbb{R}$
such that, for every $Z=((\boldsymbol{x}_{i}, y_{i}))_{i=1}^{m} \in \mathcal{Z}_{\leq M}^{\prime}$,
\begin{align*}
\Phi(Z)
=\rho \left(\sum_{i=1}^{m}
\phi_{K+1}\left(y_{i}\right)
\psi\left(\cdot, \boldsymbol{x}_{i}\right)
\right)
\end{align*}
where $\phi_{K+1}: \mathcal{Y} \rightarrow \mathbb{R}^{K+1}$ is defined by $\phi_{K+1}(y):=[1,y,y^2,\ldots,y^K]^\top$.
\end{itemize}
\end{thm}
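A minimal numerical sketch of the encoding that the theorem prescribes, with an RBF kernel as one admissible choice of $\psi$ (the length scale and the kernel choice are our illustrative assumptions):

```python
import numpy as np

def phi(y, K):
    """phi_{K+1}(y) = [1, y, y^2, ..., y^K]."""
    return np.array([y ** k for k in range(K + 1)])

def rbf(x, xi, length=0.5):
    """A stationary, positive-definite choice of psi (an assumption here)."""
    return np.exp(-((x - xi) ** 2) / (2 * length ** 2))

def E(Z, K=1):
    """Functional representation x -> sum_i phi_{K+1}(y_i) * psi(x, x_i),
    the inner part of Phi(Z) = rho(E(Z)) in the theorem."""
    return lambda x: sum(phi(yi, K) * rbf(x, xi) for xi, yi in Z)

Z = [(0.0, 1.0), (1.0, -2.0)]
h = E(Z)            # h(x) in R^{K+1}; channel 0 acts as a "density" channel
print(h(0.0))       # a 2-vector dominated by the context point at x = 0
```

Since the sum over context points is symmetric, $E$ is permutation-invariant, matching condition (I) of the theorem.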
Thanks to Theorem~\ref{G-representation}, we can construct permutation-invariant and group-equivariant NPs whose encoder and decoder forms are determined.
In this paper, we call $\Phi$ EquivDeepSet.
\section{Group Equivariant Conditional Neural Processes}
In this section, we present EquivCNP, a permutation-invariant and group-equivariant map.
EquivCNP models the same conditional distribution as CNPs:
\begin{align*}
p(\boldsymbol{Y}_T | \boldsymbol{X}_T, \mathcal{D}_C)&=\prod_{n=1}^{N} p\left(\boldsymbol{y}_{n} | \Phi_{\boldsymbol{\theta}}(\mathcal{D}_C)\left(\boldsymbol{x}_{n}\right)\right)\\
&=\prod_{n=1}^{N} \mathcal{N}\left(\boldsymbol{y}_{n} ; \boldsymbol{\mu}_{n}, \mathbf{\Sigma}_{n}\right) \text { with }\left(\boldsymbol{\mu}_{n}, \mathbf{\Sigma}_{n}\right)=\Phi_{\boldsymbol{\theta}}(\mathcal{D}_C)\left(\boldsymbol{x}_{n}\right)
\end{align*}
where $\mathcal{N}$ denotes the density function of a normal distribution, $\mathcal{D}_C = (\boldsymbol{X}_C, \boldsymbol{Y}_C) = \{ (x_i, y_i)\}_{i=1}^{C}$ is the observed context data, and $\Phi_{\boldsymbol{\theta}}$ is an EquivDeepSet.
The important components of EquivCNP to be determined are $\rho$, $\phi$, and $\psi$.
The algorithm is represented in Algorithm~\ref{EquivCNP}.
To describe this in more detail, Section~\ref{group_conv} first introduces the definition of group convolution, and then Section~\ref{lieconv} explains LieConv~\citep{finzi2020generalizing}, which EquivCNP uses to implement group convolution.
Finally, we describe the architecture of the proposed EquivCNP in Section~\ref{arch}.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{imgs/ver14.pdf}
\caption{Overview of EquivCNP.}
\label{fig:my_label}
\end{figure}
\begin{algorithm}[t]
\caption{Prediction of Group Equivariant Conditional Neural Process}
\label{EquivCNP}
\begin{algorithmic}
\REQUIRE $\rho=$\text{LieConv}, RBF kernel $\psi$, context $\{(\boldsymbol{x}_i,y_i)\}_{i=1}^N$, target $\{\boldsymbol{x}^*_j\}_{j=1}^M$
\STATE lower, upper $\leftarrow$ range($(\boldsymbol{x}_i)_{i=1}^N \cup (\boldsymbol{x}_j)_{j=1}^M$)
\STATE $(\boldsymbol{t}_k)_{k=1}^{T} \leftarrow \text{uniform\_grid}(\text{lower}, \text{upper}; \gamma)$
\STATE // Encoding the context information into representation $\boldsymbol{h}$ (i.e., Encoder)
\STATE $\boldsymbol{h} \leftarrow \sum_{i=1}^N \phi_{K+1}(y_i) \psi ([\boldsymbol{x}_j^*, \boldsymbol{t}_k],\boldsymbol{x}_i)$
\STATE $(\boldsymbol{\mu}_j,\boldsymbol{\Sigma}_j)^{\top} = \text{LieConvNet}(\boldsymbol{h})(\boldsymbol{x}^*_j)$ // Decoder
\ENSURE $\{(\boldsymbol{\mu}_j,\boldsymbol{\Sigma}_j)\}_{j=1}^M$
\end{algorithmic}
\end{algorithm}
\subsection{Group Convolution}\label{group_conv}
When $\mathcal{X}$ is a homogeneous space of a group $G$,
the lift of $x\in \mathcal{X}$ is the set of group elements that transfer a fixed origin $o$ to $x$: $\mathrm{Lift}(x)=\{u \in G\colon uo = x\}$.
That is, each pair of coordinates and features is lifted into $K$ elements\footnote{$K$ is a hyperparameter and we randomly pick $K$ elements $\{ u_{ik} \}_{k=1}^K$ in the orbit corresponding to $x_i$. }: $\{(x_i, f_i)\}_{i=1}^{N} \rightarrow \{(u_{ik}, f_i)\}_{i=1,k=1}^{N,K}$.
When the group action is transitive, the space on which it acts is a homogeneous space.
More generally, however, the action is not transitive, and the total space contains an infinite number of orbits.
Consider a quotient space $Q = \mathcal{X}/G$, which consists of orbits of $G$ in $\mathcal{X}$.
Then each element $q\in Q$ is a homogeneous space of $G$.
Because many equivariant maps use this orbit information, the total space should be $G\times \mathcal{X}/G$, not $G$. Hence, $x\in\mathcal{X}$ is lifted to the pair $(u, q)$, where $u\in G$ and $q\in Q$.
Group convolution generalizes the translation-based convolution used on images and similar data to other groups.
\begin{df}[Group Convolution~\citep{kondor2018generalization,cohen2019general}]
Let $f \colon G\times Q \rightarrow \mathbb{R}$ and $g \colon G\times Q \times Q \rightarrow \mathbb{R}$ be functions, and let $\mu(\cdot)$ be a Haar measure on $G$.
For any $u \in G$, the convolution of $f$ by $g$ is defined as
\begin{align*}
h(u, q) = \int_{G\times Q} g(v^{-1}u, q, q^{\prime})f(v, q^\prime)d\mu(v)dq^\prime.
\end{align*}
\end{df}
By the definition,
we can verify that the group convolution is $G$-equivariant.
Moreover, \citet{cohen2019general} recently showed that a $G$-equivariant linear map is represented by group convolution when the action of a group is transitive.
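For intuition, the definition can be instantiated on the finite cyclic group $\mathbb{Z}_n$, where there is a single orbit and the $q, q^\prime$ arguments drop out. This toy check of $G$-equivariance is our own illustration:

```python
import numpy as np

n = 6  # cyclic group Z_6; element u acts by index shift, and v^{-1}u = (u - v) mod n

def group_conv(g, f):
    """h(u) = sum_v g(v^{-1} u) f(v): the discrete analogue of the definition,
    with the Haar measure reduced to the counting measure."""
    return np.array([sum(g[(u - v) % n] * f[v] for v in range(n)) for u in range(n)])

g = np.array([1.0, 2.0, 0.0, -1.0, 0.5, 3.0])
f = np.array([0.0, 1.0, -1.0, 2.0, 0.0, 1.0])

# Equivariance: shifting f by w shifts the output h by w.
w = 2
f_shifted = np.roll(f, w)  # (w . f)(v) = f(w^{-1} v)
assert np.allclose(group_conv(g, f_shifted), np.roll(group_conv(g, f), w))
```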
\subsection{Local Group Convolution}\label{lieconv}
In this study, we use LieConv~\citep{finzi2020generalizing} as the group convolution.
LieConv is a group convolution that can handle Lie groups.
LieConv acts on pairs $\{(x_i, f_i)\}_{i=1}^{N}$ of coordinates $x_i \in \mathcal{X}$ and values $f_i\in V$ in a vector space $V$.
First, input data $x_i$ is transformed (lifted) into group elements $u_i$ and orbits $q_i$.
Next, we define the convolution range based on the invariant (pseudo) distance in the group, and convolve it using a kernel parameterized by a neural network.
What is important for inductive bias and computational efficiency in convolution is that the range of convolutions is local; that is, if the distance between $u_i$ and $u_j$ is larger than $r$, $g_{\theta} (u_i, u_j) = 0$.
First, we define distance in the Lie group to deal with locality in the matrix group\footnote{We assume that we have a finite-dimensional representation.}:
\begin{align*}
d(u, v) \coloneqq \|\log (u^{-1}v) \|_F,
\end{align*}
where
$\log$ denotes the matrix logarithm, and $F$ denotes the Frobenius norm.
Because $d(wu, wv) = \| \log (u^{-1}w^{-1}wv) \|_F = d(u, v)$ holds,
this function is left-invariant and is a pseudo-distance.\footnote{This is because the triangle inequality is not satisfied.}
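For $SO(2)$ this pseudo-distance has a closed form, which makes the left invariance easy to verify numerically. The following sketch is our own illustration:

```python
import numpy as np

def rot(theta):
    """2x2 rotation matrix, an element of SO(2)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def dist(u, v):
    """d(u, v) = ||log(u^{-1} v)||_F.  For SO(2) the matrix logarithm has the
    closed form log R(theta) = [[0, -theta], [theta, 0]] with theta in (-pi, pi]."""
    r = u.T @ v                                  # u^{-1} = u.T for rotations
    theta = np.arctan2(r[1, 0], r[0, 0])
    log_r = np.array([[0.0, -theta], [theta, 0.0]])
    return np.linalg.norm(log_r, 'fro')          # = sqrt(2) * |theta|

u, v, w = rot(0.3), rot(1.1), rot(-0.7)
# Left invariance: d(wu, wv) = d(u, v)
assert np.isclose(dist(w @ u, w @ v), dist(u, v))
print(dist(u, v))  # sqrt(2) * 0.8
```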
To further account for orbit $q$, we extend the distance to $d((u_i, q_i), (v_j, q_j))^2 = d(u_i, v_j)^2 + \alpha d_{\mathcal{O}}(q_i,q_j)^2$, where $d_{\mathcal{O}}(q_i,q_j):=\inf_{x_i\in q_i, x_j \in q_j} d_{\mathcal{X}}(x_i, x_j)$ and $d_{\mathcal{X}}$ is the distance on $\mathcal{X}$.
It is not necessarily invariant to the transformation in $q$.
Based on this distance, the neighborhood is $\mathrm{nbhd}(u, q) = \{ (v, q^{\prime}) \mid d((u, q), (v, q^{\prime})) < r\}$.
Because the appropriate value of the radius $r$ is difficult to determine for each group, it should be adjusted via the ratio of the convolution range to the total input.
Therefore, the Lie group convolution is
\begin{align*}
h(u, q) = \int_{v,q^\prime \in \mathrm{nbhd(u)}} g_\theta(v^{-1}u, q, q^{\prime})f(v, q^\prime)d\mu(v)dq^\prime.
\end{align*}
Radius $r$ of the neighborhood corresponds to the inverse of the density channel $h^{(0)}$ in \cite{gordon2019convolutional}.
{\bf{Discrete Approximation}}.
Given lifted input data points $\{(v_j, q_j)\}_{j=1}^N$ and a function value $f_j = f(v_j, q_j)$ at each point, we need to select targets $\{(u_i, q_i)\}_{i=1}^{N}$ to convolve so that we can approximate the integral of the equation.
Because the convolutional range is limited by $\mathrm{nbhd}(u)$, LieConv can approximate the integrals by the Monte Carlo method:
\begin{align*}
h(u, q) = (g \hat{*} f)(u, q) = \frac{1}{n} \sum_{v_j, q_j^\prime \in \mathrm{nbhd}(u, q)} g(v_j^{-1}u, q, q_j^\prime)f(v_j, q_j^\prime)
\end{align*}
The classical convolutional filter kernel $g(\cdot)$ is only valid for discrete values and is not available for continuous group elements.
Therefore, PointConv/LieConv uses a multilayer neural network $g_\theta$ as the convolutional kernel.
However, because neural networks are suited to computation in Euclidean space and the group $G$ is not a vector space, we let $g_\theta$ take its input in the Lie algebra $\mathfrak{g}$.
We therefore use Lie groups, for which a logarithmic map exists for each group element.
That is, we let $g_\theta(u) = (g \circ \exp)_\theta(\log u)$ and parameterize $\tilde{g}_\theta=(g\circ \exp)_\theta \colon \mathfrak{g}\rightarrow \mathbb{R}^{c_{out} \times c_{in}}$ by an MLP.
Therefore, the convolution of the equation is
\begin{align*}
h_{i}=\frac{1}{n_{i}} \sum_{j \in \text { nbhd }(i)} \tilde{g}_{\theta}\left(\log \left(v_{j}^{-1} u_{i}\right), q_{i}, q_{j}\right) f_{j}.
\end{align*}
Here, the input to the MLP is $a_{ij} = \mathrm{Concat}\left( [\log (v_j^{-1}u_i), q_i, q_j] \right)$.
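The discretized convolution above can be sketched for the 1D translation group, where the lift is trivial and $\log(v_j^{-1}u_i) = x_i - x_j$. This is our own minimal illustration; the random MLP weights stand in for learned parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny MLP kernel g_theta: Lie algebra element -> scalar weight.
W1, b1 = rng.normal(size=(16, 1)), np.zeros(16)
W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)
def g_theta(a):
    return (W2 @ np.maximum(W1 @ a + b1, 0) + b2)[0]

def lieconv_1d(xs, fs, r=1.0):
    """h_i = (1 / n_i) * sum_{j in nbhd(i)} g_theta(log(v_j^{-1} u_i)) f_j
    for the 1D translation group, where log(v_j^{-1} u_i) = x_i - x_j
    and nbhd(i) = {j : |x_i - x_j| < r}."""
    h = np.zeros(len(xs))
    for i, xi in enumerate(xs):
        nbhd = [j for j, xj in enumerate(xs) if abs(xi - xj) < r]
        h[i] = sum(g_theta(np.array([xi - xs[j]])) * fs[j] for j in nbhd) / len(nbhd)
    return h

xs = np.array([0.0, 0.4, 0.9, 3.0])
fs = np.array([1.0, -1.0, 2.0, 0.5])

# Translation equivariance: shifting all coordinates leaves the (co-moving)
# outputs unchanged, since only differences x_i - x_j enter the kernel.
assert np.allclose(lieconv_1d(xs + 5.0, fs), lieconv_1d(xs, fs))
```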
\subsection{Implementation}\label{arch}
First, we explain the form of $\phi$.
Because most real-world data have a single output per input location, we treat the multiplicity of $\mathcal{D}_C$ as one, $K=1$, and define $\phi(y) = [1\quad y]^{\top}$ based on DeepSets~\citep{zaheer2017deep}.
The first dimension of the output $\phi_i$ indicates whether data located at $x_i$ is observed, so that the model can distinguish between observed data and unobserved data whose value is zero ($y_i=0$).
Then, we describe the form of $\psi$.
Following Theorem~\ref{G-representation}, $\psi$ is required to be a stationary, non-negative, and positive-definite kernel.
For EquivCNP, we change $\psi$ depending on whether the input data is continuous or discrete.
With continuous input data (e.g. 1D regression), we use RBF kernels for $\psi$.
An RBF kernel has a learnable bandwidth parameter and scale parameter that are optimized together with EquivCNP.
A functional representation $E(Z)$ is formed by multiplying the kernel $\psi$ by $\phi$.
On the other hand, when the inputs are discrete (e.g. images), we use not an RBF kernel but LieConv.
Finally, we explain the form of $\rho$.
By Theorem~\ref{G-representation}, because $\rho$ needs to be a continuous group-equivariant map between function spaces, we use LieConv for $\rho$.
In this study, under the hypothesis of separability~\citep{kaiser2017depthwise}, we implemented separable LieConv in the spatial and channel directions, to improve the efficiency of computational processing.
The details are given in Appendix B.
EquivCNP requires computing the convolution of $E(Z)$.
However, since $E(Z)$ is itself a functional representation, it cannot be handled directly on a computer.
To address this issue, we discretize $E(Z)$ over the range of context and target points.
We space the lattice points $(t_i)_{i=1}^n \subseteq \mathcal{X}$ on a uniform grid over a hypercube covering both the context and target points.
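The discretization step can be sketched as follows; the density \texttt{points\_per\_unit} and the \texttt{margin} are our illustrative hyperparameters, not values from the paper:

```python
import numpy as np

def uniform_grid(x_context, x_target, points_per_unit=16, margin=0.1):
    """Lattice (t_k) on an interval (hypercube in 1D) covering both context and
    target inputs, on which the functional representation E(Z) is evaluated."""
    lower = min(x_context.min(), x_target.min()) - margin
    upper = max(x_context.max(), x_target.max()) + margin
    n = int(np.ceil((upper - lower) * points_per_unit))
    return np.linspace(lower, upper, n)

x_c = np.array([-1.0, 0.5])
x_t = np.array([0.0, 2.0])
t = uniform_grid(x_c, x_t)
assert t[0] <= x_c.min() and t[-1] >= x_t.max()   # grid covers all points
```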
Because the conventional convolution used in ConvCNP operates on a discrete lattice input space and produces discrete outputs, we need to map the outputs back to continuous functions $\mathcal{X} \rightarrow \mathcal{Y}$.
While ConvCNP regards the outputs as weights for evenly-spaced basis functions (i.e., RBF kernel),
LieConv does not require the input locations to lie on a lattice and can produce continuous function outputs directly.
Note that the algorithm of EquivCNP can be the same as ConvCNP; it can also use evenly-spaced basis functions.
The obtained functions are used to output the Gaussian predictive mean and the variance at the given target points.
We can evaluate EquivCNP by log-likelihood using the mean and variance.
\section{Experiment}
To investigate the potential of EquivCNP, we pose three questions: 1) Is EquivCNP comparable to conventional NPs such as ConvCNP? 2) Can EquivCNP have group equivariance in addition to translation equivariance? 3) Does it preserve the symmetries?
To compare fairly with ConvCNP, the architecture of EquivCNP follows that of ConvCNP; details are given in Appendix C.
\subsection{1D Synthetic Regression Task}\label{sec:1d}
\begin{table}[t]
\caption{Log-likelihood of synthetic 1-dimensional regression}
\centering
\begin{tabular}{lrrr}
\toprule
\multicolumn{1}{c}{Model} & \multicolumn{1}{c}{RBF} & \multicolumn{1}{c}{Matern} & \multicolumn{1}{c}{Periodic} \\
\midrule
Oracle GP & $3.9335 \pm 0.5512$ & $3.7676 \pm 0.3542$ & $1.2194 \pm 5.6685$ \\
CNP~\citep{garnelo2018conditional} & $-1.7468 \pm 1.5415$ & $-1.7808 \pm 1.3124$ &$-1.0034 \pm 0.5174$ \\
ConvCNP~\citep{gordon2019convolutional} & $1.3271 \pm 1.0324$ & $0.8189\pm 0.9366$ & $-0.4787 \pm 0.5448$ \\
EquivCNP (ours) & $1.2930 \pm 1.0113$ & $0.6616 \pm 0.6728$ & $-0.4037 \pm 0.4968$\\
\bottomrule
\end{tabular}
\label{tab:1dreg}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{imgs/1d-reg.pdf}
\caption{Predictive mean and variance of ConvCNP and EquivCNP. The first two columns show the prediction of the models trained on the RBF kernel and the last two columns show the prediction of the model trained on the Matern--$\frac{5}{2}$ kernel. The target function and sampled data points are the same between the top row and bottom row except for the context. At the top row, the context is within the vertical dash line that is sampled from the same range during the training (black circle). In the bottom row, the new context located out of the training range (white circle) is appended. }
\label{fig:1dreg}
\end{figure}
To answer the first question, we tackle the 1D synthetic regression task as has been done in other papers ~\citep{garnelo2018conditional,garnelo2018neural,kim2019attentive}.
At each iteration, a function $f$ is sampled from a given function distribution, then, some of the context $\mathcal{D}_C$ and target $\mathcal{D}_T$ points are sampled from function $f$.
In this experiment, we selected the Gaussian process with RBF kernel, Matern--$\frac{5}{2}$ and periodic kernel for the function distribution.
We chose translation equivariance $T(1)$ to incorporate into EquivCNP.
We compared EquivCNP with GP (as an oracle), with CNP~\citep{garnelo2018conditional} as a baseline, and with ConvCNP.
Table~\ref{tab:1dreg} shows the log--likelihood means and standard deviations of 1000 tasks.
In this task, both contexts and targets are sampled from the range $[-2, 2]$.
From Table~\ref{tab:1dreg}, we can see that EquivCNP with translation equivariance is comparable to ConvCNP throughout all GP curve datasets. That is, EquivCNP has the model capacity to learn the functions as well as ConvCNP.
We also conducted the extrapolation regression proposed in ~\citep{gordon2019convolutional} as shown in Figure~\ref{fig:1dreg}.
The first two columns show the models trained on an RBF kernel and the last two columns on a Matern--$\frac{5}{2}$ kernel.
The top row shows the predictive distribution when the observation is given within the same training region; the bottom row for the observation is not only the training region but also the extrapolation region: $[-4, 4]$.
As a result, EquivCNP can generalize to observed data whose range is not included during training.
This result was expected because \citet{gordon2019convolutional} has mentioned that translation equivariance enables the models to adapt to this setting.
\subsection{2D Image-Completion Task}
An image-completion task aims to investigate whether EquivCNP can complete images when given an appropriate group equivariance.
The image-completion task can be regarded as a regression task that predicts the value $y_i^*$ ($\in \mathbb{R}^3$ for colored images and $\in \mathbb{R}$ for grayscale images) at the 2D image coordinates $x_i^*$, given the observed pixels $\mathcal{D}_C = \{ (x_n, y_n) \}_{n=1}^N$.
The framework of the image completion can apply not only to the images but also to other real-world applications, such as predicting spatial data~\citep{takeuchi2018angle}.
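Casting an image as a completion task of this kind can be sketched as follows; the context fraction and the random pixel-sampling scheme are our own assumptions:

```python
import numpy as np

def image_to_task(img, context_frac=0.25, seed=0):
    """Cast image completion as regression: every pixel becomes an (x, y) pair
    with x the 2D coordinate and y the intensity; a random fraction of the
    pixels is revealed as context and the full image is the target."""
    h, w = img.shape
    xs = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing='ij'), -1).reshape(-1, 2)
    ys = img.reshape(-1, 1)
    rng = np.random.default_rng(seed)
    idx = rng.permutation(h * w)[: int(context_frac * h * w)]
    return (xs[idx], ys[idx]), (xs, ys)          # (context, target)

img = np.arange(16.0).reshape(4, 4) / 15.0       # toy 4x4 grayscale "image"
(ctx_x, ctx_y), (tgt_x, tgt_y) = image_to_task(img)
assert ctx_x.shape == (4, 2) and tgt_x.shape == (16, 2)
```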
To evaluate the effect of EquivCNP with a specific group equivariance, we introduce a new dataset digital clock digits as shown in Figure~\ref{fig:test_image}.
Since previous works use the MNIST dataset for image completion, we also conduct the image completion task with rotated-MNIST.
However, we cannot find a significant difference between the group equivariance models (the result of rotated-MNIST is depicted in Appendix E).
We think that this happens because
(1) the original MNIST contains various data symmetries, including translation, scaling, and rotation, and
(2) we cannot specify them precisely.
Thus, we provide a new digital clock digits dataset.
\begin{figure}[t]
\def\@captype{table}
\begin{minipage}[T]{.55\textwidth}
\centering
\tblcaption{Log-likelihood of 2D image-completion task}
\begin{tabular}{lr}
\toprule
\multicolumn{1}{c}{Group} & Log--likelihood \\
\midrule
$T(2)$ & $1.0998 \pm 0.4115$ \\
$SO(2)$& $-2.4275 \pm 6.8856$ \\
$R_{>0} \times SO(2)$ & $\mathbf{1.8398\pm 0.5368}$ \\
$SE(2)$ & $1.1655 \pm 0.5420$ \\
\bottomrule
\end{tabular}
\label{tab:2dreg}
\end{minipage}
\hfill
\begin{minipage}[c]{.4\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{imgs/digits.pdf}
\caption{The example of training data (top) and test data (bottom).}
\label{fig:test_image}
\end{minipage}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\textwidth]{imgs/2d_reg_revised.pdf}
\caption{Image-completion task results. The top row shows the given observation and the other rows show the mean of the conditional distribution predicted by EquivCNP with the specific group equivariance: $T(2)$, $SO(2)$, $R_{>0} \times SO(2)$, and $SE(2)$. Two of each column shows the same image, and the difference between two columns is the percentage of context random sampling: $25\%$ and $75\%$. When the size of digits is the same as that of the training set (i.e. not scaling but rotation equals $SO(2)$ symmetry), $T(2)$ and $SE(2)$ have a good quality, but when the size of digits is smaller than that of training set, $R_{>0} \times SO(2)$ has a good performance.}
\label{fig:2dreg}
\end{figure}
In this experiment, we used four kinds of group equivariance; translation group $T(2)$, the 2D rotation group $SO(2)$, the translation and rotation group $SE(2)$, and the rotation-scale group $R_{>0} \times SO(2)$.
The size of the images is $64 \times 64$ pixels, and the numbers are in the center with the same vertical length.
For the test data, we transform the images by scaling within $[0.15, 0.5]$ and rotating within $[-90^\circ, +90^\circ]$.
Image completion with our digits data becomes an extrapolation task in that the test data is never seen during training, though the number shapes are the same in both sets.
The log--likelihood of image completion by EquivCNP with the group equivariance is reported in Table~\ref{tab:2dreg}.
The mean and standard deviation of the log--likelihood are calculated over 1000 tasks (i.e., each digit is evaluated under 100 random transformations).
As a result, EquivCNP with $R_{>0} \times SO(2)$ performed better than the other group equivariances.
On the other hand, the model with $SO(2)$ had the worst performance.
This might happen because $SO(2)$ equivariance alone does not enable EquivCNP to generalize to scaling.
In fact, the log--likelihood of $SE(2)$, the group equivariance combining translation $T(2)$ and rotation $SO(2)$, is not improved over that of $T(2)$.
Figure~\ref{fig:2dreg} shows the qualitative result of image completion by EquivCNP with each group equivariance.
We demonstrate that EquivCNP was able to predict digits smaller than the training digits\footnote{When the scaling is $\times 1.0$, it equals to $SO(2)$ symmetry.}.
While $T(2)$ completes the images most clearly when the sizes of digits and the number of observations are large, other groups also complete the images.
The smaller the size of digits is compared to the training digits, the worse the quality of $T(2)$ completion becomes, and $R_{>0} \times SO(2)$ completes the digits more clearly.
This is because the convolution region of $T(2)$ is invariant to the location, while that of $R_{>0} \times SO(2)$ is adaptive to the location.
As a result, for the images transformed by scaling, we can see that EquivCNP with $R_{>0} \times SO(2)$ preserved scaling group equivariance.
\section{Discussion}
We presented a new neural process, EquivCNP, that uses group equivariance adopted from LieConv.
Given a specific group equivariance, such as translation or rotation, as an inductive bias, EquivCNP performs well at regression tasks.
This is because the kernel size changes depending on the specific equivariance.
Real-world applications, such as robot learning tasks (e.g., using a hand-eye camera), are left as future work.
We also hope EquivCNPs will help in learning group equivariance~\citep{quessard2020learning} by data--driven approaches for future research.
\section{Introduction}
Data symmetry has played a significant role in deep neural networks.
In particular, the convolutional neural network, which plays an important part in the recent achievements of deep neural networks, has translation equivariance that preserves the symmetry of the translation group.
From the same point of view, many studies have aimed to incorporate various group symmetries into neural networks, especially convolutional operation~\citep{cohen2019general,deepsphere_ml,finzi2020generalizing}.
As example applications, to solve the dynamics modeling problems, some works have introduced Hamiltonian dynamics~\citep{greydanus2019hamiltonian,toth2019hamiltonian,zhong2019symplectic}.
Similarly, \citet{quessard2020learning} estimated the action of the group by assuming the symmetry in the latent space inferred by the neural network.
Incorporating the data structure (symmetries) into models as an inductive bias can reduce model complexity and improve generalization.
In terms of inductive bias, meta-learning, or learning to learn, provides a way to select an inductive bias from data.
Meta-learning uses past experiences to adapt quickly to a new task $\mathcal{T} \sim p(\mathcal{T})$ sampled from some task distribution $p(\mathcal{T})$.
Especially in supervised meta-learning, a task is described as predicting a set of unlabeled data (target points) given a set of labeled data (context points).
Various works have proposed the use of supervised meta-learning from different perspectives ~\citep{andrychowicz2016learning,ravi2016optimization,finn2017model,snell2017prototypical,santoro2016meta,rusu2018meta}.
In this study, we are interested in neural processes (NPs)~\citep{garnelo2018conditional,garnelo2018neural}, which are meta-learning models that have encoder-decoder architecture~\citep{xu2019metafun}.
The encoder is a permutation-invariant function on the context points that maps the contexts into a latent representation.
The decoder is a function that produces the conditional predictive distribution of targets given the latent representation.
The objective of NPs is to learn the encoder and the decoder, so that the predictive model generalizes well to new tasks by observing some points of the tasks.
To achieve the objective, an NP is required to learn the shared information between the training tasks $\mathcal{T}, \mathcal{T}^\prime \sim p(\mathcal{T})$: the data knowledge~\cite{lemke2015metalearning}.
Each task $\mathcal{T}$ is represented by one dataset, and multiple datasets are provided for training NPs to tackle a meta-task.
For example, consider a meta-task of completing the missing pixels in a given image.
Often, the images in each dataset are taken under the same conditions.
While the datasets contain identical subjects of images (e.g., cars or apples), the size and angle of the subjects in the image may be different; the datasets have group symmetry, such as scaling and rotation.
Therefore, it is expected that pre-constraining NPs to have group equivariance improves the performance of the NPs at those datasets.
In this paper, we investigate the group equivariance of NPs.
Specifically, we try to answer the following two questions: (1) can NPs represent equivariant functions? (2) can we explicitly induce group equivariance into NPs?
In order to answer the questions, we introduce a new family of NPs, EquivCNP, and show that EquivCNP is a permutation-invariant and group-equivariant function theoretically and empirically.
Most relevant to EquivCNP, ConvCNP~\citep{gordon2019convolutional} shows theoretically and experimentally that using a general convolution operation leads to translation equivariance; however, it does not consider incorporating other groups.
First, we introduce the decomposition theorem for permutation-invariant and group-equivariant maps.
The theorem suggests that the encoder maps the context points into a latent variable, which is a functional representation, in order to preserve the data symmetry.
Thereafter, we construct EquivCNP by following the theorem.
In this study, we adopt LieConv~\citep{finzi2020generalizing} to construct EquivCNP for practical implementation.
We tackle a 1D synthetic regression task~\citep{garnelo2018conditional,garnelo2018neural,kim2019attentive,gordon2019convolutional} to show that EquivCNP with translation equivariance is comparable to conventional NPs.
Furthermore, we design a 2D image completion task to investigate the potential of EquivCNP with several group equivariances.
As a result, we demonstrate that EquivCNP enables zero-shot generalization by incorporating not translation, but scaling equivariance.
\section{Related Work}
\subsection{Neural Networks with Group Equivariance}
Our work builds upon recent advances in group equivariant convolutional operations incorporated into deep neural networks.
The first approach is group convolution introduced in \citep{cohen2016group}, where standard convolutional kernels are used and their transformation or the output transformation is performed with respect to the group.
This group convolution induces exact equivariance, but only to the action of discrete groups.
In contrast, for exact equivariance to continuous groups, some works employ harmonic analysis so as to find the basis of equivariant functions, and then parameterize convolutional kernels in the basis \citep{weiler2019general}.
Although this approach can be applied to any type of general data~\citep{anderson2019cormorant,weiler2019general}, it is limited to local application to compact, unimodular groups.
To address these issues, LieConv~\citep{finzi2020generalizing} and other works~\citep{huang2017deep,bekkers2019b} use Lie groups.
Our EquivCNP chooses LieConv to manage group equivariance for simplicity of the implementation.
There are several works that study deep neural networks using data symmetry.
In some works, in order to solve machine learning problems such as sequence prediction or reinforcement learning, neural networks attempt to learn a data symmetry of physical systems from noisy observations directly~\citep{greydanus2019hamiltonian,toth2019hamiltonian,zhong2019symplectic,sanchez2019hamiltonian}.
While both these studies and EquivCNP can handle data symmetries, EquivCNP is not limited to specific domains such as physics.
Furthermore, \citet{quessard2020learning} let the latent space into which neural networks map data, have group equivariance, and estimated the parameters of data symmetries.
In terms of using group equivariance in the latent space, EquivCNP is similar to this study but differs from being able to use various group equivariance.
\subsection{Family of neural processes}
NPs~\citep{garnelo2018conditional,garnelo2018neural} are deep generative models for regression functions that map an input $x_i \in \mathbb{R}^{d_x}$ into an output $y_i \in \mathbb{R}^{d_y}$.
In particular, given an arbitrary number of observed data points $(x_C, y_C) \coloneqq \{(x_i, y_i)\}_{i=1}^{C}$, NPs model the conditional distribution of the target value $y_T$ at some new, unobserved target data point $x_T$, where $(x_T, y_T)\coloneqq \{ (x_j, y_j) \}_{j=1}^{T}$.
Fundamentally, there are two NP variants: deterministic and probabilistic.
Deterministic NPs~\citep{garnelo2018conditional}, known as conditional NPs (CNPs), model the conditional distribution as:
\begin{align*}
p(y_T | x_T, x_C, y_C) \coloneqq p(y_T | x_T, r_C),
\end{align*}
where $r$ represents a function that maps data sets $(x_C, y_C)$ into a finite-dimensional vector space in a permutation-invariant way and $r_C \coloneqq r(x_C, y_C) \in \mathbb{R}^d$ is the feature vector.
The function $r$ can be implemented by DeepSets~\citep{zaheer2017deep}.
The likelihood $p(y_T|x_T, r_C)$ is modeled as a Gaussian distribution factorized across the targets $(x_j, y_j)$, whose mean and variance are produced by passing $r_C$ and $x_j$ through an MLP.
The CNP is trained by maximizing the likelihood.
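To make this construction concrete, the following is a minimal NumPy sketch of a CNP forward pass. The weights are random and untrained, and the tiny architecture (a one-layer DeepSets encoder with mean pooling, a linear decoder producing a Gaussian mean and log-variance) is purely illustrative, not the architecture used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d_r = 4  # dimension of the set representation r_C

# Hypothetical random (untrained) weights, for illustration only.
W_enc = rng.normal(size=(d_r, 2))        # encoder: (x_i, y_i) -> r_i
W_dec = rng.normal(size=(2, d_r + 1))    # decoder: (r_C, x_j) -> (mu, log sigma)

def encode(xc, yc):
    """DeepSets encoder: embed each context pair, then mean-pool.

    Mean pooling makes the representation permutation-invariant."""
    pairs = np.stack([xc, yc], axis=1)            # (C, 2)
    return np.tanh(pairs @ W_enc.T).mean(axis=0)  # (d_r,)

def decode(r_c, xt):
    """Factorized Gaussian likelihood: one (mu, sigma) per target input."""
    inp = np.concatenate([np.broadcast_to(r_c, (len(xt), d_r)), xt[:, None]], axis=1)
    out = inp @ W_dec.T
    mu, sigma = out[:, 0], np.exp(out[:, 1])      # exp keeps sigma > 0
    return mu, sigma

xc, yc = np.array([0.0, 0.5]), np.array([1.0, -1.0])  # context
xt = np.array([0.25, 0.75, 1.5])                      # target inputs
mu, sigma = decode(encode(xc, yc), xt)
```

Reordering the context leaves `encode(xc, yc)` unchanged, which is exactly the permutation invariance required of $r$.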
Probabilistic NPs include a latent variable $z$.
The NP infers $q(z|r_C)$ given an input $r_C$ using the reparametrization trick~\citep{kingma2013auto} and models such a conditional distribution as:
\begin{align*}
p(y_T|x_T, x_C, y_C) \coloneqq \int p(y_T| x_T, r_C, z) q(z|r_C)dz
\end{align*}
and it is trained by maximizing an ELBO: $\mathcal{L}(\phi, \theta) = \mathbb{E}_{z \sim q_\phi(z|x_T, y_T)}[ \log p_\theta(y_T|x_T)] - KL[q_\phi(z|x_T, y_T) \| p_\theta(z| x_C, y_C) ]$.
NPs have various useful properties: i) Scalability: the computational cost of NPs scales as $\mathcal{O}(n+m)$ with respect to $n$ context and $m$ target points; ii) Flexibility: NPs can define a conditional distribution over an arbitrary number of target points, conditioned on an arbitrary number of observations; iii) Permutation invariance: the encoder of NPs uses DeepSets~\citep{zaheer2017deep} to make the target prediction permutation-invariant.
Thanks to these properties, \citet{galashov2019meta} replaced Gaussian processes with NPs in Bayesian optimization, contextual multi-armed bandits, and Sim2Real tasks.
While there are many NP variants~\citep{kim2019attentive,louizos2019functional,xu2019metafun} that improve the performance of NPs, they do not yet take group equivariance into account.
The work most similar to EquivCNP is ConvCNP~\citep{gordon2019convolutional}, which incorporates only translation equivariance.
In contrast, EquivCNP can incorporate not only translation but also other groups such as rotation and scaling.
\section{Decomposition Theorem}
In this section, we consider group convolution.
We first prepare some definitions and terminology.
Let $\mathcal{X}$ and $\mathcal{Y}\subset \mathbb{R}$
be the input space and output space, respectively.
We define $\mathcal{Z}_{M}=(\mathcal{X} \times \mathcal{Y})^{M}$ as the collection of $M$ input-output pairs, $\mathcal{Z}_{\leq M}=\bigcup_{n=1}^{M} \mathcal{Z}_{n}$ as the collection of at most $M$ pairs, and $\mathcal{Z}=\bigcup_{m=1}^{\infty} \mathcal{Z}_{m}$ as the collection of finitely many pairs.
Let $[n]=\{1, \ldots, n\}$ for $n\in \mathbb{N}$, and let $\mathbb{S}_n$ be the permutation group on $[n]$.
The action of $\mathbb{S}_n$ on $\mathcal{Z}_{n}$ is defined as
\begin{align*}
\pi Z_n
:= ((\boldsymbol{x}_{\pi^{-1}(1)}, \boldsymbol{y}_{\pi^{-1}(1)}), \ldots,(\boldsymbol{x}_{\pi^{-1}(n)}, \boldsymbol{y}_{\pi^{-1}(n)})),
\end{align*}
where $\pi\in\mathbb{S}_n$ and $Z_n\in \mathcal{Z}_{n}$.
We define the multiplicity of $Z_n=((\boldsymbol{x}_{1}, \boldsymbol{y}_{1}), \ldots,(\boldsymbol{x}_{n}, \boldsymbol{y}_{n}))\in \mathcal{Z}_{n}$ by
\begin{align*}
\operatorname{mult}(Z_n)
:=\sup \left\{\left|\left\{i \in[n]: \boldsymbol{x}_{i}=\hat{\boldsymbol{x}}\right\}\right|: \hat{\boldsymbol{x}}=\boldsymbol{x}_{1}, \ldots, \boldsymbol{x}_{n}\right\}
\end{align*}
and the multiplicity of $\mathcal{Z}^{\prime} \subseteq \mathcal{Z}$ by $\operatorname{mult}(\mathcal{Z}^{\prime})
:=\sup_{Z_n \in \mathcal{Z}^{\prime}} \operatorname{mult}(Z_n).$
Then, a collection $\mathcal{Z}^{\prime} \subseteq \mathcal{Z}$ is said to have multiplicity $K$ if $\operatorname{mult}(\mathcal{Z}^{\prime})=K$.
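The multiplicity of a finite data set can be computed directly. The following short Python helper (our own illustrative sketch, not from the paper) counts the largest number of pairs sharing the same input location:

```python
from collections import Counter

def multiplicity(Z_n):
    """mult(Z_n) for Z_n = [(x_1, y_1), ..., (x_n, y_n)]:
    the largest number of pairs whose inputs x_i coincide."""
    counts = Counter(x for x, _ in Z_n)
    return max(counts.values()) if counts else 0

# Two observations at x = 0.5 and one at x = 1.0 give multiplicity 2.
Z = [(0.5, 1.0), (0.5, -1.0), (1.0, 3.0)]
print(multiplicity(Z))
```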
Mathematically, symmetry is described in terms of group action.
The following definition of group-equivariant maps formalizes what it means to preserve the symmetry in data.
\begin{df}[Group Equivariance and Invariance]
Suppose that a group $G$ acts on sets $\mathcal{S}$ and $\mathcal{S}'$.
Then, a map $\Phi:\mathcal{S}\to\mathcal{S}'$ is called $G$-equivariant when $\Phi(g\cdot s)=g\cdot \Phi(s)$ holds for arbitrary $g\in G$ and $s\in\mathcal{S}$.
In particular, when $G$ acts on $\mathcal{S}'$ trivially (i.e., $g\cdot s'=s'$ for $g\in G$ and $s'\in\mathcal{S}'$), the $G$-equivariant map is said to be $G$-invariant: $\Phi(g\cdot s)= \Phi(s)$.
\end{df}
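The definition can be checked numerically on simple examples. The snippet below (an illustrative sketch) verifies permutation invariance of sum pooling over a set, and translation equivariance of the map $f(x) = x + c$, where the translation group acts by $x \mapsto x + g$ on both input and output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Permutation invariance: sum pooling over a set ignores ordering.
Z = rng.normal(size=(5, 2))       # five (x, y) pairs
perm = rng.permutation(5)
def pool(Z):
    return Z.sum(axis=0)
assert np.allclose(pool(Z), pool(Z[perm]))

# Translation equivariance: f(x) = x + c commutes with translations,
# i.e., f(x + g) = f(x) + g for every translation g.
def f(x):
    return x + 0.7
x = rng.normal(size=5)
g = 1.3
assert np.allclose(f(x + g), f(x) + g)
```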
Then, we can derive the following theorem, which decomposes a permutation-invariant and group-equivariant function into two tractable functions.
\begin{thm}[Decomposition Theorem]\label{G-representation}
Let $G$ be a group.
Let $\mathcal{Z}_{\leq M}^{\prime} \subseteq \mathcal{Z}_{\leq M}$ be topologically closed, permutation-invariant, and $G$-invariant with multiplicity $K$.
For a function $\Phi: \mathcal{Z}_{\leq M}^{\prime} \rightarrow C_{b}(\mathcal{X}, \mathcal{Y})$, the following conditions are equivalent:
\begin{itemize}
\item[(I)] $\Phi$ is continuous, permutation-invariant and $G$-equivariant.
\item[(II)] There exist a function space $\mathcal{H}$, a continuous $G$-equivariant function $\rho: \mathcal{H} \rightarrow C_{b}(\mathcal{X}, \mathcal{Y})$, and a continuous $G$-invariant interpolating kernel $\psi: \mathcal{X}^2 \rightarrow \mathbb{R}$ such that
\begin{align*}
\Phi(Z_m)
=\rho \left(\sum_{i=1}^{m}
\phi_{K+1}\left(y_{i}\right)
\psi\left(\cdot, \boldsymbol{x}_{i}\right)
\right),
\end{align*}
where $\phi_{K+1}: \mathcal{Y} \rightarrow \mathbb{R}^{K+1}$ is defined by $\phi_{K+1}(y):=[1,y,y^2,\ldots,y^K]^\top$.
\end{itemize}
\end{thm}
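The inner sum in condition (II) — the functional representation $E(Z)(\cdot) = \sum_i \phi_{K+1}(y_i)\,\psi(\cdot, \boldsymbol{x}_i)$ — can be sketched directly for $K=1$ with an RBF kernel as $\psi$. The outer map $\rho$ (implemented by LieConv in EquivCNP) is omitted here, and the length scale is an illustrative choice:

```python
import numpy as np

def phi(y, K=1):
    """phi_{K+1}(y) = [1, y, ..., y^K]; K = 1 gives density and value channels."""
    return np.array([y ** k for k in range(K + 1)])

def psi(x, x_i, length_scale=0.5):
    """RBF interpolating kernel; stationary, hence translation-invariant."""
    return np.exp(-0.5 * ((x - x_i) / length_scale) ** 2)

def encode(Z, grid, K=1):
    """E(Z) evaluated at grid points t: E(Z)(t) = sum_i phi(y_i) * psi(t, x_i)."""
    H = np.zeros((len(grid), K + 1))
    for x_i, y_i in Z:
        H += np.outer(psi(grid, x_i), phi(y_i, K))
    return H

Z = [(-0.5, 1.0), (0.3, -2.0)]
grid = np.linspace(-2, 2, 9)
H = encode(Z, grid)   # shape (9, 2): a density channel and a value channel
```

Because $\psi$ is stationary, translating both the data and the evaluation grid by the same amount leaves $E(Z)$ unchanged, which is the translation-equivariance carried into the decoder.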
Thanks to Theorem~\ref{G-representation}, we can construct permutation-invariant and group-equivariant NPs whose encoder and decoder forms are determined.
In this paper, we call such a $\Phi$ an EquivDeepSet.
\section{Group Equivariant Conditional Neural Processes}
In this section, we present EquivCNP, a permutation-invariant and group-equivariant map.
EquivCNP models the same conditional distribution as CNPs:
\begin{align*}
p(\boldsymbol{Y}_T | \boldsymbol{X}_T, \mathcal{D}_C)&=\prod_{n=1}^{N} p\left(\boldsymbol{y}_{n} | \Phi_{\boldsymbol{\theta}}(\mathcal{D}_C)\left(\boldsymbol{x}_{n}\right)\right)\\
&=\prod_{n=1}^{N} \mathcal{N}\left(\boldsymbol{y}_{n} ; \boldsymbol{\mu}_{n}, \mathbf{\Sigma}_{n}\right) \text { with }\left(\boldsymbol{\mu}_{n}, \mathbf{\Sigma}_{n}\right)=\Phi_{\boldsymbol{\theta}}(\mathcal{D}_C)\left(\boldsymbol{x}_{n}\right)
\end{align*}
where $\mathcal{N}$ denotes the density function of a normal distribution, $\mathcal{D}_C = (\boldsymbol{X}_C, \boldsymbol{Y}_C) = \{ (x_i, y_i)\}_{i=1}^{C}$ is the observed context data, and $\Phi_{\boldsymbol{\theta}}$ is an EquivDeepSet.
The important components of EquivCNP to be determined are $\rho$, $\phi$, and $\psi$.
The algorithm is represented in Algorithm~\ref{EquivCNP}.
In more detail, Section~\ref{group_conv} first introduces the definition of group convolution, then Section~\ref{lieconv} explains LieConv~\citep{finzi2020generalizing}, which EquivCNP uses to implement group convolution.
Finally, we describe the architecture of the proposed EquivCNP in Section~\ref{arch}.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{imgs/ver14.pdf}
\caption{Overview of EquivCNP.}
\label{fig:my_label}
\end{figure}
\begin{algorithm}[t]
\caption{Prediction of Group Equivariant Conditional Neural Process}
\label{EquivCNP}
\begin{algorithmic}
\REQUIRE $\rho=$\text{LieConv}, RBF kernel $\psi$, context $\{\boldsymbol{x}_i,y_i\}_{i=1}^N$, target $\{\boldsymbol{x}^*_j\}_{j=1}^M$
\STATE lower, upper $\leftarrow$ range($(\boldsymbol{x}_i)_{i=1}^N \cup (\boldsymbol{x}_j)_{j=1}^M$)
\STATE $(\boldsymbol{t}_k)_{k=1}^{T} \leftarrow \text{uniform\_grid}(\text{lower}, \text{upper}; \gamma)$
\STATE // Encoding the context information into the representation $\boldsymbol{h}$ (i.e., Encoder)
\STATE $\boldsymbol{h} \leftarrow \sum_{i=1}^N \phi_{K+1}(y_i)\, \psi ([\boldsymbol{x}_j^*, \boldsymbol{t}_k],\boldsymbol{x}_i)$
\STATE $(\boldsymbol{\mu}_j,\boldsymbol{\Sigma}_j)^{\top} = \text{LieConvNet}(\boldsymbol{h})(\boldsymbol{x}^*_j)$ // Decoder
\ENSURE $\{(\boldsymbol{\mu}_j,\boldsymbol{\Sigma}_j)\}_{j=1}^M$
\end{algorithmic}
\end{algorithm}
\subsection{Group Convolution}\label{group_conv}
When $\mathcal{X}$ is a homogeneous space of a group $G$,
the lift of $x\in \mathcal{X}$ is the set of group elements that transfer a fixed origin $o$ to $x$: $\mathrm{Lift}(x)=\{u \in G\colon uo = x\}$.
That is, each pair of coordinates and features is lifted into $K$ elements\footnote{$K$ is a hyperparameter and we randomly pick $K$ elements $\{ u_{ik} \}_{k=1}^K$ in the orbit corresponding to $x_i$.}: $\{(x_i, f_i)\}_{i=1}^{N} \rightarrow \{(u_{ik}, f_i)\}_{i=1,k=1}^{N,K}$.
When the group action is transitive, the space on which it acts is a homogeneous space.
More generally, however, the action is not transitive, and the total space contains an infinite number of orbits.
Consider a quotient space $Q = \mathcal{X}/G$, which consists of orbits of $G$ in $\mathcal{X}$.
Then each element $q\in Q$ is a homogenous space of $G$.
Because many equivariant maps use this information, the total space should be $G\times \mathcal{X}/G$, not $G$. Hence, $x\in\mathcal{X}$ is lifted to the pair $(u, q)$, where $u\in G$ and $q\in Q$.
Group convolution is a generalization of convolution by translation, which is used in images, etc., to other groups.
\begin{df}[Group Convolution~\citep{kondor2018generalization,cohen2019general}]
Let $f \colon G\times Q \rightarrow \mathbb{R}$ be a function, $g \colon G\times Q \times Q \rightarrow \mathbb{R}$ a kernel, and $\mu(\cdot)$ a Haar measure on $G$.
For any $u \in G$, the convolution of $f$ by $g$ is defined as
\begin{align*}
h(u, q) = \int_{G\times Q} g(v^{-1}u, q, q^{\prime})f(v, q^\prime)d\mu(v)dq^\prime.
\end{align*}
\end{df}
By the definition,
we can verify that the group convolution is $G$-equivariant.
Moreover, \citet{cohen2019general} recently showed that a $G$-equivariant linear map is represented by group convolution when the action of a group is transitive.
\subsection{Local Group Convolution}\label{lieconv}
In this study, we use LieConv~\citep{finzi2020generalizing} as the group convolution.
LieConv generalizes group convolution to Lie groups.
It acts on pairs $\{(x_i, f_i)\}_{i=1}^{N}$ of coordinates $x_i \in \mathcal{X}$ and values $f_i\in V$ in a vector space $V$.
First, input data $x_i$ is transformed (lifted) into group elements $u_i$ and orbits $q_i$.
Next, we define the convolution range based on the invariant (pseudo) distance in the group, and convolve it using a kernel parameterized by a neural network.
What is important for inductive bias and computational efficiency in convolution is that the range of convolutions is local; that is, if the distance between $u_i$ and $u_j$ is larger than $r$, $g_{\theta} (u_i, u_j) = 0$.
First, we define distance in the Lie group to deal with locality in the matrix group\footnote{We assume that we have a finite-dimensional representation.}:
\begin{align*}
d(u, v) \coloneqq \|\log (u^{-1}v) \|_F,
\end{align*}
where
$\log$ denotes the matrix logarithm, and $\|\cdot\|_F$ denotes the Frobenius norm.
Because $d(wu, wv) = \| \log (u^{-1}w^{-1}wv) \|_F = d(u, v)$ holds,
this function is left-invariant and is a pseudo-distance.\footnote{This is because the triangle inequality is not satisfied.}
To further account for orbit $q$, we extend the distance to $d((u_i, q_i), (v_j, q_j))^2 = d(u_i, v_j)^2 + \alpha d_{\mathcal{O}}(q_i,q_j)^2$, where $d_{\mathcal{O}}(q_i,q_j):=\inf_{x_i\in q_i, x_j \in q_j} d_{\mathcal{X}}(x_i, x_j)$ and $d_{\mathcal{X}}$ is the distance on $\mathcal{X}$.
It is not necessarily invariant to the transformation in $q$.
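For the 2D rotation group $SO(2)$, the matrix logarithm has a closed form, so the pseudo-distance and its left invariance can be checked directly. The following is an illustrative sketch (the closed-form log is valid for rotation angles in $(-\pi, \pi)$):

```python
import numpy as np

def rot(theta):
    """2D rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def so2_log(R):
    """Closed-form matrix logarithm of a 2D rotation (angle in (-pi, pi))."""
    theta = np.arctan2(R[1, 0], R[0, 0])
    return np.array([[0.0, -theta], [theta, 0.0]])

def d(u, v):
    """Left-invariant (pseudo-)distance d(u, v) = ||log(u^{-1} v)||_F."""
    return np.linalg.norm(so2_log(u.T @ v))   # u^{-1} = u.T for rotations

u, v, w = rot(0.2), rot(1.1), rot(-0.7)
# Left invariance: d(wu, wv) = d(u, v).
assert np.isclose(d(w @ u, w @ v), d(u, v))
# On SO(2) the distance reduces to sqrt(2) * |angle difference|.
assert np.isclose(d(u, v), np.sqrt(2) * abs(1.1 - 0.2))
```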
Based on this distance, the neighborhood is $\mathrm{nbhd}(u, q) = \{ (v, q^\prime) \mid d((u, q), (v, q^\prime)) < r\}$.
Because an appropriate value of the radius $r$ is difficult to determine and depends on the group being used, $r$ should be set via the ratio of the convolution range to the total input.
Therefore, the Lie group convolution is
\begin{align*}
h(u, q) = \int_{v,q^\prime \in \mathrm{nbhd(u)}} g_\theta(v^{-1}u, q, q^{\prime})f(v, q^\prime)d\mu(v)dq^\prime.
\end{align*}
The radius $r$ of the neighborhood corresponds to the inverse of the density channel $h^{(0)}$ in \citet{gordon2019convolutional}.
{\bf{Discrete Approximation}}.
Given lifted input data points $\{(v_j, q_j)\}_{j=1}^N$ and a function value $f_j = f(v_j, q_j)$ at each point, we need to select targets $\{(u_i, q_i)\}_{i=1}^{N}$ to convolve over so that we can approximate the integral above.
Because the convolutional range is limited by $\mathrm{nbhd}(u)$, LieConv can approximate the integrals by the Monte Carlo method:
\begin{align*}
h(u, q) = (g \hat{*} f)(u, q) = \frac{1}{n} \sum_{v_j, q_j^\prime \in \mathrm{nbhd}(u, q)} g(v_j^{-1}u, q, q_j^\prime)f(v_j, q_j^\prime).
\end{align*}
The classical convolutional filter kernel $g(\cdot)$ is only defined for discrete offsets and is not available for continuous group elements.
Therefore, PointConv/LieConv uses a multilayer neural network $g_\theta$ as the convolutional kernel.
However, because neural networks operate best in Euclidean space and $G$ is not a vector space, we let $g_\theta$ act on the Lie algebra $\mathfrak{g}$.
For this reason, we restrict attention to Lie groups, for which a logarithmic map exists at each group element.
That is, let $g_\theta(u) = (g \circ \exp)_\theta(\log u)$, and parameterize $\tilde{g}_\theta=(g\circ \exp)_\theta$ by MLP.
We use $\tilde{g}_\theta \colon \mathfrak{g}\rightarrow \mathbb{R}^{c_{out} \times c_{in}}$.
Therefore, the convolution of the equation is
\begin{align*}
h_{i}=\frac{1}{n_{i}} \sum_{j \in \text { nbhd }(i)} \tilde{g}_{\theta}\left(\log \left(v_{j}^{-1} u_{i}\right), q_{i}, q_{j}\right) f_{j}.
\end{align*}
Here, the input to the MLP is $a_{ij} = \mathrm{Concat}\left( [\log (v_j^{-1}u_i), q_i, q_j] \right)$.
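The discrete convolution above can be sketched for the simplest case, the translation group $T(1)$: group elements are the points themselves, $\log(v^{-1}u) = u - v$, and the orbit channels are trivial (the action is transitive). The tiny MLP kernel below uses random, untrained weights and is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical random weights for the MLP kernel g~_theta.
W1, b1 = rng.normal(size=(8, 1)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

def g_theta(a):
    """Tiny MLP kernel acting on Lie-algebra coordinates a (here, a = u - v)."""
    h = np.tanh(W1 @ a + b1)
    return (W2 @ h + b2)[0]

def lieconv_1d(xs, fs, r=1.0):
    """Monte Carlo group convolution for T(1):
    h_i = (1/n_i) * sum over the neighborhood of g~_theta(u_i - v_j) f_j."""
    out = np.zeros_like(fs)
    for i, u in enumerate(xs):
        nbhd = [j for j, v in enumerate(xs) if abs(u - v) < r]
        out[i] = sum(g_theta(np.array([u - xs[j]])) * fs[j] for j in nbhd) / len(nbhd)
    return out

xs = np.array([0.0, 0.4, 0.9, 2.5])
fs = np.array([1.0, -1.0, 0.5, 2.0])
h = lieconv_1d(xs, fs)
# Translation equivariance: shifting all inputs leaves the output unchanged,
# because both the kernel and the neighborhood depend only on differences u - v.
assert np.allclose(h, lieconv_1d(xs + 3.0, fs))
```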
\subsection{Implementation}\label{arch}
First, we explain the form of $\phi$.
Because most real-world data have a single output per input location, we treat the multiplicity of $\mathcal{D}_C$ as one ($K=1$) and define $\phi(y) = [1\quad y]^{\top}$, following \citet{zaheer2017deep}.
The first dimension of the output $\phi(y_i)$ indicates whether the data located at $x_i$ is observed, so that the model can distinguish between observed data and unobserved data whose value is zero ($y_i=0$).
Then, we describe the form of $\psi$.
Following our Theorem \ref{G-representation}, $\psi$ is required to be a stationary, non-negative, positive-definite kernel.
For EquivCNP, we change $\psi$ depending on whether the input data is continuous or discrete.
With continuous input data (e.g. 1D regression), we use RBF kernels for $\psi$.
An RBF kernel has learnable bandwidth and scale parameters, which are optimized jointly with EquivCNP.
A functional representation $E(Z)$ is formed by multiplying the kernel $\psi$ with $\phi$.
On the other hand, when the inputs are discrete (e.g. images), we use not an RBF kernel but LieConv.
Finally, we explain the form of $\rho$.
By Theorem~\ref{G-representation}, because $\rho$ needs to be a continuous group-equivariant map between function spaces, we use LieConv for $\rho$.
In this study, under the hypothesis of separability~\citep{kaiser2017depthwise}, we implemented separable LieConv in the spatial and channel directions, to improve the efficiency of computational processing.
The details are given in Appendix B.
EquivCNP requires computing the convolution of $E(Z)$.
However, since $E(Z)$ is itself a functional representation, it cannot be computed on a computer as is.
To address this issue, we discretize $E(Z)$ over the range of context and target points.
We space the lattice points $(t_i)_{i=1}^n \subseteq \mathcal{X}$ on a uniform grid over a hypercube covering both the context and target points.
Because the conventional convolution used in ConvCNP requires a discrete lattice input space to operate on and produces discrete outputs, the outputs need to be mapped back to continuous functions $\mathcal{X} \rightarrow \mathcal{Y}$.
While ConvCNP regards the outputs as weights for evenly-spaced basis functions (i.e., RBF kernel),
LieConv does not require the input location to be lattice and can produce continuous functions output directly.
Note that the algorithm of EquivCNP can be the same as ConvCNP; it can also use evenly-spaced basis functions.
The obtained functions are used to output the Gaussian predictive mean and the variance at the given target points.
We can evaluate EquivCNP by log-likelihood using the mean and variance.
\section{Experiment}
To investigate the potential of EquivCNP, we pose three questions: 1) Is EquivCNP comparable to conventional NPs such as ConvCNP? 2) Can EquivCNP incorporate group equivariance beyond translation equivariance? 3) Does it preserve the corresponding symmetries?
To compare fairly with ConvCNP, the architecture of EquivCNP follows that of ConvCNP; details are given in Appendix C.
\subsection{1D Synthetic Regression Task}\label{sec:1d}
\begin{table}[t]
\caption{Log-likelihood of synthetic 1-dimensional regression}
\centering
\begin{tabular}{lrrr}
\toprule
\multicolumn{1}{c}{Model} & \multicolumn{1}{c}{RBF} & \multicolumn{1}{c}{Matern} & \multicolumn{1}{c}{Periodic} \\
\midrule
Oracle GP & $3.9335 \pm 0.5512$ & $3.7676 \pm 0.3542$ & $1.2194 \pm 5.6685$ \\
CNP~\citep{garnelo2018conditional} & $-1.7468 \pm 1.5415$ & $-1.7808 \pm 1.3124$ &$-1.0034 \pm 0.5174$ \\
ConvCNP~\citep{gordon2019convolutional} & $1.3271 \pm 1.0324$ & $0.8189\pm 0.9366$ & $-0.4787 \pm 0.5448$ \\
EquivCNP (ours) & $1.2930 \pm 1.0113$ & $0.6616 \pm 0.6728$ & $-0.4037 \pm 0.4968$\\
\bottomrule
\end{tabular}
\label{tab:1dreg}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{imgs/1d-reg.pdf}
\caption{Predictive mean and variance of ConvCNP and EquivCNP. The first two columns show the predictions of the models trained on the RBF kernel, and the last two columns show the predictions of the models trained on the Matern--$\frac{5}{2}$ kernel. The target function and sampled data points are the same in the top and bottom rows except for the context. In the top row, the context lies within the vertical dashed lines, sampled from the same range as during training (black circles). In the bottom row, new context located outside the training range (white circles) is added. }
\label{fig:1dreg}
\end{figure}
To answer the first question, we tackle the 1D synthetic regression task, as has been done in prior work~\citep{garnelo2018conditional,garnelo2018neural,kim2019attentive}.
At each iteration, a function $f$ is sampled from a given function distribution, then, some of the context $\mathcal{D}_C$ and target $\mathcal{D}_T$ points are sampled from function $f$.
In this experiment, we selected the Gaussian process with RBF kernel, Matern--$\frac{5}{2}$ and periodic kernel for the function distribution.
We chose translation equivariance $T(1)$ to incorporate into EquivCNP.
We compared EquivCNP with GP (as an oracle), with CNP~\citep{garnelo2018conditional} as a baseline, and with ConvCNP.
Table~\ref{tab:1dreg} shows the log-likelihood means and standard deviations over 1000 tasks.
In this task, both contexts and targets are sampled from the range $[-2, 2]$.
From Table~\ref{tab:1dreg}, we can see that EquivCNP with translation equivariance is comparable to ConvCNP throughout all GP curve datasets. That is, EquivCNP has the model capacity to learn the functions as well as ConvCNP.
We also conducted the extrapolation regression proposed by \citet{gordon2019convolutional}, as shown in Figure~\ref{fig:1dreg}.
The first two columns show the models trained on an RBF kernel and the last two columns on a Matern--$\frac{5}{2}$ kernel.
The top row shows the predictive distribution when the observations are given within the training region; in the bottom row, observations come not only from the training region but also from the extrapolation region $[-4, 4]$.
As a result, EquivCNP can generalize to observed data whose range was not seen during training.
This result was expected because \citet{gordon2019convolutional} has mentioned that translation equivariance enables the models to adapt to this setting.
\subsection{2D Image-Completion Task}
An image-completion task aims to investigate whether EquivCNP can complete images when it is given an appropriate group equivariance.
The image-completion task can be regarded as a regression task that predicts the value $y_i^*$ at the 2D image coordinates $x_i^*$, given the observed pixels $\mathcal{D}_C = \{ (x_n, y_n) \}_{n=1}^N$ (with $y_n \in \mathbb{R}^3$ for colored images and $y_n \in \mathbb{R}$ for grayscale images).
The framework of the image completion can apply not only to the images but also to other real-world applications, such as predicting spatial data~\citep{takeuchi2018angle}.
To evaluate the effect of EquivCNP with a specific group equivariance, we introduce a new dataset of digital clock digits, as shown in Figure~\ref{fig:test_image}.
Since previous works use the MNIST dataset for image completion, we also conduct the image completion task with rotated-MNIST.
However, we did not find a significant difference between the group equivariance models (the results on rotated-MNIST are shown in Appendix E).
We think this happens because
(1) the original MNIST contains various data symmetries, including translation, scaling, and rotation, and
(2) we cannot specify them precisely.
Thus, we provide a new digital clock digits dataset.
\begin{figure}[t]
\def\@captype{table}
\begin{minipage}[T]{.55\textwidth}
\centering
\tblcaption{Log-likelihood of 2D image-completion task}
\begin{tabular}{lr}
\toprule
\multicolumn{1}{c}{Group} & Log-likelihood \\
\midrule
$T(2)$ & $1.0998 \pm 0.4115$ \\
$SO(2)$& $-2.4275 \pm 6.8856$ \\
$R_{>0} \times SO(2)$ & $\mathbf{1.8398\pm 0.5368}$ \\
$SE(2)$ & $1.1655 \pm 0.5420$ \\
\bottomrule
\end{tabular}
\label{tab:2dreg}
\end{minipage}
\hfill
\begin{minipage}[c]{.4\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{imgs/digits.pdf}
\caption{The example of training data (top) and test data (bottom).}
\label{fig:test_image}
\end{minipage}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\textwidth]{imgs/2d_reg_revised.pdf}
\caption{Image-completion task results. The top row shows the given observations, and the other rows show the mean of the conditional distribution predicted by EquivCNP with a specific group equivariance: $T(2)$, $SO(2)$, $R_{>0} \times SO(2)$, and $SE(2)$. Each digit is shown in two columns, which differ in the percentage of randomly sampled context: $25\%$ and $75\%$. When the size of the digits is the same as in the training set (i.e., no scaling, so rotation alone gives $SO(2)$ symmetry), $T(2)$ and $SE(2)$ produce good completions, but when the digits are smaller than in the training set, $R_{>0} \times SO(2)$ performs well.}
\label{fig:2dreg}
\end{figure}
In this experiment, we used four kinds of group equivariance: the translation group $T(2)$, the 2D rotation group $SO(2)$, the translation-and-rotation group $SE(2)$, and the rotation-scale group $R_{>0} \times SO(2)$.
The size of the images is $64 \times 64$ pixels, and the numbers are in the center with the same vertical length.
For the test data, we transform the images by scaling within $[0.15, 0.5]$ and rotating within $[-90^\circ, +90^\circ]$.
Image completion with our digits data becomes an extrapolation task in that the test data is never seen during training, though the number shapes are the same in both sets.
The log-likelihood of image completion by EquivCNP with each group equivariance is reported in Table~\ref{tab:2dreg}.
The mean and standard deviation of the log-likelihood are calculated over 1000 tasks (i.e., each digit is evaluated under 100 random transformations).
As a result, EquivCNP with $R_{>0} \times SO(2)$ performed better than the other group equivariances.
On the other hand, the model with $SO(2)$ had the worst performance.
This might be because $SO(2)$ alone does not enable EquivCNP to generalize to scaling.
In fact, the log-likelihood of $SE(2)$, which combines translation $T(2)$ and rotation $SO(2)$, is no better than that of $T(2)$.
Figure~\ref{fig:2dreg} shows the qualitative result of image completion by EquivCNP with each group equivariance.
We demonstrate that EquivCNP was able to predict digits smaller than the training digits\footnote{When the scaling is $\times 1.0$, this equals $SO(2)$ symmetry.}.
While $T(2)$ completes the images most clearly when the sizes of digits and the number of observations are large, other groups also complete the images.
The smaller the size of digits is compared to the training digits, the worse the quality of $T(2)$ completion becomes, and $R_{>0} \times SO(2)$ completes the digits more clearly.
This is because the convolution region of $T(2)$ is invariant to the location, while that of $R_{>0} \times SO(2)$ is adaptive to the location.
As a result, for the images transformed by scaling, we can see that EquivCNP with $R_{>0} \times SO(2)$ preserved scaling group equivariance.
\section{Discussion}
We presented a new neural process, EquivCNP, that uses group equivariance adopted from LieConv.
Given a specific group equivariance, such as translation or rotation, as an inductive bias, EquivCNP performs well on regression tasks.
This is because the kernel size changes depending on the specific equivariance.
Real-world applications, such as robot learning tasks (e.g., using a hand-eye camera), are left as future work.
We also hope EquivCNPs will help in learning group equivariance~\citep{quessard2020learning} via data-driven approaches in future research.
\subsection{Social influence and diet}
Social influence on dietary habits is an active area of research \cite{higgs2016social,shepherd_1999}. Food consumption has been found to be influenced by eating with others \cite{hetherington2006situational}, and the food choices of others, including people one does not know, have been observed to influence food choices, even when not consciously recognized \cite{christie2018vegetarian,robinson2013food}. Particular attention has been given to understanding the governing psychological mechanisms, including the seeking of dish uniformity driven by the goal of regret minimization, or the seeking of dish variety driven by self-presentation~\cite{ariely2000sequential,de2013adolescents,munt2017barriers}.
Although the underlying mechanisms are not fully understood, uniformity seeking is observed across a range of studies. For example, it is observed that the quantity dimension is used to communicate gender identity, and the food-type dimension to ingratiate the co-eater's preferences by matching the other's presumed choice, following gender-based stereotypes about food \cite{cavazza2017portion}. Such social norms, including the influence of peers, have tremendous potential for understanding dietary patterns and designing public health interventions \cite{collins2019two,mollen2013healthy,robinson_blissett_higgs_2013,ROBINSON2014414}.
In our work, we monitor behaviors outside of experimental setups. While previous efforts in this area have focused on specific behaviors (\abbrevStyle{e.g.}\xspace, buying a dessert or not), having access to a multi-year history of all transactions made on a large campus allows us to observe behavioral changes for longer time periods and in a more fine-grained way, by measuring a wide set of purchasing behaviors that occur in the real world.
\subsection{The special case of children and adolescents}
A large fraction of the transactions recorded in our logs were made by students, \abbrevStyle{i.e.}\xspace, adolescents and young adults.
Focusing on similar age groups, social influence in dietary habits has been examined in the context of school children \cite{finnerty2010effects,patrick2005review,salvy2008effects,birch1980effects} and adolescents \cite{stevenson2007adolescents,DELAHAYE2010161,DELAHAYE2011719}, who are theorized to be most susceptible to social pressures. In particular, effects of peer influence have been observed in children and adolescents' diets as well as activity patterns \cite{ball2010healthy,salvy2012influence}.
Systematic reviews of social network analyses of young people's eating behaviors and body weight reveal consistent evidence that school friends are significantly similar in terms of their body mass index. Friends with the highest body mass index appear to be most similar \cite{fletcher2011you}. Prior work further reveals that the family context is essential when implementing healthy eating interventions, as parents, not friends, are the most prominent influencers of adolescents' healthy eating \cite{eurpub,pedersen2015following}.
\subsection{Contagiousness of unhealthy behavior}
Previous work has particularly been focused on unhealthy behaviors and their contagious effects, observing that obesity \cite{christakis2007spread}, overeating \cite{doi:10.1086/644611}, fast food \cite{thornton2013barriers}, high-fat \cite{FEUNEKES1998645,hermans2009effects}, and alcohol and snack consumption \cite{pachucki2011social,wouters2010peer} are contagious. In fact, the strongest evidence of social influence in food choices has been found for unhealthy behaviors (\abbrevStyle{e.g.}\xspace, snack foods) \cite{CRUWYS20153,blok2013unhealthy}. Beyond food consumption, peer influence and social norms have been observed to play a role in unhealthy weight-control behaviors among adolescent girls: self-induced vomiting, laxatives, diet pills, and fasting were all shown to be contagious among adolescent girls \cite{EISENBERG20051165}. A rich literature exists on tackling the problem of unhealthy behaviors through interventions with the goal of promoting healthy dietary habits and physical activity \cite{fjeldsoe2011systematic}, losing weight \cite{jeffery1993strengthening}, reducing the risk of chronic illnesses \cite{gittelsohn2012interventions}, and reducing food waste \cite{reynolds2019consumption}.
There is a heated debate about whether unhealthy behaviors are indeed contagious, or whether the observed similarities should instead be attributed to homophily, \abbrevStyle{i.e.}\xspace, people's tendency to form ties with others who are similar to oneself to begin with.
Disentangling social influence from homophily poses a fundamental challenge. Without strong assumptions about the structure of ties or the ability to measure confounding factors, homophily and contagion are generically confounded (\abbrevStyle{i.e.}\xspace, the effect of social influence cannot be identified) \cite{aral2009distinguishing,shalizi2011homophily,shalizi2016estimating}.
Our work attempts to minimize the effect of confounding variables in previously infeasible ways. Based on the rich transaction data, we measure a set of relevant confounding variables and carefully control for them in our quasi-experimental setup.
\subsection{Nutrition monitoring and modeling based on digital traces}
Social media has emerged as a promising source of data for studies on monitoring food consumption. For instance, it has been shown that Twitter has tremendous potential to provide insights into food choices at a population scale \cite{abbar2015you}. Researchers have also studied specific dietary issues and behaviors: reports of eating disorders \cite{hunger2016,pro_eating2016}, dietary choices, and nutritional challenges in food deserts, \abbrevStyle{i.e.}\xspace, places with poor access to healthy and affordable food \cite{de2016characterizing}. Another active area of research has focused on improving methods for monitoring food consumption, relying on mobile phones \cite{barriers_negative2015,food_journal2015} and wearable devices to recognize when eating activities take place~\cite{eating_moments}.
Recent related research has also demonstrated the value of monitoring and modeling of nutrition using other kinds of large-scale digital traces \cite{groseclose2017public}, such as grocery store purchase logs \cite{aiello2019large,buckeridge2014method}, online recipes \cite{rokicki2018impact}, logging-based smartphone applications and wearables \cite{achananuparp2016extracting,info:doi/10.2196/20625}, reviewing platforms~\cite{chorley2016pub}, search engine logs \cite{West:2013:CCI:2488388.2488510,vosen2011forecasting}, social media such as Twitter \cite{abbar2015you,mejova2016fetishizing,widener2014using} or Instagram~\cite{sharma2015measuring,ofli2017saki}, crowdsourcing platforms \cite{Howell:2016:ATP:2896338.2896358}, and geo-location signals \cite{sadilek2018machine}.
Finally, large-scale passively sensed signals have been harnessed in university campus environments to measure factors of well-being outside of nutrition \cite{barclay2013peer,madan2010social,nook2015social,sefidgar2019passively,swain2020leveraging}. Recent preliminary insights point towards the feasibility and the potential of automatically inferring social interactions from behavioral traces for campus-centric applications \cite{swain2020leveraging}.
To summarize, while large-scale digital traces are promising for monitoring and modeling nutrition, little is known about how these passively sensed behavioral signals can be used for understanding the factors that govern food consumption in campus settings. Our longitudinal study aims to bridge this gap by analyzing large-scale, long-term purchase data.
\subsection{Transaction log data}
\label{sec:Transaction log data}
This work leverages an anonymized dataset of food purchases made on the \'Ecole Polytechnique F\'ed\'erale de Lausanne (EPFL) university campus. The data spans 8 years, from 2010 to 2018, and contains about 38 million transactions, of which about 18 million were made with a badge that allows linking to an anonymized person's ID. The data includes 38.7k users, who are observed for a median period of 578 days and make a median of 188 transactions. Each transaction is labeled with the time it took place, information about the sales location (shop, restaurant, vending machine, or caf\'e), the cash register where the transaction took place, and the purchased items. Items are associated with unstructured textual descriptions (\abbrevStyle{e.g.}\xspace, ``coffee'', ``croissant'', ``Coca-Cola can'').
The unstructured textual descriptions were additionally manually mapped to categorical labels (such as ``meal'', ``drink'', or ``dessert'') by a research assistant, who labeled the 500 most frequently purchased items, which account for 95.4\% of the total volume of item purchases observed in the dataset.
The distribution of purchases across categories is shown in Figure~\ref{fig:3}.
Purchases are not evenly spread over the course of the year, but, as expected, are higher during semesters, and lower during the breaks between semesters (Figure~\ref{fig:2}, left).
This work also leverages a smaller-size enriched transactional dataset gathered during a campus-wide sustainability challenge, for which 1,031 consenting participants formed 278 teams in order to compete in taking sustainable actions (\abbrevStyle{e.g.}\xspace, taking the stairs instead of the elevator, or consuming a vegetarian meal). This data was not used for our analyses, but only for assessing the accuracy of our heuristic method for inferring frequent eating peers (described next).
\subsection{Inference of co-eating onset from proximity in transaction logs}
\label{sec:Inference of co-eating onset from proximity in transaction logs}
\begin{figure}[t]
\includegraphics[width=\textwidth]{figures/friends.pdf}
\caption{\textbf{Left:} Annual distribution of food purchases. The trends mirror the university schedule: the number of transactions drops at the end of the spring semester (around week 25), and increases again at the start of the fall semester (around week 40). A similar pattern is observed before the beginning of the spring semester (around week 10).
\textbf{Right:} Annual distribution of detected onsets of social ties. The ties emerge disproportionally often when classes start at the beginning of the fall semester (by a factor of 3.5, compared to a baseline sampled at random from the distribution of purchases).}
\label{fig:2}
\end{figure}
To measure the effect of the emergence of new social ties, we first infer frequently co-eating persons based on the proximity in the transaction logs. Frequently co-eating persons are likely to share a social tie, \abbrevStyle{i.e.}\xspace, they are persons likely to be friends, colleagues, or classmates who often eat together. Previous work has shown that such social ties can be reliably inferred from geospatial proximity~\cite{crandall2010inferring}.
To infer frequent eating peers, we scan the sequence of transactions made on the same day with a badge at a fixed cash register in a given shop. We identify situations where two individuals are adjacent in the queue, with no one between them, and make their transactions within one minute of each other. We require at least 10 such high-confidence proximity indicators to infer a likely social tie. The first appearance of proximity in the logs is then considered the onset of co-eating. We observe a spike in tie formation coinciding with the start of classes in the fall (Figure~\ref{fig:2}, right).
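The heuristic above can be sketched in a few lines of Python (a hypothetical minimal implementation; the tuple encoding of transactions and the function name are our own assumptions, not the study's actual pipeline):

```python
from collections import defaultdict
from itertools import groupby

def infer_co_eating(transactions, min_indicators=10, max_gap=60):
    """Sketch of the co-eating heuristic. Each transaction is a tuple
    (user_id, shop, register, day, seconds_of_day) (an assumed encoding).
    Two distinct users making consecutive transactions at the same register
    on the same day, within `max_gap` seconds of each other, yield one
    proximity indicator; pairs with at least `min_indicators` indicators
    are likely ties, with onset = the earliest indicator."""
    queue_key = lambda t: (t[1], t[2], t[3])             # (shop, register, day)
    ordered = sorted(transactions, key=lambda t: (t[1], t[2], t[3], t[4]))
    events = defaultdict(list)
    for _, queue in groupby(ordered, key=queue_key):
        queue = list(queue)
        for a, b in zip(queue, queue[1:]):               # adjacent: no one in between
            if a[0] != b[0] and b[4] - a[4] <= max_gap:
                pair = tuple(sorted((a[0], b[0])))
                events[pair].append((a[3], a[4]))        # (day, time) of indicator
    return {pair: {"count": len(ev), "onset": min(ev)}
            for pair, ev in events.items() if len(ev) >= min_indicators}
```

Grouping by (shop, register, day) mirrors the requirement that proximity is observed in a fixed queue; adjacency in the time-sorted group corresponds to "no one in between them".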
Furthermore, we evaluate the precision of our heuristic by comparing the inferred co-eaters with ground-truth team membership information from the sustainability challenge. We observe that team membership in the sustainability challenge, a ground-truth indicator of a social tie, is correlated with sharing an inferred tie based on the transaction logs: out of all the pairs of individuals from the sub-population taking part in the sustainability challenge who are detected as frequent eating partners, 72\% are also members of the same team.
\subsection{Inference of nutritional properties from raw transaction logs}
We infer a set of summary nutritional properties from raw transaction logs by relying on a set of pre-established criteria. We derive healthiness labels based on food-pyramid recommendations~\cite{walter2007food}. Products that should be consumed in the smallest amounts possible, \abbrevStyle{i.e.}\xspace, items at the top of the Swiss food pyramid (with high amounts of saturated fats, salt, added sugars, refined grains, and highly processed foods), were labeled ``unhealthy'' (\abbrevStyle{e.g.}\xspace, sodas, chips, candies, and chocolate bars). Products that are not at the top of the Swiss food pyramid were labeled ``healthy'' (including non-sweetened beverages, fruits, vegetables, whole grains, meat, fish, and nuts). When insufficient information was available from the name of the product, the label ``unclassifiable'' was assigned.
Two professional epidemiologists specialized in nutrition independently assessed each food item and categorized them into healthy \abbrevStyle{vs.}\xspace\ unhealthy \abbrevStyle{vs.}\xspace\ unclassifiable. The reviewers had access to the unstructured textual description of the item (e.g., ``coffee'', ``croissant'', ``Coca-Cola can''). The reviewers did not have access to any other meta-information about the items. Disagreements were resolved by a third reviewer. Labels are used to create a healthiness score of a set of purchases by averaging individual product scores, coded numerically as $1$ for healthy (25\% of items), $-1$ for unhealthy (46\% of items), and $0$ for unclassifiable (29\% of items).
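The healthiness score of a set of purchases is thus simply the mean of the per-item codes; as a minimal sketch (the label map below is illustrative, not the epidemiologists' actual annotation):

```python
# Illustrative item labels: +1 healthy, -1 unhealthy, 0 unclassifiable.
# The specific items and values here are assumed for demonstration.
ITEM_LABELS = {"apple": 1, "soda": -1, "chips": -1, "mystery item": 0}

def healthiness_score(purchases, labels=ITEM_LABELS):
    """Mean of per-item codes over a set of purchases; items missing
    from the label map are treated as unclassifiable (0)."""
    if not purchases:
        return 0.0
    return sum(labels.get(item, 0) for item in purchases) / len(purchases)
```

For example, a basket of one healthy, two unhealthy, and one unclassifiable item scores $(1 - 1 - 1 + 0)/4 = -0.25$.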
\subsection{Matched incident user design with active comparators}
\begin{figure}[t]
\includegraphics[width = \textwidth]{figures/design_incident.pdf}
\caption{Study design diagrams. We illustrate three potential observational study designs to estimate the effect of eating with other persons on food choices, (a) \textit{cross-sectional design}, (b) \textit{incident user design}, and (c) \textit{incident user design with active comparators}. At different points in time, a person either does not have a regular eating partner (marked in gray), or she does (marked in green, red, or blue). A cross-sectional design observes food consumption at the start of the monitored period, a fixed time $t_0$, which is the same across all participants. Incident user design isolates the effect of the onset of co-eating with another person on subsequent food consumption. In incident user design, time is tracked relative to the moment of onset $t_0$, which may be different across participants. The active comparator design additionally allows for comparisons of the effect of onset among persons who all start to eat with someone, but their partners have different characteristics (marked in red and blue). The present paper is based on an incident user design with active comparators (presented in more detail in Figure~\ref{fig:diagram2}).}
\label{fig:1}
\end{figure}
Recall that we are interested in determining whether and how eating with others impacts the nature of food consumption. As depicted in Figure~\ref{fig:1}(a), a na\"ive approach to answering those questions would be a \textit{cross-sectional design}: at any given absolute point in time, some people are regularly eating with their peers (indicated with green) while others are not (indicated with gray). At a fixed absolute point in time $t_0$, one could compare what is consumed by persons who do not have a regular eating partner with what is consumed by persons who do. One could also compare the food consumed by persons whose regular eating partners have different habits.
The problem with this setup is that persons who do not currently eat with others might have done so in the past (\abbrevStyle{e.g.}\xspace, Person~1 in Figure~\ref{fig:1}(a)), while those who do eat with others might have been doing so for a long time or might have only just started. Also, some people stop eating with others, whereas other people continue. It could be that those who stop do so because they prefer the diet they follow when eating alone (\abbrevStyle{i.e.}\xspace, selection bias). Additionally, people who eat with others might differ in fundamental ways from those who do not.
For these reasons, looking at everyone at the same moment in a cross-sectional way can be problematic. To overcome these challenges, we can turn to an \textit{incident user design} (Figure~\ref{fig:1}(b)), which restricts the population to those people who newly initiate the treatment (starting to eat together with another person). The question becomes: among people with no history of eating regularly with someone, what is the causal effect on food consumption of starting to eat with a peer? In this way, we isolate the causal effect of initiation. We accordingly restrict the observed population to persons with no prior history of eating with someone. Note how Person~1 in Figure~\ref{fig:1}(b) starts eating together with a regular partner, but after a while no longer has one. This is not an issue, because we are interested in the effect of the onset.
\begin{figure}[t]
\includegraphics[width=0.8\textwidth]{figures/diagramv2.pdf}
\caption{The matched incident user design with active comparators on which the present study is based. We identify comparison pairs of focal persons 1 and 2, who are indistinguishable in the pre-treatment period and have no regular eating partners, until the moment of co-eating onset, when they each acquire a regular eating partner. Focal person 1 starts regularly eating with a healthy-eating partner, while focal person 2 starts regularly eating with an unhealthy-eating partner. The comparison pair of focal users is then observed in the pre-treatment period (no regular eating partner) and post-treatment period (regular eating partner). The effect of the co-eating onset is estimated using a difference-in-differences analysis.}
\label{fig:diagram2}
\end{figure}
As opposed to the cross-sectional design, where time is absolute, the incident user design offers the flexibility of tracking time relative to an onset $t_0$ that may be different for different participants. Although this design allows us to compare different treatments, the problem with this setup, which persists from the above-described cross-sectional design, is that, if the comparison group is ``no treatment'' (\abbrevStyle{i.e.}\xspace, no initiation of co-eating), it is not apparent when the follow-up should start for the ``no treatment'' group. Additionally, selection bias remains and is not accounted for, as people who do not initiate might in other fundamental ways differ from those who do initiate.
Our study design addresses these challenges by implementing a variant of incident user design, \textit{incident user design with active comparators} (Figure~\ref{fig:1}(c)). Here, before initiation, no user included in the study had a regular eating partner (\abbrevStyle{i.e.}\xspace, was treatment-free). We compare the effect of starting to eat with partners who have different habits among persons who all start eating with someone (illustrated with blue and red in Figure~\ref{fig:1}(c)). Active comparator designs tend to involve significantly less confounding \cite{yoshida2015active,lund2015active,johnson2013incident}, as people who eat with different kinds of others are more alike among themselves than when compared to people who do not have regular eating partners.
Our study design is illustrated in more detail in Figure~\ref{fig:diagram2}. We identify persons (referred to as \textit{focal persons}) who had no regular eating partners and, at a moment $t_0$ specific to that focal person, initiate eating with someone (referred to as \textit{eating partner}).
Here, as defined in Section~\ref{sec:Inference of co-eating onset from proximity in transaction logs}, a person qualifies as a focal person's potential eating partner if the two were observed making subsequent purchases in the same queue within one minute of one another on at least 10 occasions in the entire dataset,
and the onset of co-eating is defined as the first one of these occasions.
We then isolate pre-treatment and post-treatment periods of the focal person's food purchases, comprising all transactions made six months before the first purchase together (moment $t_0$) and six months after, respectively. We ensure that the focal person does not initiate eating with anyone else in the pre- and post-treatment six months. The length of the pre-treatment period is chosen such that an individual can reasonably be expected to be present on campus, given the typical length of stay observed in the logs (the 12 months of pre- and post-treatment observation together correspond to one school year).
\begin{figure}[t]
\includegraphics[width=0.7\textwidth]{figures/seasonality.pdf}
\caption{Daily fraction of purchases annotated as potentially unhealthy, tracked over five years. A seasonal pattern emerges. Drops in the daily fraction of unhealthy purchases coincide with between-semester breaks.}
\label{fig:4}
\end{figure}
Some persons initiate co-eating with a person who has a positive healthiness score in the aligned pre-treatment period. In contrast, some initiate co-eating with a partner who has a negative score. These are the two groups that we seek to compare (we refer to the two types of partners as \textit{healthy-eating partner} and \textit{unhealthy-eating partner}).
For a focal person who starts to eat together with a partner who has a healthy dietary pattern, an \textit{active comparator} (or \textit{counterpart}) will be another focal person who starts to eat together with a partner who has an unhealthy dietary pattern. The two counterparts must have started eating with their respective partners in the same month.
This is done in order to control for temporal confounds that might arise from a seasonal variation of food popularity: as seen in Figure~\ref{fig:4}, unhealthy foods are especially popular at certain times of the year.
The healthiness of the partner's dietary pattern is determined according to its numeric value (greater or less than zero), and not relative to the focal person.
Comparing incident users with active comparators is an important step towards reducing the impact of biases. However, in the assignment of the type of treatment, there can still be confounding. For example, it might be the case that only people who already have healthy habits start eating together with a partner who has healthy habits, due to a preference for similar others. The influence of the partners would then be indistinguishable from the impact of selection biases caused by homophily.
Hence, we turn to a \textit{matched} incident user design with active comparators. We introduce an improvement over the previously discussed setup, where the incident users are matched to the potential active comparators while additionally controlling for pre-treatment covariates. Our goal here is to balance potential confounding variables within pairs, to be able to observe how the onset of co-eating with partners with different dieting patterns is associated with subsequent changes in the focal person's dieting pattern. We achieve this by performing a propensity-score-based causal analysis. We approximate randomized treatment assignment by modeling the propensity to experience the assigned intervention, relying on a number of pre-treatment covariates describing the focal persons' eating profiles. Due to the balancing property of propensity scores \cite{10.1093/biomet/70.1.41}, matching on propensities results in similar covariate distributions between groups that differ in their assigned interventions.
The covariates capture important dimensions of the pre-treatment dietary pattern of the focal person:
\textit{where} the food is purchased (what is the shop where the person most frequently buys food),
\textit{when} the food is purchased (what is the fraction of items occurring during lunchtime),
\textit{what} types of items are purchased (what fraction of purchased items are meals, and what is their estimated healthiness),
and \textit{how often} the person purchases food on campus (number of transactions).
We measure these confounding covariates up to time $t_0$.
\begin{figure}[t]
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/fig_matching1.pdf}
\caption{Distribution (before matching) of propensity to start eating with a healthy-eating partner.}
\label{fig:7a}
\end{minipage}
\hspace{.05\textwidth}
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/fig_matching2.pdf}
\caption{Importance of most indicative features for predicting treatment assignment, \abbrevStyle{i.e.}\xspace, initiation of eating with a healthy- \abbrevStyle{vs.}\xspace\ unhealthy\hyp eating partner (shop names anonymized). Most important feature: pre-treatment healthiness score of focal person's purchases, which indicates homophily.}
\label{fig:7b}
\end{minipage}
\end{figure}
We use a random forest model that predicts the type of treatment based on pre-treatment covariates of the focal person (area under the ROC curve: 0.87). This implies that past purchases allow us to accurately predict whether the tie will be formed with a healthy- or an unhealthy-eating partner, and that confounding is a real problem that needs to be addressed.
The distribution of the propensity to start eating with a partner who has a high healthiness score is presented in Figure~\ref{fig:7a}. We also examine the feature importances in predicting the treatment assigned, \abbrevStyle{i.e.}\xspace, the initiation of eating with a partner who has a healthy or unhealthy eating pattern (Figure~\ref{fig:7b}). We observe that the focal person's pre-treatment healthiness score is in fact the most important predictor of the type of partner the focal person will start to eat with, pointing at homophily.
Focal persons in the two sets are then matched while ensuring that two potential matches have propensity scores (likelihoods of receiving the treatment) within a caliper of 0.1. The size of the caliper was chosen so that balance in covariates is achieved. Moreover, an exact match on the sign of the mean pre-treatment healthiness score and on the most frequented shop is required to achieve tight control. We then create matched pairs by performing maximum-weight matching on a weighted bipartite graph whose nodes are focal persons and whose edge weights encode similarity based on the Mahalanobis distance between covariate vectors; maximizing the total similarity yields the final matching.
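The caliper, exact-match, and maximum-weight-matching steps can be sketched as follows, assuming propensity scores have already been estimated (the data layout and the use of SciPy's assignment solver are illustrative choices on our part; the paper does not prescribe an implementation):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import mahalanobis

def match_pairs(treated, control, caliper=0.1):
    """Sketch of the matching step. `treated` and `control` are lists of
    dicts with assumed keys 'propensity', 'shop', 'healthy_sign' (sign of
    the mean pre-treatment healthiness score), and 'x' (covariate vector).
    Admissible pairs agree exactly on shop and healthiness sign and lie
    within the propensity caliper; among admissible pairs, total
    Mahalanobis distance is minimized (i.e., total similarity maximized)."""
    X = np.array([p["x"] for p in treated + control], dtype=float)
    VI = np.linalg.pinv(np.cov(X.T))      # (pseudo-)inverse covariance
    FORBIDDEN = 1e9                       # huge cost for inadmissible pairs
    cost = np.full((len(treated), len(control)), FORBIDDEN)
    for i, t in enumerate(treated):
        for j, c in enumerate(control):
            if (abs(t["propensity"] - c["propensity"]) <= caliper
                    and t["shop"] == c["shop"]
                    and t["healthy_sign"] == c["healthy_sign"]):
                cost[i, j] = mahalanobis(t["x"], c["x"], VI)
    # The assignment solver minimizes total cost; since FORBIDDEN dwarfs any
    # sum of real distances, it keeps as many admissible pairs as possible.
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < FORBIDDEN]
```

Inadmissible pairs are excluded by assigning them a prohibitive cost, so the final filtered result is a maximum-cardinality matching that is most similar in covariates.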
\begin{figure}[t]
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{figures/fig_matching3.pdf}
\caption{Histogram of the number of high-confidence indicators of eating together with their respective partners, for matched focal persons.}
\label{fig:5}
\end{minipage}
\hspace{.05\textwidth}
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{figures/fig_matching4.pdf}
\caption{Histogram of the partner's pre-treatment healthiness score, across matched persons. Orange bars correspond to healthy-eating partners, blue bars to unhealthy-eating partners. A margin of 0.1 is ensured to differentiate the treatments.}
\label{fig:6}
\end{minipage}
\end{figure}
The result is a set of matched pairs of focal persons, indistinguishable up to the moment of initiation, who initiated co-eating with partners with different dietary patterns in the same month. This approach yielded 415 matched pairs, comprising 830 focal persons who started to eat with different types of partners. We require at least 10 high-confidence indicators of eating together with the partner (Figure~\ref{fig:5}). The distribution of partners' pre-treatment healthiness scores is shown in Figure~\ref{fig:6}.
Our matched analysis then moves on to comparing focal people who initiate co-eating with a person with a healthy dieting pattern, to their counterparts who have the same dieting patterns up to the moment of initiation, but initiate co-eating with a partner who has an unhealthy dieting pattern. The post-treatment patterns are then compared across treatments within the matched population.
\begin{table}[b]
\small
\caption{To ensure that matched persons are comparable, we evaluate the balance of their pre-treatment covariates, via the standardized mean difference (SMD) across covariates in the two matched groups.}
\label{tab:1}
\begin{tabular}{l|c|c}
\textbf{Pre-treatment covariate} & \textbf{SMD before matching} & \textbf{SMD after matching} \\
\hline
Preferred shop (\abbrevStyle{i.e.}\xspace, where the largest fraction of & \multicolumn{2}{c}{exact match required} \\
pre-treatment transactions is made) & \multicolumn{2}{c}{} \\
\hline
Pre-treatment percentage of lunchtime transactions & 0.109 & 0.045\\
\hline
Pre-treatment percentage of meal transactions & 0.207 & 0.075\\
\hline
Pre-treatment mean healthiness score & 0.301 & 0.023\\
\hline
Pre-treatment mean weekly number of transactions & 0.071 & 0.023 \\
\end{tabular}
\end{table}
Before moving on to the analysis of the outcomes, we ensure that the matched persons are comparable by measuring the balance of their pre-treatment covariates (Table~\ref{tab:1}). We use the standardized mean difference (SMD) across covariates in the two groups to measure the balance. We observe that matching greatly reduces the SMD, as the largest SMD across covariates (the one of the pre-treatment healthiness score of the focal person) changes from 0.301 before matching, to 0.023 after matching. Groups are considered balanced if all covariates have SMD lower than 0.2~\cite{kiciman2018using}, a criterion that is satisfied here.
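The balance diagnostic can be computed as follows (using the common pooled-standard-deviation convention for the SMD, an assumption on our part, as the paper does not spell out the denominator):

```python
import numpy as np

def standardized_mean_difference(a, b):
    """SMD between two samples of one covariate: absolute difference in
    means divided by the pooled standard deviation of the two groups."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return abs(a.mean() - b.mean()) / pooled_sd
```

Computed per covariate for the two treatment groups, before and after matching, this yields the entries of Table~\ref{tab:1}.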
\subsection{Regression analysis of pooled data}
\begin{figure}[t]
\begin{minipage}{\textwidth}
\centering
\includegraphics[width = 0.8\textwidth]{figures/rq1.pdf}
\caption{The effect of the focal person's pre-treatment covariates, the partner's pre-treatment healthiness score, and the number of detected high-confidence indicators of eating together on the focal person's post-treatment healthiness of purchased items. The effects are estimated with linear regression ($R^{2}=0.194$); 95\% confidence intervals are approximated as two standard errors. Significant coefficients ($p<0.05$) are marked with an asterisk (*). The focal person's own healthiness score and their eating partner's healthiness score are the only two statistically significant factors associated with the focal person's post-treatment healthiness score.}
\label{fig:8}
\end{minipage}
\end{figure}
First, we aim to determine if there are any significant differences between the outcomes of the matched focal persons. Is the pre-treatment healthiness of the partner predictive of post-treatment healthiness of the focal person? We start by performing a regression estimation of the effect of the partner's pre-treatment healthiness score on the focal person's post-treatment score.
We fit a model where the focal person's post-treatment healthiness score is the dependent variable. We include the focal person's pre-treatment healthiness score, the partner's pre-treatment healthiness score, the number of high-confidence indicators of eating together, and the focal person's pre-treatment covariates as the independent variables. The focal person's pre-treatment covariates (number of transactions, percentage of transactions that are meals, percentage of lunchtime transactions, and the pre-treatment healthiness score) are already controlled for by matching, but they are included in the model to account for possible residual confounding. The predictors and the outcome are standardized, so the coefficients are interpreted as increases in the healthiness score per standard deviation of the predictor.
Fitting the linear regression (Figure~\ref{fig:8}), we measure a significant positive effect of 0.13 (95\% CI $[0.07, 0.19]$) of the partner's pre-treatment healthiness score. The focal person's own pre-treatment healthiness is the strongest predictor of post-treatment healthiness (coefficient 0.43, 95\% CI $[0.36, 0.49]$). This is a first indication that the partner's pre-treatment score is associated with the focal person's subsequent dietary pattern.
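Standardized coefficients of the kind reported above arise from z-scoring the outcome and the predictors before fitting; a minimal sketch with illustrative data (not the study's regression):

```python
import numpy as np

def zscore(v):
    """Standardize a variable to zero mean and unit variance."""
    v = np.asarray(v, float)
    return (v - v.mean()) / v.std()

def standardized_ols(y, *predictors):
    """OLS on standardized variables: each returned coefficient is the
    change in the outcome, in standard deviations, per one-standard-
    deviation change in the corresponding predictor."""
    X = np.column_stack([np.ones(len(y))] + [zscore(p) for p in predictors])
    coef, *_ = np.linalg.lstsq(X, zscore(y), rcond=None)
    return coef[1:]   # drop the intercept, which is ~0 after standardizing
```

A perfectly correlated predictor thus receives a standardized coefficient of 1, and weaker associations scale down accordingly.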
\subsection{Contingency-table analysis}
Next, to obtain fine-grained insights about patterns taking place at the pair level, we analyze the outcomes with a contingency table.
We binarize the outcome to register either an increase or no increase post-treatment, compared to pre-treatment. Four possibilities exist for any matched pair: both focal persons increased, only the one with the healthy-eating partner increased, only the one with the unhealthy-eating partner increased, or neither increased. The contingency table in Table~\ref{tab:2} counts the frequency of these four outcomes. Using a chi-squared test, we reject the null hypothesis of no treatment effect ($p = 0.00017$).
It is particularly informative to observe the discordant pairs (off-diagonal entries in the contingency table) among the matched pairs. Such pairs correspond to situations when the outcome (increase or no increase) differs in the matched pair. The intuition is the following: if there is no effect, the two types of discordant entries should be balanced. However, we observe that in 103 pairs, the focal person with a positive intervention increased, and the focal person with a negative intervention did not. The reversed situation, in comparison, occurs in 67 pairs. We test the null hypothesis of no effect in a paired randomized experiment using McNemar's test \cite{lachenbruch1998assessing}, which relies directly on the evidence that comes from the discordant pairs (their number and the ratio between them). Here, too, we reject the null hypothesis of no treatment effect ($p = 0.007$).
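Both tests can be reproduced directly from the counts in Table~\ref{tab:2} using SciPy (the continuity-corrected form of McNemar's statistic is an assumption on our part):

```python
import numpy as np
from scipy.stats import chi2, chi2_contingency

# Counts from the contingency table: rows = focal person with an
# unhealthy-eating partner (increase / no increase), columns = matched
# focal person with a healthy-eating partner.
table = np.array([[126,  67],
                  [103, 119]])

# Chi-squared test of independence (Yates-corrected by default for 2x2).
chi2_stat, p_chi2, _, _ = chi2_contingency(table)

# McNemar's test uses only the discordant pairs (67 and 103); here with
# a continuity correction on the chi-squared statistic.
b, c = table[0, 1], table[1, 0]
mcnemar_stat = (abs(b - c) - 1) ** 2 / (b + c)
p_mcnemar = chi2.sf(mcnemar_stat, df=1)
```

Both $p$-values fall below conventional significance thresholds, matching the reported rejections of the null hypothesis.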
\begin{table}[t]
\small
\caption{Contingency table counting number of pairs of matched focal persons in each condition. Post-treatment healthiness score is compared to pre-treatment score to determine if there was an increase; in columns, for focal persons who start to eat with healthy-eating partners, and in rows, for their matched counterparts, \abbrevStyle{i.e.}\xspace, focal persons who start to eat with unhealthy-eating partners.}
\label{tab:2}
\begin{tabular}{l r c c | c}
\multicolumn{2}{c}{} & \multicolumn{2}{l|}{\textit{Focal person with a }} & \\
\multicolumn{2}{c}{} & \multicolumn{2}{l|}{\textit{healthy-eating partner}} & \\
\multicolumn{2}{c}{} & \multicolumn{1}{c}{\textbf{Increase} } & \textbf{No increase} & \textbf{Total pairs} \\
\textit{Focal person with an } & \textbf{Increase} & 126 & 67 & 193 \\
\textit{unhealthy-eating partner} & \textbf{No increase} & 103 & 119 & 222 \\
\hline
\multicolumn{2}{r}{\textbf{Total pairs}} & 229& 186& \textbf{415} \\
\end{tabular}
\end{table}
\subsection{Difference-in-differences analysis}
We move on to further exploit the matched setup in order to estimate the difference\hyp in\hyp differences~\cite{lechner2011estimation} effect for pairs of matched focal persons.
The idea is to first calculate the difference between post-treatment and pre-treatment healthiness scores for each focal person separately.
Then, we can calculate the difference in treatment effects between two matched focal persons in each pair.
Averaging the differences in differences across all pairs yields the overall treatment effect.
\xhdr{Regression model}
In practice, following the standard approach, we estimate the difference-in-differences effect via a regression model.
Here, each focal user contributes two data points (one pre-treatment, one post-treatment), each of which specifies, as predictors, the treatment (\abbrevStyle{i.e.}\xspace, the type of partner with whom the focal user started to eat: healthy- or unhealthy-eating) and the time period (pre- or post-treatment) and, as the outcome, the healthiness score of the focal user's food choices during the respective period.
Each matched pair thus contributes four data points, and the modeled dataset consists of $4 \cdot 415 = 1{,}660$ data points.
Formally, the model takes the following form:
\begin{equation}
y_{it} = \alpha + \beta \cdot \text{healthy\_treatment}_i + \gamma \cdot \text{treated}_t + \delta \cdot (\text{healthy\_treatment}_i \cdot \text{treated}_t) + \text{error}_{it},
\label{eqn:formula_overall}
\end{equation}
where the dependent variable $y_{it}$ is the focal user $i$'s healthiness score in period $t$,
and the independent variables indicate whether $i$'s partner has a positive or negative pre-treatment healthiness score ($\text{healthy\_treatment}_i$ = 1 or 0, respectively)
and whether the respective data point captures the post- or pre-treatment period ($\text{treated}_t$ = 1 or 0, respectively).
The coefficient $\delta$ of the interaction term, then, is the difference\hyp in\hyp differences effect of starting to eat with a healthy- \abbrevStyle{vs.}\xspace\ unhealthy\hyp eating partner.
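The coefficients of \Eqnref{eqn:formula_overall} can be estimated by ordinary least squares; a minimal sketch (the usage below employs synthetic data, not the study's):

```python
import numpy as np

def did_coefficients(y, healthy_treatment, treated):
    """Fit y = alpha + beta*healthy + gamma*treated + delta*(healthy*treated)
    by ordinary least squares; returns (alpha, beta, gamma, delta), where
    delta is the difference-in-differences effect."""
    h = np.asarray(healthy_treatment, float)
    t = np.asarray(treated, float)
    X = np.column_stack([np.ones_like(h), h, t, h * t])
    coef, *_ = np.linalg.lstsq(X, np.asarray(y, float), rcond=None)
    return coef
```

With the study's data, each of the 830 focal persons would contribute one pre- and one post-treatment row, and $\delta$ is read off as the fourth coefficient.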
\xhdr{Results}
Calculating the average difference-in-differences effect with a linear regression across all matched focal persons, we observe a larger post-treatment increase in focal persons with healthy-eating partners compared to the post-treatment increase in matched counterparts, $\delta=0.051$ (95\%~CI $[0.021,0.076]$, $R^2=0.07$). This means that, accounting for possible temporal drifts between post-treatment and pre-treatment that are not associated with the initiation, focal persons starting to eat with a healthy-eating partner significantly diverge from their matched counterparts starting to eat with an unhealthy-eating partner.
Quantitatively, the effect size of $\delta=0.051$ means that, compared to matched counterparts who start eating with an unhealthy\hyp eating partner, focal persons who start eating with a healthy\hyp eating partner increase their healthiness score by an additional 5.1\% of the full range spanning from a neutral healthiness score~(\abbrevStyle{i.e.}\xspace, 0) to the maximum healthiness score~(\abbrevStyle{i.e.}\xspace,~1).
Similarly, to estimate the effect of social tie formation on the absolute numbers of healthy and unhealthy purchased items, we repeat the regression analysis described in \Eqnref{eqn:formula_overall},
but now with different dependent variables that capture the \textit{total number of healthy and the total number of unhealthy items purchased} by focal user $i$ in period $t$. We observe that, in the six months following the tie formation, the focal persons who start to eat with a partner with a healthy pattern purchase an additional 5.71 (95\% CI $[3.21, 8.21]$, $R^2=0.17$) healthy items and 1.13 fewer (95\% CI $[-3.04, 0.78]$, $R^2=0.12$) unhealthy items, compared to their matched counterparts.
\xhdr{Sensitivity analysis}
The above finding relies on the assumption that no unobserved variables create differences between the matched focal persons that could explain away the measured effect.
Sensitivity analysis quantifies how the results of our calculations would change if this assumption were violated to a limited extent.
If the conclusions of the study would change little, the study is insensitive to a violation of the assumption, up to the specified extent.
In contrast, if the conclusions would change substantially, the study is sensitive to a violation of the assumption.
The key assumption made in our analysis is that the treatment assignment is not biased, or in other words, that after balancing the pre-treatment covariates, the co-eating initiation with a healthy-eating \abbrevStyle{vs.}\xspace\ an unhealthy-eating partner is randomized (\abbrevStyle{i.e.}\xspace, it is effectively decided by a coin flip). We measure by how much that assumption needs to be violated in order to alter our conclusion that there is a significant difference\hyp in\hyp differences effect on the healthiness of purchased items among the matched focal persons.
Specifically, sensitivity analysis lets us answer the following question: if there is a violation of randomized treatment assignment (\abbrevStyle{i.e.}\xspace, a deviation from a fair 50/50 coin flip), how large would it need to be in order to alter the conclusion that the null hypothesis of no difference between focal persons can be rejected?
This notion is quantified by the \textit{sensitivity}~$\Gamma$, which specifies the ratio by which the treatment odds of two matched persons would need to differ in order to result in a $p$-value above the significance threshold.
We always have $\Gamma \geq 1$, with larger values of $\Gamma$ corresponding to more robust conclusions.
For the chosen $p =0.05$, we measure a sensitivity of $\Gamma = 1.17$,
which implies that, within matched pairs,
an individual's probability of being the treated one could take on any value between
$1/(1+\Gamma) = 0.46$
and
$\Gamma/(1+\Gamma) = 0.54$
without changing our decision of rejecting the null hypothesis of no effect.
In other words, if the treatment assignment after matching did not approximate the ideal 0.5, but a third variable made some people more likely to initiate eating with a healthy-eating or an unhealthy-eating partner,
that variable would have to shift the within-pair treatment probability away from the fair~0.5 by at least four percentage points in order to alter our conclusion.
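The probability bounds quoted above follow directly from $\Gamma$; a one-line check with the value from the text:

```python
# Rosenbaum-style sensitivity: Gamma bounds the within-pair treatment probability.
gamma = 1.17  # sensitivity measured at p = 0.05

lower = 1 / (1 + gamma)       # smallest admissible treatment probability
upper = gamma / (1 + gamma)   # largest admissible treatment probability

print(round(lower, 2), round(upper, 2))  # 0.46 0.54
```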
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.4\textwidth]{figures/rq1sens_same_scale.pdf}
\vspace{-2mm}
\caption{Sensitivity analysis. For the sensitivity $\Gamma = 1.17$, the amplification $(\Lambda,\Delta)$ is plotted (see text for explanation). Horizontal and vertical dashed lines indicate $\Gamma$, i.e., the asymptotic value of $\Lambda$ for $\Delta \rightarrow \infty$, and vice versa.}
\label{fig:13}
\end{center}
\end{figure}
Additionally, we conduct an amplification of the sensitivity analysis \cite{rosenbaum2009amplification}. Amplification is particularly relevant when the concern is not about a violation of the randomized treatment assignment, but rather about the potential existence of a specific unobserved covariate. It then becomes useful to consider possible combinations of $\Lambda$ and $\Delta$, two parameters describing the unobserved covariate, that would result in the measured $\Gamma$. The strength of the relationship between the unobserved covariate and the difference in outcomes within the matched pair is defined by $\Delta$, whereas $\Lambda$ defines the strength of the relationship between the unobserved covariate and the difference in probability of being assigned a treatment.
With these definitions, the
sensitivity $\Gamma$ can be expressed in terms of $\Lambda$ and $\Delta$, as $\Gamma = (\Lambda \Delta + 1) / (\Lambda + \Delta)$.
The result of sensitivity analysis amplification is presented in Figure~\ref{fig:13}. For combinations of $\Lambda $ and $\Delta$ in the orange area, significant effects would be detected (leading to $p <0.05$), whereas for the combinations in the blue area, no significant effects would be detected (leading to $p >0.05$). An infinite number of $(\Lambda,\Delta)$ combinations fall on the border; \abbrevStyle{e.g.}\xspace, $(\Lambda, \Delta) = (2.0,1.6)$ corresponds to an unobserved covariate that doubles the odds of treatment and multiplies the odds of a positive pair difference in the outcomes by 1.6.
Overall, we conclude that the study design is insensitive to small biases \cite{rosenbaum2017observation}.
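The amplification formula can be checked numerically; the sketch below (illustrative only) reproduces the boundary point quoted above:

```python
# Amplification: an unobserved covariate with treatment-odds ratio lam and
# outcome-odds ratio delta corresponds to sensitivity (lam*delta + 1)/(lam + delta).

def gamma_of(lam, delta):
    return (lam * delta + 1) / (lam + delta)

# Boundary point quoted in the text:
print(round(gamma_of(2.0, 1.6), 2))  # 1.17

# A covariate unrelated to treatment assignment (lam = 1) yields Gamma = 1, i.e. no bias:
assert gamma_of(1.0, 5.0) == 1.0
```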
\begin{figure}[t]
\begin{minipage}[t]{.45\textwidth}
\centering
\includegraphics[width = \textwidth]{figures/doseresponseb.pdf}
\subcaption{}\label{fig:dosea}
\end{minipage}
\hspace{0.04\textwidth}
\begin{minipage}[t]{.45\textwidth}
\centering
\includegraphics[width =\textwidth]{figures/doseresponsea.pdf}
\subcaption{}\label{fig:doseb}
\end{minipage}
\hfill
\vspace{-3mm}
\caption{
Dose--response relationship.
\textbf{(a)}~Histogram of pre-treatment differences in healthiness scores between partners of paired focal users.
\textbf{(b)}~Difference\hyp in\hyp differences effect between focal persons in matched pairs, stratified by the pre-treatment differences in healthiness scores between the partners they were exposed to.
}
\end{figure}
\subsection{Dose--response relationship}
Next, we analyze the dose--response relationship in our matched setup. Similar focal persons initiate eating with differing partners, and we observe systematic changes in the dieting patterns of the focal persons after the tie formation. But do larger differences between partners produce stronger difference\hyp in\hyp differences effects?
In the case of a true causal effect, one would expect a dose--response effect where focal persons diverge more post-treatment if their partners diverged more pre-treatment.
Although large differences in the pre-treatment scores between matched focal persons' partners are rare (Figure~\ref{fig:dosea}), Figure~\ref{fig:doseb} shows evidence of a dose--response relationship:
the difference\hyp in\hyp differences effect is stronger when the partners are more different (\abbrevStyle{i.e.}\xspace, the more extreme difference in partners leads to more extreme effect estimates). If there were other confounding factors that could explain the observed difference\hyp in\hyp differences effects, and those factors had nothing to do with the onset of eating together, we would not expect to find a dose--response relationship.
The observed dose--response relationship thus further supports the conclusion of a causal effect.
\begin{figure}[b]
\begin{minipage}{.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/rq1_1.pdf}
\end{minipage}
\begin{minipage}{.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/rq1_2.pdf}
\end{minipage}
\begin{minipage}{.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/rq1_3.pdf}
\end{minipage}
\caption{\new{Post-treatment increase in healthiness score, stratified by pre-treatment healthiness score of focal person (with 95\% bootstrapped confidence intervals).} The difference is shown separately for focal persons who start eating with a person with a positive (orange) \abbrevStyle{vs.}\xspace\ negative (blue) healthiness score. The difference is monitored in the first 3, 6, and 12 post-treatment months (left, center, and right panel, respectively).}
\label{fig:10}
\end{figure}
\subsection{Stratification by pre-treatment healthiness}
Additionally, we would like to understand for whom the treatment is effective. Are there changes across the board with respect to the initial healthiness, or only for specific sub-populations? For whom is the intervention most effective? We again monitor the differences between post- and pre-treatment healthiness scores, but now stratified into quartiles by pre-treatment healthiness score of the focal person (Figure~\ref{fig:10}).
\new{Moreover, we repeat this analysis for post-treatment observation periods of varying length (3, 6, and 12 post-treatment months).} In the aligned, post-intervention period, persons who start eating with partners with healthy dieting patterns are characterized by consistently higher healthiness scores compared to the matched counterparts, across strata of the focal person's pre-treatment healthiness score. Note that the fact that the slopes are decreasing may simply reflect regression to the mean. \new{The key observation is that, within each stratum, when comparing the outcomes in orange and blue, people who initiate eating with a healthy-eating partner (orange) see a greater post- \abbrevStyle{vs.}\xspace\ pre-treatment difference compared to people who initiate eating with an unhealthy-eating partner (blue).}
\subsection{Analysis of affected food-item categories}
\begin{figure}[t]
\begin{minipage}{\textwidth}
\centering
\includegraphics[width=0.7\textwidth]{figures/reg_rq2_2.pdf}
\caption{Estimated difference\hyp in\hyp differences effects of co-eating onset with healthy- \abbrevStyle{vs.}\xspace\ unhealthy-eating partner on frequency of purchased food categories (with 95\% confidence intervals approximated as plus\slash minus two standard errors). Categories with a significant effect are marked in orange (positive) and blue (negative), whereas categories with no significant effect are marked in gray.}
\label{fig:12}
\end{minipage}
\end{figure}
Finally, we set out to understand the influence of new co-eating partners on the rates at which categories of food items are subsequently purchased. Having observed that behaviors change, we now ask: which items are eaten more, and which less? Which foods purchased and eaten in group settings on campus have the largest influence on others?
To estimate category\hyp specific difference\hyp in\hyp differences effects, we repeat the regression analysis described in \Eqnref{eqn:formula_overall}, but now with a different dependent variable $y_{cit}$, which captures the number of items from food category $c$ purchased by focal user $i$ in period $t$.
By fitting a separate regression for each food category $c$, we obtain category-specific effects $\delta_c$.
The estimated effects $\delta_c$, together with 95\% confidence intervals, are presented in Figure~\ref{fig:12}. We observe that the focal persons who start eating with a healthy-eating partner purchase more coffee, lunch meals, coffee from vending machines, soup, fruit, dessert, tea, salad, and wraps, compared to their matched counterparts, who purchase more soft drinks, drinks from vending machines, water, condiments, pizza, kebabs, and cr\^epes. The values on the $x$-axis can be interpreted as the number of purchased items by which the matched focal persons diverge in the post-treatment period. For example, in the six months following tie formation, people who start eating with healthy-eating partners purchase, on average, around two additional meals and around four additional coffees, compared to the matched counterparts. The matched counterparts who start eating with unhealthy-eating partners, by contrast, on average purchase around one additional soft drink in the six months following tie formation.
Coffees and lunch meals are the items that see the largest increase after tie formation with a healthy-eating partner. These items are in general purchased in large numbers (Figure~\ref{fig:2}). Conversely, the items with the strongest effect among the matched counterparts, with the exception of water, loosely form a cluster of potentially unhealthy items that should not be eaten in large quantities. The remaining items with a significant positive effect (soups, fruits, desserts, tea, salad buffet, and wraps) are overall less indicative of an unhealthy dietary pattern.
\section{Introduction}
\label{sec:intro}
\input{001intro.tex}
\section{Related work}
\label{sec:related}
\input{002rel.tex}
\section{Materials and methods}
\label{sec:matmet}
\input{003methods.tex}
\section{Results}
\label{sec:results}
\input{004results.tex}
\section{Discussion and conclusions}
\label{sec:discussion}
\input{005diss.tex}
\begin{acks}
We would like to thank
Nils Rinaldi, Aurore Nembrini, and Philippe Vollichard for their help in obtaining and anonymizing the data.
We are also grateful to Jonas Racine and Kiran Garimella for early help with data engineering,
and to our reviewers for their helpful suggestions.
This project was funded by the Microsoft Swiss Joint Research Center.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
Intelligent reflecting surface (IRS) has been recently proposed as a cost-effective technology to improve the spectral efficiency and energy efficiency of future wireless networks \cite{JR:wu2019IRSmaga}. Specifically, by smartly adjusting the phase shifts of a large number of reflecting elements, IRS can reconfigure the wireless propagation environment to achieve different design objectives, such as signal focusing and interference suppression. {There has been an upsurge of interest in investigating joint active and passive beamforming for various system setups \cite{rajatheva2020white,xu2020resource,guan2019intelligent,zou2020wireless,ding2020simple,JR:wu2019discreteIRS,JR:wu2018IRS,huang2018achievable,huang2019reconfigurable,di2020hybrid}.
In particular, the fundamental \emph{squared power gain} of IRS was unveiled in \cite{JR:wu2018IRS}.}
{In \cite{huang2019reconfigurable} and \cite{di2020hybrid}, energy efficiency and achievable rate maximization problems were studied by considering continuous and discrete IRS phase shifts, respectively.}
\begin{figure}[!t]
\centering
\includegraphics[width=2.7in]{IRS_WPCN_model}
\caption{An IRS-assisted WPCN employing NOMA for UL WIT.}\label{system:model} \vspace{-0.5cm}
\end{figure}
While the above works focused on applying IRS for improving wireless information transmission (WIT), there is a growing interest in exploiting the high beamforming gain of IRS to enhance the efficiency of wireless power transfer (WPT) to Internet-of-Things (IoT) devices. One line of research targets IRS passive beamforming for simultaneous wireless information and power transfer (SWIPT), where information and energy receivers are concurrently served \cite{wu2019weighted,wu2019jointSWIPT,tang2020joint,liu2020energy}. The other line of research focuses on IRS-assisted wireless powered communication networks (WPCNs), where self-sustainable devices first harvest energy in the downlink (DL) and then transmit information signals in the uplink (UL)
\cite{lyubin2021IRS,zheng2020joint}, based on the well-known ``harvest and then transmit'' protocol illustrated in Fig. \ref{system:model}. The WPCN sum throughput maximization problem was studied in \cite{lyubin2021IRS} for UL WIT with time-division multiple access (TDMA). To improve the WIT efficiency, space-division multiple access (SDMA) was employed for UL WIT in \cite{zheng2020joint} by jointly optimizing the IRS phase shifts and the transmit powers.
Non-orthogonal multiple access (NOMA) is practically appealing for UL WIT in WPCNs due to its capability to improve spectrum efficiency and user fairness by allowing multiple users to simultaneously access the same spectrum. To the best of the authors' knowledge, optimization of IRS-assisted WPCNs employing NOMA for UL WIT has not been studied in the literature yet. Furthermore, since DL WPT and UL WIT occur in different time slots and have different objectives, it is usually believed that exploiting different IRS phase-shift vectors in the two phases, which is referred to as \emph{dynamic IRS beamforming}, may improve system performance. As such, all existing works on IRS-assisted WPCNs (e.g., \cite{lyubin2021IRS,zheng2020joint}) naturally assume dynamic passive beamforming in their problem formulations. However, it remains an open problem whether dynamic IRS beamforming is actually beneficial for maximizing the throughput of wireless powered IoT networks employing NOMA.
Motivated by the above considerations, we study an IRS-assisted WPCN where an IRS is deployed to assist the DL WPT and UL WIT between a hybrid access point (HAP) and multiple devices.
For this setup, we aim to maximize the sum throughput of all devices by jointly optimizing the resource allocation and IRS phase shifts (i.e., passive beamforming). For the first time, we unveil that adopting different IRS phase shifts for DL WPT and UL WIT, i.e., dynamic IRS beamforming, is not beneficial for the considered system. Since the algorithmic computations are typically executed at the HAP, which then sends the optimized phase shifts to the IRS controller, this result not only reduces the number of optimization variables, but also lowers the feedback signalling overhead, especially when the IRS is practically large. Exploiting the insight gained, we further propose both joint and alternating optimization based algorithms to solve the resulting problem. Numerical results demonstrate the significant performance gains achieved by the proposed algorithms compared to benchmark schemes and reveal that integrating IRS into WPCNs not only improves the system throughput but also reduces the system energy consumption.
\emph{Notations:} For a vector $\bm{x}$, $[\bm{x}]_n$ denotes its $n$-th entry and $\bm{x}^T$ and $\bm{x}^H$ denote its transpose and Hermitian transpose, respectively. $\mathcal{O}(\cdot)$ denotes the computational complexity order. $ \mathrm{Re}\{\cdot\}$ denotes the real part of a complex number. ${\rm{tr}}(\Ss)$ denotes the trace of matrix $\Ss$. $\arg(\bm{x})$ denotes the phase vector of $\bm{x}$.
\section{System Model and Problem Formulation}
\subsection{System Model}
As shown in Fig. 1, we consider an IRS-aided WPCN, which comprises an HAP, an IRS, and $K$ wireless-powered devices. The HAP and the devices are all equipped with a single antenna and the IRS consists of $N$ reflecting elements.
To ease practical implementation, the HAP and all devices are assumed to operate over the same frequency band, with the total available transmission time denoted by $T_{\max}$. In addition, the quasi-static flat-fading channel model is adopted which means that the channel coefficients remain constant during $T_{\max}$. As such, UL/DL channel reciprocity holds for all channels, which allows channel state information (CSI) acquisition for the DL based on UL training. The WPCN adopts the typical ``harvest and then transmit'' protocol where the devices first harvest energy from the DL signal emitted by the HAP and then use the available energy to transmit information to the HAP in the UL.
To be able to characterize the maximum achievable performance, it is assumed that the CSI of all channels is perfectly known at the HAP. The equivalent baseband channels from the HAP to the IRS, from the IRS to device $k$, and from the HAP to device $k$ are denoted by $\bm{g}\in \mathbb{C}^{N\times 1}$, $\bm{h}^H_{r,k}\in \mathbb{C}^{1\times N}$, and ${h}^H_{d,k}\in \mathbb{C}$, respectively, where $k = 1, \cdots,K$.
During DL WPT, the HAP broadcasts an energy signal with constant transmit power $P_{\rm A}$ during time $\tau_0$.
The energy harvested from the noise is assumed to be negligible as in \cite{ju14_throughput}, since the noise power is much smaller than the power received from the HAP. Let $\ttheta_0 = \text{diag} ( e^{j\theta_1}, \cdots, e^{j\theta_N})$ denote the reflection phase-shift matrix of the IRS for DL WPT where $\theta_n\in [0, 2\pi), \forall n$.
Thus, the amount of harvested energy at device $k$ can be expressed as
\begin{align}\label{eq3}
E^h_k&=\eta_kP_{\rm A}|h^H_{d,k} + \bm{h}^H_{r,k}\ttheta_0 \bm{g}|^2\tau_0 \\ \nonumber
&=\eta_kP_{\rm A}|h^H_{d,k} + \bm{q}_k^H \vvv_0|^2\tau_0,
\end{align}
where $\eta_k \in (0,1]$ is the energy conversion efficiency of device $k$, $\q^H_k= \bm{h}^H_{r,k} \text{diag}(\bm{g})$, and $\vvv_0 = [e^{j\theta_1}, \cdots, e^{j\theta_N}]^T$.
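The reparameterization $\bm{h}^H_{r,k}\ttheta_0 \bm{g} = \q^H_k \vvv_0$ underlying \eqref{eq3} can be verified numerically; a small Python check with random toy channels (illustrative only, not part of the letter):

```python
# Check of the reparameterization in (1): h_r^H Theta g = q^H v, where
# q^H = h_r^H diag(g) and v = [e^{j theta_1}, ..., e^{j theta_N}]^T.
# Entries of h_r^H are stored directly (already conjugated), so no conj() needed.
import cmath, random

random.seed(0)
N = 4
h_r_H = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
g = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
v = [cmath.exp(1j * random.uniform(0, 2 * cmath.pi)) for _ in range(N)]  # diagonal of Theta

lhs = sum(h_r_H[n] * v[n] * g[n] for n in range(N))   # h_r^H Theta g
q_H = [h_r_H[n] * g[n] for n in range(N)]             # h_r^H diag(g)
rhs = sum(q_H[n] * v[n] for n in range(N))            # q^H v

assert abs(lhs - rhs) < 1e-12
```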
For UL WIT, NOMA is adopted where all devices transmit their respective information signals to the HAP simultaneously for a duration of ${\tau}_{\rm 1}$ with transmit powers $p_k$, $k=1,\cdots,K$. The HAP applies successive interference cancellation (SIC) to eliminate multiuser interference.
Specifically, for detecting the message of the $k$-th device, the HAP first decodes the message of the $i$-th device, $\forall i<k$, and then removes this message from the received signal, in the order of $i=1, 2,...,k-1$. The signal received from the $i$-th user, $\forall i>k$, is treated as noise. Hence, the achievable throughput of device $k$ in bits/Hz can be expressed as
\begin{align}\label{eq6}
r_k={\tau}_{\rm 1}\log_2\left(1+\frac{p_k |h^H_{d,k} + \q^H_k \vvv_1|^2 }{\sum_{i=k+1}^{K} p_i |h^H_{d,i} + \q^H_i \vvv_1|^2+ \sigma^2}\right),
\end{align}
where $\sigma^2$ is the additive white Gaussian noise power at the HAP and $\vvv_1 = [ e^{j\varphi_1}, \cdots, e^{j\varphi_N}]^T$ denotes the IRS phase shift vector for UL WIT.
Then, the sum throughput is given by \cite{zeng2019spectral}
\begin{align}\label{eq:sumthroughput}
\!\!\!\! R_{\rm sum}\!= \!\sum_{k=1}^{K}r_k \! =\! {\tau}_{\rm 1} \log_2\left( 1+\sum_{k=1}^{K}\frac{p_k|h^H_{d,k} + \q^H_k \vvv_1 |^2}{\sigma^2} \right).
\end{align}
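The equality in the sum-throughput expression holds because the per-device SIC rates in \eqref{eq6} telescope. A quick numerical sanity check with toy channel gains (illustrative values only):

```python
# The SIC rates r_k in (2) telescope, so their sum equals the single-log
# sum throughput in (3). Toy effective gains |h_dk + q_k^H v_1|^2 and powers p_k.
from math import log2

sigma2 = 1.0          # noise power at the HAP
tau1 = 0.6            # UL WIT duration
gains = [1.3, 0.7, 0.4]
p = [0.5, 0.8, 1.1]
snr = [pk * gk for pk, gk in zip(p, gains)]

K = len(snr)
r = [tau1 * log2(1 + snr[k] / (sum(snr[k + 1:]) + sigma2)) for k in range(K)]
r_sum = tau1 * log2(1 + sum(snr) / sigma2)

assert abs(sum(r) - r_sum) < 1e-12   # per-user rates add up to (3)
```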
\subsection{Problem Formulation}
Our objective is to maximize the sum throughput of the considered system by jointly optimizing the IRS phase shifts, the time allocation, and the transmit powers. This leads to the following optimization problem
\begin{subequations} \label{probm:P1}
\begin{align}
\text{(P1)}: \mathop {\mathrm{max} }\limits_{ \overset{ \tau_{0}, {\tau}_{\rm 1},\{p_{k}\}, } { \vvv_0, \vvv_1 } } &{\tau}_{\rm 1}\log_2\left(1+\sum_{k=1}^{K}\frac{p_k|h^H_{d,k} + \q^H_k \vvv_1 |^2}{\sigma^2}\right)\\
\mathrm{s.t.} ~~& {p_k}{\tau}_{\rm 1}\leq \eta_kP_{\rm A} |h^H_{d,k} + \q^H_k \vvv_0 |^2 \tau_0, ~ \forall k, \label{eq201}\\
&\tau_{0}+{\tau}_{\rm 1}\leq T_{\mathop{\max}}, \label{eq202}\\
&\tau_{0}\geq0, ~ {\tau}_{\rm 1}\geq 0, ~p_k\geq 0, ~\forall k, \label{eq203} \\
& |[\vvv_0]_n|=1, n=1,\cdots, N, \label{eq:modulus1} \\
& |[\vvv_1]_n|=1, n=1,\cdots, N. \label{eq:modulus2}
\end{align}
\end{subequations}
In (P1), (\ref{eq201}) and (\ref{eq202}) represent the energy causality and total time constraints, respectively, (\ref{eq203}) are non-negativity constraints, and \eqref{eq:modulus1} and \eqref{eq:modulus2} are unit-modulus constraints for the phase shifts employed for DL WPT and UL WIT, respectively. Intuitively, since the DL and UL transmissions have different objectives, i.e., WPT and WIT, adopting different phase-shift vectors, i.e., $\vvv_0$ and $\vvv_1$, is expected to be beneficial for maximizing the system sum throughput. However, using different IRS phase-shift vectors also increases the feedback signalling overhead to the IRS and the computational complexity at the HAP due to the large number of optimization variables. Furthermore, (P1) is a non-convex optimization problem and difficult to solve optimally in general due to the coupled optimization variables in (4a) and (4b) as well as the non-convex unit-modulus constraints in (4e) and (4f).
\section{Proposed Solution}
In this section, we first answer the question whether the optimal solution of (P1) requires dynamic IRS beamforming. Then, we propose two efficient algorithms to solve the resulting optimization problem.
\vspace{-0.1cm}
\subsection{Do We Need Different Phase Shifts in DL and UL?}
\begin{proposition}
For (P1), $\vvv^{\star}_0=\vvv^{\star}_1$ holds, where $\vvv^{\star}_0$ and $\vvv^{\star}_1$ are the optimal phase-shift vectors for DL WPT and UL WIT, respectively.
\end{proposition}
\begin{proof}
First, it can be easily shown that constraint \eqref{eq201} is met with equality for the optimal solution since otherwise $p_k$ can be always increased to improve the objective value until \eqref{eq201} becomes active. Then, substituting \eqref{eq201} into the objective function eliminates $\{p_k\}$. As such, for any given $\tau_0$ and $\tau_1$, (P1) is equivalent to
\begin{subequations} \label{sub:probm20}
\begin{align}
\mathop {\mathrm{max} }\limits_{{ \vvv_0, \vvv_1 } } ~~& \sum_{k=1}^{K} \alpha_k | h^H_{d,k} + \q^H_k \vvv_0 |^2| h^H_{d,k} + \q^H_k \vvv_1 |^2 \\
\mathrm{s.t.} ~~
& \text{(4e), (4f),}
\end{align}
\end{subequations}
where $\alpha_k=\frac{\tau_0 P_{\rm A} \eta_k } { {\tau}_{\rm 1}\sigma^2 }$.
Denote by $\w$ the vector maximizing $ \sum_{k=1}^{K}\alpha_k| h^H_{d,k} + \q^H_k \vvv |^4$ subject to constraints $|[\vvv]_n|=1, \forall n$. For the objective function in (5a), we can establish the following inequalities
{\begin{align}
\!\!\!\! &\sum_{k=1}^{K} \left( \sqrt{\alpha_k } | h^H_{d,k} + \q^H_k \vvv_0 |^2 \right) \left( \sqrt{\alpha_k } | h^H_{d,k} +\q^H_k \vvv_1 |^2 \right) \\
\!\!\!\! \overset{(a)}{\leq} &\sqrt{ \left(\sum_{k=1}^{K} \alpha_k| h^H_{d,k} +\q^H_k \vvv_0 |^4\right)\left( \sum_{k=1}^{K}\alpha_k | h^H_{d,k} + \q^H_k \vvv_1 |^4\right) } \nonumber \\
\!\!\!\! \overset{(b)}{\leq}& \sqrt{ \left(\sum_{k=1}^{K}\alpha_k | h^H_{d,k} +\q^H_k \w |^4\right)^2 }= { \sum_{k=1}^{K} \alpha_k| h^H_{d,k} + \q^H_k \w|^4},
\end{align}}where $(a)$ is due to the Cauchy-Schwarz inequality and $(b)$ holds since $\w$ maximizes $ \sum_{k=1}^{K}\alpha_k| h^H_{d,k} + \q^H_k \vvv |^4$ and the equality holds when $\vvv^{\star}_0=\vvv^{\star}_1=\w$.
\end{proof}
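Step $(a)$ in the proof is the Cauchy--Schwarz inequality $\sum_k a_k b_k \leq \sqrt{(\sum_k a_k^2)(\sum_k b_k^2)}$ applied to $a_k = \sqrt{\alpha_k}|h^H_{d,k}+\q^H_k\vvv_0|^2$ and $b_k = \sqrt{\alpha_k}|h^H_{d,k}+\q^H_k\vvv_1|^2$. A toy numerical check (illustrative values only):

```python
# Step (a): Cauchy-Schwarz applied to nonnegative sequences a_k, b_k.
# Toy values standing in for sqrt(alpha_k)|h + q^H v|^2 terms.
from math import sqrt

a = [0.9, 1.4, 0.3, 2.0]
b = [1.1, 0.2, 0.8, 1.5]

lhs = sum(x * y for x, y in zip(a, b))
rhs = sqrt(sum(x * x for x in a) * sum(y * y for y in b))
assert lhs <= rhs + 1e-12            # the inequality

# Equality when the sequences coincide (v_0 = v_1 = w in the proof):
lhs_eq = sum(x * x for x in a)
rhs_eq = sqrt(sum(x * x for x in a) ** 2)
assert abs(lhs_eq - rhs_eq) < 1e-12
```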
Proposition 1 explicitly shows that dynamic IRS beamforming is not needed for DL WPT and UL WIT and using constant passive beamforming is sufficient to maximize the sum throughput of the considered system. As such, if the HAP is in charge of computing the IRS phase shifts, it only needs to feed back $N$ phase-shift values (i.e., $\vvv_0$) to the IRS, rather than $2N$ (i.e., $\vvv_0$ and $\vvv_1$), which reduces the signalling overhead and the associated delay, especially for practically large $N$. Furthermore, exploiting Proposition 1, we only need to solve the following problem, which involves a smaller number of optimization variables (i.e., IRS phase shifts)
\begin{subequations} \label{probm20}
\begin{align}
\mathop {\mathrm{max} }\limits_{{\tau_{0}, {\tau}_{\rm 1}, \vvv_0 } } &{\tau}_{\rm 1}\log_2\left(1+\sum_{k=1}^{K}\frac{\tau_0 P_{\rm A} \eta_k|h^H_{d,k} + \q^H_k \vvv_0 |^4 }{{\tau}_{\rm 1}\sigma^2}\right) \label{eqNdy:obj}\\
\mathrm{s.t.} ~~& \tau_{0}+{\tau}_{\rm 1}\leq T_{\mathop{\max}}, \label{eqNdy202}\\
&\tau_{0}\geq0, ~ {\tau}_{\rm 1}\geq 0, \label{eqNdy203} \\
& |[\vvv_0]_n|=1, n=1,\cdots, N. \label{eq:Ndy:modulus1}
\end{align}
\end{subequations}
Although simpler, problem \eqref{probm20} is still a non-convex optimization problem.
\subsection{Proposed Joint Optimization Algorithm}
To deal with the non-convex objective function \eqref{eqNdy:obj}, we introduce a slack variable $S$ and reformulate problem \eqref{probm20} as follows
\begin{subequations} \label{probm:JO:20}
\begin{align}
\mathop {\mathrm{max} }\limits_{{\tau_{0},{\tau}_{\rm 1}, \vvv_0 } } &{\tau}_{\rm 1}\log_2\left(1+\frac{S}{{\tau}_{\rm 1}}\right)\\
\mathrm{s.t.} ~~& S \leq \sum_{k=1}^{K}\frac{\tau_0 P_{\rm A} \eta_k|h^H_{d,k} + \q^H_k \vvv_0 |^4 }{\sigma^2}, \label{eqJO201}\\
~~& \tau_{0}+{\tau}_{\rm 1}\leq T_{\mathop{\max}}, \label{eqJO202}\\
&\tau_{0}\geq0, ~{\tau}_{\rm 1}\geq 0,\label{eqJO203} \\
& |[\vvv_0]_n|=1, n=1,\cdots, N. \label{eq:JO:modulus1}
\end{align}
\end{subequations}
{Note that for the optimal solution of problem \eqref{probm:JO:20}, constraint \eqref{eqJO201} is met with equality, since otherwise we can always increase the objective value by increasing $S$ until \eqref{eqJO201} becomes active.}
Let $ | h^H_{d,k} + \q^H_k \vvv_0 | = | {\bar \q}^H_k \bar \vvv_0 | $, where ${\bar\vvv}_0 = [ \vvv^H_0 \: 1]^H$ and $ {\bar \q}^H_k = [ {\q}^H_k \: h^H_{d,k}] $. Define $\Q_k={\bar \q}_k{\bar \q}^H_k$ and $\bm{V}_0=\bm{\bar{v}}_0\bm{\bar{v}}_0^H$ which needs to satisfy $\bm{V}_0\succeq \bm{0}$ and ${\rm{rank}}(\bm{V}_0)=1$.
Then, \eqref{eqJO201} can be expressed as
\begin{align} \label{eq:AO:slack}
S\leq \sum_{k=1}^{K}\frac{\eta_k P_{\rm A} \tau_0 [{\rm{Tr}}(\V_0\Q_k)]^2 }{\sigma^2}.
\end{align}
The key observation is that although $\tau_0 [{\rm{Tr}}(\V_0\Q_k)]^2= [{\rm{Tr}}(\V_0\Q_k)]^2/\frac{1}{\tau_0}$ in \eqref{eq:AO:slack} is not jointly convex with respect to $\V_0$ and $\tau_0$, it is jointly convex with respect to ${\rm{Tr}}(\V_0\Q_k)$ and $\frac{1}{\tau_0}$. Recall that any convex function is globally lower-bounded by its first-order Taylor expansion at any point. This thus motivates us to apply the successive convex approximation (SCA) technique for solving problem \eqref{probm:JO:20}. Therefore, with given local point $\hat \V_0$ and $\hat \tau_0$, we obtain the following lower bound
\begin{align}
\!\!& \frac{ [{\rm{Tr}}(\V_0\Q_k)]^2}{{1}/{\tau_0}} \geq \frac{ [{\rm{Tr}}(\hat \V_0\Q_k)]^2}{{1}/{\hat \tau_0}} - \frac{ [{\rm{Tr}}(\hat \V_0\Q_k)]^2}{\left({1}/{\hat \tau_0}\right)^2} \left(\frac{1}{\tau_0} - \frac{1}{\hat \tau_0} \right)
\nonumber \\
\!\!& + \frac{2 {\rm{Tr}}(\hat \V_0\Q_k)}{{1}/{\hat \tau_0}} \left( {\rm{Tr}}( \V_0\Q_k) - {\rm{Tr}}(\hat \V_0\Q_k) \! \right) \! \triangleq \! \mathcal{G}(\V_0,\tau_0).
\end{align}
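$\mathcal{G}(\V_0,\tau_0)$ is the first-order Taylor expansion of the quadratic-over-linear function $f(x,s)=x^2/s$ (with $x={\rm{Tr}}(\V_0\Q_k)$ and $s=1/\tau_0$), whose gradient is $(2x/s,\,-x^2/s^2)$; joint convexity makes the tangent plane a global lower bound that is tight at the expansion point. A toy numerical check (illustrative values only):

```python
# G(V_0, tau_0) as the first-order Taylor lower bound of f(x, s) = x^2 / s,
# where x plays the role of Tr(V_0 Q_k) and s = 1/tau_0. f is jointly convex
# for s > 0, so the tangent plane at (x_hat, s_hat) lower-bounds f globally.
import random

def f(x, s):
    return x * x / s

def taylor_lb(x, s, x_hat, s_hat):
    # f(x_hat, s_hat) + df/ds * (s - s_hat) + df/dx * (x - x_hat)
    return (x_hat ** 2 / s_hat
            - (x_hat ** 2 / s_hat ** 2) * (s - s_hat)
            + (2 * x_hat / s_hat) * (x - x_hat))

random.seed(1)
x_hat, s_hat = 2.0, 0.5
assert abs(taylor_lb(x_hat, s_hat, x_hat, s_hat) - f(x_hat, s_hat)) < 1e-12  # tight
for _ in range(1000):
    x, s = random.uniform(0.0, 5.0), random.uniform(0.1, 5.0)
    assert taylor_lb(x, s, x_hat, s_hat) <= f(x, s) + 1e-9                   # global bound
```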
With the lower bound in (11), problem \eqref{probm:JO:20} is approximated as the following problem
\begin{subequations}\label{probm:SDR:20}
\begin{align}
\mathop {\mathrm{max} }\limits_{{\tau_{0}, {\tau}_{\rm 1}, \V_0, S } } &{\tau}_{\rm 1}\log_2\left(1+\frac{S}{{\tau}_{\rm 1}}\right)\\
\mathrm{s.t.} ~~& S \leq \sum_{k=1}^{K}\frac{P_{\rm A} \eta_k }{\sigma^2} \mathcal{G}(\V_0,\tau_0), \\
~~& \tau_{0}+ {\tau}_{\rm 1}\leq T_{\mathop{\max}}, \label{eqSDR202}\\
&\tau_{0}\geq0, ~{\tau}_{\rm 1}\geq 0,\label{eqSDR203} \\
~~~~& [\bm{V}_0]_{n,n} = 1, n=1,\cdots, N+1, \label{P6:SDR:C9} \\
~~~~&{\rm{rank}}(\bm{V}_0)=1, \bm{V}_0 \succeq 0. \label{P6:SDR:C10}
\end{align}
\end{subequations}
Note that by relaxing the rank-one constraint in \eqref{P6:SDR:C10}, problem \eqref{probm:SDR:20} becomes a convex semidefinite program (SDP) and we can successively solve it by using standard convex optimization solvers such as CVX, until convergence is achieved. After convergence, Gaussian randomization can be applied to obtain a high-quality solution. {Alternatively, instead of relaxing the rank-one constraint, we can further transform it into an equivalent constraint based on the largest singular value (see Section IV-A for details) and then solve the resulting problem using the penalty method \cite{wu2019jointSWIPT}.} The computational complexity of this algorithm lies in solving the SDP and is given by $\mathcal{O}( I_{JO}N^{6.5} )$ where $I_{JO}$ denotes the number of iterations required for convergence. Since all optimization variables are optimized simultaneously, the joint optimization algorithm serves as a benchmark scheme for other schemes having lower complexities.
\vspace{-0.3cm}
\subsection{Proposed Alternating Optimization Algorithm}
Next, we propose an efficient alternating optimization algorithm where the phase shifts and time allocation are alternately optimized until convergence is achieved. {The key advantage of the algorithm is that it admits a (semi) closed-form solution in each iteration, which thus avoids the computational complexity incurred by using SDP solvers in Section III-B.}
First, for any fixed $ \vvv_0$, it can be shown that problem \eqref{probm20} simplifies to a convex optimization problem whose optimal time allocation can be obtained via the Lagrange duality method as in \cite{ju14_throughput}; the details are omitted here for brevity. Second, for any fixed $\tau_0$ and $\tau_1$, problem \eqref{probm20} is reduced to
\begin{subequations} \label{probm:AO:20}
\begin{align}
\mathop {\mathrm{max} }\limits_{{ \vvv_0 } } ~~& \sum_{k=1}^{K} \alpha_k|h^H_{d,k} + \q^H_k \vvv_0 |^4 = \sum_{k=1}^{K} \alpha_k (\bar \vvv_0^H \Q_k \bar \vvv_0)^2 \label{eq:AO:obj} \\
\mathrm{s.t.} ~~
& |[\vvv_0]_n|=1, n=1,\cdots, N, \label{eq:AO:modulus1}
\end{align}
\end{subequations}
where $\alpha_k=\frac{\tau_0 P_{\rm A} \eta_k } { {\tau}_{\rm 1}\sigma^2 }$ as in \eqref{sub:probm20}. Although maximizing a convex function does not lead to a convex optimization problem, the convexity of \eqref{eq:AO:obj} allows us to apply the SCA technique for solving problem \eqref{probm:AO:20}. Specifically, for a given local point $ \hat \vvv_0$, the first-order Taylor expansion based lower bound for the $k$-th term in \eqref{eq:AO:obj} can be expressed as
\begin{align}\label{eqAO}
(\bar \vvv_0^H \Q_k \bar \vvv_0)^2 \geq &~2\hat \vvv_0^H \Q_k \hat \vvv_0 \left( 2 \mathrm{Re}\{ \hat \vvv^H_0 \Q_k \bar \vvv_0 \} - 2 \hat \vvv^H_0 \Q_k \hat \vvv_0 \right) \nonumber \\
&~+ (\hat \vvv_0^H \Q_k \hat \vvv_0)^2 \nonumber \\
=& ~4 \mathrm{Re}\{ C_k \hat \vvv_0^H \Q_k \bar\vvv_0 \} - 3C^2_k,
\end{align}
where $C_k=\hat \vvv_0^H \Q_k \hat \vvv_0$. Based on \eqref{eqAO}, a lower bound for \eqref{eq:AO:obj} is given by
\begin{align}\label{SCA:obj}
4 \mathrm{Re} \left\{ \left(\sum_{k=1}^{K} \alpha_kC_k \hat \vvv_0^H \Q_k \right) \bar\vvv_0 \right\} - 3 \sum_{k=1}^{K}\alpha_kC^2_k.
\end{align}
Based on \eqref{SCA:obj}, it is not difficult to show that the optimal solution satisfying \eqref{eq:AO:modulus1} is given by $ \bar\vvv_0=e^{j\arg( {\bm\beta})}$ where ${\bm \beta}= ( \sum_{k=1}^{K} \alpha_kC_k \hat \vvv_0^H \Q_k )^H$.
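As a sanity check, the surrogate in \eqref{eqAO}--\eqref{SCA:obj} and the closed-form phase update can be verified numerically. The sketch below uses randomly generated Hermitian PSD matrices and weights as stand-ins for the actual channel data $\Q_k$, $\alpha_k$; it checks that the Taylor bound holds globally, is tight at the expansion point, and yields a monotonically non-decreasing iteration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 8, 3

# Stand-ins for the channel data: random Hermitian PSD Q_k and weights alpha_k.
Qs = []
for _ in range(K):
    A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    Qs.append(A @ A.conj().T)
alphas = rng.uniform(0.5, 2.0, K)

def f(v):
    # Objective: sum_k alpha_k (v^H Q_k v)^2.
    return sum(a * (v.conj() @ Q @ v).real ** 2 for a, Q in zip(alphas, Qs))

def lower_bound(v, v_hat):
    # Surrogate: sum_k alpha_k (4 C_k Re{v_hat^H Q_k v} - 3 C_k^2).
    total = 0.0
    for a, Q in zip(alphas, Qs):
        C = (v_hat.conj() @ Q @ v_hat).real
        total += a * (4.0 * C * (v_hat.conj() @ Q @ v).real - 3.0 * C ** 2)
    return total

def sca_step(v_hat):
    # Closed-form maximizer of the surrogate under |v_n| = 1: v = exp(j arg(beta)).
    beta = sum(a * (v_hat.conj() @ Q @ v_hat).real * (Q @ v_hat)
               for a, Q in zip(alphas, Qs))
    return np.exp(1j * np.angle(beta))
```

Because the surrogate is a global lower bound that is tight at $\hat\vvv_0$, maximizing it in closed form can never decrease the true objective, which is exactly the convergence argument invoked below.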
Note that the objective value of problem \eqref{probm20} is non-decreasing by alternately optimizing the time allocation and phase shifts, and also upper-bounded by a finite value. Thus, the proposed algorithm is guaranteed to converge.
The complexity of this algorithm mainly lies in the calculation of the phase shifts for problem \eqref{probm:AO:20} and thus is given by $\mathcal{O}( I_{AO}N )$, where $I_{AO}$ is the number of iterations required for convergence.
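For intuition on the time-allocation subproblem solved for fixed phase shifts, the sketch below maximizes $(T-\tau_0)\log_2\left(1+\gamma\tau_0/(T-\tau_0)\right)$ over $\tau_0$, assuming, consistent with the form of $\alpha_k$, that the harvested power scales linearly with $\tau_0$. The effective gain $\gamma$ is a hypothetical lumped constant, and a generic golden-section search stands in for the closed-form Lagrange-duality solution; the objective is concave, hence unimodal, in $\tau_0$.

```python
import math

def optimal_time_allocation(gamma, T=1.0, tol=1e-9):
    """Maximize R(t0) = (T - t0) * log2(1 + gamma * t0 / (T - t0)) over (0, T)
    by golden-section search; R is concave (hence unimodal) in t0."""
    def rate(t0):
        t1 = T - t0
        return t1 * math.log2(1.0 + gamma * t0 / t1)
    invphi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = tol, T - tol
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if rate(c) > rate(d):        # maximum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                        # maximum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    t0 = 0.5 * (a + b)
    return t0, rate(t0)
```

As $\gamma$ grows, the optimal $\tau_0$ shrinks: less charging time is needed relative to the UL transmission time, mirroring the trend observed for the DL WPT duration in the numerical results.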
\section{Numerical Results }
This section presents simulation results to demonstrate the effectiveness of the proposed solutions and provide useful insights for IRS-aided WPCN design.
The HAP and IRS are respectively located at $(0,0,0)$ meter (m) and $(10,0, 3)$ m, as shown in Fig. \ref{simulation:setup}. The user devices are randomly and uniformly distributed within a radius of $1.5$ m centered at $(10,0,0)$ m. The pathloss exponents of both the HAP-IRS and IRS-device channels are set to $2.2$, while those of the HAP-device channels are set to $2.8$. {Furthermore, Rayleigh fading is adopted as the small-scale fading for all channels.} The signal attenuation at a reference distance of $1$ m is set as $30$ dB. Other system parameters are set as follows: $\eta_k=0.8,~\forall k$, $\sigma^2=-85$ dBm, $T_{\max}=1$ s, and $P_{\rm A}=40$ dBm.
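The large-scale fading implied by these parameters can be reproduced with a standard log-distance pathloss model; the exact model used in the simulations is not stated, so the sketch below is an assumption consistent with the quoted 30 dB reference loss at 1 m and the pathloss exponents.

```python
import math

def pathloss_db(d_m, alpha, pl0_db=30.0):
    """Log-distance pathloss: pl0_db at the 1 m reference, 10*alpha dB/decade."""
    return pl0_db + 10.0 * alpha * math.log10(d_m)

# Example: HAP at (0,0,0) m, IRS at (10,0,3) m; HAP-IRS exponent 2.2.
d_hap_irs = math.sqrt(10.0 ** 2 + 3.0 ** 2)
pl_hap_irs = pathloss_db(d_hap_irs, 2.2)   # ~52.4 dB
```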
\begin{figure}[!t]
\centering
\includegraphics[width=2.4in]{IRS_NWPCN_setup}
\caption{Simulation setup. } \vspace{-0.5cm}\label{simulation:setup}
\end{figure}
\begin{figure*}[!t]\vspace{-0.0cm}
\centering
\subfigure[Performance comparison.] {\includegraphics[width=2.35in, height=1.8in]{N_vs_rate2}\label{N:versus:rate}}
\subfigure[Impact of $N$ on DL WPT duration.]{\includegraphics[width=2.35in, height=1.8in]{N_versus_tau0}\label{N:versus:tau0}}
\subfigure[Impact of $N$ on user harvested energy.]{\includegraphics[width=2.35in, height=1.8in]{N_versus_Energy4}\label{N:versus:energy}}
\caption{Simulation results. } \label{pb}\vspace{-0.6cm}
\end{figure*}
\vspace{-0.1cm}
\subsection{Performance Comparison}
In Fig. \ref{N:versus:rate}, we plot the sum throughput versus the number of IRS elements for $K=5$ and $K=10$. {For comparison, we consider the following schemes: 1) ``Proposed JO with SDR'' where problem \eqref{probm:SDR:20} with ${\rm{rank}}(\bm{V}_0)=1$ in \eqref{P6:SDR:C10} relaxed is solved successively, and thus, this scheme serves as a performance upper bound; 2) ``Proposed JO with GR'' where Gaussian randomization is applied to recover a rank-one $\bm{V}_0$ based on the solution of the scheme in 1); 3) ``Proposed JO with Penalty'' where we replace ${\rm{rank}}(\bm{V}_0)=1$ by ${\rm Tr}(\bm{V}_0) - \lambda_{\max}(\bm{V}_0) \leq 0$ with $\lambda_{\max}(\bm{V}_0)$ denoting the largest singular value of $\bm{V}_0$ and then solve the resulting problem using the penalty method \cite{wu2019jointSWIPT};
4) ``Proposed AO" in Section III-C; 5) Fixed time allocation with optimized IRS phase shifts, i.e., $\tau_0=0.5 T_{\max}$; 6) Fixed IRS phase shifts with optimized time allocation, i.e., $\theta_n=0, \forall n$; and 7) Without IRS.}
It is observed from Fig. \ref{N:versus:rate} that the sum throughput gain achieved by our proposed JO/AO designs over the benchmark schemes increases as $N$ increases for both $K=5$ and $K=10$. In particular, the proposed AO algorithm achieves almost the same performance as the proposed JO with SDR/GR/Penalty and is thereby a practically appealing solution considering its low complexity, especially for large $N$. Besides, the performance of the scheme with fixed phase shifts is less sensitive to increasing $N$ and only achieves a marginal gain over the system without IRS, whereas the scheme with fixed time allocation performs even worse than the system without IRS for small $N$, but outperforms the scheme with fixed phase shifts for large $N$. This is expected since as $N$ increases, the passive beamforming gain achieved by phase-shift optimization helps compensate for the performance loss incurred by fixed time allocation. Overall, Fig. \ref{N:versus:rate} highlights the importance of the joint design of the IRS phase shifts and the time allocation.
\vspace{-0.3cm}
\subsection{Impact of IRS on WPCNs}
We note that the significant throughput improvement shown in Fig. \ref{N:versus:rate} is due to the deployment of the IRS and not due to an increase in the total energy consumption at the HAP, which is given by $E_{\rm HAP}=P_{\rm A}\tau_0$. To illustrate this explicitly, we plot in Fig. \ref{N:versus:tau0} the optimized DL WPT duration $\tau_0$ versus $N$ obtained with the proposed AO algorithm and without IRS. It is observed that for IRS-assisted WPCNs, the optimized DL WPT duration decreases as $N$ increases and thus the total system energy consumed at the HAP $E_{\rm HAP}$ is actually reduced. This also leaves devices more transmission time for UL WIT, which benefits the sum throughput since $R_{\rm sum}$ is monotonically increasing in $\tau_1$. {This suggests that integrating IRS into WPCNs introduces a desirable ``double-gain'' as it not only improves the system throughput but also reduces the energy consumption, thus rendering this architecture spectrum and energy efficient.}
Furthermore, although the DL WPT duration $\tau_0$ decreases due to the deployment of the IRS, it is not at the cost of reducing the energy harvested at the devices. In Fig. \ref{N:versus:energy}, we plot the harvested energy of two randomly selected devices (e.g., D1 and D2), i.e., $E^h_k =\eta_kP_{\rm A}|h^H_{d,k} + \bm{q}_k^H \vvv_0|^2\tau_0$, versus $N$ when $K=5$. One can observe that the harvested energy monotonically increases with $N$ for both devices. Considering the decrease of $\tau_0$ shown in Fig. \ref{N:versus:tau0}, it is not difficult to conclude that the increase of $E^h_k$ is solely due to the improved effective channel power gain $|h^H_{d,k} + \bm{q}_k^H \vvv_0|^2$, which further demonstrates the effectiveness of IRS for WPCNs.
\vspace{-0.4cm}
\section{Conclusions}
This paper studied IRS-assisted WPCNs employing NOMA for UL WIT.
We first unveiled that dynamic IRS beamforming cannot improve the performance of the considered system, which simplifies the problem and reduces the signalling overhead. Based on this result, we proposed both joint and alternating optimization algorithms for system throughput maximization where the latter admits closed-form expressions and is practically appealing. Numerical results showed that our proposed designs are able to drastically improve the system performance compared to several baseline schemes and also shed light on how the IRS improves the throughput of WPCNs while decreasing the energy consumption as the number of IRS elements increases. {In future work, it is worth investigating the effectiveness of dynamic IRS beamforming for WPCNs with imperfect CSI and/or SIC, different user weights, multiple antennas, etc.}
\vspace{-0.5cm}
\section{Introduction}
Circumstellar disks are the birth places of planetary systems. Thus their physical properties strongly influence the outcome of the planet formation processes. In turn, massive planets dramatically impact the disk structure. Recent scattered light observations have revealed a number of disks with warps or misalignments of inner and outer disk regions (e.g., \citealt{Marino2015, Benisty2017}). The interaction with either planets or stellar companions is frequently invoked to explain these observations (e.g., \citealt{2012Natur.491..418B, Facchini2018}). Recently, \cite{Bi2020} and \cite{Kraus2020} revealed multiple misaligned rings in the GW\,Ori triple system, supporting this scenario.
However, many of the systems with inferred misalignments are around single stars (e.g., \citealt{Pinilla2018, Muro-Arena2020}).
Similarly, spin-orbit misalignment of transiting planets, possibly inherited from the gas-rich disk phase, is common in single stellar systems
(e.g., \citealt{Triaud2010}). An alternative scenario is that disk misalignments are a natural consequence of angular momentum transfer due to late infall of material onto the disk \citep{Thies2011,Dullemond2019}. Observations probing both large and small spatial scales have the potential to test this possibility. \\
SU\,Aur is a nearby (158.4$\pm$1.5\,pc, \citealt{GAIA-DR2-2018}) classical T Tauri star in the Taurus-Auriga star forming region. Spectroscopic studies determined a spectral type of G4 and a stellar luminosity of $\log(L/L_\odot) = 0.9$ \citep[when re-scaled with the latest distance estimate;][]{Herczeg2014}.
Using the spectral type as well as this luminosity as input for stellar model isochrones \citep{Siess2000}, we find a stellar mass of 2.0$^{+0.2}_{-0.1}\,M_\odot$ and an age range of 4-5.5\,Myr.\\
SU\,Aur is surrounded by extended circumstellar structure first resolved in near infrared scattered light \citep{Jeffers2014}.
The signal extends up to 500\,au and shows a strong asymmetry along the East-West direction. Subsequent scattered light observations detected a faint dust tail extending from the main disk toward the West \citep{deLeon2015}.
A strong azimuthal brightness asymmetry is attributed to the dust scattering phase function and to a higher surface density on the northern side of the disk. ALMA observations show a Keplerian disk and a gas tail that extends out to several hundred au to the West \citep{Akiyama2019}, which could either be caused by a disruption of the disk by a perturber or trace cloud material accreting onto the disk.\\
In this letter, we present new observations of SU\,Aur obtained as part of the DESTINYS program (Disk Evolution Study Through Imaging of Nearby Young Stars \citealt{Ginski2020}) that aims to study the circumstellar environment of nearby T Tauri stars, complemented with VLT/NACO, HST/STIS and ALMA archival data.
\section{Observations}
We obtained new observations of SU\,Aur with VLT/SPHERE (\citealt{Beuzit2019}), and use archival data taken with VLT/NACO (program ID: 088.C-0924, PI: S. Jeffers) and ALMA (program ID: 2013.1.00426.S, PI: Y. Boehler).
\subsection{SPHERE observations}
SU\,Aur was observed on 14th of December 2019 with SPHERE/IRDIS in dual-beam polarimetric imaging mode (DPI, \citealt{deBoer2020,vanHolstein2020}) in the broad band H filter with pupil tracking setting. The central star was placed behind an apodized Lyot coronagraph with an inner working angle\footnote{IWA defined as the separation at which the throughput reaches 50\%.} of 92.5\,mas. The individual frame exposure time was set to 32\,s and a total of 104 frames were taken, divided into 26 polarimetric cycles of the half-wave plate. The total integration time was 55.5\,min. Observing conditions were excellent with an average seeing of 0.8\arcsec and an atmosphere coherence time of 6.7\,ms.\\
The data were reduced using the public IRDAP pipeline (IRDIS Data reduction for Accurate Polarimetry, \citealt{vanHolstein2020}). The images were astrometrically calibrated using the pixel scale and true north offset given in \cite{2016SPIE.9908E..34M}. \\
Since the data were taken in pupil tracking mode we were able to perform angular differential imaging (ADI, \citealt{2006ApJ...641..556M}) reduction in addition to the polarimetric reduction, resulting in a total intensity image and a polarized intensity image. We show the final result of both post-processing approaches in figure~\ref{fig:sphere_images}.
Note that instead of polarized intensity we show the radial Stokes parameter Q$_\phi$, as is now standard in most studies. We follow the definition in \cite{deBoer2020}:
\begin{equation}
Q_\phi = -Q\,\cos(2\phi) - U\,\sin(2\phi)
\end{equation}
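A minimal sketch of this computation for Stokes $Q$ and $U$ images is given below; note that the azimuth zero-point and sign convention of $\phi$ (here a generic \texttt{arctan2} azimuth about the star) must be matched to the definition of \cite{deBoer2020} in practice.

```python
import numpy as np

def q_phi_image(Q, U, x0, y0):
    """Azimuthal Stokes parameter Q_phi = -Q cos(2 phi) - U sin(2 phi),
    with phi the azimuth of each pixel about the star at (x0, y0)."""
    ny, nx = Q.shape
    y, x = np.mgrid[0:ny, 0:nx]
    phi = np.arctan2(y - y0, x - x0)
    return -Q * np.cos(2.0 * phi) - U * np.sin(2.0 * phi)
```

For a purely azimuthally polarized signal, Q$_\phi$ recovers the full polarized intensity, while deviations from azimuthal polarization appear in the corresponding U$_\phi$ image.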
\subsection{NACO observations}
SU\,Aur was observed with VLT/NACO in polarimetric imaging mode on 2nd of November 2011 in the Ks filter. Observing conditions were fair with an average seeing of 0.8\arcsec{} and a coherence time of 3\,ms. As NACO does not offer a coronagraph in polarimetric mode, short individual frame exposure times of 0.35\,s were used. A total of 8160 frames were taken, divided into 24 polarimetric cycles. This amounts to a total integration time of 47.6\,min. The data were taken in dithering mode in order to allow for sky background subtraction. \\
The data reduction was performed analogously to that of the SPHERE data, however without the benefit of a detailed instrument model to determine the instrument polarization. The instrumental polarization was thus estimated from the data by placing a small aperture at the central star location, with an aperture radius smaller than one resolution element, i.e., where we would expect the polarimetric signal to be unresolved and thus on average small. Other data reduction steps were performed as described in \cite{Ginski2016} for an analogous data set of HD\,97048. The resulting Q$_\phi$ image is shown in figure~\ref{fig:sphere_images}.
\subsection{Archival ALMA observations}
We retrieved CO and continuum data of SU Aur observed as part of program 2013.1.00426.S (PI: Y. Boehler) from the ALMA archive.
The 880 $\mu$m continuum and $^{12}$CO $J=3-2$ line were observed in Band 7 on 2015 July 24 and 2016 July 23 with 44 and 43 antennas, respectively. For the first set of observations, baselines ranged from 15 to 1600 m and the quasars J0423-0120, J0423-013, and J0438+3004 served as the bandpass, amplitude, and phase calibrators, respectively. For the second set of observations, the baselines ranged from 17 to 1100 m and the quasars J0510+1800, J0238+1636, and J0433+2905 served as the bandpass, amplitude, and phase calibrators, respectively. The correlator was set up with four spectral windows (SPWs). The SPW covering the CO $J=3-2$ line was centered at 345.797 GHz and had a bandwidth of 468.750 MHz and channel widths of 122.07 kHz. The other three SPWs were centered at 334.016, 335.974, and 347.708 GHz, and each one had a bandwidth of 2 GHz and channel widths of 15.625 MHz. The cumulative on-source integration time was 9 minutes.
The 1.3 mm continuum, $^{12}$CO $J=2-1$, and $^{13}$CO $J=2-1$ were observed in Band 6 on 19 July 2015 and 8 August 2015. Observational details are provided in \citet{Akiyama2019}, which first published the 1.3 mm continuum and $^{12}$CO $J=2-1$ data. In this work, we restrict our focus to the $^{13}$CO $J=2-1$ line from the Band 6 dataset, since the S/N and spatial resolution of the Band 7 continuum and $^{12}$CO line are better than their Band 6 counterparts.
The SU Aur data downloaded from the archive were processed with the ALMA pipeline in CASA v. 4.5.3. Subsequent self-calibration and imaging took place in CASA v. 5.4.0. Channels with line emission were flagged and the SPWs were spectrally averaged to form continuum datasets. The \texttt{fixvis} and \texttt{fixplanets} tasks were used to align the continuum peak positions of the execution blocks within each band and to assign a common phase center, respectively. One round of phase self-calibration was applied to the continuum data for the separate bands. The self-calibration solutions were then applied to the full-resolution data. The \texttt{uvcontsub} task was used to subtract the continuum from the line spectral windows in the $uv$ plane.
A Briggs robust value of 0.5 was used with the Clark CLEAN algorithm to produce the final 880 $\mu$m continuum image, which has a synthesized beam of $0.29''\times0.17''$ (26.4$^\circ$) and an rms of 0.1 mJy beam$^{-1}$. A Briggs robust value of 1.0 was used with the multiscale CLEAN algorithm to produce the $^{12}$CO $J=3-2$ image cube, which was then primary beam-corrected with \texttt{impbcor}. The resulting synthesized beam is $0.32''\times0.19''$ (26.4$^\circ$) and the rms is 10 mJy beam$^{-1}$ in channels 0.25 km s$^{-1}$ wide. A Gaussian $uv$ taper and a Briggs robust value of 2.0 were applied to the weaker $^{13}$CO $J=2-1$ line in order to improve sensitivity. The synthesized beam is $0.53''\times0.46''$ (15.0$^\circ$) and the rms is 10 mJy beam$^{-1}$ in channels 0.25 km s$^{-1}$ wide.
\begin{figure*}
\center
\includegraphics[width=0.68\textwidth]{main_figure_sphere_naco_v2.pdf}
\caption{SPHERE/IRDIS and NACO observations of the SU\,Aur system. We show SPHERE H-band Q$_\phi$ polarized signal on the top, SPHERE H-band total intensity in the center panel and NACO K$_S$-band Q$_\phi$ polarized signal on the bottom.
The SPHERE coronagraph is marked with a hashed circle.
}
\label{fig:sphere_images}
\end{figure*}
\section{Morphology in scattered light}
As evidenced from figure~\ref{fig:sphere_images}, SU Aur shows a complex circumstellar environment, with a disk, a dark lane, and large scale features. These features are discussed in detail in the following.
\subsection{The circumstellar disk}
\label{sec: scatterd-light disk}
The most striking morphological feature is a dark lane at a position angle of $\sim$125$^\circ$, as indicated in the upper-center panel of figure~\ref{fig:sphere-tails}. This dark lane is also detected in Subaru/HiCIAO polarimetric observations \citep{deLeon2015} and in the NACO Ks-band (figure~\ref{fig:sphere_images}, bottom panel), and could trace shadowing by a misaligned inner disk.
While the extent of the disk is not directly evident from our SPHERE observations, the ALMA observations indicate the presence of a circumstellar disk in the continuum, and in the velocity channels of the $^{12}$CO $J=3-2$ line that show Keplerian rotation. The scattered light signal that is co-spatial with this kinematic signature is detected out to $\sim$0.7\arcsec\,(111\,au). The inner disk (within 1\,au) was resolved with near infrared interferometry by \cite{Labdon2019}. With image reconstruction they found it to be inclined by 52.8$^\circ\pm$2$^\circ$ with a position angle of 140.1$^\circ\pm$0.2$^\circ$, and the near side towards the West\footnote{We note that we use the convention that at a position angle of 0$^\circ$ the near side of a hypothetical disk is located to the West. In this convention the position angle reported by \cite{Labdon2019} is 320.1$^\circ$.}.
We show the reconstructed image from the interferometric data in figure~\ref{fig:sphere-tails}, upper-left panel. The inner disk position angle is close to the position angle of the dark lane seen in the outer disk.
We compared the position angle of the dark lane between the archival NACO data taken in 2011, the literature Subaru data, taken in 2014, and the new SPHERE data taken in 2019. Within this $\sim$8\,year timeframe we do not find a significant\footnote{The calibration accuracy of SPHERE is on the order of 0.1$^\circ$, while for NACO it is on the order of 0.2$^\circ$.} change in the orientation. A change in orientation would be expected if the inner disk misalignment was caused by mutual interactions with a short period binary companion, a scenario found to be unlikely.\\
We find that the disk is significantly brighter in the North-East than in the South-West. This was already reported by \cite{deLeon2015}, who interpreted this as an azimuthally asymmetric dust distribution. However this brightness asymmetry would be a natural consequence of the scattering phase function if we see an inclined disk with the near side in the North-East. Indeed from the ALMA data discussed in section~\ref{section: ALMA comparion}, we find a most likely position angle of 122.9$^\circ\pm$1.2$^\circ$ for the outer disk, which fits well with this interpretation. Given this position angle and assuming that the North-East side is the near side of the outer disk, this implies a strong misalignment between the inner and outer disk. Using the values provided by \cite{Labdon2019} for the inner disk and our measurement for the outer disk we find a relative inclination of $\sim$70$^\circ$. This is consistent with the presence of a narrow shadow lane. \\%\footnote{We note that the angle of the shadow lane does not necessarily have to be aligned with the major axis of the inner disk \citep{Min2017}.}. \\
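The relative inclination between two disk planes follows from the angle between their rotation axes; a sketch of this computation is given below. Note that the result is sensitive to how the near-side ambiguity in the position angles is resolved, so the exact inputs that reproduce the quoted value depend on the adopted convention.

```python
import math

def mutual_inclination_deg(i1, pa1, i2, pa2):
    """Angle between two disk axes given inclinations and position angles (deg):
    cos(d) = cos(i1) cos(i2) + sin(i1) sin(i2) cos(pa1 - pa2)."""
    i1r, pa1r, i2r, pa2r = (math.radians(v) for v in (i1, pa1, i2, pa2))
    c = (math.cos(i1r) * math.cos(i2r)
         + math.sin(i1r) * math.sin(i2r) * math.cos(pa1r - pa2r))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))
```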
In addition to the brightness asymmetry, we see considerable sub-structure in the disk. In particular several spiral features are present. In figure~\ref{fig:sphere-tails}, upper-right panel, we show the high-pass filtered version of the SPHERE Q$_\phi$ image with the spiral features highlighted. We can clearly identify six features in the South-West and four features in the North-East, with possible other (not highlighted) features in the East. The spirals in the South-West have generally larger pitch angles (25$^\circ$ to 49$^\circ$) than the spirals in the North-East (7$^\circ$ to 22$^\circ$). This could be explained by projection effects on an inclined scattering surface if the North-East side is indeed the front side of the disk \citep{Dong2016}.
\begin{figure*}
\center
\gridline{\fig{su_aur_disk_features.pdf}{0.99\textwidth}{}}
\gridline{\fig{suaur_qphi_annotated_white_v53.pdf}{0.98\textwidth}{}}
\caption{
\emph{Upper-Left:} Near-infrared interferometric image reconstructed by \cite{Labdon2019} (reproduction of their figure~3). We overlay the major axis of the inner disk that they recover. \emph{Upper-Middle:} SPHERE/IRDIS Q$_\phi$ image of the innermost part of SU\,Aur. We overlay the major axis of the interferometric inner disk and the direction of the shadow lane seen in scattered light on the outer disk. \emph{Upper-Right:} Same as the middle panel, but after application of a high pass filter. We mark the visible spiral structures.
\emph{Bottom:} SPHERE Q$_\phi$ image of SU\,Aur. Several large scale structures are annotated. The polarized flux was scaled by the inclination corrected square of the separation from the primary star to compensate for the illumination drop-off.
}
\label{fig:sphere-tails}
\end{figure*}
\subsection{The extended structure}
\label{scattering-tail-section}
\begin{figure}
\center
\includegraphics[width=0.48\textwidth]{degree_of_pol_rdi.pdf}
\caption{SPHERE degree of linear polarization. The total intensity image used is the iterative RDI reduction shown in appendix~\ref{IRDI-IADI}. Regions for which the total intensity values were not well recovered were set to 0. The inner disk is masked since it is dominated by artefacts in total intensity.
}
\label{fig:degree-of-pol}
\end{figure}
\cite{deLeon2015} reported the detection of a dust tail with Subaru/HiCIAO in scattered polarized light extending from the disk around SU\,Aur roughly 2.5\arcsec{} to the West. We find a similar tail structure in both SPHERE and NACO data. The lower signal-to-noise NACO data show a single tail, while the SPHERE data show that this structure extends much further out than seen with either HiCIAO or NACO and consists of several tails with different curvatures and orientations.
In figure~\ref{fig:sphere-tails} the most prominent structures are annotated. There are at least 4 distinct tails that extend towards the West, labeled 1 to 4. The brightest tail in the SPHERE Q$_\phi$ image is the southern-most of these structures, i.e., tail 1. This tail is the structure seen in the NACO data and in the Subaru data. In the SPHERE image it becomes clear that this tail connects with the northern part of the Keplerian disk. Not only can we trace the tail structure until it merges with the disk, but we can also see the projection of the shadow lane from the outer disk on the dust tail (see annotation in figure~\ref{fig:sphere-tails}). The angle of the shadow changes as would be expected if the dust tail is approaching the disk from above the disk-plane, i.e., from in between the observer and the disk.\\
We see several fainter tails located north of tail 1, marked with numbers 2-4. Some of them are more visible in the total intensity ADI image shown in figure~\ref{fig:sphere_images}, middle panel, indicating that they likely have a low degree of polarization. We also see a structure extending to the north at a significantly different angle than tail 1-4, and labeled northern tail in figure~\ref{fig:sphere-tails}.
The northern tail appears to vanish just before it reaches the disk (see annotation in figure~\ref{fig:sphere-tails}), indicating that it is either below tail 1 or behind the disk. In order to better understand the geometry of the system we computed the degree of linear polarization of the extended structures. We utilized an iterative reference differential imaging approach (Vaendel et al. in prep.), complemented with angular differential imaging (Stapper et al., in prep.), briefly described in appendix~\ref{IRDI-IADI}.
The result is shown in figure~\ref{fig:degree-of-pol}.
Both tail 1 and the northern tail stand out with a higher degree of linear polarization compared to the surrounding structures. Assuming a standard bell curve to map the degree of linear polarization to the scattering angles (e.g., \citealt{Stolker2016b}), both tail 1 and the northern tail should be at intermediate scattering angles, with an ambiguity between forward and back-scattering. However, tail 1 is significantly brighter in the SPHERE Q$_\phi$ image than the northern tail (factor 1.5 to 4 depending on the point of measurement). This is also evidenced by the fact that tail 1 is detected by SPHERE, NACO and Subaru, whereas the northern tail is only visible in the highest signal-to-noise SPHERE data. Given that tail 1 can be smoothly traced until it connects with the northern part of the disk (i.e., the near side), it is clear that the light from tail 1 is scattered with angles smaller than 90$^\circ$. Since the northern tail shows a similar degree of polarization, but overall smaller signal, we conclude that its light is scattered with angles larger than 90$^\circ$.
This means the northern tail should be located behind the disk along the line of sight. \\
In addition to the distinct tail-like structures, we see a more complex signal to the East of the disk, followed by a zone with a distinct lack of signal toward the South-East (see annotation in figure~\ref{fig:sphere-tails}). If the signal to the East and South-East is located above the Keplerian disk (i.e., closer to the observer), then the region without signal might be a natural continuation of the shadow lane visible on the disk. In particular the signal to the East shows a very similar degree of linear polarization to tail 1, indicating similar scattering angles.
\section{ALMA observations}
\label{section: ALMA comparion}
\cite{Akiyama2019} presented ALMA Band 6 observations of SU Aur, showing the dust continuum emission of the Keplerian disk and revealing an extended tail structure to the West in the gas.
The dust continuum emission shows a marginally resolved disk without particular features. In figure~\ref{fig:sphere-alma-dust} we show an overlay of the mm continuum emission and the SPHERE data. Large dust particles are concentrated at the location of the disk also seen with SPHERE but are not detected in the tail structures to the West. The mm-emission appears less extended than the scattered light, possibly indicating efficient radial drift of the large dust particles.\\
From a fit of a simple symmetric model to the continuum emission, we find a position angle of 122.9$^\circ\pm$1.2$^\circ$ and an inclination of 53.0$^\circ\pm$1.5$^\circ$ (see appendix~\ref{app:alma-fit} for details). Given that the gas and scattered light show a highly asymmetric structure it is possible that this fit is affected by systematic uncertainties.\\
In figure~\ref{fig:sphere-alma-overlay} we show an overlay of two velocity channels of the $^{12}$CO 3-2 line emission with the SPHERE data. The first channel corresponds to a velocity of $\sim$4.51\,km\,s$^{-1}$ and is blue-shifted relative to the intrinsic system velocity \citep[$\sim$6\,km\,s$^{-1}$;][]{Akiyama2019}, while the second channel corresponds to a velocity of $\sim$7.51\,km\,s$^{-1}$ and is red-shifted. We show all channel maps in figure~\ref{fig:channel maps}. The blue-shifted frequency channels clearly trace the northern tail detected in the SPHERE data, while the red-shifted channels trace the tails to the West, in particular tail 1. Since we inferred from the scattered light data that tail 1 is above, and the northern tail below, the disk, we can thus conclude that both of these tails trace material falling onto the disk from the surrounding cloud. In order to check if the measured velocities are physical, we computed the free-fall velocity around SU\,Aur and find values from 4 to 1.8\,km\,s$^{-1}$ for separations between 200\,au and 1200\,au. This is compatible with the projected velocities measured in the dust tails.\\
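The quoted free-fall velocities can be reproduced directly from $v_{\rm ff}=\sqrt{2GM_\star/r}$ with $M_\star = 2\,M_\odot$:

```python
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30     # solar mass [kg]
AU = 1.496e11        # astronomical unit [m]

def v_freefall_kms(r_au, m_star_msun=2.0):
    """Free-fall velocity sqrt(2 G M / r) in km/s at radius r_au (in au)."""
    return math.sqrt(2.0 * G * m_star_msun * M_SUN / (r_au * AU)) / 1e3

# v_freefall_kms(200.0) -> ~4.2 km/s; v_freefall_kms(1200.0) -> ~1.7 km/s
```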
Additionally, we see strong red-shifted signal to the East and South-East of the Keplerian disk. If the detected scattered light signal is above the disk, then this may indicate that we see additional in-fall of material from these directions. It may also be that we simply trace the material falling in from the West as it is caught in Keplerian motion and spirals onto the disk.
\subsection{Possible foreground contamination}
In the red-shifted velocity channels there is in general emission with a velocity gradient from West to East, which may indicate that some of the signal is coming from the embedding cloud and not the infalling material. However, the blue-shifted channels are highly concentrated on the circumstellar disk and the northern tail, thus this is less of a concern in this case. We investigated if possible foreground contamination might be responsible for some of the signatures seen in scattered light. \\
For the shadow lane feature to be produced by optically thick foreground material, we would require a thin filament aligned such that it crosses our line of sight towards the stellar position. To create a narrow feature like the shadow lane, this filament would likely have to be close to the star. In this case we would expect to pick up an illumination signature of such a hypothetical structure in scattered light, in particular since we would see it under small scattering angles and thus close to the peak of the total intensity scattering phase function. Such a signature is not visible in the SPHERE or NACO data (neither in polarized nor total intensity). Additionally, if such a structure was present, we would expect SU\,Aur to show significant extinction. The literature value for the extinction of SU\,Aur is $A_V\sim$1\,mag \citep{Calvet2004};
thus it does not appear that we observe the star obscured behind an optically thick filamentary structure. We therefore discard the possibility of highly localized foreground structures in front of the Keplerian disk and favor a misaligned inner disk scenario, supported by the orientation of the inner and outer disk derived by \cite{Labdon2019} and in this work. \\
Given the ALMA emission in the red-shifted channels, it may still be possible that thin, patchy foreground material influences the brightness distribution between the extended tail structures in scattered light.
In the red-shifted channels there is some emission visible at the position of the northern tail between velocities of 6.5 and 8.5\,km\,s$^{-1}$.
However, the emission at the position of the northern tail is in all channels weaker than the emission located at tail 1 and tails 2-4.
If a large portion of this red-shifted emission is foreground material, we thus conclude that the northern tail should be less affected than the region of tails 1-4. Nevertheless, the northern tail is fainter in scattered light than tail 1, which we interpret as resulting from tail 1 being seen under small scattering angles and the northern tail under large scattering angles (i.e. tail 1 in front and the northern tail in the back).
If significant foreground absorption is present, then the systematic effect would be such that tail 1 would appear even brighter relative to the northern tail were the absorption not present. Thus it would not change our conclusion on the relative radial positions of the two structures as presented in section~\ref{scattering-tail-section}.
\subsection{Relative line fluxes in disk and tails}
To test whether the angular momentum transported by the dust tails is in principle sufficient to cause the misalignment of inner and outer disk that we discuss in section~\ref{sec: scatterd-light disk}, we used the available $^{12}$CO and $^{13}$CO line data to qualitatively assess the mass ratio between the Keplerian disk and the dust tails. Of the two line observations, one expects $^{13}$CO to be optically thinner than $^{12}$CO and thus to trace more closely the density of the gas. \\
To estimate the mass ratio we integrated over the detected flux density in the moment 0 map of both data sets shown in figure~\ref{fig: moment-maps}.
For the Keplerian disk area we used a circular aperture with an outer radius of 0.7\arcsec{}. For the integrated flux in the tails we used two elliptical apertures in the $^{12}$CO data for the areas that coincide with the northern and the western tails in the SPHERE scattered light data. In the $^{13}$CO data we used only one elliptical aperture centered on the western region since the northern tail is not well detected in the $^{13}$CO emission.
Using this procedure we find a flux density ratio of 0.6 for the $^{12}$CO data and 2.9 for the $^{13}$CO data.\\
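The aperture integration described above can be sketched as follows; this is a schematic re-implementation with illustrative aperture parameters (the function names and the synthetic inputs are ours, not the actual apertures or pixel scales used for the measurement):

```python
import numpy as np

def elliptical_mask(shape, center, a, b, theta=0.0):
    """Boolean mask selecting pixels inside an ellipse with semi-axes a, b
    (in pixels), centered at center=(x, y), rotated by theta (radians)."""
    yy, xx = np.indices(shape)
    dx, dy = xx - center[0], yy - center[1]
    ct, st = np.cos(theta), np.sin(theta)
    u = (dx * ct + dy * st) / a
    v = (-dx * st + dy * ct) / b
    return u**2 + v**2 <= 1.0

def tail_to_disk_flux_ratio(moment0, disk_center, disk_radius_px, tail_apertures):
    """Sum the moment-0 flux density inside the tail apertures and divide
    by the flux inside a circular aperture around the disk."""
    disk = elliptical_mask(moment0.shape, disk_center, disk_radius_px, disk_radius_px)
    disk_flux = moment0[disk].sum()
    tail_flux = sum(moment0[elliptical_mask(moment0.shape, c, a, b, th)].sum()
                    for (c, a, b, th) in tail_apertures)
    return tail_flux / disk_flux
```

Applied to the moment-0 maps with one circular disk aperture and one or two elliptical tail apertures, this reproduces the kind of flux density ratios quoted above (0.6 for $^{12}$CO, 2.9 for $^{13}$CO), modulo the actual aperture geometry.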
Given that the Keplerian disk has a higher temperature than the tails, which are farther away from the star, a smaller column density of material is needed in the disk to produce the same amount of flux compared to the tails. For the optically thinner $^{13}$CO data we find roughly three times higher integrated flux in the tails than in the disk, which may indicate that there is significantly more mass in the tails than in the disk. We point out that this assumes a constant gas-to-dust ratio in both areas. Moreover, even $^{13}$CO may still be optically thick in the disk (and possibly the tails), and thus we cannot directly translate the flux density ratio to a mass ratio.
However, it is encouraging that the flux density ratio between tail and disk increases between the optically thicker tracer $^{12}$CO and the thinner tracer $^{13}$CO. Given that the $^{13}$CO emission should be more density dominated than the $^{12}$CO emission this indicates that there is indeed a substantial amount of material in the tails compared to the disk.\\
For a proper measurement of the gas masses (and the angular momentum) in the different structures surrounding SU\,Aur, deep observations of optically thin tracers are needed. However, the measurements extracted from the existing data are well compatible with a scenario in which the infalling material misaligns the outer disk of SU\,Aur.
\begin{figure}
\center
\includegraphics[width=0.48\textwidth]{su_aur_channel_32_red_blue_v2.pdf}
\caption{SPHERE Q$_\phi$ image with the ALMA $^{12}$CO 3-2 channel maps as blue and red contour overlay. Contour levels increase in steps of 2\,$\sigma$ above the rms, starting at 5\,$\sigma$. The ALMA beam size and orientation are indicated by the white ellipse in the lower right corner. The upper panel corresponds to signal blue-shifted ($-$1.49\,km\,s$^{-1}$) relative to the systemic velocity and the bottom panel corresponds to red-shifted signal (+1.51\,km\,s$^{-1}$).
}
\label{fig:sphere-alma-overlay}
\end{figure}
\begin{figure}
\center
\includegraphics[width=0.50\textwidth]{12co_13co_moment0_v5.pdf}
\caption{Moment 0 maps of the $^{12}$CO(3-2) and $^{13}$CO(2-1) data of the SU\,Aur system. The beam size is indicated by the white ellipse in the lower right corner.
}
\label{fig: moment-maps}
\end{figure}
\section{HST/STIS observations}
The large scale dust tail extending from SU\,Aur was previously captured in optical scattered light by \cite{2001AAS...199.6015G}, using HST/STIS. While in the STIS coronagraphic images the disk region is masked, STIS offers a significantly larger field of view. We show the HST/STIS data, after masking the coronagraphic bars and combining two telescope roll angles, in figure~\ref{fig:stis-sphere}. We overlaid the contours of the SPHERE data for comparison, showing the complementarity of both instruments. \\
The dust tails seen in SPHERE seamlessly connect with the structures visible in the STIS image. In the STIS data it becomes apparent that the tail structure is turning towards the South. This is the direction in which AB\,Aur is located, which shows spectacular extended spiral features in scattered light \citep{Boccaletti2020}. The direction towards AB\,Aur may support the scenario discussed in \cite{Akiyama2019}, where they speculate that cloud material initially falls towards the center of gravity between SU\,Aur and AB\,Aur before it accretes onto either system.
\begin{figure}
\center
\includegraphics[width=0.48\textwidth]{suaur_stis_overlay.pdf}
\caption{HST/STIS data first presented by \cite{2001AAS...199.6015G}. We masked the coronagraph and telescope spider features and combined two roll angles. SPHERE Q$_\phi$ contours are overlaid in blue.
}
\label{fig:stis-sphere}
\end{figure}
\section{Discussion and conclusions}
The observational data of SU\,Aur paint an intricate picture of the formation of the system, its connection to the surrounding molecular cloud and its evolution.
\subsection{The origin of the dust tails}
SPHERE and ALMA data in concert show that material is falling onto the disk surrounding SU\,Aur. We can thus likely rule out a recent close encounter or the ejection of a dust clump as the origin of the tails \citep{Vorobyov2020}.
Additionally, we checked the Gaia DR2 catalog and find no obvious candidates for a close stellar encounter with SU\,Aur within 100\arcsec{} (i.e. sources with similar distance or proper motion)\footnote{There are several known members of the Taurus\,X subgroup of which SU\,Aur is a member (\citealt{Luhman2009}), however the closest member is JH\,433, which is moving tangentially to SU\,Aur and thus is not a candidate for a close encounter. See appendix~\ref{Aur-cluster} for a brief overview.}, neither do we detect a point source within the 6\arcsec{} field of view of SPHERE (we are generally sensitive to all stellar and brown dwarf sources outside of $\sim$0.1\arcsec{}).\\
The asymmetry of the tail structures, from the largest scales seen with HST/STIS down to the smallest scales, may suggest an interaction between SU\,Aur and the nearby young star AB\,Aur (projected separation of $\sim$29,000\,au). In \cite{Hacar2011} it is shown that both of these systems are located in or near the same filament structure within the L\,1517 dark cloud.
This indicates that we may see the late formation of a very wide binary (or higher order) system formed by turbulent fragmentation (\citealt{Padoan1995}). The arc-like structure seen around AB\,Aur (\citealt{Grady1999}) and the large-scale dust and gas tails around SU\,Aur might be part of a connecting structure between the two systems as predicted by magneto-hydrodynamic simulations (e.g., \citealt{Kuffmeier2019}). As already suggested by \cite{Akiyama2019}, material may then be funneled along these structures towards either system.\\
The sharp tail structures in particular might be expected from classical Bondi-Hoyle accretion \citep{Bondi1944, Bondi1952}.
Following \cite{Dullemond2019}, a cloudlet which undergoes a close encounter will form a large scale arc-like structure and possibly also sharper dust tails. This depends largely on the size of the cloudlet relative to the impact parameter of the encounter, but also on the thermodynamics within the cloud. For a cloudlet with a radius larger than the impact parameter, which also cools efficiently, they produce scattered light images containing both large-scale arcs and smaller scale tails. Their synthetic scattered light images are reminiscent of the structure seen around SU\,Aur with the HST and the tails seen with SPHERE. Additionally, an encounter with a large cloudlet would also explain why we see red-shifted emission and scattered light not only in one direction but enveloping the disk. \cite{Kuffmeier2020} also show that such close encounters with cloudlets can produce extended arcs on scales of 10$^4$\,au. Their simulations suggest that these resulting structures are long lived if the protostar is at rest relative to the surrounding gas and is encountered by a cloudlet in relative motion. This may be plausible for SU\,Aur for two reasons. On the one hand, the fact that we indeed detect these dynamical signatures is itself an indication that they are long lived. On the other hand, the systemic velocity of $\sim$6\,km\,s$^{-1}$ fits well with the radial velocity of the surrounding filament as reported in \cite{Hacar2011}.\\
\subsection{Disk instability due to infalling material?}
The new SPHERE observations allow us to trace the large scale structures in SU\,Aur seamlessly down to scales of less than 10\,au. The two most striking features in the Keplerian disk are the multitude of spiral arms and the sharp shadow lane. Both of these can be well explained by the infall of material. Spiral waves are a common consequence of instability triggered by infalling material (\citealt{Moeckel2009, Lesur2015, Bae2015, Kuffmeier2018, Dullemond2019, Kuffmeier2020}). While simulations typically show the spiral features in the gas, we can expect to trace them in scattered light, since small dust particles are well coupled to the gas. Indeed most of these simulations produce disks with a large number of ``wispy'' spiral features that closely match the appearance of SU\,Aur as highlighted in figure~\ref{fig:sphere-tails}. Such spiral features have so far been observed in scattered light predominantly in circumbinary disks, e.g., HD\,142527 (\citealt{Avenhaus2017}), HD\,34700 (\citealt{Monnier2019}) and GG\,Tau (\citealt{Keppler2020}). At this time there is no evidence that SU\,Aur is a binary star. In particular it lacks the large central cavity seen in the other systems. The spiral structure rather resembles the one seen in AB\,Aur by \cite{Boccaletti2020}. This seems to fit into the picture of asymmetric late infall in both systems along the embedding filament. However, we note that \cite{Poblete2020} point out that the inner spiral structure in the disk around AB\,Aur could be caused by a stellar binary.\\
\subsection{Disk misalignment by late infall?}
The shadow lane in SU\,Aur is a feature now commonly seen in scattered light images (e.g., \citealt{Marino2015,Stolker2016a,Benisty2017,Keppler2020}) and typically explained by a misalignment or warp between inner and outer disk. Indeed by comparing our ALMA continuum fit with the interferometric result from \cite{Labdon2019} we find a relative misalignment of $\sim$70$^\circ$.
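The relative misalignment between two disk components follows from the mutual inclination of their rotation axes, each specified by an inclination and position angle; a minimal sketch (the input angles shown are placeholders, not the fitted values for SU\,Aur):

```python
import math

def mutual_inclination_deg(i1, pa1, i2, pa2):
    """Angle (degrees) between two disk rotation axes, each given by an
    inclination i and position angle pa in degrees. The near/far-side
    ambiguity of scattered light or continuum fits is ignored here."""
    i1r, i2r = math.radians(i1), math.radians(i2)
    dpa = math.radians(pa1 - pa2)
    c = (math.cos(i1r) * math.cos(i2r)
         + math.sin(i1r) * math.sin(i2r) * math.cos(dpa))
    # Clamp against floating-point overshoot before taking the arccosine.
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

# Mutual inclination of two placeholder disk orientations:
print(mutual_inclination_deg(50.0, 60.0, 52.0, 110.0))
```

Note that because the sign of each inclination is ambiguous, such fits generally allow several mutual-inclination solutions; the value quoted in the text corresponds to one choice of geometry.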
While there is ample theory on how such a misalignment is caused, there is little observational evidence of the process. \cite{Brinch2016} reported on the misalignment of circumstellar disks in the IRS\,43 multiple system with respect to the surrounding circumbinary disk, presumably caused by the chaotic interaction of the stellar cores. An even more spectacular case of such a misalignment by multiple stars was recently shown for the GW\,Ori system by \cite{Bi2020} and \cite{Kraus2020}.
\cite{Sakai2019} found a warped disk around the protostar IRAS\,04368+2557. They inferred from the absence of signs of a close stellar encounter that the warp should be caused by late infall of material, but did not find direct evidence. For AB\,Aur, \cite{Tang2012} show a large scale warp in the surrounding disk and multiple tentative spiral features, and suggest that both are caused by late infall. Given the results by \cite{Boccaletti2020}, who detect large scale spiral structures in scattered light down to disk scales, this seems a likely scenario. However, we note that \cite{Boccaletti2020} interpret the innermost spiral structures as signs of a forming proto-planet rather than as instability caused by infalling material.
In SU\,Aur we directly detect the infalling material and can trace it from thousands of au down to disk scales. As \cite{Dullemond2019} argue, infalling material is bound to have a vastly different orientation of angular momentum compared to the accreting disk. In section~\ref{section: ALMA comparion}, we discussed that the mass estimates in the tail and disk structure make such a scenario plausible. Thus the infall we trace is likely causing a warp of the outer disk regions. This makes the AB\,Aur and SU\,Aur pair one of the best examples of such effects caused by late infall. We note that it is in principle possible that the disk we currently see in scattered light around SU\,Aur is not primordial at all, but is actually formed as a result of a close encounter with a cloudlet (see \citealt{Dullemond2019}). In this case it would be natural that it is misaligned with respect to the (presumably) primordial inner disk detected by \cite{Labdon2019}.\\
The structures revealed around SU\,Aur by SPHERE and ALMA form a coherent picture of late infall of material that dominates the evolution of the protoplanetary disk. This mechanism not only provides an additional mass reservoir for forming planets (see the discussion in \citealt{Manara2018}) but can also trigger planet formation by gravitational instability. As suggested by \cite{Thies2011}, this scenario might be able to explain the spin-orbit misalignment found in evolved planetary systems. These new high-resolution observations enable detailed future simulations of such planet formation pathways.
\acknowledgments
We thank an anonymous referee for a thorough review that improved the paper.
We would like to thank Jonathan Williams and Antonio Garufi for fruitful discussion. We also thank Eiji Akiyama for providing their reduced ALMA data and Aaron Labdon and the A\&A journal for authorizing the reprint of the near infrared interferometric results.
SPHERE is an instrument designed and built by a consortium
consisting of IPAG (Grenoble, France), MPIA (Heidelberg, Germany), LAM (Marseille, France), LESIA (Paris, France), Laboratoire Lagrange (Nice, France), INAF - Osservatorio di Padova (Italy), Observatoire de
Gen\`{e}ve (Switzerland), ETH Zurich (Switzerland), NOVA (Netherlands), ONERA
(France), and ASTRON (The Netherlands) in collaboration with ESO.
SPHERE was funded by ESO, with additional contributions from CNRS
(France), MPIA (Germany), INAF (Italy), FINES (Switzerland), and NOVA
(The Netherlands). SPHERE also received funding from the European Commission
Sixth and Seventh Framework Programmes as part of the Optical Infrared
Coordination Network for Astronomy (OPTICON) under grant number RII3-Ct2004-001566
for FP6 (2004-2008), grant number 226604 for FP7 (2009-2012),
and grant number 312430 for FP7 (2013-2016).
C.G. acknowledges funding from the Netherlands Organisation for Scientific Research (NWO) TOP-1 grant as part
of the research program “Herbig Ae/Be stars, Rosetta stones for understanding
the formation of planetary systems”, project number 614.001.552.
Support for this work was provided by NASA through the NASA Hubble Fellowship grant \#HST-HF2-51460.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract
NAS5-26555.
This paper makes use of ALMA data ADS/JAO.ALMA\#2013.1.00426.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. We thank the North American ALMA Science Center for providing resources to reduce ALMA data.
T.B. acknowledges funding from the European Research Council under the European Union’s Horizon 2020 research and innovation programme under grant agreement No 714769 and funding from the Deutsche Forschungsgemeinschaft under Ref. no. FOR 2634/1 and under Germany's Excellence Strategy (EXC-2094–390783311).
JB acknowledges support by NASA through the NASA Hubble Fellowship grant \#HST-HF2-51427.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.
This research has used the SIMBAD database, operated at CDS, Strasbourg, France \citep{Wenger2000}.
Part of this research was carried out at the Jet Propulsion
Laboratory, California Institute of Technology, under a contract with
the National Aeronautics and Space Administration (80NM0018D0004). EEM acknowledges support from the Jet Propulsion
Laboratory Exoplanetary Science Initiative, NASA award 17-K2GO6-0030, and NASA grant NNX15AD53G.
We used the \emph{Python} programming language\footnote{Python Software Foundation, \url{https://www.python.org/}}, especially the \emph{SciPy} \citep{2020SciPy-NMeth}, \emph{NumPy} \citep{oliphant2006guide}, \emph{Matplotlib} \citep{Matplotlib} and \emph{astropy} \citep{astropy_1,astropy_2} packages.
We thank the writers of these software packages for making their work available to the astronomical community.
\vspace{2mm}
\facilities{VLT(SPHERE), VLT(NACO), ALMA, HST(STIS)}
\section{Introduction}
Let $k$ be an algebraically closed field. Let $f\,:\,C\,\longrightarrow\,D$ be a nonconstant separable morphism
between irreducible smooth projective curves defined over $k$. For any semistable vector bundle $E$ on $D$, the
pullback $f^*E$ is also semistable. However, $f^*E$ need not be stable for every stable vector bundle
$E$ on $D$. Our aim here is to characterize all $f$ such that $f^*E$ remains stable for every stable vector bundle
$E$ on $D$. It should be mentioned that $E$ is stable (respectively, semistable) if $f^*E$ is
stable (respectively, semistable).
For any $f$ as above the following five conditions are equivalent:
\begin{enumerate}
\item The homomorphism between \'etale fundamental groups
$$f_*\, :\, \pi_1^{\rm et}(C)\,\longrightarrow\, \pi_1^{\rm et}(D)$$
induced by $f$ is surjective.
\item The map $f$ does not factor through any nontrivial \'etale cover of $D$ (in particular, $f$ is not
itself a nontrivial \'etale covering).
\item The fiber product $C\times_D C$ is connected.
\item $\dim H^0(C,\, f^*f_*{\mathcal O}_C)\,=\,1$.
\item The maximal semistable subbundle of the direct image $f_*{\mathcal O}_C$ is ${\mathcal O}_D$.
\end{enumerate}
The map $f$ is called genuinely ramified if any (hence all) of the above five conditions holds.
Proposition \ref{genuinerampi1surj} and Definition \ref{def1} show
that the above statements (1), (2) and (5) are equivalent; in Lemma
\ref{genuinerampi1surjcurves} it is shown that the statements (3), (4) and (5) are equivalent.
We prove the following (see Theorem \ref{thm2}):
\begin{theorem}\label{thmi}
Let $f\,:\,C\,\longrightarrow\,D$ be a nonconstant separable morphism
between irreducible smooth projective curves defined over $k$. The map $f$ is genuinely ramified
if and only if $f^*E\, \longrightarrow\, C$ is stable for every stable vector bundle $E$ on $D$.
\end{theorem}
The key technical step in the proof of Theorem \ref{thmi} is the following (see
Proposition \ref{sumofnegetivedegreelinebundles1}):
\begin{proposition}\label{ip}
Let $f\,:\,C\,\longrightarrow\,D$ be a genuinely ramified Galois morphism, of degree $d$,
between irreducible smooth projective curves defined over $k$. Then
$$f^* ((f_* {\mathcal O}_C) / {\mathcal O}_D)\, \subset\, \bigoplus_{i=1}^{d-1} {\mathcal L}_i\, ,$$
where each ${\mathcal L}_i$ is a line bundle on $D$ of negative degree.
\end{proposition}
When $k\,=\, {\mathbb C}$, a vector bundle $F$ on a smooth complex projective curve is
stable if and only if $F$ admits an irreducible flat projective unitary connection
\cite{NS}. From this characterization of stable vector bundles it follows immediately
that given a nonconstant map $f\,:\,C\,\longrightarrow\,D$ between irreducible smooth complex
projective curves, $f^*E$ is stable for every stable vector bundle $E$ on $D$ if
the homomorphism of topological fundamental groups induced by $f$
$$
f_*\, :\, \pi_1(C,\, x_0)\,\longrightarrow\, \pi_1(D,\, f(x_0))
$$
is surjective.
Theorem \ref{thmi} was stated in \cite{PS} (it is \cite[p.~524, Lemma 3.5(b)]{PS}) without a
complete proof. In the proof of Lemma 3.5(b), which is given in three sentences in
\cite[p.~524]{PS}, it is claimed that the socle of a semistable
bundle descends under any ramified covering map (the first sentence).
\section{Genuinely ramified morphism}
The base field $k$ is assumed to be algebraically closed; there is no restriction on the
characteristic of $k$.
Let $V$ be a vector bundle on an irreducible smooth projective curve $X$ defined over $k$.
If $$0\,=\, V_0\, \subset\, V_1\, \subset\, \cdots \, \subset\, V_{n-1}\, \subset\, V_n\, =\, V$$
is the Harder-Narasimhan filtration of $V$ \cite[p.~16, Theorem 1.3.4]{HL}, then define
$$
\mu_{\rm max}(V)\,:=\, \mu(V_1)\,:=\,\frac{\text{deg}(V_1)}{\text{rk}(V_1)}\ \
\text{ and }\ \ \mu_{\rm min}(V)\,:=\, \mu(V/V_{n-1})\, .
$$
Furthermore, the above subbundle $V_1\, \subset\, V$ is called the \textit{maximal semistable subbundle} of $V$.
If $V$ and $W$ are vector bundles on $X$, and $\beta\,\in\, H^0(X,\, \text{Hom}(V,\, W))\setminus \{0\}$,
then it can be shown that
\begin{equation}\label{a1}
\mu_{\rm max}(W)\, \geq \, \mu_{\rm min}(V)\, .
\end{equation}
Indeed, we have
$$
\mu_{\rm min}(V)\, \leq \, \mu(\beta(V)) \, \leq\, \mu_{\rm max}(W)\, .
$$
\begin{remark}\label{rem1}
Let $f\,:\,X\,\longrightarrow\, Y$ be a nonconstant separable morphism
between irreducible smooth projective curves, and
let $F$ be a semistable vector bundle on $Y$. Then it is known that $f^*F$ is also semistable. Indeed, fixing
a nonconstant separable morphism $h\,:\,Z\,\longrightarrow\, X$, where $Z$
is an irreducible smooth projective curve and $f\circ h$ is Galois, we see
that $(f\circ h)^*F$ is semistable, because its maximal semistable subbundle,
being $\text{Gal}(f\circ h)$ invariant, descends to a subbundle of $F$.
The semistability of $(f\circ h)^*F\,=\, h^*f^*F$ immediately implies that $f^*F$ is semistable.
\end{remark}
\begin{lemma}\label{le1}
Let $f\,:\,X\,\longrightarrow\, Y$ be a nonconstant separable morphism of irreducible smooth projective curves.
Then for any semistable vector bundle $E$ on $X$,
$$\mu_{\rm max}(f_*E)\, \leq\, \mu(E)/{\rm deg}(f)\, .$$
More generally, for any vector bundle $E$ on $X$,
$$\mu_{\rm max}(f_*E)\, \leq\, \mu_{\rm max}(E)/{\rm deg}(f)\, .$$
\end{lemma}
\begin{proof}
Let $E$ be any vector bundle on $X$.
The coherent sheaf $f_*E$ on $Y$ is locally free, because it is torsion-free. We have
\begin{equation}\label{e1}
H^0(Y, \, {\rm Hom}(F,\,f_*E))\, \cong\, H^0(X, \, {\rm Hom}(f^*F,\,E))
\end{equation}
for any vector bundle $F$ on $Y$; see \cite[p.~110]{Ha}.
Setting $F$ in \eqref{e1} to be a semistable
subbundle $V$ of $f_*E$ we see that $H^0(X, \, {\rm Hom}(f^*V,\,E))\, \not=\, 0$.
The pullback $f^*V$ is semistable because $V$ is semistable and $f$ is separable;
see Remark \ref{rem1}.
First take $E$ to be semistable. Hence for any
nonzero homomorphism $\beta\, :\, f^*V\,\longrightarrow\, E$,
\begin{equation}\label{re1}
{\rm deg}(f)\cdot\mu(V) \,=\, \mu(f^*V)\, \leq\, \mu(E)
\end{equation}
(see \eqref{a1}). Now setting $V$ in \eqref{re1} to be the maximal semistable subbundle of
$f_*E$ we conclude that
\begin{equation}\label{e2}
\mu_{\rm max}(f_*E)\, \leq\, \mu(E)/{\rm deg}(f).
\end{equation}
To prove the general (second) statement, for any vector bundle $E$ on $X$, let
$$0\,=\, E_0\, \subset\, E_1\, \subset\, \cdots \, \subset\, E_{n-1}\, \subset\, E_n\, =\, E$$
be the Harder--Narasimhan filtration of $E$ \cite[p.~16, Theorem 1.3.4]{HL}. Consider the filtration of subbundles
\begin{equation}\label{e3}
0\,=\, f_*E_0\, \subset\, f_*E_1\, \subset\, \cdots \, \subset\, f_*E_{n-1}\, \subset\, f_*E_n\, =\, f_*E\, .
\end{equation}
{}From \eqref{e2} we know that
$$
\mu_{\rm max}((f_*E_i)/(f_*E_{i-1})) \,=\, \mu_{\rm max}(f_*(E_i/E_{i-1}))
\, \leq\, \mu(E_i/E_{i-1})/{\rm deg}(f)
$$
\begin{equation}\label{re2}
\leq\, \mu(E_1)/{\rm deg}(f)\,=\,\mu_{\rm max}(E)/{\rm deg}(f)
\end{equation}
for all $1\, \leq\, i\, \leq\, n$. Observe that using \eqref{a1} and the filtration in \eqref{e3} it
follows that
$$
\mu_{\rm max}(f_*E)\,\leq\, {\rm Max}\{\mu_{\rm max}((f_*E_i)/(f_*E_{i-1}))\}_{i=1}^n,
$$
while from \eqref{re2} we have
$$
{\rm Max}\{\mu_{\rm max}((f_*E_i)/(f_*E_{i-1}))\}_{i=1}^n
\, \leq\, \mu_{\rm max}(E)/{\rm deg}(f).$$
Therefore, $\mu_{\rm max}(f_*E)\,\leq\,\mu_{\rm max}(E)/{\rm deg}(f)$, and
this completes the proof.
\end{proof}
The following lemma characterizes the \'etale maps among the separable morphisms.
\begin{lemma}\label{etalevsdegree0}
Let $f\,:\,X\,\longrightarrow\, Y$ be a nonconstant separable morphism between irreducible
smooth projective curves. Then the following three conditions are equivalent:
\begin{enumerate}
\item The map $f$ is \'etale.
\item The degree of $f_*{\mathcal O}_X$ is zero.
\item The vector bundle $f_*{\mathcal O}_X$ is semistable.
\end{enumerate}
\end{lemma}
\begin{proof}
We have $\mu_{\rm max}(f_*{\mathcal O}_X)\, \geq\, 0$, because ${\mathcal O}_Y\, \subset\,
f_*{\mathcal O}_X$ (see \eqref{e1} and \eqref{a1}). On the other hand, from
Lemma~\ref{le1} it follows that $\mu_{\rm max}(f_*{\mathcal O}_X)\, \leq\, 0$,
so
\begin{equation}\label{y1}
\mu_{\rm max}(f_*{\mathcal O}_X)\, =\, 0\, .
\end{equation}
{}From \eqref{y1} it follows immediately that
$f_*{\mathcal O}_X$ is semistable if and only if $\text{deg}(f_*{\mathcal O}_X)\,=\, 0$.
Therefore, statements (2) and (3) are equivalent.
Let ${\mathcal R}$ be the ramification divisor on $X$ for $f$; define the effective divisor $B\,:=\,f_*{\mathcal R}$
on $Y$. We know that
$$
(\det f_*{\mathcal O}_X)^{\otimes 2}\, =\, {\mathcal O}_Y(-B)
$$
(see \cite[p.~306, Ch.~IV, Ex.~2.6(d)]{Ha}, \cite{Se}). Therefore, $f$ is \'etale (meaning $B\,=\, 0$) if and
only if ${\rm deg}(f_*{\mathcal O}_X) \,=\, 0$. So statements (1) and (2) are equivalent.
\end{proof}
Let $f\,:\,X\,\longrightarrow\, Y$ be a nonconstant separable morphism between irreducible
smooth projective curves.
The algebra structure of ${\mathcal O}_X$ produces an ${\mathcal O}_Y$--algebra structure on
the direct image $f_*{\mathcal O}_X$.
\begin{lemma}\label{maximaletalsubcover}
Let ${\mathcal V}\, \subset\, f_*{\mathcal O}_X$ be the maximal semistable subbundle.
Then ${\mathcal V}$ is a sheaf of ${\mathcal O}_Y$--subalgebras of $f_*{\mathcal O}_X$.
\end{lemma}
\begin{proof}
The action of ${\mathcal O}_Y$ on $f_*{\mathcal O}_X$ is the standard one. Let
$$
{\mathbf m}\, :\, (f_*{\mathcal O}_X)\otimes (f_*{\mathcal O}_X)\, \longrightarrow\, f_*{\mathcal O}_X
$$
be the ${\mathcal O}_Y$--algebra structure on the direct image $f_*{\mathcal O}_X$
given by the algebra structure of the coherent sheaf ${\mathcal O}_X$. We need to show that
\begin{equation}\label{e4}
{\mathbf m}({\mathcal V}\otimes {\mathcal V})\, \subset\, {\mathcal V}\, ,
\end{equation}
where $\mathcal V$ is the maximal semistable subbundle of $f_*{\mathcal O}_X$.
Since $\mathcal V$ is semistable of degree zero
(see \eqref{y1}), and $\mu_{\rm max}((f_*{\mathcal O}_X)/{\mathcal V})\, <\, 0$,
using \eqref{a1} we conclude that in order to prove \eqref{e4} it suffices to show that
${\mathcal V}\otimes {\mathcal V}$ is semistable of degree zero. Indeed, there
is no nonzero homomorphism from ${\mathcal V}\otimes {\mathcal V}$ to $(f_*{\mathcal O}_X)/{\mathcal V}$,
if ${\mathcal V}\otimes {\mathcal V}$ is semistable of degree zero.
We have $\text{deg}({\mathcal V}\otimes {\mathcal V})\,=\, 0$, because
$\text{deg}({\mathcal V})\,=\, 0$. So ${\mathcal V}\otimes {\mathcal V}$ is semistable if
it does not contain any coherent subsheaf of positive degree. As
$${\mathcal V}\otimes {\mathcal V}\, \subset\, (f_*{\mathcal O}_X)\otimes (f_*{\mathcal O}_X)\, ,$$
if ${\mathcal V}\otimes {\mathcal V}$ contains a subsheaf of positive degree, then
$(f_*{\mathcal O}_X)\otimes (f_*{\mathcal O}_X)$ also contains a subsheaf of positive degree.
Therefore, to prove the lemma it is enough to show that $(f_*{\mathcal O}_X)\otimes (f_*{\mathcal O}_X)$
does not contain any subsheaf of positive degree.
The projection formula, \cite[p.~124, Ch.~II, Ex.~5.1(d)]{Ha}, \cite{Se}, says that
\begin{equation}\label{y3}
(f_*{\mathcal O}_X)\otimes_{{\mathcal O}_Y} (f_*{\mathcal O}_X) \,\cong\, f_*(f^*(f_*{\mathcal O}_X))\,.
\end{equation}
Since ${\mathcal O}_Y\, \subset\, f_*{\mathcal O}_X$, we have
$$
{\mathcal O}_Y\,=\, {\mathcal O}_Y\otimes_{{\mathcal O}_Y} {\mathcal O}_Y\, \subset\,
(f_*{\mathcal O}_X)\otimes_{{\mathcal O}_Y} (f_*{\mathcal O}_X)\, ,
$$
and hence $\mu_{\rm max}((f_*{\mathcal O}_X)\otimes_{{\mathcal O}_Y} (f_*{\mathcal O}_X))\, \geq\, 0$.
Now from \eqref{y3} it follows that
\begin{equation}\label{y2}
\mu_{\rm max}(f_*(f^*(f_*{\mathcal O}_X)))\, \geq\, 0\, .
\end{equation}
Since $f$ is separable, the pullback, by $f$, of a semistable bundle on $Y$ is semistable (see
Remark \ref{rem1}), and consequently
the Harder--Narasimhan filtration of $f^*F$ is the pullback, by $f$, of the
Harder--Narasimhan filtration of $F$. Therefore, from \eqref{y1} it follows that
$$\mu_{\rm max}(f^*(f_*{\mathcal O}_X)) \,=\, 0\, .$$
Now applying the second part of Lemma \ref{le1},
$$0\,=\, \mu_{\rm max}(f^*(f_*{\mathcal O}_X))/\text{deg}(f)
\,\geq \, \mu_{\rm max}(f_*(f^*(f_*{\mathcal O}_X)))\, .$$
This and \eqref{y2} together imply that
$$
\mu_{\rm max}(f_*(f^*(f_*{\mathcal O}_X)))\, =\, 0\, .
$$
Therefore, using \eqref{y3} it follows that
$$
\mu_{\rm max}((f_*{\mathcal O}_X)\otimes (f_*{\mathcal O}_X))\,=\, 0\, .
$$
Hence $(f_*{\mathcal O}_X)\otimes (f_*{\mathcal O}_X)$ does
not contain any subsheaf of positive degree. It was shown earlier that the lemma follows from the
statement that $(f_*{\mathcal O}_X)\otimes (f_*{\mathcal O}_X)$ does
not contain any subsheaf of positive degree.
\end{proof}
\begin{definition}\label{def1}
A nonconstant separable morphism $f\,:\,X\,\longrightarrow\, Y$ between irreducible smooth projective curves is
called {\em genuinely ramified} if
${\mathcal O}_Y$ is the maximal semistable subbundle of $f_*{\mathcal O}_X$.
\end{definition}
\begin{proposition}\label{genuinerampi1surj}
Let $f\,:\,X\,\longrightarrow\, Y$ be a nonconstant separable morphism between irreducible smooth projective curves.
Then the following three conditions are equivalent:
\begin{enumerate}
\item The map $f$ is genuinely ramified.
\item The map $f$ does not factor through any nontrivial \'etale cover of $Y$ (in particular, $f$ itself
is not a nontrivial \'etale covering).
\item The homomorphism between \'etale fundamental groups induced by $f$
$$f_*\, :\, \pi_1^{\rm et}(X)\,\longrightarrow\, \pi_1^{\rm et}(Y)$$
is surjective.
\end{enumerate}
\end{proposition}
\begin{proof}
(1)~$\Longrightarrow$~(2): If $f$ factors through a nontrivial \'etale covering $g\,:\,{\widetilde Y}
\,\longrightarrow\, Y$, then $g_*{\mathcal O}_{\widetilde Y}$ is semistable of degree zero (see
Lemma \ref{etalevsdegree0})
and its rank coincides with the degree of $g$. Since
$$g_*{\mathcal O}_{\widetilde Y}\, \subset\, f_*{\mathcal O}_X\, ,$$ this implies that $f$ is not genuinely ramified.

(2)~$\Longrightarrow$~(1): Lemma \ref{maximaletalsubcover} says that the maximal semistable subbundle
${\mathcal V}\, \subset\, f_*{\mathcal O}_X$ is a subalgebra. If $f$ is not genuinely ramified, then
by taking the spectrum of $\mathcal V$ we obtain a separable, possibly ramified, covering map
\begin{equation}\label{g}
g\,:\,{\widetilde Y}\,=\, {\rm Spec}\, {\mathcal V}
\,\longrightarrow\, Y
\end{equation}
whose degree coincides with the rank of $\mathcal V$. We have $g_*{\mathcal O}_{\widetilde Y}\,=\, {\mathcal V}$,
and the inclusion
map ${\mathcal V}\, \hookrightarrow\, f_*{\mathcal O}_X$ defines a map $$h\, :\, X\, \longrightarrow\,
{\widetilde Y}$$ such that
\begin{equation}\label{re3}
g\circ h\,=\, f.
\end{equation}
Since $f$ is separable, from \eqref{re3}
it follows that $g$ is also separable. It can be shown that $g$ is \'etale. To prove this, first
note that $g_*{\mathcal O}_{\widetilde Y}$ is
semistable, because $g_*{\mathcal O}_{\widetilde Y}\,=\, {\mathcal V}$ and
${\mathcal V}$ is semistable. Next, from \eqref{y1} and the semistability of $\mathcal V$ it follows
that $\mu(g_*{\mathcal O}_{\widetilde Y})\,=\,\mu_{\rm max}(g_*{\mathcal O}_{\widetilde Y})
\,=\, 0$. Now Lemma \ref{etalevsdegree0} gives that the map $g$ in \eqref{g} is \'etale.
Since $g$ is \'etale, and \eqref{re3} holds, we conclude that the statement (2) fails. Hence
the statement (2) implies the statement (1).

The equivalence between the statements (2) and (3) follows from the definition of the \'etale fundamental group.
\end{proof}
Let $f\,:\,X\,\longrightarrow\, Y$ be a nonconstant separable morphism between irreducible smooth projective
curves. Let
$$
g\, :\, {\widetilde Y}\,:=\, {\rm Spec}\, {\mathcal V}\, \longrightarrow\, Y
$$
be the \'etale covering corresponding to the maximal semistable subbundle ${\mathcal V}\, \subset\,
f_*{\mathcal O}_X$ (see \eqref{g}; it was shown in the proof of Proposition \ref{genuinerampi1surj} that the map in \eqref{g} is \'etale). Let
\begin{equation}\label{h}
h\, :\, X\,\longrightarrow\, {\widetilde Y}
\end{equation}
be the morphism given by the inclusion map ${\mathcal V}\, \hookrightarrow\,
f_*{\mathcal O}_X$.
\begin{corollary}\label{cor1}
The map $h$ in \eqref{h} is genuinely ramified.
\end{corollary}
\begin{proof}
Let $\beta \, :\, Z\, \longrightarrow\, {\widetilde Y}$ be an \'etale covering such that
there is a map $$\gamma\, :\, X\,\longrightarrow\, Z$$ satisfying the condition
$\beta\circ\gamma\,=\, h$. Since $(g\circ\beta)\circ\gamma\,=\, f$, we have
\begin{equation}\label{a2}
g_*{\mathcal O}_{\widetilde Y}\, \subset\,
(g\circ\beta)_*{\mathcal O}_Z\, \subset\, f_*{\mathcal O}_X\, ;
\end{equation}
also, we have $\text{deg}((g\circ\beta)_*{\mathcal O}_Z)\,=\, 0$, because $g\circ\beta$
is \'etale (see Lemma \ref{etalevsdegree0}). But ${\mathcal V}\,=\, g_*{\mathcal O}_{\widetilde Y}$
is the maximal semistable subsheaf of $f_*{\mathcal O}_X$. Hence from \eqref{a2} it follows that
$g_*{\mathcal O}_{\widetilde Y}\, =\, (g\circ\beta)_*{\mathcal O}_Z$. This implies that
$\text{deg}(\beta)\,=\,1$. Therefore, from Proposition \ref{genuinerampi1surj} we conclude that
the map $h$ in \eqref{h} is genuinely ramified.
\end{proof}
\section{Properties of genuinely ramified morphisms}
\begin{lemma}\label{genuinerampi1surjcurves}
Let $f\,:\,C\, \longrightarrow\, D$ be a nonconstant separable morphism between irreducible
smooth projective curves. Then the following three conditions are equivalent:
\begin{enumerate}
\item The map $f$ is genuinely ramified.
\item $\dim H^0(C,\, f^*f_*{\mathcal O}_C)\,=\,1$.
\item The fiber product $C\times _D C$ is connected.
\end{enumerate}
\end{lemma}
\begin{proof}
Let ${\widetilde{C\times_D C}}$ be the normalization of the fiber product $C\times_D C$; it is
a smooth projective curve, but it is not connected unless $f$ is an isomorphism.
We have the commutative diagram
\begin{equation}\label{d1}
\xymatrix{
\widetilde{C\times_D C} \ar@/_/[ddr]_-{\widetilde{\pi}_1} \ar[dr]^-\nu \ar@/^/[drrr]^-{\widetilde{\pi}_2} & & & \\
& C\times_D C \ar[rr]^-{\pi_2} \ar[d]^-{\pi_1} && C \ar[d]^-f \\
& C \ar[rr]^-f && D
}
\end{equation}
By flat base change \cite[p.~255, Proposition 9.3]{Ha},
\begin{equation}\label{f1}
f^* (f_* {\mathcal O}_C )\,\cong\, {\pi_1}_* (\pi^*_2 {\mathcal O}_C ) \,=\, {\pi_1}_* {\mathcal O}_{C\times_D C}\, .
\end{equation}

(1)~$\Longrightarrow$~(2): Since $f$ is separable, $f^*F$ is semistable if $F$ is so (see
Remark \ref{rem1}), and hence
the maximal semistable subbundle of $f^*f_*{\mathcal O}_C$ is $f^*\mathcal V$, where ${\mathcal V}\,
\subset\, f_*{\mathcal O}_C$ is the maximal semistable subbundle. If $f$ is genuinely ramified, then
the maximal semistable subbundle of $f^*f_*{\mathcal O}_C$ is $f^*{\mathcal O}_D\,=\, {\mathcal O}_C$.
On the other hand,
$$H^0(C,\, (f^*f_*{\mathcal O}_C)/(f^*\mathcal V))\,=\, 0\, ,$$
because $\mu_{\rm max}((f^*f_*{\mathcal O}_C)/(f^*\mathcal V))\, <\, 0$ (see \eqref{a1}).
These together imply that $$\dim H^0(C,\, f^*f_*{\mathcal O}_C)\,=\,1\, ;$$
to see this consider the long exact sequence of cohomologies associated to the short exact sequence
$$
0\, \longrightarrow\, f^*\mathcal V\, \longrightarrow\,f^*f_*{\mathcal O}_C \, \longrightarrow\,
(f^*f_*{\mathcal O}_C)/(f^*\mathcal V)\, \longrightarrow\,0.
$$

(2)~$\Longleftrightarrow$~(3): From \eqref{f1} it follows that
\begin{equation}\label{t1}
H^0(C,\, f^*f_*{\mathcal O}_C) \,=\, H^0(C,\, {\pi_1}_*{\mathcal O}_{C\times_D C})
\,=\, H^0(C\times_D C,\, {\mathcal O}_{C\times_D C})\, .
\end{equation}
Consequently, $C\times _D C$ is connected if and only if $\dim H^0(C,\, f^*f_*{\mathcal O}_C)\,=\,1$.

(3)~$\Longrightarrow$~(1): Assume that $f$ is \textit{not} genuinely ramified. We will prove that $C\times_D C$ is
not connected.
Let $g\,:\,\widetilde{D}\, \longrightarrow\, D$ be the \'etale cover of $D$ given by ${\rm Spec}\, {\mathcal W}$,
where ${\mathcal W}\, \subset\, f_*{\mathcal O}_C$ is the maximal semistable subbundle (as
in \eqref{g}). The degree of this
covering $g$ is at least two, because $f$ is not genuinely ramified.
To prove that $C\times_D C$ is not connected it suffices to show that ${\widetilde D}\times_D {\widetilde D}$ is not
connected.
The projection $$\gamma\,:\, {\widetilde D}\times_D {\widetilde D}\,\longrightarrow\, {\widetilde D}$$
to the first factor is
evidently the base change of $g\,:\,\widetilde{D}\, \longrightarrow\, D$ to $\widetilde{D}$, and hence
the map $\gamma$ is \'etale. The diagonal ${\widetilde D}\, \hookrightarrow\, {\widetilde D}\times_D {\widetilde D}$
is a connected component of ${\widetilde D}\times_D {\widetilde D}$.
This implies that ${\widetilde D}\times_D {\widetilde D}$ is not connected, because the degree of
$\gamma$ is at least two.
\end{proof}
\begin{definition}\label{def2}
A nonconstant morphism $f\,:\,C\,\longrightarrow\,D$ between irreducible smooth projective curves
will be called a \textit{separable Galois morphism} if $f$ is separable, and there is a reduced
finite subgroup $\Gamma\, \subset\, \text{Aut}(C)$ such that $D\,=\, C/\Gamma$ and
$f$ is the quotient map $C\,\longrightarrow\,C/\Gamma$. Note that a separable Galois morphism
need not be \'etale. A separable Galois morphism which is genuinely ramified will be called
a \textit{genuinely ramified Galois morphism}.
\end{definition}
\begin{proposition}\label{sumofnegetivedegreelinebundles}
Let $f\,:\,C\,\longrightarrow\,D$ be a separable Galois morphism, of degree $d$,
between irreducible smooth projective curves.
Then $f^* ((f_* {\mathcal O}_C) / {\mathcal O}_D)$
is a coherent subsheaf of ${\mathcal O}^{\oplus (d-1)}_C$.
\end{proposition}
\begin{proof}
The Galois group $\text{Gal}(f)$ of $f$ will be denoted by $\Gamma$.
For any point $x\, \in\, C$, let $$\Gamma_x\, \subset\, \Gamma$$ be the isotropy
subgroup that fixes $x$ for the action of
$\Gamma$ on $C$. A point $(x,\, y)\, \in\, C\times_D C$ is singular if and
only if
$\Gamma_x$ is nontrivial. Note that for any $(x,\, y)\, \in\, C\times_D C$
the two isotropy subgroups $\Gamma_x$ and $\Gamma_y$ are conjugate, because
$y$ lies in the orbit $\Gamma\cdot x$ of $x$.
For any $\sigma\, \in\, \Gamma$, let
\begin{equation}\label{cs}
C_\sigma\, \subset\, C\times_D C
\end{equation}
be the irreducible component given by the image of the map
$$\beta_\sigma\, :\, C\,\longrightarrow\, C\times C\, ,\ \ x\,\longmapsto\, (x,\,\sigma(x))\, ;$$
clearly we have $\beta_\sigma(C)\, \subset\, C\times_D C$.
In this way, the irreducible components of $C\times_D C$ are parametrized by the elements of
the Galois group $\Gamma$. Note that there is a canonical identification
\begin{equation}\label{re8}
C\, \stackrel{\sim}{\longrightarrow}\, C_\sigma
\end{equation}
for every $\sigma\, \in\, \Gamma$.
Let $\widetilde{C\times_D C}$ be the normalization of $C\times_D C$. The maps $\beta_\sigma$,
$\sigma\, \in\, \Gamma$, in \eqref{cs} together produce
an isomorphism
\begin{equation}\label{re7}
C\times\Gamma\,\stackrel{\sim}{\longrightarrow}\, \widetilde{C\times_D C}\, ;
\end{equation}
this map sends any $(y,\, \sigma)\, \in\, C\times\Gamma$ with $\Gamma_y$ trivial to $(y,\, \sigma(y))$;
indeed, when $\Gamma_y$ is trivial, the point $(y,\, \sigma(y))$ is a smooth point of $C\times_D C$ and hence
gives a unique point of $\widetilde{C\times_D C}$. Consequently, we have
\begin{equation}\label{e5}
\widetilde{\pi}_{1*} {\mathcal O}_{\widetilde{C\times_D C}}\,=\,{\mathcal O}_C\otimes_k k[\Gamma]\, ,
\end{equation}
where $\widetilde{\pi}_1$ is the projection in \eqref{d1}, and
$k[\Gamma]$ is the group ring. The natural inclusion ${\mathcal O}_{C\times_D C}\,\hookrightarrow\,
\nu_*{\mathcal O}_{\widetilde{C\times_D C}}$, where $\nu$ is the map in \eqref{d1},
induces an injective homomorphism
\begin{equation}\label{e6}
\varphi\, :\, {\pi_1}_* {\mathcal O}_{C\times_D C}\, \hookrightarrow\,
{\pi_1}_* \nu_*{\mathcal O}_{\widetilde{C\times_D C}}\,=\,
\widetilde{\pi}_{1*}{\mathcal O}_{\widetilde{C\times_D C}}\, ,
\end{equation}
where $\pi_1$ and $\widetilde{\pi}_1$ are the maps in \eqref{d1}.
Let
\begin{equation}\label{z2}
\xi\, :\, {\mathcal O}_C \, \longrightarrow\, {\mathcal O}_C\otimes_k k[\Gamma]
\end{equation}
be the composition of homomorphisms
$$
{\mathcal O}_C \, \longrightarrow\, {\pi_1}_* {\mathcal O}_{C\times_D C} \,
\stackrel{\varphi}{\longrightarrow}\, \widetilde{\pi}_{1*}{\mathcal O}_{\widetilde{C\times_D C}}\,=\,
{\mathcal O}_C\otimes_k k[\Gamma]
$$
(see \eqref{e6} and \eqref{e5}). Note that the image $\xi({\mathcal O}_C)$ in \eqref{z2} is a subbundle
of ${\mathcal O}_C\otimes_k k[\Gamma]$, because the section $$\xi(1_C)\, \in\,
H^0(C,\, \widetilde{\pi}_{1*}{\mathcal O}_{\widetilde{C\times_D C}})\,=\,
k[\Gamma]$$
is nowhere vanishing, where $1_C$ is the constant function $1$ on $C$. There is a trivial
subbundle ${\mathcal E}$ of the trivial bundle ${\mathcal O}_C\otimes_k k[\Gamma]$
$$
{\mathcal O}^{\oplus (d-1)}_C\,=\, {\mathcal E}\, \subset\, {\mathcal O}_C\otimes_k k[\Gamma]
$$
such that
\begin{equation}\label{e7}
{\mathcal E}\oplus \xi({\mathcal O}_C)\,=\, {\mathcal O}_C\otimes_k k[\Gamma]\, .
\end{equation}
To see this, take any point $x\, \in\, C$, and choose a subspace
$$
{\mathcal E}_x\, \subset\, ({\mathcal O}_C\otimes_k k[\Gamma])_x\,=\,
k[\Gamma]
$$
such that $k[\Gamma]\,=\, {\mathcal E}_x\oplus \xi({\mathcal O}_C)_x$; then take
$$
{\mathcal E}\,:=\, {\mathcal O}_C\otimes_k {\mathcal E}_x\, \subset\,
{\mathcal O}_C\otimes_k k[\Gamma]\, .
$$
This subbundle $\mathcal E$ clearly satisfies the condition in \eqref{e7}.
{}From the decomposition in \eqref{e7} we conclude that $({\mathcal O}_C\otimes_k k[\Gamma])/\xi({\mathcal O}_C)\,=
\, \mathcal E$. Using the isomorphism in \eqref{e5}, the homomorphism $\varphi$ in \eqref{e6} gives a homomorphism
\begin{equation}\label{e8}
\varphi'\, :\, ({\pi_1}_* {\mathcal O}_{C\times_D C})/{\mathcal O}_C \, \longrightarrow\,
({\mathcal O}_C\otimes_k k[\Gamma])/(\varphi({\mathcal O}_C))\,=\,
({\mathcal O}_C\otimes_k k[\Gamma])/(\xi({\mathcal O}_C))\,=\, {\mathcal E}.
\end{equation}
On the other hand, the isomorphism in \eqref{f1} produces an isomorphism
$$
({\pi_1}_* {\mathcal O}_{C\times_D C})/{\mathcal O}_C \,\cong\,f^* ((f_* {\mathcal O}_C) / {\mathcal O}_D)\, .
$$
Combining this isomorphism with the homomorphism $\varphi'$ in \eqref{e8} we get a homomorphism
$$
f^* ((f_* {\mathcal O}_C) / {\mathcal O}_D)\, \longrightarrow\,\mathcal E\,=\, {\mathcal O}^{\oplus (d-1)}_C\, .
$$
This homomorphism is clearly an isomorphism over the nonempty open subset of $C$ where $f$ is \'etale; in
particular, it is injective, which completes the proof.
\end{proof}
Note that $f^* ((f_* {\mathcal O}_C) / {\mathcal O}_D)\,=\, (f^*f_* {\mathcal O}_C)/{\mathcal O}_C$; but we use
$f^* ((f_* {\mathcal O}_C) / {\mathcal O}_D)$ due to the relevance of $(f_* {\mathcal O}_C)/{\mathcal O}_D$.
Let
$$f\,:\,C\,\longrightarrow\,D$$ be a genuinely ramified Galois morphism, of degree $d$,
between irreducible smooth projective curves; see Definition \ref{def2}. As before, the Galois group $\text{Gal}(f)$
will be denoted by $\Gamma$, so we have $$\# \Gamma\,=\, d\, .$$
Assume that $d\, > \, 1$.
As in \eqref{cs}, the irreducible component of $C\times_D C$ corresponding
to $\sigma\, \in\, \Gamma$ will be denoted by $C_\sigma$.
The following lemma formulated in the above set-up
will be used in proving a variation of Proposition \ref{sumofnegetivedegreelinebundles}.
\begin{lemma}\label{lemord}
There is an ordering of the elements of $\Gamma$
$$
\Gamma\, =\, \{\gamma_1,\, \gamma_2,\, \cdots,\, \gamma_d\}
$$
and a self-map
$$
\eta\, :\, \{1,\, 2,\, \cdots ,\, d\}\, \longrightarrow\, \{1,\, 2,\, \cdots ,\, d\}
$$
such that
\begin{enumerate}
\item $\gamma_1\,=\, e$ (the identity element of $\Gamma$),
\item $\eta(1)\, =\, 1$,
\item $\eta(j) \, <\, j$ for all $j\, \in\, \{2,\, \cdots ,\, d\}$, and
\item $C_{\gamma_j}\bigcap C_{\gamma_{\eta(j)}}\, \not=\, \emptyset$ (see \eqref{cs} for notation).
\end{enumerate}
\end{lemma}
\begin{proof}
Set $\Gamma_0\, :=\, \{\gamma_1\}$, where $\gamma_1$ is the identity element $e\, \in\, \Gamma$; also, set
$\eta(1)\,=\, 1$. Set $N_0\, =\,1$.
Let $$\Gamma_1\, \subset\, \Gamma$$ be the subset consisting of all $\gamma\, \not=\, e$ such that
the action of $\gamma$ on $C$ has a fixed point. Therefore, $\Gamma_1$ consists of all $\gamma\, \not=\, e$ such that
the irreducible component $C_\gamma\,\subset\, C\times_D C$ intersects the component $C_e\,=\, C_{\gamma_1}$.
We note that $\Gamma_1$ is nonempty, because otherwise
$C_{\gamma_1}$ would be a connected component of $C\times_D C$, while from Lemma
\ref{genuinerampi1surjcurves}(3) we know that $C\times_D C$ is connected;
recall that $\Gamma\, \not=\,\{e\}$ and $f$ is genuinely ramified.
If $\# \Gamma_1\,=\, N_1-1\, =\, N_1-N_0$, set $\gamma_j\, \in\, \Gamma$, $2\, \leq\, j\, \leq\, N_1$, to be
distinct elements of $\Gamma_1$ in an arbitrary order. Set
$$
\eta(j)\, =\, 1
$$
for all $2\, \leq\, j\, \leq\, N_1$.
If $\Gamma_1\bigcup\Gamma_0\, \not=\, \Gamma$,
let $$\Gamma_2\, \subset\, \Gamma\setminus (\Gamma_1\cup \Gamma_0)$$
be the subset consisting of all $\gamma\, \in\, \Gamma\setminus (\Gamma_1\bigcup \Gamma_0)$ such that the irreducible
component
$$C_\gamma\,\subset\, C\times_D C$$ intersects the component $C_\sigma$ for some $\sigma\, \in\, \Gamma_1$.
Note that such a component $C_\gamma$ does not intersect $C_{\gamma_1}$,
because in that case we would have $\gamma\, \in\, \Gamma_1$.
If $\# \Gamma_2\,=\, N_2-N_1$, set $\gamma_j\, \in\, \Gamma$, $N_1+1\, \leq\, j\, \leq\, N_2$,
to be distinct elements of $\Gamma_2$ in an arbitrary order. For every $N_1+1\, \leq\, j\, \leq\, N_2$, set
$$
\eta(j)\, \in\, \, \{2,\, \cdots ,\, N_1\}
$$
such that the component $C_{\gamma_j}\,\subset\, C\times_D C$
intersects the component $C_{\gamma_{\eta(j)}}$; the above definition of $\Gamma_2$
ensures that such an $\eta(j)$ exists. If there is
more than one $m\, \in\, \{2,\, \cdots ,\, N_1\}$ such that
$C_{\gamma_j}\,\subset\, C\times_D C$ intersects the component $C_{\gamma_m}$, then choose
$\eta(j)$ arbitrarily from them.

Now, if $\Gamma\setminus (\bigcup_{i=0}^{n-1} \Gamma_i)\, \not=\, \emptyset$, inductively define
$$
\Gamma_n\, \subset\, \Gamma\setminus (\bigcup_{i=0}^{n-1} \Gamma_i)
$$
to be the subset consisting of all $\gamma\, \in\, \Gamma\setminus (\bigcup_{i=0}^{n-1} \Gamma_i)$ such that the
irreducible component $C_\gamma\,\subset\, C\times_D C$ intersects the component $C_\sigma$ for some
$\sigma\, \in\, \Gamma_{n-1}$. Note that such a component $C_\gamma$ does not intersect $C_\sigma$ for any
$\sigma\, \in\, \bigcup_{i=0}^{n-2} \Gamma_i$, because in that case we would have $\gamma\, \in\, \bigcup_{i=0}^{n-1} \Gamma_i$.
If $\# \Gamma_n\,=\, N_n- \sum_{i=0}^{n-1}\# \Gamma_i\,=\, N_n-N_{n-1}$, set $\gamma_j\, \in\,
\Gamma$, $N_{n-1}+1\, \leq\, j\, \leq\, N_n$,
to be distinct elements of $\Gamma_n$ in an arbitrary order. For $N_{n-1}+1\, \leq\, j\, \leq\,
N_n$, set
$$
\eta(j)\, \in\, \, [N_{n-2}+1,\, N_{n-1}]\,=\,
\{1+\sum_{i=0}^{n-2}\# \Gamma_i,\, \cdots ,\, \sum_{i=0}^{n-1}\# \Gamma_i\}
$$
such that the component $C_{\gamma_j}\,\subset\, C\times_D C$
intersects the component $C_{\gamma_{\eta(j)}}$. If $C_{\gamma_j}$ intersects more than
one such component, choose $\eta(j)$ to be any one from them, as before.
Since $\Gamma$ is a finite group, we have $\Gamma_n\, =\,\emptyset$ for all $n$ sufficiently large. Set
$$
{\mathbb S}\,=\, \sum_{i=0}^\infty \# \Gamma_i\, =\, \text{Max}_{i\geq 0}\{N_i\}\, .
$$
Note that
$$
\bigcup_{i=1}^{\mathbb S} C_{\gamma_i}\, \subset\, C\times_D C
$$
is the connected component of $C\times_D C$ containing $C_{\gamma_1}$. Hence from
Lemma \ref{genuinerampi1surjcurves}(3) we know that $$\bigcup_{i=1}^{\mathbb S} C_{\gamma_i}\,=\, C\times_D C\, .$$
In other words, we have ${\mathbb S}\,=\, d\,=\, \# \Gamma$. This completes the proof of the lemma.
\end{proof}
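The inductive construction in the proof above is, in algorithmic terms, a breadth-first search on the graph whose vertices are the elements of $\Gamma$ and whose edges join $\sigma$ and $\gamma$ whenever $C_\sigma\bigcap C_\gamma\,\not=\,\emptyset$. A minimal sketch of this search, assuming the intersection relation is supplied as an adjacency list \texttt{meets} (names hypothetical; indices are 0-based, so property (3) of the lemma becomes \texttt{eta[j] < j}):

```python
from collections import deque

def bfs_ordering(gamma, meets, e):
    """Breadth-first search producing an ordering gamma_1, ..., gamma_d of the
    group elements together with the map eta of the lemma (0-indexed here):
    order[0] == e, eta[0] == 0, eta[j] < j, and the components indexed by
    order[j] and order[eta[j]] intersect (encoded by the adjacency list
    `meets`).  Assumes the intersection graph is connected, which is exactly
    the connectedness of C x_D C used in the proof."""
    order, eta = [e], [0]          # gamma_1 = e and eta(1) = 1
    index = {e: 0}
    queue = deque([e])
    while queue:
        s = queue.popleft()
        for t in meets[s]:
            if t not in index:         # first time the component C_t is reached
                index[t] = len(order)
                eta.append(index[s])   # the "parent" appeared strictly earlier
                order.append(t)
                queue.append(t)
    assert len(order) == len(gamma)    # S = d = #Gamma, by connectedness
    return order, eta
```

The sets $\Gamma_n$ of the proof are exactly the BFS layers: $\Gamma_n$ consists of the vertices first reached at distance $n$ from $e$.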
\begin{proposition}\label{sumofnegetivedegreelinebundles1}
Let $f\,:\,C\,\longrightarrow\,D$ be a genuinely ramified Galois morphism, of degree $d$,
between irreducible smooth projective curves. Then
$$f^* ((f_* {\mathcal O}_C) / {\mathcal O}_D)\, \subset\, \bigoplus_{i=1}^{d-1} {\mathcal L}_i\, ,$$
where each ${\mathcal L}_i$ is a line bundle on $D$ of negative degree.
\end{proposition}
\begin{proof}
As in Lemma \ref{lemord}, the Galois group $\text{Gal}(f)$ is denoted by $\Gamma$.
The ordering in Lemma
\ref{lemord} of the elements of $\Gamma$ produces an isomorphism of $k[\Gamma]$ with $k^{\oplus d}$.
Consequently, from \eqref{e5} we have
\begin{equation}\label{z1}
\widetilde{\pi}_{1*} {\mathcal O}_{\widetilde{C\times_D C}}\,=\,
{\mathcal O}_C\otimes_k k[\Gamma]\,=\, {\mathcal O}^{\oplus d}_C\, .
\end{equation}
Let
$$
\Phi\, :\, \widetilde{\pi}_{1*} {\mathcal O}_{\widetilde{C\times_D C}}\,=\, {\mathcal O}^{\oplus d}_C
\, \longrightarrow\, {\mathcal O}^{\oplus d}_C \,=\,
\widetilde{\pi}_{1*} {\mathcal O}_{\widetilde{C\times_D C}}
$$
be the homomorphism defined by
\begin{equation}\label{re4}
(f_1,\, f_2,\, \cdots, \, f_d)\, \longmapsto\,
(f_1-f_{\eta(1)},\, f_2-f_{\eta(2)},\, \cdots, \, f_d-f_{\eta(d)})\, ,
\end{equation}
where $\eta$ is the map in Lemma \ref{lemord};
more precisely, the $i$-th
component of $\Phi(f_1,\, f_2,\, \cdots, \, f_d)$ is $f_i-f_{\eta(i)}$. It is straightforward to
check that
$$
{\mathcal F}\, :=\, \Phi({\mathcal O}^{\oplus d}_C)\, \subset\,
{\mathcal O}^{\oplus d}_C \,=\,
\widetilde{\pi}_{1*} {\mathcal O}_{\widetilde{C\times_D C}}
$$
is a trivial subbundle of rank $d-1$; the first component of $\Phi(f_1,\, f_2,\, \cdots, \, f_d)$
vanishes identically, because $\eta(1)\,=\,1$. More precisely, we have
\begin{equation}\label{zf2}
{\mathcal F}\,=\, {\mathcal O}^{\oplus (d-1)}_C\, \subset\, {\mathcal O}^{\oplus d}_C
\,=\, \widetilde{\pi}_{1*} {\mathcal O}_{\widetilde{C\times_D C}}\, ,
\end{equation}
where ${\mathcal O}^{\oplus (d-1)}_C$ is the subbundle of ${\mathcal O}^{\oplus d}_C$
spanned by all $(f_1,\, f_2,\, \cdots, \, f_d)$ such that $f_1\,=\,0$.
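In concrete terms, the homomorphism $\Phi$ of \eqref{re4} acts on each fibre $k^{\oplus d}$ by subtracting the $\eta(i)$-th coordinate from the $i$-th. A minimal Python sketch (names hypothetical; 0-indexed, so $\eta(1)\,=\,1$ becomes \texttt{eta[0] == 0}) illustrating that the first output coordinate always vanishes, so the image has rank at most $d-1$:

```python
def phi(fs, eta):
    """Fibrewise action of the homomorphism Phi of (re4): the i-th output
    component is f_i - f_{eta(i)}.  Since eta[0] == 0, the first output
    component is always f_0 - f_0 = 0."""
    return [fs[i] - fs[eta[i]] for i in range(len(fs))]
```

The identically vanishing first component matches \eqref{zf2}: the image of $\Phi$ lies in the subbundle ${\mathcal O}^{\oplus (d-1)}_C$ of tuples with first entry zero.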
{}From \eqref{zf2} it follows immediately that
\begin{equation}\label{cf2}
\widetilde{\pi}_{1*} {\mathcal O}_{\widetilde{C\times_D C}}\,=\,
{\mathcal O}^{\oplus d}_C\,=\, {\mathcal F}\oplus \xi({\mathcal O}_C)\, ,
\end{equation}
where $\xi({\mathcal O}_C)$ is the subbundle of ${\mathcal O}^{\oplus d}_C\,=\,
{\mathcal O}_C\otimes_k k[\Gamma]$ in \eqref{z2} (see \eqref{z1}).
In \eqref{d1} we have $\widetilde{\pi}_1\,=\,\pi_1\circ\nu$, and hence,
as in \eqref{e6},
there is a natural homomorphism
\begin{equation}\label{re5}
\varphi\, :\,
{\pi_1}_* {\mathcal O}_{C\times_D C}\, \hookrightarrow\, \widetilde{\pi}_{1*} {\mathcal O}_{\widetilde{C\times_D C}}
\end{equation}
which is an isomorphism over the open subset of $C$ where $f$ is \'etale. Therefore, from
\eqref{f1} and \eqref{cf2} we get an injective homomorphism
of coherent sheaves
\begin{equation}\label{cf3}
\Psi\, :\, f^* ((f_* {\mathcal O}_C) / {\mathcal O}_D)\,\longrightarrow \, {\mathcal F}\,=\,
{\mathcal O}^{\oplus (d-1)}_C\, ;
\end{equation}
it is similar to \eqref{e8}, except that now the direct summand ${\mathcal F}$ is chosen carefully
(it was $\mathcal E$ in \eqref{e8}).
Note that since $\text{rk}(f^* ((f_* {\mathcal O}_C) / {\mathcal O}_D))\,=\, d-1\,=\,
\text{rk}({\mathcal O}^{\oplus (d-1)}_C)$, the homomorphism $\Psi$ in \eqref{cf3} is generically
an isomorphism, because it is an
injective homomorphism of coherent sheaves. More precisely, $\Psi$ is an isomorphism over the open subset
of $C$ where the map $f$ is \'etale.
Consider the map $\eta$ in Lemma \ref{lemord}. For every $1\,\leq\, i\, \leq\, d-1$, choose a point
\begin{equation}\label{zi}
z_i\, \in\, C_{\gamma_{i+1}}\bigcap C_{\gamma_{\eta(i+1)}}\, ;
\end{equation}
this is possible because
the fourth property in Lemma \ref{lemord} says that the intersection $C_{\gamma_{i+1}}\bigcap
C_{\gamma_{\eta(i+1)}}$ is nonempty. Recall from \eqref{re8} that
$C$ is identified with $C_{\gamma_{i+1}}$. The point $z_i\, \in\, C_{\gamma_{i+1}}$ in \eqref{zi}
will be considered as a point of $C$ using this identification. Let
$$
{\mathcal L}_i\,:=\, {\mathcal O}_C(-z_i)
$$
be the line bundle corresponding to the point $z_i\, \in\, C$.
For every $1\,\leq\, i\, \leq\, d-1$, let
\begin{equation}\label{pj}
P_i\, :\, {\mathcal O}^{\oplus (d-1)}_C\, \longrightarrow\, {\mathcal O}_C
\end{equation}
be the natural projection to the $i$-th factor.
Consider the composition of homomorphisms $P_i\circ\Psi$, where $P_i$ and
$\Psi$ are constructed in \eqref{pj} and \eqref{cf3} respectively. It can be shown that
$P_i\circ\Psi$ vanishes when restricted to the
point $z_i$ in \eqref{zi}. To see this, for any $1\,\leq\, j\, \leq\, d$, let
$$
\widehat{P}_j\, :\, {\mathcal O}^{\oplus d}_C\, \longrightarrow\, {\mathcal O}_C
$$
be the natural projection to the $j$-th factor. Recall the homomorphism $\Phi$ constructed
in \eqref{re4}. If $(f_1,\, f_2,\, \cdots, \, f_d)$ in \eqref{re4} actually lies in the image
of ${\pi_1}_* {\mathcal O}_{C\times_D C}$ by the inclusion map $\varphi$ in \eqref{re5}, then from
\eqref{zi} we have
\begin{equation}\label{re6}
(\widehat{P}_{i+1}\circ \Phi)(f_1,\, f_2,\, \cdots, \, f_d)(z_i,\, \gamma_{i+1}) \,=\,
f_{i+1}(z_i,\, \gamma_{i+1})-f_{\eta(i+1)}(z_i,\, \gamma_{\eta(i+1)})\,=\, 0\, ,
\end{equation}
where $(z_i,\, \gamma_{i+1})\, \in\, C\times \Gamma \,=\, \widetilde{C\times_D C}$
(see \eqref{re7}) and the same for $(z_i,\, \gamma_{\eta(i+1)})$; note that from \eqref{zi} it follows that
the point in $C$ corresponding to $z_i\, \in\, C_{\gamma_{\eta(i+1)}}$
(see \eqref{zi}) by the identification $C\,\stackrel{\sim}{\longrightarrow}\, C_{\gamma_{\eta(i+1)}}$
in \eqref{re8} coincides with the point corresponding to $z_i\, \in\, C_{\gamma_{i+1}}$ (the element
$\gamma^{-1}_{i+1}\gamma_{\eta(i+1)}\, \in\, \Gamma$ fixes this point of $C$).
To clarify, there is a slight abuse of notation in \eqref{re6} in the following sense:
sections of $\widetilde{\pi}_{1*} {\mathcal O}_{\widetilde{C\times_D C}}$ over an open subset
$U\, \subset\, C$ are identified with functions on $\widetilde{\pi}_1^{-1}(U)$. So
$(f_1,\, f_2,\, \cdots, \, f_d)$ in \eqref{re6} is considered as a function on $\widetilde{\pi}_1^{-1}(U)$; the above
condition that $(f_1,\, f_2,\, \cdots, \, f_d)$ in \eqref{re6} lies in the image
of ${\pi_1}_* {\mathcal O}_{C\times_D C}$ under the inclusion map $\varphi$ in \eqref{re5} means that
$(f_1,\, f_2,\, \cdots, \, f_d)$ coincides with $\widehat{f}\circ \nu$ for some function $\widehat{f}$ on
$\pi^{-1}_1(U)$, where $\nu$ is the map in \eqref{d1}. Now from \eqref{re6} it follows that
$P_i\circ\Psi$ vanishes when restricted to the point $z_i\, \in\, C$.
$P_i\circ\Psi$ vanishes when restricted to the point $z_i\, \in\, C$.
Since $P_i\circ\Psi$ vanishes when restricted to the point $z_i$, we have
\begin{equation}\label{cf4}
P_i\circ\Psi(f^* ((f_* {\mathcal O}_C) / {\mathcal O}_D))\, \subset\, {\mathcal L}_i\,=\, {\mathcal O}_C(-z_i)
\, \subset\, {\mathcal O}_C\, .
\end{equation}
{}From \eqref{cf3} and \eqref{cf4} it follows immediately that
$$
f^* ((f_* {\mathcal O}_C)/{\mathcal O}_D)\, \hookrightarrow\,\bigoplus_{i=1}^{d-1} {\mathcal L}_i\, .
$$
Since $\text{deg}({\mathcal L}_i)\,=\, -1$, the proof of the proposition is complete.
\end{proof}
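As a sanity check (not part of the original argument), the simplest case $d\,=\,2$ can be verified directly from the classical structure of double covers; the following remark records this under the additional assumption that the characteristic is not $2$:

```latex
\begin{remark}
Assume that the characteristic is not $2$, and let $f\,:\,C\,\longrightarrow\,D$ be a
ramified double cover with branch divisor $B\, \subset\, D$ and reduced ramification
divisor $R\, \subset\, C$. Classically, $f_*{\mathcal O}_C\,=\, {\mathcal O}_D\oplus L$
with $L^{\otimes 2}\,=\, {\mathcal O}_D(-B)$, so
$(f_*{\mathcal O}_C)/{\mathcal O}_D\,=\, L$ and
$$
f^*((f_*{\mathcal O}_C)/{\mathcal O}_D)\,=\, f^*L\,=\, {\mathcal O}_C(-R)\, \subset\,
{\mathcal O}_C(-z)
$$
for any point $z\, \in\, R$, in accordance with
Proposition \ref{sumofnegetivedegreelinebundles1} (here $d-1\,=\,1$ and
${\mathcal L}_1\,=\, {\mathcal O}_C(-z)$).
\end{remark}
```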
\section{Pullback of stable bundles and genuinely ramified maps}
\begin{lemma}\label{negetiveslope}
Let $f\,:\,C\,\longrightarrow\, D$ be a genuinely ramified morphism
between irreducible smooth projective curves.
Let $V$ be a semistable vector bundle on $D$. Then
$$
\mu_{\rm max} ( V\otimes ((f_*{\mathcal O}_C)/{\mathcal O}_D)) \, <\, \mu (V)\, .
$$
\end{lemma}
\begin{proof}
First assume that the map $f$ is Galois. Take the line bundles ${\mathcal L}_i$,
$1\, \leq\, i\, \leq\, d-1$, in
Proposition \ref{sumofnegetivedegreelinebundles1}, where $d\,=\, \text{deg}(f)$. Then from
Proposition \ref{sumofnegetivedegreelinebundles1} we have
$$
\mu_{\rm max} ( V\otimes ((f_*{\mathcal O}_C)/{\mathcal O}_D))\, \leq\,
\mu_{\rm max} (V\otimes (\bigoplus_{i=1}^{d-1} {\mathcal L}_i))\, \leq\,
{\rm \max}\{\mu(V\otimes{\mathcal L}_i)\}_{i=1}^{d-1}\, ,
$$
because $V\otimes{\mathcal L}_i$ is semistable. On the other hand,
$$
\mu(V\otimes{\mathcal L}_i)\, <\, \mu(V)\, ,
$$
because $\text{deg}({\mathcal L}_i)\, <\, 0$. Combining these, we have
$$
\mu_{\rm max} ( V\otimes ((f_*{\mathcal O}_C)/{\mathcal O}_D)) \, <\, \mu (V)\, ,
$$
giving the statement of the lemma.
If the map $f$ is not Galois, consider the smallest Galois extension
\begin{equation}\label{p1}
F\,:\,\widehat{C}\,\longrightarrow\, D
\end{equation}
such that there is a morphism $\widehat{f}\,:\,\widehat{C}\,\longrightarrow\, C$ for which
\begin{equation}\label{q1}
f\circ\widehat{f}\, =\, F\, .
\end{equation}
Note that $\widehat{C}$ is irreducible and smooth, and $F$ is separable. From \eqref{q1} it follows that
\begin{equation}\label{q2}
f_*{\mathcal O}_C\, \subset\, F_*{\mathcal O}_{\widehat C}\, .
\end{equation}
First assume that the map $F$ in \eqref{p1} is genuinely ramified. From
\eqref{q2} it follows that
\begin{equation}\label{s1}
(f_*{\mathcal O}_C)/{\mathcal O}_D \,\subset\, ({F}_*{\mathcal
O}_{\widehat C})/{\mathcal O}_D\, .
\end{equation}
Since $F$ is Galois, from Proposition \ref{sumofnegetivedegreelinebundles1} we know that
$({F}_*{\mathcal O}_{\widehat C})/{\mathcal O}_D$ is contained
in a direct sum of line bundles of negative degree. Hence the subsheaf
$(f_*{\mathcal O}_C)/{\mathcal O}_D$ in \eqref{s1} is also contained
in a direct sum of line bundles of negative degree.
This implies that
$$
\mu_{\rm max} ( V\otimes ((f_*{\mathcal O}_C)/{\mathcal O}_D)) \, <\, \mu (V)\, ,
$$
giving the statement of the lemma.
Therefore, we now assume that $F$ is not genuinely ramified. Let
$$
(F_*{\mathcal O}_{\widehat C})_1\, \subset\,
F_*{\mathcal O}_{\widehat C}
$$
be the maximal semistable subbundle. Let
\begin{equation}\label{gl}
g\,:\, \widehat{D}\,\longrightarrow\, D
\end{equation}
be the \'etale cover defined by the spectrum of
the bundle $(F_*{\mathcal O}_{\widehat C})_1$ of ${\mathcal O}_D$--algebras (see Lemma \ref{maximaletalsubcover});
that the map $g$ in \eqref{gl} is \'etale follows from Lemma \ref{etalevsdegree0} and \eqref{y1}, because
$g_*{\mathcal O}_{\widehat{D}}\,=\, (F_*{\mathcal O}_{\widehat C})_1$ and
$(F_*{\mathcal O}_{\widehat C})_1$ is semistable.
We note that the Galois group $\text{Gal}(F)$ for $F$ acts naturally on $F_*{\mathcal O}_{\widehat C}$,
and this action of $\text{Gal}(F)$ preserves the subbundle $(F_*{\mathcal O}_{\widehat C})_1$; indeed, this
follows from the uniqueness of the maximal semistable subbundle
$(F_*{\mathcal O}_{\widehat C})_1$. Therefore, $\text{Gal}(F)$ acts
on $\widehat{D}$, and the map $g$ in \eqref{gl} is $\text{Gal}(F)$--equivariant for the trivial
action of $\text{Gal}(F)$ on $D$. Consequently, the covering $g$ in \eqref{gl} is Galois.
Consider the following commutative diagram
\begin{equation}\label{lcd}
\xymatrix{
\widehat{C} \ar@/_/[ddr]_-{\widehat{f}} \ar[dr]^-h \ar@/^/[drrr]^-{\widehat{g}} & & & \\
& C\times_D\widehat{D} \ar[rr]^-{\pi_2} \ar[d]^-{\pi_1} && \widehat{D} \ar[d]^-g \\
& C \ar[rr]^-f && D
}
\end{equation}
The existence of the map $h$ in \eqref{lcd} is evident. The map $f$ being genuinely ramified,
it follows from Proposition \ref{genuinerampi1surj} that the homomorphism between \'etale fundamental groups
$$f_*\, :\, \pi_1^{\rm et}(C)\,\longrightarrow\, \pi_1^{\rm et}(D)$$ induced
by $f$ is surjective. This implies that the fiber product $C\times_D\widehat{D}$ is connected.
The diagram in \eqref{lcd} should not be confused with the one in \eqref{d1} --- in \eqref{lcd},
$C\times_D\widehat{D}$ is smooth as $g$ is \'etale.
We will prove that the map $\widehat{g}$ in \eqref{lcd} is genuinely ramified and Galois. For this,
first recall the earlier observation that $\text{Gal}(F)$ acts on $\widehat{D}$. The map $\widehat{g}$ is evidently
equivariant for the actions of $\text{Gal}(F)$ on $\widehat{C}$ and $\widehat{D}$.
This immediately implies that the map $\widehat{g}$ is Galois. From
Corollary \ref{cor1} it follows that $\widehat{g}$ is genuinely ramified.
We will next prove that $\widehat{C}\times_D\widehat{C}$ is a disjoint union of curves isomorphic to
$\widehat{C}\times_{\widehat{D}}
\widehat{C}$. For this, first note that $\widehat{C}\times_D\widehat{C}$ maps to
$\widehat{D}\times_D\widehat{D}$, and the curve $\widehat{D}\times_D\widehat{D}$ is a disjoint union of copies of
$\widehat{D}$ as $g$ is \'etale Galois. The component of $\widehat{C}\times_D\widehat{C}$ lying over any of these
copies of $\widehat{D}$ is
isomorphic to $\widehat{C}\times_{\widehat{D}}\widehat{C}$, and therefore
$\widehat{C}\times_D\widehat{C}$ is a disjoint union of curves isomorphic to
$\widehat{C}\times_{\widehat{D}}\widehat{C}$.
{}From \eqref{q2} we have
\begin{equation}\label{re11}
F^*f_*{\mathcal O}_C\, \subset\, F^*F_*{\mathcal O}_{\widehat C}\, .
\end{equation}
Since $\widehat{C}\times_D\widehat{C}$ is a disjoint union of curves isomorphic to
$\widehat{C}\times_{\widehat{D}}\widehat{C}$, from \eqref{t1} it follows that
$F^*F_*{\mathcal O}_{\widehat C}$ is a direct sum of copies of $\widehat{g}^*\widehat{g}_*{\mathcal O}_{\widehat C}$.
It was shown above that $\widehat{g}$ is genuinely ramified and Galois. So from
Proposition \ref{sumofnegetivedegreelinebundles1} we know that
$\widehat{g}^*((\widehat{g}_*{\mathcal O}_{\widehat C})/{\mathcal O}_{\widehat{D}})$ is
contained in a direct sum of line bundles of negative degree.
Since $\widehat{g}$ is genuinely ramified, we know from Lemma \ref{genuinerampi1surjcurves}, Remark \ref{rem1}
and \eqref{y1} that
$$
{\mathcal O}_{\widehat C}\,=\, H^0({\widehat C},\, \widehat{g}^*\widehat{g}_*{\mathcal O}_{\widehat C})
\otimes {\mathcal O}_{\widehat C}
$$
is the maximal semistable subbundle of $\widehat{g}^*\widehat{g}_*{\mathcal O}_{\widehat C}$. Since
$F^*F_*{\mathcal O}_{\widehat C}$ is a direct sum of copies of $\widehat{g}^*\widehat{g}_*{\mathcal O}_{\widehat C}$,
this implies that
\begin{equation}\label{re9}
H^0({\widehat C},\, F^*F_*{\mathcal O}_{\widehat C}) \otimes {\mathcal O}_{\widehat C}\, \subset\,
F^*F_*{\mathcal O}_{\widehat C}
\end{equation}
is the maximal semistable subbundle. On the other hand, we have
$F^*f_*{\mathcal O}_C\,=\, \widehat{f}^*f^*f_*{\mathcal O}_C$. So from
Lemma \ref{genuinerampi1surjcurves}, Remark \ref{rem1}
and \eqref{y1} we know that
\begin{equation}\label{re10}
{\mathcal O}_{\widehat C}\,=\,
H^0({\widehat C},\, F^*f_*{\mathcal O}_C) \otimes {\mathcal O}_{\widehat C} \, \subset\,
F^*f_*{\mathcal O}_C
\end{equation}
is the maximal semistable subbundle.
Consider the inclusion homomorphism in \eqref{re11}. From \eqref{re9} and \eqref{re10} we conclude that
using this homomorphism, the
quotient $F^*((f_*{\mathcal O}_C)/{\mathcal O}_D)$ is contained in
\begin{equation}\label{u1}
F^*F_*{\mathcal O}_{\widehat C}/(H^0({\widehat
C},\, F^*F_*{\mathcal O}_{\widehat C})\otimes {\mathcal O}_{\widehat C})\, .
\end{equation}
Since $F^*F_*{\mathcal O}_{\widehat C}$ is a direct sum of copies of
$\widehat{g}^*\widehat{g}_*{\mathcal O}_{\widehat C}$,
the vector bundle in \eqref{u1} is isomorphic to a direct sum of copies of
$\widehat{g}^*((\widehat{g}_*{\mathcal O}_{\widehat C})/{\mathcal O}_{\widehat{D}})$.
It was shown above that
$\widehat{g}^*((\widehat{g}_*{\mathcal O}_{\widehat C})/{\mathcal O}_{\widehat{D}})$ is
contained in a direct sum of line bundles of negative degree. Therefore, the vector bundle in \eqref{u1} is
contained in a direct sum of line bundles of negative degree. Consequently, the subsheaf
$$
F^*((f_*{\mathcal O}_C)/{\mathcal O}_D)\, \subset\,
F^*F_*{\mathcal O}_{\widehat C}/(H^0({\widehat
C},\, F^*F_*{\mathcal O}_{\widehat C})\otimes {\mathcal O}_{\widehat C})
$$
is also contained in a direct sum of line bundles of negative degree.
Since $F^*((f_*{\mathcal O}_C)/{\mathcal O}_D)$ is
contained in a direct sum of line bundles of negative degree, we conclude that
$$
\mu_{\rm max} ((F^*V)\otimes (F^*((f_*{\mathcal O}_C)/{\mathcal O}_D))) \, <\, \mu (F^*V)\,;
$$
note that $F^*V$ is semistable by Remark \ref{rem1} as $F$ is separable. From this it follows that
$$
\mu_{\rm max} (V\otimes ((f_*{\mathcal O}_C)/{\mathcal O}_D))\,=\,
\mu_{\rm max}(F^*(V\otimes ((f_*{\mathcal O}_C)/{\mathcal O}_D)))/\text{deg}(F)
$$
$$
<\, \mu (F^*V)/\text{deg}(F)\,=\, \mu (V)\, ,
$$
because $F^*V$ is semistable. This completes the proof.
\end{proof}
\begin{remark}
When the characteristic of the base field $k$ is zero, the tensor product of two semistable
bundles remains semistable \cite[p.~285, Theorem 3.18]{RR}. We note that Lemma \ref{negetiveslope}
is a straightforward consequence of this fact when the characteristic of $k$ is zero.
\end{remark}
\begin{lemma} \label{lemma3.8}
Let $f\,:\,C\,\longrightarrow\, D$ be a genuinely ramified morphism between
irreducible smooth projective curves.
Let $V$ and $W$ be two semistable vector bundles on $D$ with $$\mu(V)\,=\, \mu(W)\, .$$ Then
$$H^0(D,\, {\rm Hom}(V,\,W)) \,=\, H^0(C,\, {\rm Hom}(f^*V,\, f^*W))\, .$$
\end{lemma}
\begin{proof}
Using the projection formula, and the fact that $f$ is a finite map, we have
$$
H^0(C,\, {\rm Hom}(f^*V,\, f^*W)) \,\cong\,H^0(D,\, f_*{\rm Hom}(f^*V,\, f^*W))
\,\cong\,H^0(D,\, f_*f^*{\rm Hom}(V,\, W))
$$
\begin{equation}\label{j1}
\,\cong\,H^0(D,\, {\rm Hom}(V,\, W)\otimes f_*{\mathcal O}_C)
\,\cong\,H^0(D,\, {\rm Hom}(V,\, W\otimes f_*{\mathcal O}_C))\, .
\end{equation}
Let
$$
0\,=\, B_0\, \subset\, B_1\, \subset\, \cdots \, \subset\, B_{m-1}\, \subset\, B_m\,=\,
W\otimes ((f_*{\mathcal O}_C)/{\mathcal O}_D)
$$
be the Harder--Narasimhan filtration of $W\otimes ((f_*{\mathcal O}_C)/{\mathcal O}_D)$
\cite[p.~16, Theorem 1.3.4]{HL}. Since $W$ is semistable, and $f$ is genuinely ramified, from
Lemma~\ref{negetiveslope} we know that
$$
\mu(B_i/B_{i-1})\, \leq\, \mu(B_1)\, =\, \mu_{\rm max}(W\otimes ((f_*{\mathcal O}_C)/{\mathcal O}_D))
\, <\, \mu(W)
$$
for all $1\, \leq\, i\, \leq\, m$. In view of this and the given condition that $\mu(V)\,=\, \mu(W)$, from
\eqref{a1} we conclude that
$$
H^0(D,\, {\rm Hom}(V,\, B_i/B_{i-1}))\,=\, 0
$$
for all $1\, \leq\, i\, \leq\, m$; note that both $V$ and $B_i/B_{i-1}$ are semistable. This implies that
$$
H^0(D,\, {\rm Hom}(V,\, W\otimes ((f_*{\mathcal O}_C)/{\mathcal O}_D)))\,=\, 0\, .
$$
Consequently, we have
$$
H^0(D,\, {\rm Hom}(V,\, W\otimes f_*{\mathcal O}_C))\,=\,
H^0(D,\, {\rm Hom}(V,\, W))
$$
by examining the exact sequence
$$
0 \,\longrightarrow\,{\rm Hom}(V,\, W)\,\longrightarrow\, {\rm Hom}(V,\, W\otimes f_*{\mathcal O}_C)
\,\longrightarrow\, {\rm Hom}(V,\, W\otimes ((f_*{\mathcal O}_C)/{\mathcal O}_D))\,\longrightarrow\, 0\, .
$$
{}From this and \eqref{j1} it follows that
$$H^0(C,\, {\rm Hom}(f^*V,\, f^*W))\,=\, H^0(D,\, {\rm Hom}(V,\,W))\, .$$
This completes the proof.
\end{proof}
\begin{theorem}\label{thm1}
Let $f\,:\,C\,\longrightarrow\, D$ be a genuinely ramified morphism
between irreducible smooth projective curves. Let $V$ be a stable vector
bundle on $D$. Then the pulled back vector bundle $f^*V$ is also stable.
\end{theorem}
\begin{proof}
Consider the Galois extension $F\,:\,\widehat{C}\,\longrightarrow\, D$ and the diagram in
\eqref{lcd}. Since $V$ is stable, from Lemma~\ref{lemma3.8} it follows that $f^*V$ is simple.
As $V$ is semistable, it follows that $g^*V$ is also semistable, where $g$
is the map in \eqref{lcd}. Let $$E\, \subset\, g^*V$$ be the
unique maximal polystable subbundle with $\mu(E)\,=\, \mu(g^*V)$ \cite[p.~23, Lemma 1.5.5]{HL}; this
subbundle $E$ is called the socle of $g^*V$. Being unique, the socle $E$
is preserved by the action of the Galois group $\text{Gal}(g)$ on
$g^*V$, and hence there is a unique subbundle $E'\, \subset\, V$ such that
$$E\,=\, g^*E'\, \subset\, g^*V\, .$$ As
$V$ is stable, we conclude that $E'\,=\, V$, and hence $g^*V$ is polystable. So we have a
direct sum decomposition
\begin{equation}\label{gv}
g^*V\,=\,\bigoplus_{j=1}^m V_j\, ,
\end{equation}
where each $V_j$ is stable with $\mu(V_j)\,=\, \mu(g^*V)$.
Take any $1\,\leq\, j\, \leq\, m$, where $m$ is the integer in \eqref{gv}.
Since $V_j$ is stable, and $\widehat{g}$ in \eqref{lcd} is Galois (this was shown in the
proof of Lemma \ref{negetiveslope}),
repeating the above argument involving the socle we conclude that $\widehat{g}^*V_j$ is also
polystable. On the other hand, as $\widehat{g}$ is genuinely ramified (see the proof of Lemma
\ref{negetiveslope}), from Lemma~\ref{lemma3.8} it follows that
\begin{equation}\label{m1}
H^0(\widehat{C},\, \text{End}(\widehat{g}^*V_j))\, =\, H^0(\widehat{D},\, \text{End}(V_j))\, .
\end{equation}
But $H^0(\widehat{D},\, \text{End}(V_j))\,=\, k$, because $V_j$ is stable. Hence from \eqref{m1}
we know that $H^0(\widehat{C},\, \text{End}(\widehat{g}^*V_j))\, =\,k$. This implies that
$\widehat{g}^*V_j$ is stable, because it is polystable.
Since $\widehat{g}^*V_j$ is stable, and $\pi_2\circ h\,=\, \widehat{g}$ (see \eqref{lcd}),
we conclude that $\pi_2^*V_j$ is also stable with
$$
\mu(\pi_2^*V_j)\,=\, \mu(\pi_2^*g^*V)
$$
for all $1\,\leq\, j\, \leq\, m$. This implies that
\begin{equation}\label{ho2}
\pi_1^*f^*V \,= \,\pi_2^*g^*V \,=\, \bigoplus_{j=1}^m \pi_2^* V_j
\end{equation}
is polystable.
The map $\pi_2$ is genuinely ramified because $\widehat g$ is genuinely ramified
(see the proof of Lemma \ref{negetiveslope}) and ${\widehat g}\,= \,\pi_2\circ h$. Indeed, if
$\pi_2$ factored through a nontrivial \'etale covering of $\widehat D$, then the
genuinely ramified map ${\widehat g}$ would also factor through that \'etale covering of
$\widehat D$, which is impossible by Proposition \ref{genuinerampi1surj};
hence $\pi_2$ is genuinely ramified.
Since $\pi_2$ is genuinely ramified, and each $V_j$ in \eqref{ho2} is stable,
from Lemma \ref{lemma3.8} it follows that
\begin{equation}\label{ho1}
H^0(C\times_D\widehat{D},\, \text{Hom}(\pi_2^* V_i,\, \pi_2^* V_j))\,=\,
H^0(\widehat{D},\, \text{Hom}(V_i,\, V_j))
\end{equation}
for all $1\,\leq\, i,\, j\, \leq\, m$. We know that $V_i$ and $\pi_2^* V_i$ are stable. So from
\eqref{ho1} we conclude that $V_i$ is isomorphic to $V_j$ if and only if $\pi_2^* V_i$ is
isomorphic to $\pi_2^* V_j$. From \eqref{ho1} it also follows that
\begin{equation}\label{ho3}
H^0(C\times_D\widehat{D},\, \text{End}(\pi^*_2g^*V))\,=\,
H^0(\widehat{D},\, \text{End}(g^*V))\, ;
\end{equation}
we note that this also follows from Lemma \ref{lemma3.8}.
The vector bundle $f^*V$ on $C$ is semistable, because $V$ is semistable and $f$ is separable.
Let
\begin{equation}\label{m-1}
0\, \not=\, S\,\subset\, f^*V
\end{equation}
be a stable subbundle with
\begin{equation}\label{k-1}
\mu(S)\,=\, \mu(f^*V)\, .
\end{equation}
Since $S$ is stable with $\mu(S)\,=\, \mu(f^*V)$, and the map $\pi_1$ is Galois, using the earlier
argument involving the socle we conclude that
\begin{equation}\label{m2}
\widetilde{\mathbb S}\,:= \,\pi_1^*S\, \subset\, \pi_1^*f^*V\,=\, \pi_2^*g^*V
\,=\, \bigoplus_{j=1}^m \pi_2^* V_j\, =:\, \widetilde{V}
\end{equation}
is a polystable subbundle with $\mu(\widetilde{\mathbb S})\,=\, \mu(\widetilde{V})$.
Consider the associative algebra $H^0(C\times_D\widehat{D},\, {\rm End}(\widetilde{V}))$, where $\widetilde{V}$
is the vector bundle in \eqref{m2}. Define the right ideal
\begin{equation}\label{th}
\Theta \, :=\, \{\gamma\, \, \in\,H^0(C\times_D\widehat{D},\, {\rm End}(\widetilde{V}))\, \mid\,
\gamma(\widetilde{V})\, \subset\, \widetilde{\mathbb S}\}\, \subset\,
H^0(C\times_D\widehat{D},\, {\rm End}(\widetilde{V}))\, ,
\end{equation}
where $\widetilde{\mathbb S}$ is the subbundle in \eqref{m2}. The subbundle
$\widetilde{\mathbb S}\, \subset\, \widetilde{V}$ is a direct summand, because $\widetilde{V}$ is
polystable, and $\mu(\widetilde{\mathbb S})\,=\, \mu(\widetilde{V})$. Consequently,
$\widetilde{\mathbb S}$ coincides with the subbundle
generated by the images of endomorphisms lying in the right ideal ${\Theta}$. Since $\widetilde{V}$
is semistable, the image of any endomorphism of it is a subbundle.
Consider $\widetilde{V}$ in \eqref{m2}. The identification
$$
H^0(C\times_D\widehat{D},\, {\rm End}(\widetilde{V})) \,=\,
H^0(\widehat{D},\, \text{End}(g^*V))
$$
in \eqref{ho3} preserves the associative algebra structures of
$$H^0(C\times_D\widehat{D},\, {\rm End}(\widetilde{V}))\ \ \text{ and }\ \
H^0(\widehat{D},\, \text{End}(g^*V))\, ,$$
because it sends any $\gamma\, \in\, H^0(\widehat{D},\,{\rm End}(g^*V))$ to $\pi^*_2\gamma$. Let
\begin{equation}\label{th2}
\widetilde{\Theta}\,\subset\,H^0(\widehat{D},\, \text{End}(g^*V))
\end{equation}
be the right ideal that corresponds to $\Theta$ in \eqref{th} by the identification in \eqref{ho3}.
Let
\begin{equation}\label{m4}
\overline{\mathcal S}\, \subset\, g^*V
\end{equation}
be the subbundle generated by the images of endomorphisms lying in the right ideal $\widetilde{\Theta}$
in \eqref{th2}. Since $g^*V$ is semistable, the image of any endomorphism of it is a subbundle. From the
above construction of $\overline{\mathcal S}$ it follows that
$$
\widetilde{\mathbb S}\,=\, \pi^*_2 \overline{\mathcal S}\, ,
$$
where $\widetilde{\mathbb S}$ is the subbundle in \eqref{m2}.
The isomorphism in \eqref{ho3} is equivariant for the actions of
the Galois group $\text{Gal}(\pi_1)\,=\, \text{Gal}(g)$ on
$$
H^0(C\times_D\widehat{D},\, {\rm End}(\widetilde{V}))\,=\,
H^0(C\times_D\widehat{D},\, {\rm End}(\pi^*_1f^*V))
$$
and $H^0(\widehat{D},\,{\rm End}(g^*V))$, because the isomorphism
sends any $\gamma\, \in\, H^0(\widehat{D},\,{\rm End}(g^*V))$ to $\pi^*_2\gamma$. Since
$\widetilde{\mathbb S}\,= \,\pi_1^*S$ in \eqref{m2} is preserved under the action of
$\text{Gal}(\pi_1)$ on $\pi^*_1f^*V$, it follows that the action of $\text{Gal}(\pi_1)$
on $H^0(C\times_D\widehat{D},\, {\rm End}(\pi^*_1f^*V))$ preserves the right ideal
$\Theta$ in \eqref{th}. These together imply that the action of $\text{Gal}(g)$ on
$H^0(\widehat{D},\,{\rm End}(g^*V))$ preserves the right ideal $\widetilde{\Theta}$ in \eqref{th2}.
Consequently, the subbundle
$$
\overline{\mathcal S}\, \subset\, g^*V
$$
in \eqref{m4} is preserved under the action of $\text{Gal}(g)$ on $g^*V$.
Since $\overline{\mathcal S}$ is preserved under the action of $\text{Gal}(g)$ on $g^*V$,
there is a unique subbundle
$$
{\mathbb S}_0\, \subset\, V
$$
such that $\overline{\mathcal S}\,=\,g^*{\mathbb S}_0\, \subset\, g^*V$. Given that $V$ is stable,
and $\mu({\mathbb S}_0)\,=\, \mu(V)$ (this follows from \eqref{k-1}), we
now conclude that ${\mathbb S}_0\, =\, V$. Consequently,
$\pi_1^*S\,=\,\widetilde{\mathbb S}\,=\,\pi_2^*g^*V\,=\,\pi_1^*f^*V$, and hence
the subbundle $S$ in \eqref{m-1} coincides with $f^*V$.
Therefore, we conclude that $f^*V$ is stable.
\end{proof}
\section{Characterizations of genuinely ramified maps}
Let $D$ be an irreducible smooth projective curve, and let
$$\phi\,:\, X\,\longrightarrow\, D$$ be a nontrivial \'etale covering with $X$
irreducible. Let $L$ be a line bundle on $X$ of degree one.
\begin{proposition}\label{prop1}\mbox{}
\begin{enumerate}
\item The direct image $\phi_*L$ is a stable vector bundle on $D$.
\item The pulled back bundle $\phi^*\phi_*L$ is not stable.
\end{enumerate}
\end{proposition}
\begin{proof}
Let $\delta$ be the degree of
the map $\phi$; note that $\delta\, >\, 1$, because $\phi$ is nontrivial.
We have $\text{deg}(\phi_*L)\,=\, \text{deg}(L)\,=\, 1$ \cite[p.~306, Ch.~IV, Ex.~2.6(a)~and~2.6(d)]{Ha}.
This implies that
$$
\text{deg}(\phi^*\phi_*L)\,=\, \delta\cdot\text{deg}(L)\,=\,\delta\, .
$$
We have a natural homomorphism
\begin{equation}\label{hw0}
H\, :\, \phi^*\phi_*L\, \longrightarrow\, L\, .
\end{equation}
This $H$ has the following property: For any coherent subsheaf $W\, \subset\, \phi_*L$, the restriction
of $H$ to $\phi^*W\, \subset\, \phi^*\phi_*L$
\begin{equation}\label{hw}
H_W\, :=\, H\vert_{\phi^*W}\, :\, \phi^* W\, \longrightarrow\, L
\end{equation}
is a nonzero homomorphism. Note that for any point $y\,\in\, D$, the fiber
$(\phi^*\phi_*L)_y$ is $H^0(\phi^{-1}(y),\, L\vert_{\phi^{-1}(y)})$, and hence
a nonzero element of $(\phi^*\phi_*L)_y$ must be nonzero at some point of $\phi^{-1}(y)$.
We will first show that $\phi_*L$ is semistable.
To prove this by contradiction, let $V\, \subset\, \phi_*L$ be a semistable subbundle with
\begin{equation}\label{hw2}
\mu(V)\, >\, \mu(\phi_*L)\,=\, \frac{1}{\delta}\, .
\end{equation}
Consider the nonzero homomorphism
$$
H_V\, :=\, H\vert_{\phi^*V}\, :\, \phi^* V\, \longrightarrow\, L
$$
in \eqref{hw}. We have $\mu(\phi^* V)\,=\, \delta\cdot \mu(V)\, >\, 1$ (see \eqref{hw2}), and
also $\phi^* V$ is semistable because $V$ is so. Consequently, $H_V$ contradicts \eqref{a1}. As
$\phi_*L$ does not contain any subbundle $V$ satisfying \eqref{hw2},
we conclude that $\phi_*L$ is semistable.
Since $\text{rk}(\phi_*L)$ is coprime to $\text{deg}(\phi_*L)$, the semistable vector
bundle $\phi_*L$ is also stable. This proves statement (1).
The vector bundle $\phi^*\phi_*L$ is not stable, because the homomorphism $H$ in \eqref{hw0} is nonzero
and $\mu(\phi^*\phi_*L)\,=\, \mu(L)$.
\end{proof}
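The degree and slope bookkeeping in this proof is elementary and can be verified mechanically. The following minimal sketch (illustrative only, not part of the paper; the covering degree \texttt{delta} is a free parameter) checks it with exact rational arithmetic.

```python
from fractions import Fraction
from math import gcd

def check_slopes(delta: int) -> None:
    """Check the degree/slope bookkeeping for an etale covering of degree delta > 1."""
    assert delta > 1
    deg_L = 1                              # deg(L) = 1 by assumption
    deg_push, rk_push = deg_L, delta       # deg(phi_* L) = deg(L), rk(phi_* L) = delta
    mu_push = Fraction(deg_push, rk_push)
    assert mu_push == Fraction(1, delta)   # mu(phi_* L) = 1/delta

    # Pullback multiplies the degree by delta and keeps the rank:
    mu_pull = Fraction(delta * deg_push, rk_push)
    assert mu_pull == Fraction(deg_L, 1)   # mu(phi^* phi_* L) = mu(L) = 1

    # Rank and degree of phi_* L are coprime, so semistable implies stable:
    assert gcd(rk_push, deg_push) == 1

for d in range(2, 12):
    check_slopes(d)
```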
\begin{proposition}\label{prop2}
Let $f\,:\,C\,\longrightarrow\,D$ be a nonconstant separable morphism
between irreducible smooth projective curves such that $f$ is not genuinely ramified.
Then there is a stable vector bundle $E$ on $D$ such that $f^*E$ is not stable.
\end{proposition}
\begin{proof}
Since $f$ is not genuinely ramified, from Proposition \ref{genuinerampi1surj} we know that
there is a nontrivial \'etale covering
$$\phi\,:\, X\,\longrightarrow\, D$$
and a map $\beta\, :\, C\,\longrightarrow\, X$ such that $\phi\circ\beta\,=\, f$. As in Proposition
\ref{prop1}, take a line bundle $L$ on $X$ of degree one. The vector bundle
$\phi_*L$ is stable by Proposition \ref{prop1}(1).
The vector bundle $\phi^*\phi_*L$ is not stable by Proposition \ref{prop1}(2). Therefore,
$$
f^*(\phi_*L)\, =\, \beta^*\phi^*(\phi_*L)\,=\, \beta^*(\phi^*\phi_*L)
$$
is not stable.
\end{proof}
Theorem \ref{thm1} and Proposition \ref{prop2} together give the following:
\begin{theorem}\label{thm2}
Let $f\,:\,C\,\longrightarrow\,D$ be a nonconstant separable morphism
between irreducible smooth projective curves. The map $f$ is genuinely ramified
if and only if $f^*E$ is stable for every stable vector bundle $E$ on $D$.
\end{theorem}
\section*{Acknowledgements}
We thank both the referees for going through the paper very carefully and making numerous suggestions. The
first-named author is partially supported by a J. C. Bose Fellowship. Both authors were supported by the
Department of Atomic Energy, Government of India, under project no.12-R\&D-TFR-5.01-0500.
\section{Introduction}\label{intro}
Actually \textit{Roli's cube} $\mathcal{R}$ isn't a cube, although it does share the $1$-skeleton of a $4$-cube. First
described by Javier (Roli) Bracho, Isabel Hubard and Daniel Pellicer in \cite{bracho:2014aa},
$\mathcal{R}$ is a chiral $4$-polytope of type $\{8,3,3\}$, faithfully realized in $\mathbb{E}^4$
(a situation earlier thought impossible).
Of course, Roli didn't himself name $\mathcal{R}$; but the eponym is pleasing to his colleagues and has taken hold.
Chiral polytopes with realizations of `full rank' had (incorrectly)
been shown not to exist by
Peter McMullen in \cite[Theorem 11.2]{fullrank}. Mind you, these objects
do seem to be elusive.
Pellicer has proved in \cite{chir4and5} that chiral polytopes of full rank can exist only in ranks
$4$ or $5$.
Roli's cube $\mathcal{R}$ was constructed in \cite{bracho:2014aa} as a \textit{colourful polytope}, starting from
a hemi-$4$-cube in projective $3$-space. (For more on this, see Section~\ref{copo}.)
The construction given here in Section~\ref{rolq}
is a bit different, though
certainly closely related. In Section~\ref{mincov} we can then easily manufacture the minimal
regular cover $\T$ for $\mathcal{R}$, and give both a presentation and faithful representation for
its automorphism group. Along the way, we encounter both the
M\"{o}bius-Kantor Configuration $8_3$
and the regular complex polygon $3\{3\}3$.
In what follows, we can make our way with concrete examples, so we won't need much
of the general theory of abstract regular or chiral polytopes
and their realizations. We refer the reader to \cite{arp}, \cite{GeRP} and \cite{schulte1} for more.
\section{The $4$-cube: convex, abstract and colourful}\label{4cub}
The most familiar of the regular convex polytopes in Euclidean space
$\mathbb{E}^4$ is surely the $4$-cube $\mathcal{P}= \{4,3,3\}$.
A familiar projection of $\mathcal{P}$ into $\mathbb{E}^3$ is displayed in
Figure~\ref{proj2}.
\begin{figure}[ht]\begin{center}
\includegraphics[width=50mm]{fig4.pdf}
\end{center}
\caption{A $2$-dimensional look at a $3$-dimensional projection of the $4$-cube.}
\label{proj2}
\end{figure}
Let us equip $\mathbb{E}^4$ with its usual basis $b_1,\ldots,b_4$ and inner product.
Then we may take the vertices of $\mathcal{P}$ to be the $16$ sign change vectors
\begin{equation}\label{vert} e = (\varepsilon_1, \ldots,\varepsilon_4) \in \{\pm 1\}^4.\end{equation}
At any such vertex there is an edge (of length $2$) running
in each of the $4$
coordinate directions, so that $\mathcal{P}$ has $32 = \frac{16\cdot 4}{2}$ edges.
Similarly we count the
$24$ squares $\{4\}$ as faces of dimension $2$.
Finally, $\mathcal{P}$ has $8$ facets; these faces of dimension $3$
are ordinary cubes $\{4,3\}$. They lie in four pairs
of supporting hyperplanes
orthogonal to the coordinate axes. It is enjoyable to hunt for these faces in Figure~\ref{proj1}, where the $8$
parallel edges in each of the coordinate directions have colours black, red, blue and green, respectively.
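The face counts just listed are instances of the standard formula: the number of $j$-faces of the $n$-cube is $\binom{n}{j}2^{n-j}$ (choose $j$ free coordinates, then fix a sign for each of the rest). A quick illustrative check, not taken from the paper:

```python
from math import comb

def cube_face_count(n: int, j: int) -> int:
    """Number of j-faces of the n-cube: choose j free coordinates,
    then fix a sign for each of the remaining n - j coordinates."""
    return comb(n, j) * 2 ** (n - j)

# f-vector of the 4-cube: 16 vertices, 32 edges, 24 squares, 8 cubical facets
assert [cube_face_count(4, j) for j in range(4)] == [16, 32, 24, 8]
```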
\begin{figure}[ht]\begin{center}
\includegraphics[width=120mm]{fig2.pdf}
\end{center}
\caption{The most symmetric $2$-dimensional projection of the $4$-cube.}
\label{proj1}
\end{figure}
Let us turn to the symmetry group $G$ for $\mathcal{P}$. Each symmetry $\gamma$ is determined by its action on the
vertices, which clearly can be permuted with sign changes in all possible ways. Thus $G$ has order
$2^4\cdot 4! = 384$, and we may think of it as consisting of all $4 \times 4$ signed permutation matrices.
In fact, $G$ can be generated by reflections $\rho_0, \rho_1, \rho_2, \rho_{3}$ in hyperplanes. Here
$\rho_0$ negates the first coordinate $x_1$ (reflection in the coordinate hyperplane orthogonal to $b_1$);
and, for $1 \leq j \leq 3$, $\rho_j$ transposes coordinates $x_j, x_{j+1}$ (reflection in the hyperplane orthogonal
to $b_j-b_{j+1}$).
Note that the reflection in the $j$-th coordinate hyperplane is
$ \rho_0^{\rho_1 \cdots \rho_{j-1}} $ for $1\leq j\leq 4$.
(We use the notation $\gamma^{\eta} := \eta^{-1} \gamma \eta$.)
The product of these $4$ special reflections, in any order,
is the central element $\zeta: t \mapsto -t $. It is easy to check as well that
\begin{equation}\label{cent}
\zeta = (\rho_0\rho_1 \rho_2\rho_{3})^4\;.
\end{equation}
The \textit{Petrie symmetry} $\pi = \rho_0\rho_1 \rho_2\rho_{3}$ therefore has period $8$.
For purposes of calculation, we note that
$G \simeq C_2^4 \rtimes S_4$ is a semidirect product. Under this isomorphism,
each $\gamma \in G$ factors uniquely as
$\gamma = e \mu $, where $\mu \in S_4$ is a permutation of
$\{1,\ldots,4\}$ (labelling the coordinates);
and $e$ is a sign change vector, as in (\ref{vert}).
Note that
$$ e^{\mu} = (\varepsilon_1, \varepsilon_2, \varepsilon_3, \varepsilon_4)^{\mu} = (\varepsilon_{(1)\mu^{-1}}, \varepsilon_{(2)\mu^{-1}},
\varepsilon_{(3)\mu^{-1}} , \varepsilon_{(4)\mu^{-1}})\, .$$
Now really $\gamma$ is a signed permutation matrix. But it is convenient to abuse notation,
keeping in mind that each $e$ corresponds to a diagonal matrix of signs
and each $\mu$ to a permutation matrix. Thus we might write
\begin{equation}\label{pet1}
\pi = \rho_0\rho_1 \rho_2\rho_{3} = ( -1,1, 1,1)\cdot (4,3,2,1) =\left[
\begin{array}{rrrr}
0&0&0&-1 \\
1&0&0&0 \\
0&1&0&0 \\
0&0&1&0
\end{array}
\right]\;.
\end{equation}
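As a sanity check on \eqref{pet1} and \eqref{cent}, one can multiply out the matrix for $\pi$ directly. The sketch below (illustrative only) uses the right action of matrices on row vectors, matching the convention $(v)\gamma$ used later in the text.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def act(v, M):  # right action of a matrix on a row vector
    n = len(v)
    return tuple(sum(v[i] * M[i][j] for i in range(n)) for j in range(n))

# The Petrie symmetry pi of the 4-cube (the matrix displayed above)
PI = [[0, 0, 0, -1],
      [1, 0, 0, 0],
      [0, 1, 0, 0],
      [0, 0, 1, 0]]

I4 = [[int(i == j) for j in range(4)] for i in range(4)]

P = I4
powers = []
for _ in range(8):
    P = matmul(P, PI)
    powers.append(P)

# pi^4 is the central element zeta: t -> -t, so pi has period 8:
assert powers[3] == [[-x for x in row] for row in I4]
assert powers[7] == I4

# The orbit of the base vertex v under pi is a centrally symmetric 8-cycle:
v = (1, 1, 1, 1)
orbit = [v]
for _ in range(7):
    orbit.append(act(orbit[-1], PI))
assert orbit[1] == (1, 1, 1, -1) and orbit[4] == (-1, -1, -1, -1)
```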
Next we use the group $G = \langle \rho_0, \rho_1, \rho_2, \rho_{3}\rangle$ to remanufacture the cube.
In this (geometric) version
of \textit{Wythoff's construction} \cite[\S 2.4]{rcp} we choose a \textit{base vertex} $v$ fixed by the
subgroup $G_0 := \langle \rho_1, \rho_2, \rho_{3}\rangle$ (which permutes the coordinates in all ways). Thus,
$v = c(1,1,1,1)$ for some $c \in \mathbb{R}$. To avoid a trivial construction we take $c \neq 0$, so, up to
similarity, we may
use $c=1$. Then the orbit of $v$ under $G$ is just the set of $16$ points in (\ref{vert}); and their convex hull
returns $\mathcal{P}$ to us. Since $G_0$ is the full stabilizer of $v$ in $G$, the vertices correspond to right
cosets $G_0\gamma$.
The beauty of Wythoff's construction is that all faces of $\mathcal{P}$ can be constructed in a similar way by induction on
dimension (\cite[Section 1B]{arp}, \cite{coxeter10} and \cite{GeRP}). For example, the vertices $v = (1,1,1,1)$ and $v\rho_0 = (-1,1,1,1)$
of the \textit{base edge} of $\mathcal{P}$
are just the orbit of $v$ under the subgroup $G_1 := \langle \rho_0, \rho_2, \rho_{3}\rangle$; and edges
of $\mathcal{P}$ correspond to right cosets of the new subgroup $G_1$. Furthermore, a more careful look reveals that
a vertex is incident with an edge just when the corresponding cosets have non-trivial intersection.
Pursuing this, we see that the face lattice of $\mathcal{P}$ can be reconstructed as a coset geometry based
on subgroups
\begin{equation}\label{dist}
G_0, G_1, G_2, G_3, \;\;\mathrm{where}\;G_j := \langle \rho_0,\ldots, \hat{\rho_{j}},\ldots, \rho_{3}\rangle.
\end{equation}
From this point of view, $\mathcal{P}$ becomes an
\textit{abstract regular $4$-polytope}, a partially ordered set whose automorphism group is $G$.
Notice that the distinguished subgroups in (\ref{dist}) provide the
proper faces in a \textit{flag} in $\mathcal{P}$, namely a mutually incident vertex, edge, square and $3$-cube.
The crucial structural property of $G$ is that it should be
a string C-group with respect to the generators $\rho_j$.
A \textit{string C-group} is a quotient
of a Coxeter group with linear diagram in which an `intersection condition' holds for the subgroups
generated by subsets of the generators, such as those in (\ref{dist})
\cite[Section 2E]{arp}.
For the $4$-cube $\mathcal{P}$, $G$ is actually isomorphic to the
Coxeter group $B_4$ with diagram
\begin{equation}\label{B4diag}{\bullet}\frac{4}{\;\;\;\;\;\;\;} {\bullet}\frac{3}{\;\;\;\;\;\;\;}{\bullet}\frac{3}{\;\;\;\;\;\;\;}
{\bullet}
\end{equation}
Comparing the geometric and abstract points of view, we say that the convex $4$-cube
is a \textit{realization} of its face lattice (the abstract $4$-cube).
When we think of a polytope from the abstract point of view, we often use the term \textit{rank} instead
of `dimension'. An abstract polytope $\Q$ is said to be \textit{regular} if its automorphism group
is transitive on flags (maximal chains in $\Q$).
Intuitively, regular polytopes have maximal symmetry (by reflections). Next up
are \textit{chiral} polytopes, with exactly two flag orbits and such that adjacent flags are
always in different orbits (so maximal symmetry by rotations, but without reflections).
We will soon encounter less familiar abstract regular or chiral polytopes, with their
realizations. %
For a first example, suppose that we map (by central projection)
the faces of $\mathcal{P}$ onto the $3$-sphere $\mathbb{S}^3$
centred at the origin. We can then reinterpret $\mathcal{P}$ as a regular \textit{spherical polytope} (or tessellation),
with the same symmetry group $G$. Now the centre of $G$ is
the subgroup $\langle \zeta\rangle$ of order $2$. The quotient group $G/\langle \zeta\rangle$
has order $192$ and is still a string C-group. The corresponding regular polytope
is the \textit{hemi-$4$-cube} $\mathcal{H} = \{4,3,3\}_4$, now realized in projective
space $\mathbb{P}^3$ \cite[Section 6C]{arp}; see Figure~\ref{k44fig}.
By (\ref{cent}), the product of the four
generators of $G/\langle \zeta\rangle$ has order $4$; this is recorded as the subscript in the
Schl\"{a}fli symbol for $\mathcal{H}$.
Now we can outline the construction of
Roli's cube given in \cite{bracho:2014aa}.
\medskip
\section{Colourful polytopes}\label{copo}
The image in Figure~\ref{proj2} or on the left in Figure~\ref{proj1} can just as well be understood as a graph $\mathcal{G}$,
namely the \textit{$1$-skeleton} of the $4$-cube $\mathcal{P}$. In fact, we can recreate the
abstract (or combinatorial) structure of $\mathcal{P}$ from just the edge colouring of $\mathcal{G}$:
for $0\leq j \leq 4$, the $j$-faces of $\mathcal{P}$ can be identified with the components of those subgraphs
obtained by keeping just edges with some selection of the $j$ colours (over all such choices).
We therefore say that $\mathcal{P}$ is a \textit{colourful polytope}.
Such polytopes were introduced in
\cite{schulteGID3}. In general, one begins with a finite, connected $d$-valent graph $\mathcal{G}$
admitting a (proper) edge colouring, say by the symbols $1,\ldots,d$.
Thus each of the colours provides a $1$-factor for $\mathcal{G}$.
The graph $\mathcal{G}$ determines an (abstract) colourful polytope $\mathcal{P}_{\mathcal{G}}$ as follows. For $0\leq j \leq d$,
a typical $j$-face $(C,v)$ is identified with the set of all vertices of $\mathcal{G}$
connected to a given vertex $v$ by a path using
only colours from some subset $C$ of size $j$ taken from $\{1,\ldots,d\}$. The $j$-face $(C,v)$
is incident with the $k$-face $(D,w)$ just when $C \subseteq D$ and $w$ can be reached from $v$
by a $D$-coloured path. (This means that $j\leq k$; and we can just as well take $w=v$.
The minimal face of rank $-1$ in $\mathcal{P}_{\mathcal{G}}$ is formal.) Notice that $\mathcal{P}_{\mathcal{G}}$ is a \textit{simple}
$d$-polytope whose $1$-skeleton is just $\mathcal{G}$ itself.
From \cite[Theorem 4.1]{schulteGID3}, the automorphism group of $\mathcal{P}_{\mathcal{G}}$ is isomorphic to the group
of colour-preserving graph automorphisms of $\mathcal{G}$. (Such automorphisms are allowed to
permute the $1$-factors.)
It is easy to see that the hemi-$4$-cube $\mathcal{H}$ is also colourful. Its $1$-skeleton is the
complete bipartite graph $K_{4,4}$ found in
Figure~\ref{k44fig}.
We obtain this graph from Figure~\ref{proj2} or Figure~\ref{proj1} by identifying antipodal pairs of points,
like $v$ and $-v$.
\begin{figure}[ht]\begin{center}
\includegraphics[height=60mm]{figK44D.pdf}
\end{center}
\caption{The graph $K_{4,4}$ in (a) is the $1$-skeleton of the hemi-$4$-cube $\{4,3,3\}_4$ (b),(c).}
\label{k44fig}
\end{figure}
If we lift $\mathcal{H}$, as it is now, to $\mathbb{S}^3$, we regain the coloured
$4$-cube $\mathcal{P}$. Now keep $K_{4,4}$ embedded in $\mathbb{P}^3$, as in Figure~\ref{k44fig}.
But, following \cite{bracho:2014aa}, observe that $K_{4,4}$ admits the automorphism $\alpha$
which cyclically permutes, say, the first three vertices $y,w,x$ in the top block, leaving the rest fixed.
Clearly, $\alpha$ is a non-colour-preserving automorphism of $K_{4,4}$, so its effect
is to recolour $12$ of the edges in the embedded graph.
On the abstract level nothing has
changed for the resulting colourful polytope; it is still the hemi-$4$-cube $\mathcal{H}$.
But faces of ranks $2$ and $3$
are now differently embedded in $\mathbb{P}^3$.
For example, the red-blue $2$-face on $v$, which is planar in Figure~\ref{k44fig}(b), becomes
a helical quadrangle in Figure~\ref{k44fig}(c) and thereby acquires an orientation.
According to Definition~\ref{petdef}, these helical polygons are Petrie
polygons for the standard realization of $\mathcal{H}$ in Figure~\ref{k44fig}(b).
The newly coloured geometric object, which we might label $\mathcal{H}^R$, is a
\textit{chiral realization} of the abstract regular polytope $\mathcal{H}$. Comforted by the fact that $\mathbb{P}^3$ is orientable, we could just as well apply
$\alpha^{-1}$ to obtain the left-handed version $\mathcal{H}^L$. These two \textit{enantiomorphs}
are oppositely embedded in $\mathbb{P}^3$, though both remain isomorphic to $\mathcal{H}$
as partially ordered sets.
If we lift either enantiomorph to $\mathbb{S}^3$, we obtain a chiral
$4$-polytope faithfully realized in $\mathbb{E}^4$ \cite[Theorem 2]{bracho:2014aa}. This is Roli's cube $\mathcal{R}$.
Next we set the stage for a slightly different construction of $\mathcal{R}$,
without the use of $\mathbb{P}^3$.
\section{Petrie polygons of the $4$-cube}\label{petri}
Let us consider the progress of the base vertex $v = (1,1,1,1)$ as we apply successive powers of $\pi$ in (\ref{pet1}).
We get a centrally symmetric $8$-cycle of vertices
$$v\rightarrow(1,1,1,-1)\rightarrow (1,1,-1,-1)\rightarrow (1,-1,-1,-1)\rightarrow
-v = (-1,-1,-1,-1)\rightarrow \ldots \;.
$$
Starting from $v$ in Figure~\ref{proj1} we therefore proceed in coordinate directions 4, 3, 2, 1
(indicated by different colours), then repeat again. This traces out the peripheral octagon $\mathcal{C}$,
which in fact is a Petrie polygon for $\mathcal{P}$.
\begin{definition}\label{petdef}
A \textit{Petrie polygon} of a $3$-polytope is an edge-path such that
any $2$ consecutive edges, but no $3$, belong to a $2$-face.
We then say that a \textit{Petrie polygon} of a $4$-polytope $\Q$ is an edge-path such that
any $3$ consecutive edges, but no $4$, belong to \emph{(}a Petrie polygon of\emph{)} a facet of $\Q$.
\end{definition}
For the cube $\mathcal{P}$, the parenthetical condition is actually superfluous; compare \cite{coxweissA}.
Clearly, we can begin a Petrie polygon at any vertex, taking any of the $4!$ orderings of the colours. But this counts
each octagon in $16$ ways. We conclude that $\mathcal{P}$ has $24$ Petrie polygons. What we really use here is
the fact that $G$ is transitive on vertices, and that at any fixed vertex, $G$ permutes the edges in all
possible ways. We see that $G$ acts transitively on Petrie polygons.
But the (global) stabilizer of $\mathcal{C}$ (constructed above with the help of $\pi$ and $v$) is the dihedral group $K$ of order $16$ generated by
$\mu_0 = \rho_0\rho_2\rho_3\rho_2 = (-1,1,1,1)\cdot(2,4),$ and
$\mu_1 = \mu_0\pi = \rho_2\rho_3\rho_2\rho_1\rho_2\rho_3 = (1,1,1,1)\cdot (1,4)(2,3)$.
(Such calculations are routine using either signed permutation matrices or the decomposition in
$C_2^4 \rtimes S_4$. Note that any $4$ consecutive vertices of $\mathcal{C}$ form a
basis of $\mathbb{E}^4$.) We confirm that $\mathcal{P}$ has $24 =384/16$ Petrie polygons.
Now we move to the \textit{rotation subgroup}
$$G^+ = \langle \rho_0\rho_1, \rho_1\rho_2, \rho_2\rho_3 \rangle.$$
It has order $192$ and consists of the signed permutation matrices of determinant $+1$.
Note that $K< G^+$. Thus, under the action of $G^+$, there are two orbits of Petrie
polygons, with $12$ polygons in each. Let's label these two \textit{chiral classes}
$R$ and $L$ for right- and left-handed, taking $\mathcal{C}$ to be in class $R$.
The two chiral classes must be swapped
by any non-rotation, such as any $\rho_j$. To distinguish
them, we could take the determinant of the matrix whose rows are any $4$ consecutive vertices on
a Petrie polygon. The two chiral classes $R$ and $L$ then have determinants $+8$ and $-8$, respectively.
Or starting from a common vertex, the edge-colour sequence along a polygon in one class is an
odd permutation of the colour sequence for a polygon in the other class.
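Both determinant claims can be confirmed numerically. The little expansion routine below is our own illustration, with the four consecutive vertices of $\mathcal{C}$ read off from the orbit of $v$ under $\pi$.

```python
def det(m):
    # Laplace expansion along the first row; fine for small integer matrices
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

# Four consecutive vertices of the octagon C (class R):
C4 = [[1, 1, 1, 1], [1, 1, 1, -1], [1, 1, -1, -1], [1, -1, -1, -1]]
assert det(C4) == 8

# The image of C under the reflection swapping coordinates 1 and 2,
# a non-rotation, lands in class L:
C4_L = [[r[1], r[0], r[2], r[3]] for r in C4]
assert det(C4_L) == -8
```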
The inner octagram ${\mathcal{C}}^*$ in Figure~\ref{proj1} is another Petrie polygon. Start at the vertex
$w = (v)\rho_1\rho_0\rho_1 = (1,-1,1,1)$
which is adjacent to $v$ along a red edge; then proceed in directions 4, 1, 2, 3 and repeat. However, the remaining
Petrie polygons appear in less symmetrical fashion in Figure~\ref{proj1}.
Note that $\mu_0$ actually acts on the diagram in Figure~\ref{proj1} as a reflection in a vertical line,
whereas $\pi$ rotates the octagon $\mathcal{C}$ and octagram $\mathcal{C}^*$ in opposite senses. On the other hand, $\mu_2 = \rho_1\rho_2\rho_0\rho_1 = (1,-1,1,1)\cdot (1,3)$ is an element of $G^+$
which swaps $\mathcal{C}$ and $\mathcal{C}^*$. Thus there are $6$ such unordered pairs
like $\mathcal{C}, \mathcal{C}^*$ in class $R$ and another $6$ pairs in class $L$.
\medskip
\begin{remark}
It can be shown that Figure~\ref{proj1} is the most symmetric orthogonal projection of $\mathcal{P}$ to a plane
\cite[\S 13.3]{rp}. Since all edges after projection have a common length, we may say that this projection
is \textit{isometric}.
The Petrie symmetry $\pi$ is one instance of a \textit{Coxeter element} in the group $G = B_4$, namely
a product of the four generators in some order.
All such Coxeter elements are conjugate. Each of them has invariant planes
which give rise to the sort of orthogonal projection displayed in Figure~\ref{proj1}.
A procedure for finding these planes is detailed in \cite[3.17]{humph}.
For $\pi$, the
two planes are spanned by the rows of
$$\left[\begin{array}{rrrr}
\frac{1}{\sqrt{2}}& \frac{1}{2} & 0& \frac{-1}{2} \\
0& \frac{1}{2} & \frac{1}{\sqrt{2}} & \frac{1}{2} \end{array}\right]
\;\mathrm{and}\;
\left[\begin{array}{rrrr}
\frac{1}{\sqrt{2}}& \frac{-1}{2} & 0& \frac{1}{2} \\
0& \frac{1}{2} & \frac{-1}{\sqrt{2}} & \frac{1}{2} \end{array}\right]
.$$
These planes are orthogonal complements; and $\pi$ acts on them by
rotations through $45^{\circ}$ and $135^{\circ}$, respectively. Figure~\ref{proj1} results from
projecting $\mathcal{P}$ onto the first plane.
\hfill$\square$
\end{remark}
\bigskip
\section{The map $\mathcal{M}$ and the M\"{o}bius-Kantor Configuration $8_3$}\label{83con}
Look again at the companion Petrie polygons
$\mathcal{C},\mathcal{C}^*$ in Figure~\ref{proj1}.
Now, working around the rim clockwise from $v$, delete the edges coloured
blue, red, black, green, and repeat. We are left with the trivalent graph $\mathcal{L}$
displayed in Figure~\ref{levi}.
\begin{figure}[ht]\begin{center}
\includegraphics[width=70mm]{fig1C.pdf}
\end{center}
\caption{The Levi graph $\mathcal{L} = \{8\} + \{\frac{8}{3}\}$ for the configuration $8_3$.}
\label{levi}
\end{figure}
In fact, $\mathcal{L}$ is the \textit{generalized Petersen graph} $\{8\} + \{\frac{8}{3}\}$, studied in detail
by Coxeter in \cite[Section 5]{Coxeter:1950aa}. The graph is $2$-arc transitive, so that its automorphism group
has order $96 = 16\cdot 3\cdot 2^{2-1}$ \cite[Chapter 18]{biggsAGT}. We return to this group later.
We have labelled alternate vertices of $\mathcal{L}$ by the residues $0,1,2,3,4,5,6,7\pmod{8}$.
These will represent the points in a M\"{o}bius-Kantor configuration $8_3$. The remaining
(unlabelled) vertices of $\mathcal{L}$ represent the $8$ lines in the configuration. Thus we have lines
$013$ (represented by the `north-west' vertex $(-1,-1,1,1)$), $124$, $235$, and so on, including
line $671$ represented by $v$.
Notice that we can interpret the configuration as comprising two quadrangles with vertices $0,2,4,6$ and $1,3,5,7$,
\textit{each inscribed in the other}: vertex $0$ lies on edge $13$, vertex $1$ lies on edge $24$, and so on.
So far this configuration $8_3$ is purely abstract. In fact, it can be realized as
a point-line configuration in a projective (or affine) plane
over any field in which $$z^2 -z +1 = 0$$
has a root, certainly over $\mathbb{C}$. However, $8_3$ cannot be realized
in the real plane.
Coxeter made other observations in \cite{Coxeter:1950aa}, including the fact that the graph $\mathcal{L}$ is
a sub-$1$-skeleton of the $4$-cube. Altogether $\mathcal{L}$ contains
$6$ Petrie polygons, which we can briefly describe by their alternate vertices:
$$
\begin{array}{ccc}
0246 (= \mathcal{C}^*)\;\; &0541 & 1256 \\
1357 (=\mathcal{C})\;\; &2367 & 0743 \\
\end{array}
$$
Hence, the configuration can be regarded as a pair of mutually inscribed quadrangles
in three ways.
Observe that each edge of $\mathcal{L}$ lies on exactly two of the $6$ octagons. For example, the top edge
with vertices labelled $1$ and $v$ lies on octagons $1357$ and $1256$. (It does not matter that two such
octagons then share a second edge opposite the first.)
Furthermore, each vertex lies on the three octagons determined by
choices of two edges. We can thereby construct a $3$-polytope $\mathcal{M}$ of type $\{8,3\}$, with $6$ octagonal
faces, whose $1$-skeleton is $\mathcal{L}$. In short, $\mathcal{M}$ is realized by substructures of the $4$-cube $\mathcal{P}$.
Moving sideways, we can reinterpret $\mathcal{M}$ in a more familiar topological way as a map
on a compact orientable surface of genus $2$. Recall that $\mathcal{M}$ is covered by the tessellation $\{8,3\}$ of the hyperbolic plane, as indicated
in Figure~\ref{83map}.
\begin{figure}[ht]\begin{center}
\includegraphics[width=90mm]{map3.pdf}
\end{center}
\caption{Part of the tessellation $\{8,3\}$ of the hyperbolic plane.}
\label{83map}
\end{figure}
Now return to $\mathbb{E}^4$ where the combinatorial structure of $\mathcal{M}$ is handed to us as faithfully realized.
Drawing on \cite[Section 8.1]{coxmos}, we have that the rotation group $\Gamma(\mathcal{M})^+$ for $\mathcal{M}$
is generated by two special Euclidean symmetries:
$\sigma_1 = \pi = \rho_0\rho_1\rho_2\rho_3 = (-1,1,1,1)\cdot (4,3,2,1)$ (preserving the base octagon $\mathcal{C}$); and
$\sigma_2 = \rho_3\rho_2\rho_1\rho_3 = (1,1,1,1) \cdot (1,2,4)$ (preserving the base vertex $v$ on $\mathcal{C}$).
The order of $\Gamma(\mathcal{M})^+$ \textit{must} then be twice the number of edges in $\mathcal{M}$, namely $48$.
Let us assemble these and further observations in
\begin{proposition}\label{mapstuff}
\emph{(a)} The $3$-polytope $\mathcal{M}$ is abstractly regular of type $\{8,3\}$, here realized in $\mathbb{E}^4$
in a geometrically chiral way.
\emph{(b)} The rotation subgroup $\Gamma(\mathcal{M})^+ = \langle \sigma_1, \sigma_2\rangle$ has order $48$ and presentation
\begin{equation}\label{presmpl}
\langle \sigma_1, \sigma_2\,|\, \sigma_1^8 = \sigma_2^3 = (\sigma_1\sigma_2)^2 =
(\sigma_1^{-3}\sigma_2)^2 = 1 \rangle
\end{equation}
\emph{(c)} The full automorphism group $\Gamma(\mathcal{M})$ has order $96$ and presentation
\begin{equation}\label{presm}
\langle \tau_0, \tau_1, \tau_2\,|\, \tau_0^2 =\tau_1^2 =\tau_2^2
= (\tau_0\tau_1)^{8} = (\tau_1\tau_2)^{3} = (\tau_0\tau_2)^{2} = ((\tau_1\tau_0)^3 \tau_1\tau_2)^{2} =1 \rangle
\end{equation}
\end{proposition}
\noindent\textbf{Proof}. We begin with (b), where
it is easy to check that the relations in (\ref{presmpl}) do hold for the matrix group
$\langle \sigma_1, \sigma_2\rangle$. By a straightforward coset enumeration
\cite[Chapter 2]{coxmos}, we conclude from the presentation in (\ref{presmpl}) that the subgroup $\langle \sigma_1\rangle$
has the $6$ coset representatives
$$ 1, \sigma_1, \sigma_1^2, \sigma_2\sigma_1^{-1}, \sigma_2^2\sigma_1, \sigma_2\sigma_1^{-1}\sigma_2.$$
(We abuse notation by passing freely between the matrix group and abstract group.) This finishes (b).
We next note that $\langle \sigma_1\rangle \cap \langle \sigma_2\rangle = \{1\}$,
since $\sigma_1^j$ fixes $v$ only for $j \equiv 0 \pmod{8}$. Now we are justified in invoking
\cite[Theorem 1(c)]{schulte1}, whereby the $3$-polytope $\mathcal{M}$ is regular (rather than just chiral)
if and only if the mapping $\sigma_1 \mapsto \sigma_1^{-1}, \; \sigma_2 \mapsto \sigma_1^{2}\sigma_2$
induces an involutory automorphism $\tau$ of $\Gamma(\mathcal{M})^+$.
But the new relations induced by applying the mapping to (\ref{presmpl}) are easily verified
formally, or even by matrices. For instance, since $\sigma_1\sigma_2 = \sigma_2^{-1}\sigma_1^{-1}$,
we have
$$(\sigma_1^2\sigma_2)^3 = (\sigma_1\sigma_2^{-1}\sigma_1^{-1})^3 = \sigma_1\sigma_2^{-3}\sigma_1^{-1} = 1.$$
Thus $\mathcal{M}$ is abstractly regular and $\Gamma(\mathcal{M})$ has order $96$. The presentation in
(\ref{presm}) follows at once by extending $\Gamma(\mathcal{M})^+$ by $\langle \tau\rangle$, then letting
$\tau_0 := \tau, \tau_1 := \tau \sigma_1, \tau_2 := \tau \sigma_1\sigma_2$.
It remains to check that our realization is geometrically chiral. This means that
$\tau$ is not represented by a symmetry of $\mathcal{M}$ as realized in $\mathbb{E}^4$. From the combinatorial structure, $\tau$ would have to swap vertices $1$ and $v$ while preserving the two Petrie polygons on that edge.
This means that $\tau$ would have to act just like $\mu_0$, that is, just like reflection
in a vertical line in Figure~\ref{proj1}. But $\mu_0$ does not preserve the set of $8$ edges deleted to give $\mathcal{L}$
in Figure~\ref{levi}.
\hfill$\square$
\begin{remark}
It is helpful to note that the centre of
$\Gamma(\mathcal{M})^+$ is generated by $\sigma_1^4$.
Referring to \cite[Section 6.6]{coxmos}, we find that $\Gamma(\mathcal{M})^+$ is isomorphic to the group
$\langle -3,4 | 2\rangle$,
which in turn is an extension by $C_2$ of the binary tetrahedral group $\langle 3,3,2\rangle$.
Indeed, $a = \sigma_1^{-1} \sigma_2 \sigma_1^{-1}, b = \sigma_2 \sigma_1^4$
satisfy $ a^3 = b^3 = (ab)^2\; (= \zeta)$. Thus, $\langle 3,3,2\rangle \triangleleft \Gamma(\mathcal{M})^+$.
\end{remark}
\section{Roli's cube -- a chiral polytope $\mathcal{R}$ of type $\{8,3,3\}$}\label{rolq}
Under the action of $G^+$ we expect to find
$4 = 192/48$ copies of $\mathcal{M}$. To understand this better, recall that there are
$12$ Petrie polygons in one chiral class, say $R$. As with $\mathcal{C}$ and $\mathcal{C}^*$,
each polygon $\mathcal{D}$ is paired with a unique polygon $\mathcal{D}^*$ (with the disjoint set of $8$ vertices).
For each $\mathcal{D}$ there are then \textit{two} ways to remove $8$ edges so as to get a copy
of $\mathcal{L}$ and hence a copy of $\mathcal{M}$. Since $\mathcal{M}$ has six $2$-faces like $\mathcal{C}$, we once more find
$12\cdot2/6 = 4$ copies of $\mathcal{M}$.
Each Petrie polygon lies on $2$ copies of $\mathcal{M}$, again from the two
ways to remove $8$ edges. For example, $\mathcal{C}$ lies on both $\mathcal{M}$ and $(\mathcal{M})\mu_0$.
(The same is true for $\mathcal{C}^*$.)
The pointwise stabilizer in $G^+$ of the base edge joining
$v = (1,1,1,1)$ and $(v)\mu_0 = (-1,1,1,1)$ must consist of
pure, unsigned even permutations of $\{2,3,4\}$. Therefore it is generated by
$$\sigma_3 := \rho_2\rho_3 = (1,1,1,1)\cdot(2,4,3).$$
It is easy to check that $G^+ = \langle \sigma_1, \sigma_2, \sigma_3 \rangle$.
Since three consecutive edges of a Petrie polygon lie on
two adjacent square faces in a cubical facet of $\mathcal{P}$, it must be that every vertex
of $\mathcal{R}$ has the same vertex-figure as $\mathcal{P}$, thus of tetrahedral type $\{3,3\}$.
We have enumerated and (implicitly) assembled the faces of a $4$-polytope $\mathcal{R}$,
faithfully realized in $\mathbb{E}^4$ and symmetric under the action of $G^+$.
Let's take stock of its proper faces:
\medskip
\begin{center}
\begin{tabular}{c|c|c|c|c}
rank & stabilizer in $G^+$ &order& number of faces& type \\ \hline
$0$& $\langle \sigma_2 , \sigma_3 \rangle$ & 12& $16$ & vertex of cube $\mathcal{P}$\\ \hline
$1$& $\langle \sigma_1 \sigma_2, \sigma_3 \rangle$ & 6&$32$ & edge of $\mathcal{P}$\\ \hline
$2$& $\langle \sigma_1, \sigma_2 \sigma_3 \rangle$ &16 &$12$ & Petrie polygons of $\mathcal{P}$ in one class $R$ \\ \hline
$3$& $\langle \sigma_1, \sigma_2 \rangle$ & 48& $4$ & copy of $\mathcal{M}$\end{tabular}
\end{center}
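The orders in this table, and $|G^+| = 192$ itself, can be confirmed by brute force. The following script is our own check; it encodes each signed permutation $(\epsilon_1,\ldots,\epsilon_4)\cdot p$ as the matrix with entry $\epsilon_i$ in row $i$, column $p(i)$, and closes each generating set under multiplication.

```python
def mat(signs, perm):
    # entry signs[i] in row i, column perm[i] (1-indexed image of i)
    m = [[0] * 4 for _ in range(4)]
    for i in range(4):
        m[i][perm[i] - 1] = signs[i]
    return tuple(map(tuple, m))

def mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4))
                 for i in range(4))

def closure(gens):
    # multiplicative closure; for a finite matrix group this is the generated subgroup
    seen, frontier = set(gens), list(gens)
    while frontier:
        g = frontier.pop()
        for h in gens:
            k = mul(g, h)
            if k not in seen:
                seen.add(k)
                frontier.append(k)
    return seen

s1 = mat((-1, 1, 1, 1), (4, 1, 2, 3))   # sigma_1 = (-1,1,1,1).(4,3,2,1)
s2 = mat((1, 1, 1, 1), (2, 4, 3, 1))    # sigma_2 = (1,1,1,1).(1,2,4)
s3 = mat((1, 1, 1, 1), (1, 4, 2, 3))    # sigma_3 = (1,1,1,1).(2,4,3)

assert len(closure([s1, s2, s3])) == 192          # G+ itself
assert len(closure([s2, s3])) == 12               # vertex stabilizer
assert len(closure([mul(s1, s2), s3])) == 6       # edge stabilizer
assert len(closure([s1, mul(s2, s3)])) == 16      # 2-face stabilizer
assert len(closure([s1, s2])) == 48               # facet stabilizer
```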
It is not hard to see that our $4$-polytope $\mathcal{R}$ is isomorphic to Roli's cube, as constructed in
\cite{bracho:2014aa} and as described in Section~\ref{copo}.
\begin{theorem}\label{rconst}
\emph{(a)} The $4$-polytope $\mathcal{R}$ is abstractly chiral of type $\{8,3,3\}$. Its symmetry group
$\Gamma(\mathcal{R}) \simeq G^+ $ has order $192$ and the presentation
\begin{eqnarray}
\langle \sigma_1, \sigma_2, \sigma_3 & |& \sigma_1^8 = \sigma_2^3 = \sigma_3^3 = (\sigma_1 \sigma_2)^2 = (\sigma_2 \sigma_3)^2 =
(\sigma_1 \sigma_2 \sigma_3)^2 = 1\label{rolipresA}\\
& &\hspace*{20mm} (\sigma_1^{-3}\sigma_2)^2 = 1 \label{rolipresB}\\
& & \hspace*{20mm} (\sigma_1^{-1}\sigma_3)^4 = 1 \;\;\rangle \label{rolipresC}
\end{eqnarray}
\emph{(b)} $\mathcal{R}$ is faithfully realized as a geometrically chiral polytope in $\mathbb{E}^4$.
\end{theorem}
\noindent\textbf{Proof}. The relations in (\ref{rolipresA}) are standard for chiral $4$-polytopes \cite[Theorem 1]{schulte1}; and we have seen that the relation in (\ref{rolipresB}) is a special feature of the facet $\mathcal{M}$.
Enumerating cosets of the subgroup $\langle \sigma_1, \sigma_2\rangle$, which still
has order $48$, we find at most the $8$ cosets represented by
$$1, \sigma_3, \sigma_3^2, \sigma_3^2 \sigma_1,
\sigma_3^2 \sigma_1^2, \sigma_3^2 \sigma_1^2 \sigma_2, \sigma_3^2\sigma_1^2 \sigma_2^2, \sigma_3^2 \sigma_1^2\sigma_3 .$$
Thus the group defined by
(\ref{rolipresA}) and (\ref{rolipresB}) has order at most $384$. But $G^+$, where these relations do hold, has
order $192$. We require an independent relation. In Section~\ref{mincov}, we will see why (\ref{rolipresC})
is just what we need.
To show that $\mathcal{R}$ is abstractly chiral we must demonstrate that the mapping
$ \sigma_1 \mapsto \sigma_1^{-1}, \sigma_2 \mapsto \sigma_1^2\sigma_2, \sigma_3\mapsto\sigma_3$
does not extend to an automorphism of $G^+$. This is easy, since
%
%
\begin{equation}\label{chirpr}
(\sigma_1\sigma_3)^4 = \zeta \;\mathrm{whereas}\;(\sigma_1^{-1} \sigma_3)^4 = 1.
\end{equation}
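The two fourth powers in (\ref{chirpr}) can be verified directly with signed permutation matrices. The helper functions below are our own, using the fact that the inverse of a signed permutation matrix is its transpose, and encoding $(\epsilon_1,\ldots,\epsilon_4)\cdot p$ as the matrix with entry $\epsilon_i$ in row $i$, column $p(i)$.

```python
def mat(signs, perm):
    m = [[0] * 4 for _ in range(4)]
    for i in range(4):
        m[i][perm[i] - 1] = signs[i]
    return tuple(map(tuple, m))

def mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4))
                 for i in range(4))

def power(a, e):
    r = a
    for _ in range(e - 1):
        r = mul(r, a)
    return r

def transpose(a):   # = inverse, since these matrices are orthogonal
    return tuple(zip(*a))

s1 = mat((-1, 1, 1, 1), (4, 1, 2, 3))   # sigma_1
s3 = mat((1, 1, 1, 1), (1, 4, 2, 3))    # sigma_3
I4 = mat((1, 1, 1, 1), (1, 2, 3, 4))
zeta = power(s1, 4)                     # the central element zeta = -I

assert zeta == tuple(tuple(-(i == j) for j in range(4)) for i in range(4))
assert power(mul(s1, s3), 4) == zeta            # (sigma_1 sigma_3)^4 = zeta
assert power(mul(transpose(s1), s3), 4) == I4   # (sigma_1^{-1} sigma_3)^4 = 1
```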
Clearly, $\mathcal{R}$ is realized in a geometrically chiral way in
$\mathbb{E}^4$; we have already seen this for its facet $\mathcal{M}$.
Our concrete geometrical arguments should suffice to convince the reader that
we really have described here a chiral $4$-polytope identical to the original
Roli's cube. A skeptic can nail home the proof by applying \cite[Theorem 1]{schulte1} to the group $G^+$, as generated above.
\hfill$\square$
\bigskip
\section{Realizing the Minimal Regular Cover of $\mathcal{R}$}\label{mincov}
The rotation group $G^+$ for the cube has order $192$ and `standard' generators
$\rho_0\rho_1, \rho_1\rho_2, \rho_2\rho_3$.
But for our purposes we use either of two alternate sets of generators. We already have
\begin{equation}\label{altgen1}
\sigma_1 = (\rho_0\rho_1)(\rho_2\rho_3),\; \sigma_2 = (\rho_3\rho_2)(\rho_1\rho_2)(\rho_2\rho_3),\; \sigma_3 =\rho_2\rho_3.
\end{equation}
Now we also want
\begin{equation}\label{altgen2}
\bar{\sigma}_1 = \sigma_1^{-1},\; \bar{\sigma}_2 = \sigma_1^{2}\sigma_2,\;\bar{\sigma}_3 = \sigma_3.
\end{equation}
Recalling our shorthand for such matrices, we have
$$ \sigma_1 = (-1,1,1,1)\cdot(4,3,2,1);\; \sigma_2 = (1,1,1,1)\cdot(1,2,4) ;\; \sigma_3 = (1,1,1,1)\cdot(2,4,3),
$$
and
$$\bar{\sigma}_1 = (1,1,1,-1)\cdot(1,2,3,4);\; \bar{\sigma}_2 = (-1,-1,1,1)\cdot(1,3,2); \;\bar{\sigma}_3 = (1,1,1,1)\cdot(2,4,3).
$$
We have seen that
the group $G^+ = \langle \sigma_1, \sigma_2, \sigma_3\rangle$ (with these specified generators)
is the rotation (and full automorphism) group
of the chiral polytope $\mathcal{R}$ of type $\{8,3,3\}$. From \cite[Section 3]{schulte2} we have that the (differently generated) group
$\bar{G^+} = \langle \bar{\sigma}_1, \bar{\sigma}_2, \bar{\sigma}_3\rangle$ is the automorphism group
for the \textit{enantiomorphic} chiral polytope $\bar{\mathcal{R}}$ .
By generating the common group in these two ways we effectively exhibit
right- and left-handed versions of the same polytope.
Our geometrical realization of $\mathcal{R}$ began with the base vertex $v = (1,1,1,1)$ (which
also served as base vertex for the $4$-cube $\mathcal{P}$). It is crucial here that $v$ does span the
subspace fixed by $\sigma_2$ and $\sigma_3$. By instead taking
$\bar{G^+}$ with base vertex $\bar{v} = (-1,1,1,1)$, fixed by $\bar{\sigma}_2$ and $\bar{\sigma}_3$,
we have a faithful geometric realization of $\bar{\mathcal{R}}$, still in $\mathbb{E}^4$, of course.
We will soon have good reason to
mix $G^+$ and $\bar{G^+}$ in a geometric way. Each group acts irreducibly on $\mathbb{E}^4$.
Construct the block matrices $\kappa_j = (\sigma_j, \bar{\sigma}_j)$, $j = 1,2,3$, now acting on
$\mathbb{E}^8$ and preserving two orthogonal subspaces of dimension $4$.
Obviously we may extend our notation for signed permutation matrices to the cubical group $B_8$ acting
on $\mathbb{E}^8$. Thus, taking the second copy of $\mathbb{E}^4$ to have basis $b_5, b_6, b_7, b_8$, we
may combine our descriptions of $\sigma_j, \bar{\sigma}_j$ to get
\begin{eqnarray*}
\kappa_1 &= &(-1,1,1,1,1,1,1,-1) \cdot(4,3,2,1)(5,6,7,8),\\
\kappa_2 &= &(1,1,1,1,-1,-1,1,1) \cdot(1,2,4)(5,7,6),\\
\kappa_3 &= &\;\;\;\;\;(1,1,1,1,1,1,1,1) \cdot (2,4,3)(6,8,7).
\end{eqnarray*}
Now let $T^+ = \langle \kappa_1, \kappa_2, \kappa_3\rangle$.
In slot-wise fashion, $\kappa_1,\kappa_2, \kappa_3$ satisfy relations like those
in (\ref{rolipresA}) and (\ref{rolipresB}). From the proof of Theorem~\ref{rconst}, we conclude that
$T^+$ has order $384$. We even get a presentation for it.
Recall that the centre of $G^+$ is generated by $\zeta = \sigma_1^4 = \bar{\sigma}_1^4$.
Thus the centre of $T^+$ has order $4$, with non-trivial elements
\begin{equation}\label{Tcen}
(\zeta,1) = (\kappa_1\kappa_3)^4,\; (1, \zeta) = (\kappa_1^{-1}\kappa_3)^4,\;\mathrm{and}\;
(\zeta,\zeta) = \kappa_1^4.
\end{equation}
(This is at the heart of the proof that $\mathcal{R}$
is abstractly chiral.) Looking at (\ref{chirpr}), we see that
$$ T^+/\langle (1,\zeta)\rangle \simeq \Gamma(\mathcal{R}),$$
and thus see the reason for the special relation in (\ref{rolipresC}).
Similarly, $ T^+/\langle (\zeta,1)\rangle \simeq \Gamma(\bar{\mathcal{R}})$.
Finally, we have
\begin{equation}\label{rotqu}
T^+/\langle (\zeta,\zeta)\rangle \simeq \Gamma(\mathcal{P})^+,
\end{equation}
the rotation group of the $4$-cube (isomorphic to $G^+$ generated in the customary way).
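The central elements in (\ref{Tcen}) can be checked with explicit $8\times 8$ block matrices. This verification script is our own, again encoding a signed permutation as the matrix with entry $\epsilon_i$ in row $i$, column $p(i)$.

```python
def mat(signs, perm):
    n = len(signs)
    m = [[0] * n for _ in range(n)]
    for i in range(n):
        m[i][perm[i] - 1] = signs[i]
    return tuple(map(tuple, m))

def mul(a, b):
    n = len(a)
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n))
                 for i in range(n))

def power(a, e):
    r = a
    for _ in range(e - 1):
        r = mul(r, a)
    return r

def transpose(a):   # = inverse for signed permutation matrices
    return tuple(zip(*a))

def block_scalar(s_top, s_bottom):
    # diag(s_top * I_4, s_bottom * I_4); e.g. (zeta, 1) = block_scalar(-1, 1)
    return tuple(tuple((s_top if i < 4 else s_bottom) * (i == j) for j in range(8))
                 for i in range(8))

k1 = mat((-1, 1, 1, 1, 1, 1, 1, -1), (4, 1, 2, 3, 6, 7, 8, 5))  # kappa_1
k3 = mat((1,) * 8, (1, 4, 2, 3, 5, 8, 6, 7))                    # kappa_3

assert power(mul(k1, k3), 4) == block_scalar(-1, 1)             # (zeta, 1)
assert power(mul(transpose(k1), k3), 4) == block_scalar(1, -1)  # (1, zeta)
assert power(k1, 4) == block_scalar(-1, -1)                     # (zeta, zeta)
```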
Now $T^+$ is clearly isomorphic to
the \textit{mix} $G^+ \diamondsuit\, \bar{G^+}$
described in \cite[Theorem 7.2]{mixa}. Guided by that result, we
seek an isometry $\tau_0$ of $\mathbb{E}^8$ which swaps the two orthogonal subspaces, while conjugating
each $\sigma_j$ to $\bar{\sigma}_j$. It is easy to check that
$$ \tau_0 = (1,1,1,1,1,1,1,1)\cdot (1,5)(2,6)(3,7)(4,8) $$
does the job.
We find that $T^+$ is the rotation subgroup of a string C-group
$ T = \langle \tau_0, \tau_1, \tau_2 , \tau_3\rangle$, where
$\tau_1=\tau_0 \kappa_1, \tau_2=\tau_0 \kappa_1\kappa_2, \tau_3=\tau_0 \kappa_1\kappa_2\kappa_3$.
The corresponding directly regular
$4$-polytope has type $\{8,3,3\}$ and
must be the minimal regular cover of each of the chiral polytopes $\mathcal{R}$ and $\bar{\mathcal{R}}$. We consolidate
all this in
\bigskip
\begin{theorem}\label{mincovth}
\emph{(a)} The group $ T = \langle \tau_0, \tau_1, \tau_2 , \tau_3\rangle$ is a string C-group
of order $768$ and with the presentation
\begin{eqnarray*}
\langle \tau_0,\tau_1,\tau_2,\tau_3\ & |\ & \tau_j^2= (\tau_0\tau_1)^8 = (\tau_1\tau_2)^3 =(\tau_2\tau_3)^3 = 1,\;0\leq j \leq 3,\\
& & (\tau_0\tau_2)^2 = (\tau_0\tau_3)^2 =(\tau_1\tau_3)^2 = ((\tau_1\tau_0)^3\tau_1\tau_2)^2=1 \rangle
\end{eqnarray*}
\emph{(b)} The corresponding regular $4$-polytope $\mathcal{T}$ has type $\{8,3,3\}$ and is
faithfully realized in $\mathbb{E}^8$,
with base vertex $(v,\bar{v}) = (1,1,1,1,-1,1,1,1)$.
The polytope
$\mathcal{T}$ is the minimal regular cover
for Roli's cube $\mathcal{R}$ and its enantiomorph $\bar{\mathcal{R}}$. It is also a double cover of the $4$-cube $\mathcal{P}$.
\emph{(c)} $\mathcal{T} \simeq \{ \mathcal{M}\, , \, \{3,3\} \}$ is the universal regular polytope with facets $\mathcal{M}$ and tetrahedral vertex-figures.
\end{theorem}
\noindent\textbf{Proof}. The centre of $T$ is generated by $(\zeta,\zeta)$. It is easy to check that
$T/\langle(\zeta,\zeta)\rangle \simeq G$, the full symmetry group of the cube; compare (\ref{rotqu}).
In other words, the mapping $\tau_j\mapsto\rho_j,\;0\leq j \leq 3$, induces an epimorphism
$\varphi: T \rightarrow G$. Since $\tau_1,\tau_2,\tau_3$ and $\rho_1,\rho_2,\rho_3$ both satisfy the defining relations for $\Gamma(\{3,3\}) \simeq S_4$, $\varphi$ is one-to-one on
$\langle \tau_1,\tau_2,\tau_3 \rangle$. By the quotient criterion in \cite[2E17]{arp},
$T$ really is a string C-group. The remaining details are routine. For background on
(c) we refer to \cite[4A]{arp}.
\hfill$\square$
\medskip
Much as in the proof, the assignment $\kappa_j\mapsto\sigma_j,\,(j=1,2,3)$,
induces an epimorphism $\varphi_R: T^+ \rightarrow G^+$. On the abstract level, this in
turn induces a \textit{covering} $\widetilde{\varphi}_R: \mathcal{T} \rightarrow \mathcal{R}$, in other words,
a rank- and adjacency-preserving
surjection of polytopes as partially ordered sets. The corresponding covering of geometric polytopes
is induced by the projection
\begin{eqnarray*}
\mathbb{E}^8 & \rightarrow & \mathbb{E}^4\\
(x,y) & \mapsto & x
\end{eqnarray*}
The projection $(x,y)\mapsto y$ likewise induces the geometrical
covering $\widetilde{\varphi}_L: \mathcal{T} \rightarrow \bar{\mathcal{R}}$.
Both $\widetilde{\varphi}_R$ and $\widetilde{\varphi}_L$ are $3$-coverings, meaning here that each
acts isomorphically on facets $\mathcal{M}$ and vertex-figures $\{3,3\}$ \cite[page 43]{arp}. Notice that each face of
$\mathcal{R}$ and $\bar{\mathcal{R}}$ has two preimages in $\mathcal{T}$.
The polytope $\mathcal{T}$ is also a double cover of the $4$-cube $\mathcal{P}$. But there is no
natural way to embed $\mathcal{P}$ in $\mathbb{E}^8$ to illustrate the geometric covering,
since $\kappa_1^4 = -1$ on any subspace of $\mathbb{E}^8$, whereas $(\rho_0\rho_1)^4 = 1$
for $\mathcal{P}$.
\medskip
\section{Conclusion - the M\"{o}bius-Kantor configuration again}
We noted earlier that $8_3$ can be `realized' as a point-line configuration in
$\mathbb{C}^2$. We will show this here by first endowing $\mathbb{E}^4$ with a
\textit{complex structure}. Thus, we want a suitable orthogonal transformation $J$
on $\mathbb{E}^4$ such that $J^2 = \zeta$.
Keeping the addition, we then define
$$ (a+\imath b) u = au + b(uJ),\;\mathrm{for}\; a,b\in \mathbb{R},\; u \in \mathbb{E}^4.$$
Thus $\imath u = uJ$. Over $\mathbb{C}$, $\mathbb{E}^4$ has dimension $2$.
Our choice for the matrix $J$ is motivated by an orthogonal projection different from that
in Figures~\ref{proj1} and \ref{levi}.
The vectors representing the vertices labelled $0,\ldots,7$ in Figure~\ref{levi} are either opposite
or perpendicular. Thus, these eight points are the vertices
of a cross-polytope $\mathcal{O} = \{3,3,4\}$, one of two inscribed in $\mathcal{P}$.
In \cite[Figure 4.2A]{rcp}, Coxeter gives a projection of $\mathcal{O}$ which nicely displays
certain $2$-faces of $\mathcal{O}$.
\begin{figure}[ht]\begin{center}
\includegraphics[width=60mm]{fig6.pdf}
\end{center}
\caption{Another projection of the cross-polytope $\mathcal{O}$.}
\label{cross}
\end{figure}
In Figure~\ref{cross}, each vertex of either of the two concentric squares
forms an equilateral triangle with one edge of the other square. These $8$ triangles
correspond to the unlabelled nodes in Figure~\ref{levi}, and also to the lines of
the configuration $8_3$. (Any real triangle lies on a unique complex line in $\mathbb{C}^2$.)
We may take the vertices in Figure~\ref{cross} to be
$(\pm1,\pm1)$ and $(\pm r,0), (0,\pm r)$, where $r = \sqrt{3}-1$.
But what plane $\Lambda$ in $\mathbb{E}^4$ actually gives such a projection?
Starting with an unknown basis $a_1, b_1$ for $\Lambda$, we can force a lot. For example,
edge $[2,0]$ is the projection of $(0,2,-2,0)$ and is obtained from $[7,0]$,
the projection of $(2,2,0,0)$, by a rotation through $60^{\circ}$. From such details in the geometry,
we soon find that $\Lambda$ is uniquely determined and get a basis
satisfying $a_1\cdot a_1 = b_1\cdot b_1$ and $a_1\cdot b_1 = 0$. But any such basis can still be rescaled or rotated within $\Lambda$. Tweaking these finer details, we find it convenient to take $a_1, b_1$ to be the first two
rows of the matrix
\begin{equation}\label{Lbasis}
L = \frac{1}{2\sqrt{3}}\left[\begin{array}{cccc}
\sqrt{3}& -1 &1 & -(2+\sqrt{3})\\
-1 & 2+\sqrt{3} & \sqrt{3}& -1 \\
-1 & -\sqrt{3} & 2+\sqrt{3} & 1\\
2+\sqrt{3} & 1 & 1 & \sqrt{3}
\end{array}
\right]\;.
\end{equation}
The last two rows $a_2,b_2$ of $L$ give a basis for the orthogonal complement
$\Lambda^{\perp}$.
Since we want $J$ to induce $90^{\circ}$ rotations in both $\Lambda$ and $\Lambda^{\perp}$, we
have
\begin{equation}\label{Jmat}
J = \frac{1}{\sqrt{3}}\left[\begin{array}{rrrr}
0&1&-1&-1\\-1&0&-1&1\\1&1&0&1\\1&-1&-1&0
\end{array}
\right]\;.
\end{equation}
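Two facts are easy to verify numerically: $M = \sqrt{3}\,J$ is an integer skew-symmetric matrix with $M^2 = -3I$ (so $J^2 = \zeta$), and $J$ carries the rows $a_1, a_2$ of $L$ in (\ref{Lbasis}) to $b_1, b_2$. The script below is our own sketch of both checks.

```python
from math import sqrt

# M = sqrt(3)*J is an integer matrix; skew-symmetry plus M*M = -3I give J^2 = -I.
M = [[0, 1, -1, -1],
     [-1, 0, -1, 1],
     [1, 1, 0, 1],
     [1, -1, -1, 0]]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

assert all(M[i][j] == -M[j][i] for i in range(4) for j in range(4))
assert mul(M, M) == [[-3 * (i == j) for j in range(4)] for i in range(4)]

# Floating-point check that a_1 J = b_1 and a_2 J = b_2 for the rows of L:
s = sqrt(3)
L = [[x / (2 * s) for x in row] for row in
     [[s, -1, 1, -(2 + s)],
      [-1, 2 + s, s, -1],
      [-1, -s, 2 + s, 1],
      [2 + s, 1, 1, s]]]
J = [[x / s for x in row] for row in M]

def vecmat(x, a):
    return [sum(x[i] * a[i][j] for i in range(4)) for j in range(4)]

for a, b in [(L[0], L[1]), (L[2], L[3])]:
    assert all(abs(u - w) < 1e-12 for u, w in zip(vecmat(a, J), b))
```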
Notice that $a_1 J = b_1$ and $a_2 J = b_2$, so $\{a_1, a_2\}$ is a $\mathbb{C}$-basis
for $\mathbb{E}^4$; and the plane $\Lambda$ in Figure~\ref{cross} is just $z_2=0$ in the resulting complex
coordinates. The points in the configuration $8_3$ now have these complex coordinates:
\medskip
\begin{center}
\begin{tabular}{c|c}
Label & $(z_1,z_2)$ \\ \hline
$0$ & $( r , 1-\imath )$ \\ \hline
$1$ & $( -1+\imath , r )$ \\ \hline
$2$ & $( r\imath , -1-\imath )$ \\ \hline
$3$ & $( -1-\imath , -r\imath )$ \\ \hline
$4$ & $( -r , -1+\imath )$ \\ \hline
$5$ & $( 1-\imath , -r )$ \\ \hline
$6$ & $( -r\imath , 1+\imath )$ \\ \hline
$7$ & $( 1+\imath , r\imath )$
\end{tabular}
\end{center}
(Recall that $r = \sqrt{3} - 1$.) The first coordinates do give the points displayed in Figure~\ref{cross}.
The second coordinates describe the projection onto $\Lambda^{\perp}$ ($z_1=0$); there, the
labels on the inner and outer squares are suitably swapped.
A typical line in the configuration $8_3$, like that containing points $1,6,7$, has equation
$$ r(1-\imath) z_1 + 2 z_2 = 2 r (1+\imath)\;.$$
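As a quick check (our own, not the paper's), the three collinear points $1, 6, 7$ from the table do satisfy this equation:

```python
from math import sqrt

r = sqrt(3) - 1
# Complex coordinates (z1, z2) of the points labelled 1, 6 and 7:
points = [(-1 + 1j, r), (-r * 1j, 1 + 1j), (1 + 1j, r * 1j)]
for z1, z2 in points:
    # r(1-i) z1 + 2 z2 = 2r(1+i), up to floating-point error
    assert abs(r * (1 - 1j) * z1 + 2 * z2 - 2 * r * (1 + 1j)) < 1e-12
```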
After consulting \cite[Sections 10.6 and 11.2]{rcp}, we observe that the eight points are also the vertices of the regular complex polygon $3\{3\}3$. Its symmetry group (of unitary transformations on $\mathbb{C}^2$)
is the group $3[3]3$ with the presentation
\begin{equation}\label{333pres}
\langle \gamma_1, \gamma_2\,|\, \gamma_1^3 = 1,\; \gamma_1 \gamma_2 \gamma_1 = \gamma_2 \gamma_1 \gamma_2\rangle\;.
\end{equation}
In fact, this group of order $24$ is isomorphic to the \textit{binary tetrahedral} group $\langle 3,3,2\rangle$.
But in our context, we may identify it with the centralizer in $G$ of the structure matrix $J$. A bit of computation shows that this subgroup
of $G$ is generated by
$$ \gamma_1 = \rho_1\rho_2\rho_3\rho_2 = (1,1,1,1)\cdot(1,4,2)\;\mathrm{and}\; \gamma_2 = \rho_2\rho_0\rho_1\rho_0 =
(-1,1,-1,1)\cdot (1,2,3), $$
which do satisfy the relations in (\ref{333pres}).
\bigskip
\noindent\textbf{Acknowledgements}.
I want to thank Daniel Pellicer, both for his many geometrical ideas and
also for generously welcoming me to the
Centro de Ciencias Matem\'{a}ticas at UNAM (Morelia).
\bigskip
\bibliographystyle{siam}
\section{Introduction}
Suppose $(Y,{\mathbf X}^T)^T$ has a joint continuous distribution, where $Y \in {\mathbb R}$ denotes a univariate response and ${\mathbf X} \in {\mathbb R}^p$ a $p$-dimensional covariate vector. We assume that the dependence of $Y$ on ${\mathbf X}$ is modelled by
\begin{align}
Y = g({\mathbf B}^T {\mathbf X}) + \epsilon, \label{mod:basic}
\end{align}
where ${\mathbf X}$ is independent of $\epsilon$ and has positive definite variance-covariance matrix $\var({\mathbf X})=\Sigmabf_{\x}$, $\epsilon \in {\mathbb R}$ is a mean zero random variable with finite variance $\var(\epsilon)= E\left(\epsilon^2\right)=\eta^2$, $g$ is an unknown continuous non-constant function, and ${\mathbf B}= ({\mathbf b}_1, ..., {\mathbf b}_k) \in {\mathbb R}^{p \times k}$ has rank $k \leq p$.
Model \eqref{mod:basic} states that
\begin{align}\label{meanSDR}
\mathbb{E}(Y\mid {\mathbf X})&= \mathbb{E}(Y \mid {\mathbf B}^T {\mathbf X})
\end{align}
and requires that the first conditional moment $\mathbb{E}(Y \mid {\mathbf X})=g({\mathbf B}^T {\mathbf X})$ contain the entirety of the information in ${\mathbf X}$ about $Y$ and that it be captured by ${\mathbf B}^T {\mathbf X}$, so that
$F(Y\mid {\mathbf X})=F(Y \mid {\mathbf B}^T {\mathbf X})$,
where $F(\cdot \mid \cdot)$ denotes the conditional cumulative distribution function (cdf) of the first given the second argument. That is, $Y$ is statistically independent of ${\mathbf X}$ when ${\mathbf B}^T {\mathbf X}$ is given and replacing ${\mathbf X}$ by ${\mathbf B}^T {\mathbf X}$ induces no loss of information for the regression of $Y$ on ${\mathbf X}$.
Since only $\operatorname{span}\{ {\mathbf B} \}$, the column space of ${\mathbf B}$, is identifiable, identifying it
suffices for recovering the \textit{sufficient reduction} of ${\mathbf X}$ for the regression of $Y$ on ${\mathbf X}$.
We assume, without loss of generality, ${\mathbf B}$ is semi-orthogonal, i.e., ${\mathbf B}^T {\mathbf B} = {\mathbf I}_k$, since a change of coordinate system by an orthogonal transformation does not alter model~\eqref{meanSDR}.
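For intuition, data from model \eqref{mod:basic} with $p = 3$ and $k = 1$ can be simulated as follows; the particular link $g$ and direction ${\mathbf B}$ below are our own illustrative choices, not from the paper.

```python
import random
from math import sqrt, exp

rng = random.Random(1)
p, k = 3, 1
B = [1 / sqrt(3)] * p                      # semi-orthogonal: B^T B = I_1

def g(t):
    # an arbitrary continuous, non-constant link function (illustrative)
    return t + exp(-t * t)

def draw():
    x = [rng.gauss(0, 1) for _ in range(p)]       # X ~ N(0, I_3)
    bx = sum(bi * xi for bi, xi in zip(B, x))     # the reduction B^T X
    eps = rng.gauss(0, 0.1)                       # noise, independent of X
    return x, g(bx) + eps                         # Y = g(B^T X) + eps

data = [draw() for _ in range(100)]
assert abs(sum(b * b for b in B) - 1.0) < 1e-12   # B^T B = 1
assert len(data) == 100 and all(len(x) == p for x, _ in data)
```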
For $q \leq p$, let
\begin{equation}\label{Smanifold}
{\mathcal S}(p,q) = \{{\mathbf V} \in {\mathbb R}^{p \times q}: {\mathbf V}^T{\mathbf V} = {\mathbf I}_q\},
\end{equation}
denote the Stiefel manifold, which comprises all $p \times q$ matrices with orthonormal columns. ${\mathcal S}(p,q)$ is compact and $\dim({\mathcal S}(p,q)) = pq - q(q+1)/2$ [see \cite{Boothby} and Section 2.1 of \cite{Tagare2011}]. Further let
\begin{equation}\label{Grassman_def}
Gr(p,q) = {\mathcal S}(p,q)/{\mathcal S}(q,q)
\end{equation}
denote the Grassmann manifold, i.e., the set of all $q$-dimensional subspaces of ${\mathbb R}^p$, which is exactly the quotient space of ${\mathcal S}(p,q)$ by the group ${\mathcal S}(q,q)$ of all $q \times q$ orthogonal matrices; that is, the basis of a linear subspace is unique only up to orthogonal transformations.
The fact that only $\operatorname{span}\{{\mathbf B}\}$ is identifiable, can be expressed through the Grassmann manifold $Gr(p,q)$ in \eqref{Grassman_def}. The goal of sufficient dimension reduction in model \eqref{mod:basic} is to find a subspace ${\mathbf M} \in Gr(p,k)$ such that any basis ${\mathbf B} \in {\mathcal S}(p,k)$ of ${\mathbf M}$ fulfills \eqref{mod:basic} or equivalently \eqref{meanSDR}.
Finding sufficient reductions of the predictors to replace them in regression and classification without loss of information is called \textit{sufficient dimension reduction}
\cite{Cook1998}. The first split in the sufficient dimension reduction taxonomy occurs between likelihood-based and non-likelihood-based methods. The former, which were developed more recently \cite{Cook2007, CookForzani2008, CookForzani2009, BuraForzani2015, BuraDuarteForzani2016}, assume knowledge either of the joint family of distributions of $(Y,{\mathbf X}^T)^T$, or of the conditional family of distributions of ${\mathbf X} \mid Y$. The latter is the most researched branch of sufficient dimension reduction and comprises three classes of methods: inverse regression based, semi-parametric, and nonparametric. Reviews of the first two classes can be found in \cite{AdragniCook2009,MaZhu2013, Li2018}.
In this paper we present the \textit{conditional variance estimation}, which falls in the class of nonparametric methods. The estimators in this class
minimize a criterion that describes the fit of the dimension reduction model \eqref{meanSDR} under \eqref{mod:basic} to the observed data. Since the criterion involves unknown distributions or regression functions, nonparametric estimation is used to recover $\operatorname{span}\{{\mathbf B}\}$.
Statistical approaches to identify ${\mathbf B}$ in \eqref{meanSDR} include ordinary least squares and nonparametric multiple index models \cite{multiIndexModel}. The least squares estimator, $\Sigmabf_{\x}^{-1} \mbox{cov}({\mathbf X},Y)$, always falls in $\operatorname{span}\{{\mathbf B}\}$ \cite[Th. 8.3]{Li2018}. Principal Hessian Directions \cite{Li1992} was the first sufficient dimension reduction estimator to target $\operatorname{span}\{{\mathbf B}\}$ in \eqref{meanSDR}. Its main disadvantage is that it requires the so called \textit{linearity} and \textit{constant variance} conditions on the marginal distribution of ${\mathbf X}$. Its relaxation, Iterative Hessian Transformation \cite{CookLi2004}, still requires the linearity condition in order to recover vectors in $\operatorname{span}\{{\mathbf B}\}$.
The most competitive nonparametric sufficient dimension reduction method up to now has been \textit{minimum average variance estimation} (MAVE, \cite{Xiaetal2002}). It assumes model \eqref{mod:basic},
bounded fourth derivative covariate density, and existence of continuous bounded third derivatives for $g$. It uses a local first order approximation of $g$ in \eqref{mod:basic} and minimizes the expected conditional variance of the response given ${\mathbf B}^T {\mathbf X}$.
The \textit{conditional variance estimator} also targets and recovers $\operatorname{span}\{{\mathbf B}\}$ in models \eqref{mod:basic} and \eqref{meanSDR}. The objective function is based on the intuition that the directions in the predictor space that capture the dependence of $Y$ on ${\mathbf X}$ should exhibit significantly higher variation in $Y$ than the directions along which $Y$ varies little. The \textit{conditional variance estimator} is fully data-driven and performs better than or on par with \textit{minimum average variance estimation} in simulations.
The \textit{conditional variance estimator} differs from other approaches, including MAVE, in that it targets only $\operatorname{span}\{{\mathbf B}\}$ and does not require an explicit form or estimation of the link function $g$. As a result, it requires weaker assumptions on the smoothness of $g$.
\section{Motivation}\label{motivation}
Let $(\Omega ,{\mathcal {F}},P)$ be a probability space, and $ {\mathbf X}:\Omega \rightarrow {\mathbb R}^p$ be a random vector with a continuous probability density function $f_{{\mathbf X}}$; denote its support by $\mbox{supp}(f_{{\mathbf X}})$. Throughout, $\|\cdot\|$ denotes the Frobenius norm for matrices and the Euclidean norm for vectors, and the scalar product is the Euclidean one. For any matrix ${\mathbf M}$, or linear subspace ${\mathbf M}$, we denote by $\mathbf{P}_{{\mathbf M}}$ the projection matrix onto the column space of the matrix or onto the subspace, i.e., $\mathbf{P}_{{\mathbf M}} = {\mathbf M}({\mathbf M}^T {\mathbf M})^{-1} {\mathbf M}^T \in {\mathbb R}^{p \times p}$ for ${\mathbf M} \in {\mathbb R}^{p \times q}$. For any ${\mathbf V} \in {\mathcal S}(p,q)$, defined in \eqref{Smanifold}, we generically denote a basis of the orthogonal complement of its column space $\operatorname{span}\{{\mathbf V}\}$ by ${\mathbf U}$. That is, ${\mathbf U} \in {\mathcal S}(p,p-q)$ such that $\operatorname{span}\{{\mathbf V}\} \perp \operatorname{span}\{{\mathbf U}\}$ and $\operatorname{span}\{{\mathbf V}\} \oplus \operatorname{span}\{{\mathbf U}\} = {\mathbb R}^p$, i.e., ${\mathbf U}^T{\mathbf V} = {\bf 0} \in {\mathbb R}^{(p-q) \times q}$ and ${\mathbf U}^T{\mathbf U} = {\mathbf I}_{p-q}$. For any ${\mathbf x}, \mathbf s_0 \in {\mathbb R}^p$ we can always write
\begin{equation}\label{ortho_decomp}
{\mathbf x} = \mathbf s_0 + \mathbf{P}_{{\mathbf V}} ({\mathbf x} - \mathbf s_0) + \mathbf{P}_{{\mathbf U}} ({\mathbf x} - \mathbf s_0) = \mathbf s_0 + {\mathbf V}{\mathbf r}_1 + {\mathbf U}{\mathbf r}_2
\end{equation}
where ${\mathbf r}_1 = {\mathbf V}^T({\mathbf x}-\mathbf s_0) \in {\mathbb R}^{q}, {\mathbf r}_2 = {\mathbf U}^T({\mathbf x}-\mathbf s_0) \in {\mathbb R}^{p-q}$.
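The decomposition \eqref{ortho_decomp} is easy to verify numerically. The following sketch (in Python, with an illustrative random semi-orthogonal ${\mathbf V}$ obtained from a QR factorization; names are ours, not from any reference implementation) checks that the coordinates ${\mathbf r}_1, {\mathbf r}_2$ reassemble ${\mathbf x}$:

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 5, 2

# semi-orthogonal V: Q-factor of a Gaussian p x q matrix
V, _ = np.linalg.qr(rng.standard_normal((p, q)))
# U: orthonormal basis of span{V}^perp from the complete QR of V
Qc, _ = np.linalg.qr(V, mode="complete")
U = Qc[:, q:]

x = rng.standard_normal(p)
s0 = rng.standard_normal(p)
r1 = V.T @ (x - s0)                      # coordinates in span{V}
r2 = U.T @ (x - s0)                      # coordinates in span{V}^perp

# x = s0 + V r1 + U r2, up to floating-point error
assert np.allclose(x, s0 + V @ r1 + U @ r2)
```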
In the sequel, we refer to the following assumptions as needed; the proofs of the theorems are presented in the Appendix.
\begin{assumption1a}
Model $Y = g({\mathbf B}^T{\mathbf X}) + \epsilon$
holds with $Y \in {\mathbb R}$, $g:{\mathbb R}^k \to {\mathbb R}$ nonconstant in all of its arguments, ${\mathbf B}= ({\mathbf b}_1, \ldots, {\mathbf b}_k) \in {\mathbb R}^{p \times k}$ of rank $k \leq p$, ${\mathbf X} \in {\mathbb R}^p$ independent of $\epsilon$, $ \var({\mathbf X}) = \Sigmabf_{\x} $ positive definite, $\mathbb{E}(\epsilon)=0$, and $\var(\epsilon)= \eta^2 < \infty$.
\end{assumption1a}
\begin{assumption2a}
The link function $g$ and the density $f_{\mathbf X} : {\mathbb R}^p \to [0,\infty)$ of ${\mathbf X}$ are twice continuously differentiable.
\end{assumption2a}
\begin{assumption3a}\label{A3}
$\mathbb{E}(|Y|^8) < \infty$.
\end{assumption3a}
\begin{assumption4a}\label{A4}
$\text{supp}(f_{\mathbf X})$ is compact.
\end{assumption4a}
\begin{remark}
Assumption (A.4) is not as restrictive as it might seem. \cite{CompactAssumption} showed in Proposition 11 that there is a compact set $\mathcal{S} \subset {\mathbb R}^p$ such that the mean subspace of model \eqref{mod:basic} is the same as the mean subspace of $Y = g({\mathbf B}^T{\mathbf X}_{|\mathcal{S}}) + \epsilon$, where ${\mathbf X}_{|\mathcal{S}} = {\mathbf X} 1_{\{{\mathbf X} \in \mathcal{S}\}}$ and $1_A$ is the indicator function of $A$. Further $\mathcal{S}$ can be assumed to be an ellipsoid and for all $\widetilde{\mathcal{S}} \supseteq \mathcal{S}$ the same assertion holds true.
\end{remark}
\begin{definition}
For $q \leq p \in {\mathbb N}$ and any ${\mathbf V} \in {\mathcal S}(p,q)$, we define
\begin{equation}\label{Lvs}
\tilde{L}({\mathbf V}, \mathbf s_0) = \mathbb{V}\mathrm{ar}(Y\mid{\mathbf X} \in \mathbf s_0 + \operatorname{span}\{{\mathbf V}\}),
\end{equation}
where $\mathbf s_0 \in {\mathbb R}^p$ is a shifting point.
\end{definition}
\begin{definition}
For ${\mathbf V} \in {\mathcal S}(p,q)$, we define the objective function,
\begin{equation}\label{objective}
L({\mathbf V})= \int_{{\mathbb R}^p}\tilde{L}({\mathbf V},{\mathbf x})f_{{\mathbf X}}({\mathbf x})d{\mathbf x} = \mathbb{E}\left(\tilde{L}({\mathbf V},{\mathbf X})\right).
\end{equation}
\end{definition}
$L({\mathbf V})$ in \eqref{objective} is the objective function for the estimator we propose for $\operatorname{span}\{{\mathbf B}\}$ in \eqref{mod:basic}, and Theorem~\ref{thm1} provides the statistical motivation for it. First we show that both population-based functions \eqref{Lvs} and \eqref{objective} are well defined.
Let ${\mathbf X}$ be a $p$-dimensional continuous random vector with density $f_{\mathbf X}({\mathbf x})$, let $\mathbf s_0 \in \text{supp}(f_{\mathbf X}) \subset {\mathbb R}^p$, and let ${\mathbf V}$ belong to the Stiefel manifold ${\mathcal S}(p,q)$ defined in \eqref{Smanifold}.
The function \begin{gather}\label{density}
f_{{\mathbf X}\mid{\mathbf X} \in \mathbf s_0 +\operatorname{span}\{{\mathbf V}\}}({\mathbf r}_1) =
\frac{f_{\mathbf X}(\mathbf s_0 + {\mathbf V}{\mathbf r}_1)}{\int_{{\mathbb R}^q}f_{\mathbf X}(\mathbf s_0 + {\mathbf V}{\mathbf r})d{\mathbf r}}
\end{gather}
is a proper conditional density of ${\mathbf X}$ concentrated on the affine subspace $\mathbf s_0 + \operatorname{span}\{{\mathbf V}\}$, in the sense of regular conditional probability \cite{Leaoetal2004}, under assumption (A.2). The detailed justification is given in the Appendix, where we also show that under assumptions (A.1), (A.2) and (A.4), $\tilde{L}({\mathbf V}, \mathbf s_0)$ in \eqref{Lvs} and $L({\mathbf V})$ in \eqref{objective} are well defined and continuous. Moreover,
\begin{align}\label{LtildeVs0}
\tilde{L}({\mathbf V},\mathbf s_0) = \mu_2({\mathbf V},\mathbf s_0) - \mu_1({\mathbf V},\mathbf s_0)^2 + \eta^2
\end{align}
where
\begin{align}\label{mu_l}
\mu_l({\mathbf V},\mathbf s_0) &= \int_{{\mathbb R}^q} g({\mathbf B}^T\mathbf s_0 + {\mathbf B}^T{\mathbf V}{\mathbf r}_1)^l\frac{f_{\mathbf X}(\mathbf s_0 + {\mathbf V}{\mathbf r}_1)}{\int_{{\mathbb R}^q}f_{\mathbf X}(\mathbf s_0 + {\mathbf V}{\mathbf r})d{\mathbf r}}d{\mathbf r}_1 = \frac{t^{(l)}({\mathbf V},\mathbf s_0)}{t^{(0)}({\mathbf V},\mathbf s_0)}
\end{align}
with \begin{align}\label{tl}
t^{(l)}({\mathbf V},\mathbf s_0) &= \int_{{\mathbb R}^q} g({\mathbf B}^T\mathbf s_0 + {\mathbf B}^T{\mathbf V}{\mathbf r}_1)^l f_{\mathbf X}(\mathbf s_0 + {\mathbf V}{\mathbf r}_1)d{\mathbf r}_1.
\end{align}
\begin{thm}\label{thm1}
Suppose ${\mathbf V} = (\mathbf v_1,...,\mathbf v_q) \in {\mathcal S}(p,q)$
and $q\in \{1,\ldots,p\}$. Under assumptions (A.1), (A.2) and (A.4),
\begin{itemize}
\item[(a)] For all $\mathbf s_0 \in {\mathbb R}^p $ and ${\mathbf V} $ such that there exists $u \in \{1,\ldots,q\}$ with $\mathbf v_u \in \operatorname{span}\{{\mathbf B}\}$, $\tilde{L}({\mathbf V},\mathbf s_0) > \mathbb{V}\mathrm{ar}(\epsilon)= \eta^2$ and $L({\mathbf V}) > \eta^2$.
\item[(b)] For all $\mathbf s_0 \in {\mathbb R}^p$ and $\operatorname{span}\{{\mathbf V}\} \perp \operatorname{span}\{{\mathbf B}\}$, $\tilde{L}({\mathbf V},\mathbf s_0) = \eta^2$ and $L({\mathbf V}) = \eta^2$.
\end{itemize}
\end{thm}
\begin{proof}
Let $\mathbf s_0 \in {\mathbb R}^p$ and ${\mathbf V} = (\mathbf v_1,...,\mathbf v_q) \in {\mathbb R}^{p \times q}$ so that $\mathbf v_u \in \operatorname{span}\{{\mathbf B}\}$ for some $u \in \{1,...,q\}$.
To obtain (a), observe ${\mathbf X} \in \mathbf s_0 +\operatorname{span}\{{\mathbf V}\} \Longleftrightarrow {\mathbf X} = \mathbf s_0 + \mathbf{P}_{{\mathbf V}}({\mathbf X} - \mathbf s_0)$, and using \eqref{Lvs} yields
\begin{align}
\tilde{L}({\mathbf V},\mathbf s_0)
&= \mathbb{V}\mathrm{ar}\left(g({\mathbf B}^T{\mathbf X})\mid{\mathbf X} = \mathbf s_0 + {\mathbf V}\V^T({\mathbf X}-\mathbf s_0)\right) + \mathbb{V}\mathrm{ar}(\epsilon) \notag \\
&= \mathbb{V}\mathrm{ar}\left(g({\mathbf B}^T\mathbf s_0 + {\mathbf B}^T{\mathbf V}\V^T({\mathbf X}-\mathbf s_0))\mid{\mathbf X} = \mathbf s_0 + {\mathbf V}\V^T({\mathbf X}-\mathbf s_0)\right) + \eta^2 >\eta^2 \label{1stterm}
\end{align}
since ${\mathbf B}^T{\mathbf V}\V^T({\mathbf X}-\mathbf s_0) \neq 0$ with probability 1, so that the variance term in \eqref{1stterm} is positive. For ${\mathbf V}$ with $\operatorname{span}\{{\mathbf V}\} \perp \operatorname{span}\{{\mathbf B}\}$, ${\mathbf B}^T{\mathbf V}\V^T({\mathbf X}-\mathbf s_0) = 0$ and (b) follows. Since $\mathbf s_0$ is arbitrary yet constant, the statements for $L({\mathbf V})$ follow.
\end{proof}
Theorem~\ref{thm1} also has an intuitive geometrical interpretation for the proposed method. If ${\mathbf X}$ is not random, the deterministic function $Y = g({\mathbf B}^T{\mathbf X})$ is constant in all directions orthogonal to ${\mathbf B}$ and varies in all other directions. If randomness is introduced, as in model \eqref{mod:basic},
then the variation in $Y$ stems only from $\epsilon$ in all directions orthogonal to ${\mathbf B}$. In all other directions the variation is the sum of the variation of $\epsilon$ and that of $g({\mathbf B}^T{\mathbf X})$.
In consequence, the objective function \eqref{objective} captures the variation of $Y$ as ${\mathbf X}$ varies in the column space of ${\mathbf V}$ and is minimized in the directions orthogonal to ${\mathbf B}$.
\subsection{Conditional Variance Estimator (CVE)}
In Section \ref{motivation} we showed that the objective function $L({\mathbf V})$ in \eqref{objective} is well defined and continuous. Let
\begin{equation}\label{optim}
{\mathbf V}_q = \operatorname{argmin}_{{\mathbf V} \in {\mathcal S}(p,q)}L({\mathbf V}).
\end{equation}
${\mathbf V}_q$ is well defined as the minimizer of a continuous function over the compact set ${\mathcal S}(p,q)$. Nevertheless, ${\mathbf V}_{q}$ is not unique since for any orthogonal ${\mathbf O} \in {\mathbb R}^{q \times q}$, i.e., ${\mathbf O}\Ob^T = {\mathbf I}_{q}$, $L({\mathbf V} {\mathbf O}) = L({\mathbf V})$, as $L({\mathbf V})$ depends on ${\mathbf V}$ only through $\operatorname{span}\{{\mathbf V}\}$. It is, however, a unique minimizer over the Grassmann manifold $Gr(p,q)$ in \eqref{Grassman_def}. To see this, suppose ${\mathbf V} \in {\mathcal S}(p,q)$ is an arbitrary basis of a subspace ${\mathbf M} \in Gr(p,q)$. We can identify ${\mathbf M}$ through the projection $\mathbf{P}_{\mathbf M} = {\mathbf V}\V^T$.
By \eqref{ortho_decomp} we write ${\mathbf x} = {\mathbf V} {\mathbf r}_1 + {\mathbf U} {\mathbf r}_2$. By the Fubini--Tonelli theorem we obtain \begin{align}\label{Grassman}
\tilde{t}^{(l)}(\mathbf{P}_{\mathbf M},\mathbf s_0) &= \int_{\text{supp}(f_{\mathbf X})} g({\mathbf B}^T\mathbf s_0 + {\mathbf B}^T \mathbf{P}_{\mathbf M} {\mathbf x})^l f_{\mathbf X}(\mathbf s_0 + \mathbf{P}_{\mathbf M} {\mathbf x})d{\mathbf x} \\&= t^{(l)}({\mathbf V},\mathbf s_0) \int_{\text{supp}(f_{\mathbf X})\cap {\mathbb R}^{p-q} }d{\mathbf r}_2. \notag
\end{align}
Therefore $\tilde{t}^{(l)}(\mathbf{P}_{\mathbf M},\mathbf s_0)/\tilde{t}^{(0)}(\mathbf{P}_{\mathbf M},\mathbf s_0) = t^{(l)}({\mathbf V},\mathbf s_0)/t^{(0)}({\mathbf V},\mathbf s_0)$, and $\mu_l(\cdot,\mathbf s_0)$ in \eqref{mu_l} can also be viewed as a function from $Gr(p,q)$ to ${\mathbb R}$. If the optimization~\eqref{optim} is carried out over $Gr(p,q)$, the objective function \eqref{objective} has a unique minimum at $\operatorname{span}\{{\mathbf B}\}^\perp$ by Theorem~\ref{thm1}. Therefore ${\mathbf B}$ itself is not uniquely identifiable, but $\operatorname{span}\{{\mathbf B}\}$ is.
Corollary~\ref{cor1} follows directly from Theorem~\ref{thm1} and provides the means for identifying the linear projections of the predictors satisfying \eqref{mod:basic}.
\begin{cor}\label{cor1}
Under assumptions (A.1), (A.2), and (A.4), the solution ${\mathbf V}_q$ of the optimization problem \eqref{optim} is well defined. Let $k=\dim(\operatorname{span}\{{\mathbf B}\})$ and $q=p-k$. Then,
\begin{enumerate}
\item[(a)] $\operatorname{span}\{{\mathbf V}_{q}\} = \operatorname{span}\{{\mathbf B}\}^\perp$
\item[(b)] $\operatorname{span}\{{\mathbf V}_{q}\}^\perp = \operatorname{span}\{{\mathbf B}\}$ \label{est_equ}
\end{enumerate}
\end{cor}
We next define the novel estimator of the sufficient reduction space $\operatorname{span}\{{\mathbf B}\}$ in \eqref{mod:basic}, which is motivated by Theorem~\ref{thm1}. Corollary~\ref{cor1}(b) serves as the estimating equation for the conditional variance estimator at the population level.
\begin{definition}
The \textbf{Conditional Variance Estimator} is defined to be any basis ${{\mathbf B}}_{p-q}$ of $\operatorname{span}\{{{\mathbf V}}_{q}\}^\perp$. That is, the CVE of ${\mathbf B}$ is any ${\mathbf B}_{p-q}$ such that
\begin{equation}\label{CVE}
\operatorname{span}\{{{\mathbf B}}_{p-q}\} = \operatorname{span}\{{{\mathbf V}}_{q}\}^\perp
\end{equation}
\end{definition}
When $q=p-k$, where $k=\mbox{rank}({\mathbf B})$ in \eqref{mod:basic}, the CVE recovers the population $\operatorname{span}\{{\mathbf B}\}$.
Alternatively, we can also target ${\mathbf B}$ directly by maximizing the objective function $L({\mathbf V})$. The downside of this approach is that ${\mathbf X}$ either needs to be standardized, or the conditioning argument needs to be changed to
${\mathbf X} = \mathbf s_0 + \mathbf{P}_{\Sigmabf_{\x}^{-1}(\operatorname{span}\{{\mathbf V}\})}({\mathbf X}-\mathbf s_0)$, where, for a positive definite matrix ${\mathbf M}$, $\mathbf{P}_{{\mathbf M}(\operatorname{span}\{{\mathbf V}\})}$ denotes the orthogonal projection onto $\operatorname{span}\{{\mathbf V}\}$ with respect to the inner product $\langle {\mathbf x},{\mathbf y} \rangle_{\mathbf M} = {\mathbf x}^T{\mathbf M} {\mathbf y}$.
In either case, the inversion of $\Sigmabf_{\x}$ is required. Our choice of targeting the orthogonal complement avoids the inversion of $\Sigmabf_{\x}$, and the estimation algorithm in Section~\ref{Optim} can be applied to regressions with $p > n$ or $p \approx n$, where $n$ denotes the sample size. Additionally, targeting the complement has computational advantages. The dimension of the search space $\operatorname{span}\{{{\mathbf V}}_{q}\}^\perp$ is $p-q$, which is smaller than the dimension of the direct target space in \eqref{CVE} when $q=p-k$ for small $k$, which is the appropriate setting in a dimension reduction context.
\section{Estimation}\label{estimation}
Assume $(Y_i,{\mathbf X}_i^T)^T$, $i=1,\ldots,n$, is an independent and identically distributed sample from model \eqref{mod:basic}. For ${\mathbf V} \in {\mathcal S}(p,q)$ and $\mathbf s_0 \in {\mathbb R}^p$, we define
\begin{align}
d_i({\mathbf V},\mathbf s_0)&= \|{\mathbf X}_i - \mathbf{P}_{\mathbf s_0 + \operatorname{span}\{{\mathbf V}\}}{\mathbf X}_i\|^2 = \|{\mathbf X}_i -\mathbf s_0\|^2 - \langle {\mathbf X}_i - \mathbf s_0,{\mathbf V}\V^T({\mathbf X}_i - \mathbf s_0)\rangle \notag\\
&= \| ({\mathbf I}_p - {\mathbf V}\V^T)({\mathbf X}_i - \mathbf s_0)\|^2 = \| \mathbf{P}_{{\mathbf U}}({\mathbf X}_i - \mathbf s_0)\|^2 \label{distance}
\end{align}
where $\langle \cdot, \cdot\rangle$ is the usual inner product in ${\mathbb R}^p$, $\mathbf{P}_{{\mathbf V}}={\mathbf V}\V^T$ and $\mathbf{P}_{{\mathbf U}}={\mathbf I}_p-\mathbf{P}_{{\mathbf V}}$, using the orthogonal decomposition given by \eqref{ortho_decomp}.
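The equality of the three expressions for $d_i({\mathbf V},\mathbf s_0)$ in \eqref{distance} can be checked directly; a minimal numerical sketch (Python, illustrative names, ${\mathbf V}$ semi-orthogonal):

```python
import numpy as np

rng = np.random.default_rng(1)
p, q, n = 6, 2, 4
V, _ = np.linalg.qr(rng.standard_normal((p, q)))  # semi-orthogonal V
X = rng.standard_normal((n, p))
s0 = rng.standard_normal(p)

P_V = V @ V.T                 # projection onto span{V}
P_U = np.eye(p) - P_V         # projection onto span{V}^perp

for Xi in X:
    # ||P_U (X_i - s0)||^2 ...
    d_proj = np.linalg.norm(P_U @ (Xi - s0)) ** 2
    # ... equals ||X_i - s0||^2 - <X_i - s0, V V^T (X_i - s0)>
    d_expand = (np.linalg.norm(Xi - s0) ** 2
                - (Xi - s0) @ (V @ V.T @ (Xi - s0)))
    assert np.isclose(d_proj, d_expand)
```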
Let $h_n \in {\mathbb R}_+$ be a sequence of bandwidths. We call the set ${\mathcal S}_{\mathbf s_0,{\mathbf V}}=\{{\mathbf x} \in {\mathbb R}^p: \|{\mathbf x} - \mathbf{P}_{\mathbf s_0 + \operatorname{span}\{{\mathbf V}\}}{\mathbf x}\|^2 \leq h_n\}$ a \textit{slice}, which depends on both the shifting point $\mathbf s_0$ and the matrix ${\mathbf V}$.
The bandwidth $h_n$ represents the squared width of a slice around the affine subspace $\mathbf s_0 + \operatorname{span}\{{\mathbf V}\}$ and fulfills the following assumptions.
\begin{assumptionH1}
For $n \to \infty$, $h_n \to 0$.
\end{assumptionH1}
\begin{assumptionH2}
For $n \to \infty$, $nh^{(p-q)/2}_n \to \infty$.
\end{assumptionH2}
\begin{remark}
To obtain the consistency of the proposed estimator, (H.2) will be strengthened to $\log(n)/(nh^{(p-q)/2}_n) \to 0$.
\end{remark}
Let $K$ be a function satisfying the following assumptions.
\begin{assumptionK1}
$K:[0,\infty) \rightarrow [0,\infty)$
is a nonincreasing and continuous
function, so that $|K(z)| \leq M_1$, with $\int_{{\mathbb R}^{q}} K(\|{\mathbf r}\|^2) d{\mathbf r} < \infty$ for $q \leq p-1$.
\end{assumptionK1}
\begin{assumptionK2}
There exist positive finite constants $L_1$ and $L_2$ such that the kernel $K$ satisfies one of the following:
\begin{itemize}
\item[(1)] $K(u) = 0$ for $|u| > L_2$, and for all $u, \tilde{u}$ it holds that $|K(u) - K(\tilde{u})| \leq L_1 |u - \tilde{u}|$;
\item[(2)] $K(u)$ is differentiable with $|\partial_u K(u)| \leq L_1$, and for some $\nu > 1$ it holds that $|\partial_u K(u)| \leq L_1 |u|^{-\nu}$ for $|u| > L_2$.
\end{itemize}
\end{assumptionK2}
Examples of functions that satisfy (K.1) and (K.2) include the Gaussian, $K(z) = c\exp(-z^2/2)$, the exponential, $K(z) = c\exp(-z)$, and the squared Epanechnikov kernel, $K(z) = c \max\{(1-z^2),0\}^2$ (i.e., polynomial kernels), where $c$ is a constant. The rectangular kernel, $K(z) = c\,1_{\{z\leq 1\}}$, does not fulfill the assumptions but is used below for intuitive explanations. A list of further kernel functions is given in \cite[Table 1]{Parzen1961}.
\subsection{\texorpdfstring{The estimator of $L({\mathbf V})$ and its uniform convergence}{The estimator of L(V) and its uniform convergence}}\label{LV.est}
\begin{definition}
For $i=1,\ldots,n$, we define
\begin{equation}
w_i({\mathbf V},\mathbf s_0) = \frac{K\left(\frac{d_i({\mathbf V},\mathbf s_0)}{h_n}\right)}{\sum_{j=1}^nK\left(\frac{d_j({\mathbf V},\mathbf s_0)}{h_n}\right)} \label{weights}
\end{equation}
\end{definition}
\begin{definition}
The sample based estimate of $\tilde{L}({\mathbf V},\mathbf s_0)$ is defined as
\begin{equation}
\tilde{L}_n({\mathbf V},\mathbf s_0) = \sum_{i=1}^n w_i({\mathbf V},\mathbf s_0)(Y_i - \bar{y}_1({\mathbf V},\mathbf s_0))^2 = \bar{y}_2({\mathbf V},\mathbf s_0) - \bar{y}_1({\mathbf V},\mathbf s_0)^2 \label{Ltilde}
\end{equation}
where $\bar{y}_l({\mathbf V},\mathbf s_0) = \sum_{i=1}^n w_i({\mathbf V},\mathbf s_0)Y^l_i$, $l=1,2$.
\end{definition}
\begin{definition}
The estimate of the objective function $L({\mathbf V})$ in \eqref{objective} is defined as
\begin{equation}
L_n({\mathbf V}) = \frac{1}{n} \sum_{i=1}^n \tilde{L}_n({\mathbf V},{\mathbf X}_i), \label{LN}
\end{equation}
where each data point ${\mathbf X}_i$ is a shifting point.
\end{definition}
To obtain insight as to the choice of $\tilde{L}_n({\mathbf V},\mathbf s_0)$ in \eqref{Ltilde}, let us consider the rectangular kernel, $K(z) = 1_{\{z \leq 1\}}$. In this case, $\tilde{L}_n({\mathbf V},\mathbf s_0)$ computes the empirical variance of the $Y_i$'s corresponding to the ${\mathbf X}_i$'s that are no further than $\sqrt{h_n}$ away from the affine space $\mathbf s_0 + \operatorname{span}\{{\mathbf V}\}$, i.e.,
$d_i({\mathbf V}, \mathbf s_0) = \|{\mathbf X}_i - \mathbf{P}_{\mathbf s_0 + \operatorname{span}\{{\mathbf V}\}}{\mathbf X}_i\|^2 \leq h_n$. If a smooth kernel is used, such as the Gaussian
in our simulation studies, then $\tilde{L}_n({\mathbf V},\mathbf s_0)$ is also smooth, which allows the computation of gradients required to solve the optimization problem.
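To make the construction \eqref{weights}--\eqref{LN} concrete, the following minimal sketch (Python; a Gaussian-type kernel and all variable names are illustrative choices of ours, not a reference implementation) computes $L_n({\mathbf V})$ and confirms that on simulated data from a single-index model it is smaller for a ${\mathbf V}$ spanning $\operatorname{span}\{{\mathbf B}\}^\perp$ than for a ${\mathbf V}$ containing a direction of ${\mathbf B}$:

```python
import numpy as np

def Ln(V, X, Y, h):
    """Sample objective L_n(V): average over all shifting points X_i of the
    kernel-weighted variance of Y in the slice around X_i + span{V}."""
    n, p = X.shape
    P_U = np.eye(p) - V @ V.T            # projection onto span{V}^perp
    total = 0.0
    for s0 in X:                         # every data point is a shifting point
        d = np.sum(((X - s0) @ P_U) ** 2, axis=1)   # d_i(V, s0)
        K = np.exp(-d / (2.0 * h))       # Gaussian-type kernel of d_i / h
        w = K / K.sum()                  # weights w_i(V, s0)
        y1, y2 = w @ Y, w @ Y**2         # \bar y_1 and \bar y_2
        total += y2 - y1**2              # \tilde L_n(V, s0)
    return total / n

rng = np.random.default_rng(2)
n, p = 200, 4
X = rng.standard_normal((n, p))
Y = X[:, 0] ** 2 + 0.1 * rng.standard_normal(n)   # B = e_1, so k = 1

V_good = np.eye(p)[:, 1:]    # spans span{B}^perp (q = p - 1)
V_bad = np.eye(p)[:, :3]     # contains the direction e_1
assert Ln(V_good, X, Y, h=0.5) < Ln(V_bad, X, Y, h=0.5)
```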
In Theorem \ref{thm_L_uniform} we state the conditions under which $L_n({\mathbf V})$ in \eqref{LN} converges uniformly to its population counterpart in \eqref{objective}. This result will lead to the consistency of our estimator.
\begin{thm}\label{thm_L_uniform}
Let $\tilde{a}^2_n = \log(n)/n$. Under
(A.1), (A.2), (A.3), (A.4), (K.1), (K.2), (H.1), $a_n^2 = \log(n)/(nh_n^{(p-q)/2}) = o(1)$, and $a_n/h_n^{(p-q)/2} = O(1)$,
\begin{equation} \label{thm5_eq}
\sup_{{\mathbf V} \in {\mathcal S}(p,q)}\left|L_n({\mathbf V}) - L({\mathbf V})\right| \to 0 \quad \text{in probability as } n \to \infty.
\end{equation}
\end{thm}
\subsection{The Conditional Variance Estimator}\label{CVE.est}
Next we define the estimator we propose for $\operatorname{span}\{{\mathbf B}\}$ in \eqref{mod:basic}. Our main theoretical result follows in Theorem~\ref{thm_consistency} which establishes the consistency of our estimator.
\begin{definition}\label{Vhat}
The sample based {\bf Conditional Variance Estimator} $\widehat{{\mathbf B}}_{p-q}$ is any basis of $\operatorname{span}\{\widehat{{\mathbf V}}_q\}^\perp$,
where $\widehat{{\mathbf V}}_q = \operatorname{argmin}_{{\mathbf V} \in {\mathcal S}(p,q)}L_n({\mathbf V})$.
\end{definition}
\begin{thm}\label{thm_consistency}
Under
(A.1), (A.2), (A.3), (A.4), (K.1), (K.2), (H.1), $a_n^2 = \log(n)/(nh_n^{(p-q)/2}) = o(1)$, and $a_n/h_n^{(p-q)/2} = O(1)$, with $q = p - k$,
$\operatorname{span}\{\widehat{{\mathbf B}}_{k}\}$ is a consistent estimator of $\operatorname{span}\{{\mathbf B}\}$ in model \eqref{mod:basic}; i.e.,
\begin{equation*}
\|\mathbf{P}_{\widehat{{\mathbf B}}_k} - \mathbf{P}_{{\mathbf B}}\| \to 0 \quad \text{in probability as } n \to \infty .
\end{equation*}
\end{thm}
\subsection{\texorpdfstring{Weighted estimation of $L({\mathbf V})$}{Weighted estimation of L(V)}}\label{weight_section}
The set of points $\{{\mathbf x} \in {\mathbb R}^p: \|{\mathbf x} - \mathbf{P}_{\mathbf s_0 + \operatorname{span}\{{\mathbf V}\}}{\mathbf x}\|^2 \leq h_n\}$ represents a \textit{slice} of ${\mathbb R}^p$ about the affine subspace $\mathbf s_0+ \operatorname{span}\{{\mathbf V}\}$.
In the estimation of $L({\mathbf V})$ two different weighting schemes are used:
\begin{itemize}
\item[(a)]
\textit{Within a slice}. The weights are defined in \eqref{weights} and are used to calculate \eqref{Ltilde}.
\item[(b)]
\textit{Between slices}. Equal weights $1/n$ are used to calculate \eqref{LN}.
\end{itemize}
The choice of weights is potentially influential. In particular, the between-slice weighting scheme can be refined by assigning more weight to slices that contain more points. This can be realized by altering \eqref{LN} to
\begin{align}
L^{(w)}_n({\mathbf V}) &= \sum_{i=1}^n \tilde{w}({\mathbf V},{\mathbf X}_i) \tilde{L}_n({\mathbf V},{\mathbf X}_i), \quad \mbox{with} \label{wLN}\\
\tilde{w}({\mathbf V},{\mathbf X}_i) &= \frac{\sum_{j=1}^n K(d_j({\mathbf V},{\mathbf X}_i)/h_n) - 1}{\sum_{l,u=1}^nK(d_l({\mathbf V},{\mathbf X}_u)/h_n) -n} = \frac{\sum_{j=1,j\neq i}^n K(d_j({\mathbf V},{\mathbf X}_i)/h_n) }{\sum_{l,u=1, l\neq u}^nK(d_l({\mathbf V},{\mathbf X}_u)/h_n)}\label{wtilde}
\end{align}
For example, if a rectangular kernel is used, $\sum_{j=1,j\neq i}^n K(d_j({\mathbf V},{\mathbf X}_i)/h_n)$ is the number of points ${\mathbf X}_j$ ($j \neq i$) in the slice corresponding to $\tilde{L}_n({\mathbf V},{\mathbf X}_i)$. A slice therefore receives higher weight if it contains more points; that is, the more observations are used for estimating $\tilde{L}({\mathbf V},{\mathbf X}_i)$, the better its accuracy. The denominator in \eqref{wtilde}
guarantees that the weights $\tilde{w}({\mathbf V},{\mathbf X}_i)$ sum to one.
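The weighted objective \eqref{wLN}--\eqref{wtilde} can be sketched as follows (Python; same Gaussian-type kernel and illustrative names as before, assuming $K(0)=1$ so that subtracting the $i$-th kernel value implements the leave-one-out numerator of \eqref{wtilde}):

```python
import numpy as np

def Ln_weighted(V, X, Y, h):
    """Weighted sample objective: slices containing more (kernel-weighted)
    points receive proportionally larger between-slice weights."""
    n, p = X.shape
    P_U = np.eye(p) - V @ V.T
    L_slices, masses = [], []
    for i, s0 in enumerate(X):
        d = np.sum(((X - s0) @ P_U) ** 2, axis=1)
        K = np.exp(-d / (2.0 * h))
        w = K / K.sum()
        y1, y2 = w @ Y, w @ Y**2
        L_slices.append(y2 - y1**2)        # \tilde L_n(V, X_i)
        masses.append(K.sum() - K[i])      # leave out the i-th point, K(0) = 1
    wt = np.array(masses) / np.sum(masses)  # \tilde w(V, X_i), sums to one
    return wt @ np.array(L_slices)

rng = np.random.default_rng(7)
X = rng.standard_normal((50, 4))
Y = X[:, 0] + 0.1 * rng.standard_normal(50)
V, _ = np.linalg.qr(rng.standard_normal((4, 2)))
val = Ln_weighted(V, X, Y, h=0.5)
assert val >= 0.0   # a convex combination of nonnegative slice variances
```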
\subsection{Bandwidth selection}
The performance of conditional variance estimation depends crucially on the choice of the bandwidth sequence $h_n$, which controls the bias-variance trade-off when the mean squared error is used as the measure of accuracy: the smaller $h_n$, the lower the bias and the higher the variance, and vice versa. Furthermore, the choice of $h_n$ depends on $p$, $q$, the sample size $n$, and the distribution of ${\mathbf X}$. We assume throughout that the bandwidth satisfies assumptions (H.1) and (H.2).
We will use Lemma~\ref{thmsigma} to derive a data-driven bandwidth we use in the computation of our estimator.
\begin{lemma}\label{thmsigma}
Let ${\mathbf M}$ be a $p \times p$ positive definite matrix. Then,
\begin{equation}
\frac{\mbox{tr}({\mathbf M})}{p}=\operatorname{argmin}_{s>0}\|{\mathbf M} - s {\mathbf I}_p\|\label{trace}
\end{equation}
\end{lemma}
\begin{proof}
Let ${\mathbf U}$ be the $p \times p$ matrix whose columns are the eigenvectors of ${\mathbf M}$ corresponding to its eigenvalues $\lambda_1\ge \ldots \ge \lambda_p>0$. Then, ${\mathbf M} = {\mathbf U} \mbox{diag}(\lambda_1,\ldots,\lambda_p) {\mathbf U}^T$, which implies $\|{\mathbf M} - s {\mathbf I}_p\|^2 = \|\mbox{diag}(\lambda_1,\ldots,\lambda_p) -s {\mathbf I}_p\|^2 = \sum_{l=1}^p (\lambda_l -s)^2$. Taking the derivative with respect to $s$, setting it to 0, and solving for $s$ yields \eqref{trace}, since $\sum_{l=1}^p \lambda_l = \mbox{tr}({\mathbf M})$.
\end{proof}
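Lemma~\ref{thmsigma} is also easy to verify numerically; the sketch below (Python, illustrative) compares the closed-form minimizer $\mbox{tr}({\mathbf M})/p$ against a brute-force grid search:

```python
import numpy as np

rng = np.random.default_rng(3)
p = 5
A = rng.standard_normal((p, p))
M = A @ A.T + p * np.eye(p)          # a positive definite p x p matrix

s_star = np.trace(M) / p             # closed-form minimizer from the lemma

# brute-force check: minimize ||M - s I_p|| over a fine grid of s > 0
grid = np.linspace(0.1, 3.0 * s_star, 4000)
errs = [np.linalg.norm(M - s * np.eye(p)) for s in grid]
s_grid = grid[int(np.argmin(errs))]

assert abs(s_grid - s_star) < 0.01 * s_star
```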
If the predictors are multivariate normal, Lemma~\ref{thmsigma} motivates approximating their joint density by that of $N(\mu_{\mathbf X}, \sigma^2 {\mathbf I}_p)$ with $\sigma^2 = \mbox{tr}(\Sigmabf_{\x})/p$. As a result, the bandwidth does not depend on ${\mathbf V}$, which leads to a rule for bandwidth selection, as follows.
Under ${\mathbf X} \sim N_p(\mu_{\mathbf X},\sigma^2 {\mathbf I}_p)$, $\widetilde{{\mathbf X}}_i = {\mathbf X}_i - {\mathbf X}_j \sim N_p(0, 2\sigma^2 {\mathbf I}_p)$ for $i \neq j$, where we suppress the dependence on $j$ for notational convenience. Since all data are used as shifting points, $d_i({\mathbf V},{\mathbf X}_j) = \|{\mathbf X}_i-{\mathbf X}_j\|^2 - ({\mathbf X}_i-{\mathbf X}_j)^T{\mathbf V}\V^T({\mathbf X}_i-{\mathbf X}_j) = \|\widetilde{{\mathbf X}}_i\|^2 - \widetilde{{\mathbf X}}_i^T{\mathbf V}\V^T\widetilde{{\mathbf X}}_i$. Let
\begin{align}
\text{nObs} &= \mathbb{E}\left(\#\{i\in\{1,...,n\}: \widetilde{{\mathbf X}}_i \in \operatorname{span}_{h}\{{\mathbf V}\}\}\right) \notag\\
&= 1 + (n-1)\mathbb{ P}(d_1({\mathbf V},{\mathbf X}_2) \leq h) = 1 + (n-1)\mathbb{ P}(\|\widetilde{{\mathbf X}}\|^2 - \widetilde{{\mathbf X}}^T{\mathbf V}\V^T\widetilde{{\mathbf X}} \leq h) \label{nobs}
\end{align}
where $\operatorname{span}_{h}\{{\mathbf V}\} = \{{\mathbf x} \in {\mathbb R}^p: \|{\mathbf x} - \mathbf{P}_{\operatorname{span}\{{\mathbf V}\}}{\mathbf x}\|^2\leq h\}$ and $\widetilde{{\mathbf X}} = {\mathbf X} - {\mathbf X}^*$, with ${\mathbf X}^*$ an independent copy of ${\mathbf X}$.
$\text{nObs}$ is the expected number of points in a slice.
Given a user-specified value for $\text{nObs}$, $h$ is the solution to \eqref{nobs}.
Let ${\mathbf x} \in {\mathbb R}^p$. For any ${\mathbf V} \in {\mathcal S}(p,q)$ in \eqref{Smanifold}, there exists an orthonormal basis ${\mathbf U} \in {\mathbb R}^{p \times (p-q)}$ of $\operatorname{span}\{{\mathbf V}\}^\perp$ such that
${\mathbf x} = {\mathbf V}{\mathbf r}_1 + {\mathbf U}{\mathbf r}_2$,
by \eqref{ortho_decomp}.
Then, $\widetilde{{\mathbf X}} = {\mathbf V}{\mathbf R}_1 + {\mathbf U}{\mathbf R}_2$, with ${\mathbf R}_1 = {\mathbf V}^T\widetilde{{\mathbf X}} \sim N(0,2\sigma^2{\mathbf I}_q), {\mathbf R}_2 = {\mathbf U}^T\widetilde{{\mathbf X}} \sim N(0,2\sigma^2{\mathbf I}_{p-q})$, and $\widetilde{{\mathbf X}}^T{\mathbf V}\V^T\widetilde{{\mathbf X}} = \|{\mathbf R}_1\|^2$ and $\|\widetilde{{\mathbf X}}\|^2 = \|{\mathbf R}_1\|^2 + \|{\mathbf R}_2\|^2$. Therefore,
\begin{align}\label{chi}
\mathbb{ P}\left(\|\widetilde{{\mathbf X}}\|^2 - \widetilde{{\mathbf X}}^T{\mathbf V}\V^T\widetilde{{\mathbf X}} \leq h\right) = \mathbb{ P}(\|{\mathbf R}_2\|^2 \leq h) = \chi_{p-q}\left(\frac{h}{2\sigma^2}\right),
\end{align}
where $\chi_{p-q}$ is the cumulative distribution function of a chi-squared random variable with $p-q$ degrees of freedom. Plugging \eqref{chi} into \eqref{nobs} gives
\begin{align}\label{nobs2}
\text{nObs} = 1 + (n-1)\chi_{p-q}\left(\frac{h}{2\sigma^2}\right).
\end{align}
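The chi-squared identity \eqref{chi} underlying \eqref{nobs2} can be checked by Monte Carlo; a sketch (Python with \texttt{scipy.stats.chi2}; the parameter values are arbitrary illustrations):

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(6)
p, q, sigma2, h = 5, 2, 1.5, 4.0
V, _ = np.linalg.qr(rng.standard_normal((p, q)))   # semi-orthogonal V

# draw Xtilde = X - X* ~ N(0, 2 sigma^2 I_p) and compute the squared
# distance of each draw from span{V}
Xt = np.sqrt(2.0 * sigma2) * rng.standard_normal((200_000, p))
d = np.sum(Xt**2, axis=1) - np.sum((Xt @ V) ** 2, axis=1)

mc = np.mean(d <= h)                              # Monte Carlo estimate
theory = chi2.cdf(h / (2.0 * sigma2), df=p - q)   # chi_{p-q}(h / (2 sigma^2))
assert abs(mc - theory) < 0.01
```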
Solving \eqref{nobs2} for $h$ and applying Lemma~\ref{thmsigma} yield
\begin{equation}\label{hn}
h_n(\text{nObs}) = \chi_{p-q}^{-1}\left(\frac{\text{nObs}-1}{n-1}\right) \frac{2\mbox{tr}(\widehat{\Sigma}_{{\mathbf x}})}{p},
\end{equation}
where $\widehat{\Sigma}_{{\mathbf x}}=\sum_{i} ({\mathbf X}_i -\bar{{\mathbf X}}) ({\mathbf X}_i -\bar{{\mathbf X}})^T/n$ and $\bar{{\mathbf X}} = \sum_i {\mathbf X}_i/n$.
To ensure that $h_n$ satisfies (H.1) and (H.2), a reasonable choice is to set $\text{nObs} = \gamma(n)$ for a function $\gamma(\cdot)$ with $\gamma(n) \to \infty$, ${\gamma(n)}/{n} \leq 1$ and ${\gamma(n)}/{n} \to 0$.
For example, $\text{nObs} = \gamma(n) = n^\beta$ with $\beta \in (0,1)$ can be used.
Alternatively, a plug-in bandwidth based on rules of thumb of the form $c\,s\,n^{-1/(4+k)}$ used in nonparametric density estimation, where $s$ is a scale estimate and $c$ a constant close to 1, such as Silverman's ($c=1.06$, $s=$ standard deviation) or Scott's ($c=1$, $s=$ standard deviation) [see \cite{Silverman86}], is
\begin{equation}
\label{bandwidth}
h_n = 1.2^2 \frac{2\mbox{tr}(\widehat{\Sigma}_{\mathbf x})}{p} \left(n^{-1/(4+p-q)} \right)^2.
\end{equation}
The term $2 \mbox{tr}(\widehat{\Sigma}_{\mathbf x})/p$ can be interpreted as the variance of ${\mathbf X}_i - {\mathbf X}_j$, and $p-q$ is the true dimension $k$.
We use 1.2 as $c$ based on empirical evidence from simulations.
Since both \eqref{hn} and \eqref{bandwidth} yield satisfactory results, we opted against cross validation for bandwidth selection because of the computational burden involved, and used the bandwidth in \eqref{bandwidth} in simulations and data analyses.
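A minimal sketch of the plug-in rule \eqref{bandwidth}, written in Python for illustration (the accompanying implementation is in \texttt{R}):

```python
import numpy as np

def plugin_bandwidth(X, q, c=1.2):
    # h_n = c^2 * (2 tr(Sigma_hat_x) / p) * (n^{-1/(4 + p - q)})^2,
    # with Sigma_hat_x the sample covariance of the rows of X (divisor n).
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    trace_sigma = np.trace(Xc.T @ Xc) / n
    return c ** 2 * (2.0 * trace_sigma / p) * n ** (-2.0 / (4 + p - q))

# Example with n = 100 observations of a p = 20 dimensional predictor, k = p - q = 1.
h = plugin_bandwidth(np.random.default_rng(1).standard_normal((100, 20)), q=19)
```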
\section{Optimization Algorithm}\label{Optim}
A Stiefel manifold optimization algorithm is used to obtain the solution of the sample version of the optimization problem \eqref{optim}. To calculate $\widehat{{\mathbf V}}_{q}$ in \eqref{Vhat}, a curvilinear search is carried out \cite{ZaiwenWen2012,Tagare2011}, which is similar to gradient descent.
First an arbitrary starting value ${\mathbf V}^{(0)}$ is selected by drawing a $p \times q$ matrix from the invariant measure; i.e., the distribution that corresponds to the uniform, on ${\mathcal S}(p,q)$, see \cite{StatisticsOnManifolds}. The $Q$-component of the \textsc{QR} decomposition of a $p \times q$ matrix with independent standard normal entries follows the invariant measure \cite{Chikuse1994}. The step-size $\tau > 0$, the step size reduction factor $\gamma \in (0,1)$, and tolerance $\text{tol} > 0$ are fixed at the outset.\\
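The starting-value construction described above can be sketched as follows; this is an illustrative Python translation, not the paper's \texttt{R} code.

```python
import numpy as np

def rand_stiefel(p, q, rng):
    # Draw from the invariant (uniform) measure on the Stiefel manifold S(p, q):
    # the Q-factor of a p x q matrix with iid standard normal entries.
    Q, R = np.linalg.qr(rng.standard_normal((p, q)))
    # Fix column signs so the draw does not depend on the QR sign convention.
    return Q * np.sign(np.diag(R))

V0 = rand_stiefel(20, 2, np.random.default_rng(0))  # V0.T @ V0 is the 2 x 2 identity
```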
{\centering
\begin{minipage}{1.1\linewidth}
\begin{algorithm}[H]\label{algo1}
\SetAlgoLined
\KwResult{${\mathbf V}^{(\text{end})}$}
Initialize:
${\mathbf V}^{(0)}$, $\tau = 1$, $\text{tol} = 10^{-3}$, $\gamma = 0.5$
$\text{error} = \text{tol} + 1$, $\text{maxit} = 50$, $\text{count}=0$;\\
\While{$\text{error} > \text{tol}$ $\text{and}$ $\text{count} \leq \text{maxit}$}{
\begin{itemize}
\item ${\mathbf G} = \nabla_{{\mathbf V}}L_n({\mathbf V}^{(j)}) \in {\mathbb R}^{p \times q}$, ${\bf W} = {\mathbf G} {\mathbf V}^T - {\mathbf V} {\mathbf G}^T$
\item ${\mathbf V}^{(j+1)} = ({\mathbf I}_p + \tau {\bf W})^{-1}({\mathbf I}_p - \tau {\bf W}){\mathbf V}^{(j)}$
\item $\text{error} = \|{\mathbf V}^{(j)}{\mathbf V}^{(j)T} - {\mathbf V}^{(j+1)}{\mathbf V}^{(j+1)T}\|/\sqrt{2q}$
\end{itemize}
\eIf{$L_n({\mathbf V}^{(j+1)}) > L_n({\mathbf V}^{(j)}) $}{
${\mathbf V}^{(j+1)} \leftarrow {\mathbf V}^{(j)}$; $\tau \leftarrow \tau \gamma$; $\text{error} \leftarrow \text{tol} + 1$
}{
$\text{count} \leftarrow \text{count} + 1$\\
$\tau \leftarrow \frac{\tau}{\gamma}$
}
}
\caption{Curvilinear search}
\end{algorithm}
\end{minipage}
\par
}
\medskip
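The core update of the curvilinear search is the Cayley-type transform in the algorithm above. A single step can be sketched in Python (illustrative only); since ${\bf W}$ is skew-symmetric, the transform is orthogonal and the iterate stays on the Stiefel manifold exactly.

```python
import numpy as np

def cayley_step(V, G, tau):
    # W = G V^T - V G^T is skew-symmetric, so (I + tau W)^{-1} (I - tau W)
    # is orthogonal and the update remains on the Stiefel manifold.
    p = V.shape[0]
    W = G @ V.T - V @ G.T
    I = np.eye(p)
    return np.linalg.solve(I + tau * W, (I - tau * W) @ V)

rng = np.random.default_rng(0)
V = np.linalg.qr(rng.standard_normal((20, 2)))[0]      # point on S(20, 2)
V_new = cayley_step(V, rng.standard_normal((20, 2)), tau=0.5)
```

In the full algorithm the step is accepted only if it decreases $L_n$; otherwise $\tau$ is shrunk by the factor $\gamma$.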
Under mild regularity conditions on the objective function, \cite{ZaiwenWen2012} showed that the sequence generated by the algorithm converges to a stationary point if the Armijo-Wolfe conditions \cite{ArmijoWolfe} are used for determining the stepsize $\tau$.
The Armijo-Wolfe conditions require evaluating the gradient for each candidate step size until one fulfilling the conditions is found and the step is accepted; that is, determining a single step size requires multiple gradient evaluations. Since the gradient computation incurs the highest computational cost for the conditional variance estimator, we use a simpler condition to determine the step size: we only require that the step decrease the objective function; otherwise the step size $\tau$ is decreased by the factor $\gamma \in (0,1)$. This simplified condition is computationally less expensive and exhibits the same behavior as the Armijo-Wolfe conditions in our simulations. Further, we capped the number of iterations at $\text{maxit} = 50$, since the algorithm converged in about 10 iterations in all our simulations.
The algorithm is repeated for $m$ arbitrary ${\mathbf V}^{(0)}$ starting values drawn from the invariant measure on ${\mathcal S}(p,q)$. Among those, the value at which $L_n$ in \eqref{LN} is minimal is selected as $\widehat{{\mathbf V}}_{q}$.
The algorithm requires the computation of the gradient of $L_n({\mathbf V})$ in \eqref{LN} or \eqref{wLN}. We compute the gradient of the objective function for the Gaussian kernel in Theorems~\ref{lemma-one} and~\ref{lemma-two}.
The Gaussian kernel is the default kernel we use in the implementation of the estimation algorithm in the \texttt{R} code that accompanies this manuscript.
\begin{thm}\label{lemma-one}
Let $K(z) = \exp{(-z^2/2)}$ be the Gaussian kernel. Then, the gradient of $\tilde{L}_n({\mathbf V},\mathbf s_0) $ in \eqref{Ltilde} is given by
\begin{align*}
\nabla_{{\mathbf V}}\tilde{L}_n({\mathbf V},\mathbf s_0) = \frac{1}{h_n^2}\sum_{i=1}^n (\tilde{L}_n({\mathbf V},\mathbf s_0) - (Y_i-\bar{y}_1({\mathbf V},\mathbf s_0))^2)w_id_i\nabla_{{\mathbf V}}d_i({\mathbf V},\mathbf s_0) \in {\mathbb R}^{p \times q},
\end{align*}
and the gradient of $L_n({\mathbf V})$ in \eqref{LN} is
\[
\nabla_{{\mathbf V}}L_n({\mathbf V}) = \frac{1}{n} \sum_{i=1}^n \nabla_{{\mathbf V}}\tilde{L}_n({\mathbf V},{\mathbf X}_i),
\]
with $w_i = {w}({\mathbf V},{\mathbf X}_i)$ in \eqref{weights}.
\end{thm}
The weighted version of conditional variance estimation in Section~\ref{weight_section} is expected to increase the accuracy of the estimator for unevenly spaced data. When \eqref{wLN} and the gradient in \eqref{full} are used in the optimization algorithm, we refer to the estimator as \textit{weighted conditional variance estimation}. If \eqref{wLN} and the gradient $\sum_{i=1}^n \tilde{w}({\mathbf V},{\mathbf X}_i)\nabla_{{\mathbf V}}\tilde{L}_n({\mathbf V},{\mathbf X}_i)$ are used; i.e., the first summand in \eqref{full} is dropped, we refer to it as \textit{partially weighted conditional variance estimation}.
For both, we replace ${\mathbf G}$ in Algorithm~\ref{algo1} with the corresponding gradient derived in Theorem~\ref{lemma-two}.
\begin{thm}\label{lemma-two}
Let $K(z) = \exp{(-z^2/2)}$ be the Gaussian kernel. Then, the gradient of $L^{(w)}_n({\mathbf V})$ in \eqref{wLN} is given by
\begin{align}
\nabla_{{\mathbf V}}L^{(w)}_n({\mathbf V}) &= \sum_{i=1}^n \left(\nabla_{{\mathbf V}}\tilde{w}({\mathbf V},{\mathbf X}_i) \tilde{L}_n({\mathbf V},{\mathbf X}_i) + \tilde{w}({\mathbf V},{\mathbf X}_i)\nabla_{{\mathbf V}}\tilde{L}_n({\mathbf V},{\mathbf X}_i)\right), \label{full}
\end{align}
where $\nabla_{{\mathbf V}}\tilde{L}_n({\mathbf V},{\mathbf X}_i)$ is given in Theorem~\ref{lemma-one}. Furthermore,
\[ \nabla_{{\mathbf V}}\tilde{w}({\mathbf V},{\mathbf X}_i) = -\frac{1}{h_n^2} \sum_j \left( \frac{K_{j,i}}{\sum_{l,u=1}^n K_{l,u}}d_{j,i}\nabla_{{\mathbf V}} d_{j,i} - \tilde{w}_i \sum_{l,u=1}^n \frac{K_{l,u}}{\sum_{o,s=1}^n K_{o,s}}d_{l,u}\nabla_{{\mathbf V}} d_{l,u} \right)
\]
with $\tilde{w}_i = \tilde{w}({\mathbf V},{\mathbf X}_i)$ in \eqref{wtilde}, $K_{j,i} = K(d_j({\mathbf V},{\mathbf X}_i)/h_n)$, and $d_{j,i} = d_j({\mathbf V},{\mathbf X}_i)$ given in \eqref{distance}.
\end{thm}
\subsection{\texorpdfstring{A study of the behaviour of $L_n({\mathbf V})$}{A study of the behaviour of Ln(V)}}\label{ToyExample}
We explore how accurately the sample version \eqref{LN} of the objective function estimates the target subspace in an example. We consider a bivariate normal predictor vector, ${\mathbf X} = (X_1,X_2)^T \sim N(\mathbf{0},\Sigmabf_{\x})$. We generate the response from $Y = g({\mathbf B}^T{\mathbf X}) + \epsilon = X_1 + \epsilon$, with $\epsilon \sim N(0,\eta^2)$ independent of ${\mathbf X}$. In this setting, $k = 1$, ${\mathbf B} = (1,0)^T $, $g(z) = z \in {\mathbb R}$ in model~\eqref{mod:basic}.
With these specifications, \eqref{mu_l}
becomes
\begin{align}\label{mul}
\mu_l({\mathbf V},\mathbf s_0)
&= \int_{{\mathbb R}}({\mathbf B}^T\mathbf s_0 + {\mathbf B}^T {\mathbf V} r)^l f_{{\mathbf X}\mid{\mathbf X} \in \mathbf s_0 +\operatorname{span}\{{\mathbf V}\}}(r)dr.
\end{align}
Dropping the terms that do not contain ${\mathbf r}$ in \eqref{density} yields
\begin{gather}
f_{{\mathbf X}\mid{\mathbf X} \in \mathbf s_0 +\operatorname{span}\{{\mathbf V}\}}(r) \propto f_{\mathbf X}(\mathbf s_0 + {\mathbf V} r) \propto \exp{\left(-\frac{1}{2}(\mathbf s_0 + r{\mathbf V})^T\Sigmabf_{\x}^{-1}(\mathbf s_0 + r {\mathbf V})\right)} \notag \\
\propto \exp{\left(-\frac{1}{2}\left(2r{\mathbf V}^T\Sigmabf_{\x}^{-1}\mathbf s_0 + r^2{\mathbf V}^T\Sigmabf_{\x}^{-1}{\mathbf V}\right)\right)}
= \exp{\left(-\frac{1}{2\sigma^2}\left(2r\sigma^2{\mathbf V}^T\Sigmabf_{\x}^{-1}\mathbf s_0 + r^2\right)\right)} \notag \\
\propto \exp{\left(-\frac{1}{2\sigma^2}(r - \alpha)^2\right)}, \label{third}
\end{gather}
where
$\sigma^2 =1/({\mathbf V}^T\Sigmabf_{\x}^{-1}{\mathbf V})$, $\alpha = -\sigma^2{\mathbf V}^T\Sigmabf_{\x}^{-1}\mathbf s_0$, and the symbol $\propto$ stands for ``proportional to''. Letting $\psi(z)$ denote the density of a standard normal variable, \eqref{third} yields
\begin{align}\label{fcond}
f_{{\mathbf X}\mid{\mathbf X} \in \mathbf s_0 +\operatorname{span}\{{\mathbf V}\}}(r) =
\frac{1}{\sigma}\psi\left(\frac{r- \alpha}{\sigma}\right)
\end{align}
for ${\mathbf V}, \mathbf s_0 \in {\mathbb R}^{2 \times 1}$.
Inserting \eqref{fcond} in \eqref{mul} yields
\begin{gather*}
\int_{{\mathbb R}} ({\mathbf B}^T\mathbf s_0 + {\mathbf B}^T{\mathbf V} r)^l \frac{1}{\sigma}\psi\left(\frac{r- \alpha}{\sigma}\right) d r
= \begin{cases}
{\mathbf B}^T\mathbf s_0 + {\mathbf B}^T{\mathbf V}\alpha & l = 1\\
({\mathbf B}^T\mathbf s_0)^2 + 2({\mathbf B}^T\mathbf s_0)({\mathbf B}^T{\mathbf V})\alpha + ({\mathbf B}^T{\mathbf V})^2(\sigma^2 + \alpha^2) & l = 2 \\
\end{cases}
\end{gather*}
Combining \eqref{LtildeVs0}, \eqref{Lvs} and \eqref{objective} yields $\tilde{L}({\mathbf V},\mathbf s_0) = \mu_2({\mathbf V},\mathbf s_0) - \mu_1({\mathbf V},\mathbf s_0)^2 +\eta^2 = ({\mathbf B}^T{\mathbf V})^2\sigma^2 + \eta^2$, so that
\begin{equation}
L({\mathbf V}) = \mathbb{E}\left(\tilde{L}({\mathbf V},{\mathbf X})\right) = ({\mathbf B}^T{\mathbf V})^2\sigma^2 + \eta^2 = \frac{({\mathbf B}^T{\mathbf V})^2}{{\mathbf V}^T\Sigmabf_{\x}^{-1}{\mathbf V}} + \eta^2. \label{toyexample}
\end{equation}
From \eqref{toyexample} we can easily see that $L({\mathbf V})$ attains its minimum at ${\mathbf V} \perp {\mathbf B}$. Also, if $\Sigmabf_{\x}={\mathbf I}_2$, the maximum of $L({\mathbf V})$ is attained at ${\mathbf V} = {\mathbf B}$.
To visualize the behavior of $L_n({\mathbf V})$ as the sample size increases, we parametrize ${\mathbf V}$ by ${\mathbf V}(\theta) = (\cos(\theta),\sin(\theta))^T$, $\theta \in [0,\pi]$. Since ${\mathbf B} = (1,0)^T$, the minimum of $L({\mathbf V})$ is at ${\mathbf V}(\pi/2) = (0,1)^T$, which is orthogonal to ${\mathbf B}$.
The true $L({\mathbf V}(\theta))$ and its estimates $L_n({\mathbf V}(\theta))$ are plotted for samples of different sizes $n$ in Figure~\ref{Lvplot}. $L_n({\mathbf V}(\theta))$ quickly approximates $L({\mathbf V})$ and attains its minimum at the same value as $L({\mathbf V})$ even for $n= 10$.
As an aside, we note that assumption (A.4) is violated in this example, which suggests that the proposed estimator of conditional variance estimation may apply under weaker assumptions.
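The closed form \eqref{toyexample} is easy to verify numerically. A short Python sketch with $\Sigmabf_{\x} = {\mathbf I}_2$ and $\eta = 0.1$, matching the setting of Figure~\ref{Lvplot}:

```python
import numpy as np

def L_toy(theta, eta=0.1):
    # L(V(theta)) = (B^T V)^2 / (V^T Sigma^{-1} V) + eta^2 with B = (1, 0)^T
    # and Sigma_x = I_2, which reduces to cos(theta)^2 + eta^2.
    V = np.array([np.cos(theta), np.sin(theta)])
    B = np.array([1.0, 0.0])
    return (B @ V) ** 2 / (V @ V) + eta ** 2

# Minimum at theta = pi/2 (V orthogonal to B), maximum at theta = 0 (V = B).
```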
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.4]{trueLoss_vs_estLoss_21.pdf}
\caption{The solid black line is $L({\mathbf V}(\theta)) = \cos(\theta)^2 + 0.1^2$; the colored lines are $L_n({\mathbf V}(\theta))$, $\theta \in [0, \pi]$, for $n=10,50,100,500$. The vertical black line is at $\theta=\pi/2$.
}
\label{Lvplot}
\end{figure}
\section{Simulation studies}\label{SimStudy}
We compare the estimation accuracy of conditional variance estimation with the forward model based sufficient dimension reduction methods, mean outer product gradient estimation (\texttt{meanOPG}), mean minimum average variance estimation (\texttt{meanMAVE}) \cite{MAVEpackage}, refined outer product gradient (\texttt{rOPG}), refined minimum average variance estimation (\texttt{rmave}) \cite{Xiaetal2002, Li2018}, and principal Hessian directions (\texttt{pHd}) \cite{Li1992, CookLi2002}, and the inverse regression based methods, sliced inverse regression (\texttt{SIR}) \cite{Li1991} and sliced average variance estimation (\texttt{SAVE}) \cite{CookWeisberg1991}. The dimension $k$ is assumed to be known throughout.
We report results for conditional variance estimation using the ``plug-in'' bandwidth in \eqref{bandwidth}
and three different conditional variance estimation versions, \texttt{CVE}, \texttt{wCVE}, and \texttt{rCVE}. \texttt{CVE} is obtained by using $m = 10$ arbitrary starting values in the optimization algorithm and optimizing \eqref{LN} as described in Section~\ref{Optim}. \texttt{rCVE}, or \textit{refined weighted CVE}, is obtained by setting the starting value ${\mathbf V}^{(0)}$ at the optimizer of \texttt{CVE}, and using \eqref{wLN} in the optimization algorithm in Section~\ref{Optim} with the partially weighted gradient as described in Section~\ref{weight_section}. \texttt{wCVE}, or \textit{weighted CVE}, is obtained by optimizing \eqref{wLN} with the partially weighted gradient as described in Sections~\ref{weight_section} and \ref{Optim}. Methods \texttt{rOPG} and \texttt{rmave} refer to the original refined outer product gradient and refined minimum average variance estimation algorithms published in \cite{Xiaetal2002}. They are implemented using the \texttt{R} code in \cite{Li2018} with number of iterations $\text{nit}=25$, since the algorithm is seen to converge within 25 iterations. The \texttt{dr} package is used for the \texttt{SIR}, \texttt{SAVE} and \texttt{pHd} calculations, and the \texttt{MAVE} package for mean outer product gradient estimation (\texttt{meanOPG}) and mean minimum average variance estimation (\texttt{meanMAVE}). The source code for conditional variance estimation can be downloaded from \url{https://git.art-ist.cc/daniel/CVE}.
Table~\ref{tab:mod} lists the seven models (M1-M7) we consider.
Throughout, we set $p=20$, ${\mathbf b}_1 = (1,1,1,1,1,1,0, ...,0)^T/\sqrt{6}$, ${\mathbf b}_2 = (1,-1,1,-1,1,-1,0,...,0)^T/\sqrt{6} \in {\mathbb R}^p$ for M1-M5. For M6, ${\mathbf b}_1 = \mathbf{e}_1, {\mathbf b}_2 = \mathbf{e}_2$ and ${\mathbf b}_3 = \mathbf{e}_p$, and for M7 ${\mathbf b}_1,{\mathbf b}_2,{\mathbf b}_3$ are the same as in M6 and ${\mathbf b}_4 = \mathbf{e}_3$, where $\mathbf{e}_j$ denotes the $p$-vector with $j$th element equal to 1 and all others 0. The error term $\epsilon$ is independent of ${\mathbf X}$ for all models. In M2, M3, M4, M5 and M6, $\epsilon \sim N(0,1)$. For M1 and M7, $\epsilon$ has a generalized normal distribution $GN(a,b,c)$ with density $f_{\epsilon}(z) = c/(2b\Gamma(1/c))\exp(-(|z-a|/b)^c)$ [see \cite{gnorm}], with location $a=0$ and shape parameter $c=0.5$ for M1, and $c=1$ for M7 (Laplace distribution). For both, the scale parameter $b$ is chosen such that $\var(\epsilon) = 0.25$.
\begin{table}[!htbp]
\centering
\caption{Models}
\vspace{0.05in}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{lccccc}
\toprule
Name & Model & ${\mathbf X}$ distribution & $\epsilon$ distribution& $k$ & $n$ \\ \midrule
M1& $Y = \cos({\mathbf b}_1^T{\mathbf X}) + \epsilon$ & ${\mathbf X} \sim N_p({\bf 0},\greekbold{\Sigma})$& $GN(0,\sqrt{1/2},0.5)$ & 1 & 100\\
M2 &$Y = \cos({\mathbf b}_1^T{\mathbf X}) + 0.5\epsilon$ & $ {\mathbf X} \sim \lambda Z \mathbf 1_{p} + N_p({\bf 0},{\mathbf I}_{p})$& $N(0,1)$ & 1 & 100\\
M3& $Y = 2\log(|{\mathbf b}_1^T{\mathbf X}|+2)+ 0.5\epsilon$&${\mathbf X} \sim N_p({\bf 0},{\mathbf I}_p)$& $N(0,1)$ & 1 & 100\\
M4& $Y = ({\mathbf b}_1^T{\mathbf X})/(0.5 +(1.5 + {\mathbf b}_2^T{\mathbf X})^2) + 0.5\epsilon$&${\mathbf X} \sim N_p(0,\greekbold{\Sigma})$& $N(0,1)$ & 2 & 200\\
M5 & $Y = \cos(\pi {\mathbf b}_1^T{\mathbf X})({\mathbf b}_2^T{\mathbf X} + 1)^2 + 0.5\epsilon$&${\mathbf X} \sim U([0,1]^p)$ & $N(0,1)$& 2 & 200\\
M6 &$Y = ({\mathbf b}_1^T{\mathbf X})^2 + ({\mathbf b}_2^T{\mathbf X})^2 + ({\mathbf b}_3^T{\mathbf X})^2 + 0.5\epsilon$ & ${\mathbf X} \sim N_p({\bf 0},{\mathbf I}_p)$& $N(0,1)$ & 3 & 200\\
M7 &$Y = ({\mathbf b}_1^T{\mathbf X})({\mathbf b}_2^T{\mathbf X})^2 + ({\mathbf b}_3^T{\mathbf X})({\mathbf b}_4^T{\mathbf X}) + \epsilon$ & ${\mathbf X} \sim t_3({\mathbf I}_p)$& $GN(0,\sqrt{1/\Gamma(6)},1)$ & 4 & 400\\
\bottomrule
\end{tabular}%
}
\label{tab:mod}%
\end{table}
The variance-covariance structure of ${\mathbf X}$ in models M1 and M4 satisfies $\greekbold{\Sigma}_{i,j} = 0.5^{|i-j|}$ for $i,j=1,\ldots,p$. In M5, ${\mathbf X}$ is uniform with independent entries on the $p$-dimensional hypercube. In M7, ${\mathbf X}$ is multivariate $t$-distributed with 3 degrees of freedom. The link functions of M4 and M7 are studied in \cite{Xiaetal2002}, but we use $p=20$ instead of 10, a non-identity covariance structure for M4, and the $t$-distribution instead of the normal for M7.
In M2,
$Z \sim 2\text{Bernoulli}(p_{\text{mix}}) - 1 \in \{-1,1\}$,
where $\mathbf 1_q = (1,1,...,1)^T\in {\mathbb R}^q$, mixing probability $p_{\text{mix}} \in [0,1]$ and dispersion parameter $\lambda > 0$.
For $0 < p_{\text{mix}} <1$, ${\mathbf X}$ has a mixture normal distribution, where $p_{\text{mix}}$ is the relative mode height and $\lambda$ is a measure of mode distance.
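As an illustration (not the paper's \texttt{R} code), model M2 can be simulated as follows:

```python
import numpy as np

def generate_M2(n, p=20, p_mix=0.3, lam=1.0, rng=None):
    # X ~ lam * Z * 1_p + N_p(0, I_p) with Z = 2 Bernoulli(p_mix) - 1,
    # Y = cos(b1^T X) + 0.5 * eps, eps ~ N(0, 1) independent of X.
    rng = np.random.default_rng() if rng is None else rng
    b1 = np.concatenate([np.ones(6), np.zeros(p - 6)]) / np.sqrt(6)
    Z = 2 * rng.binomial(1, p_mix, size=n) - 1
    X = lam * Z[:, None] + rng.standard_normal((n, p))   # Z broadcast to all p columns
    Y = np.cos(X @ b1) + 0.5 * rng.standard_normal(n)
    return X, Y

X, Y = generate_M2(100, rng=np.random.default_rng(0))
```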
We set $q = p - k$ and generate $r=100$ replications of models M1 - M7. We estimate ${\mathbf B}$ using the ten sufficient dimension reduction methods. The accuracy of the estimates is assessed using $err= \|\mathbf{P}_{\mathbf B} - \mathbf{P}_{\widehat{{\mathbf B}}}\|/\sqrt{2k}$, which lies in the interval $[0,1]$ since $\|\mathbf{P}_{\mathbf B} - \mathbf{P}_{\widehat{{\mathbf B}}}\|^2 \leq 2k$. The factor $\sqrt{2k}$ normalizes the distance, with values closer to zero indicating better agreement and values closer to one indicating strong disagreement.
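The error criterion can be computed directly from the two projection matrices; a small Python sketch:

```python
import numpy as np

def subspace_error(B, B_hat):
    # err = ||P_B - P_Bhat||_F / sqrt(2k), where P_M projects onto span(M).
    k = B.shape[1]
    proj = lambda M: M @ np.linalg.solve(M.T @ M, M.T)
    return np.linalg.norm(proj(B) - proj(B_hat)) / np.sqrt(2 * k)

B = np.eye(4)[:, :2]
err_same = subspace_error(B, B)                   # 0: identical spans
err_orth = subspace_error(B, np.eye(4)[:, 2:])    # 1: orthogonal spans
```

Because only the projections enter, the criterion is invariant to the choice of basis for each subspace.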
In Table~\ref{tab:summary} the mean and standard deviation of $err$ for M1 - M7 are reported. In particular, for M2, $p_{\text{mix}}=0.3$ and $\lambda = 1$. The smallest error values are boldfaced. In models M1, M2 and M3, the conditional variance estimator is the best performer, with its refined version a close second. In M4, M5 and M6, any of the four versions of MAVE performs better than CVE. For model M7 the results of \texttt{rOPG} and \texttt{rmave} are not reported because the code frequently produces an error message that a matrix is not invertible. Among the rest, the weighted version of CVE, \texttt{wCVE}, attains the minimum error.
Sliced inverse regression (\texttt{SIR}) and sliced average variance estimation (\texttt{SAVE}) are not competitive throughout our experiments. Sliced inverse regression (\texttt{SIR}), in particular, is expected to fail in models M1-M3, and M6 since $\mathbb{E}(Y\mid{\mathbf X})$ is even.
In Figure~\ref{fig:M4}, box-plots for all combinations of $p_{\text{mix}} \in \{0.3,0.4,0.5\}$ and $\lambda \in \{0,0.5,1,1.5\}$ are presented. The reference methods are restricted to \texttt{meanOPG} and \texttt{meanMAVE}, since the others are not competitive. Conditional variance estimation performs better than all competing methods and is the only method with consistently smaller errors when the two modes are further apart ($\lambda \geq 1$), regardless of the mixing probability $p_{\text{mix}}$. The performance of both \texttt{meanOPG} and \texttt{meanMAVE} worsens as one moves from left to right row-wise. The mixing probability, $p_{\text{mix}}$, has no noticeable effect on the performance of any method; i.e., the plots are very similar column-wise. In sum, \texttt{meanMAVE}'s performance deteriorates as the bimodality of the predictor distribution becomes more distinct. In contrast, conditional variance estimation is unaffected and appears to have an advantage over \texttt{meanMAVE} when the predictors have mixture distributions, the link function is even about the midpoint of the two modes, and ${\mathbf B}$ is not orthogonal to the line connecting the two modes. Conditional variance estimation is the only method that estimates the mean subspace reliably in model M2 ($err \approx 0.4$ to $0.5$), whereas \texttt{meanMAVE} misses it completely ($err \approx 1$).
These results indicate that conditional variance estimation is often approximately on par, and can perform much better than \texttt{meanMAVE} depending on the predictor distribution and the link function.
\begin{table}[!htbp]
\centering
\caption{Mean and standard deviation of estimation errors}
\vspace{0.05in}
\label{tab:summary}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{lcccccccccc}
\toprule
Model & CVE &wCVE&rCVE & meanOPG & rOPG& meanMAVE& rmave& pHd& sir & save \\ \midrule
$M1$\\
\qquad mean& {\bf0.3827}& 0.4414 &0.4051&0.6220&0.9876&0.5099&0.9840&0.8278&0.9875&0.9788\\
\qquad sd& 0.1269& 0.1595 &0.1329&0.1879&0.0223&0.1800&0.0295&0.1206&0.0243&0.0334\\
$M2$\\
\qquad mean& {\bf0.4572}&0.4992&0.4658&0.8987&0.9332&0.8905&0.9242&0.9000&0.9783&0.9781\\
\qquad sd& 0.1038& 0.1524& 0.0989&0.0908&0.0683&0.0983&0.0897&0.0735&0.0278&0.0318\\
$M3$ \\
\qquad mean& {\bf0.6282}& 0.7509 &0.6371&0.7847&0.9644&0.7576&0.9674&0.6964&0.9647&0.9519\\
\qquad sd& 0.2354& 0.2262 &0.2181&0.2201&0.0667&0.2435&0.0609&0.1626&0.0587&0.0650\\
$M4$\\
\qquad mean& 0.5663&0.5897&0.5554&0.4071&0.4026&0.4361&{\bf0.3905}&0.7772&0.5824&0.9727\\
\qquad sd& 0.1239& 0.1246&0.1298&0.0814&0.0609&0.0997&0.0584&0.0662&0.0951&0.0202\\
$M5$\\
\qquad mean& 0.4429& 0.5604&0.4779&0.4058&{\bf0.3737}&0.3929&0.3750&0.7329&0.6374&0.9730\\
\qquad sd& 0.0891& 0.1233&0.0976&0.1022&0.0680&0.0894&0.0871&0.0832&0.0968&0.0186\\
$M6$\\
\qquad mean& 0.3828&0.3027&0.3230&0.1827&0.4632&{\bf0.1656}&0.4863&0.4978&0.9129&0.8236\\
\qquad sd& 0.1006& 0.0748&0.1098&0.0289&0.1717&0.0252&0.1676&0.0601&0.0420&0.0518\\
$M7$\\
\qquad mean& 0.6856& {\bf0.5050}&0.5651&0.5694&NA&0.5482&NA&0.8536&0.8133&0.8699\\
\qquad sd& 0.0588& 0.0862&0.0879&0.1122&NA&0.1271&NA&0.0354&0.0341&0.0342\\
\bottomrule
\end{tabular}%
}
\end{table}
\begin{figure}[!htbp]
\centering
\includegraphics[scale=.32]{method_compair_M2_2019-12-12T154457.pdf}
\caption{M2, $p = 20, n = 100$}
\label{fig:M4}
\end{figure}
Furthermore, we estimate the dimension $k$ via cross-validation, following the approach in \cite{Xiaetal2002}, with
\begin{align}
\hat{k} &= \operatorname{argmin}_{l=1,...,p}CV(l)= \operatorname{argmin}_{l=1,...,p} \frac{\sum_i (Y_i - \hat{g}^{-i}(\widehat{{\mathbf B}}_{l}^T{\mathbf X}_i))^2}{n},\label{estdim}
\end{align}
where $\hat{g}^{-i}(\cdot)$ is computed from the data $(Y_j,\widehat{{\mathbf B}}_{l}^T{\mathbf X}_j)_{j=1,...,n;\, j \neq i}$ using multivariate adaptive regression splines \cite{mars} in the \texttt{R}-package \texttt{mda}, and $\widehat{{\mathbf B}}_{l} =\widehat{{\mathbf V}}_{p-l}^\perp$ is any basis of the orthogonal complement of $\widehat{{\mathbf V}}_{p-l}= \operatorname{argmin}_{{\mathbf V} \in {\mathcal S}(p,p-l)}L_n({\mathbf V})$.
For a given $l$, we calculate $\widehat{{\mathbf B}}_l$ from the whole data set and predict $Y_i$ by $\hat{Y}_{i,l} = \hat{g}^{-i}(\widehat{{\mathbf B}}_l^T{\mathbf X}_i)$. For $l=p$, $\widehat{{\mathbf B}}_{p} ={\mathbf I}_p$. The results for the seven models are reported in Table~\ref{pred_dim}. The CVE based dimension estimation is the most accurate in models M1, M2, M3, and M6 and differs slightly from that of MAVE in M7. MAVE performs better in M4 and M5, completely misses the true dimension in M2 and misses it most of the time in M3. Thus, the dimension estimation performance of CVE and MAVE agrees with the estimation accuracy of the true subspace in Table~\ref{tab:summary}. CVE estimates the dimension more accurately even in model M6, where it exhibits worse subspace estimation performance, and overall appears to be more accurate.
\begin{table}[!htbp]
\centering
\caption{Number of times dimension $k$ is correctly estimated in $100$ replications}
\vspace{0.05in}
\begin{tabular}{l|ccccccc}
\toprule
&M1 & M2 & M3& M4 & M5&M6&M7\\ \midrule
CVE &83&41&88&62&46&74&19\\
MAVE &67&0&14&76&60&57&21\\
\bottomrule
\end{tabular}%
\label{pred_dim}
\end{table}
We carried out many simulation experiments for an array of combinations of link functions, sufficient reduction matrices ${\mathbf B}$ and their ranks, as well as predictor and error distributions. All reported and unreported results indicate that the difference in performance of the two methods, CVE and \texttt{meanMAVE}, can be attributed to both the form of the link function and the marginal predictor distribution. We observed that when the link function had a bounded first order derivative, CVE often outperformed \texttt{meanMAVE} across predictor distributions. In the opposite case, MAVE performed mostly better. Also, when the predictors have a bimodal distribution with well separated modes and the link function is even, regardless of whether its derivative is bounded, CVE outperforms \texttt{meanMAVE}. In the other settings for the generated data, both methods were roughly on par.
\section{Real Data Analyses}\label{real_data}
Three data sets are analyzed: the \textit{Hitters} data in the \texttt{R} package \texttt{ISLR}, which was also analyzed by \cite{Xiaetal2002}, the \textit{Boston Housing} data in the \texttt{R} package \texttt{mlbench}, and the \textit{Concrete} data from the \texttt{MAVE} package.
The reference method is \texttt{meanMAVE} from the \texttt{MAVE} package in \texttt{R}, and \texttt{CVE} is calculated using $m = 50$ and $\text{maxit} = 10$ in Algorithm~\ref{algo1} in Section~\ref{Optim}. The estimation of the dimension is based on \eqref{estdim} in Section~\ref{SimStudy}.
Following \cite{Xiaetal2002}, we remove 7 outliers from the \textit{Hitters} data set leading to a sample size of 256. The response is $Y = \log(\text{salary})$ and the 16 continuous predictors are the game statistics of players in the Major League Baseball league in the seasons 1986 and 1987.
Further information can be found in \url{https://www.rdocumentation.org/packages/ISLR/versions/1.2/topics/Hitters}.
The \textit{Boston Housing} data set contains 506 census tracts on 14 variables from the 1970 census. The response is \texttt{medv}, the median value of owner-occupied homes in USD 1000's. The factor variable \texttt{chas} is removed from the data set for the analysis so that the response is modeled by the remaining 12 continuous predictors. The description of the variables can be found at \url{https://www.rdocumentation.org/packages/mlbench/versions/2.1-1/topics/BostonHousing}.
The \textit{Concrete} data set contains 1030 instances on 9 continuous variables. The response is concrete compressive strength.
Concrete strength is very important in civil engineering and is a highly nonlinear function of age and ingredients. The description of the variables can be found in \url{https://www.rdocumentation.org/packages/MAVE/versions/1.3.10/topics/Concrete}.
For all three data sets we standardize both the predictors and the response by subtracting the mean and rescaling column-wise so that each variable has unit variance.
The data sets are analyzed using 10-fold cross-validation to calculate an unbiased estimate of the prediction error \cite{crossvalidation} for our method, \texttt{CVE}, and its main competitor \texttt{meanMAVE} using the \texttt{MAVE} package.
The dimension for each method is estimated with \eqref{estdim} on the training set. We then fit a forward regression model on the training set, replacing the original predictors with the reduced ones, using multivariate adaptive regression splines \cite{mars} in the \texttt{R} package \texttt{mda}, and calculate the prediction error on the test set for both methods.
The dimension estimates of \texttt{CVE} and \texttt{MAVE} mostly disagree.
The mean and standard deviation of the 10-fold cross-validation prediction errors are reported in Table~\ref{table:datasets}. Since the response is standardized, the values in Table~\ref{table:datasets} are bounded between 0 and 1, with smaller values indicating better predictive performance. \texttt{CVE} performs slightly worse than \texttt{meanMAVE} in the \textit{Hitters} data set, slightly better in the \textit{Boston Housing}, and better in the \textit{Concrete} data set analysis.
\begin{table}[!htbp]
\centering
\caption{Mean and standard deviation (in parenthesis) of standardized out of sample prediction errors for the three data sets}
\vspace{0.05in}
\begin{tabular}{l|ccc}
\toprule
Method& Hitters & Housing &Concrete\\
\midrule
CVE & 0.216 & 0.260 &0.361 \\
& (0.101) & (0.331) & (0.206) \\
MAVE & 0.203 & 0.299&0.417 \\
& (0.083) & (0.382) & (0.348) \\
\bottomrule
\end{tabular}%
\label{table:datasets}%
\end{table}
\subsection{\texorpdfstring{Hitters Data Analysis as in \cite{Xiaetal2002}}{Hitters Data Analysis}}\label{hitters}
Additionally,
we reconstruct the analysis of the \textit{Hitters} data in \cite{Xiaetal2002}, which does not account for the out-of-sample prediction error as in Section~\ref{real_data} but uses the whole sample for estimation of ${\mathbf B}$ and its rank. Only the dimension $k$ is estimated with leave-one-out cross validation.
Table~\ref{tab:ex1} reports the average cross validation mean squared error $CV(k)$ in \eqref{estdim} using the whole data set over $k=1,\ldots,5$.
Both conditional variance estimation
and mean minimum average variance estimation estimate the dimension to be 2.
\begin{table}[!htbp]
\centering
\caption{Mean cross-validation error}
\vspace{0.05in}
\begin{tabular}{l|ccccc}
\toprule
$k$ &1 & 2 & 3& 4 & 5\\ \midrule
CVE &0.308&0.218&0.275&0.327&0.371\\
MAVE &0.370&0.277&0.339&0.413&0.440\\
\bottomrule
\end{tabular}%
\label{tab:ex1}
\end{table}
We plot the response against the estimated directions in Figure~\ref{Fig 6}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\textwidth]{Hitters.png}
\caption{ $Y$ against $\widehat{{\mathbf b}}_1^T{\mathbf X}$ and $\widehat{{\mathbf b}}_2^T{\mathbf X}$}
\label{Fig 6}
\end{figure}
Both exhibit the same pattern: the response appears to be linear in one direction and quadratic in the second. The difference is that the linear pattern is clearer in the second CVE direction and the quadratic pattern exhibits increasing variance in the first MAVE direction.
Based on the scatterplots in Figure~\ref{Fig 6}, we fit the same models for both.
For conditional variance estimation, the fitted regression is
\begin{equation} \label{regCVE}
\hat{Y} = 0.39578 + 0.33724 (\widehat{{\mathbf b}}_1^T{\mathbf X}) - 0.08066 (\widehat{{\mathbf b}}_1^T{\mathbf X})^2 + 0.29126 (\widehat{{\mathbf b}}_2^T{\mathbf X})
\end{equation}
with $R^2 = 0.7975$, and
for minimum average variance estimation
\begin{equation}\label{regMAVE}
\hat{Y} = 0.39051 + 1.32529(\widehat{{\mathbf b}}_1^T{\mathbf X}) - 0.55328(\widehat{{\mathbf b}}_1^T{\mathbf X})^2 + 0.49546 (\widehat{{\mathbf b}}_2^T{\mathbf X})
\end{equation}
with $R^2 = 0.7859$.
Both models \eqref{regCVE} and \eqref{regMAVE} have about the same fit as measured by $R^2$. The in-sample performance of the two methods is practically the same for the \textit{Hitters} data.
\section{Discussion}\label{discussion}
In this paper the novel conditional variance estimator (CVE) for the mean subspace is introduced. We present its geometrical and theoretical foundation, show its consistency, and propose an estimation algorithm with assured convergence. CVE requires that the forward model \eqref{mod:basic}, $Y=g({\mathbf B}^T{\mathbf X}) +\epsilon$, hold, together with weak assumptions on the response and the covariates.
Minimum average variance estimation (MAVE) \cite{Xiaetal2002} is the only other sufficient dimension reduction method based on the forward model \eqref{mod:basic}. It estimates the sufficient dimension reduction by targeting both the reduction and the link function $g$ in \eqref{mod:basic}. CVE targets only the reduction and does not require estimation of the link function, which may explain why it has an advantage over MAVE in some regression settings. For example, CVE exhibits similar performance across different link functions (cosine, exponential, etc.) for fixed $\lambda$, whereas the performance of MAVE is very uneven for model M2 in Section~\ref{SimStudy}. CVE is more accurate than MAVE when the link function is even and the predictor distribution is bimodal throughout our simulation studies. Moreover, CVE does not require the inversion of the predictor covariance matrix and can be applied to regressions with $p \approx n$ or $p > n$.
The theoretical challenge in deriving the statistical properties of conditional variance estimation arises from the novelty of its definition, which involves random, non-i.i.d.\ weights that depend on the parameter to be estimated.
\nocite{*}
\section{Introduction}
\subsection{A continuous version of the Lov\'asz Local Lemma}
\subsubsection{Constraint satisfaction problems and the LLL}
Suppose we wish to prove that an object with certain combinatorial properties exists. A possible way to achieve this is by showing that an object chosen \emph{at random} from some class has the desired properties with positive probability. This approach was pioneered by Erd\H os in the 1940s and has since become indispensable throughout combinatorics;
see the book \cite{AS} by Alon and Spencer for an introduction. An important probabilistic tool is the so-called \emphd{Lov\'asz Local Lemma} \ep{the \emphd{LLL} for short}. The LLL is particularly useful for proving the existence of colorings satisfying a given set of ``local'' constraints. Formally, we define \emph{constraint satisfaction problems} as follows:
\begin{defn}
Let $X$ be a set and let $k \in {\mathbb{N}}^+$. We identify $k$ with the $k$-element set $\set{0, 1, \ldots, k-1}$.
\begin{itemize}[wide]
\item A \emphd{$k$-coloring} of a set $S$ is a function $f \colon S \to k$.
\item For a finite set $D \subseteq X$, an \emphd{$(X,k)$-constraint} \ep{or simply a \emphd{constraint} if $X$ and $k$ are clear from the context} with \emphd{domain} $D$ is a set $B \subseteq k^D$ of $k$-colorings of $D$.
We write $\mathrm{dom}(B) \coloneqq D$.
\item A $k$-coloring $f \colon X \to k$ \emphd{violates} a constraint $B$ with domain $D$ if the restriction of $f$ to $D$ is in $B$, and \emphd{satisfies} $B$ otherwise.
\item A \emphd{constraint satisfaction problem} \ep{a \emphd{CSP} for short} $\mathscr{B}$ on $X$ with range $k$, in symbols $\mathscr{B} \colon X \to^? k$, is a set of $(X,k)$-constraints.
\item A \emphd{solution} to a CSP $\mathscr{B} \colon X \to^? k$ is a $k$-coloring $f \colon X \to k$ that satisfies every constraint $B \in \mathscr{B}$.
\end{itemize}
\end{defn}
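A minimal sketch of these definitions in code, with each constraint encoded as a pair (domain, set of forbidden restrictions); the encoding and the tiny CSP below are illustrative choices, not taken from the text.

```python
from itertools import product

# Hedged sketch of the definitions: a constraint is stored as (domain,
# forbidden), where forbidden is a set of tuples of colors indexed by the
# domain; a coloring violates the constraint iff its restriction to the
# domain lies in forbidden (illustrative encoding only).
k = 2
constraints = [
    ((0, 1), {(0, 0), (1, 1)}),     # colors of points 0 and 1 must differ
    ((1, 2), {(1, 1)}),             # points 1 and 2 must not both get color 1
]

def satisfies(f, constraint):
    dom, forbidden = constraint
    return tuple(f[x] for x in dom) not in forbidden

# A solution satisfies every constraint; enumerate all k-colorings of {0,1,2}.
solutions = [f for f in (dict(enumerate(bits))
                         for bits in product(range(k), repeat=3))
             if all(satisfies(f, c) for c in constraints)]
print(len(solutions), "solutions out of", k ** 3, "colorings")
```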
In other words, each constraint $B \in \mathscr{B}$ in a CSP $\mathscr{B} \colon X \to^? k$ is interpreted as a set of finite ``forbidden patterns'' that are not allowed to appear in a solution $f \colon X \to k$. The LLL provides a simple probabilistic condition that guarantees that a given CSP has a solution. Fix a CSP $\mathscr{B} \colon X \to^? k$. For each $B \in \mathscr{B}$, the \emphd{probability} of $B$ is the quantity $\P[B]$ defined by
\[
\P[B] \,\coloneqq\, \frac{|B|}{k^{|\mathrm{dom}(B)|}} \,=\, \text{the probability that $B$ is violated by uniformly random $f \colon X \to k$}.
\]
The \emphd{neighborhood} of $B$ is the set
\[
{N}(B) \,\coloneqq\, \set{B' \in \mathscr{B} \,:\, B'\neq B \text{ and } \mathrm{dom}(B') \cap \mathrm{dom}(B) \neq \varnothing}.
\]
The LLL invokes the parameters $\mathsf{p}(\mathscr{B}) \coloneqq \sup_{B \in \mathscr{B}} \P[B]$ and $\mathsf{d}(\mathscr{B}) \coloneqq \sup_{B \in \mathscr{B}} |{N}(B)|$.
\begin{theo}[{{Lov\'asz Local Lemma}; Erd\H os--Lov\'asz \cite{EL}}]\label{theo:LLL}
If $\mathscr{B}$ is a CSP such that
\begin{equation}\label{eq:LLL}
e \cdot \mathsf{p}(\mathscr{B}) \cdot (\mathsf{d}(\mathscr{B}) + 1) \,\leq\, 1,
\end{equation}
where $e = 2.71\ldots$ is the base of the natural logarithm, then $\mathscr{B}$ has a solution.
\end{theo}
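The condition \eqref{eq:LLL} can be checked mechanically on a small example. The toy CSP below, in which each constraint forbids the two monochromatic colorings of its domain, is an illustrative choice; for it $\P[B] = 2/2^{|\mathrm{dom}(B)|}$.

```python
import math
from itertools import product

# Hedged toy example (not from the paper): a CSP over X = {0,...,9} with
# k = 2, where each constraint forbids the monochromatic colorings of its
# domain, so P[B] = 2 / 2^{|dom(B)|}.
k = 2
domains = [(0, 1, 2, 3, 4), (3, 4, 5, 6, 7), (6, 7, 8, 9, 0)]

def prob(dom):
    return 2 / k ** len(dom)                      # |B| / k^{|dom(B)|}

def neighborhood(i):
    return [j for j in range(len(domains))
            if j != i and set(domains[j]) & set(domains[i])]

p = max(prob(dom) for dom in domains)             # p(B): sup of P[B]
d = max(len(neighborhood(i)) for i in range(len(domains)))  # d(B)
print(f"p = {p}, d = {d}, e*p*(d+1) = {math.e * p * (d + 1):.4f}")
assert math.e * p * (d + 1) <= 1                  # condition (eq:LLL) holds

# Brute-force confirmation that a solution exists, as the LLL guarantees.
assert any(all(len({f[v] for v in dom}) > 1 for dom in domains)
           for f in (dict(enumerate(bits))
                     for bits in product(range(k), repeat=10)))
```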
The LLL is often stated in the case when $\mathscr{B}$ is finite. However, a straightforward compactness argument shows that Theorem~\ref{theo:LLL} holds for infinite $\mathscr{B}$ as well \ep{see, e.g., \cite[proof of Theorem 5.2.2]{AS}}.
\subsubsection{Continuous colorings}\label{subsubsec:cont_LLL}
In this paper we are interested in the following question:
\begin{ques}[{Continuous LLL}]\label{ques:cont_ques}
Suppose $X$ is a zero-dimensional Polish space. What LLL-style conditions guarantee that a CSP $\mathscr{B} \colon X \to^? k$ has a \emph{continuous} solution $f \colon X \to k$?
\end{ques}
Recall that a topological space is \emphd{Polish} if it is separable and completely metrizable, and \emphd{zero-dimensional} if it has a base consisting of clopen sets. The restriction to zero-dimensional spaces $X$ in Question~\ref{ques:cont_ques} is natural since a continuous map $X \to k$ can be thought of as a partition of $X$ into clopen sets indexed by $0$, $1$, \ldots, $k-1$, so if we hope to find a continuous solution to $\mathscr{B}$, it is reasonable to assume that $X$ has ``many'' clopen subsets.
Questions in the spirit of Question~\ref{ques:cont_ques} have recently attracted attention due to their applications in dynamical systems and descriptive set theory. For a sample of related results, see \cite{MLLL, CGMPT, BerDist}. These questions form a part of the general area called \emphd{descriptive combinatorics}, which investigates combinatorial problems under a variety of topological or measure-theoretic regularity requirements. For more background, see the surveys \cite{KechrisMarks} by Kechris and Marks and \cite{Pikh_survey} by Pikhurko.
In the context of Question~\ref{ques:cont_ques}, it is necessary to assume that the CSP $\mathscr{B}$ itself ``respects'' the topology on $X$ in an appropriate sense. To this end, we define \emph{continuous CSPs} as follows:
\begin{defn}\label{defn:cont}
Let $X$ be a zero-dimensional Polish space. A CSP $\mathscr{B} \colon X \to^? k$ is \emphd{continuous} if for every set $B$ of functions $\set{1,\ldots, n} \to k$ and for all clopen subsets $U_2$, \ldots, $U_n \subseteq X$, the following set is clopen:
\[
\set{x_1 \in X \,:\, \text{there are $x_2 \in U_2$, \ldots, $x_n \in U_n$ such that $x_1$, \ldots, $x_n$ are distinct and $B(x_1, \ldots, x_n) \in \mathscr{B}$}}.
\]
Here $B(x_1, \ldots, x_n) \coloneqq \set{\phi \circ \iota \,:\, \phi \in B}$, where $\iota \colon \set{x_1, \ldots, x_n} \to \set{1, \ldots, n}$ is given by $x_i \mapsto i$.
\end{defn}
Conley, Jackson, Marks, Seward, and Tucker-Drob \cite[Theorem~1.6]{CJMST-D} constructed examples showing that the standard LLL condition \eqref{eq:LLL} is not sufficient to guarantee the existence of a {Borel}---let alone continuous---solution. In contrast to this, we prove that a certain strengthening of \eqref{eq:LLL} does yield continuous solutions. In addition to $\mathsf{p}(\mathscr{B})$ and $\mathsf{d}(\mathscr{B})$, we consider
two more parameters associated to a CSP $\mathscr{B} \colon X \to^? k$. Namely, we define the \emphd{maximum vertex-degree} $\mathsf{vdeg}(\mathscr{B})$ of $\mathscr{B}$ as
\[
\mathsf{vdeg}(\mathscr{B}) \,\coloneqq\, \sup_{x \in X} |\set{B\in \mathscr{B} \,:\, x \in \mathrm{dom}(B)}|,
\]
and let the \emphd{order} $\mathsf{ord}(\mathscr{B})$ of $\mathscr{B}$ be $\mathsf{ord}(\mathscr{B}) \coloneqq \sup_{B \in \mathscr{B}} |\mathrm{dom}(B)|$. Note that $\mathsf{d}(\mathscr{B}) \leq (\mathsf{vdeg}(\mathscr{B}) - 1) \mathsf{ord}(\mathscr{B})$.
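A short sketch computing these parameters for a small abstract CSP, recorded only through its constraint domains, and verifying the stated inequality $\mathsf{d}(\mathscr{B}) \leq (\mathsf{vdeg}(\mathscr{B}) - 1)\, \mathsf{ord}(\mathscr{B})$; the domains are arbitrary illustrative data.

```python
# Hedged sketch of the parameters vdeg, ord, and d on a small abstract CSP
# described by its constraint domains (illustrative data, not from the paper).
domains = [(0, 1, 2), (1, 2, 3), (3, 4, 0), (2, 4)]

points = {x for dom in domains for x in dom}
vdeg = max(sum(x in dom for dom in domains) for x in points)  # max vertex-degree
order = max(len(dom) for dom in domains)                      # ord: max |dom(B)|
d = max(sum(j != i and bool(set(domains[j]) & set(domains[i]))
            for j in range(len(domains)))
        for i in range(len(domains)))                         # max |N(B)|

print(f"vdeg = {vdeg}, ord = {order}, d = {d}")
assert d <= (vdeg - 1) * order                # inequality noted in the text
```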
\begin{theo}\label{theo:cont_LLL}
Let $\mathscr{B} \colon X \to^? k$ be a continuous CSP on a zero-dimensional Polish space $X$. If
\begin{equation}\label{eq:bound}
\mathsf{p}(\mathscr{B}) \cdot \mathsf{vdeg}(\mathscr{B})^{\mathsf{ord}(\mathscr{B})} \,<\, 1,
\end{equation}
then $\mathscr{B}$ has a continuous solution $f \colon X \to k$.
\end{theo}
We prove Theorem~\ref{theo:cont_LLL} in \S\ref{sec:proof_LLL} using the \emph{method of conditional probabilities}---a standard derandomization technique in computer science. This connection to computer science is not coincidental: results and methods in descriptive combinatorics often mirror those in \emphd{distributed computing}, i.e., the area concerned with problems that can be solved efficiently by a decentralized network of processors. For example, an argument similar to our proof of Theorem~\ref{theo:cont_LLL} was involved in Fischer and Ghaffari's breakthrough work on distributed algorithms for the LLL \cite[Theorem 3.5]{FG}.
Another relevant result in distributed computing is due to Brandt, Grunau, and Rozho\v{n} \cite{expLLL}, who recently developed an efficient deterministic distributed algorithm for finding solutions to CSPs under the condition $\mathsf{p} 2^{\mathsf{d}} < 1$ \ep{in the special case when $\mathsf{vdeg} \leq 3$, such an algorithm was devised earlier by Brandt, Maus, and Uitto \cite{BMU}}.
In \cite{BerDist}, the author established a series of general results that allow using efficient distributed algorithms to obtain colorings with desirable regularity properties \ep{such as continuity, measurability, etc.}. In particular, \cite[Theorem~2.13]{BerDist} implies that under suitable assumptions the condition $\mathsf{p} 2^{\mathsf{d}} < 1$ is also sufficient to produce continuous solutions. While in general neither of the bounds $\mathsf{p} 2^{\mathsf{d}} < 1$ and $\mathsf{p} \cdot \mathsf{vdeg}^\mathsf{ord} < 1$ \ep{that is, \eqref{eq:bound}} implies the other, in practice one often estimates $\mathsf{d}$ using the inequality $\mathsf{d} \leq (\mathsf{vdeg}-1) \mathsf{ord}$, which makes the latter bound more widely applicable, especially when $\mathsf{ord}$ is much smaller than $\mathsf{vdeg}$.
Remarkably, the bound $\mathsf{p} 2^{\mathsf{d}} < 1$ in the Brandt--Grunau--Rozho\v{n} result is sharp: there is no such efficient distributed algorithm that finds solutions to CSPs if the bound is relaxed to $\mathsf{p} 2^{\mathsf{d}} \leq 1$. This follows from the analysis of the so-called \emph{sinkless orientation problem} performed in the randomized setting by Brandt et al. \cite{lowerbound} and extended to the deterministic setting by Chang, Kopelowitz, and Pettie \cite{CKP}. This sharpness result has a counterpart in descriptive combinatorics. Namely, suppose $\mathsf{d} \in {\mathbb{N}}$ and let $G$ be a \emphd{$\mathsf{d}$-regular} graph, meaning that every vertex of $G$ is incident to exactly $\mathsf{d}$ edges. An orientation of $G$ is \emphd{sinkless} if the outdegree of every vertex is at least $1$. A sinkless orientation of $G$ can be naturally encoded as a solution to a CSP $\mathscr{B}_{\text{sinkless}} = \set{B_x}_{x \in V(G)} \colon E(G) \to^? 2$. Here the color of each edge $e \in E(G)$ indicates the direction in which $e$ is oriented, and $B_x$ for $x \in V(G)$ is the constraint with domain $\mathrm{dom}(B_x) = \set{e \in E(G) \,:\, \text{$e$ is incident to $x$}}$ that requires the outdegree of $x$ to be at least $1$. It is easy to see that $\mathsf{d}(\mathscr{B}_{\text{sinkless}}) = \mathsf{d}$ and $\mathsf{p}(\mathscr{B}_{\text{sinkless}}) = 1/2^\mathsf{d}$. However, Thornton \cite[Theorem 3.5]{Thor} used the determinacy method of Marks \cite{Marks} to construct, for any given $\mathsf{d} \in {\mathbb{N}}$, a Borel $\mathsf{d}$-regular graph $G$ that does not admit a Borel sinkless orientation. Note that since $\mathsf{vdeg}(\mathscr{B}_{\text{sinkless}}) = 2$ and $\mathsf{ord}(\mathscr{B}_{\text{sinkless}}) = \mathsf{d}$, this also serves as a sharpness example for Theorem~\ref{theo:cont_LLL}.
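The stated parameters of $\mathscr{B}_{\text{sinkless}}$ can be verified by brute force on a small $\mathsf{d}$-regular graph. The sketch below uses $K_4$ \ep{so $\mathsf{d} = 3$} and an edge-direction encoding chosen purely for illustration.

```python
from itertools import combinations, product

# Hedged sketch: the sinkless-orientation CSP on the complete graph K4
# (3-regular). Each edge gets a bit: color 0 orients the edge (u, v) with
# u < v from u to v, color 1 the other way (an illustrative convention).
V = range(4)
E = list(combinations(V, 2))        # 6 edges; every vertex has degree 3
d_reg = 3
assert all(sum(x in e for e in E) == d_reg for x in V)

def outdegree(x, f):
    out = 0
    for i, (u, v) in enumerate(E):
        if (u == x and f[i] == 0) or (v == x and f[i] == 1):
            out += 1
    return out

# Constraint B_x forbids exactly one of the 2^d colorings of its domain
# (all d edges at x pointing inward), so P[B_x] = 1 / 2^d.
sinkless = [f for f in product((0, 1), repeat=len(E))
            if all(outdegree(x, f) >= 1 for x in V)]
print(f"P[B_x] = 1/2^{d_reg} = {1 / 2**d_reg}; "
      f"sinkless orientations of K4: {len(sinkless)}")
```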
We shall resume the discussion of distributed algorithms in \S\ref{subsubsec:LOCAL}, where we describe one of the consequences derived using Theorem~\ref{theo:cont_LLL}, namely that for certain types of coloring problems, a continuous solution exists \emph{if and only if} the problem can be solved by an efficient distributed algorithm.
\subsubsection{Borel colorings}
Sometimes we only wish to find a Borel solution instead of a continuous one. Recall that a \emphd{standard Borel space} is a set $X$ equipped with a $\sigma$-algebra $\mathfrak{B}(X)$ of \emphd{Borel sets} generated by a Polish topology on $X$. We say that a Polish topology on a standard Borel space $X$ is \emphd{compatible} if it generates $\mathfrak{B}(X)$. If $X$ is a standard Borel space, then the set $\fins{X}$ of all finite subsets of $X$ also carries a natural standard Borel structure. Since every $(X,k)$-constraint can be viewed as a finite subset of $\fins{X \times k}$, we may speak of \emphd{Borel CSPs} $\mathscr{B} \colon X \to^? k$, i.e., Borel sets $\mathscr{B} \subseteq \fins{\fins{X \times k}}$ of $(X,k)$-constraints. The following is an immediate corollary of Theorem~\ref{theo:cont_LLL}:
\begin{corl}
Let $\mathscr{B} \colon X \to^? k$ be a Borel CSP on a standard Borel space $X$. If
\[
\mathsf{p}(\mathscr{B}) \cdot \mathsf{vdeg}(\mathscr{B})^{\mathsf{ord}(\mathscr{B})} \,<\, 1,
\]
then $\mathscr{B}$ has a Borel solution $f \colon X \to k$.
\end{corl}
\begin{scproof}
For a set $B$ of functions $\set{1,\ldots, n} \to k$, write $x_1 \sim_B (x_2, \ldots, x_n)$ if $x_1$, \ldots, $x_n$ are distinct and $B(x_1, \ldots, x_n) \in \mathscr{B}$. Since $\mathsf{vdeg}(\mathscr{B}) < \infty$, the Luzin--Novikov theorem \cite[Theorem 18.10]{KechrisDST} yields a finite sequence of partial Borel maps $h_{B,i} \colon X \rightharpoonup X^{n-1}$, $1 \leq i \leq \mathsf{vdeg}(\mathscr{B})$, such that $x_1 \sim_B (x_2, \ldots, x_n)$ if and only if $(x_2, \ldots, x_n) = h_{B, i}(x_1)$ for some $i$. It follows from standard results in descriptive set theory \cite[\S13]{KechrisDST} that there is a compatible zero-dimensional Polish topology $\tau$ on $X$ with respect to which all the maps $h_{B,i}$ are continuous and defined on clopen sets. Then $\mathscr{B}$ is continuous with respect to $\tau$, so, by Theorem~\ref{theo:cont_LLL}, $\mathscr{B}$ has a $\tau$-continuous \ep{hence Borel} solution. Alternatively, it is straightforward to check directly that the proof of Theorem~\ref{theo:cont_LLL} given in \S\ref{sec:proof_LLL} goes through in the Borel setting with the words ``continuous'' and ``clopen'' replaced everywhere by ``Borel.''
\end{scproof}
\subsection{Applications in dynamics}
\subsubsection{A simple proof of the Seward--Tucker-Drob theorem}
Throughout the rest of this paper, $\G$ denotes a countably infinite discrete group with identity element~$\mathbf{1}$. Our first application of Theorem~\ref{theo:cont_LLL} is a simple proof of the following result of Seward and Tucker-Drob:
\begin{theo}[{Seward--Tucker-Drob \cite{STD}}]\label{theo:STD}
If $\G \curvearrowright X$ is a free Borel action of $\G$ on a standard Borel space $X$, then there is a $\G$-equivariant Borel map $\pi \colon X \to Y$, where $Y \subset 2^\G$ is a free subshift.
\end{theo}
Let us recall the terminology used in the statement of Theorem~\ref{theo:STD}. A~\emphd{$\G$-space} is a topological space $X$ equipped with a continuous action $\G \curvearrowright X$.
The product space $k^\G$ of all $k$-colorings $\G \to k$ of $\G$ is a compact zero-dimensional Polish space, and it becomes a $\G$-space under the action $\G\curvearrowright k^\G$ given by
\[(\gamma \cdot x)(\delta) \,\coloneqq\, x(\delta \gamma) \quad \text{for all } x \in k^\G \text{ and } \gamma,\, \delta \in \G.\] The $\G$-spaces of the form $k^\G$ are called \emphd{Bernoulli shifts}, or simply \emphd{shifts}. The \emphd{free part} of a $\G$-space $X$ is the set $\mathrm{Free}(X) \coloneqq \set{x \in X \,:\, \mathrm{St}_\G(x) = \set{\mathbf{1}}}$ equipped with the subspace topology and the induced action of $\G$ \ep{here $\mathrm{St}_\G(x)$ denotes the stabilizer of $x$}. In other words, $\mathrm{Free}(X)$ is the largest $\G$-invariant subspace of $X$ on which $\G$ acts freely. If $X$ is a Polish $\G$-space, then $\mathrm{Free}(X)$ is a $G_\delta$ subset of $X$, and hence it is also Polish \cite[Theorem 3.11]{KechrisDST}. A closed $\G$-invariant subset of $k^\G$ is called a \emphd{subshift}, and we say that a subshift $X \subseteq k^\G$ is \emphd{free} if $X \subseteq \mathrm{Free}(k^\G)$, i.e., if $\G$ acts freely on $X$.
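One can verify on sample points that the displayed formula indeed defines a left action, i.e., that $\gamma_1 \cdot (\gamma_2 \cdot x) = (\gamma_1\gamma_2) \cdot x$. The sketch below takes $\G = (\mathbb{Z}, +)$, so $\delta\gamma$ becomes $\delta + \gamma$.

```python
# Hedged sketch with Gamma = (Z, +): represent x in k^Gamma as a Python
# function and check that (gamma . x)(delta) := x(delta + gamma) is a left
# action, i.e. gamma1 . (gamma2 . x) = (gamma1 + gamma2) . x pointwise.
def shift(gamma, x):
    return lambda delta: x(delta + gamma)

x = lambda n: n % 2                   # a sample 2-coloring of Z
for g1 in range(-3, 4):
    for g2 in range(-3, 4):
        for delta in range(-5, 6):
            assert shift(g1, shift(g2, x))(delta) == shift(g1 + g2, x)(delta)
print("action axiom verified on sample points")
```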
Even the \emph{existence} of a nonempty free subshift for an arbitrary countable group $\G$ is far from obvious. It was established by Gao, Jackson, and Seward \cite{GJS1, GJS2} with a rather long and complicated construction. The proof of Theorem~\ref{theo:STD} due to Seward and Tucker-Drob develops the ideas of Gao, Jackson, and Seward further and is similarly quite technical. However, it turns out that questions about free subshifts are well-suited for LLL-based approaches. This fact was first observed by Aubrun, Barbieri, and Thomass\'e \cite{ABT}, who gave a short and simple LLL-based alternative construction of a nonempty free subshift $X \subset 2^\G$. Roughly speaking, their method was to define $X$ as the set of all colorings $\G \to 2$ satisfying certain constraints, and then to show that $X \neq \varnothing$ using the LLL. This technique is quite flexible and allows constructing free subshifts with various additional properties. For example, in \cite{LSS} the LLL is applied to construct free subshifts that are not just nonempty but ``large'' {in terms of Hausdorff dimension and entropy}.
Unfortunately, the approach of \cite{ABT, LSS} cannot prove Theorem~\ref{theo:STD}, since it invokes a version of the LLL that does not generally yield Borel solutions.
\ep{Actually, \cite{ABT, LSS} rely on the so-called \emph{General LLL} \cite[Lemma 5.1.1]{AS}, which is a strengthening of Theorem~\ref{theo:LLL} that, in general, does not even yield \emph{measurable} solutions \cite[Theorem 7.1]{MLLL}.} In \S\ref{sec:main_proof}, we show that Theorem~\ref{theo:STD} can nevertheless be established with a simple probabilistic argument---namely with the help of Theorem~\ref{theo:cont_LLL}.
\subsubsection{Topological Ab\'ert--Weiss theorem}
Inspired by the analogous notions for measure-preserving actions \ep{which were in turn modeled after similar concepts in representation theory}, Elek \cite{Elek} introduced the relations of \emph{weak containment} and \emph{weak equivalence} on the class of zero-dimensional Polish $\G$-spaces. \ep{Technically, Elek only considered \emph{compact} zero-dimensional $\G$-spaces, but the same definitions can be applied verbatim to non-compact spaces as well.} Let $k \geq 1$ be an integer. A~\emphd{$k$-pattern} is a partial map $p \colon \G \rightharpoonup k$ whose domain is a finite subset of $\G$. Given an action $\G \curvearrowright X$ and a $k$-coloring $f \colon X \to k$, we say that a $k$-pattern $p$ \emphd{occurs} in $f$ if there is a point $x \in X$ such that $f(\gamma \cdot x) = p(\gamma)$ for all $\gamma \in \mathrm{dom}(p)$.
For a finite subset $F \subset \G$, we let $\mathscr{P}_F(X, f)$ denote the set of all $k$-patterns $p \colon F \to k$ with domain $F$ that occur in $f$.
\begin{defn}\label{defn:weak}
Let $X$ and $Y$ be zero-dimensional Polish $\G$-spaces. We say that $X$ is \emphd{weakly contained} in $Y$, in symbols $X \preccurlyeq Y$, if given any $k \in {\mathbb{N}}^+$, a finite subset $F \subset \G$, and a continuous $k$-coloring $f \colon X \to k$, there is a continuous $k$-coloring $g \colon Y \to k$ such that $\mathscr{P}_F(Y, g) = \mathscr{P}_F(X, f)$. If $X \preccurlyeq Y$ and $Y \preccurlyeq X$, then we say that $X$ and $Y$ are \emphd{weakly equivalent} and write $X \simeq Y$.
\end{defn}
As mentioned earlier, Definition~\ref{defn:weak} was introduced \ep{for compact $\G$-spaces} by Elek in \cite{Elek}. For minimal actions of the group $\mathbb{Z}$, weak equivalence \ep{under the name of \emph{weak approximate conjugacy}} was considered previously by Lin and Matui in \cite{LM}.
Among several other results, Elek proved that the pre-order of weak containment has a minimum element in the class of all nonempty free zero-dimensional Polish $\G$-spaces \cite[Theorem 2]{Elek}. In other words, Elek showed that there exists a free \ep{compact} zero-dimensional Polish $\G$-space $M$ such that $M \preccurlyeq X$ for every nonempty free zero-dimensional Polish $\G$-space $X$ \ep{it is easy to check that Elek's argument does not need $X$ to be compact}. We show that, except for the compactness requirement, one can actually take $M$ to be the free part of the Bernoulli shift $2^\G$:
\begin{theo}\label{theo:top_AW}
If $X$ is a nonempty free zero-dimensional Polish $\G$-space, then $\mathrm{Free}(2^\G) \preccurlyeq X$.
\end{theo}
Theorem~\ref{theo:top_AW} is a topological counterpart to the ergodic-theoretic result of Ab\'ert and Weiss \cite{AW}, namely that the Bernoulli shift $2^\G$ is weakly contained \ep{in the sense of Kechris \cite{K_book}} in each almost everywhere free probability measure-preserving action of $\G$. The proof of Theorem~\ref{theo:top_AW} is given in \S\ref{subsec:proof_top_AW}. It is an elaboration of our proof of Theorem~\ref{theo:STD}, leveraging the fact that Theorem~\ref{theo:cont_LLL} yields continuous \ep{and not just Borel} solutions.
\subsection{Consequences in continuous combinatorics}
The main motivation for this work comes from the area of \emphd{continuous combinatorics}, which studies the behavior of combinatorial notions---such as graph colorings, matchings, etc.---under additional continuity constraints. For example, suppose that $G$ is a graph whose vertex set $V(G)$ is a zero-dimensional Polish space. A typical problem in continuous combinatorics is to determine the \emphd{continuous chromatic number} $\chi_c(G)$ of $G$, i.e., the least $k$ for which there exists a continuous $k$-coloring $f \colon V(G) \to k$ satisfying $f(x) \neq f(y)$ whenever vertices $x$ and $y$ are adjacent \ep{such colorings are called \emphd{proper}}.
In \cite{Abelian}, Gao, Jackson, Krohne, and Seward initiated the systematic study of continuous combinatorics of countable group actions and performed a detailed analysis in the case $\G = \mathbb{Z}^d$. In particular, they completely characterized combinatorial problems that can be solved continuously on the space $\mathrm{Free}(2^{\G})$ for $\G \in \set{\mathbb{Z}, \mathbb{Z}^2}$ by reducing them to certain questions about finite graphs. Here we continue this line of research and extend it to the case of $\G$-spaces for arbitrary countably infinite groups $\G$.
Some of our results in this section, specifically Theorems~\ref{theo:universal} and \ref{theo:LOCAL},
were obtained independently by Greb\'ik, Jackson, Rozho\v{n}, Seward, and Vidny\'{a}nszky using the techniques from \cite{GJS1, GJS2, STD} \ep{personal communication}.
\subsubsection{Universality of the shift}
We say that a coloring $f \colon X \to k$ is \emphd{$\mathscr{P}$-avoiding}, where $X$ is a $\G$-space and $\mathscr{P}$ is a set of $k$-patterns, if no pattern $p \in \mathscr{P}$ occurs in $f$. As a side remark, we note that continuous $\mathscr{P}$-avoiding colorings of $\G$-spaces have a natural meaning from the standpoint of topological dynamics. Specifically, viewing $\G$ itself as a discrete $\G$-space under the left multiplication action $\G \curvearrowright \G$, we can consider the set $\mathsf{Av}(\mathscr{P}) \subseteq k^\G$ of all $\mathscr{P}$-avoiding $k$-colorings of $\G$, for a given finite set $\mathscr{P}$ of $k$-patterns. The set $\mathsf{Av}(\mathscr{P})$ is closed and $\G$-invariant, and it is called a \emphd{subshift of finite type} \ep{``finite'' because $\mathscr{P}$ is finite}. If $X$ is a $\G$-space, then there is a natural one-to-one correspondence
\[
\set{\text{$\mathscr{P}$-avoiding continuous colorings $X \to k$}} \quad \longleftrightarrow \quad \set{\text{$\G$-equivariant continuous maps $X \to \mathsf{Av}(\mathscr{P})$}},
\]
where each
$\mathscr{P}$-avoiding continuous coloring $f \colon X \to k$ gives rise to the so-called \emphd{coding map} $\pi_f \colon X \to \mathsf{Av}(\mathscr{P})$ given by $\pi_f(x)(\gamma) \coloneqq f(\gamma \cdot x)$ for all $x \in X$ and $\gamma \in \G$.
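The $\G$-equivariance of the coding map, $\pi_f(\gamma \cdot x) = \gamma \cdot \pi_f(x)$, can be checked on sample points; the sketch below takes $\G = (\mathbb{Z}, +)$ and the one-coordinate coloring $f(x) = x(0)$ as an illustrative continuous coloring.

```python
# Hedged sketch with Gamma = (Z, +) acting on X = 2^Z by shifts: f(x) = x(0)
# depends on one coordinate only (hence continuous), and its coding map
# pi_f(x)(gamma) = f(gamma . x) is checked to be equivariant on sample points.
def shift(gamma, x):
    return lambda delta: x(delta + gamma)     # (gamma . x)(delta) = x(delta + gamma)

def f(x):
    return x(0)                               # a continuous {0,1}-coloring of X

def coding(x):
    return lambda gamma: f(shift(gamma, x))   # pi_f(x)(gamma) = f(gamma . x)

x = lambda n: 1 if n % 3 == 0 else 0          # a sample point of 2^Z
for gamma in range(-4, 5):
    for delta in range(-4, 5):
        assert coding(shift(gamma, x))(delta) == shift(gamma, coding(x))(delta)
print("coding map is equivariant on sample points")
```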
In view of this correspondence, studying continuous colorings that avoid finite sets of patterns is equivalent to studying equivariant continuous maps to subshifts of finite type. The following is an immediate consequence of Theorem~\ref{theo:top_AW}:
\begin{theo}\label{theo:universal}
Let $\mathscr{P}$ be a finite set of $k$-patterns. The following statements are equivalent.
\begin{enumerate}[label=\ep{\normalfont\arabic*}]
\item\label{item:shift} There is a continuous $\mathscr{P}$-avoiding $k$-coloring of $\mathrm{Free}(2^\G)$.
\item\label{item:all} Every free zero-dimensional Polish $\G$-space admits a continuous $\mathscr{P}$-avoiding $k$-coloring.
\end{enumerate}
\end{theo}
\begin{scproof}
Implication \ref{item:all} $\Longrightarrow$ \ref{item:shift} is obvious, while \ref{item:shift} $\Longrightarrow$ \ref{item:all} is given by Theorem~\ref{theo:top_AW}.
\end{scproof}
Informally, Theorem~\ref{theo:universal} says that of all the free zero-dimensional Polish $\G$-spaces, it is the hardest to solve combinatorial problems continuously on $\mathrm{Free}(2^\G)$. Here is just one specific instance of this phenomenon. Let $S \subset \G$ be finite.
The \emphd{Schreier graph} of a $\G$-space $X$ corresponding to $S$ is the \ep{simple undirected} graph $G(X, S)$ with vertex set $X$ where two distinct vertices $x$, $y$ are adjacent if and only if $y = \sigma \cdot x$ for some $\sigma \in S \cup S^{-1}$. A consequence of Theorem~\ref{theo:universal} is that the Schreier graph of $\mathrm{Free}(2^\G)$ has the largest continuous chromatic number among all free zero-dimensional Polish $\G$-spaces:
\begin{corl}\label{corl:color}
Let $S$ be a finite subset of $\G$. If $X$ is a free zero-dimensional Polish $\G$-space, then \[\chi_c(G(X, S)) \,\leq\, \chi_c(G(\mathrm{Free}(2^\G), S)).\]
\end{corl}
\begin{scproof}
Set $k \coloneqq \chi_c(G(\mathrm{Free}(2^\G), S))$ and apply Theorem~\ref{theo:universal} with $\mathscr{P} \coloneqq \set{p_{i, \sigma} \,:\, 0 \leq i < k, \sigma \in S \setminus \set{\mathbf{1}}}$, where for each $i$ and $\sigma$, $p_{i, \sigma}$ is the $k$-pattern with domain $\set{\mathbf{1}, \sigma}$ that sends both $\mathbf{1}$ and $\sigma$ to $i$.
\end{scproof}
\subsubsection{Reduction to finite graphs}
In our remaining results, we reduce problems about continuous colorings to questions about colorings of finite graphs. To state them, we require a few definitions. Let $S \subset \G$ be a finite set. An \emphd{$S$-labeled graph} is a simple undirected graph $G$ equipped with a \emphd{labeling map} $\lambda$ that assigns to each \ep{ordered} pair $(x,y)$ of adjacent vertices a group element $\lambda(x,y) \in S \cup S^{-1}$ so that $\lambda(y,x) = \lambda(x,y)^{-1}$. For a subset $U \subseteq V(G)$, we let $G[U]$ denote the \emphd{subgraph} of $G$ \emphd{induced} by $U$, i.e., the $S$-labeled graph with vertex set $U$ whose adjacency relation and labeling map are inherited from $G$.
Schreier graphs of free $\G$-spaces are natural examples of $S$-labeled graphs, with $\lambda(x,y)$ being the unique element $\sigma \in S \cup S^{-1}$ such that $y = \sigma \cdot x$. When $\G$ itself is viewed as a discrete $\G$-space under the left multiplication action $\G \curvearrowright \G$, the $S$-labeled Schreier graph $G(\G, S)$ is called the \emphd{Cayley graph} of $\G$ corresponding to $S$. Note that the graph $G(\G, S)$ is connected if and only if $S$ generates $\G$.
For a subset $F \subseteq \G$, we use $G(F, S) \coloneqq G(\G,S)[F]$ to denote the subgraph of $G(\G, S)$ induced by $F$.
A \emphd{homomorphism} from an $S$-labeled graph $G$ to an $S$-labeled graph $H$ is a map $\phi \colon V(G) \to V(H)$ such that if $x$, $y \in V(G)$ are adjacent in $G$, then $\phi(x)$, $\phi(y)$ are adjacent in $H$ and $\lambda(\phi(x), \phi(y)) = \lambda(x, y)$. Let $F \subset \G$ be a finite set and let $p \colon F \to k$ be a $k$-pattern. We say that $p$ is \emphd{$S$-connected} if the graph $G(F,S)$ is connected. Given an $S$-labeled graph $G$ and a coloring $f \colon V(G) \to k$, we say that an $S$-connected $k$-pattern $p \colon F \to k$ \emphd{occurs} in $f$ if there is a homomorphism $\phi \colon F \to V(G)$ from $G(F, S)$ to $G$ such that $f \circ \phi = p$. When $G$ is the Schreier graph $G(X, S)$ of a free $\G$-space $X$, this notion coincides with our previous definition, since the only homomorphisms from $G(F, S)$ to $G(X, S)$ are the ones of the form $F \to X \colon \gamma \mapsto \gamma \cdot x$ for some $x \in X$ \ep{here we use that $p$ is $S$-connected}. Given a finite set $\mathscr{P}$ of $S$-connected $k$-patterns, we say that a coloring $f \colon V(G) \to k$ of an $S$-labeled graph $G$ is \emphd{$\mathscr{P}$-avoiding} if none of the patterns in $\mathscr{P}$ occur in $f$.
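$S$-connectedness is simply connectivity of the finite graph $G(F, S)$ and can be tested by breadth-first search; the sketch below does this for $\G = \mathbb{Z}^2$ with $S = \set{(1,0), (0,1)}$.

```python
from collections import deque

# Hedged sketch for Gamma = Z^2 with S = {(1,0), (0,1)}: a pattern with
# domain F is S-connected iff G(F, S) is connected, where a, b in F are
# adjacent when b - a lies in S or S^{-1}; connectivity is checked by BFS.
S = [(1, 0), (0, 1)]
STEPS = S + [(-dx, -dy) for dx, dy in S]

def s_connected(F):
    F = set(F)
    start = next(iter(F))
    seen, queue = {start}, deque([start])
    while queue:
        x, y = queue.popleft()
        for dx, dy in STEPS:
            nbr = (x + dx, y + dy)
            if nbr in F and nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen == F

print(s_connected([(0, 0), (1, 0), (1, 1)]))   # L-shaped domain: connected
print(s_connected([(0, 0), (2, 0)]))           # gap of two: not connected
```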
Consider the standard generating set $S \coloneqq \set{(1,0), (0,1)}$ for the group $\mathbb{Z}^2$. In \cite[Theorem~5.5]{Abelian}, Gao, Jackson, Krohne, and Seward constructed an explicit countable family $\mathscr{H}$ of finite $S$-labeled graphs such that the following statements are equivalent for any finite set $\mathscr{P}$ of $S$-connected $k$-patterns:
\begin{itemize}
\item $\mathrm{Free}(2^{\mathbb{Z}^2})$ admits a continuous $\mathscr{P}$-avoiding $k$-coloring;
\item there is a graph in $\mathscr{H}$ that admits a $\mathscr{P}$-avoiding $k$-coloring;
\item all but finitely many graphs in $\mathscr{H}$ admit $\mathscr{P}$-avoiding $k$-colorings.
\end{itemize}
In other words, to determine whether the \ep{infinite} $\mathbb{Z}^2$-space $\mathrm{Free}(2^{\mathbb{Z}^2})$ has a continuous $\mathscr{P}$-avoiding $k$-coloring, one simply needs to check if the \ep{finite} graphs in $\mathscr{H}$ admit $\mathscr{P}$-avoiding $k$-colorings. This can be seen as a ``compactness theorem'' for continuous colorings of $\mathrm{Free}(2^{\mathbb{Z}^2})$. Gao, Jackson, Krohne, and Seward call \cite[Theorem~5.5]{Abelian} the ``Twelve Tiles Theorem,'' since each graph in $\mathscr{H}$ is obtained from twelve pieces---``tiles''---glued to each other according to certain rules.
We obtain
an analogous result for arbitrary countable groups $\G$:
\begin{theo}\label{theo:compact}
In the setting of Theorem~\ref{theo:universal}, assume that $S \subset \G$ is a finite set such that the $k$-patterns in $\mathscr{P}$ are $S$-connected. There is an explicit countable family $\mathscr{H}$ of finite $S$-labeled graphs \ep{see \S\ref{subsec:tiles} for the definition} such that statements \ref{item:shift} and \ref{item:all} are also equivalent to:
\begin{enumerate}[label=\ep{\normalfont\arabic*}]\setcounter{enumi}{2}
\item\label{item:compact1} There is a graph in $\mathscr{H}$ that admits a $\mathscr{P}$-avoiding $k$-coloring.
\item\label{item:compact_many} All but finitely many graphs in $\mathscr{H}$ admit $\mathscr{P}$-avoiding $k$-colorings.
\end{enumerate}
\end{theo}
We construct the family $\mathscr{H}$ and prove Theorem~\ref{theo:compact} in \S\ref{subsec:tiles}.
\subsubsection{$\mathsf{LOCAL}$\xspace algorithms}\label{subsubsec:LOCAL}
Our final result establishes a precise connection between continuous combinatorics and distributed computing:
\begin{theo}\label{theo:LOCAL}
In the setting of Theorem~\ref{theo:universal}, assume that $S \subset \G$ is a finite set such that the $k$-patterns in $\mathscr{P}$ are $S$-connected. Then statements \ref{item:shift}--\ref{item:compact_many} are also equivalent to:
\begin{enumerate}[label=\ep{\normalfont\arabic*}]\setcounter{enumi}{4}
\item\label{item:LOCAL} There is a deterministic distributed algorithm in the $\mathsf{LOCAL}$\xspace model that, given an $n$-vertex $S$-labeled subgraph $G$ of $G(\G, S)$, in $O(\log^\ast n)$ rounds outputs a $\mathscr{P}$-avoiding $k$-coloring of $G$.
\end{enumerate}
\end{theo}
Here $\log^\ast n$ denotes the \emphd{iterated logarithm} of $n$, i.e., the number of times the logarithm function must be iteratively applied to $n$ before the result becomes at most $1$.
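A direct implementation of $\log^\ast$; base $2$ is assumed here, as is common in the distributed-computing literature.

```python
import math

# Hedged sketch of the iterated logarithm (base 2 assumed): apply log
# repeatedly until the value drops to at most 1, counting the iterations.
def log_star(n):
    count = 0
    x = float(n)
    while x > 1:
        x = math.log2(x)
        count += 1
    return count

for n in (1, 2, 16, 65536):
    print(n, "->", log_star(n))
```

Since $\log^\ast$ grows extremely slowly \ep{$\log^\ast n \leq 5$ for all $n \leq 2^{65536}$}, an $O(\log^\ast n)$-round algorithm is nearly constant-round in practice.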
Statement \ref{item:LOCAL} in Theorem~\ref{theo:LOCAL} refers to the $\mathsf{LOCAL}$\xspace model of distributed computation, which was introduced by Linial in \cite{Linial}.
For a comprehensive introduction to this model, see the book \cite{BE} by Barenboim and Elkin. The $\mathsf{LOCAL}$\xspace model operates on an $n$-vertex graph $G$. Here we think of $G$ as representing a decentralized communication network where each vertex plays the role of a processor and edges represent communication links. The computation proceeds in \emph{rounds}. During each round, the vertices first perform some local computations and then synchronously broadcast messages to all their neighbors. After a number of rounds, every vertex must output a color, and the resulting coloring of $V(G)$ is considered to be the output of the algorithm. The efficiency of such an algorithm is measured by the number of communication rounds required.
An important feature of the $\mathsf{LOCAL}$\xspace model is that every vertex of $G$ is executing the same algorithm. Therefore, to make this model nontrivial, the vertices must be given a way of breaking symmetry. In the \emph{deterministic} variant of the $\mathsf{LOCAL}$\xspace model, this is achieved by assigning a unique identifier $\mathrm{Id}(x) \in \set{1,\ldots, n}$ to every vertex $x \in V(G)$.
The identifier assigned to a vertex $x$ is treated as part of $x$'s input; that is, $x$ ``knows'' what its own identifier is initially and can communicate this information to its neighbors. When we say that a deterministic $\mathsf{LOCAL}$\xspace algorithm \emph{solves} a coloring problem $\mathscr{P}$ on a given class $\mathscr{G}$ of finite graphs, we mean that the coloring it outputs on any graph from $\mathscr{G}$ is a valid solution to $\mathscr{P}$, regardless of the way the identifiers are assigned. The word ``deterministic'' distinguishes this model from the \emph{randomized} version, where the vertices are allowed to generate sequences of random bits. In this paper we will only be concerned with deterministic algorithms.
If $x$ and $y$ are two vertices whose graph distance in $G$ is $T$, then no information from $y$ can reach $x$ in fewer than $T$ communication rounds \ep{this explains the name ``$\mathsf{LOCAL}$\xspace''}. Conversely, in $T$ rounds every vertex can collect all the data present at the vertices at distance at most $T$ from it. Thus, a $T$-round $\mathsf{LOCAL}$\xspace algorithm may be construed simply as a function that, given the structure of the radius-$T$ ball around $x$ \ep{including the assignment of the identifiers to its vertices}, outputs $x$'s color \cite[\S{}4.1.2]{BE}.
In general, the input graph $G$ may possess some additional structure \ep{such as an orientation, a fixed coloring of the vertices, etc.}. For example, in Theorem~\ref{theo:LOCAL} we consider $\mathsf{LOCAL}$\xspace algorithms operating on \emph{$S$-labeled} graphs $G$. This means that the labels on the edges of $G$ form part of the problem's input, and each vertex can discover the labels of the edges in its radius-$T$ ball in $T$ communication rounds.
The formal equivalence between general classes of problems in continuous combinatorics and in distributed computing given by Theorem~\ref{theo:LOCAL} explains the parallels between specific results in these two areas. For example, suppose $\G = \mathbb{Z}^2$ and let $S \coloneqq \set{(1,0), (0,1)}$. Among numerous other results, Gao, Jackson, Krohne, and Seward proved in \cite{Abelian} that the continuous chromatic number of $G(\mathrm{Free}(2^{\mathbb{Z}^2}), S)$ is $4$,
and also that there is no algorithm for deciding, given a finite set $\mathscr{P}$ of $k$-patterns, whether $\mathrm{Free}(2^{\mathbb{Z}^2})$ admits a continuous $\mathscr{P}$-avoiding $k$-coloring.
In \cite{grids}, Brandt et al.~established analogous results for distributed algorithms on $n\times n$ grid graphs: proper $k$-colorings of such graphs can be computed by an $O(\log^\ast n)$-round $\mathsf{LOCAL}$\xspace algorithm for $k \geq 4$ but not for $k = 3$,
and there is no decision procedure that determines, for a given finite set $\mathscr{P}$ of $k$-patterns, whether $\mathscr{P}$-avoiding $k$-colorings of such graphs can be found by an $O(\log^\ast n)$-round $\mathsf{LOCAL}$\xspace algorithm.
Theorem~\ref{theo:LOCAL} provides a general reason underlying this analogy.
The connection between continuous combinatorics and distributed algorithms was observed recently by Elek \cite{Elek} and the author \cite{BerDist}.
In particular, implication \ref{item:LOCAL} $\Longrightarrow$ \ref{item:all} is a special case of \cite[Theorem~2.13]{BerDist}, which is a general result that provides a way to use efficient deterministic $\mathsf{LOCAL}$\xspace algorithms to obtain continuous colorings.
Thus, we only have to prove \ref{item:all} $\Longrightarrow$ \ref{item:LOCAL} here, which is done in \S\ref{subsec:LOCAL} by utilizing the specific construction of the family of finite graphs $\mathscr{H}$ from Theorem~\ref{theo:compact}.
\subsubsection*{Acknowledgments}
I am indebted to Jan Greb\'ik, Stephen Jackson, V\'aclav Rozho\v{n}, Brandon Seward, and Zolt\'an Vidny\'{a}nszky for many insightful discussions and for helpful comments on the manuscript.
\section{Preliminaries}\label{sec:prelim}
We shall require a few basic facts about continuous graph combinatorics. These facts may be somewhat less well-known than their Borel counterparts, so we prove them here for completeness. \ep{The proofs are standard and essentially present in \cite[\S4]{KechrisSoleckiTodorcevic}.}
Let $G$ be a graph. For a subset $S \subseteq V(G)$, ${N}_G(S)$ denotes the \emphd{neighborhood} of $S$ in $G$, i.e., the set of all vertices that have a neighbor in $S$. For a vertex $x \in V(G)$, we write ${N}_G(x) \coloneqq {N}_G(\set{x})$. A graph $G$ is \emphd{locally finite} if ${N}_G(x)$ is finite for every $x \in V(G)$. A set $I \subseteq V(G)$ is \emphd{independent} if $I \cap {N}_G(I) = \varnothing$, i.e., if no two vertices in $I$ are adjacent. For a subset $U \subseteq V(G)$, we use $G[U]$ to denote the \emphd{subgraph} of $G$ \emphd{induced} by $U$, i.e., the graph with vertex set $U$ whose adjacency relation is inherited from $G$, and we write $G - U \coloneqq G [V(G) \setminus U]$. We say that $G$ is a \emphd{continuous graph} if $V(G)$ is a zero-dimensional Polish space and for every clopen set $U \subseteq V(G)$ its neighborhood ${N}_G(U)$ is also clopen. \ep{This is analogous to Definition~\ref{defn:cont}.} Note that if $G$ is a continuous graph and $U \subseteq V(G)$ is a clopen set of vertices, then the subgraph $G[U]$ of $G$ induced by $U$ is also continuous.
\begin{lemma}\label{lemma:countable}
Every locally finite continuous graph $G$ admits a partition $V(G) = \bigsqcup_{n = 0}^\infty I_n$ into countably many clopen independent sets.
\end{lemma}
\begin{scproof}
Let $\set{U_n \,:\, n \in {\mathbb{N}}}$ be a countable base for the topology on $V(G)$ consisting of clopen sets. For each $n \in {\mathbb{N}}$, let $V_n \coloneqq U_n \setminus {N}_G(U_n)$. By construction, each $V_n$ is independent and, since $G$ is continuous, clopen. Since $G$ is locally finite, each $x \in V(G)$ has a basic clopen neighborhood $U_n$ disjoint from the finite set ${N}_G(x)$; then $x \notin {N}_G(U_n)$, since a neighbor of $x$ in $U_n$ would belong to $U_n \cap {N}_G(x) = \varnothing$, and hence $x \in V_n$. Therefore, $\bigcup_{n = 0}^\infty V_n = V(G)$. It remains to make the sets disjoint by setting $I_n \coloneqq V_n \setminus (V_0 \cup \ldots \cup V_{n-1})$.
\end{scproof}
\begin{lemma}\label{lemma:max}
Every locally finite continuous graph $G$ has a clopen maximal independent set $I \subseteq V(G)$.
\end{lemma}
\begin{scproof}
Let $V(G) = \bigsqcup_{n = 0}^\infty I_n$ be a partition into countably many clopen independent sets given by Lemma~\ref{lemma:countable}. Define a sequence of clopen subsets $I_n' \subseteq I_n$ recursively by setting $I_0' \coloneqq I_0$ and $I_{n+1}' \coloneqq I_{n+1} \setminus {N}_G(I_0' \sqcup \ldots \sqcup I_n')$ for all $n \in {\mathbb{N}}$. By construction, the set $I \coloneqq \bigsqcup_{n=0}^\infty I_n'$ is a maximal independent set in $G$. Since $G$ is continuous, the sets $I_n'$ are clopen, and hence $I$ is open. But the sets $I_n \setminus I_n'$ are also clopen, so $V(G) \setminus I = \bigsqcup_{n=0}^\infty (I_n \setminus I_n')$ is open as well, and hence $I$ is clopen, as desired.
\end{scproof}
The \emphd{maximum degree} $\Delta(G)$ of a graph $G$ is defined by $\Delta(G) \coloneqq \sup_{x \in V(G)} |{N}_G(x)|$.
\begin{lemma}\label{lemma:coloring}
If $G$ is a continuous graph of finite maximum degree $\Delta$, then $\chi_c(G) \leq \Delta + 1$.
\end{lemma}
\begin{scproof}
We need to find a partition of $V(G)$ into $\Delta + 1$ clopen independent sets. To this end, we iteratively apply Lemma~\ref{lemma:max} to obtain a sequence $I_0$, \ldots, $I_\Delta$ where each $I_n$ is a clopen maximal independent set in the graph $G - I_0 - \cdots - I_{n-1}$. We claim that $V(G) = \bigsqcup_{n=0}^\Delta I_n$. Indeed, every vertex not in $\bigsqcup_{n=0}^\Delta I_n$ must have a neighbor in each of $I_0$, \ldots, $I_\Delta$, which is impossible as the maximum degree of $G$ is $\Delta$.
\end{scproof}
\section{Proof of Theorem~\ref{theo:cont_LLL}}\label{sec:proof_LLL}
\subsection{First observations}
Call a CSP $\mathscr{B}$ \emphd{bounded} if $\mathsf{vdeg}(\mathscr{B})$ and $\mathsf{ord}(\mathscr{B})$ are both finite. Given a CSP $\mathscr{B} \colon X \to^? k$, define a graph $G_\mathscr{B}$ with vertex set $X$ by making two distinct vertices $x$, $y$ adjacent if and only if there is a constraint $B \in \mathscr{B}$ such that $\set{x,y} \subseteq \mathrm{dom}(B)$.
\begin{lemma}\label{lemma:cont_graph}
If $\mathscr{B} \colon X \to^? k$ is a bounded continuous CSP on a zero-dimensional Polish space $X$, then the graph $G_\mathscr{B}$ is continuous.
\end{lemma}
\begin{scproof}
Set $G \coloneqq G_\mathscr{B}$ and let $U \subseteq X$ be a clopen set. A vertex $x_1$ is in ${N}_G(U)$ if and only if there are some $2 \leq i \leq n \leq \mathsf{ord}(\mathscr{B})$ and $B \subseteq k^{\set{1,\ldots, n}}$ such that:
\[
\text{there are $x_2 \in X$, \ldots, $x_i \in U$, \ldots, $x_n \in X$ such that $x_1$, \ldots, $x_n$ are distinct and $B(x_1, \ldots, x_n) \in \mathscr{B}$}.
\]
This shows that ${N}_G(U)$ is a union of finitely many clopen sets, hence it is itself clopen.
\end{scproof}
Let $X$ be a set and let $g \colon X' \to k$ be a coloring with domain $X' \subseteq X$. Given an $(X,k)$-constraint $B$ with domain $D$, let $B/g$ be the constraint with domain $\mathrm{dom}(B/g) \coloneqq D \setminus X'$ given by
\[
B/g \,\coloneqq\, \set{\phi \colon D \setminus X' \to k \,:\, \rest{g}{D \cap X'} \sqcup \phi \,\in\, B}.
\]
In other words, $\phi \in B/g$ if and only if the coloring $g \sqcup \phi$ violates $B$. Here it is possible that $D \subseteq X'$, in which case $\mathrm{dom}(B/g) = \varnothing$; more specifically, $B/g = \set{\varnothing}$ if $g$ violates $B$, and $B/g = \varnothing$ otherwise. For a CSP $\mathscr{B} \colon X \to^? k$, we define $\mathscr{B}/g \coloneqq \set{B/g \,:\, B \in \mathscr{B}}$ and view $\mathscr{B}/g$ as a CSP on $X \setminus X'$. By construction, $h \colon X \setminus X' \to k$ is a solution to $\mathscr{B}/g$ if and only if $g \sqcup h$ is a solution to $\mathscr{B}$.
\begin{lemma}\label{lemma:part_cont}
Let $\mathscr{B} \colon X \to^? k$ be a bounded continuous CSP on a zero-dimensional Polish space $X$. If $X' \subseteq X$ is a clopen set and $g \colon X' \to k$ is continuous, then the CSP $\mathscr{B}/g \colon X \setminus X' \to^? k$ is also continuous.
\end{lemma}
The proof is similar to the proof of Lemma~\ref{lemma:cont_graph}, so we omit it.
\subsection{Good CSPs and conditional probabilities}
Call a CSP $\mathscr{B}$ \emphd{good} if it is bounded and for all $B \in \mathscr{B}$,
\begin{equation}\label{eq:good}
\P[B] \cdot \mathsf{vdeg}(\mathscr{B})^{|\mathrm{dom}(B)|} \,<\, 1.
\end{equation}
If $\mathsf{vdeg}(\mathscr{B}) = |\mathrm{dom}(B)| = 0$, we interpret the expression $0^0$ appearing in \eqref{eq:good} as $1$. Note that every CSP satisfying \eqref{eq:bound} is good. The following lemma is the main step in the proof of Theorem~\ref{theo:cont_LLL}:
\begin{lemma}\label{lemma:step}
Let $\mathscr{B} \colon X \to^? k$ be a good continuous CSP on a zero-dimensional Polish space $X$, and let $I \subseteq X$ be a clopen independent set in $G_\mathscr{B}$. Then there is a continuous coloring $g \colon I \to k$ such that $\mathscr{B}/g$ is good.
\end{lemma}
\begin{scproof}
For brevity, let $G \coloneqq G_\mathscr{B}$ and $\mathsf{vdeg} \coloneqq \mathsf{vdeg}(\mathscr{B})$. Note that $\mathsf{vdeg}(\mathscr{B}/g) \leq \mathsf{vdeg}$ for every $g \colon I \to k$, so it is enough to argue that there is a continuous coloring $g \colon I \to k$ such that
\begin{equation}\label{eq:new_bound}
\P[B/g] \cdot \mathsf{vdeg}^{|\mathrm{dom}(B/g)|} \,<\, 1 \quad \text{for all }B \in \mathscr{B}.
\end{equation}
For each $x \in I$, let $\mathscr{B}_x \subseteq \mathscr{B}$ denote the set of all constraints $B$ with $x \in \mathrm{dom}(B)$. Note that $|\mathscr{B}_x| \leq \mathsf{vdeg}$. Since $I$ is independent in $G$, $x$ is the unique element of $I \cap \mathrm{dom}(B)$ for each $B \in \mathscr{B}_x$; in particular, the value $\P[B/g]$ only depends on the color $g(x)$. Specifically, for each $B \in \mathscr{B}_x$ and a color $\alpha$, we define
\[
\P[B\,\vert\, x \mapsto \alpha] \,\coloneqq\, \frac{|\set{\phi \in B \,:\, \phi(x) = \alpha}|}{k^{|\mathrm{dom}(B)| - 1}}.
\]
Then for any coloring $g \colon I \to k$, $\P[B/g] = \P[B\,\vert\, x \mapsto g(x)]$. We say that a color $\alpha$ is \emphd{good} for $x$ if
\[
\P[B\,\vert\, x \mapsto \alpha] \,\leq\, \P[B] \cdot \mathsf{vdeg} \quad \text{for all } B \in \mathscr{B}_x.
\]
\begin{claim*}\label{claim:claim}
For each $x \in I$, there is a good color.
\end{claim*}
\begin{stepproof}
Take any $x \in I$ and notice that for each $B \in \mathscr{B}_x$, $\P[B] = (1/k)\sum_{\alpha = 0}^{k-1} \P[B\,\vert\, x \mapsto \alpha]$. This implies that there are fewer than $k/\mathsf{vdeg}$ colors $\alpha$ such that $\P[B\,\vert\, x \mapsto \alpha] > \P[B] \cdot \mathsf{vdeg}$. Since $|\mathscr{B}_x| \leq \mathsf{vdeg}$, there are fewer than $k$ colors that are not good for $x$, as desired.
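In more detail: if $\P[B] = 0$, then $\P[B\,\vert\, x \mapsto \alpha] = 0$ for every $\alpha$ and all colors satisfy the bound for $B$; otherwise, summing over the bad colors gives
\[
\left|\set{\alpha \,:\, \P[B\,\vert\, x \mapsto \alpha] \,>\, \P[B] \cdot \mathsf{vdeg}}\right| \cdot \P[B] \cdot \mathsf{vdeg} \,<\, \sum_{\alpha = 0}^{k-1} \P[B\,\vert\, x \mapsto \alpha] \,=\, k \cdot \P[B],
\]
and dividing by $\P[B] \cdot \mathsf{vdeg} > 0$ yields the bound $k/\mathsf{vdeg}$.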
\end{stepproof}
Now we define $g \colon I \to k$ by making $g(x)$ be the minimum color that is good for $x$. Since $\mathscr{B}$ is continuous, it is straightforward to check
that $g$ is continuous. It remains to verify that \eqref{eq:new_bound} holds. To this end, take any $B \in \mathscr{B}$. If $I \cap \mathrm{dom}(B) = \varnothing$, then $B/g = B$ and \eqref{eq:new_bound} is satisfied automatically \ep{since $\mathscr{B}$ is good}. Otherwise, $B \in \mathscr{B}_x$ for some \ep{unique} $x \in I$, and we can write
\begin{align*}
\P[B/g] \cdot \mathsf{vdeg}^{|\mathrm{dom}(B/g)|} \,&=\, \P[B\,\vert\, x \mapsto g(x)] \cdot \mathsf{vdeg}^{|\mathrm{dom}(B)| - 1} \\
\big[\text{since $g(x)$ is good for $x$}\big]\quad\quad&\leq\, \P[B] \cdot \mathsf{vdeg} \cdot \mathsf{vdeg}^{|\mathrm{dom}(B)| - 1} \\
&=\, \P[B] \cdot \mathsf{vdeg}^{|\mathrm{dom}(B)|} \\
\big[\text{since $\mathscr{B}$ is good}\big]\quad\quad&<\, 1. \qedhere
\end{align*}
\end{scproof}
We are now ready to prove the following strengthening of Theorem~\ref{theo:cont_LLL}:
\begin{theo}
If $\mathscr{B} \colon X \to^? k$ is a good continuous CSP on a zero-dimensional Polish space $X$, then $\mathscr{B}$ has a continuous solution $f \colon X \to k$.
\end{theo}
\begin{scproof}
The graph $G \coloneqq G_\mathscr{B}$ has $\Delta(G) \leq \mathsf{vdeg}(\mathscr{B})(\mathsf{ord}(\mathscr{B}) - 1) < \infty$, so, by Lemmas~\ref{lemma:cont_graph} and \ref{lemma:coloring}, there is a partition $X = I_1 \sqcup \ldots \sqcup I_n$ of $X$ into finitely many clopen sets that are independent in $G$. Thanks to Lemma~\ref{lemma:part_cont}, we may iteratively apply Lemma~\ref{lemma:step} to produce a sequence of continuous colorings $g_i \colon I_i \to k$ such that for all $i \leq n$, the CSP $\mathscr{B}/(g_1 \sqcup \ldots \sqcup g_i)$ is good. We claim that $f \coloneqq g_1 \sqcup \ldots \sqcup g_n$ is a solution to $\mathscr{B}$, as desired. Indeed, suppose $f$ violates a constraint $B \in \mathscr{B}$. Then we have $B/f = \set{\varnothing}$, but this means that $\P[B/f] = 1$, contradicting the fact that the CSP $\mathscr{B}/f$ is good.
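The degree bound here can be spelled out: each vertex $x$ lies in the domains of at most $\mathsf{vdeg}(\mathscr{B})$ constraints, and each such constraint makes $x$ adjacent to at most $\mathsf{ord}(\mathscr{B}) - 1$ other vertices, whence
\[
\Delta(G) \,\leq\, \mathsf{vdeg}(\mathscr{B}) \left(\mathsf{ord}(\mathscr{B}) - 1\right).
\]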
\end{scproof}
\section{Proofs of Theorems~\ref{theo:STD} and \ref{theo:top_AW}}\label{sec:main_proof}
\subsection{The main lemma}
Recall that $\G$ is a countably infinite group with identity element $\mathbf{1}$. Given an action $\G \curvearrowright X$ and a set $S \subset \G$, a subset $A \subseteq X$ is \emphd{$S$-syndetic} if $S^{-1} \cdot A = X$ and \emphd{$S$-separated} if for all distinct $x$, $y \in A$, $y \not \in S \cdot x$. Note that a set $A \subseteq X$ is $S$-separated if and only if it is independent in the Schreier graph $G(X, S)$. If $X$ is a free zero-dimensional Polish $\G$-space, then the neighborhood of a clopen set $U \subseteq X$ in $G(X,S)$ is $((S \cup S^{-1}) \setminus \set{\mathbf{1}}) \cdot U$, which is also clopen. Hence, in this situation the graph $G(X,S)$ is continuous, so we may apply the results of \S\ref{sec:prelim} to it.
Let $\G\curvearrowright X$ be an action and let $f \colon X \rightharpoonup k$ be a partial coloring. Given a subset $S \subseteq \G$, we say that two points $x$, $y \in X$ are \emphd{$S$-similar} in $f$, in symbols $x \equiv^S_f y$, if
\[
\forall \sigma \in S, \quad \set{\sigma \cdot x,\, \sigma \cdot y} \, \subseteq \, \mathrm{dom}(f) \quad \Longrightarrow \quad f(\sigma \cdot x) \,=\, f(\sigma \cdot y).
\]
\begin{lemma}\label{lemma:syndetic}
For every finite set $F \subset \G$, there is a finite set $S \subset \G$ with the following property:
\noindent Let $X$ be a free zero-dimensional Polish $\G$-space and let $X = C_0 \sqcup C \sqcup U$ be a partition of $X$ into clopen sets such that $C$ is $F$-syndetic and $U$ is $S$-separated. Then, given an element $\mathbf{1} \neq \gamma \in \G$,
every continuous $2$-coloring $f_0 \colon C_0 \to 2$ can be extended to a continuous $2$-coloring $f \colon C_0 \sqcup C \to 2$ such that
\begin{equation}\label{eq:want1}
\forall x \in X, \quad x \,\not\equiv^S_f \, \gamma \cdot x.
\end{equation}
\end{lemma}
In the notation of Lemma~\ref{lemma:syndetic}, the set $C_0$ is already \emph{colored}, the set $C$ is the one we need to \emph{color}, and the set $U$ will be left \emph{uncolored}. Lemma~\ref{lemma:syndetic} is analogous to \cite[Lemma~3.9]{STD} and is used in much the same inductive fashion in our proof of Theorem~\ref{theo:STD}. The main novelty of our approach is in the proof of Lemma~\ref{lemma:syndetic}, which uses Theorem~\ref{theo:cont_LLL}.
\begin{scproof}
Let $F \subset \G$ be a finite set.
We may assume that $F$ is symmetric \ep{i.e., $F^{-1} = F$} and $\mathbf{1} \in F$. Let $M$ be any finite symmetric subset of $\G$ with $\mathbf{1} \in M$ and $|M| = m|F|$, where $m > 0$ is so large that
\begin{equation}\label{eq:m}
2^{m} \,>\, (2m|F|)^{500}.
\end{equation}
Let $N \coloneqq FM \cup MF$. We claim that the conclusion of Lemma~\ref{lemma:syndetic}
holds for $S \coloneqq N^5F$.
Let $X$ be a free zero-dimensional Polish $\G$-space and let $X = C_0 \sqcup C \sqcup U$ be a partition of $X$ into clopen sets such that $C$ is $F$-syndetic and $U$ is $S$-separated. Fix a group element $\gamma \neq \mathbf{1}$ and let $\Delta \coloneqq N^4 F \gamma F N^4 \setminus \set{\mathbf{1}}$.
By Lemma~\ref{lemma:max}, there is a clopen maximal $N^4$-separated subset $Z$ of $C$.
Since $N$ is symmetric and contains $\mathbf{1}$, the maximality of $Z$ means that $C \subseteq N^4 \cdot Z$. Since $C$ is $F$-syndetic, this implies that $Z$ is $N^4F$-syndetic.
Let $g \colon C_0 \sqcup (C \setminus (N \cdot Z)) \to 2$ be an arbitrary continuous extension of $f_0$ \ep{for instance, we can set $g(x) \coloneqq 0$ for all $x \in C \setminus (N \cdot Z)$}. We shall extend $g$ to a continuous coloring $f \colon C_0 \sqcup C \to 2$ such that
\begin{equation}\label{eq:want}
\forall z \in Z \, \forall \delta \in \Delta, \quad z \,\not\equiv^N_f \, \delta \cdot z.
\end{equation}
\begin{claim*}
If $f$ satisfies \eqref{eq:want}, then it also satisfies \eqref{eq:want1}.
\end{claim*}
\begin{stepproof}
Take any $x \in X$. Since $Z$ is $N^4F$-syndetic, there is $\beta \in N^4 F$ such that $\beta \cdot x \in Z$. Applying \eqref{eq:want} with $z = \beta \cdot x$ and $\delta = \beta \gamma \beta^{-1}$, we get $\beta \cdot x \not\equiv^N_f \beta \gamma \cdot x$. Since $N\beta \subseteq S$, this yields
$x \not \equiv^S_f \gamma \cdot x$, as desired.
\end{stepproof}
Extensions of $g$ to $C_0 \sqcup C$ can be encoded by $2^{|N|}$-colorings of $Z$, as follows. A natural number less than $2^{|N|}$ can be identified with a binary sequence of length $|N|$, so a $2^{|N|}$-coloring $h \colon Z \to 2^{|N|}$ can be viewed as an $|N|$-tuple of $2$-colorings $h_1$, \ldots, $h_{|N|} \colon Z \to 2$. Let $N = \set{\nu_1, \ldots, \nu_{|N|}}$ be an enumeration of $N$. Since $X$ is free and $Z$ is $N^4$-separated, each point $x \in N \cdot Z$ can be expressed uniquely as $x = \nu_i \cdot z$ for some $z \in Z$ and $1 \leq i \leq |N|$. Thus, given $h \colon Z \to 2^{|N|}$, we can define $f^h \colon C_0 \sqcup C \to 2$ by the formula
\[
f^h(x) \,\coloneqq\, \begin{cases}
g(x) &\text{if } x \in C_0 \sqcup (C \setminus (N \cdot Z));\\
h_i(z) &\text{if } x \in C \text{ and } x = \nu_i \cdot z \text{ for } z \in Z \text{ and } 1 \leq i \leq |N|.
\end{cases}
\]
In other words, for each $z \in Z$, the color $h(z) \in 2^{|N|}$ encodes the restriction of $f^h$ to the set $C \cap (N \cdot z)$. This encoding is generally not one-to-one: unless $N \cdot z \subseteq C$, the sequence $h_1(z)$, \ldots, $h_{|N|}(z)$ includes some redundant bits. Nevertheless, choosing $h(z)$ uniformly at random does correspond to picking
the restriction of $f^h$ to $C \cap (N \cdot z)$
uniformly at random from the set of all $2$-colorings $C \cap (N \cdot z) \to 2$. Notice also that if $h$ is continuous, then so is $f^h$.
To apply Theorem~\ref{theo:cont_LLL}, we now need to define a constraint satisfaction problem $\mathscr{B} \colon Z \to^? 2^{|N|}$ such that $h \colon Z \to 2^{|N|}$ is a solution to $\mathscr{B}$ if and only if $f^h$ satisfies \eqref{eq:want}, i.e.,
\[
\text{$h$ is a solution to $\mathscr{B}$} \quad \Longleftrightarrow \quad \forall z \in Z \, \forall \delta \in \Delta, \quad z \,\not\equiv^N_{f^h} \, \delta \cdot z.
\]
To this end, observe that the truth of the statement $z \not\equiv^N_{f^h} \delta \cdot z$ only depends on the restriction of $f^h$ to $(N \cdot z) \cup (N\delta \cdot z)$. Thus, for each $z \in Z$ and $\delta \in \Delta$, there is a constraint $B_{z, \delta}$ with domain
\[
\mathrm{dom}(B_{z,\delta}) \,\coloneqq\, \set{z' \in Z \,:\, (N \cdot z') \cap ((N \cdot z) \cup (N\delta \cdot z)) \neq \varnothing} \,=\, Z \cap \left((N^2 \cup N^2\delta) \cdot z\right)
\]
such that $h$ satisfies $B_{z, \delta}$ if and only if $z \not\equiv^N_{f^h} \delta\cdot z$. We then let $\mathscr{B}\coloneqq \set{B_{z,\delta} \,:\, z \in Z \text{ and } \delta \in \Delta}$. It is clear from the definition that the CSP $\mathscr{B}$ is continuous.
\begin{claim*}
$\mathsf{ord}(\mathscr{B}) \leq 2$.
\end{claim*}
\begin{stepproof}
Since $Z$ is $N^4$-separated, $|Z \cap (N^2 \cdot x)| \leq 1$ for all $x \in X$. Hence, for any $z \in Z$ and $\delta \in \Delta$, there are at most $2$ elements in $Z \cap \left((N^2 \cup N^2\delta) \cdot z\right)$, i.e., $|\mathrm{dom}(B_{z,\delta})| \leq 2$, as desired.
\end{stepproof}
\begin{claim*}
$\mathsf{vdeg}(\mathscr{B}) \leq 2^{11}m^{10}|F|^{22}$.
\end{claim*}
\begin{stepproof}
Take any $z' \in Z$. We need to bound the number of pairs $(z, \delta) \in Z \times \Delta$ such that $z' \in \mathrm{dom}(B_{z,\delta})$. Recall that $N = FM \cup MF$, where $|M| = m|F|$, so $|N| \leq 2m|F|^2$. Hence, $|\Delta| \leq |N^4F\gamma F N^4| \leq 2^8 m^8|F|^{18}$. Once $\delta$ is fixed, $z$ must satisfy $z' \in (N^2 \cup N^2 \delta) \cdot z$, i.e., $z \in (N^2 \cup \delta^{-1}N^2) \cdot z'$, so there are at most $8m^2|F|^4$ such $z$. Thus, the number of choices for $(z, \delta)$ is at most $2^8m^8|F|^{18} \cdot 8m^2|F|^4 = 2^{11}m^{10}|F|^{22}$.
\end{stepproof}
\begin{claim*}
$\mathsf{p}(\mathscr{B}) \leq 2^{-m/6}$.
\end{claim*}
\begin{stepproof}
Take any $z \in Z$ and $\delta \in \Delta$. For brevity, let $y \coloneqq \delta \cdot z$. We need to show that $\P[B_{z,\delta}] \leq 2^{-m/6}$, i.e., the probability that $z$ is $N$-similar to $y$ in a random extension $f$ of $g$ to $C_0 \sqcup C$ is at most $2^{-m/6}$.
Call an element $\nu \in N$ \emphd{eligible} if $\nu \cdot z \in C$ and $\nu \cdot y \in C_0 \sqcup C$. Let $E$ be the set of all eligible $\nu \in N$. Note that if $\nu$ is eligible, then $\nu \cdot z$ is uncolored in $g$ but becomes colored in $f$, and $\nu \cdot y$ is also colored in $f$ \ep{but it may or may not be already colored in $g$}. The color $f(\nu \cdot z)$ is chosen randomly, so the probability that $f(\nu \cdot z) = f(\nu \cdot y)$ is exactly $1/2$, regardless of whether $\nu \cdot y$ is already colored in $g$.
Since $C$ is $F$-syndetic and $N \supseteq FM$, we have $|C \cap (N \cdot z)| \geq |M|/|F| = m$, and since $U$ is $S$-separated and $S \supseteq N^2$, $|(N \cdot y) \cap U| \leq 1$. Therefore, $|E| \geq m - 1 \geq m/2$. Let $G$ be the graph with vertex set $(N \cdot z) \cup (N \cdot y)$ in which we put an edge between $\nu \cdot z$ and $\nu \cdot y$ for each $\nu \in E$. The maximum degree of $G$ is at most $2$, so $G$ is a disjoint union of paths and cycles and hence contains a matching using at least a third of its edges; thus we can pick a subset $E' \subseteq E$ of size $|E'| \geq |E|/3 \geq m/6$ such that the pairs $\set{\nu \cdot z, \nu \cdot y}$, $\nu \in E'$, are pairwise disjoint. When $f$ is chosen randomly, the events $f(\nu \cdot z) = f(\nu \cdot y)$ for distinct $\nu \in E'$ are mutually independent, so the probability that they all occur simultaneously is $2^{-|E'|} \leq 2^{-m/6}$, which gives us the desired upper bound on the probability that $z$ is $N$-similar to $y$ in $f$.
\end{stepproof}
And now we are done: by Theorem \ref{theo:cont_LLL}, $\mathscr{B}$ has a continuous solution as long as
\[
\mathsf{p}(\mathscr{B}) \cdot \mathsf{vdeg}(\mathscr{B})^{\mathsf{ord}(\mathscr{B})} \,\leq\, 2^{-m/6} \cdot (2^{11}m^{10}|F|^{22})^2 \,=\, 2^{-m/6} \cdot 2^{22}m^{20}|F|^{44}\,<\, 1,
\]
which holds by \eqref{eq:m}.
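To spell out the arithmetic: since $m \geq 1$ and $|F| \geq 1$, inequality \eqref{eq:m} yields
\[
2^{m} \,>\, (2m|F|)^{500} \,=\, 2^{500} m^{500} |F|^{500} \,\geq\, 2^{132} m^{120} |F|^{264} \,=\, \left(2^{22} m^{20} |F|^{44}\right)^{6},
\]
and taking $6$-th roots gives $2^{m/6} > 2^{22} m^{20} |F|^{44}$, i.e., $2^{-m/6} \cdot 2^{22} m^{20} |F|^{44} < 1$.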
\end{scproof}
\subsection{Proof of Theorem~\ref{theo:STD}}\label{subsec:proof_ST-D}
For the reader's convenience, we state Theorem~\ref{theo:STD} again:
\begin{theocopy}{theo:STD}
If $\G \curvearrowright X$ is a free Borel action of $\G$ on a standard Borel space $X$, then there is a $\G$-equivariant Borel map $\pi \colon X \to Y$, where $Y \subset 2^\G$ is a free subshift.
\end{theocopy}
To prove Theorem~\ref{theo:STD}, we shall first define a free subshift $Y \subset 2^\G$ and then iteratively apply Lemma~\ref{lemma:syndetic} to construct the desired $\G$-equivariant Borel map $\pi \colon X \to Y$.
We start by recursively defining a sequence of finite sets $H_0$, $F_0$, $S_0$, $H_1$, $F_1$, $S_1$, \ldots{} $\subset \G$ as follows. Let $H_0$ be an arbitrary nonempty finite subset of $\G$. Once $H_n$ is defined, let $\delta_n$ be any group element such that $H_n \cap (H_n\delta_n) = \varnothing$ \ep{such $\delta_n$ exists since $\G$ is infinite} and set $F_n \coloneqq H_n \cup (H_n\delta_n)$. Next, let $S_n$ be the set $S$ produced by Lemma~\ref{lemma:syndetic} applied with $F = F_n$. Upon replacing $S_n$ with a superset if necessary, we may additionally assume that $S_n$ is symmetric and $S_n \supseteq F_n^{-1}F_n$. Finally, we let $H_{n+1} \coloneqq S_nH_n$. The following claim explains why the sets $H_n$, $F_n$, and $S_n$ are defined in this manner.
\begin{big_claim}\label{claim:split}
Let $X$ be a free zero-dimensional Polish $\G$-space and let $W \subseteq X$ be an $H_n$-syndetic clopen set. Then there is a partition $W = C \sqcup U$ into two clopen sets such that:
\begin{itemize}
\item the set $C$ is $F_n$-syndetic;
\item the set $U$ is $S_n$-separated and $H_{n+1}$-syndetic.
\end{itemize}
\end{big_claim}
\begin{scproof}
By Lemma~\ref{lemma:max}, we can let $U$ be a clopen maximal $S_n$-separated subset of $W$ and define $C \coloneqq W \setminus U$. Since $S_n$ is symmetric and contains $\mathbf{1}$, the maximality of $U$ means that $W \subseteq S_n \cdot U$, and since $W$ is $H_n$-syndetic and $H_{n+1} = S_n H_n$, this implies that $U$ is $H_{n+1}$-syndetic, as claimed.
To see that $C$ is $F_n$-syndetic, take any $x \in X$. We need to argue that $F_n \cdot x$ contains a point in $C$. Recall that $F_n = H_n \cup (H_n \delta_n)$. Since $W$ is $H_n$-syndetic, the sets $H_n \cdot x$ and $H_n \delta_n \cdot x$ each contain a point in $W$. Since the sets $H_n$ and $H_n \delta_n$ are disjoint, we have $|(F_n \cdot x) \cap W| \geq 2$. On the other hand, $|(F_n \cdot x) \cap U| \leq 1$ since $U$ is $F_n^{-1}F_n$-separated. Therefore, $|(F_n \cdot x) \cap C| \geq 1$, as desired.
\end{scproof}
Fix an arbitrary enumeration $\gamma_0$, $\gamma_1$, \ldots{} of the non-identity elements of $\G$. For each $n \in {\mathbb{N}}$, let $Y_n \subset 2^\G$ be the set of all $2$-colorings $y \colon \G \to 2$ such that
\[
\exists \sigma \in S_n \text{ with } y(\sigma) \,\neq\, y(\sigma\gamma_n).
\]
The set $Y_n$ is clopen, and if $y \in Y_n$, then $\gamma_n \cdot y \neq y$. Hence, the set $Y' \coloneqq \bigcap_{n = 0}^\infty Y_n$ is closed and every point $y \in Y'$ has trivial stabilizer. Finally, we define $Y \coloneqq \bigcap_{\delta \in \G} (\delta \cdot Y')$. The set $Y$ is closed, $\G$-invariant, and contained in $Y' \subseteq \mathrm{Free}(2^\G)$, so $Y$ is a free subshift \ep{although we have not yet shown that $Y$ is nonempty}.
Now let $\G \curvearrowright X$ be a free Borel action of $\G$ on a standard Borel space $X$. It follows from standard results in descriptive set theory that there is a compatible zero-dimensional Polish topology $\tau$ on $X$ with respect to which the action $\G \curvearrowright X$ is continuous \cite[\S13]{KechrisDST}. Iterative applications of Claim~\ref{claim:split} yield a sequence of clopen subsets $U_0$, $C_0$, $U_1$, $C_1$, \ldots{} of $X$ such that $U_0 = X$ and for all $n \in {\mathbb{N}}$,
\begin{itemize}
\item $U_n = C_n \sqcup U_{n+1}$; and
\item the set $C_n$ is $F_n$-syndetic, while $U_{n+1}$ is $S_n$-separated and $H_{n+1}$-syndetic.
\end{itemize}
Next we use Lemma~\ref{lemma:syndetic} repeatedly to obtain an increasing sequence $f_0 \subseteq f_1 \subseteq \ldots$ such that for each $n \in {\mathbb{N}}$, $f_n \colon C_0 \sqcup \ldots \sqcup C_n \to 2$ is a continuous $2$-coloring satisfying
\begin{equation}\label{eq:not_sim}
\forall x \in X, \quad x \,\not\equiv^{S_n}_{f_n} \, \gamma_n \cdot x.
\end{equation}
Let $f \colon X \to 2$ be an arbitrary Borel extension of $\bigcup_{n=0}^\infty f_n$ \ep{e.g., we may set $f(x) \coloneqq 0$ for all $x \not \in \bigsqcup_{n=0}^\infty C_n$}. Define a $\G$-equivariant Borel map $\pi_f \colon X \to 2^\G$ by setting $\pi_f(x)(\gamma) \coloneqq f(\gamma \cdot x)$ for all $x \in X$ and $\gamma \in \G$. We claim that $\pi_f(x) \in Y$ for all $x \in X$, as desired. Indeed, since $\pi_f$ is $\G$-equivariant, it suffices to argue that $\pi_f(x) \in Y_n$ for all $x \in X$ and $n \in {\mathbb{N}}$, i.e., that for all $x \in X$ and $n \in {\mathbb{N}}$,
\[
\exists \sigma \in S_n \text{ with } \pi_f(x)(\sigma) \,\neq\, \pi_f(x)(\sigma \gamma_n).
\]
Using the definition of $\pi_f$, we can rewrite the latter statement as
\[
\exists \sigma \in S_n \text{ with } f(\sigma \cdot x) \,\neq\, f(\sigma \gamma_n \cdot x),
\]
which holds by \eqref{eq:not_sim} since $f$ is an extension of $f_n$.
\subsection{Proof of Theorem~\ref{theo:top_AW}}\label{subsec:proof_top_AW}
Let us state Theorem~\ref{theo:top_AW} again:
\begin{theocopy}{theo:top_AW}
If $X$ is a nonempty free zero-dimensional Polish $\G$-space, then $\mathrm{Free}(2^\G) \preccurlyeq X$.
\smallskip
\noindent Explicitly, given any $k \in {\mathbb{N}}^+$, a finite subset $F \subset \G$, and a continuous $k$-coloring $f \colon \mathrm{Free}(2^\G) \to k$, there is a continuous $k$-coloring $g \colon X \to k$ such that $\mathscr{P}_F(X, g) = \mathscr{P}_F(\mathrm{Free}(2^\G), f)$.
\end{theocopy}
Our proof of Theorem~\ref{theo:top_AW} is a modification of the proof of Theorem~\ref{theo:STD} presented in \S\ref{subsec:proof_ST-D}. To begin with, fix $k \in {\mathbb{N}}^+$, a finite subset $F \subset \G$, and a continuous $k$-coloring $f \colon \mathrm{Free}(2^\G) \to k$. The following clopen sets form a base for the topology on $2^\G$:
\[
U(s) \,\coloneqq\, \set{x \in 2^\G \,:\, x(\gamma) = s(\gamma) \text{ for all } \gamma \in \mathrm{dom}(s)},
\]
where $s$ is a $2$-pattern \ep{i.e., a partial mapping $s \colon \G \rightharpoonup 2$ whose domain is a finite subset of $\G$}. Given a finite set $D \subset \G$ and a point $x \in \mathrm{Free}(2^\G)$, we say that $D$ \emphd{$f$-determines} $x$ if for all $z \in \mathrm{Free}(2^\G)$,
\[
\forall \delta \in D, \, z(\delta) = x(\delta) \qquad \Longrightarrow \qquad f(z) = f(x).
\]
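The notions of cylinder sets and $f$-determination are easy to experiment with in a finite toy model. The following Python sketch is purely illustrative: it replaces $\G$ by the cyclic group $\mathbb{Z}/5\mathbb{Z}$, so that a point of $2^\G$ is a $5$-tuple of bits, and the sample coloring \texttt{color} (which reads only coordinates $0$ and $1$) is an arbitrary choice; the function \texttt{determines} brute-forces the defining condition.

```python
from itertools import product

# Finite toy model of "D f-determines x": the group is Z/5Z, a point of
# 2^G is a 5-tuple of bits, and the sample coloring below (reading only
# coordinates 0 and 1) is an arbitrary illustrative choice.
m = 5

def color(x):
    return x[0] ^ x[1]

def determines(D, x, f):
    """True iff every z agreeing with x on D satisfies f(z) = f(x)."""
    return all(f(z) == f(x)
               for z in product((0, 1), repeat=m)
               if all(z[d % m] == x[d % m] for d in D))

x = (1, 0, 1, 1, 0)
assert determines([0, 1], x, color)      # f only reads coordinates 0, 1
assert not determines([3, 4], x, color)  # fixing coordinates 3, 4 is not enough
```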
The continuity of $f$ is then equivalent to the following assertion:
\begin{big_claim}\label{claim:cont}
For each $x \in \mathrm{Free}(2^\G)$, there is a finite set $D \subset \G$ that $f$-determines $x$. \qed
\end{big_claim}
\begin{big_claim}\label{claim:sp}
For each $k$-pattern $p \in \mathscr{P}_F(\mathrm{Free}(2^\G),f)$, there is a $2$-pattern $s_p$ such that for all $z \in \mathrm{Free}(2^\G)$,
\[
z \in U(s_p) \qquad \Longrightarrow \qquad \forall \gamma \in F, \, f(\gamma \cdot z) = p(\gamma).
\]
\end{big_claim}
\begin{scproof}
Since $p$ occurs in $f$, there is some $x \in \mathrm{Free}(2^\G)$ such that $f(\gamma \cdot x) = p(\gamma)$ for all $\gamma \in F$. Claim~\ref{claim:cont} yields a finite set $D$ such that for all $z \in \mathrm{Free}(2^\G)$,
\[
\forall \delta \in D, \, z(\delta) = x(\delta) \qquad \Longrightarrow \qquad \forall \gamma \in F, \, f(\gamma \cdot z) = p(\gamma).
\]
Thus, we may take $s_p$ to be the $2$-pattern with domain $D$ given by $s_p(\delta) \coloneqq x(\delta)$ for all $\delta \in D$.
\end{scproof}
Set $D \coloneqq \bigcup \set{\mathrm{dom}(s_p) \,:\, p \in \mathscr{P}_F(\mathrm{Free}(2^\G), f)}$ \ep{where $s_p$ is the $2$-pattern given by Claim~\ref{claim:sp}} and let $H_0$ be an arbitrary symmetric finite subset of $\G$ with $|H_0| > |D|$. Next we recursively build a sequence of finite sets $H_0$, $F_0$, $S_0$, $H_1$, $F_1$, $S_1$, \ldots{} $\subset \G$ in the same way we did in \S\ref{subsec:proof_ST-D}. That is, once $H_n$ is defined, we let $\delta_n$ be any group element such that $H_n \cap (H_n\delta_n) = \varnothing$ and set $F_n \coloneqq H_n \cup (H_n\delta_n)$. Then we let $S_n$ be the set $S$ produced by Lemma~\ref{lemma:syndetic} applied with $F = F_n$. Upon replacing $S_n$ with a superset if necessary, we may additionally assume that $S_n$ is symmetric and $S_n \supseteq F_n^{-1}F_n$. Finally, we let $H_{n+1} \coloneqq S_nH_n$. The following is a restatement of Claim~\ref{claim:split}:
\begin{big_claim}\label{claim:split1}
Let $X$ be a free zero-dimensional Polish $\G$-space and let $W \subseteq X$ be an $H_n$-syndetic clopen set. Then there is a partition $W = C \sqcup U$ into two clopen sets such that:
\begin{itemize}
\item the set $C$ is $F_n$-syndetic;
\item the set $U$ is $S_n$-separated and $H_{n+1}$-syndetic.
\end{itemize}
\end{big_claim}
\begin{scproof}
See the proof of Claim~\ref{claim:split} in \S\ref{subsec:proof_ST-D}.
\end{scproof}
As in \S\ref{subsec:proof_ST-D}, we now fix an arbitrary enumeration $\gamma_0$, $\gamma_1$, \ldots{} of the non-identity elements of $\G$. For each $n \in {\mathbb{N}}$, let $Y_n \subset 2^\G$ be the set of all $2$-colorings $y \colon \G \to 2$ such that
\[
\exists \sigma \in S_n \text{ with } y(\sigma) \,\neq\, y(\sigma\gamma_n).
\]
Let $Y \coloneqq \bigcap_{n=0}^\infty\bigcap_{\delta \in \G} (\delta \cdot Y_n)$. As discussed in \S\ref{subsec:proof_ST-D}, $Y$ is a free subshift. For each $N \in {\mathbb{N}}$, we also define
\[
Y_{\leq N} \,\coloneqq\, \bigcap_{n = 0}^N\bigcap_{\delta \in \G} (\delta \cdot Y_n).
\]
Then $Y_{\leq N}$ is a subshift and $Y=\bigcap_{N=0}^\infty Y_{\leq N}$, where the intersection is decreasing. Note that $Y_{\leq N}$ need not be free; in particular, $f$ may not be defined on all of $Y_{\leq N}$. Nevertheless, for large enough $N$, it is possible to define a continuous $k$-coloring $f^\ast \colon Y_{\leq N} \to k$ that, in some sense, approximates $f$:
\begin{big_claim}\label{claim:approx}
There exist $N \in {\mathbb{N}}$ and a continuous $k$-coloring $f^\ast \colon Y_{\leq N} \to k$ such that for each $z \in Y_{\leq N}$, there is $y \in Y$ with the following properties:
\begin{itemize}
\item for all $\delta \in D$, $z(\delta) = y(\delta)$; and
\item for all $\gamma \in F$, $f^\ast(\gamma \cdot z) = f(\gamma \cdot y)$.
\end{itemize}
\end{big_claim}
\begin{scproof}
First we argue that there is a finite set $L \subset \G$ that $f$-determines every point $y \in Y$. For each finite set $L \subset \G$, let $V_L$ be the set of all points $y \in Y$ that are $f$-determined by $L$. Each set $V_L$ is relatively open in $Y$. Moreover, by Claim~\ref{claim:cont}, the union of all the sets $V_L$ is $Y$. Since $Y$ is compact, this implies that there is a finite collection $L_1$, \ldots, $L_r$ of finite subsets of $\G$ such that $Y = V_{L_1} \cup \ldots \cup V_{L_r}$. Then every point $y \in Y$ is $f$-determined by $L \coloneqq L_1 \cup \ldots \cup L_r$, as desired.
Next we observe that there is $N \in {\mathbb{N}}$ such that for each $z \in Y_{\leq N}$,
\begin{equation}\label{eq:approx}
\exists y \in Y \text{ such that } \forall \delta \in D \cup L \cup LF, \, z(\delta) = y(\delta).
\end{equation}
Indeed, let $Q$ be the set of all $z \in 2^\G$ for which \eqref{eq:approx} fails. Then $Q$ is a clopen subset of $2^\G$ and $Q \cap Y = \varnothing$. Since $2^\G$ is compact and $Y = \bigcap_{N=0}^\infty Y_{\leq N}$, there must exist some $N \in {\mathbb{N}}$ with $Q \cap Y_{\leq N} = \varnothing$, as desired.
Finally, we define a $k$-coloring $f^\ast \colon Y_{\leq N} \to k$ as follows:
\begin{align*}
f^\ast(z) = c \quad \vcentcolon&\Longleftrightarrow \quad \exists y \in Y \text{ such that } f(y) =c \text{ and } \forall \delta \in L, \, z(\delta) = y(\delta)\\
&\Longleftrightarrow \quad \forall y \in Y, \text{ we have } \left(\forall \delta \in L, \, z(\delta) = y(\delta)\right) \ \Longrightarrow \ f(y) = c.
\end{align*}
The two definitions given above are equivalent since every $y \in Y$ is $f$-determined by $L$. By construction, $L$ also $f^\ast$-determines every $z \in Y_{\leq N}$, so $f^\ast$ is continuous. Now consider any $z \in Y_{\leq N}$. By \eqref{eq:approx}, there is $y \in Y$ such that for all $\delta \in D \cup L \cup LF$, $z(\delta) = y(\delta)$, and it is clear that $y$ has the desired properties.
\end{scproof}
Now let $X$ be a nonempty free zero-dimensional Polish $\G$-space. Fix $N \in {\mathbb{N}}$ and $f^\ast \colon Y_{\leq N} \to k$ given by Claim~\ref{claim:approx}. We shall construct a continuous $k$-coloring $g \colon X \to k$ such that $\mathscr{P}_F(X, g) = \mathscr{P}_F(\mathrm{Free}(2^\G), f)$ by first building a continuous $\G$-equivariant map $\pi \colon X \to Y_{\leq N}$ and then setting $g \coloneqq f^\ast \circ \pi$.
We start our construction by letting $W \subseteq X$ be a clopen maximal $D^{-1}H_0^2D$-separated subset of $X$ \ep{which exists by Lemma~\ref{lemma:max}}. Since $X$ is free and nonempty, every $\G$-orbit in $X$ intersects $W$ in infinitely many points, so $W$ is infinite. Thus, we may partition $W$ as $W = \bigsqcup_p W_p$, where the union is over all $p \in \mathscr{P}_F(\mathrm{Free}(2^\G), f)$ and each $W_p$ is nonempty and clopen. Let $B_p \coloneqq \mathrm{dom}(s_p) \cdot W_p$ and $B \coloneqq \bigsqcup_p B_p$ \ep{the union is disjoint since $W$ is $D^{-1}D$-separated} and define a continuous $2$-coloring $b \colon B \to 2$ by
\begin{equation}\label{eq:b}
b(\delta \cdot w) \,\coloneqq\, s_p(\delta) \text{ for all $p \in \mathscr{P}_F(\mathrm{Free}(2^\G), f)$, $w \in W_p$, and $\delta \in \mathrm{dom}(s_p)$}.
\end{equation}
Property \eqref{eq:b} will be eventually used to show that $\mathscr{P}_F(X, g) \supseteq \mathscr{P}_F(\mathrm{Free}(2^\G), f)$.
To continue our construction,
we need to make sure that $X \setminus B$ is syndetic:
\begin{big_claim}\label{claim:base}
The set $X \setminus B$ is $H_0$-syndetic.
\end{big_claim}
\begin{scproof}
Take any $x \in X$. Since $W$ is $D^{-1}H_0^2D$-separated, there is at most one $w \in W$ such that $(D \cdot w) \cap (H_0 \cdot x) \neq \varnothing$, so $|(D \cdot W) \cap (H_0 \cdot x)| \leq |D| < |H_0|$. Since $B \subseteq D \cdot W$, this implies $(H_0 \cdot x) \setminus B \neq \varnothing$.
\end{scproof}
Claim~\ref{claim:base} allows us to iteratively apply Claim~\ref{claim:split1} in order to obtain a sequence of clopen subsets $U_0$, $C_0$, $U_1$, $C_1$, \ldots{} of $X$ such that $U_0 = X \setminus B$ and for all $n \in {\mathbb{N}}$,
\begin{itemize}
\item $U_n = C_n \sqcup U_{n+1}$; and
\item the set $C_n$ is $F_n$-syndetic, while $U_{n+1}$ is $S_n$-separated and $H_{n+1}$-syndetic.
\end{itemize}
We can then use Lemma~\ref{lemma:syndetic} repeatedly to obtain an increasing sequence $b \subseteq h_0 \subseteq h_1 \subseteq \ldots$ such that for each $n \in {\mathbb{N}}$, $h_n \colon B \sqcup C_0 \sqcup \ldots \sqcup C_n \to 2$ is a continuous $2$-coloring satisfying
\begin{equation}\label{eq:not_sim1}
\forall x \in X, \quad x \,\not\equiv^{S_n}_{h_n} \, \gamma_n \cdot x.
\end{equation}
Recall that $N \in {\mathbb{N}}$ and $f^\ast \colon Y_{\leq N} \to k$ are given by Claim~\ref{claim:approx}. Let $h \colon X \to 2$ be an arbitrary continuous extension of $h_N$ \ep{e.g., we may set $h(x) \coloneqq 0$ for all $x \not \in \mathrm{dom}(h_N)$} and define a $\G$-equivariant continuous map $\pi_h \colon X \to 2^\G$ by setting $\pi_h(x)(\gamma) \coloneqq h(\gamma \cdot x)$ for all $x \in X$ and $\gamma \in \G$. Condition~\eqref{eq:not_sim1} ensures that $\pi_h(x) \in Y_{\leq N}$ for all $x \in X$, so we can define a continuous $k$-coloring $g \colon X \to k$ via $g \coloneqq f^\ast \circ \pi_h$.
\begin{big_claim}\label{claim:sup}
$\mathscr{P}_F(X, g) \supseteq \mathscr{P}_F(\mathrm{Free}(2^\G), f)$.
\end{big_claim}
\begin{scproof}
Consider any $p \in \mathscr{P}_F(\mathrm{Free}(2^\G), f)$. Take an arbitrary point $w \in W_p$ and let $z \coloneqq \pi_h(w) \in Y_{\leq N}$. Note that for all $\gamma \in \G$, $g(\gamma \cdot w) = f^\ast(\gamma \cdot z)$. By Claim~\ref{claim:approx}, there is $y \in Y$ such that:
\begin{enumerate}[label=\ep{\itshape\alph*}]
\item\label{item:a} for all $\delta \in D$, $z(\delta) = y(\delta)$; and
\item\label{item:b} for all $\gamma \in F$, $f^\ast(\gamma \cdot z) = f(\gamma \cdot y)$.
\end{enumerate}
By \eqref{eq:b}, since $h$ extends $b$, we have $z(\delta) = h(\delta \cdot w) = b(\delta \cdot w) = s_p(\delta)$ for all $\delta \in \mathrm{dom}(s_p)$, i.e., $z \in U(s_p)$. By \ref{item:a}, $y \in U(s_p)$ as well, so for all $\gamma \in F$,
\[
g(\gamma \cdot w) \,=\, f^\ast(\gamma \cdot z) \,=\, f(\gamma \cdot y) \,=\, p(\gamma),
\]
where the second equality holds by \ref{item:b}, and the third by Claim~\ref{claim:sp} and since $y \in U(s_p)$. This shows that $p$ appears in $g$, as desired.
\end{scproof}
\begin{big_claim}\label{claim:sub}
$\mathscr{P}_F(X, g) \subseteq \mathscr{P}_F(\mathrm{Free}(2^\G), f)$.
\end{big_claim}
\begin{scproof}
Take any $p \in \mathscr{P}_F(X, g)$ and let $x \in X$ be such that $g(\gamma \cdot x) = p(\gamma)$ for all $\gamma \in F$. Let $z \coloneqq \pi_h(x) \in Y_{\leq N}$, so $g(\gamma \cdot x) = f^\ast(\gamma \cdot z)$ for all $\gamma \in \G$. By Claim~\ref{claim:approx}, there is $y \in Y$ such that:
\begin{itemize}
\item for all $\gamma \in F$, $f^\ast(\gamma \cdot z) = f(\gamma \cdot y)$.
\end{itemize}
Then for all $\gamma \in F$, $f(\gamma \cdot y) = f^\ast(\gamma \cdot z) = g(\gamma \cdot x) = p(\gamma)$, which shows that $p$ appears in $f$, as desired.
\end{scproof}
Claims~\ref{claim:sup} and \ref{claim:sub} yield $\mathscr{P}_F(X, g) = \mathscr{P}_F(\mathrm{Free}(2^\G), f)$, and the proof of Theorem~\ref{theo:top_AW} is complete.
\section{Combinatorial results}\label{sec:corls}
\subsection{Local colorings of special subshifts}
In this subsection we prove a certain technical result \ep{namely Lemma~\ref{lemma:F_loc}} that will be later used to derive Theorems \ref{theo:compact} and \ref{theo:LOCAL}.
Given a subshift $X \subseteq n^\G$, a finite subset $F \subset \G$, and an integer $k \geq 1$, we say that a $k$-coloring $f \colon X \to k$ is \emphd{$F$-local} if for all $x \in X$, the value $f(x)$ is determined by the restriction of $x$ to $F$, i.e., if there is a mapping $\rho \colon n^F \to k$ such that for all $x \in X$, $f(x) = \rho \left((x(\sigma))_{\sigma \in F}\right)$. Note that an $F$-local coloring is necessarily continuous. Conversely, if $f \colon X \to k$ is continuous, then, due to the compactness of $X$, there is a finite set $F \subset \G$ such that $f$ is $F$-local.
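An $F$-local coloring is, in programming terms, a function that inspects only the coordinates indexed by $F$. The following minimal Python sketch illustrates this, with the group replaced by $(\mathbb{Z}, +)$, points modelled as dictionaries, and the map $\rho$ an arbitrary illustrative choice:

```python
# Illustrative model of an F-local coloring, with the group G replaced by
# (Z, +) and points given as dicts Z -> {0, 1, 2} (only finitely many
# coordinates are ever inspected).  The map rho below is an arbitrary choice.
F = [0, 1]

def make_local(rho):
    """Turn a map rho on restrictions-to-F into an F-local coloring."""
    return lambda x: rho(tuple(x[sigma] for sigma in F))

f = make_local(lambda r: (r[0] + r[1]) % 2)

# Two points agreeing on F must receive the same color:
x = {-1: 2, 0: 1, 1: 0, 2: 2}
y = {-1: 0, 0: 1, 1: 0, 2: 1}
assert f(x) == f(y) == 1
```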
Let $D$ be a finite subset of $\G$ and let $n \geq 1$ be an integer. Define a subshift $X_{D, n} \subseteq n^\G$ as follows:
\[
X_{D, n} \,\coloneqq\, \set{x \in n^\G \,:\, \text{for all $\gamma \in \G$ and $\delta \in D \setminus \set{\mathbf{1}}$, we have $x(\gamma) \neq x(\delta \gamma)$}}.
\]
In other words, the elements of $X_{D,n}$ are the proper $n$-colorings of the Cayley graph $G(\G, D)$.
The main result of this subsection allows us to build $F$-local colorings of $X_{D,n}$ with some control over the set $F$:
\begin{lemma}[\textls{Local colorings of $X_{D,n}$}]\label{lemma:F_loc}
Let $\mathscr{P}$ be a finite set of $k$-patterns such that every free zero-dimensional Polish $\G$-space admits a continuous $\mathscr{P}$-avoiding $k$-coloring. Then there are an integer $n_0 \geq 1$ and a finite set $F \subset \G$ with the following property:
\smallskip
\noindent Let $n \geq n_0$ and let $D \subset \G$ be a finite set. Set $F^\ast \coloneqq F^{\log^\ast n}$ and suppose that $D \supseteq F^\ast$. Then the subshift $X_{D,n}$ admits an $F^\ast$-local $\mathscr{P}$-avoiding $k$-coloring.
\end{lemma}
Before proving Lemma~\ref{lemma:F_loc} in full generality, we need to establish the following special case:
\begin{lemma}\label{lemma:F_loc_col}
Let $S \subset \G$ be a finite set and let $d \coloneqq |(S \cup S^{-1})\setminus\set{\mathbf{1}}|$. Then there are an integer $n_0 \geq 1$ and a finite set $F \subset \G$ with the following property:
\smallskip
\noindent Let $n \geq n_0$ and let $D \subset \G$ be a finite set. Set $F^\ast \coloneqq F^{\log^\ast n}$ and suppose that $D \supseteq F^\ast$. Then the Schreier graph $G(X_{D,n}, S)$ admits an $F^\ast$-local proper $(d+1)$-coloring.
\end{lemma}
An equivalent way of phrasing the conclusion of Lemma~\ref{lemma:F_loc_col} is that there is a $\G$-equivariant map $\pi \colon X_{D, n} \to X_{S,d+1}$ such that the mapping $x \mapsto \pi(x)(\mathbf{1})$ is $F^\ast$-local. Since the Schreier graph $G(X_{D,n}, S)$ has maximum degree $d$, Lemma~\ref{lemma:coloring} provides a \emph{continuous} proper $(d+1)$-coloring of $G(X_{D,n}, S)$. Hence, the challenge in proving Lemma~\ref{lemma:F_loc_col} is to get an \emph{$F^\ast$-local} coloring. We shall use
the following classical result from distributed computing, dating back to Goldberg, Plotkin, and Shannon \cite{GPS}:
\begin{theo}[{\cite[Corollary 3.15]{BE}}]\label{theo:d+1}
There is a deterministic $\mathsf{LOCAL}$\xspace algorithm that computes a proper $(d + 1)$-coloring of an $n$-vertex graph $G$ of maximum degree $d$ in $\log^\ast n + O(d^2)$ rounds.
\end{theo}
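To get a feel for the $\log^\ast n$ bound in Theorem~\ref{theo:d+1}, recall that the iterated logarithm counts how many times $\log_2$ must be applied before the value drops to at most $1$; it is bounded by $5$ for every remotely realistic input size. A minimal Python sketch:

```python
import math

def log_star(n):
    """Iterated logarithm: the number of times log2 must be applied to n
    before the value drops to at most 1.  Grows extremely slowly."""
    rounds = 0
    while n > 1:
        n = math.log2(n)
        rounds += 1
    return rounds

# log* is essentially constant for all practical input sizes:
assert log_star(2) == 1
assert log_star(16) == 3
assert log_star(65536) == 4           # 65536 = 2^16
assert log_star(2 ** 65536) == 5      # astronomically large n, still only 5
```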
\begin{scproof}[Proof of Lemma~\ref{lemma:F_loc_col}]
Let $\mathcal{A}$ be the $\mathsf{LOCAL}$\xspace algorithm given by Theorem~\ref{theo:d+1}. This means that there are integer constants $n_0$, $c \geq 0$ such that if $G$ is an $n$-vertex graph of maximum degree $d$ with $n \geq n_0$ and each vertex $x \in V(G)$ is assigned a unique identifier $\mathrm{Id}(x) \in \set{1, \ldots, n}$, then $\mathcal{A}$ generates a proper $(d + 1)$-coloring of $G$ in at most $\log^\ast n + cd^2$ communication rounds. We will show that the conclusion of Lemma~\ref{lemma:F_loc_col} is satisfied for this $n_0$ and $F \coloneqq (S\cup S^{-1} \cup \set{\mathbf{1}})^{N}$, where $N$ is a fixed integer with $N \geq 10cd^2$.
Take any $n \geq n_0$ and suppose that $D \supseteq F^\ast \coloneqq F^{\log^\ast n}$. Let $G \coloneqq G(X_{D, n}, S)$. We need to show that $G$ has an $F^\ast$-local proper $(d+1)$-coloring. Note that the maximum degree of $G$ is $d$. Set $T \coloneqq \log^\ast n + cd^2$. For each $x \in X_{D,n}$, consider the finite subgraph $H_x$ of $G$ induced by the $(T+1)$-ball around $x$ in $G$, i.e., by the set of all vertices that can be reached from $x$ by a path of length at most $T + 1$. Then $D \cdot y \supseteq V(H_x)$ for all $y \in V(H_x)$, so, since $x \in X_{D,n}$, the mapping $V(H_x) \to n \colon y \mapsto y(\mathbf{1})$ is injective. In particular, $|V(H_x)| \leq n$. Let $H_x'$ be an $n$-vertex graph obtained from $H_x$ by adding $n - |V(H_x)|$ isolated vertices. We can then extend the mapping $V(H_x) \to n \colon y \mapsto y(\mathbf{1})$ to a bijection $\mathrm{Id} \colon V(H_x') \to \set{1, \ldots, n}$.
Now run the algorithm $\mathcal{A}$ on $H_x'$ for $T$ rounds using the identifiers $\mathrm{Id}$ and let $\phi_x \colon V(H_x') \to (d+1)$ be the resulting output coloring. Set $f(x) \coloneqq \phi_x(x)$. We claim that $f$ constructed in this way is an $F^\ast$-local proper $(d+1)$-coloring of $G$, as desired.
Since $\mathcal{A}$ runs for at most $T$ rounds, $f(x)$ is determined by the identifiers of the vertices in the radius-$T$ ball around $x$, which is included in $F^\ast \cdot x$, so $f$ is $F^\ast$-local. To see that $f$ is proper, observe that if $y$ is adjacent to $x$, then the radius-$T$ ball around $y$ is contained in $V(H_x)$. Since $y$ executes the same algorithm in $H_x'$ and in $H_y'$, we conclude that $\phi_x(y) = \phi_y(y)$. But $\phi_x$ is a proper coloring, so we can write $f(x) = \phi_x(x) \neq \phi_x(y) = \phi_y(y) = f(y)$.
\end{scproof}
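The proof above rests on one locality fact: a $T$-round output at a vertex $y$ is a function of $y$'s radius-$T$ ball, so computing it inside $H_x$ \ep{the $(T+1)$-ball around a neighbor $x$} or inside $H_y$ gives the same answer. The following Python sketch checks this on a toy instance; the graph \ep{a $12$-cycle}, the radius $T$, and the particular $T$-local rule are all arbitrary illustrative choices, not the actual algorithm $\mathcal{A}$.

```python
from collections import deque

# Toy illustration (not the actual algorithm A): the output of any T-local
# rule at v depends only on v's radius-T ball, so computing it inside the
# (T+1)-ball around a neighbor x, or in the full graph, gives the same answer.
# Graph: a cycle on 12 vertices with ids 0..11.
n, T = 12, 2
adj = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}

def ball(v, r, graph):
    """Vertices within graph distance r of v (BFS)."""
    dist, queue = {v: 0}, deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] < r:
            for w in graph[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
    return set(dist)

def rule(v, graph):
    """A deterministic T-local rule: a hash of the ids in ball_T(v)."""
    return sum(ball(v, T, graph)) % 3

for x in range(n):
    # Restrict the graph to H_x, the subgraph induced by the (T+1)-ball...
    H = {v: [w for w in adj[v] if w in ball(x, T + 1, adj)]
         for v in ball(x, T + 1, adj)}
    for y in adj[x]:
        # ...and check that y's output computed inside H_x agrees with its
        # output in the full graph (the identity phi_x(y) = phi_y(y)).
        assert rule(y, H) == rule(y, adj)
```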
\begin{scproof}[Proof of Lemma~\ref{lemma:F_loc}]
This argument is inspired by Elek's proof of \cite[Theorem 2]{Elek}. Enumerate the non-identity elements of $\G$ as $\gamma_1$, $\gamma_2$, \ldots{} and let $X_i \coloneqq X_{\set{\gamma_i}, 3}$. Consider the product space $X \coloneqq \prod_{i=1}^\infty X_i$, equipped with the diagonal action of $\G$. Then $X$ is a compact zero-dimensional Polish $\G$-space. Furthermore, $X$ is free since $\gamma_i \cdot x \neq x$ for all $x \in X_i$. Hence, by the assumptions on $\mathscr{P}$, there is a continuous $\mathscr{P}$-avoiding $k$-coloring $f \colon X \to k$. The following sets form a base for the topology on $X$:
\begin{equation}\label{eq:X_base}
\set{x = (x_1, x_2, \ldots) \in X \,:\, x_i(\delta) = s_i(\delta) \text{ for all } 1 \leq i \leq N \text{ and } \delta \in R},
\end{equation}
where $N$ is a natural number, $R \subset \G$ is a finite set, and $s_1 \colon R \to 3$, \ldots, $s_N \colon R \to 3$ are $3$-patterns. Hence, each $x \in X$ has a clopen neighborhood of the form \eqref{eq:X_base} on which $f$ is constant. The compactness of $X$ then implies that there exist $N$ and $R$ as above such that for all $x = (x_1, x_2, \ldots) \in X$, the value $f(x)$ is determined by the restrictions of $x_1$, $x_2$, \ldots, $x_N$ to $R$. In other words, there is a mapping $\rho \colon (3^R)^N \to k$ such that for all $x = (x_1, x_2, \ldots) \in X$,
\begin{equation}\label{eq:rho_box}
f(x) \,=\, \rho\left((x_i(\delta))_{1 \leq i \leq N, \delta \in R}\right).
\end{equation}
We can then use \eqref{eq:rho_box} to define a continuous $\mathscr{P}$-avoiding $k$-coloring $f' \colon X_{\leq N} \to k$ of $X_{\leq N} \coloneqq \prod_{i = 1}^N X_i$.
For each $1 \leq i \leq N$, let $n_i$ and $F_i$ be given by Lemma~\ref{lemma:F_loc_col} applied to $\set{\gamma_i}$ in place of $S$. Set
\[
n_0 \,\coloneqq\, \max \left\{n_1, \ldots, n_N\right\} \qquad \text{and} \qquad F \,\coloneqq\, \left(F_1 \cup \ldots \cup F_N\right)\left(R \cup \set{\mathbf{1}}\right).
\]
We claim that the conclusion of Lemma~\ref{lemma:F_loc} holds with these $n_0$ and $F$.
Take any $n \geq n_0$ and $D \supseteq F^\ast \coloneqq F^{\log^\ast n}$. If we let $F_i^\ast \coloneqq F_i^{\log^\ast n}$ for $1 \leq i \leq N$, then, by Lemma~\ref{lemma:F_loc_col}, the Schreier graph $G(X_{D,n}, \set{\gamma_i})$ admits an $F_i^\ast$-local proper $3$-coloring $f_i \colon X_{D, n} \to 3$. Define a $\G$-equivariant map $\pi_i \colon X_{D,n} \to X_i$ by setting $\pi_i(x)(\gamma) \coloneqq f_i(\gamma \cdot x)$ for all $x \in X_{D,n}$ and $\gamma \in \G$. Then
\[
\pi \colon X_{D,n} \to X_{\leq N} \colon x \mapsto (\pi_1(x), \ldots, \pi_N(x))
\]
is a $\G$-equivariant map from $X_{D,n}$ to $X_{\leq N}$. Thus, $f' \circ \pi \colon X_{D,n} \to k$ is a $\mathscr{P}$-avoiding $k$-coloring of $X_{D,n}$. Furthermore, to determine $(f' \circ \pi)(x)$, we only need to know $f_i(\delta \cdot x)$ for all $1 \leq i \leq N$ and $\delta \in R$, so this coloring is $(F_1^\ast \cup \ldots \cup F_N^\ast)R$-local, and we are done since $F^\ast \supseteq (F_1^\ast \cup \ldots \cup F_N^\ast)R$.
\end{scproof}
\subsection{Reduction to finite graphs}\label{subsec:tiles}
For this subsection, we fix a finite subset $S \subset \G$. For each finite set $D \subset \G$ with $S \cup S^{-1} \cup \set{\mathbf{1}} \subseteq D$ and each integer $n \geq 1$, we define a finite $S$-labeled graph $H_{D,n}$ as follows. For sets $A$ and $B$, let $\mathrm{Inj}(A,B)$ denote the set of all injective mappings from $A$ to $B$. The vertex set of $H_{D,n}$ is $V(H_{D,n}) \coloneqq \mathrm{Inj}(D, n)$. If $q$, $q' \in \mathrm{Inj}(D,n)$, we put an edge labeled $\sigma \in S \cup S^{-1}$ going from $q$ to $q'$ if and only if the following holds:
\begin{equation}\label{eq:compat}
\forall \delta,\, \delta' \in D, \qquad \left(\delta = \delta'\sigma \quad\Longrightarrow\quad q(\delta) = q'(\delta') \right).
\end{equation}
If \eqref{eq:compat} holds, we say that $q$ and $q'$ are \emphd{$\sigma$-compatible}. If $q$ and $q'$ are $\sigma$-compatible, then, in particular, $q'(\mathbf{1}) = q(\sigma)$. Since $q$ is injective, this implies that $q' \neq q$ and also that $q$ and $q'$ are not $\tau$-compatible for any $\tau \neq \sigma$, so the edge from $q$ to $q'$ in $H_{D,n}$ receives a unique label.
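The $\sigma$-compatibility condition \eqref{eq:compat} is mechanical to check. In the Python sketch below, the group is taken to be $(\mathbb{Z}, +)$, so ``$\delta = \delta'\sigma$'' reads $\delta = \delta' + \sigma$ and the identity element is $0$; the set $D$ and the concrete injections are illustrative choices.

```python
# Toy check of sigma-compatibility (the edge condition of H_{D,n}), with
# the group G taken to be (Z, +): "delta = delta' sigma" becomes
# d == d2 + sigma, and the identity element is 0.
D = [0, 1, 2, 3]

def compatible(q, q2, sigma):
    """q, q2: injective dicts D -> colors.  Edge condition (eq:compat)."""
    return all(q[d] == q2[d2]
               for d in D for d2 in D
               if d == d2 + sigma)

# q2 is q "shifted by sigma = 1": q2(d') = q(d' + 1) wherever both defined.
q  = {0: 4, 1: 7, 2: 5, 3: 9}
q2 = {0: 7, 1: 5, 2: 9, 3: 2}
assert compatible(q, q2, 1)
# In particular, q2 at the identity equals q at sigma:
assert q2[0] == q[1]
# Since q is injective, compatibility fails for any other label:
assert not compatible(q, q2, 2)
```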
\begin{lemma}\label{lemma:hom_exists}
Let $D \subset \G$ be a finite set with $S \cup S^{-1} \cup \set{\mathbf{1}} \subseteq D$ and let $n \geq |D|^2$ be an integer. Then for every free zero-dimensional Polish $\G$-space $X$, there is a continuous homomorphism $G(X, S) \to H_{D,n}$.
\end{lemma}
\begin{scproof}
The Schreier graph $G(X, D^{-1}D)$ has maximum degree at most $|D^{-1}D| - 1 \leq n - 1$ \ep{we are subtracting $1$ since $\mathbf{1} \in D^{-1}D$ does not count toward the degree}, so, by Lemma~\ref{lemma:coloring}, $G(X, D^{-1}D)$ has a continuous proper $n$-coloring $f \colon X \to n$. For each $x \in X$, let $q_x \colon D \to n$ be given by $q_x(\delta) \coloneqq f(\delta \cdot x)$ for all $\delta \in D$. By the choice of $f$, $q_x \in \mathrm{Inj}(D, n)$. Furthermore, it is clear that for any $\sigma \in S$, $q_x$ and $q_{\sigma \cdot x}$ are $\sigma$-compatible. Therefore, $x \mapsto q_x$ is a continuous homomorphism from $G(X,S)$ to $H_{D,n}$, as desired.
\end{scproof}
\begin{lemma}[\textls{Colorings of $H_{D,n}$}]\label{lemma:H_col}
Let $\mathscr{P}$ be a finite set of $S$-connected $k$-patterns such that every free zero-dimensional Polish $\G$-space admits a continuous $\mathscr{P}$-avoiding $k$-coloring. Then there are an integer $n_0 \geq 1$ and a finite set $F \subset \G$ containing $S \cup S^{-1} \cup \set{\mathbf{1}}$ with the following property:
\smallskip
\noindent Let $n \geq n_0$ and let $D \subset \G$ be a finite set. Set $F^\ast \coloneqq F^{\log^\ast n}$ and suppose that $D \supseteq F^\ast$. If $n \geq 2|D|$, then $H_{D,n}$ admits a $\mathscr{P}$-avoiding $k$-coloring.
\end{lemma}
\begin{scproof}
Without loss of generality, we may assume that $\mathbf{1} \in \mathrm{dom}(p)$ for all $p \in \mathscr{P}$. Since each $p \in \mathscr{P}$ is $S$-connected, we can define $\Delta_p$ to be the diameter of $G(\mathrm{dom}(p), S)$, i.e., the maximum length of a shortest path in $G(\mathrm{dom}(p), S)$ between two elements of $\mathrm{dom}(p)$. Set $\Delta \coloneqq \max_p \Delta_p$. Let $n_0$ and $F_0$ be given by Lemma~\ref{lemma:F_loc} applied to $\mathscr{P}$ and set
\begin{equation}\label{eq:F}
F \coloneqq (F_0 \cup \set{\mathbf{1}}) \left(S\cup S^{-1} \cup \set{\mathbf{1}}\right)^\Delta.
\end{equation}
Take any $n \geq n_0$ and suppose that $D \supseteq F^\ast \coloneqq F^{\log^\ast n}$. Let $F_0^\ast \coloneqq F_0^{\log^\ast n}$. Then, by Lemma~\ref{lemma:F_loc}, $X_{D,n}$ has an $F_0^\ast$-local $\mathscr{P}$-avoiding $k$-coloring $f \colon X_{D,n} \to k$, i.e., there is a map $\rho \colon n^{F_0^\ast} \to k$ such that for each $x \in X_{D,n}$,
\begin{equation}\label{eq:rho_on_X}
f(x) \,=\, \rho\left((x(\delta))_{\delta \in F_0^\ast}\right).
\end{equation}
We can simply use formula \eqref{eq:rho_on_X} to define a $k$-coloring $g$ of $H_{D,n}$; that is, for all $q \in \mathrm{Inj}(D,n)$, we let
\begin{equation}\label{eq:rho_on_H}
g(q) \,\coloneqq\, \rho\left((q(\delta))_{\delta \in F_0^\ast}\right).
\end{equation}
We claim that $g$ is $\mathscr{P}$-avoiding, as desired.
\begin{claim*}
If $q \in \mathrm{Inj}(D,n)$, then there is a point $x \in X_{D,n}$ such that $x(\delta) = q(\delta)$ for all $\delta \in D$.
\end{claim*}
\begin{stepproof}
The maximum degree of the Cayley graph $G(\G, D)$ is at most $|D \cup D^{-1}| - 1 \leq 2|D| - 1$ \ep{we are subtracting $1$ since $\mathbf{1} \in D$ does not count toward the degree}. Since $n \geq 2|D|$, we conclude that $q \colon D \to n$ can be extended to a proper $n$-coloring $x \colon \G \to n$ of $G(\G, D)$ greedily.
\end{stepproof}
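The greedy extension in the claim can be sketched concretely. The Python toy below works with the group $(\mathbb{Z}, +)$ and connection set $D \setminus \set{0}$ \ep{so $u$, $v$ are adjacent iff $0 < |u - v| \leq 2$}, restricted to a finite window; the set $D$, the number of colors, and the starting injection are illustrative choices satisfying $n \geq 2|D|$.

```python
# Finite-window sketch of the greedy extension used in the claim, with the
# group (Z, +) and connection set D \ {0}, so u, v are adjacent iff
# 0 < |u - v| <= 2.  All concrete numbers are illustrative.
D = [-2, -1, 0, 1, 2]
n = 2 * len(D)                          # n >= 2|D| colors, as in the lemma
q = {-2: 5, -1: 7, 0: 0, 1: 3, 2: 1}   # an injective "pattern" on D

x = dict(q)
# Color the window outward from 0; each new vertex sees at most 4 already
# colored neighbors, so one of the n available colors is always free.
for v in sorted(range(-10, 11), key=abs):
    if v not in x:
        used = {x[u] for u in x if 0 < abs(u - v) <= 2}
        x[v] = min(c for c in range(n) if c not in used)

# The extension is a proper coloring and agrees with q on D:
assert all(x[u] != x[v] for u in x for v in x if 0 < abs(u - v) <= 2)
assert all(x[d] == q[d] for d in D)
```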
Suppose that there is a pattern $p \in \mathscr{P}$ that occurs in $g$. This means that there is a homomorphism $\phi \colon \mathrm{dom}(p) \to \mathrm{Inj}(D,n)$ from $G(\mathrm{dom}(p), S)$ to $H_{D,n}$ such that $g(\phi(\gamma)) = p(\gamma)$ for all $\gamma \in \mathrm{dom}(p)$. By the above claim, there is a point $x \in X_{D,n}$ such that $x(\delta) = \phi(\mathbf{1})(\delta)$ for all $\delta \in D$. Since $\mathrm{dom}(p) \subseteq (S \cup S^{-1} \cup \set{\mathbf{1}})^\Delta$, equations \eqref{eq:F}, \eqref{eq:rho_on_X}, and \eqref{eq:rho_on_H} and the definition of $H_{D,n}$ yield $f(\gamma \cdot x) = g(\phi(\gamma))$ for all $\gamma \in \mathrm{dom}(p)$, so $p$ occurs in $f$, which is a contradiction.
\end{scproof}
Theorem~\ref{theo:compact} follows immediately from Lemmas~\ref{lemma:hom_exists} and \ref{lemma:H_col}. Fix an arbitrary increasing sequence $S \cup S^{-1} \cup \set{\mathbf{1}} \subseteq F_1 \subset F_2 \subset \ldots$ of finite subsets of $\G$ such that $\bigcup_{i = 1}^\infty F_i = \G$. Let $n_i \geq 1$ be any integer with
\[
n_i \,\geq\, |F_i|^{2\log^\ast n_i}.
\]
Set $D_i \coloneqq F_i^{\log^\ast n_i}$ and let $\mathscr{H} \coloneqq \set{H_{D_i, n_i}}_{i=1}^\infty$. Then Theorem~\ref{theo:compact} holds for this $\mathscr{H}$:
\begin{theocopy}{theo:compact}
Let $\mathscr{P}$ be a finite set of $S$-connected $k$-patterns. The following statements are equivalent:
\begin{itemize}
\item[\ref{item:all}] Every free zero-dimensional Polish $\G$-space admits a continuous $\mathscr{P}$-avoiding $k$-coloring.
\item[\ref{item:compact1}] There is a graph in $\mathscr{H}$ that admits a $\mathscr{P}$-avoiding $k$-coloring.
\item[\ref{item:compact_many}] All but finitely many graphs in $\mathscr{H}$ admit $\mathscr{P}$-avoiding $k$-colorings.
\end{itemize}
\end{theocopy}
\begin{scproof}
Implication \ref{item:compact_many} $\Longrightarrow$ \ref{item:compact1} is trivial, while \ref{item:compact1} $\Longrightarrow$ \ref{item:all} holds by Lemma~\ref{lemma:hom_exists} since $n_i \geq |D_i|^2$ for all $i$. Assuming \ref{item:all}, let $n_0$ and $F$ be given by Lemma~\ref{lemma:H_col} applied to $\mathscr{P}$. Then \ref{item:compact_many} holds since for all but finitely many $i$, we have $n_i \geq n_0$ and $F_i \supseteq F$.
\end{scproof}
\subsection{$\mathsf{LOCAL}$\xspace algorithms}\label{subsec:LOCAL}
In this subsection we prove Theorem~\ref{theo:LOCAL}:
\begin{theocopy}{theo:LOCAL}
Let $S \subset \G$ be a finite set and let $\mathscr{P}$ be a finite set of $S$-connected $k$-patterns. The following statements are equivalent:
\begin{itemize}
\item[\ref{item:all}] Every free zero-dimensional Polish $\G$-space admits a continuous $\mathscr{P}$-avoiding $k$-coloring.
\item[\ref{item:LOCAL}] There is a deterministic distributed algorithm in the $\mathsf{LOCAL}$\xspace model that, given an $n$-vertex $S$-labeled subgraph $G$ of $G(\G, S)$, in $O(\log^\ast n)$ rounds outputs a $\mathscr{P}$-avoiding $k$-coloring of $G$.
\end{itemize}
\end{theocopy}
Implication \ref{item:LOCAL} $\Longrightarrow$ \ref{item:all} is a special case of \cite[Theorem~2.13]{BerDist}, so we only need to prove \ref{item:all} $\Longrightarrow$ \ref{item:LOCAL}. Assume \ref{item:all} and let $n_0$ and $F$ be given by Lemma~\ref{lemma:H_col} applied to $\mathscr{P}$. Take $m \geq n_0$ so large that
\begin{equation}\label{eq:m_lower}
m \,>\, |F|^{3\log^\ast m}.
\end{equation}
Set $D \coloneqq F^{\log^\ast m}$. By Lemma~\ref{lemma:H_col}, the graph $H_{D,m}$ admits a $\mathscr{P}$-avoiding $k$-coloring $h \colon \mathrm{Inj}(D,m) \to k$. Thus, to prove \ref{item:LOCAL}, it suffices to show that there is a deterministic $\mathsf{LOCAL}$\xspace algorithm that, given an $n$-vertex $S$-labeled subgraph $G$ of $G(\G, S)$, in $O(\log^\ast n)$ rounds outputs a homomorphism $G \to H_{D,m}$ \ep{since composing such a homomorphism with $h$ requires no additional rounds of communication}.
Our algorithm is supposed to output a homomorphism $G \to H_{D,m}$. In other words, each vertex $x \in V(G)$ has to compute an injective mapping $q_x \colon D \to m$ so that if $x$ is joined to $y$ by an edge with label $\sigma$, then $q_x$ and $q_y$ are $\sigma$-compatible. It is tempting to use the same strategy as in the proof of Lemma~\ref{lemma:hom_exists}, i.e., to first compute a locally injective $m$-coloring of $G$ and then make each vertex $x$ collect the colors within some finite radius around $x$ in $G$. Unfortunately, this approach does not quite work, because the set $D x$ may not be a subset of $V(G)$. Furthermore, there may be some $y \in D x$ whose distance to $x$ in $G$ is much larger than in $G(\G, S)$, so $x$ cannot find out the color of $y$ within a small number of rounds. We circumvent this difficulty by computing a homomorphism $G \to H_{D,m}$ directly.
Let us start by introducing some useful notation. Let $\lambda$ be the edge labeling on $G$. We say that pairs $(x,\delta)$, $(y,\delta') \in V(G) \times D$ are \emphd{one-step equivalent}, in symbols $(x,\delta) \sim_1 (y, \delta')$, if $x$ and $y$ are adjacent and $\delta = \delta' \lambda(x,y)$. The transitive closure of the relation $\sim_1$ is denoted by $\sim$. Explicitly, $(x,\delta) \sim (y, \delta')$ if and only if either $(x, \delta) = (y, \delta')$ or there exists a sequence $(z_1, \delta_1)$, \ldots, $(z_t, \delta_t)$ such that \[(x, \delta) \sim_1 (z_1, \delta_1) \sim_1 \cdots \sim_1 (z_t, \delta_t) \sim_1 (y, \delta').\]
If $(x,\delta) \sim (y, \delta')$, we say that $(x,\delta)$ and $(y, \delta')$ are \emphd{equivalent}. Observe that a mapping $x \mapsto q_x$ is a homomorphism from $G$ to $H_{D,m}$ if and only if $q_x(\delta) = q_y(\delta')$ whenever $(x,\delta) \sim (y, \delta')$.
Let us establish a few simple facts about the relation $\sim$.
\begin{big_claim}\label{claim:one}
The following statements are valid:
\begin{enumerate}[label={\ep{\itshape\alph*}}]
\item\label{item:y} For every $x \in V(G)$ and $\delta$, $\delta' \in D$, there is at most one $y \in V(G)$ such that $(x,\delta) \sim (y, \delta')$.
\item\label{item:delta} For every $x$, $y \in V(G)$ and $\delta \in D$, there is at most one $\delta' \in D$ such that $(x,\delta) \sim (y, \delta')$.
\end{enumerate}
\end{big_claim}
\begin{scproof}
Recall that $G$ is a subgraph of the Cayley graph $G(\G, S)$. Therefore, $x$ and $y$ are elements of the group $\G$, and if $(x,\delta) \sim (y, \delta')$, then we can write $\delta x = \delta' y$, so $y = (\delta')^{-1}\delta x$ and $\delta' = \delta x y^{-1}$.
\end{scproof}
For $x \in V(G)$, let $[x] \coloneqq \set{y \in V(G) \,:\, \text{$(x, \delta) \sim (y, \delta')$ for some $\delta$, $\delta ' \in D$}}$. Note that the relation ``$y \in [x]$'' is reflexive and symmetric, but not necessarily transitive.
\begin{big_claim}\label{claim:close}
For every $x \in V(G)$ and $y \in [x]$, the graph distance between $x$ and $y$ in $G$ is at most $|D|$.
\end{big_claim}
\begin{scproof}
If $y = x$, then we are done. Otherwise, there is a sequence $(z_1, \delta_1)$, \ldots, $(z_t, \delta_t)$ such that
\begin{equation}\label{eq:chain}
(x, \delta) \sim_1 (z_1, \delta_1) \sim_1 \cdots \sim_1 (z_t, \delta_t) \sim_1 (y, \delta').
\end{equation}
By minimizing $t$, we may assume that the pairs $(x, \delta)$, $(z_1, \delta_1)$, \ldots, $(z_t, \delta_t)$, $(y, \delta')$ are pairwise distinct. By Claim~\ref{claim:one}\ref{item:y}, this implies that the elements $\delta$, $\delta_1$, \ldots, $\delta_t$, $\delta'$ are also pairwise distinct. Therefore, $t + 2 \leq |D|$. From \eqref{eq:chain}, we see that the distance between $x$ and $y$ is at most $t+1 < |D|$.
\end{scproof}
Let $G'$ denote the graph with the same vertex set as $G$ in which two distinct vertices $x$, $y$ are adjacent if and only if there is $z \in V(G)$ such that $z \in [x]$ and $y \in [z]$ \ep{this includes the case when $z = x$ and $y \in [x]$}. By Claim~\ref{claim:one}\ref{item:y}, $|[x]| \leq |D|^2$ for all $x \in V(G)$, so the maximum degree of $G'$ is at most
\[
N \,\coloneqq\, |D|^4 \,=\, O(1).
\]
By Claim~\ref{claim:close}, a single communication round in the $\mathsf{LOCAL}$\xspace model on $G'$ can be simulated by $2|D| = O(1)$ rounds on $G$. Hence, by Theorem~\ref{theo:d+1}, we can compute a proper $(N + 1)$-coloring $\phi \colon V(G) \to (N + 1)$ of $G'$ in $O(\log^\ast n)$ rounds. For $0 \leq i \leq N$, let $X_i \coloneqq \phi^{-1}(i)$.
We shall compute the desired homomorphism from $G$ to $H_{D,m}$ in $N + 1$ stages indexed by $0$, $1$, \ldots, $N$. At the start of stage $i$, each vertex $x$ will have already computed the values $q_x(\delta)$ for some subset of $\delta \in D$, subject to the following requirement:
\begin{quote}
\emph{If $(x, \delta) \sim (y, \delta')$, then $q_x(\delta) = q_y(\delta')$ whenever at least one of $q_x(\delta)$ and $q_y(\delta')$ is defined.}
\end{quote}
During stage $i$, we have to compute $q_x(\delta)$ for all $x \in X_i$ and $\delta \in D$. To this end, each vertex $x \in X_i$ considers the elements $\delta \in D$ one by one and performs the following procedure for each of them. If $q_x(\delta)$ is already defined, then there is nothing to do. Otherwise, by Claim~\ref{claim:close}, in $|D|$ rounds $x$ can determine the following set:
\[
B \, \coloneqq\, \set{q_y(\epsilon) \,:\, \text{$y \in [x]$, $\epsilon \in D$, and $q_y(\epsilon)$ is defined}}.
\]
Since $|[x]| \leq |D|^2$, we have $|B| \leq |D|^3 < m$ by \eqref{eq:m_lower}, so $x$ can pick a color $\alpha < m$ that is not in $B$ and set $q_x(\delta) \coloneqq \alpha$. Then in $|D|$ rounds $x$ can notify each $y \in [x]$ so that if $(x,\delta) \sim (y, \delta')$, then $y$ sets $q_y(\delta') \coloneqq \alpha$ \ep{such $\delta'$ is unique by Claim~\ref{claim:one}\ref{item:delta}}. By the choice of $\alpha$, the mappings $q_y \colon D \rightharpoonup m$ for all $y \in [x]$ remain injective after this procedure. Notice also that since the set $X_i$ is $G'$-independent, all the elements of $X_i$ can run this procedure in parallel without creating any conflicts.
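The color-picking step at the heart of this procedure can be sketched as follows (a sequential toy illustration, not a $\mathsf{LOCAL}$-model implementation; the function name is ours):

```python
def pick_free_color(used_colors, m):
    """Return the smallest color alpha < m not appearing in used_colors.

    In the proof, used_colors plays the role of the set B, and the bound
    |B| <= |D|**3 < m guarantees that a free color always exists.
    """
    used = set(used_colors)
    for alpha in range(m):
        if alpha not in used:
            return alpha
    raise ValueError("no free color available: need |used_colors| < m")
```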
After $(N+1)$ stages, we will have computed a homomorphism $G \to H_{D, m}$. Note that each stage takes $O(1)$ rounds, and there are $O(1)$ stages, so the total required number of communication rounds is
\[
\underbrace{O(\log^\ast n)}_{\text{computing $\phi$}} \,+\, O(1) \,=\, O(\log^\ast n),
\]
and the proof is complete.
\printbibliography
\end{document}
\section*{Introduction}
\label{sec:intro}
Over the last 30 years, molecular dynamics (MD) simulations of biological macromolecules have advanced our understanding of their structure and function~\cite{Karplus2002}. Today, MD simulations are an essential tool for scientific discovery in biology, chemistry, and medicine. However, they remain hampered by limited access to the timescales of biological relevance for protein folding pathways, conformational dynamics, and rare-event kinetics.
In order to resolve this bottleneck, two complementary approaches have been pursued.
Early efforts centered on innovative hardware solutions, starting with crowdsourcing of compute cycles~\cite{Shirts2000} and more recently boosted by the Anton machine~\cite{Shaw2009}, which enabled remarkable millisecond-long simulations of small biomolecules. Complementary algorithmic efforts aim to extend the accessible timescales by systematically coarse-graining the system dynamics.
One of the first such studies used the principal component or normal mode analysis to simulate the conformational changes in proteins~\cite{balsera1996principal,brooks1983harmonic,skjaerven2011principal,praprotnik2005molecular}.
Several coarse-graining (CG) methods reduce the complexity of molecular systems by modeling several atoms as a single particle~\cite{noid2013perspective,zavadlav2019bayesian,Voth:2016}.
Backmapping techniques~\cite{pezeshkian2020backmapping, stieffenhofer2020adversarial,Kremer:2006} can be subsequently utilized to recover the atomistic degrees of freedom from a CG representation.
Multiscale approaches combine the atomistic and coarse-grained/continuum models~\cite{werder2005hybrid,ayton2007multiscale,annurev2008} to augment the accessible timescales while significant efforts have focused on enhanced sampling techniques~\cite{huber1994local,voudouris1998guided, dellago1998efficient, laio2002escaping,van2003novel, maragliano2006string, jaffrelot2020high}.
Several of these methods exploit the fact that coarse kinetic dynamics on the molecular level are often governed by a few, slow collective variables (CVs) (also termed reaction coordinates)~\cite{peters2006obtaining, stamati2010application,bittracher2018data,bonati2020data}, or by transitions between a few long-lived metastable states~\cite{schutte2011markov,bittracher2018transition}.
The CVs are typically specified a priori and their choice crucially impacts the performance and success of the respective sampling methods.
Similarly to the CG models, the CVs provide a low-order representation of the molecular system, albeit without a particle representation.
CVs are of much lower dimensionality than CG models, and retrieving atomistic configurations from CVs is a more challenging problem.
While many research efforts have addressed the coarse to fine mapping in CG models, the literature is still scarce on methods to retrieve atomistic configurations from CVs.
Machine learning (ML) methods~\cite{Michie1968,Bishop2006}, exploiting the expressive power of deep networks and their scalability to large datasets, have been used to alleviate the computational burden associated with the simulation of proteins, leading to profound scientific discoveries~\cite{noe2020machine, butler2018machine, noe2020machine2}.
The pioneering work in Ref.~\cite{behler2007generalized} utilized neural networks to learn an approximate potential energy surface of density functional theory (DFT) for bulk silicon from quantum mechanical calculations, performing MD simulations with this approximate potential and thereby accelerating the DFT simulations.
The field of data-driven learning of potential energy surfaces and force fields is rapidly attracting attention with important recent extensions and applications~\cite{rupp2012fast, chmiela2018towards, schutt2018schnet, rowe2020accurate, bartok2017machine, imbalzano2018automatic, hansen2013assessment, faber2017prediction, cheng2019ab}.
ML is employed to identify CG models for MD in Refs.~\cite{zhang2018deepcg, wang2019machine, durumeric2019adversarial}.
Boltzmann generators are proposed in Ref.~\cite{noe2019boltzmann} to sample from the equilibrium distribution of a molecular system directly surpassing the need to perform MD.
Early ML methods for the identification of CVs build on the variational approach~\cite{nuske2014variational}, leading to the time-lagged independent component analysis (TICA)~\cite{perez2016hierarchical}.
TICA is based on the Koopman operator theory, suggesting the existence of a latent transformation to an infinite-dimensional space that linearizes the dynamics on average.
As a consequence, slow CVs are modeled as linear combinations of feature functions of the state of the protein (atom coordinates, or internal structural coordinates).
Coarse-graining of the molecular dynamics is achieved by discretizing the state space and employing indicator vector functions as features~\cite{buchete2008coarse, noe2011dynamical, nuske2014variational, ribeiro2018reweighted}.
Consequently, the feature state dynamics reduce to the propagation law of a Markov State Model (MSM).
More recently, the need for expert knowledge to construct the latent feature functions has been alleviated by learning the latent space with neural networks~\cite{mardt2018vampnets,wehmeyer2018time}.
The dynamics on the latent space are assumed to be linear and Markovian.
For example, VAMPnets~\cite{mardt2018vampnets,chen2019nonlinear} learn nonlinear features of the molecular state with autoencoder (AE) networks.
However, they are not generative and cannot recover the detailed configuration of the protein (decoding part).
Moreover, the method requires the construction of an MSM to sample the latent dynamics and approximate the time-scales of the dynamics.
Time-lagged AEs have been utilized in Ref.~\cite{wehmeyer2018time} to identify a reaction-coordinate embedding and propagate the dynamics, but they are not generative, as the learned mappings are deterministic, while the effective dynamics are assumed to be Markovian.
Extensions to generative approaches include Refs.~\cite{wu2018deep,hernandez2018variational,sidky2020molecular}.
In Ref.~\cite{wu2018deep}, a deep generative MSM is utilized to capture the long-timescale dynamics and sample realistic alanine dipeptide configurations.
Even though Mixture Density Networks (MDNs) are employed in Ref.~\cite{sidky2020molecular} to propagate the dynamics in the latent space, memory effects are not taken into account.
Moreover, that method is based on an autocorrelation loss, which suffers from a dependency on the batch size~\cite{hernandez2018variational}.
In Ref.~\cite{ribeiro2018reweighted}, the Reweighted autoencoded variational Bayes for enhanced sampling (RAVE) method is proposed, which alternates between iterations of MD and a variational autoencoder (VAE) model.
RAVE encodes each time step independently, without taking into account the temporal aspect of the latent dynamics.
Moreover, RAVE requires the transition to the high-dimensional configuration space to progress the simulation in time, which can be computationally expensive.
The works mentioned above assume memoryless (Markovian) latent-space dynamics by selecting an appropriate time-lag in the master equations~\cite{buchete2008coarse, noe2011dynamical}.
The time-lag is usually estimated heuristically, balancing the requirements to be large enough so that the Markovian assumption holds, and at the same time small enough to ensure that the method samples the configuration space efficiently. We remark that
in cases where a protein is interacting with a solvent, only the configuration of the protein is taken into account and not the solvent.
This renders the Markovian assumption in the latent dynamics rather unrealistic. This issue is addressed in this work by employing Long Short-Term Memory (LSTM)~\cite{hochreiter1997long} Recurrent Neural Networks (RNNs) that capture memory effects of the latent dynamics.
Here we propose a novel data-driven generative framework that relies on Learning the Effective Dynamics (LED) of the molecular systems \cite{vlachas2020learning}.
LED is founded on the equation-free framework (EFF)~\cite{kevrekidis2003equation} and it enriches it by employing ML methodologies to evolve the latent space dynamics with the Mixture Density Network - Long Short-Term Memory RNN (MDN-LSTM) and the two-way mapping between coarse and fine scales with Mixture Density Network Autoencoders (MDN-AEs)~\cite{bishop1994mixture}.
These enrichments are essential in extending the applicability of EFF to non-Markovian settings and to problems with strong nonlinearities.
We demonstrate the effectiveness of the LED framework in simulations of the M\"uller-Brown potential (MBP), the Trp Cage miniprotein, and the alanine dipeptide in water.
LED can accurately capture the statistics, and reproduce the free energy landscape from data.
Moreover, LED uncovers low-energy metastable states in the free energy projected to the latent space and recovers the transition time-scales between them.
We find that in simulations of the alanine dipeptide and the Trp Cage miniprotein, LED is three orders of magnitude faster than the classical MD solver.
As a data-driven generative method, LED has the ability to sample novel unseen configurations interpolating the training data and accelerating the exploration of the state space.
\section*{Materials and Methods}
\label{sec:method}
The LED framework \cite{vlachas2020learning} for molecular systems is founded on the equation-free framework (EFF)~\cite{kevrekidis2003equation}.
It addresses the key bottlenecks of EFF, namely the coarse-to-fine mapping and the evolution of the latent space, using an MDN-AE and an MDN-LSTM, respectively.
An illustration of the LED framework is given in Figure~\ref{fig:figures-led:led}.
In the following, the state of a molecule at time $t$ is described by a high dimensional vector $\boldsymbol{s}_t \in \Omega \subseteq \mathbb{R}^{d_{\boldsymbol{s}}}$, where $d_{\boldsymbol{s}} \in \mathbb{N}$ denotes its dimension.
The state vector can include the atom positions or their rotation/translation invariant features obtained using for example the Kabsch transform~\cite{kabsch1976solution}.
A trajectory of this system is obtained by an MD integrator and the
state of the molecule after a timestep $\Delta t$ is described by the probability distribution function (PDF):
\begin{equation}
p( \boldsymbol{s}_{t + \Delta t}| \boldsymbol{s}_{t} ).
\label{eq:transitionalpdf}
\end{equation}
The transition distribution in Equation~\ref{eq:transitionalpdf} depends on the choice of $\Delta t$.
\paragraph*{Mixture Density Network (MDN) Autoencoder (AE):} Here, the MDN-AE is utilized to identify the latent (coarse) representation and upscale it probabilistically to the high-dimensional state space. MDNs~\cite{Bishop2006} are neural architectures that can represent arbitrary conditional distributions. The MDN output is a parametrization of the distribution of a multivariate random variable conditioned on the input of the network.
The latent state is computed by $\boldsymbol{z}_t = \mathcal{E} (\boldsymbol{s}_t ; \boldsymbol{w}_{ \mathcal{E}})$, where $\mathcal{E}$ is the encoder (a deep neural network) with trainable parameters $\boldsymbol{w}_{ \mathcal{E}}$ and $\boldsymbol{z}_t \in \mathbb{R}^{d_{\boldsymbol{z}}}$ with $d_{\boldsymbol{z}} \ll d_{\boldsymbol{s}}$.
Since $\boldsymbol{z}_t$ is a coarse approximation, many states can be mapped to the same $\boldsymbol{z}_t$.
As a consequence, a deterministic mapping $\boldsymbol{z}_t \to \boldsymbol{s}_t$ like the one used in Refs.~\cite{mardt2018vampnets,wehmeyer2018time} is not suitable.
Here, an MDN is employed to model the upscaling conditional PDF $p( \boldsymbol{s}_{t} | \boldsymbol{z}_{t} )$ described by the parameters $\boldsymbol{w}_{\boldsymbol{s} | \boldsymbol{z} }$.
These parameters are the outputs of the decoder with weights $\boldsymbol{w}_{ \mathcal{D}}$ and are a function of the latent representation $\boldsymbol{z}_t$, i.e.
\begin{equation}
\boldsymbol{w}_{\boldsymbol{s} | \boldsymbol{z} } (\boldsymbol{z}_t) = \mathcal{D} (\boldsymbol{z}_t ; \boldsymbol{w}_{ \mathcal{D}}).
\label{eq:mdndecoder}
\end{equation}
The state of the molecule can then be sampled from $
p( \boldsymbol{s}_t | \boldsymbol{z}_t ) \vcentcolon= p ( \boldsymbol{s}_t ; \boldsymbol{w}_{\boldsymbol{s} | \boldsymbol{z} } )
$.
Including in the state $\boldsymbol{s}_t$ the rotation/translation-invariant features of the molecule under study~\cite{kabsch1976solution} ensures that the MDN samples physically meaningful molecular configurations.
The state $\boldsymbol{s}_t$ is composed of states representing bond lengths $\boldsymbol{s}_t^{b}\in \mathbb{R}^{d_{\boldsymbol{s}}^b}$, and angles $\boldsymbol{s}_t^{a}\in \mathbb{R}^{d_{\boldsymbol{s}}^a}$.
Initially, the MD data of the bonds are scaled to $[0,1]$.
An auxiliary variable vector $\boldsymbol{v}_t \in \mathbb{R}^{d_{\boldsymbol{s}}^b}$ is defined to model the distribution of bonds.
In particular, $p( \boldsymbol{v}_{t} | \boldsymbol{z}_{t} )$ is modeled as a Gaussian mixture model with $K_{\boldsymbol{s}}$ mixture kernels as
\begin{equation}
p( \boldsymbol{v}_t | \boldsymbol{z}_t ) =
\sum_{k=1}^{K_{\boldsymbol{s}}} \pi^{k}_{ \boldsymbol{v} }(\boldsymbol{z}_t) \, \mathcal{N} \bigg( \, \boldsymbol{\mu}_{ \boldsymbol{v} }^k(\boldsymbol{z}_t), \boldsymbol{\sigma}_{\boldsymbol{v}}^k(\boldsymbol{z}_t) \, \bigg),
\label{eq:mdnbonds}
\end{equation}
and the mapping $\boldsymbol{s}_t^b= \ln(1+\exp(\boldsymbol{v}_t))$ is used to recover the distribution of the scaled bond lengths at the output.
The mixing coefficients $\pi^{k}_{ \boldsymbol{v} }(\boldsymbol{z}_t)$, the means $\boldsymbol{\mu}_{ \boldsymbol{v} }^k(\boldsymbol{z}_t)$, and the variances $\boldsymbol{\sigma}_{\boldsymbol{v}}^k(\boldsymbol{z}_t)$ are outputs of a deep neural network, the decoder $\mathcal{D}$.
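As an illustration, sampling scaled bond lengths from the mixture above followed by the softplus map can be sketched in NumPy (a minimal sketch assuming diagonal covariances; the function name is ours, not part of the LED code):

```python
import numpy as np

def sample_bonds(pi, mu, sigma, rng):
    """Sample scaled bond lengths s^b from the Gaussian mixture for the
    auxiliary variable v, then apply s^b = ln(1 + exp(v))  (softplus).

    pi    : (K,)   mixing coefficients, summing to one
    mu    : (K, d) component means for the auxiliary variable v
    sigma : (K, d) component standard deviations (diagonal covariance)
    """
    k = rng.choice(len(pi), p=pi)        # pick one mixture component
    v = rng.normal(mu[k], sigma[k])      # sample the auxiliary variable v
    return np.log1p(np.exp(v))           # softplus -> positive bond lengths
```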
The distribution of the angles is modeled with the circular normal (von Mises) distribution, i.e.
\begin{equation}
p( \boldsymbol{s}_t^a | \boldsymbol{z}_t ) =
\sum_{k=1}^{K_{\boldsymbol{s}}} \pi^{k}_{ \boldsymbol{s}^a }(\boldsymbol{z}_t) \, \frac{
\exp \bigg(\boldsymbol{\nu}^{k}_{ \boldsymbol{s}^a }(\boldsymbol{z}_t) \, \cos \Big( \boldsymbol{s}_t^a - \boldsymbol{\mu}_{ \boldsymbol{s}^a }^k(\boldsymbol{z}_t) \Big) \bigg)
}{2 \pi I_{0} \big(\boldsymbol{\nu}^{k}_{ \boldsymbol{s}^a }(\boldsymbol{z}_t) \big)},
\label{eq:mdnangles}
\end{equation}
where $I_{0}(\boldsymbol{\nu}^{k}_{ \boldsymbol{s}^a })$ is the modified Bessel function of the first kind of order $0$.
Here again, the functional forms of $\pi^{k}_{ \boldsymbol{s}^a }(\boldsymbol{z}_t)$, $\boldsymbol{\mu}_{ \boldsymbol{s}^a }^k(\boldsymbol{z}_t)$, and $\boldsymbol{\nu}^{k}_{ \boldsymbol{s}^a }(\boldsymbol{z}_t)$ are given by a deep neural network (the decoder $\mathcal{D}$).
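Analogously, sampling angles from the von Mises mixture above can be sketched as follows (a minimal NumPy sketch; the function name is ours):

```python
import numpy as np

def sample_angles(pi, mu, nu, rng):
    """Draw angles s^a from a mixture of von Mises distributions.

    pi : (K,)    mixing coefficients, summing to one
    mu : (K, d)  mean angles of each component
    nu : (K, d)  concentration parameters of each component
    """
    k = rng.choice(len(pi), p=pi)        # pick one mixture component
    return rng.vonmises(mu[k], nu[k])    # angles in [-pi, pi]
```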
In total, the outputs of the decoder $\mathcal{D}$ that parametrize $p( \boldsymbol{s}_t | \boldsymbol{z}_t )$ are
\begin{equation}
\boldsymbol{w}_{\boldsymbol{s} | \boldsymbol{z} } = \{
\pi^{k}_{ \boldsymbol{v} }, \boldsymbol{\mu}_{ \boldsymbol{v} }^k, \boldsymbol{\sigma}_{\boldsymbol{v}}^k,
\pi^{k}_{ \boldsymbol{s}^a }, \boldsymbol{\mu}_{ \boldsymbol{s}^a }^k, \boldsymbol{\nu}^{k}_{ \boldsymbol{s}^a }
\}_{k \in \{1,\dots, K_{\boldsymbol{s}} \}},
\end{equation}
which are all functions of the latent state $\boldsymbol{z}_t$, which is the decoder input.
The MDN-AE is trained to output the distribution parameters that maximize the data likelihood
\begin{equation}
\begin{aligned}
\boldsymbol{w}_{ \mathcal{E}}, \boldsymbol{w}_{ \mathcal{D}}
=&
\underset{ \boldsymbol{w}_{ \mathcal{E}}, \boldsymbol{w}_{ \mathcal{D}} }{\operatorname{argmax}}
\,
p( \boldsymbol{s}_t | \boldsymbol{z}_t )
\\
=&
\underset{ \boldsymbol{w}_{ \mathcal{E}}, \boldsymbol{w}_{ \mathcal{D}} }{\operatorname{argmax}}
\,
p \big( \boldsymbol{s}_t ; \boldsymbol{w}_{\boldsymbol{s} | \boldsymbol{z} } \big),
\end{aligned}
\end{equation}
where $\boldsymbol{w}_{\boldsymbol{s} | \boldsymbol{z} } = \mathcal{D} \big( \mathcal{E} (\boldsymbol{s}_t ; \boldsymbol{w}_{ \mathcal{E}} ) ; \boldsymbol{w}_{ \mathcal{D}} \big)$ is the output of the MDN-AE and $\boldsymbol{s}_t$ are the MD data.
The details of the training procedure can be found in~\cite{vlachas2020backpropagation}.
\begin{figure*}[t!]
\centering
\includegraphics[width=1.0\textwidth,clip]{figures-led-led.pdf}
\centering
\caption{
High dimensional (fine scale) dynamics $\mathbf{s}_t$ are simulated for a short period ($T_{\mu}$).
During this warm-up period, the state $\mathbf{s}_t$ is passed through the encoder network.
The outputs of the encoder $\mathbf{z}_t$ provide the time-series input to the LSTM, allowing for the update of its hidden state $\mathbf{h}_t$, thus capturing non-Markovian effects.
The output of the LSTM is a parametrization of the probabilistic non-Markovian latent dynamics $p(\mathbf{z}_t | \mathbf{h}_{t})$.
Starting from the last latent state $\mathbf{z}_t$, the LSTM iteratively samples $p(\mathbf{z}_t | \mathbf{h}_{t})$ and propagates the low order latent dynamics up to a total horizon of $T_m$ time units, with $T_m>T_{\mu}$.
The LED decoder may be utilized at any desired time-scale to map the latent state $\mathbf{z}_t$ back to a high-dimensional representation $\mathbf{s}_t \sim p(\cdot \,|\, \mathbf{z}_t)$.
Propagation in the low order space unraveled by LED is orders of magnitude cheaper than evolving the high dimensional system based on first principles (molecular dynamics/density functional theory, etc.).
}
\label{fig:figures-led:led}
\end{figure*}
\paragraph*{Long Short-Term Memory recurrent neural network (LSTM)}
The latent dynamics may be characterized by non-Markovian effects, i.e.
$$p(\boldsymbol{z}_{t+\Delta t} | \boldsymbol{z}_{t}, \boldsymbol{z}_{t-\Delta t}, \dots),$$ due to the neglected degrees of freedom (solvent) or the selection of a relatively small time-lag $\Delta t$.
Here the LSTM cell architecture~\cite{hochreiter1997long} is utilized to evolve the nonlinear and non-Markovian latent dynamics.
The propagation in the LSTM is given by:
\begin{equation}
\boldsymbol{h}_{t}, \boldsymbol{c}_{t} =
\mathcal{R}
\big(
\boldsymbol{z}_t, \boldsymbol{h}_{t-\Delta t}, \boldsymbol{c}_{t-\Delta t}
;
\boldsymbol{w}_{ \mathcal{R}}
\big)
,
\label{eq:lstmrec}
\end{equation}
where the hidden-to-hidden recurrent mapping $\mathcal{R}$ takes the form
\begin{equation}
\begin{aligned}
\boldsymbol{g}^f_t &= \sigma_f \big(W_f [\boldsymbol{h}_{t-\Delta t}, \boldsymbol{z}_t ] + \boldsymbol{b}_f\big) \\
\boldsymbol{g}^{i}_t &= \sigma_i \big( W_i [\boldsymbol{h}_{t-\Delta t}, \boldsymbol{z}_t ] +\boldsymbol{b}_i \big) \\
\tilde{\boldsymbol{c}}_t &=\tanh \big( W_c [\boldsymbol{h}_{t-\Delta t}, \boldsymbol{z}_t ] +\boldsymbol{b}_c \big) \\
\boldsymbol{c}_t &=\boldsymbol{g}^f_t \odot \boldsymbol{c}_{t-\Delta t} + \boldsymbol{g}^{i}_t \odot \tilde{\boldsymbol{c}}_t \\
\boldsymbol{g}^{\boldsymbol{z}}_t &= \sigma_h \big( W_h [\boldsymbol{h}_{t-\Delta t}, \boldsymbol{z}_t ] + \boldsymbol{b}_h \big) \\
\boldsymbol{h}_t &= \boldsymbol{g}^{\boldsymbol{z}}_t \odot \tanh(\boldsymbol{c}_t),
\end{aligned}
\label{eq:lstmequations}
\end{equation}
where $\boldsymbol{g}^f_t, \boldsymbol{g}^{i}_t, \boldsymbol{g}^{\boldsymbol{z}}_t \in \mathbb{R}^{d_{\boldsymbol{h}}}$,
are the gate vector signals (forget, input and output gates),
$\boldsymbol{z}_{t} \in \mathbb{R}^{d_{\boldsymbol{z}}}$ is the latent input at time $t$,
$\boldsymbol{h}_{t} \in \mathbb{R}^{d_{\boldsymbol{h}}}$ is the hidden state,
$\boldsymbol{c}_{t}\in \mathbb{R}^{d_{\boldsymbol{h}}}$ is the cell state,
while $W_f$, $W_i$, $W_c, W_h$ $\in \mathbb{R}^{d_{\boldsymbol{h}} \times (d_{\boldsymbol{h}}+d_{\boldsymbol{z}})}$,
are weight matrices and $\boldsymbol{b}_f, \boldsymbol{b}_i, \boldsymbol{b}_c, \boldsymbol{b}_h \in \mathbb{R}^{d_{\boldsymbol{h}}}$ biases.
The symbol $\odot$ denotes the element-wise product.
The activation functions $\sigma_f$, $\sigma_i$ and $\sigma_h$ are sigmoids.
The dimension of the hidden state $d_{\boldsymbol{h}}$ (number of hidden units) controls the capacity of the cell to encode history information.
The set of trainable parameters of the LSTM are
\begin{equation}
\boldsymbol{w}_{ \mathcal{R}}
=
\{
\boldsymbol{b}_f, \boldsymbol{b}_i, \boldsymbol{b}_c, \boldsymbol{b}_h,
W_f, W_i, W_c, W_h
\}
.
\end{equation}
An illustration of the information flow in a LSTM cell is given in Figure~\ref{fig:lstm}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.4\textwidth,clip]{lstm-figure.pdf}
\caption{
Information flow in an LSTM cell.
}
\label{fig:lstm}
\end{figure}
The cell state can encode the history of the latent state evolution and capture non-Markovian effects.
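The gate equations above amount to a single update step, sketched here in NumPy (a minimal sketch with dictionary-packed weights; names are ours and do not correspond to the trained LED networks):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(z, h, c, W, b):
    """One LSTM update as in the equation group above: returns (h_t, c_t).

    W : dict of (d_h, d_h + d_z) weight matrices, keyed by 'f', 'i', 'c', 'h'
    b : dict of (d_h,) bias vectors, same keys
    """
    hz = np.concatenate([h, z])              # [h_{t-dt}, z_t]
    g_f = sigmoid(W['f'] @ hz + b['f'])      # forget gate
    g_i = sigmoid(W['i'] @ hz + b['i'])      # input gate
    c_tilde = np.tanh(W['c'] @ hz + b['c'])  # candidate cell state
    c_new = g_f * c + g_i * c_tilde          # cell-state update
    g_z = sigmoid(W['h'] @ hz + b['h'])      # output gate
    h_new = g_z * np.tanh(c_new)             # hidden-state update
    return h_new, c_new
```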
\paragraph*{Mixture Density LSTM Network (MDN-LSTM)}
The LSTM captures the history of the latent state and the non-Markovian latent transition dynamics are expressed as:
\begin{equation}
p(\boldsymbol{z}_{t+\Delta t} | \boldsymbol{z}_{t}, \boldsymbol{z}_{t-\Delta t}, \dots)
= p ( \boldsymbol{z}_{t+\Delta t} | \boldsymbol{h}_{t}),
\label{eq:latentsampling}
\end{equation}
where $\boldsymbol{h}_{t}$ is given in Equation~\ref{eq:lstmrec}.
A second MDN is used to model the conditional distribution $p ( \boldsymbol{z}_{t+\Delta t} | \boldsymbol{h}_{t})$ of the latent transition dynamics.
This MDN is conditioned on the hidden state of the LSTM $\boldsymbol{h}_{t}$ and implicitly conditioned on the history, i.e., $p(\boldsymbol{z}_{t+\Delta t} | \boldsymbol{z}_{t}, \boldsymbol{z}_{t-\Delta t}, \dots)
\vcentcolon= p ( \boldsymbol{z}_{t+\Delta t} ; \boldsymbol{w}_{\boldsymbol{z} | \boldsymbol{h}} )$, so it can capture non-Markovian dynamics.
The distribution $p ( \boldsymbol{z}_{t+\Delta t} | \boldsymbol{h}_{t})$ is modeled as a Gaussian mixture with $K_{\boldsymbol{z}}$ mixture kernels
\begin{equation}
p( \boldsymbol{z}_{t + \Delta t} | \boldsymbol{h}_{t} ) =
\sum_{k=1}^{K_{\boldsymbol{z}}} \pi^{k}_{ \boldsymbol{z} }(\boldsymbol{h}_{t}) \, \mathcal{N} \bigg( \, \boldsymbol{\mu}_{ \boldsymbol{z} }^k(\boldsymbol{h}_{t}), \boldsymbol{\sigma}_{\boldsymbol{z}}^k(\boldsymbol{h}_{t}) \, \bigg),
\label{eq:mdnlatent}
\end{equation}
with parameters $\boldsymbol{w}_{\boldsymbol{z} | \boldsymbol{h}}$ given by
\begin{equation}
\boldsymbol{w}_{\boldsymbol{z} | \boldsymbol{h}} (\boldsymbol{h}_t) = \{
\pi^{k}_{ \boldsymbol{z} } (\boldsymbol{h}_t), \boldsymbol{\mu}_{ \boldsymbol{z} }^k(\boldsymbol{h}_t), \boldsymbol{\sigma}_{\boldsymbol{z}}^k(\boldsymbol{h}_t)
\},
\end{equation}
that are a function of $\boldsymbol{h}_{t}$.
These parameters are the outputs of the neural network $\mathcal{Z}(\boldsymbol{h}_{t} ; \boldsymbol{w}_{ \mathcal{Z}})$, with trainable weights $\boldsymbol{w}_{ \mathcal{Z}}$, and are a function of the hidden state, i.e.
\begin{equation}
\begin{aligned}
& p ( \boldsymbol{z}_{t+\Delta t} | \boldsymbol{h}_{t}) \vcentcolon= p ( \boldsymbol{z}_{t+\Delta t} ; \boldsymbol{w}_{\boldsymbol{z} | \boldsymbol{h}} ), \\
& \boldsymbol{w}_{\boldsymbol{z} | \boldsymbol{h}} (\boldsymbol{h}_{t}) = \mathcal{Z}(\boldsymbol{h}_{t} ; \boldsymbol{w}_{ \mathcal{Z}})
.
\label{eq:wzch}
\end{aligned}
\end{equation}
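The resulting autoregressive propagation of the latent dynamics can be sketched as follows (a minimal sketch in which `lstm_step` and `mdn_head` are stand-ins for the trained networks $\mathcal{R}$ and $\mathcal{Z}$; all names are ours):

```python
import numpy as np

def propagate_latent(z0, h0, c0, lstm_step, mdn_head, n_steps, rng):
    """Autoregressive latent rollout: update the LSTM hidden state, sample
    z_{t+dt} from the Gaussian mixture parametrized by mdn_head(h), and
    feed the sample back as the next input.

    lstm_step : (z, h, c) -> (h, c), stand-in for the recurrent mapping R
    mdn_head  : h -> (pi, mu, sigma), stand-in for the MDN network Z
    """
    z, h, c = z0, h0, c0
    traj = [z0]
    for _ in range(n_steps):
        h, c = lstm_step(z, h, c)          # hidden-state update
        pi, mu, sigma = mdn_head(h)        # mixture parameters w_{z|h}
        k = rng.choice(len(pi), p=pi)      # pick a mixture component
        z = rng.normal(mu[k], sigma[k])    # sample z_{t+dt}
        traj.append(z)
    return np.array(traj)
```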
The weights of the LSTM $\boldsymbol{w}_{ \mathcal{R}}$ and the latent MDN $\boldsymbol{w}_{ \mathcal{Z}}$ are trained to output the parameters $\boldsymbol{w}_{\boldsymbol{z} | \boldsymbol{h}}$ that maximize the likelihood of the latent evolution
\begin{equation}
\begin{aligned}
\boldsymbol{w}_{ \mathcal{R}}, \boldsymbol{w}_{ \mathcal{Z}}
=
& \, \underset{ \boldsymbol{w}_{ \mathcal{R}}, \boldsymbol{w}_{ \mathcal{Z}} }{\operatorname{argmax}}
\, p \big( \boldsymbol{z}_{t + \Delta t} ; \boldsymbol{w}_{\boldsymbol{z} | \boldsymbol{h}}
\big),
\label{eq:wrwz}
\end{aligned}
\end{equation}
where $\boldsymbol{w}_{\boldsymbol{z} | \boldsymbol{h}}$ is defined in Equation~\ref{eq:wzch}, and $\boldsymbol{h}_t$ appearing in Equation~\ref{eq:wzch} is defined in Equation~\ref{eq:lstmrec}.
During the training phase, the MD trajectory data $\boldsymbol{s}_t$ are provided at the input of the trained MDN-AE $\boldsymbol{z}_{t} = \mathcal{E}(\boldsymbol{s}_t ; \boldsymbol{w}_{ \mathcal{E}})$.
The encoder outputs the latent dynamics $\boldsymbol{z}_t$ that are used to update the hidden state of the LSTM and optimize its weights according to Equation~\ref{eq:wrwz}.
In contrast to the linear operator utilized in MSMs, the recurrent functional form in Equation~\ref{eq:lstmrec} can be nonlinear and incorporate memory effects, via the hidden state of the LSTM.
\paragraph*{Learned Effective Dynamics}
The LED framework can be employed to accelerate MD simulations, enabling a more efficient exploration of the state space and the discovery of novel protein configurations (shown in SM Section~\ref{sec:appendix:alanine:novelconf}).
The networks in LED are trained on trajectories from MD simulations in two phases.
First, the MDN-AE provides a reduced-order representation, maximizing the data likelihood (Ref.~\cite{vlachas2020learning}).
The MDN-AE is trained with Backpropagation~\cite{rumelhart1985learning} using the adaptive stochastic optimization method Adam~\cite{kingma2014adam}.
We found that adding a pre-training phase, in which the kernel parameters $\boldsymbol{\mu}^k, \boldsymbol{\sigma}^k$ of the MDN-AE are fitted to the data and subsequently kept fixed during MDN-AE training, leads to better results.
Next, the MDN-LSTM is trained to forecast the latent space dynamics (the MDN-AE weights are considered fixed) to maximize the latent data likelihood.
The MDN-LSTM is trained with backpropagation through time (BPTT)~\cite{werbos1988generalization} using the Adam optimizer.
The LED propagates the computationally inexpensive dynamics on its latent space.
Starting from an initial state from a test dataset (unseen during training), a short time history $T_{\mu}$ of the state evolution is utilized to warm up the hidden state of the LED.
The MDN-LSTM is used to propagate the latent dynamics for a time horizon $T_m\gg T_{\mu}$.
High-dimensional state configurations can be recovered at any time instant by using the probabilistic decoder part of MDN-AE. We find that the LED framework can accelerate MD simulations by three orders of magnitude.
\section*{Results}
\label{sec:results}
The LED framework is tested on three systems: single-particle Langevin dynamics in the two-dimensional MBP, the Trp Cage miniprotein, and the alanine dipeptide, all widely adopted as benchmarks for molecular dynamics modeling~\cite{muller1979location,nuske2014variational,mardt2018vampnets,wehmeyer2018time,sidky2020molecular}.
\paragraph*{M\"uller-Brown potential (MBP)}
\label{sec:mbp}
The Langevin dynamics of a particle in the MBP are characterized by the stochastic differential equation
\begin{equation}
m \ddot{\boldsymbol{x}}(t) = -\nabla V \big(\boldsymbol{x}(t) \big) - \gamma \dot{\boldsymbol{x}}(t)+ \sqrt{2 k_B T} R(t),
\end{equation}
where $\boldsymbol{x} \in \mathbb{R}^2$ is the position, $\dot{\boldsymbol{x}}$ is the velocity, $\ddot{\boldsymbol{x}}$ is the acceleration, $V(\boldsymbol{x})$ is the MBP (defined in SM Section~\ref{sec:appendix:mbp}), $k_B$ is the Boltzmann constant, $T$ is the temperature, $\gamma$ is the damping coefficient, and $R(t)$ is a delta-correlated stationary Gaussian process with zero mean.
The nature of the dynamics is affected by the damping coefficient $\gamma$.
Low damping coefficients lead to an inertial regime.
High damping factors lead to a diffusive regime (Brownian motion) with less prominent memory effects.
Here, a low damping $\gamma=1$ is considered, along with $k_B T=15$.
The equations are integrated with the Velocity Verlet algorithm with timestep $\delta t = 10^{-2}$, starting from $96$ initial conditions sampled uniformly at random from $\boldsymbol{x} \in [-1.5, 1.2] \times [-0.2, 2]$, up to $T=10^4$, after truncating an initial transient period of $\tilde{T}=10^3$.
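This setup can be sketched as follows, assuming the standard MBP parametrization of Ref.~\cite{muller1979location} (the paper defines $V$ in its SM) and using a simple Euler--Maruyama discretization instead of Velocity Verlet for brevity; all names are ours:

```python
import numpy as np

# Standard Mueller-Brown parameters (assumed; the paper's SM defines V).
A  = np.array([-200.0, -100.0, -170.0, 15.0])
a  = np.array([-1.0, -1.0, -6.5, 0.7])
b  = np.array([0.0, 0.0, 11.0, 0.6])
c  = np.array([-10.0, -10.0, -6.5, 0.7])
x0 = np.array([1.0, 0.0, -0.5, -1.0])
y0 = np.array([0.0, 0.5, 1.5, 1.0])

def V(p):
    """Mueller-Brown potential at p = (x, y)."""
    dx, dy = p[0] - x0, p[1] - y0
    return np.sum(A * np.exp(a * dx**2 + b * dx * dy + c * dy**2))

def grad_V(p):
    """Analytic gradient of the Mueller-Brown potential."""
    dx, dy = p[0] - x0, p[1] - y0
    E = A * np.exp(a * dx**2 + b * dx * dy + c * dy**2)
    return np.array([np.sum(E * (2.0 * a * dx + b * dy)),
                     np.sum(E * (b * dx + 2.0 * c * dy))])

def langevin_step(p, v, dt, gamma, kBT, rng, m=1.0):
    """One Euler-Maruyama step of m x'' = -grad V - gamma x' + sqrt(2 kBT) R."""
    noise = np.sqrt(2.0 * kBT * dt) * rng.normal(size=2)
    v = v + (dt * (-grad_V(p) - gamma * v) + noise) / m
    return p + dt * v, v
```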
The data are sub-sampled keeping every 50\textsuperscript{th} data point to create the training and testing datasets for LED.
The coarse time-step of LED is $\Delta t = 0.5$. We use
$32$ initial conditions for training, $32$ for validation and all $96$ for testing.
LED is trained with a one-dimensional reduced order latent representation $\boldsymbol{z}_t \in \mathbb{R}$.
The reader is referred to the SM Section~\ref{sec:appendix:bmp:hyp} for further information regarding the MBP parameterization of Ref.~\cite{muller1979location} and hyperparameters of LED.
The MBP is shown in Figure~\ref{fig:bmp:bmp_density_clusters}, along with a density scatter plot of the joint distribution of the MBP states computed from the testing data and LED.
The joint distribution reveals two long-lived metastable states that correspond to the low-energy regions.
The LED learns to transition probabilistically between the metastable states, mimicking the dynamics of the system and reproducing the state statistics.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.9\textwidth,clip]{figures-bmp-bmp_density_clusters.pdf}
\caption{
From left to right: the M\"uller-Brown potential, a scatter plot of the joint state distribution computed from reference data (with annotation of two long-lived metastable states), and the same scatter plot obtained by LED sampled trajectories.
}
\label{fig:bmp:bmp_density_clusters}
\end{figure*}
The free energy projected on the latent space, i.e., $F=-k_B T \log p(\boldsymbol{z}_t)$, is plotted in Figure~\ref{fig:bmp:bmp_free_energy_clusters}.
The free energy profile of the trajectories sampled from LED closely matches that of the reference data, with a root mean square error of $\approx 0.74\, k_B T$ between the two profiles.
LED reveals two minima in the free energy profile.
Utilizing the LED decoder, the latent states in these regions are mapped to their images in the two-dimensional state representation $\boldsymbol{s}_t \in \mathbb{R}^2$ (here corresponding to $\boldsymbol{x}_t \in \mathbb{R}^2$) in Figure~\ref{fig:bmp:bmp_free_energy_clusters}.
LED maps the low-energy regions in the free energy profile to the long-lived metastable states in the two-dimensional space of the MBP.
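The free-energy projection used here amounts to an estimate of the latent density; a minimal sketch, assuming a simple histogram estimator (the paper does not specify which density estimator it uses):

```python
import numpy as np

def free_energy_profile(z, kBT=1.0, bins=50):
    """Histogram estimate of F(z) = -kBT * log p(z) from latent samples,
    shifted so that the global minimum is at zero. Empty bins get F = +inf.
    (Illustrative; the paper does not specify its density estimator.)"""
    hist, edges = np.histogram(z, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    with np.errstate(divide="ignore"):  # log(0) -> -inf for empty bins
        F = -kBT * np.log(hist)
    return centers, F - F.min()
```

Applied to samples of the one-dimensional latent state $\boldsymbol{z}_t$, the minima of the resulting profile mark the metastable regions.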
\begin{figure*}[h!]
\centering
\includegraphics[width=0.8\textwidth,clip]{figures-bmp-bmp_free_energy_clusters.pdf}
\centering
\caption{
Middle: free energy profile projected on the latent space learned by the LED encoder, i.e., $F=-k_B T \log p(\boldsymbol z_t)$.
The free energy profile computed by LED (propagation of the latent dynamics with LED) closely matches the one from the reference data.
Quantitatively, the root mean square error is $0.74\, k_B T$.
LED recovers two low-energy regions that are mapped to the two long-lived metastable states (left and right) in the two-dimensional state space $\boldsymbol{s}_t \in \mathbb{R}^2$.
}
\label{fig:bmp:bmp_free_energy_clusters}
\end{figure*}
Next, we evaluate the LED framework in reproducing the transition times between the long-lived states.
In LED, metastable states can be defined either on the reduced-order latent space $\boldsymbol{z}_t\in \mathbb{R}$ or on the state space $\boldsymbol{s}_t \in \mathbb{R}^2$ (as the decoder can map any latent state to the state space).
In the following, two metastable states are defined as ellipses on the state space depicted in Figure~\ref{fig:bmp:bmp_density_clusters} (defined in the SM Section~\ref{sec:appendix:bmp:metastablestatescenters}).
The time-scales will vary depending on the definition of the metastable states in the phase space.
The distribution of transition times computed from LED trajectories is compared with the transition time distribution from the test data in Figure~\ref{fig:bmp:iterative_latent_forecasting_test_MBP_trans_time}.
LED qualitatively captures the transition time distributions, and the mean values are close to each other.
In SM Section~\ref{sec:appendix:bmp:ledlatentspacetimescale}, we also report the transition times obtained with the metastable states defined on the latent space.
This approach has the benefit of not requiring prior knowledge of the metastable states in the state space.
In conclusion, LED captures the joint state distribution on the MBP and matches the timescales of the system.
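The transition-time statistics above can be sketched as follows. The ellipse parameters below are hypothetical stand-ins for those given in the SM, and since conventions for transition times vary, here they are measured between successive entries into the two states:

```python
import numpy as np

def in_ellipse(points, center, radii):
    """True where points lie inside an axis-aligned ellipse (hypothetical
    state definition; the paper's ellipses are specified in its SM)."""
    return np.sum(((points - center) / radii) ** 2, axis=-1) <= 1.0

def transition_times(traj, states, dt=1.0):
    """Times between successive entries into the two metastable states of a
    (T, d) trajectory; frames outside both states keep no label and are
    skipped (core-set-style assignment). Conventions vary."""
    labels = np.full(len(traj), -1)
    for s, (center, radii) in enumerate(states):
        labels[in_ellipse(traj, np.asarray(center), np.asarray(radii))] = s
    t01, t10 = [], []
    cur, t_enter = -1, 0
    for t, lab in enumerate(labels):
        if lab != -1 and lab != cur:
            if cur == 0 and lab == 1:
                t01.append((t - t_enter) * dt)
            elif cur == 1 and lab == 0:
                t10.append((t - t_enter) * dt)
            cur, t_enter = lab, t
    return t01, t10
```

Running this on both the reference and the LED-sampled trajectories yields the two histograms compared in Figure~\ref{fig:bmp:iterative_latent_forecasting_test_MBP_trans_time}.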
\begin{figure}[t!]
\centering
\includegraphics[width=0.45\textwidth,clip]{figures-bmp-iterative_latent_forecasting_test_BMP_trans_time_from_0_to_1_cropped.pdf}
\hfill
\includegraphics[width=0.45\textwidth,clip]{figures-bmp-iterative_latent_forecasting_test_BMP_trans_time_from_1_to_0_cropped.pdf}
\caption{
Distribution of the transition times learned by LED \tkey{figures-bmp-bar_blue}{20pt}{0.2pt}, computed from sampled trajectories, matches the original fine scale transition times of the MBP dynamics \tkey{figures-bmp-bar_green}{20pt}{0.2pt}.
Left: Histogram of $T_{0 \to 1}$.
Mean $T_{0\to1}$ of MD trajectories is $61$, mean $T_{0\to1}=91$ for LED.
Right: Histogram of $T_{1 \to 0}$.
Mean $T_{1 \to 0}$ of MD trajectories is $188$, mean $T_{1 \to 0}=164$ for LED.
LED has learned to propagate the effective dynamics (a one-dimensional latent state $\boldsymbol{z}$) and to capture the non-Markovian effects.
}
\label{fig:bmp:iterative_latent_forecasting_test_MBP_trans_time}
\end{figure}
\paragraph*{Trp Cage}
\label{sec:trp}
The Trp-cage is considered a prototypical miniprotein for the study of protein folding~\cite{sidky2020molecular}.
The protein is simulated with MD~\cite{guzman2019espressopp} with a time-step $\delta t=1\text{fs}$, up to a total time of $T=100\text{ns}$.
The data is sub-sampled at $\Delta t=0.1\text{ps}$, creating a trajectory with $N=10^6$ samples.
The data is divided into $248$ sequences of $4000$ samples ($T=400\text{ps}$ each).
The first $96$ sequences are used for training (corresponding to $38.4\text{ns}$), the next $96$ sequences for validation, while all the data is used for testing.
The protein positions are transformed into rototranslational invariant features (internal coordinates), composed of bonds, angles, and dihedral angles, leading to a state with dimension $d_{\boldsymbol{s} }=456$.
LED is trained with a latent space $\boldsymbol{z}_t \in \mathbb{R}^2$, i.e., $d_{\boldsymbol{z}}=2$.
LED is tested by starting from the initial condition in each of the $248$ test sequences, iteratively propagating the latent space to forecast $T=400\text{ps}$.
For more information on the hyperparameters of LED, refer to the SM Section~\ref{sec:appendix:trp:hyp}.
The projection of the MD trajectory data to the LED latent space is illustrated in Figure~\ref{fig:trp:latent_dynamics_free_energy_test} (left), in the form of the free energy, i.e., $F=-k_B T \log p(\boldsymbol{z}_t)$, with $\boldsymbol{z}_t = (z_1, z_2)^T \in \mathbb{R}^2$.
The free energy on the latent space computed from trajectories sampled from LED is given in Figure~\ref{fig:trp:latent_dynamics_free_energy_test} on the right.
LED successfully captures the three metastable states of the Trp Cage miniprotein, while being three orders of magnitude faster than the MD solver.
Quantitatively, the two profiles agree up to an error margin of approximately $22.5\, k_B T$.
The SM Section~\ref{sec:appendix:trp} provides additional results on the agreement of the marginal state distributions, and realistic samples of the protein configuration sampled from LED.
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\textwidth]{figures-trp-trp_latent_space.pdf}
\caption{
Free energy projection on the latent space, $F=-k_B T \log p(\boldsymbol{z}_t)$, with $\boldsymbol{z}_t \in \mathbb{R}^2$.
Left: MD data projected to the LED latent space.
Right: the free energy of trajectories sampled from LED.
LED captures the free energy profile.
}
\label{fig:trp:latent_dynamics_free_energy_test}
\end{figure}
\paragraph*{Alanine dipeptide}
\label{sec:alanine}
The alanine dipeptide is often used as a testing ground for enhanced sampling methods~\cite{MacCarthy:2017}.
LED is evaluated in learning and propagating the dynamics of alanine dipeptide in water.
The molecule is simulated with MD~\cite{guzman2019espressopp} with a time-step $\delta t=1\text{fs}$, up to $T=100\text{ns}$.
We subsample the data, keeping every $100$\textsuperscript{th} datapoint, creating a trajectory with $N=10^6$ samples.
LED is thus operating on a timescale $\Delta t=0.1\text{ps}$.
The data is divided into $248$ sequences of $4000$ samples ($T=400\text{ps}$ each).
The first $96$ sequences are used for training (corresponding to $38.4\text{ns}$), the next $96$ sequences for validation, while all the data is used for testing.
LED is tested by starting from the initial condition in each of the $248$ test sequences, iteratively propagating the latent space to forecast $T=400\text{ps}$.
The dipeptide positions are transformed into rototranslational invariant features (internal coordinates), composed of bonds, angles, and dihedral angles, leading to a state with dimension $d_{\boldsymbol{s} }=24$.
In order to demonstrate that LED can uncover the dynamics in a drastically reduced-order latent space, the dimension of the latter is set to one, $d_{\boldsymbol{z}}=1$, i.e., $\boldsymbol{z}_t \in \mathbb{R}$.
For more information on the hyperparameters of LED, refer to the SM Section~\ref{sec:appendix:alanine:hyp}.
The metastable states of the dynamics are represented in terms of the energetically favored regions in the state space of two backbone dihedral angles, $\phi$ and $\psi$, i.e., the Ramachandran space~\cite{ramachandran1963stereochemistry} plotted in Figure~\ref{fig:alanine:RamachandranPlot_badNOrotTr_waterNVT}.
Specifically, previous works consider five low-energy clusters, i.e., $\{C_5, P_{II},\alpha_R, \alpha_L, C_7^{ax} \}$.
The trained LED qualitatively reproduces the density in the Ramachandran plot in Figure~\ref{fig:alanine:RamachandranPlot_badNOrotTr_waterNVT}, identifying the three dominant low-energy metastable states $\{C_5, P_{II},\alpha_R \}$.
LED, however, fails to capture the state density on the states $\{\alpha_L, C_7^{ax}\}$ that are less frequently observed in the training data.
The marginal distributions of the trajectories generated by LED match the ground-truth ones (MD data) closely, as depicted in Figure~\ref{fig:alanine:iterative_latent_forecasting_state_dist_bar} in the SM Section~\ref{sec:appendix:alanine}.
Even though LED is propagating a one-dimensional latent state, it can reproduce the statistics while being three orders of magnitude faster than the MD solver.
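The Ramachandran coordinates $(\phi, \psi)$ are backbone dihedral angles, each defined by four consecutive backbone atoms. A standard self-contained routine (not the authors' feature-extraction code) is:

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Dihedral angle in (-pi, pi] defined by four atom positions,
    with 0 for the cis (eclipsed) configuration."""
    b0 = p0 - p1
    b1 = p2 - p1
    b2 = p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    # components of b0 and b2 perpendicular to the central bond b1
    v = b0 - np.dot(b0, b1) * b1
    w = b2 - np.dot(b2, b1) * b1
    return np.arctan2(np.dot(np.cross(b1, v), w), np.dot(v, w))
```

Applying this to the $(C, N, C_\alpha, C)$ and $(N, C_\alpha, C, N)$ backbone quadruples gives the $\phi$ and $\psi$ angles spanning the Ramachandran space.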
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\textwidth]{figures-alanine-iterative_latent_forecasting_ramachandran_distr_scatter_test_ALL_normed_paper.pdf}
\centering
\caption{
Ramachandran plot of the alanine dipeptide, i.e., space spanned by two backbone dihedral angles $(\phi, \psi)$.
Scatter plots are colored based on the joint density of $(\phi, \psi)$.
Left: test data.
Right: LED trajectories.
We observe five energetically favorable metastable states denoted by $\{C_5, P_{II},\alpha_R, \alpha_L, C_7^{ax} \}$.
LED captures the three dominant metastable states $\{C_5, P_{II},\alpha_R \}$.
The states $\{ \alpha_L, C_7^{ax} \}$ are rarely observed in the training data.
}
\label{fig:alanine:RamachandranPlot_badNOrotTr_waterNVT}
\end{figure}
The free energy is projected to the latent space, i.e., $F=-k_B T \log p(\boldsymbol{z}_t)$, and plotted in Figure~\ref{fig:alanine:alanine-clusters}.
The free energy projection computed from MD trajectories (test data) is compared with the one computed from trajectories sampled from LED.
The two free energy profiles agree up to a root mean square error of $2\, k_B T$.
Note that LED unravels three dominant minima in the latent space.
These low-energy regions correspond to metastable states of the dynamics.
The Ramachandran space $(\phi,\psi)$ is frequently used to describe the long-term behavior and metastable states of the system~\cite{wehmeyer2018time,trendelkamp2016efficient}.
The latent encoding of the LED is evaluated based on the mapping between the latent space and the Ramachandran space.
Utilizing the MDN decoder, the LED can map the latent state $\boldsymbol{z}$ to the respective rototranslational invariant features (bonds and angles) and regions in the Ramachandran plot.
As illustrated in Figure~\ref{fig:alanine:alanine-clusters}, the LED maps the three low-energy regions in the latent space to the three dominant metastable states in the Ramachandran plot, $\{C_5, P_{II}, \alpha_R \}$.
Even though LED is propagating a reduced-order one-dimensional latent state, it captures the stochastic dynamics of the system.
\begin{figure*}[h]
\centering
\includegraphics[width=1.0\textwidth]{figures-alanine-alanine-clusters.pdf}
\centering
\caption{
Plot of the free energy profile projected on the latent state learned by the LED, i.e., $F=-k_B T \log p(\boldsymbol{z}_t)$.
The latent free energy profile of MD trajectories is compared with the latent free energy profile of trajectories sampled from LED.
The two profiles agree up to a root mean square error of $2\, k_B T$.
Utilizing the LED decoder, the low-energy regions in the latent space (close to the minima) can be mapped to the corresponding protein configurations and metastable states in the Ramachandran plot.
The LED uncovers the three dominant metastable states $\{C_5, P_{II}, \alpha_R \}$ in the free energy surface (minima).
The LED captures the free energy profile and the dominant metastable states while being computationally three orders of magnitude cheaper than MD.
}
\label{fig:alanine:alanine-clusters}
\end{figure*}
In Figure~\ref{fig:alanine:blender}, a configuration randomly sampled from MD data is given for each metastable state.
The closest configuration sampled from LED is compared with the MD data sample in terms of the Root Mean Square Deviation (RMSD) score.
The LED samples realistic configurations with low RMSD errors for all metastable states.
The mean and standard deviation of the RMSD scores of the $10$ closest neighbors sampled from LED are $\mu \pm \sigma= 0.148 \pm 0.021 \mathring{A}$ for the $C_5$ MD sample configuration (Figure~\ref{fig:alanine:blender} top left).
This score for the rest of the metastable states is $0.340 \pm 0.463 \mathring{A}$ for $P_{II}$, $0.101 \pm 0.019 \mathring{A}$ for $\alpha_R$, $0.885 \pm 0.162 \mathring{A}$ for $\alpha_L$, and $0.383 \pm 0.125 \mathring{A}$ for $C_{7}^{ax}$.
The LED samples similar configurations with low RMSD scores for the most frequently observed metastable states $\{C_5, P_{II}, \alpha_R \}$.
The average RMSD error is slightly higher and fluctuates more for the less frequently observed $\{ \alpha_L, C_{7}^{ax} \}$.
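The RMSD scores above compare atomic configurations after rigid superposition; a minimal sketch using the Kabsch algorithm (the paper does not state whether or how its RMSD values are aligned, so the alignment step here is an assumption):

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (N, 3) configurations after optimal rigid
    superposition (Kabsch algorithm): center both clouds, find the
    rotation minimizing the residual via SVD, then average the residual."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(U @ Vt))  # guard against improper rotations
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1)))
```

By construction the score is invariant under rigid motions of either configuration, so it measures conformational differences only.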
\begin{figure*}[t]
\includegraphics[width=0.48\textwidth]{figures-alanine-C0_C5.pdf}
\hfill
\includegraphics[width=0.48\textwidth]{figures-alanine-C1_P_II.pdf}
\includegraphics[width=0.48\textwidth]{figures-alanine-C2_aR.pdf}
\hfill
\includegraphics[width=0.48\textwidth]{figures-alanine-C3_aL.pdf}
\caption{
For each metastable state, a random alanine dipeptide configuration sampled from MD data is compared against the closest configuration sampled from the LED with $d_{\mathbf{z}}=1$.
The Root Mean Square Deviation (RMSD) in $\mathring{A}$ between the two is plotted for reference.
}
\label{fig:alanine:blender}
\end{figure*}
The dynamics learned by LED are evaluated according to the mean first-passage times (MFPTs) between the dominant metastable states.
The MFPT is the average time needed to reach a target metastable state starting from a given initial state.
The MFPTs are computed a posteriori from trajectories sampled from the LED and the MD test trajectories, using the PyEMMA software~\cite{scherer_pyemma_2015}.
The metastable states considered here are given in the SM Section~\ref{sec:appendix:alanine:metastablestates}.
As a reference for the MFPTs, we consider an MSM fitted to the MD data (test dataset).
The reference MFPTs agree with previous literature~\cite{trendelkamp2016efficient,jang2006multiple,chekmarev2004long,wang2014exploring}.
The time-lag of the MSM is set to $\Delta t_{MSM}=10\text{ps}$ to ensure the necessary Markovianity of the dynamics.
This time-lag is two orders of magnitude larger than the timestep of LED.
Fitting an MSM with a time-lag of $\Delta t_{MSM}=1\text{ps}$ on the MD data results in very high errors ($\approx 85\%$ on average) in the computation of MFPTs.
This emphasizes the need for non-Markovian models that can reproduce the system's dynamics and statistics independent of the selection of the time-lag.
The MFPTs of trajectories sampled from LED are estimated with an MSM with a time-lag $\Delta t_{MSM}=10\text{ps}$.
Note that the LED is operating on a time-step $\Delta t = 0.1 \text{ps}$.
The MFPTs are identified with a low average error of $10.51\%$.
The results on the MFPT are summarized in Table~\ref{tbl:alanine:mfpt}.
LED captures very well the transitions that are dominant in the data, e.g., $T_{C_5 \to P_{II} }$, $T_{P_{II} \to C_5 }$, or $T_{\alpha_{R} \to C_5 }$.
In contrast, LED exhibits high MFPT errors in transitions that are rarely observed in the training data.
LED identifies the dominant MFPT successfully by utilizing a very small amount of training data ($38.4\text{ns}$ for training and $38.4\text{ns}$ validation) and propagating the latent dynamics on a reduced order space ($d_{\boldsymbol{z}}=1$).
LED trajectories are three orders of magnitude cheaper to obtain compared to MD data.
At the same time, MSM fitting is a relatively fast procedure once the clustering based on the metastable states is obtained.
In contrast, a careless selection of the MSM time-lag that fails to render the dynamics Markovian (e.g., $\Delta t =1 \text{ps}$) leads to a surrogate model that fails to capture the system time-scales.
This emphasizes the need to model non-Markovian effects with LED in case of limited data sampled at a high frequency (small time-steps $\Delta t$).
A more informative selection of the time-lag may alleviate this problem, rendering the dynamics Markovian as in the reference MSM.
Still, the consequent sub-sampling of the data can lead to omission of effects whose time-scales are smaller than the time-lag.
As a consequence, the heuristic selection of the time-lag renders the modeling process error-prone.
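Given an estimated MSM, the MFPT computation reduces to a linear system. The paper itself uses PyEMMA for this step; the helper below is an illustrative self-contained stand-in:

```python
import numpy as np

def mfpt(T, target, lag=1.0):
    """Mean first-passage time from every state to the target state(s) of a
    row-stochastic MSM transition matrix T estimated at lag time `lag`.
    Solves m_i = lag + sum_j T_ij m_j for i outside the target, with
    m_i = 0 inside it (a plain version of what MSM packages compute)."""
    n = T.shape[0]
    target = np.atleast_1d(target)
    free = np.setdiff1d(np.arange(n), target)  # non-target states
    A = np.eye(len(free)) - T[np.ix_(free, free)]
    m = np.zeros(n)
    m[free] = np.linalg.solve(A, lag * np.ones(len(free)))
    return m
```

Fitting $T$ at a too-small lag (e.g. $1 \text{ps}$ here) makes the row-stochastic model non-Markovian in effect, which is what inflates the MFPT errors reported in the text.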
\begin{table}
\caption{
Mean first-passage times (MFPT) between the metastable states of alanine dipeptide in water in [ns].
MFPTs are estimated by fitting MSMs with different time-lags ($10 \text{ps}$ and $1 \text{ps}$) on trajectories generated by MD, or the LED framework.
The average relative error is given for reference.
}
\label{tbl:alanine:mfpt}
\centering
\includegraphics[width=0.8\textwidth]{./alanine-mfpt-alanine-mfpt.pdf}
\end{table}
The SM Section~\ref{sec:appendix:alanine:metastablestateslatent} provides additional results on the MFPTs estimated based on metastable state definition in the latent space of LED (without prior knowledge).
Furthermore, the effectiveness of LED in interpolating the training data and unraveling novel configurations of the protein (state space) is also illustrated.
\section*{Discussion}
\label{sec:conclusion}
This work proposes a data-driven framework, LED, to learn and propagate the effective dynamics of molecular systems, accelerating MD simulations by orders of magnitude.
Previous state-of-the-art methods rely on a Markovian assumption on the latent state, or minimize the autocorrelation or a variational loss on the data.
The latter explicitly take into account the error on the long-term equilibrium statistics to capture the system time-scales, but suffer from a dependency on the batch size~\cite{hernandez2018variational}.
In contrast, the LED is trained to maximize the data likelihood and identify a continuous reduced-order latent representation.
The nonlinear dynamics are propagated in the latent space and the memory effects are captured through the hidden state of the LSTM.
Moreover, the method is generative, and the decoder part of the MDN-AE can be employed to sample high-dimensional configurations at any desired time-scale.
The encoder of LED is analogous to a coarse-graining model design, while the decoder implicitly learns a backmapping to atomistic configurations. The LED automates the dimensionality reduction often associated with the empirical a priori selection of collective variables in molecular simulations~\cite{maragliano2006string, wehmeyer2018time}.
At the same time the MDN-LSTM propagates the dynamics on the latent space in a form that is comparable to nonlinear, non-Markovian metadynamics~\cite{MacCarthy:2017}.
The effectiveness of LED is demonstrated for three systems.
In the case of the Langevin dynamics on the MBP, LED recovers the free energy landscape in the latent space, identifies two low-energy states corresponding to the long-lived metastable states of the potential, and captures the transition times between the metastable states.
For the Trp Cage miniprotein, LED captures the free energy projection on the latent space and unravels three metastable states.
Lastly, for the system of alanine dipeptide in water, LED captures the configuration statistics of the system accurately while being three orders of magnitude faster than MD solvers.
It identifies three low-energy regions in the free energy profile projected to the one-dimensional latent state that correspond to the three dominant metastable states $\{\alpha_R, C_5, P_{II} \}$.
LED is also able to capture the dominant mean first-passage times, in contrast to an MSM operating on the same time-scale, owing to the non-Markovian propagation of the latent state with the MDN-LSTM.
Furthermore, we showcase how our framework is capable of unraveling novel protein configurations by interpolating on the training data.
The speed-up achieved by LED depends on the MD solver used, the dimensionality, and the complexity of the protein under study.
Still, it is expected that the computationally cheap propagation in the latent space of the LED is orders of magnitude faster than the MD solver.
The success of LED paves the way for faster exploration of the conformational space of molecular systems.
Future research efforts will target the application of LED to larger proteins and the investigation of LED's capabilities and limitations in uncovering the metastable states as minima in the free energy profile.
An alternative research direction is the automatic extraction of features directly from the raw position and velocity data (not using rototranslational invariant features).
Moreover, further studies will concentrate on coupling LED with an MD solver in an alternating fashion for faster exploration of the state space.
\section*{Acknowledgments}
The authors thank Ioannis Mandralis, Pascal Weber, and Fabian Wermelinger for fruitful discussions and providing feedback.
The authors also acknowledge the support of the Swiss National Supercomputing Centre (CSCS) in providing the necessary computational resources under Project s930.
The authors declare no competing interests.
\section*{Data and materials availability}
\label{sec:code}
Code and data to reproduce the findings of this study will be made openly available in a public repository upon publication.
The PyEMMA software package~\cite{scherer_pyemma_2015} is employed in the current work for MSM fitting and MFPT estimation.
\section*{Supplementary Materials}
Section S1, Definition of the MBP, the metastable states and time-scale analysis on the LED latent space\\
Section S1, Figure 10. Marginal state statistics of LED in the MBP\\
Section S1, Table 3. AE hyperparameter tuning in the MBP\\
Section S1, Table 4. LSTM hyperparameter tuning in the MBP\\
Section S1, Table 5. LED hyperparameters in the MBP\\
Section S2, Figure 11. Marginal state statistics of LED in the Trp Cage\\
Section S2, Figure 12. Configuration sampled from LED in the Trp Cage\\
Section S2, Table 6. AE hyperparameter tuning in the Trp Cage\\
Section S2, Table 7. LSTM hyperparameter tuning in the Trp Cage\\
Section S2, Table 8. LED hyperparameters in the Trp Cage\\
Section S3, Information on the simulation of alanine, definition of the metastable states, time-scale analysis on the LED latent space, and study on unraveling novel configurations with LED\\
Section S3, Table 9. Metastable state definitions in alanine\\
Section S3, Table 10. Mean First Passage Time analysis in alanine\\
Section S3, Figure 13. Marginal state statistics of LED in alanine\\
Section S3, Figure 14. Unraveling novel configurations with LED in alanine\\
Section S3, Table 11. AE hyperparameter tuning in alanine\\
Section S3, Table 12. LSTM hyperparameter tuning in alanine\\
Section S3, Table 13. LED hyperparameters in alanine\\
\bibliographystyle{Science}
\section{Introduction and statement of results.}\label{Sec.Intro}
The space of probability measures equipped with the Wasserstein metric reflects
several geometrical properties of a metric measure space $(X,d,\mathfrak{m})$, such as
compactness, existence of geodesics, and non-negative sectional curvature (see for example \cite{AmbGig}, \cite{Vill}).
A natural question is therefore whether the Wasserstein space
can be more symmetric than the base space. Consider the following: if $g: X \rightarrow X$ is an isometry, then
it is easy to check that $g_{\#}: \Prob_p(X)\rightarrow \Prob_p(X)$ is also an isometry for any $p \in (1,\infty).$
Therefore the pushforward provides an inclusion $\Iso(X) \subset \Iso(\Prob_p(X)).$ More concretely, the question is to determine whether these two groups of
isometries are the same; in that case we will say that $X$ is isometrically rigid.
First of all, notice that for any map $g: X \rightarrow X,$ $g_{\#}\delta_x= \delta_{g(x)}$ for all $x\in X,$ i.e. the pushforward of a Dirac delta is again a Dirac delta. Hence our first approach should be to determine whether the set of Dirac deltas is invariant under isometries. Our result in this regard is:
\newtheorem*{thm:A}{Theorem \ref{teo:A}}
\begin{thm:A}
Let $(X,d,\mathfrak{m})$ be a compact metric measure space with $\GTB_p$ for some $p \in (1,\infty).$ Then for any isometry $\Phi: \Prob_p(X) \rightarrow \Prob_p(X)$ the set of Dirac deltas, $\Delta_1,$ is invariant, i.e. $\Phi(\Delta_1)=\Delta_1.$
\end{thm:A}
This result gives us as a corollary that if two compact m.m.s. $(X,d_X,\mathfrak{m})$ and $(Y,d_Y,\mathfrak{n})$ have isometric
$L^p-$Wasserstein spaces then $X$ and $Y$ must also be isometric (see Corollary \ref{cor.isometricwass}).
We will need some structure on the metric measure spaces we will be working with: we will assume compactness, non-branching of geodesics, and that the reference measure $\mathfrak{m}$ is such that the space has the so-called Good transport behaviour $\GTB_p$
for some $p \in (1,\infty).$ Loosely speaking, this last condition requires that optimal transports starting from absolutely continuous measures are given by a map (see Definition \ref{def.GTBp}). It also implies that the geodesic in $\Prob_p(X)$ induced by these transports remains inside the set of absolutely continuous measures until it reaches its endpoint. This condition was first defined in \cite{GalKellMonSosa} by Galaz-Garc\'ia, Kell, Mondino, and Sosa; later, it was investigated in more detail by Kell \cite{Kell}.
The class of metric measure spaces that satisfy $\GTB_p$ is quite rich. Examples include Riemannian manifolds, Alexandrov spaces, non-branching $MCP(K,N)$ spaces, and non-branching $RCD^*(K,N)$ spaces.
The question of determining the structure of the group of isometries of the Wasserstein space is not new. It was first posed by Kloeckner in \cite{Klo}
in the setting of Euclidean spaces, and later for Hadamard spaces in \cite{BerKlo} in collaboration with Bertrand. In the latter, isometric rigidity is proved, while in the former more exotic isometries appear, the case of the line being the most interesting (see Lemmas $5.2$ and $5.3$ in \cite{Klo}).
Some difficulties arise when working in a compact setting: most of the machinery used previously is no longer available; for example, the uniqueness of barycenters generally fails in compact spaces.
In order to obtain that all the isometries of the Wasserstein space come from isometries
of the space, we assume an additional hypothesis: namely, we will work on Riemannian manifolds
with strictly positive sectional curvature.
\newtheorem*{thm:B}{Theorem \ref{teo:B}}
\begin{thm:B}
Let $M$ be a closed Riemannian manifold with strictly positive sectional curvature. Then
it is isometrically rigid, that is, the isometry groups of $M$ and $\Prob_2 (M)$
coincide.
\end{thm:B}
It is also possible to formulate this same question for different $L^p-$Wasserstein spaces.
In the case of $\mathbb{R}$ and $[0,1]$ this has been done by Geh\'er, Titkos and Virosztek \cite{GehTitVir}. In said
paper they prove that the behaviour of the Wasserstein isometries can change depending on the exponent $p$,
the case $p=1$ being the most exotic. The same authors have also treated the cases of discrete and Hilbert spaces
in \cite{GehTitVir2} and \cite{GehTitVir3}, respectively.
In this paper we can answer this question for a certain class of Riemannian manifolds: Compact Rank One Symmetric Spaces (CROSSes). These spaces have nice properties (see Subsection \ref{subsec.CROSS}) that give us enough information on the limitations that Wasserstein isometries must have.
\newtheorem*{thm:C}{Theorem \ref{teo:C}}
\begin{thm:C}
Let $M$ be a CROSS. Then for any $p \in (1,\infty)$ the isometry groups of $M$ and $\Prob_p (M)$
coincide.
\end{thm:C}
As for other works where compactness is assumed, Virosztek \cite{Vir} proved that in the particular case of the sphere, $L^2-$Wasserstein
isometries must send Dirac deltas to Dirac deltas. Theorem \ref{teo:A} relies only on the compactness
of the space, as well as on structural properties of the optimal transport there (see Definition \ref{def.GTBp}).
The structure of the paper is the following: Section \ref{Section.Preliminaries} is devoted to the presentation of the optimal transport problem and the existence of solutions to it, together with some geometric properties of the Wasserstein space, such as the structure of the optimal transport under certain assumptions on the reference measure $\mathfrak{m}$. Section \ref{Section.Deltas} contains the proof of Theorem \ref{teo:A}. Finally, in Section \ref{Section.rigidity} we prove isometric rigidity in the context of positive curvature for $p=2$, and in the general case $p \in (1,\infty)$ for CROSSes.
\subsection*{Acknowledgements.}
The author would like to express his thanks to his advisor Prof. Luis Guijarro for valuable comments made during the development
of this paper as well as for his careful reading of earlier versions of this manuscript.
\section{Preliminaries.}\label{Section.Preliminaries}
In this section we review the concepts of optimal transport used throughout
the paper, as well as the notation. In what follows, $(M,d)$ will
be a closed Riemannian manifold equipped with its usual distance. The proofs
of the results presented in this section can be found in \cite{AmbGig}.
\subsection{Optimal Transport.}\label{Subsection.OT}
Let $\mu, \nu$ be two probability measures supported on a m.m.s. $(X,d,\mathfrak{m})$
and $p \in (1,\infty).$ Kantorovich's problem consists of minimizing the functional:
\begin{equation*}
\pi \mapsto \int d^p(x,y) d\pi(x,y) \label{Kantorovich.problem} \tag{KP}
\end{equation*}
among all admissible measures $\pi \in \Adm(\mu,\nu).$ The set $\Adm(\mu,\nu)$
consists of the measures $\pi \in \Prob (M\times M)$ that have marginals $\mu$
and $\nu,$ i.e.
$$\pi \left(A\times M\right)=\mu(A), \pi \left(M\times B\right)= \nu(B), \quad \forall A,B \in \mathcal{B}(M). $$
The intuition behind admissible plans is the following: If $\pi \in \Adm(\mu,\nu)$ then for $A\times B \in \mathcal{B}(M\times M)$ the value $\pi(A\times B)$ holds the information of how the mass from $A$ is sent to $B.$
Observe that the functional \ref{Kantorovich.problem} is linear and that $\Adm(\mu,\nu)$ is a convex closed set (in the narrow topology). It will turn out that Kantorovich's problem always has a solution (see for example Theorem $1.5$ in \cite{AmbGig}).
Measures that minimize \ref{Kantorovich.problem} will be called optimal transports (or optimal plans). The set of optimal transports
between two measures $\mu $ and $\nu$ will be denoted by $\Opt(\mu,\nu).$
Given a probability measure $\pi \in \Adm(\mu,\nu)$, it would be useful to determine whether
it is optimal or not. Intuitively, a point $(x,y) \in \supp \pi$ represents
mass that is sent from $x$ to $y.$ So if our plan is optimal, there shouldn't be a way to rearrange the points in $\supp \pi$ that decreases the value
of the functional. More rigorously, we have:
\begin{defi}\label{def.pcyclical}
Let $p \in (1,\infty).$ We say that a set $\Gamma \subset X \times X$ is $p-$cyclically monotone if for all $n \in \mathbb{N}$
and all $(x_1,y_1), \cdots, (x_n,y_n) \in \Gamma$ we have
$$\sum_{i=1}^{n}d^p(x_i,y_i) \leq \sum_{i=1}^{n}d^p(x_i,y_{\sigma (i)})$$
for all permutations $\sigma$ of $\lbrace 1, \cdots, n\rbrace.$
\end{defi}
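To see the definition in action, here is an example of our own: on the real line, "crossing" pairs already violate $2-$cyclical monotonicity for $n=2.$

```latex
% Example (ours): Gamma = {(0,1),(1,0)} in R x R is NOT 2-cyclically monotone.
% Take n = 2, (x_1,y_1) = (0,1), (x_2,y_2) = (1,0), and the transposition sigma:
\[
\sum_{i=1}^{2} d^2(x_i,y_i) = |0-1|^2 + |1-0|^2 = 2,
\qquad
\sum_{i=1}^{2} d^2(x_i,y_{\sigma(i)}) = |0-0|^2 + |1-1|^2 = 0 .
\]
% Since 2 > 0, the defining inequality fails. Consequently no optimal plan for
% p = 2 can contain both (0,1) and (1,0) in its support: crossing transports on
% the line are never optimal.
```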
It is proved in Theorem $2.13$ of \cite{AmbGig} that a plan $\pi$ is optimal if and only if its
support $\supp \pi$ is $p-$cyclically monotone.
When there exists a measurable map $T: X \rightarrow X$ such that $T_{\#}\mu= \nu$
and the plan $(Id,T)_{\#}\mu$ is optimal we will say that the optimal transport is induced by a map.
As a matter of fact, the problem of finding an optimal map $T$ is the original transport problem
posed by Monge (see for example Chapter $3$ of \cite{Vill}).
In general it is not possible to find an optimal transport induced by such a map. Brenier (for Euclidean spaces) \cite{Bre} and McCann (for Riemannian manifolds) \cite{McCann} showed that such a map exists if one takes the starting measure $\mu $ to be absolutely continuous. In the next subsection we will describe some other spaces where this is also possible.
\subsection{Wasserstein space.}
Let $p\in (1,\infty).$ Using the solutions to the Kantorovich problem \ref{Kantorovich.problem} it is
possible to define a metric on $\Prob_p (X),$ the $L^p-$Wasserstein metric:
for $\mu ,\nu \in \Prob_p(X),$
$$\W_p^p (\mu,\nu) := \min\bigg\lbrace \int d^p(x,y)d\pi \,|\, \pi \in \Adm(\mu,\nu) \bigg\rbrace. $$
\begin{Obs}
Usually $\Prob_p(X)$ denotes the space of probability measures with finite $p-$moments. Since our base spaces will always be compact,
every probability measure has finite $p-$moments for all $p.$ We keep the subscript in the notation just to stress that
we are considering the $L^p-$Wasserstein metric.
\end{Obs}
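As a first sanity check (a short verification of our own), between Dirac deltas the Wasserstein metric recovers the base distance:

```latex
% Verification (ours): the only admissible plan between delta_x and delta_y is
% the product measure, since both marginals are Dirac deltas:
\[
\Adm(\delta_x,\delta_y) = \{\delta_x \otimes \delta_y\} = \{\delta_{(x,y)}\},
\]
% hence
\[
\W_p^p(\delta_x,\delta_y) = \int d^p(u,v)\, d\delta_{(x,y)}(u,v) = d^p(x,y),
\qquad \text{i.e.} \quad \W_p(\delta_x,\delta_y) = d(x,y) .
\]
% In particular x \mapsto \delta_x is an isometric embedding of X into Prob_p(X).
```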
Given $n \in \mathbb{N}$ we define the set of totally atomic measures:
$$\Delta_n (X) := \bigg\lbrace \mu \in \Prob_p(X) \,|\, \mu = \sum_{i=1}^{n}a_i\delta_{x_i}, x_i \in X, \sum_{i=1}^{n}a_i =1, a_i>0 \bigg\rbrace. $$
It is a standard result that $\overline{\bigcup_{n \in \mathbb{N}} \Delta_n (X)}^{\W_p}= \Prob_p(X)$ (Theorem $6.18$ in \cite{Vill}). If there is no confusion on which underlying space we are working with
we will simplify the notation and use instead just $\Delta_n.$
The Wasserstein space shares many geometric properties with the base space $X;$ the first of these is that
$\Prob_p(X)$ is compact if and only if $X$ is compact.
A metric space $(X,d)$ is said to be geodesic if for every $x,y \in X$ there
exists a curve $\gamma: [0,1] \rightarrow X $ such that $\gamma_0=x, \gamma_1=y$ and
$$d(\gamma_s,\gamma_t)= |s-t|d(\gamma_0,\gamma_1), \quad s,t \in [0,1]. $$
The set of geodesics of $X$ will be denoted by $\Geo(X).$ It turns out that being geodesic
is sufficient for the existence of geodesics in $\Prob_p(X).$
\begin{teo}[\bf Ambrosio, Gigli \cite{AmbGig} ]
Let $(X,d)$ be a geodesic space. Then the Wasserstein space
$(\Prob_p(X),\W_p),$ $p\in (1,\infty),$ is geodesic as well.
\end{teo}
\begin{defi}\label{def.nonbranch}
A geodesic space $(X,d)$ will be said to be non-branching if the map
\begin{align*}
\Geo(X) &\rightarrow X \times X \\
\gamma &\mapsto (\gamma_0,\gamma_t)
\end{align*}
is injective for all $t \in (0,1).$
\end{defi}
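The standard example of a branching space (an illustration of our own, not from the text) is the tripod:

```latex
% Example (ours): a branching geodesic space. Let X be the "tripod": three unit
% segments glued at a common point o, with the length metric. Let x be the free
% endpoint of one leg, and let gamma, eta be the two geodesics of length 2 from
% x to the free endpoints of the other two legs. They travel together until o:
\[
\gamma_0 = \eta_0 = x, \qquad \gamma_t = \eta_t \ \text{ for } t \in [0,\tfrac{1}{2}],
\qquad \gamma_1 \neq \eta_1 ,
\]
% so gamma \mapsto (\gamma_0,\gamma_{1/2}) is not injective and X branches.
% By contrast, Riemannian manifolds are always non-branching: a geodesic is
% determined by its initial position and velocity.
```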
The property of being a non-branching geodesic space is inherited by the Wasserstein space as the next result
shows:
\begin{teo}[\bf Ambrosio, Gigli \cite{AmbGig}]\label{teo.interiorregularity}
Let $(X,d,\mathfrak{m})$ be a complete and separable m.m.s. Then the space $(\Prob_p(X),\W_p)$ is
non-branching. Furthermore, given a geodesic $(\mu_t)_{t \in [0,1]}\subset \Prob_p(X),$
for every $t \in (0,1)$ there exists a unique optimal plan in $\Opt(\mu_0,\mu_t),$
and this plan is induced by a map from $\mu_t.$
\end{teo}
\begin{Obs}\label{Obs.measuresinterior}
One of the immediate consequences of the previous Theorem is that measures in $\Delta_{n}(X)$
can only be in the interior of geodesics with endpoints in $\Delta_{1}(X)\cup \cdots \cup \Delta_{n}(X).$
This will be extremely useful for determining how Wasserstein isometries behave.
\end{Obs}
To conclude this section we will describe with more detail the assumptions we will make on the reference measure
$\mathfrak{m}$ and the consequences they will have on the solutions to Kantorovich's problem. Details can be found
in the paper \cite{Kell} by Kell and the references therein.
\begin{defi}\label{def.qualnondeg}
A measure $\mathfrak{m}$ is said to be qualitatively non-degenerate if for all $R>0$
and $x_0\in X$ there is a function $f_{R,x_0}: (0,1)\rightarrow (0,\infty)$ such
that $$\limsup_{t\rightarrow 0}f_{R,x_0}(t)>\frac{1}{2} $$ and for every
measurable set $A \subset B_{x_0}(R)$ and all $x \in B_{x_0}(R),$ $t \in (0,1)$ we have:
$$\mathfrak{m}(A_{t,x}) \geq f_{R,x_{0}}(t)\mathfrak{m}(A), $$
where $A_{t,x}:= \{\gamma_t\,|\, \gamma \in \Geo(X), \gamma_0\in A, \gamma_1=x \}. $
\end{defi}
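A model case (our own example; we ignore the normalization needed to make the measure a probability measure, which does not affect the inequality) is Lebesgue measure on Euclidean space:

```latex
% Example (ours): Lebesgue measure on R^n is qualitatively non-degenerate.
% Geodesics are line segments, so for measurable A in B_{x_0}(R) and x in B_{x_0}(R),
\[
A_{t,x} = \{(1-t)a + tx \,|\, a \in A\} = (1-t)A + tx ,
\]
% a scaled and translated copy of A. Therefore
\[
\mathcal{L}^n(A_{t,x}) = (1-t)^n \,\mathcal{L}^n(A),
\]
% so one may take f_{R,x_0}(t) = (1-t)^n, for which
% limsup_{t -> 0} f_{R,x_0}(t) = 1 > 1/2.
```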
Measures of this kind give us some topological information on the space:
\begin{lem}\label{lem.finitedimension}
Let $(X,d)$ be a metric space and $\mathfrak{m}$ a qualitatively non-degenerate probability measure on it.
Then $\mathfrak{m}$ is doubling and $X$ has finite Hausdorff dimension.
\end{lem}
\begin{proof}
The first assertion is proved in Proposition $5.3$ in \cite{Kell}. As for the second, notice that since
$(X,d)$ is a doubling space, its Assouad dimension (see Definition $10.15$ in \cite{Hei}) is finite.
This in turn implies that the Hausdorff dimension of $X$ is finite.
\end{proof}
As for optimal transports induced by maps, we recall the definition given by Kell in \cite{Kell}:
\begin{defi}\label{def.GTBp}
A m.m.s. $(X,d,\mathfrak{m})$ is said to have good transport behaviour $\GTB_p$ if for all
$\mu, \nu \in \Prob_p(X)$ with $\mu \ll \mathfrak{m},$ any optimal transport between
$\mu$ and $\nu$ is given by a transport map.
\end{defi}
\begin{example} The following spaces have good transport behaviour:
\begin{enumerate}
\setlength\itemsep{1em}
\item For $p=2,$ essentially non-branching $MCP(K,N)-$spaces with $K \in \mathbb{R}, N\in [1,\infty).$ In particular, essentially non-branching $CD^*(K,N)-$spaces, essentially non-branching $CD(K,N)-$spaces and $RCD^*(K,N)-$spaces.
\item $p-$essentially non-branching, qualitatively non-degenerate spaces (Definition \ref{def.qualnondeg}) for all $p \in (1,\infty).$
\item Any (locally) doubling measure $\mathfrak{m}$ on $(\mathbb{R}^n,d_E)$ or on a Riemannian manifold.
\end{enumerate}
\end{example}
\begin{defi}\label{def.stronginter}
Let $(X,d,\mathfrak{m})$ be a m.m.s. where $\mathfrak{m}$ is qualitatively non-degenerate. We say that it has the $p-$strong interpolation
property $(\sIP_p)$ for some $p \in (1,\infty)$ if: given any $\mu_0,\mu_1\in \Prob_p(X)$ with $\mu_0\ll \mathfrak{m},$ there is a unique optimal transport, it is induced by a map, and the geodesic $(\mu_t)_{t \in [0,1]}$ satisfies $\mu_t \ll \mathfrak{m}$ for all $t \in [0,1).$
\end{defi}
As an immediate corollary of Theorem $5.8$ in \cite{Kell}, we have:
\begin{teo}
If $(X,d,\mathfrak{m})$ is a non-branching m.m.s. and $\mathfrak{m}$ is qualitatively non-degenerate then $\GTB_p$ and $\sIP_p$ are equivalent.
\end{teo}
\section{Restricting Wasserstein isometries to Dirac deltas.}\label{Section.Deltas}
For the remainder of this paper we will assume that our spaces satisfy the following:
$(X,d,\mathfrak{m})$ is a compact, non-branching m.m.s. such that $\mathfrak{m}$ is qualitatively non-degenerate and which has $\GTB_p$
for some $p \in (1,\infty).$
\begin{prop}\label{prop.strictconvexfunct}
Let $(X,d,\mathfrak{m})$ be a m.m.s. with $\GTB_p$ and $\mu \in \Prob_p(X)$ such
that $\mu \ll \mathfrak{m}.$ Then the functional
$$\nu \mapsto \W_p^p(\mu, \nu) $$
is linearly strictly convex; that is, for any distinct $\nu_0,\nu_1 \in \Prob_p(X)$ and $t \in (0,1)$
we have $\W_p^p(\mu, (1-t)\nu_0+t\nu_1)< (1-t)\W_p^p(\mu,\nu_0)+t\W_p^p(\mu,\nu_1). $
\end{prop}
\begin{proof}
Suppose there exist distinct $\eta_0, \eta_1 \in \Prob_p(X)$ and $t^*\in (0,1)$ such that:
$$\W_p^p(\mu,(1-t^*)\eta_0+t^*\eta_1) = (1-t^*)\W_p^p(\mu,\eta_0)+t^*\W_p^p(\mu,\eta_1). $$
Consider optimal plans $\pi_0 \in \Opt(\mu,\eta_0) $ and $\pi_1 \in \Opt(\mu,\eta_1). $
It is clear that $(1-t^*)\pi_0+t^*\pi_1$ is an admissible plan between $\mu $ and $(1-t^*)\eta_0+t^*\eta_1.$
Then:
\begin{align*}
\int d^p(x,y)\, d\big((1-t^*)\pi_0+t^*\pi_1\big) &= (1-t^*)\int d^p(x,y)d\pi_0+t^*\int d^p(x,y)d\pi_1 \\
&= (1-t^*)\W_p^p(\mu,\eta_0)+t^*\W_p^p(\mu,\eta_1) \\
&= \W_p^p(\mu,(1-t^*)\eta_0+t^*\eta_1).
\end{align*}
Hence $(1-t^*)\pi_0+t^*\pi_1$ is optimal. This gives a contradiction: since $\eta_0\neq \eta_1,$ the plans $\pi_0$ and $\pi_1$ are distinct, so their strict convex combination cannot be induced by a map, contradicting $\GTB_p.$
\end{proof}
This proposition immediately yields:
\begin{cor}\label{cor.argmaxDirac}
For $\mu \ll \mathfrak{m},$ $\argmax(\Prob_p(X)\ni \nu \mapsto \W_p^p(\mu,\nu)) \subset \Delta_1.$ Indeed, a maximizer of a linearly strictly convex functional must be an extreme point of $\Prob_p(X),$ and the extreme points of $\Prob_p(X)$ are exactly the Dirac deltas.
\end{cor}
\begin{prop}\label{prop.invabscont}
Let $\Phi \in \Iso(\Prob_p(X)).$ Then the set of absolutely continuous measures $\mu$ such that $\Phi(\mu)$ is also absolutely continuous is dense in $\Prob_p(X).$
\end{prop}
\begin{proof}
Let $\mu, \nu \ll \mathfrak{m},$ and consider the geodesic $(\mu_t)_{t\in [0,1]}$ such that $\mu_0=\mu$ and $\mu_1= \Phi^{-1}(\nu).$ Since the m.m.s. has $\GTB_p,$ we have $\mu_t \ll \mathfrak{m}$ for all $t \in [0,1).$ Now apply the isometry to obtain a new geodesic $(\Phi(\mu_t))_{t \in [0,1]}.$ Since $\Phi(\mu_1)=\nu \ll \mathfrak{m},$ the same argument applied to the reversed geodesic gives $\Phi(\mu_t)\ll \mathfrak{m}$ for all $t \in (0,1].$ In particular, for $t \in (0,1)$ both $\mu_t$ and $\Phi(\mu_t)$ are absolutely continuous.
Since the measures $\mu ,\nu$ were arbitrary and $\mu_t\rightarrow \mu$ as $t\rightarrow 0,$ the claim follows.
\end{proof}
\begin{rem}
Actually we have a stronger property: For any $\nu \in \Prob_p(X)$ there exists $\mu_0\ll \mathfrak{m}$ with $\Phi(\mu_0)\ll \mathfrak{m}$
such that if $(\mu_t)_{t \in [0,1]}$ is the unique geodesic joining $\mu_0$ with $\nu$ then $\Phi(\mu_t) \ll \mathfrak{m}$ for all $t \in [0,1).$
\end{rem}
\begin{Obs}\label{Obs.boundarylinconvex}
For every $x \in X$ and $R>0$ the set $\partial B_{\delta_x}(R)$ is compact and linearly convex. This is just a consequence of the fact that the optimal transport between $\delta_x$ and any other measure $\mu$ is given by $\delta_x\otimes \mu.$
\end{Obs}
\begin{lem}\label{lem.fixinghull}
Let $x,y \in X$ be two points such that $\Phi(\delta_x),\Phi(\delta_y)\in \Delta_1,$ where $\Phi \in \Iso(\Prob_p(X)).$ Then there exists a
geodesic $\gamma \in \Geo(X)$ such that $\gamma_0=x,\gamma_1=y$ and $\Phi(\delta_{\gamma_t})\in \Delta_1$ for all $t \in [0,1].$
\end{lem}
\begin{proof}
Let $x,y \in X$ be such that $\Phi(\delta_x),\Phi(\delta_y)\in \Delta_1.$ Take $\mu \ll \mathfrak{m}$ such that $\Phi(\mu)\ll \mathfrak{m}$ which exists by Proposition \ref{prop.invabscont}. Now consider:
$$\Mid(\delta_x,\delta_y) := e_{1/2\#}\{ (\eta_t)_{t \in [0,1]} \in \Geo(\Prob_p(X))\,|\, \eta_0 = \delta_x, \eta_1=\delta_y \}, $$
and notice that this set has the following properties:
\begin{itemize}
\setlength\itemsep{1em}
\item $\Mid(\delta_x,\delta_y) = \partial B_{\delta_x}(\frac{d(x,y)}{2})\cap \partial B_{\delta_y}(\frac{d(x,y)}{2}),$ so by Observation \ref{Obs.boundarylinconvex} it is a linearly convex and compact set.
\item $\Phi(\Mid(\delta_x,\delta_y))= \Mid(\Phi(\delta_x),\Phi(\delta_y)).$
\end{itemize}
Since $\Phi$ is an isometry then it is clear that:
$$\max\{\W_p^p(\mu,\nu)\,|\, \nu \in \Mid(\delta_x,\delta_y) \}= \max\{\W_p^p(\Phi(\mu),\nu)\,|\, \nu \in \Mid(\Phi(\delta_x),\Phi(\delta_y)) \}. $$
By Proposition \ref{prop.strictconvexfunct} and the fact that both $\mu, \Phi(\mu)\ll \mathfrak{m},$ we have
\begin{align*}
\argmax \big(\Mid(\delta_x,\delta_y)\ni \nu \mapsto \W_p^p(\mu,\nu) \big) &\subset \Delta_1\\
\argmax \big(\Mid(\Phi(\delta_x),\Phi(\delta_y))\ni \nu \mapsto \W_p^p(\Phi(\mu),\nu) \big) &\subset \Delta_1
\end{align*}
Hence there must exist some point $z\in X$ such that $\delta_z \in \argmax(\Mid(\delta_x,\delta_y)\ni \nu \mapsto \W_p^p(\mu,\nu))$ and
$\Phi(\delta_z)\in \Delta_1.$ In particular $z$ is a midpoint of $x$ and $y$ whose image under $\Phi$ is again a Dirac delta; iterating this construction over dyadic midpoints and using the continuity of $\Phi$ yields the desired geodesic.
\end{proof}
The idea for proving Theorem \ref{teo:A} boils down to proving that there exists some set $S\subset X $ with non-empty interior such that
for all $x \in S,$ $\Phi(\delta_x)\in \Delta_1.$ Observation \ref{Obs.measuresinterior} then gives us the result. As for how we build the set $S,$ recall from Lemma \ref{lem.finitedimension} that the Hausdorff dimension of $X$ is finite. So for sufficiently many points $x_1,\cdots, x_n$ such that $\Phi(\delta_{x_i})\in \Delta_1,$ the geodesic convex hull of $\{x_1,\cdots, x_n\} $ will have non-empty interior.
Given a set $E \subset X$ we define the antipodal set of $E$ as:
$$A(E):= \argmax(X\ni x \mapsto d(x,E) ). $$
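For intuition (our own example, stated for the unit round sphere), antipodal sets can be computed explicitly:

```latex
% Example (ours): on the unit round sphere S^n (diameter pi), the antipodal set
% of a point is its usual antipode, and the same holds for a closed ball:
\[
A(\{x\}) = \{-x\}, \qquad
A\big(\overline{B_x(r)}\big) = \{-x\} \quad (0 < r < \pi),
\]
% since for E = B_x(r) closed, the triangle inequality gives
% d(y,E) = max(0, d(y,x) - r), which is maximized exactly at y = -x.
```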
\begin{thmp}\label{teo:A}
Let $(X,d,\mathfrak{m})$ be a compact metric measure space with $\GTB_p$ for some $p \in (1,\infty).$ Then for any isometry $\Phi: \Prob_p(X) \rightarrow \Prob_p(X)$ the set of Dirac deltas, $\Delta_1,$ is invariant, i.e. $\Phi(\Delta_1)=\Delta_1.$
\end{thmp}
\begin{proof}
Take $\mu \ll \mathfrak{m}$ such that $\Phi(\mu)\ll \mathfrak{m}$ (such measure exists by Proposition \ref{prop.invabscont}). Then by Corollary \ref{cor.argmaxDirac}:
\begin{equation}\label{eq.funct1}
\argmax(\Prob_p(X)\ni \nu \mapsto \W_p^p(\mu,\nu)) \subset \Delta_1
\end{equation}
\begin{equation}\label{eq.funct2}
\argmax(\Prob_p(X)\ni \nu \mapsto \W_p^p(\Phi(\mu),\nu)) \subset \Delta_1
\end{equation}
Since $\W_p^p(\mu,\nu)=\W_p^p(\Phi(\mu),\Phi(\nu)),$ the isometry $\Phi$ sends the set (\ref{eq.funct1}) to the set (\ref{eq.funct2}). That is, there exists some $x_1\in X$ such that $\Phi(\delta_{x_1})\in \Delta_1.$
Suppose now that we have found $x_1,\cdots ,x_n \in X$ such that $\Phi(\delta_{x_i})\in \Delta_1.$ By Lemma \ref{lem.fixinghull} the geodesic convex hull
$S(x_1,\cdots, x_n )\subset X$ consists of points with the property that for all $y \in S(x_1,\cdots, x_n),$ $\Phi(\delta_y) \in \Delta_1.$
Now consider the totally atomic measure $\sum_{i=1}^n\frac{1}{n}\delta_{x_i},$ by the density stated in Proposition \ref{prop.invabscont}
we can find some measure $\mu_{n+1}$ such that:
\begin{itemize}
\setlength\itemsep{1em}
\item $\mu_{n+1},\Phi(\mu_{n+1})\ll \mathfrak{m}.$
\item $\supp \mu_{n+1} \subset \sqcup_{i=1}^{n}B_{x_i}(\epsilon_i),$ for some $\epsilon_i >0,$ $i \in \{1,\cdots, n\}.$
\item There exists some $y \in A(S(x_1,\cdots, x_n))$ such that $\W_p^p(\mu_{n+1},\delta_y)\geq \W_p^p(\mu_{n+1},\delta_z)$ for all $z \in S(x_1,\cdots, x_n).$
\end{itemize}
So we just look at the arguments of the maxima of the linearly strictly convex functionals $\W_p^p(\mu_{n+1},\cdot), \W_p^p(\Phi(\mu_{n+1}),\cdot)$ and obtain a
point $x_{n+1}\in X- S(x_1,\cdots, x_n)$ such that $\Phi(\delta_{x_{n+1}})\in \Delta_1.$
Since by Lemma \ref{lem.finitedimension} the Hausdorff dimension of $X$ is finite we have that there must exist $m\in \mathbb{N}$ such that
$S(x_1,\cdots,x_m)$ has nonempty interior.
So for any point $z \in X$ and $x$ in the interior of $S(x_1,\cdots,x_m),$ a geodesic $\gamma$ with endpoints $z$ and $x$ must satisfy the following:
there exists $t \in (0,1)$ such that $\gamma_t \in S(x_1,\cdots,x_m).$ Applying Observation \ref{Obs.measuresinterior} then gives $\Phi(\delta_z)\in \Delta_1.$
\end{proof}
With this we also obtain the following corollary:
\begin{cor}\label{cor.isometricwass}
Let $(X,d_X,\mathfrak{m}),(Y,d_Y,\mathfrak{n})$ be two compact m.m.s. with
$\GTB_p$ for some $p \in (1,\infty).$ Suppose there exists an isometry $ \Phi: \Prob_p(X)\rightarrow \Prob_p(Y).$ Then $X$ and
$Y$ are isometric.
\end{cor}
\begin{proof}
Let $\Phi: \Prob_p(X)\rightarrow \Prob_p(Y)$ be the isometry. From Theorem \ref{teo:A} we have that for all $x \in X,$ $\Phi(\delta_x) \in \Delta_1(Y).$ So we may define $F: X\rightarrow Y $ as the function such that $\delta_{F(x)}:= \Phi(\delta_{x}).$ Since $\W_p(\delta_x,\delta_{x'})=d_X(x,x')$ for all $x,x'\in X,$ and likewise in $Y,$ this defines an isometry of the base metric spaces.
\end{proof}
\section{Isometric rigidity.}\label{Section.rigidity}
This last section deals with answering affirmatively the question of isometric rigidity, at least in some cases.
\subsection{Positively curved manifolds and $p=2$}
In this subsection we will prove that if we additionally ask that the manifold has
positive sectional curvature then it is possible to obtain isometric rigidity.
We restrict to the case $p=2,$ the reason being that the flatness condition
(see Definition \ref{def.flatspace}) is of no use to us for general $p.$ In the next subsection, however, we will prove isometric rigidity
in the case where we have more structure on the manifold.
\begin{thmp}\label{teo:B}
Let $M$ be a closed Riemannian manifold of strictly positive sectional curvature. Then,
the Wasserstein space $(\Prob_2(M),\W_2)$ is isometrically rigid,
i.e. $\Iso(M)= \Iso(\Prob_2 (M)).$
\end{thmp}
Given an isometry $\Phi: \Prob_2 (M)\rightarrow \Prob_2(M)$ we have obtained from
Theorem \ref{teo:A} that it restricts to an isometry of the Riemannian manifold.
Therefore from here on out we will only consider isometries $\Phi$ of $\Prob_2(M)$ such
that $\Phi (\delta_x) = \delta_x$ for all $x \in M.$
Our aim will be to prove that $\Phi(\mu)=\mu$ for all $n \in \mathbb{N}$ and all $\mu \in \Delta_n.$
Since the union of these sets over all $n$ is dense and $\Phi$ is continuous, this will suffice.
Consider $\gamma: [0,1]\rightarrow M$ a geodesic. Then $\gamma_{\#}:(\Prob_2 ([0,1]),\W_2)\rightarrow (\Prob_2(M),\W_2)$
is an isometric embedding of $(\Prob_2 ([0,1]),\W_2)$ into $(\Prob_2(M),\W_2).$ We will denote the set of probability measures
supported on the geodesic $\gamma$ by $\Prob_2(\gamma).$
Our first step will be to prove that $\Prob_2(\gamma)$ is not only invariant under isometries $\Phi$ that fix Dirac deltas but
that $\Phi(\mu)=\mu$ for all $\mu \in \Prob_2(\gamma).$
In general, regardless of any curvature assumption on the manifold $M,$ the only totally geodesic embedded submanifolds that one can expect to find are precisely the minimizing geodesics. These geodesics are actually flat spaces in the sense that their curvature is identically $0.$ The next definition gives an alternative formulation of flatness of a metric space purely in terms of the distance.
\begin{defi}\label{def.flatspace}
Let $(X,d)$ be a geodesic space, we will say that it is flat if given any three points $x,y,z \in X$ and every $\gamma: [0,1]\rightarrow X$
geodesic such that $\gamma_0=y, \gamma_1=z$ we have:
$$d^2(x,\gamma_t) = (1-t)d^2(x,\gamma_0)+td^2(x,\gamma_1)-(1-t)td^2(y,z), \quad \forall t \in [0,1]. $$
\end{defi}
Examples of flat spaces include Hilbert spaces, closed intervals $[a,b]\subset \mathbb{R}$ with the Euclidean metric, and Wasserstein spaces $(\Prob_2([a,b]),\W_2).$ One important observation, though, is that even if the base space $X$ is flat, $\Prob_2(X)$ is in general only non-negatively curved (see Example $3.2.1$ in \cite{AmbGig}).
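For completeness, here is a short verification of our own that inner-product spaces satisfy the identity of Definition \ref{def.flatspace}:

```latex
% Verification (ours): Hilbert spaces are flat in the sense of the definition.
% For the geodesic gamma_t = (1-t)y + tz and any point x, expand the square:
\begin{align*}
\|x-\gamma_t\|^2
&= \|(1-t)(x-y) + t(x-z)\|^2 \\
&= (1-t)^2\|x-y\|^2 + t^2\|x-z\|^2 + 2t(1-t)\langle x-y,\, x-z\rangle .
\end{align*}
% Using the polarization-type identity
% 2<x-y, x-z> = ||x-y||^2 + ||x-z||^2 - ||y-z||^2, this collapses to
\[
\|x-\gamma_t\|^2 = (1-t)\|x-y\|^2 + t\|x-z\|^2 - t(1-t)\|y-z\|^2,
\]
% which is exactly the flatness identity with d the norm distance.
```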
\begin{Obs}\label{Obs.delta2dense}
Before moving on with the next results we make some observations regarding the structure of the Wasserstein space of a closed interval, say $[0,1],$ equipped with its usual Euclidean metric. In \cite{Klo} it is noted (Proposition $3.4$) that the set $\Delta_2(\mathbb{R})$ plays a special role, as its convex hull is dense. This is used in order to define the exotic isometries (see Lemma $5.3$ in \cite{Klo}). It is clear that the convex hull of $\Delta_2([0,1])$ will also be dense in $\Prob_2([0,1]).$
\end{Obs}
We will use this as well in the proofs of the next propositions. The next result appears in Section $2.2$ of \cite{GehTitVir}; we include a proof since our arguments are different.
\begin{prop}\label{Prop.intervalrigid}
Let $([0,1],d_E)$ be the interval equipped with its usual Euclidean metric. Then the Wasserstein space
$(\Prob_2([0,1]),\W_2)$ is isometrically rigid.
\end{prop}
\begin{proof}
First, let us notice the following: for the functional $\Prob_2([0,1])\ni\mu \mapsto \W_2^2(\delta_{1/2},\mu), $
$$\argmax \left(\mu \mapsto \W_2^2(\delta_{1/2},\mu) \right) = \lbrace (1-\lambda)\delta_0+\lambda\delta_1\,|\, \lambda \in [0,1]\rbrace. $$
We can mimic the argument done in Theorem \ref{teo:A} to obtain that all Wasserstein isometries send Dirac deltas to Dirac deltas. Hence it suffices to look at isometries $\Phi: \Prob_2([0,1])\rightarrow \Prob_2([0,1])$ such that $\Phi|_{\Delta_1}\equiv id.$
It is immediate that $\Phi \left(\lbrace (1-\lambda)\delta_0+\lambda\delta_1\,|\, \lambda \in [0,1]\rbrace\right)= \lbrace (1-\lambda)\delta_0+\lambda\delta_1\,|\, \lambda \in [0,1]\rbrace $ since $\Phi(\delta_{1/2})=\delta_{1/2}.$
Now, for $(1-\lambda)\delta_0+\lambda\delta_1$ suppose $\Phi((1-\lambda)\delta_0+\lambda\delta_1)= (1-t)\delta_0+t\delta_1$ for some $t \in (0,1).$ Then
$$t= \W_2^2((1-t)\delta_0+t\delta_1,\delta_0 )= \W_2^2((1-\lambda)\delta_0+\lambda\delta_1, \delta_0)=\lambda. $$
So $t=\lambda.$ That is $\Phi$ restricted to $\lbrace (1-\lambda)\delta_0+\lambda\delta_1\,|\, \lambda \in [0,1]\rbrace$ is the identity.
Consider now $(1-\lambda)\delta_a+\lambda\delta_b,$ WLOG $a<b,$ then it is an interior point of the unique geodesic joining $\delta_{a/(1+a-b)} $ with $(1-\lambda)\delta_0+\lambda\delta_1.$
Since $\Phi$ fixes both $\delta_{a/(1+a-b)} $ and $(1-\lambda)\delta_0+\lambda\delta_1$ it must fix the whole geodesic including $(1-\lambda)\delta_a+\lambda\delta_b$ (See Observation \ref{Obs.measuresinterior}), hence $\Phi$ fixes
$\Delta_2([0,1]).$ Using Observation \ref{Obs.delta2dense} we conclude then that $\Phi \equiv id.$
\end{proof}
\begin{prop}\label{Prop.invariantgeodesics}
Let $x \in M$ and consider $\gamma$ a geodesic starting at $x$ and such that it cannot be extended past $\gamma_1$ while remaining minimizing. Then for any isometry $\Phi$ such that it fixes all Dirac deltas we have that $\Phi (\Prob_2(\gamma))=\Prob_2(\gamma).$
\end{prop}
\begin{proof}
Let $\Phi: \Prob_2 (M)\rightarrow \Prob_2 (M)$ be an isometry such that $\Phi|_{\Delta_1}\equiv id.$ In order to get the result, by Observation \ref{Obs.delta2dense} it will be sufficient to prove that $\Phi(\Delta_2(\gamma))=\Delta_2(\gamma).$
First, we will prove that the set $\lbrace (1-\lambda)\delta_{\gamma_0}+\lambda\delta_{\gamma_1} \,|\, \lambda\in [0,1] \rbrace $ is invariant.
Consider two points in $\gamma ([0,1])$ and a geodesic $\sigma :[0,1]\rightarrow M$ joining them. Notice that $\sigma$ is completely contained in $\gamma ([0,1]).$
Let $\mu= (1-\lambda)\delta_{\gamma_0}+\lambda\delta_{\gamma_1},$ since $\Prob_2(\gamma)$ is a flat space we have that
$$\W_2^2(\mu,\delta_{\sigma_t})= (1-t)\W_2^2(\mu,\delta_{\sigma_0})+t\W_2^2(\mu,\delta_{\sigma_1})-(1-t)t\W_2^2(\delta_{\sigma_0},\delta_{\sigma_1}), $$
for all $t \in [0,1].$ Since $\Phi$ fixes every Dirac delta, we have an analogous equation for $\Phi(\mu),$ which can be rewritten in the following way, using that the product measure $\Phi(\mu)\otimes \delta_{\sigma_t} $ is optimal for all $t \in [0,1]:$
$$\int \Big[ d^2(y, \sigma_t)-(1-t)d^2(y,\sigma_0)-td^2(y,\sigma_1)+t(1-t)d^2(\sigma_0,\sigma_1)\Big]\, d\Phi(\mu)(y) =0. $$
As the manifold has positive sectional curvature, the integrand is nonnegative, so it must vanish $\Phi(\mu)-$a.e. That is,
$$d^2(y, \sigma_t)-(1-t)d^2(y,\sigma_0)-td^2(y,\sigma_1)+t(1-t)d^2(\sigma_0,\sigma_1)=0,$$ for all $t \in [0,1]$ and $\Phi(\mu)$-a.e. $y.$
The positive curvature then forces the support of $\Phi(\mu)$ to be contained in $\gamma ([0,1]),$ since otherwise we would have flat Euclidean triangles embedded in $M.$ This proves the claim.
\end{proof}
Clearly given a geodesic $\gamma\in \Geo(M)$ we have that $\Prob_2(\gamma)$ and $\Prob_2([0,d(\gamma_0,\gamma_1)])$ are isometric so
combining Propositions \ref{Prop.intervalrigid} and \ref{Prop.invariantgeodesics} we obtain the following corollary:
\begin{cor}
Let $\gamma: [0,1]\rightarrow M$ be a geodesic and $\Phi: \Prob_2(M)\rightarrow \Prob_2(M)$ an isometry that fixes all Dirac deltas. Then $\Phi$ restricted to $(\Prob_2(\gamma),\W_2)$ is the identity map.
\end{cor}
And noting that given any $\mu \in \Delta_2(M)$ its atoms are contained in some geodesic we obtain:
\begin{cor}
Let $\Phi: \Prob_2(M)\rightarrow \Prob_2(M)$ be an isometry such that it fixes Dirac deltas. Then $\Phi$ fixes $\Delta_2(M)$ as well.
\end{cor}
Now we would like to see what happens for measures not necessarily supported on a geodesic.
Given a measure $\mu \in \Prob_2(M)$ and a geodesic $\gamma \in \Geo(M)$ we define $\proj_{\gamma}(\mu)$
the projection of $\mu$ onto $\gamma$ as:
\begin{equation}\label{def.projmeasures}
\proj_{\gamma}(\mu) = \argmin \left( \nu \in \Prob_2(\gamma) \mapsto \W_2^2(\mu,\nu) \right).
\end{equation}
Note that in general $\mu \mapsto \proj_{\gamma}(\mu)$ is not a function, since the set on the right-hand side of
(\ref{def.projmeasures}) may contain more than one element. For example, consider in the sphere $\mu = \delta_N,$ where $N$ is
the north pole, and $\gamma $ a geodesic in the equator. It is clear that every measure in $\Prob_2(\gamma)$ is
equidistant to $\delta_N.$
Nevertheless it will be very useful for us to work with projections onto geodesics. It is easy to convince
oneself that if $\mu$ is a totally atomic measure, say in $\Delta_n(M),$ then its projection onto any geodesic contains
at least one totally atomic measure.
\begin{prop}
Let $\mu \in \Delta_n(M).$ Then for every geodesic $\gamma,$
$\proj_{\gamma}(\mu) \cap (\Delta_{1}(M)\cup\cdots \cup\Delta_{n}(M)) \neq \emptyset.$
\end{prop}
\begin{proof}
Let $\mu = \sum_{i=1}^{n}\lambda_i\delta_{x_i}$ and $\gamma \in \Geo(M).$ Take now a measure $\nu \in \Prob_2(\gamma).$
Take $\pi \in \Opt(\mu,\nu),$ so then:
\begin{align*}
\W_2^2(\mu,\nu) &= \int_{M\times M} d^2(x,y) d\pi(x,y) \\
&= \int_{\lbrace x_1\rbrace \times M} d^2(x,y)d\pi(x,y) + \cdots + \int_{\lbrace x_n\rbrace \times M} d^2(x,y)d\pi(x,y)\\
&\geq \lambda_1 d^2(x_1,y_1)+\cdots +\lambda_nd^2(x_n,y_n).
\end{align*}
where the points $y_i$ are such that:
$$ y_i \in \argmin \left(\Delta_1(\gamma)\ni \delta_y \mapsto d^2(x_i,y) \right), \quad \forall 1\leq i\leq n. $$
Hence we conclude that
$$\sum_{i=1}^{n}\lambda_i\delta_{y_i}\in \proj_{\gamma}(\mu). $$
\end{proof}
So, let us describe our plan for proving the isometric rigidity. First we will prove that for each $n \in \mathbb{N}$
the set $\Delta_n(M)$ is invariant. Then we will prove that totally atomic measures supported on a sufficiently small ball $B$
are fixed. A density argument will yield that any measure whose support is contained in $B$ is also fixed.
Finally we will use a non-branching argument to conclude.
We divide each of these steps into several Lemmas to make the argument as clear as possible.
\begin{lem}\label{lem.invariancetub}
Let $\Phi: \Prob_2(M)\rightarrow\Prob_2(M)$ be an isometry that fixes all Dirac deltas, consider $\gamma \in \Geo(M)$ and
$\Prob_2(\gamma)$ the set of measures supported on $\gamma.$ For $\epsilon \ll 1$ consider:
$$\Tub_{\epsilon}(\Prob_2(\gamma)):= \bigcup_{\mu \in \Prob_2(\gamma)}B_{\mu}(\epsilon) $$
a tubular neighbourhood around $\Prob_2(\gamma).$ Then for every $n \in \mathbb{N},$ and $\mu \in \Delta_n(M)$ such that
$\supp(\mu)\subset \Tub_{\epsilon}(\Prob_2(\gamma))$ we have that $\Phi(\mu) \in \Delta_n(M).$
\end{lem}
\begin{proof}
We may assume that all the measures considered give zero mass to $\gamma([0,1]).$
The proof is by induction on $n \in \mathbb{N}.$ The case $n=1$ follows from Theorem \ref{teo:A}.
Let $\mu \in \Delta_{n+1}(M)$ be such that $\supp(\mu)\subset \Tub_{\epsilon}(\Prob_2(\gamma)).$ Then we may write
$\mu = \sum_{i=1}^{n+1}\lambda_i\delta_{x_i},$ where all atoms are different and for all $i,$ $\lambda_i\neq 0.$
Now take $\nu \in \proj_{\gamma}(\mu)\cap \Delta_{n+1}(\gamma)$ such that $\nu = \sum_{i=1}^{n+1}\lambda_i\delta_{y_i}.$
Write $r_i = d(x_i,y_i).$ Notice that since $\epsilon$ is sufficiently small, there exists only one geodesic between
$x_i$ and $y_i,$ and this geodesic may be extended up to a point $z_i$ such that $d(x_i,z_i)=2r_i.$
Note that this makes $\nu$ the midpoint between $\mu$ and the totally atomic measure $\sum_{i=1}^{n+1}\lambda_i\delta_{z_i}.$
Since $\Phi(\nu)=\nu,$ Theorem \ref{teo.interiorregularity} together with the induction hypothesis forces
$\Phi(\mu) \in \Delta_{n+1}(M).$
\end{proof}
\begin{lem}\label{lem.invarianceatomic}
Let $\Phi: \Prob_2(M)\rightarrow\Prob_2(M)$ be an isometry that fixes all Dirac deltas, then for every $n \in \mathbb{N}$
$\Phi(\Delta_n(M))=\Delta_n(M).$
\end{lem}
\begin{proof}
Let $\mu \in \Delta_n(M),$ and consider a geodesic $\gamma \in \Geo(M)$ and a measure $\nu \in \proj_{\gamma}(\mu)\cap \Delta_{n}(\gamma).$ Furthermore assume that the transport between these two measures is given by a map.
Now consider the Wasserstein geodesic $(\eta_{t})_{t \in [0,1]}\subset \Delta_n(M)$ between $\mu$ and $\nu.$ There exists some $t_0\in (0,1)$ such
that $\supp (\eta_{t_0})\subset \Tub_{\epsilon}(\Prob_2(\gamma))$ for some sufficiently small $\epsilon>0.$ From the previous Lemma
\ref{lem.invariancetub} we obtain that $\Phi(\eta_{t_0})\in \Delta_n(M).$ Since $\Phi(\eta_{t_0})$ is an interior point of the geodesic joining $\Phi(\mu)$ and $\nu=\Phi(\nu),$ Observation \ref{Obs.measuresinterior} and Theorem \ref{teo.interiorregularity} give
$\Phi(\mu)\in \Delta_n(M).$
\end{proof}
\begin{rem}\label{rem.weights}
Notice that in the proofs of the previous Lemmas \ref{lem.invariancetub}, \ref{lem.invarianceatomic} the transports considered were actually
given by a map. Hence the weights given to each of the atoms are also preserved.
\end{rem}
\begin{lem}\label{lem.fixedatomic}
Let $\Phi: \Prob_2(M)\rightarrow\Prob_2(M)$ be an isometry that fixes all Dirac deltas, and $n \in \mathbb{N}$
then for every $\mu = \sum_{i=1}^{n}\lambda_i\delta_{x_i} \in \Delta_n(M)$ such that $d(x_i,x_j)< \epsilon$ for all $i \neq j$ and $\epsilon$ sufficiently small
we have that $\Phi(\mu)=\mu.$
\end{lem}
\begin{proof}
Given a geodesic $\gamma \in \Geo(M),$ it is clear that $\proj_{\gamma}(\mu)$ consists only of totally atomic measures. It is also easy to check
that:
$$\proj_{\gamma}(\mu)= \Phi(\proj_{\gamma}(\mu) ) = \proj_{\gamma}(\Phi(\mu)). $$
Furthermore if we additionally assume that $\gamma_{t_0} = x_1$ for some $t_0 \in [0,1]$ we obtain that for all $\nu \in \proj_{\gamma}(\mu),$ $\nu(\lbrace x_1 \rbrace)\geq \lambda_1.$
From Lemma \ref{lem.invarianceatomic} we know that $\Phi(\mu)$ is also totally atomic, actually with the same number of atoms as $\mu.$
Suppose that there exists some $r>0$ such that $d(x_1,y)>r$ for every atom $y$ of $\Phi(\mu).$ This implies, however, that there exists some geodesic $\sigma \in \Geo(M)$ with $\sigma_0=x_1$ and some $\tilde{\nu} \in \proj_{\sigma} (\Phi(\mu))$ with $\tilde{\nu}(\lbrace x_1\rbrace) = 0,$ a contradiction. Hence $x_1$ is one of the atoms of $\Phi(\mu).$
We repeat this argument for every $x_i$ and conclude that $\mu$ and $\Phi(\mu)$ must have the same atoms. From Remark \ref{rem.weights} we then conclude that $\mu = \Phi(\mu).$
\end{proof}
\begin{lem}\label{lem.nonbranchargument}
$M$ is isometrically rigid.
\end{lem}
\begin{proof}
Take a fixed point $w \in M$ and some $\epsilon \ll 1.$ From Lemma \ref{lem.fixedatomic} we have that for all $\mu \in \Prob_2(B_w(\epsilon)),$ $\Phi(\mu)=\mu.$
Consider then an absolutely continuous $\mu \in \Prob_2(B_w(\epsilon))$ and an arbitrary measure $\nu \in \Prob_2(M).$ Additionally
assume that $\supp (\mu) \subset B_{w}(\epsilon/2).$ Let $(\eta_t)_{t \in [0,1]}$ be the unique Wasserstein geodesic such
that $\eta_0 = \mu $ and $\eta_1= \nu.$ It follows that $(\Phi(\eta_t))_{t\in [0,1]}$ is also a Wasserstein geodesic, now with endpoints $\mu$ and $\Phi(\nu).$
Then there exists some $t_0 \in (0,1) $ such that $\supp\Phi(\eta_{t_0}) \subset B_{w}(\epsilon),$ hence $\Phi(\eta_{t_0})=\eta_{t_0}.$ Since the Wasserstein space $\Prob_2(M)$ is non-branching it follows then that $\Phi(\nu)=\nu.$ Therefore $M$ is isometrically rigid.
\end{proof}
\subsection{Rigidity on CROSSes}\label{subsec.CROSS}
In this last subsection we restrict ourselves to the class of compact rank one symmetric spaces (CROSSes). Several properties
of these spaces are discussed in Chapter $3$ of \cite{Bes}. They have been completely classified; they are the Euclidean spheres, the projective spaces (over the real, complex or quaternionic numbers), and the Cayley plane.
In the next Lemma we summarize the properties that we will use:
\begin{lem}
Let $M$ be a CROSS. Then:
\begin{itemize}
\item For any point $x \in M$ the isotropy group at $x$ acts transitively on any sphere $\partial B_{x}(R).$
\item For any $x \in M$ the cut locus, $\Cut(x),$ is either a point or a totally geodesic embedded CROSS of codimension $1.$
\end{itemize}
\end{lem}
An immediate but important consequence of this Lemma is that for any point $x$ the distance from $x$ to $\Cut(x)$ is constant and equal to the diameter of $M.$
Using Theorem \ref{teo:A} we will now restrict to isometries that fix every Dirac delta. But before continuing let us describe the motivation for our strategy to prove Theorem \ref{teo:C}. In \cite{Klo} Kloeckner proved (Proposition 3.4) that for $n\geq 2$ the geodesic convex hull of $\Delta_1(\mathbb{R})$ is dense in $\Prob_2(\mathbb{R}).$ Therefore it is enough to describe the behaviour of the isometries on $\Delta_1(\mathbb{R}).$ Now, in our setting, fix some point $x \in M$ and consider the Lipschitz function:
$$d(x,\cdot): M \rightarrow [0,\Diam(M)]. $$
The preimage of an element in $\Delta_n([0,\Diam(M)])$ consists of measures whose supports are contained in spheres $\partial B_{x}(r_i),$ $1\leq i \leq n;$ for example, measures in $\Delta_n(M)$ are included here. Therefore a naive (but ultimately useful) approach is to look at the behaviour of Wasserstein isometries on these measures. More precisely we have:
\begin{prop}\label{prop.parallelfixed}
Let $M$ be a CROSS, $\Phi \in \Iso(\Prob_p(M)),$ $p \in (1,\infty),$ and assume that for all $x \in M$ every probability measure supported on $\Cut(x)$ is fixed by $\Phi.$ Then, for $\mu \in \Prob_p(M)$ such that:
$$\supp (\mu) \subset \bigcup_{i=1}^{n} \partial B_{x}(r_i), \quad 0\leq r_i \leq d(x,\Cut(x)) $$
we have that
$$\supp (\Phi(\mu)) \subset \bigcup_{i=1}^{n} \partial B_{x}(r_i), \quad 0\leq r_i \leq d(x,\Cut(x)). $$
Moreover, the weights are preserved, i.e. $\mu (\partial B_{x}(r_i)) = \Phi(\mu)(\partial B_{x}(r_i))$ for all $ i \in \{1,\cdots, n\}.$
\end{prop}
Before proving this Proposition we will need a couple of simple observations and an auxiliary lemma:
\begin{Obs}\label{obs.midpointinvariant}
The following are simple properties of geodesics in the Wasserstein space $\Prob_p(M).$
\begin{itemize}
\setlength\itemsep{1em}
\item If $\mu_0, \mu_1 \in \Prob_p(M)$ are fixed by $\Phi\in \Iso(\Prob_p(M))$ then the set of geodesics with endpoints $\mu_0$ and $\mu_1$ is invariant under $\Phi.$
\item If $\mu_0 =\delta_x$ and $\mu_1$ is such that $\supp \mu_1 \subset \Cut(x),$ then for any $(\eta_t)_{t \in [0,1]} \in \Geo(\Prob_p(M))$ joining them we have that $\supp \eta_t \subset \partial B_{x}(td(x,\Cut(x)))$ for all $t \in [0,1].$
\item If $\mu_0= \delta_x$ and $\nu$ is supported on some $\partial B_{x}(td(x,\Cut(x))),$ $0<t<1,$ then there exists a measure $\mu_1$ supported on $\Cut(x)$ such that $\nu$ is in the interior of some geodesic joining $\mu_0 $ and $\mu_1.$ This is clear: every point in the support of $\nu$ is of the form $\gamma_t$ for some geodesic $\gamma$ starting at $x,$ so extending these geodesics yields the measure $\mu_1.$
\end{itemize}
\end{Obs}
\begin{lem}\label{lem.interiorparallel}
Let $M$ be a CROSS, $\Phi \in \Iso(\Prob_p(M)),$ $p \in (1,\infty),$ and assume that for all $x \in M$ every probability measure supported on $\Cut(x)$ is fixed by $\Phi.$ Then, for $\mu \in \Prob_p(M)$ such that
$$\supp (\mu) \subset \bigcup_{i=1}^{n} \partial B_{x}(r_i), \quad 0\leq r_i \leq d(x,\Cut(x)),$$
the following hold:
\begin{itemize}
\setlength\itemsep{1em}
\item For $n=2$ there exist $\nu_0$ supported on $\{x\}\cup \Cut(x)$
and $\nu_1$ supported on some $\partial B_x(r) $ such that the geodesic between them passes through $\mu.$
\item If $n\geq 3$ there exist $\nu_0,\nu_1 \in \Prob_p(M)$ supported on $n-1$ spheres $\partial B_{x}(r_i)$
such that the geodesic between them passes through $\mu.$
\end{itemize}
\end{lem}
\begin{proof}
It will be sufficient to consider totally atomic measures $\mu$ that satisfy the following: if $y \in \supp \mu$ and $\gamma \in \Geo(M)$ is such that $\gamma_0=x $ and $\gamma_t=y$ for some $t \in [0,1],$ then $\gamma[0,1]\cap \supp \mu = \{y\}.$ Let $D$ denote the
diameter of $M.$
\begin{case}[\bf $n=2$]
Consider $\mu = (1-\lambda)\mu_{1}+\lambda\mu_{2}$ where $\supp \mu_{i}\subset \partial B_{x}(r_{i})$ and $r_{1}<r_{2}<D.$ As $\mu_{2}$ is supported on a single sphere of radius $r_2<D,$ from Observation \ref{obs.midpointinvariant} we can find some measure $\tilde{\mu},$ with
$\supp \tilde{\mu}\subset \Cut(x),$ such that $\mu_{2}$ is in the interior of some geodesic joining $\delta_{x}$ with $\tilde{\mu}.$ We then take $\nu_0 = (1-\lambda)\delta_x+ \lambda \tilde{\mu}.$
As for $\nu_1$ we do the following: every point in $\supp \mu_i$ is of the form $\gamma_{t_i},$ where $t_i = r_i/D$ and $\gamma \in \Geo(M)$ starts at $x,$ for $i\in\{1,2\}.$ So we can just send the mass of these points to $\gamma_{t_1/(1+t_1-t_2)}.$
Therefore the measure $\nu_1$ will be a totally atomic measure whose atoms are of the form $\gamma_{t_1/(1+t_1-t_2)}$ with mass either $\mu_1(\gamma_{t_1})>0$ or $\mu_2(\gamma_{t_2})>0.$ It is easy to see that the geodesic joining $\nu_0$ with $\nu_1$ passes through $\mu.$
\end{case}
\begin{case}[\bf $n\geq 3$]
Let now $\mu = a_{1}\mu_1+\cdots +a_{n}\mu_n,$ $\supp\mu_i \subset \partial B_{x}(r_i),$ $a_i>0,$ and $0<r_1< \cdots <r_n<D.$
We will do a similar construction as in the previous case. Take
\begin{align*}
\nu_0 &= a_1\mu_1+a_2\eta+ a_3\mu_3+\cdots+a_n\mu_n,\\
\nu_1 &= a_1\mu_1+a_2\tilde{\eta}+a_3\mu_3+\cdots +a_n\mu_n.
\end{align*}
Here $\supp\eta \subset \partial B_{x}(r_1)$ and $\supp \tilde{\eta}\subset \partial B_{x}(r_3).$ The measures $\eta, \tilde{\eta}$ are obtained by sending the mass of $\mu_2$ along the appropriate geodesics to $\partial B_{x}(r_1)$ and $\partial B_{x}(r_3)$ respectively.
It is clear that $\mu$ is in the interior of the geodesic between $\nu_0$ and $\nu_1.$
\end{case}
\end{proof}
With this previous result we can now prove the Proposition.
\begin{proof}[Proof of Proposition \ref{prop.parallelfixed}]
We will proceed by induction on the dimension of $M.$ Take $x \in M;$ since $\Cut(x)$ is either a CROSS or a point, we can assume
that the measures supported there are fixed by $\Phi.$ Denote by $D$ the diameter of $M.$
Take $\mu \in \Prob_p(M)$ and let $n$ be the number of spheres $\partial B_x(r_i)$ on which the support of $\mu$
is contained. The case $n=1$ follows directly from Observation \ref{obs.midpointinvariant}.
\begin{case}[\bf $n=2$]
Consider a measure of the form $(1-\lambda)\delta_x+\lambda \mu $ where $\mu$ is supported on $\Cut(x)$ and absolutely continuous
with respect to the volume measure of $\Cut(x).$
Observe that the geodesic from $\mu$ to $(1-\lambda)\delta_x+\lambda \mu$ cannot be extended further, since the length of the
geodesics involved in the optimal transport is already maximal. This implies that there must exist some set $\Gamma \subset \supp \pi,$
$\pi \in \Opt(\mu,(1-\lambda)\delta_x+\lambda \mu),$ such that $\pi(\Gamma)>0$ and $d(y,z)=D$ for all $(y,z)\in \Gamma.$ Also, from the
$p$-cyclical monotonicity condition (see Definition \ref{def.pcyclical}), for all $(y_1,z_1),(y_2,z_2)\in \Gamma$ we have $d(y_1,z_2)=d(y_2,z_1)=D.$
Now, since $\mu(e_0\Gamma)>0$ and $\mu$ is absolutely continuous with respect to the volume measure on $\Cut(x),$ it follows that
$e_1\Gamma = \{x\}.$ So we deduce that $\Phi((1-\lambda)\delta_x+\lambda\mu)= (1-\alpha)\delta_x+\alpha\nu,$ with $\alpha \in (0,1)$
and $\nu \in \Prob_p(M).$
Consider now $\eta \in \Mid(\delta_x,\mu),$ so
\begin{align*}
\W_p^p((1-\alpha)\delta_x+\alpha\nu,\eta) &\leq (1-\alpha)\W_p^p(\delta_x,\eta)+\alpha\W_p^p(\nu,\eta)\\
&= (1-\alpha)\left(\frac{D}{2}\right)^p+\alpha\W_p^p(\nu,\eta).
\end{align*}
If there exists a set $\bar{\Gamma}\subset \supp \bar{\pi},$ $\bar{\pi} \in \Opt(\eta,\nu),$ such that $\bar{\pi}(\bar{\Gamma})>0$
and $d(y,z)< D/2$ for all $(y,z)\in \bar{\Gamma},$ then $\W_p^p(\nu,\eta) < (D/2)^p.$ But this contradicts the fact
that $\W_p^p((1-\lambda)\delta_x+\lambda\mu,\Phi^{-1}(\eta)) = (D/2)^p. $
As before, the $p$-cyclical monotonicity of the support of $\bar{\pi}$ guarantees that $d(x,z)=D$ for every $z \in \supp \nu,$ i.e.
$\supp \nu \subset \Cut(x).$
Finally, $\W_p^p((1-\lambda)\delta_x+\lambda\mu, \mu)= (1-\lambda)D^p$ and $\W_p^p((1-\lambda)\delta_x+\lambda\mu, \delta_x)=\lambda D^p $
together force $\alpha =\lambda$ and $\nu=\mu.$
The case proved just now is sufficient. First, by the density of the absolutely continuous measures supported on $\Cut(x),$ we can extend
it to all $\mu \in \Prob_p(M)$ with $\supp \mu \subset \Cut(x).$
Next, given a measure $\nu$ supported on $\partial B_x (r_1)\cup \partial B_x (r_2),$ $0< r_1<r_2<D,$ there exist, by Lemma \ref{lem.interiorparallel}, measures $\mu_0, \mu_1$ supported on $\{x\}\cup \Cut(x)$ and $\partial B_x(r_1/(1+r_1-r_2))$ respectively, such that
$\nu$ is in the interior of the geodesic joining $\mu_0$ with $\mu_1.$ As previously proved, both $\Phi(\mu_0)$ and $\Phi(\mu_1)$ are supported on the same spheres, and this forces $\Phi(\nu)$ to be so as well.
\end{case}
\begin{case}[\bf $n\geq 3$]
The claim follows by an induction argument on $n.$ Lemma \ref{lem.interiorparallel} tells us that measures $\mu$ supported on $n$ spheres
lie in the interior of a geodesic joining two measures supported on $n-1$ spheres. So by the induction hypothesis the endpoints, after applying the isometry $\Phi,$ are supported on the same spheres. Hence the same happens to the support of $\Phi(\mu).$
\end{case}
\end{proof}
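The Wasserstein distances computed at the end of Case $n=2$ can be checked numerically for atomic measures. The sketch below is our own illustration (it assumes SciPy is available) and solves the discrete optimal transport problem as a linear program; the support points, weights and distances are hypothetical stand-ins for $x$, $\Cut(x)$ and $D$.

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein_pp(mu, nu, dist, p=2):
    """W_p^p between two discrete measures on a common support, via the
    transport LP: minimize sum_ij pi_ij d_ij^p under marginal constraints."""
    n = len(mu)
    cost = (dist ** p).flatten()
    A_eq = []
    for i in range(n):                 # row sums of pi equal mu
        row = np.zeros((n, n)); row[i, :] = 1.0
        A_eq.append(row.flatten())
    for j in range(n):                 # column sums of pi equal nu
        col = np.zeros((n, n)); col[:, j] = 1.0
        A_eq.append(col.flatten())
    res = linprog(cost, A_eq=np.array(A_eq),
                  b_eq=np.concatenate([mu, nu]), bounds=(0, None))
    return res.fun

# Toy version of the configuration in Case n=2: a point x at maximal
# distance D = 1 from two nearby points y1, y2 playing the role of Cut(x).
D = 1.0
dist = np.array([[0.0, D,   D  ],     # support order: x, y1, y2
                 [D,   0.0, 0.2],
                 [D,   0.2, 0.0]])
lam = 0.6
mu_meas = np.array([0.0, 0.5, 0.5])              # mu, supported on {y1, y2}
nu0 = np.array([1 - lam, lam * 0.5, lam * 0.5])  # (1-lam) delta_x + lam mu

# Only the (1-lam) mass sitting at x has to move, each unit at distance D:
print(wasserstein_pp(nu0, mu_meas, dist, p=2))   # (1-lam) * D**p = 0.4
```

The computed cost matches the closed form $(1-\lambda)D^p$ used in the proof.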
Finally, we prove the main Theorem of this subsection.
\begin{thmp}\label{teo:C}
Let $M$ be a CROSS. Then for any $p \in (1,\infty)$ the isometry groups of $M$ and $\Prob_p (M)$
coincide.
\end{thmp}
\begin{proof}
Let us do induction on the dimension of the space $M.$ Take $\mu \in \Delta_n(M)$ such that $\mu = \sum_{i=1}^n a_i\delta_{x_i},$ where
$a_i > 0$ and $\sum_{i=1}^n a_i=1.$
Fix $x_1$ and notice that $\Cut(x_1)$ is either a point or a totally geodesic embedded CROSS of dimension one less than the dimension of $M.$ By the induction hypothesis we have that any probability measure supported on $\Cut(x_1)$ is fixed by $\Phi.$
Since $\mu$ satisfies the hypothesis of Proposition \ref{prop.parallelfixed} for $r_i = d(x_1,x_i), i \in \{1,\cdots, n \}$ we obtain then:
$$\Phi(\mu) = a_1\delta_{x_1}+\sum_{i=2}^{n}a_i\mu_i, \text{ where } \supp(\mu_i)\subset \partial B_{x_1}(r_i). $$
We repeat this argument for the remaining $x_i$ and obtain that $\mu=\Phi(\mu).$ Since the totally atomic measures are dense in
$\Prob_p(M),$ we conclude that $\Phi$ must be the identity.
\end{proof}
\section{Introduction}
\label{sec:introduction}
Like biological agents, autonomous artificial agents face the essential task of learning to make decisions based on observations and feedback from the environment. Traditionally, adaptive linear control and model predictive control have been successfully applied in this area \citep{borrelli2017predictive}. Over the past few years, reinforcement learning has become the predominant approach \citep{recht2019tour}. An emerging alternative perspective on decision making under uncertainty is \emph{active inference} (ActInf) \citep{friston_free-energy_2010}. ActInf is a neuroscience-based theory that has been used extensively to explain the behavior of biological agents in dynamic environments \citep{friston_free-energy_2010}.
ActInf is based on the \emph{free energy principle} (FEP), which postulates that perception and action in biological agents minimize a free energy bound on Bayesian surprise. The free energy is an information-theoretic measure that bounds the current and future expected statistical surprise, i.e., how unpredictable the observations are under a given generative model (GM). The free energy is associated with the Kullback-Leibler (KL) divergence (i.e., the distance) between the approximate and the true posterior. In particular, according to the FEP, the agent acts so as to minimize a free-energy bound on the Bayesian surprise, which, informally speaking, quantifies the difference between the agent's predictions about the system behavior and the observed system behavior. Minimization of free energy is closely related to variational Bayesian methods, reinforcement learning \citep{sallans_using_2001,tschantz2020reinforcement,sajid2021active}, and deep generative models \citep{ueltzhoffer_deep_2018,fountas2020deep}, another set of popular machine learning approaches \citep{goodfellow2014generative}. ActInf is closely related to message passing on graphical models \citep{de2017factor,friston2017graphical}, and several widely used message passing algorithms, including (loopy) belief propagation, variational message passing and expectation propagation, can be derived as fixed-point equations of the (Bethe) free energy \citep{heskes_stable_2003,yedidia_constructing_2005,dauwels_variational_2007,zhang2017unifying}. This relation has been harnessed to develop elegant automated methods for ActInf \citep{schwobel_active_2018,van_de_laar_simulating_2019}.
In addition to investigations of its motivating connections with the behavior of biological systems \citep{friston_free_2006,ramstead_answering_2018}, ActInf has been successfully utilized in traditional stochastic control scenarios, such as linear quadratic Gaussian (LQG) control and standard problems such as maze navigation \citep{hoffmann_linear_2017,ueltzhoffer_deep_2018,schwobel_active_2018,baltieri_active_2019,millidge2020relationship,imohiosen2020active}, and in exploration-exploitation balancing for multi-armed bandit problems \citep{markovic2021empirical}.
Despite these promising developments, the ActInf framework lacks certain desirable features present in model predictive control. In particular, there is no off-the-shelf ActInf formulation that allows the inclusion of chance constraints in the problem setting. Chance constraints provide an attractive approach to on-line decision making for uncertain systems \citep{Mesbah_2016}, i.e., systems whose dynamics are not fully known or that contain components best modeled stochastically. In such settings, constraints on the system behavior, such as the agent remaining in a given region of the environment with a given probability, cannot directly be encoded in terms of prior beliefs. In contrast to approaches that constrain all realizations of the random variables, chance constraints allow for a (typically small) probability of constraint violation, which can significantly improve performance, since chance constraints enable the decision maker to trade performance against the probability of constraint violation \citep{BlackmoreOnoWilliams_2011}.
This paper proposes a computationally tractable approach to chance-constrained decision making, and applies it to an ActInf context. We include chance constraints in the ActInf objective (i.e., the free energy) by using the Lagrangian formalism. We then solve the Lagrangian optimization problem by variational calculus. Finally, we show that the proposed solution not only leads to a modular and scalable message passing framework for ActInf problems, but also provides a general purpose message passing framework that can account for chance constraints on graphical models in general.
We claim the following main contributions:
\begin{enumerate}
\item We show that the analytic solution to the chance-constrained free energy problem yields posterior beliefs in the form of truncated mixtures. (Theorem \ref{thm:corrected_belief})
\item We show how this solution can be interpreted in terms of message passing on a factor-graph representation of the generative model. (Theorem \ref{thm:message_passing})
\item Consequently, our results provide a message passing framework that is specifically designed to account for chance constraints.
\end{enumerate}
Message passing is inherently modular, and (variational) message update rules can be pre-derived and stored in a lookup table for later use \citep{korl2005factor,van2019automated}. The chance-constrained message updates can then be readily combined with these pre-derived rules, without the need for laborious derivations. Our results illustrate that the proposed framework can successfully find solutions so that the rate of constraint violation specified in the original problem and the one that is actually observed during the closed-loop operation are close. The results also illustrate how to balance the constraints on the actions and the states through the usage of a tuning parameter, which enables exploration of different trade-offs between immediate and delayed intervention.
\section{Problem Statement}
We start by defining a general factorized generative model $f$ with respect to an (arbitrary) collection of variables $x$. As a notational convention, individual variables are indexed by $i, j \in \mathcal{V}$, and factors by $a, b, c \in \mathcal{F}$, unless stated otherwise. The model then factorizes as
\begin{align}
f(x) = \prod_{a\in\mathcal{F}} f_a(x_a)\,, \label{eq:model}
\end{align}
with non-negative real functions $f_a$, and where $x_a\subset x$ collects the arguments of $f_a$. In a probabilistic generative model, the individual factors usually represent conditional probability distributions. Probabilistic inference is then concerned with obtaining an (approximate) posterior belief $q_j(x_j) \propto \int f(x) \mathrm{d}x_{\setminus j}$ over a variable of interest $x_j$, where $\mathrm{d}x_{\setminus j}$ indicates integration over all model variables except $x_j$.
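As a toy illustration (our own construction, not part of the paper), the following sketch computes such a posterior belief by brute-force marginalization in a small binary model with two factors; the message passing machinery recapped below avoids this exponential enumeration.

```python
import itertools
import numpy as np

# Hypothetical model f(x1, x2, x3) = f_a(x1, x2) * f_b(x2, x3)
# over binary variables; the factor tables are arbitrary non-negative values.
f_a = np.array([[1.0, 2.0],
                [0.5, 1.0]])
f_b = np.array([[3.0, 1.0],
                [1.0, 2.0]])

def f(x1, x2, x3):
    return f_a[x1, x2] * f_b[x2, x3]

# q_2(x2) is proportional to the sum of f over all variables except x2.
q2 = np.zeros(2)
for x1, x2, x3 in itertools.product([0, 1], repeat=3):
    q2[x2] += f(x1, x2, x3)
q2 /= q2.sum()   # normalize to a probability distribution
print(q2)        # [0.4, 0.6]
```

The cost of this enumeration grows exponentially with the number of variables, which is exactly what the factor-graph algorithms below avoid.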
We now briefly recap how the computation of these beliefs can be performed efficiently and automated over a factor graph \citep{loeliger2004signal}, and how this process can be interpreted as a Bethe free energy minimization problem \citep{yedidia_constructing_2005}. With these concepts firmly in place, we move to chance constraints and the formal problem statement in Sec.~\ref{sec:problem}.
\subsection{Factor Graphs for Marginal Belief Computation}
A factor graph can be used to visually represent a factorized function. In this paper we use the bi-partite factor graph representation. A bi-partite factor graph
\begin{align*}
\mathcal{G} = (\mathcal{F}, \mathcal{V}, \mathcal{E})\,,
\end{align*}
consists of variable-nodes $\mathcal{V}$, factor-nodes $\mathcal{F}$, and edges $\mathcal{E}$ that connect variable-nodes with factor-nodes. A variable-node $i \in \mathcal{V}$ is connected to a factor-node $a\in \mathcal{F}$ by an edge $(i, a) \in \mathcal{E}$ if (and only if) the variable $x_i$ is an argument of the factor-function $f_a$. An example section of a graph is drawn in Fig.~\ref{fig:graph_uncorrected}, where the circle and square represent a variable-node and a factor-node, respectively.
\begin{figure}[h]
\hfill
\begin{center}
\begin{tikzpicture}
[node distance=20mm,auto,>=stealth']
\node[smallbox] (f_b) {$f_b$};
\node[roundbox, right of=f_b] (x_j) {$x_j$};
\node (x_i_1) at ($(f_b)+(-1.8,0.9)$) {};
\node (x_i_2) at ($(f_b)+(-1.8,-0.9)$) {};
\node[left of=f_b, node distance=1.5cm, yshift=1mm] () {$\vdots$};
\node (x_k_1) at ($(x_j)+(1.8,0.9)$) {};
\node (x_k_2) at ($(x_j)+(1.8,-0.9)$) {};
\node[right of=x_j, node distance=1.5cm, yshift=1mm] () {$\vdots$};
\path[line] (x_i_1) edge[-] (f_b);
\path[line] (x_i_2) edge[-] (f_b);
\path[line] (f_b) edge[-] node[anchor=north]{$\substack{\rightarrow\\\mu_{bj}(x_j)}$} node[anchor=south]{$\substack{\mu_{jb}(x_j)\\ \leftarrow}$} (x_j);
\path[line] (x_j) edge[-] (x_k_1);
\path[line] (x_j) edge[-] (x_k_2);
\end{tikzpicture}
\end{center}
\caption{Bi-partite subgraph of a model around a variable-node $j$ (circle) and factor-node $b$ (square), with indicated messages. Ellipses represent a continuation of the model.}
\label{fig:graph_uncorrected}
\end{figure}
We write the neighborhood of a variable-node $i$ as $\mathcal{F}(i)$, which collects all factor-nodes in $\mathcal{F}$ that are direct neighbors of $i$. Similarly, $\mathcal{V}(a)$ collects all variable-nodes in $\mathcal{V}$ that are direct neighbors of $a$.
Suppose we are interested in obtaining a posterior belief $q_j(x_j)$. The belief propagation algorithm \citep{pearl1982reverend} then prescribes that we send messages from the branches of the graph towards the variable-node of interest, following the recursive application of the belief propagation update rules:
\begin{subequations}
\label{eq:bp_messages}
\begin{align}
\mu_{jb}(x_j) &= \prod_{\substack{a\in \mathcal{F}(j)\\a\neq b}} \mu_{aj}(x_j)\\
\mu_{bj}(x_j) &= \int f_b(x_b) \prod_{\substack{i\in \mathcal{V}(b)\\i\neq j}} \mu_{ib}(x_i) \d{x_{b\setminus j}}\,,
\end{align}
\end{subequations}
where $x_{b\setminus j}$ collects all $x_b$ with the exception of $x_j$. Here, $\mu_{jb}(x_j)$ represents the message from a variable-node $j\in \mathcal{V}$ to a neighboring factor-node $b\in \mathcal{F}(j)$; and reversely for $\mu_{bj}(x_j)$. These messages are illustrated in Fig.~\ref{fig:graph_uncorrected}.
The posterior belief can then be expressed as
\begin{align}
q_j(x_j) &= \frac{1}{Z_j} \mu_{jb}(x_j) \mu_{bj}(x_j)\,, \label{eq:bp_q_j}
\end{align}
with $Z_j = \int \mu_{jb}(x_j) \mu_{bj}(x_j) \d{x_j}$ a normalizing constant.
In practice, for numerical stability, messages are often re-normalized after computation. Furthermore, messages are usually scheduled for computation, and are often referred to by their position in the schedule instead of their location in the graph. We will use a similar notation in Sec.~\ref{sec:results}. See \citep{bishop2006pattern} for a more detailed introduction to (approximate) inference on bi-partite graphs.
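As a minimal illustration (our own toy model, not from the paper), on a two-factor binary chain $f(x_1,x_2,x_3)=f_a(x_1,x_2)\,f_b(x_2,x_3)$ the update rules \eqref{eq:bp_messages} reduce to matrix-vector products, and the resulting belief matches brute-force marginalization:

```python
import numpy as np

# Chain model f(x1, x2, x3) = f_a(x1, x2) * f_b(x2, x3), binary variables.
f_a = np.array([[1.0, 2.0],
                [0.5, 1.0]])
f_b = np.array([[3.0, 1.0],
                [1.0, 2.0]])

# Messages toward variable-node 2; leaf variable-nodes send uniform messages.
mu_1a = np.ones(2)        # x1 -> f_a
mu_a2 = mu_1a @ f_a       # f_a -> x2: sum_x1 f_a(x1, x2) mu_1a(x1)
mu_3b = np.ones(2)        # x3 -> f_b
mu_b2 = f_b @ mu_3b       # f_b -> x2: sum_x3 f_b(x2, x3) mu_3b(x3)

# Posterior belief: normalized product of incoming messages (eq. bp_q_j).
q2 = mu_a2 * mu_b2
q2 /= q2.sum()
print(q2)                 # [0.4, 0.6]
```

For this tree-structured example a single forward sweep suffices; on loopy graphs the same updates are iterated until (approximate) convergence.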
\subsection{Bethe Free Energy Interpretation}
The Bethe free energy for a factorized model of the form of \eqref{eq:model} is defined as
\begin{align}
F[q] &= \sum_{a\in \mathcal{F}} U_a[q_a] - \sum_{a\in \mathcal{F}} H[q_a] + \sum_{i\in \mathcal{V}} (d_i - 1) H[q_i]\,, \label{eq:bfe}
\end{align}
where $d_i$ represents the degree of variable-node $i$. Here $U_a[q_a] = -\int q_a(x_a) \log f_a(x_a) \d{x_a}$ denotes the average energy for factor $f_a$, and $H[q_a]=-\int q_a(x_a) \log q_a(x_a) \d{x_a}$ denotes the entropy. The Bethe free energy is optimized subject to normalization and marginalization constraints:
\begin{subequations}
\label{eq:norm_marg}
\begin{align}
\int q_a(x_a) \d{x_{a\setminus j}} &= q_j(x_j), \forall a \in \mathcal{F}, \forall j\in \mathcal{V}(a) \label{eq:marg}\\
\int q_a(x_a) \d{x_a} &= 1, \forall a \in \mathcal{F}\\
\int q_i(x_i) \d{x_i} &= 1, \forall i \in \mathcal{V}\,,
\end{align}
\end{subequations}
such that the $q_a$ and $q_i$ represent (approximate) posterior probability distributions (beliefs).
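A useful sanity check (our own, not from the paper): for an acyclic model the Bethe free energy evaluated at the exact marginals is known to coincide with $-\log Z$, where $Z$ is the partition function. The sketch below verifies this numerically for a small binary chain:

```python
import itertools
import numpy as np

# Two-factor binary chain f(x1, x2, x3) = f_a(x1, x2) * f_b(x2, x3).
f_a = np.array([[1.0, 2.0],
                [0.5, 1.0]])
f_b = np.array([[3.0, 1.0],
                [1.0, 2.0]])

# Partition function and exact marginals by enumeration.
joint = np.zeros((2, 2, 2))
for x1, x2, x3 in itertools.product([0, 1], repeat=3):
    joint[x1, x2, x3] = f_a[x1, x2] * f_b[x2, x3]
Z = joint.sum()
p = joint / Z
q_a = p.sum(axis=2)        # q_a(x1, x2)
q_b = p.sum(axis=0)        # q_b(x2, x3)
q_2 = p.sum(axis=(0, 2))   # q_2(x2); variable-node 2 has degree d_2 = 2

def U(q, fac):             # average energy of a factor
    return -(q * np.log(fac)).sum()

def H(q):                  # entropy of a belief
    return -(q * np.log(q)).sum()

# F = sum_a U_a - sum_a H_a + sum_i (d_i - 1) H_i; d_1 = d_3 = 1 drop out.
F = U(q_a, f_a) + U(q_b, f_b) - H(q_a) - H(q_b) + H(q_2)
print(F, -np.log(Z))       # equal for a tree-structured model
```

On cyclic graphs this equality no longer holds, and the Bethe free energy is only an approximation of the variational free energy.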
\subsection{Free Energy Minimization for Active Inference}
Active inference usually defines dynamic models that specialize variables into parameters, states, and observation and control sequences for past and future times. Free energy minimization for ActInf is then presented as a dual objective, where minimization of free energy for a model of past variables accounts for state and parameter estimation (perception), and minimization of free energy for a model of future variables accounts for policy planning \citep{baltieri2018modularity,van2019application}.
In the present paper we assume that the current state is observed and that model parameters are given. Therefore, this paper only concerns inference for policy planning. Extensions for perception are however straightforward. Chance constraints only affect inference for planning, and therefore standard techniques for state estimation and parameter learning can be employed \citep{van_de_laar_simulating_2019}.
Furthermore, the current paper employs the Bethe Free Energy (BFE) formulation \eqref{eq:bfe} for policy planning \citep{schwobel_active_2018,van_de_laar_simulating_2019} instead of the more traditional Expected Free Energy (EFE) \citep{friston_active_2015}. The BFE is known to lack the epistemic qualities of the EFE \citep{schwobel_active_2018}, which can be compensated for by introducing an additional mutual information term between the states and the observations to the BFE objective \citep{parr2019generalised}. The benefit of the uncompensated BFE however, is that traditional message passing algorithms, including (loopy) belief propagation, variational message passing, expectation propagation and generalized belief propagation algorithms can all be derived as fixed-point equations of the variational free energy by the use of variational calculus, see \citep{yedidia2000generalized,heskes_stable_2003,yedidia_constructing_2005,dauwels_variational_2007,zhang2017unifying}.
\subsection{Chance Constraints}
\label{sec:problem}
A chance constraint imposes that the probability mass of a belief $q_j(x_j), j\in \mathcal{V}$ outside of a `safe' region $\mathcal{S}_j \subset \mathcal{X}_j$ cannot exceed a pre-set threshold $\epsilon \in [0, 1]$. Formally, a chance constraint imposes the inequality
\begin{align}
1 - \epsilon &\leq \int_{\mathcal{S}_j} q_j(x_j) \d{x_j}\notag\\
&= \int_{\mathcal{X}_j} q_j(x_j)\, g_j(x_j) \d{x_j}\,, \label{eq:chance}
\end{align}
with
\begin{align*}
g_j(x_j) =
\begin{cases}
1 \text{ if } x_j \in \mathcal{S}_j\\
0 \text{ otherwise}\,.
\end{cases}
\end{align*}
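For a concrete (hypothetical) instance of \eqref{eq:chance}: when $q_j$ is a scalar Gaussian belief and the safe region is an interval, the safe mass can be evaluated in closed form with the standard normal CDF:

```python
from math import erf, sqrt

def normal_cdf(x, m=0.0, s=1.0):
    """CDF of a Gaussian with mean m and standard deviation s."""
    return 0.5 * (1.0 + erf((x - m) / (s * sqrt(2.0))))

def satisfies_chance_constraint(m, s, safe, eps):
    """Check eq. (chance): P(x in S) >= 1 - eps for q = N(m, s^2),
    with S = [a, b] an interval-shaped safe region."""
    a, b = safe
    safe_mass = normal_cdf(b, m, s) - normal_cdf(a, m, s)
    return safe_mass >= 1.0 - eps

# A standard-normal belief keeps about 95.4% of its mass inside [-2, 2] ...
print(satisfies_chance_constraint(0.0, 1.0, (-2.0, 2.0), eps=0.05))  # True
# ... but shifting the belief's mean violates the same constraint.
print(satisfies_chance_constraint(1.0, 1.0, (-2.0, 2.0), eps=0.05))  # False
```

For non-Gaussian beliefs or more complex safe regions the safe mass would be computed numerically instead.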
Our problem statement then becomes two-fold, namely:
\begin{enumerate}
\item Find the stationary points of the Bethe free energy \eqref{eq:bfe} under the normalization and marginalization constraints of \eqref{eq:norm_marg} and chance constraints of the form \eqref{eq:chance} (Theorem \ref{thm:corrected_belief});
\item Interpret the retrieval of stationary points of the chance-constrained Bethe free energy as message passing on a factor graph (Theorem \ref{thm:message_passing}).
\end{enumerate}
The simulations of Sec.~\ref{sec:results} further specialize the model variables into state, observation and control sequences and demonstrate the added value of chance constraints in an ActInf setting. Crucially, with an interpretation of chance constraints in terms of message passing on a factor graph, chance constraints can be readily applied to any factorized model. Formulating chance constraints as a plug-in module for approximate inference then greatly extends the application range of chance constraints.
\section{Chance-Constrained Message Passing}
\label{sec:methods}
In this section we formulate the method of chance-constrained message passing. We identify the stationary points of the chance-constrained Bethe free energy and interpret the result in terms of message passing on a factor graph. We work towards a practical message-passing update rule for chance-constrained variables, as summarized in Algorithm~\ref{alg:chance}. A brief introduction to variational calculus is available in Appendix~\ref{app:calculus_of_variations}. Proofs can be found in Appendix~\ref{app:proofs}.
\subsection{Stationary Points}
From the Bethe free energy \eqref{eq:bfe} and the constraints of \eqref{eq:norm_marg}, \eqref{eq:chance}, we can construct the Lagrangian
\begin{align}
L[q] &= F[q] + \sum_{i\in \mathcal{V}} \gamma_i \left[ \int q_i(x_i) \d{x_i} - 1 \right] + \sum_{a\in \mathcal{F}} \gamma_a \left[ \int q_a(x_a) \d{x_a} - 1 \right] \notag\\
& + \sum_{a\in \mathcal{F}} \sum_{i\in \mathcal{V}(a)} \int \zeta_{ia}(x_i)\left[q_i(x_i) - \int q_a(x_a) \d{x_{a\setminus i}}\right] \d{x_i} \notag\\
& + \sum_{i\in \mathcal{V}} \eta_i\left[\int q_i(x_i) g_i(x_i) \d{x_i} - (1 - \epsilon)\right] \label{eq:L_q}\,,
\end{align}
where the Lagrange multipliers $\gamma, \zeta, \eta$ enforce the constraints of \eqref{eq:norm_marg}, \eqref{eq:chance}.
Under strong duality, for the inequality constraint in \eqref{eq:chance} we have the complementary slackness condition \cite[Ch.~5]{b_boyd}. This condition states that for optimality we have $ \eta_i\left[\int q_i(x_i) g_i(x_i) \d{x_i} - (1 - \epsilon)\right] = 0$. Therefore, either $\eta_i > 0$, which implies that the chance constraint of \eqref{eq:chance} holds with equality (active) or $\eta_i=0$, which implies that the chance constraint may hold without equality (inactive).
In other words, the complementary slackness condition leaves two scenarios: (i) $\eta_i > 0$, in which case the chance constraint is active and \eqref{eq:chance} holds with equality, and (ii) $\eta_i = 0$, in which case the constraint is inactive and \eqref{eq:chance} may hold with strict inequality.
In Lemmas~\ref{lem:q_b_star} and~\ref{lem:q_j_star} we express the stationary points of $L[q]$ in terms of the beliefs. The proofs are presented in Appendices~\ref{pf:lem:q_b_star} and~\ref{pf:lem:q_j_star}.
\begin{lemma}
\label{lem:q_b_star}
Stationary points of \eqref{eq:L_q} as a functional of $q_b, b\in \mathcal{F}$, are of the form
\begin{align}
q_b^*(x_b) &= \frac{1}{Z_b} f_b(x_b) \prod_{i\in \mathcal{V}(b)} \mu_{ib}(x_i)\,, \label{eq:q_b_star_result}
\end{align}
with
\begin{align*}
Z_b &= \int f_b(x_b) \prod_{i\in \mathcal{V}(b)} \mu_{ib}(x_i) \d{x_b}
\end{align*}
a normalizing constant.
\end{lemma}
\begin{proof}
See Appendix~\ref{pf:lem:q_b_star}.
\end{proof}
Note that the $\mu_{ib}$ have not yet been identified or interpreted as messages. We will explicitly make this connection in Sec.~\ref{sec:cc_mp}.
\begin{lemma}
\label{lem:q_j_star}
Stationary points of \eqref{eq:L_q} as a functional of $q_j, j\in \mathcal{V}$, are of the form
\begin{align}
q_j^*(x_j; \eta_j) &= \frac{1}{Z_j(\eta_j)} \exp\!\left(-\eta_j g_j(x_j)\right) \prod_{a\in \mathcal{F}(j)} \mu_{aj}(x_j)\,, \label{eq:q_j_star_result}
\end{align}
with
\begin{align*}
Z_j(\eta_j) &= \int \exp\!\left(-\eta_j g_j(x_j)\right) \prod_{a\in \mathcal{F}(j)} \mu_{aj}(x_j) \d{x_j}
\end{align*}
a normalizer that still depends on $\eta_j$.
\end{lemma}
\begin{proof}
See Appendix~\ref{pf:lem:q_j_star}.
\end{proof}
Note that, in contrast to \eqref{eq:bp_q_j}, this result incorporates an additional exponential term in $\eta_j$. We will identify this multiplier in Sec.~\ref{sec:active_cc}. However, we already know that when the chance constraint for $j$ is inactive, $\eta_j=0$ as a consequence of the complementary slackness condition, and \eqref{eq:q_j_star_result} reduces to \eqref{eq:bp_q_j}.
\subsection{Active Chance Constraint}
\label{sec:active_cc}
In this section, we identify the stationary points under active chance constraint. The result is stated in Theorem~\ref{thm:corrected_belief}.
\begin{theorem}
\label{thm:corrected_belief}
Under active chance constraint, stationary points of \eqref{eq:L_q} as a functional of $q_j, j\in \mathcal{V}$ are of the form
\begin{align}
q_j^*(x_j; \eta_j=\eta_j^*) &=
\begin{cases}
\frac{1 - \epsilon}{\Phi^{(0)}_j} q^{(0)}_j(x_j) &\text{ if } x_j \in \mathcal{S}_j\\
\frac{\epsilon}{1 - \Phi^{(0)}_j} q^{(0)}_j(x_j) &\text{ otherwise,}
\end{cases} \label{eq:optimal_correction}
\end{align}
with
\begin{subequations}
\label{eq:q_phi_0}
\begin{align}
q^{(0)}_j(x_j) &= q_j^*(x_j; \eta_j=0)\,, \label{eq:q_0}\\
\Phi^{(0)}_j &= \int_{\mathcal{S}_j} q^{(0)}_j(x_j) \d{x_j}\,, \label{eq:phi_0}\\
\eta_j^* &= \log (\epsilon \Phi^{(0)}_j) - \log (1 - \epsilon) - \log (1 - \Phi^{(0)}_j)\,.
\end{align}
\end{subequations}
\end{theorem}
\begin{proof}
See Appendix~\ref{pf:thm:corrected_belief}.
\end{proof}
This remarkable result tells us that the corrected belief $q_j^*(x_j; \eta_j=\eta_j^*)$ is obtained by scaling the probability mass of the uncorrected belief $q_j^{(0)}(x_j)$ separately over the safe and unsafe regions, which defines the corrected belief as a mixture of truncated beliefs. The optimal scaling of \eqref{eq:optimal_correction} ensures that the probability mass outside the safe region equals exactly $\epsilon$.
The complementary slackness condition ensures that the chance constraint is only enforced if the probability mass of the \emph{unconstrained} belief overflows the `safe' region $\mathcal{S}_j$ by more than $\epsilon$; i.e., the uncorrected belief is `unsafe' when
\begin{align}
\epsilon &< 1 - \Phi^{(0)}_j\,, \label{eq:is_unsafe}
\end{align}
where we refer to $\Phi^{(0)}_j$ as the `safe mass'.
If \eqref{eq:is_unsafe} is satisfied, then the posterior density $q^{(0)}_j(x_j)$ is corrected according to \eqref{eq:optimal_correction}, which `pushes' the probability mass (just) back inside the safe region.
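The scaling in \eqref{eq:optimal_correction} can be verified numerically on a discretized belief. The following Python sketch is our own illustration (not the paper's implementation, and the helper name is ours); it checks that the corrected belief places exactly $1-\epsilon$ mass in the safe region while remaining normalized:

```python
import math

def correct_belief(q, safe_mask, eps):
    """One-shot correction of a discretized belief `q` (sums to 1):
    scale the mass inside/outside the safe region per the optimal
    correction rule, so exactly `eps` mass remains outside.
    Hypothetical helper, for illustration only."""
    phi0 = sum(p for p, s in zip(q, safe_mask) if s)  # safe mass of q^(0)
    assert 0.0 < phi0 < 1.0
    return [((1 - eps) / phi0) * p if s else (eps / (1 - phi0)) * p
            for p, s in zip(q, safe_mask)]

# Discretized standard normal on a grid, with safe region x > 1
grid = [i * 0.01 - 5.0 for i in range(1001)]
q = [math.exp(-x * x / 2) for x in grid]
Z = sum(q)
q = [p / Z for p in q]
safe = [x > 1.0 for x in grid]

qc = correct_belief(q, safe, eps=0.05)
phi = sum(p for p, s in zip(qc, safe) if s)  # equals 1 - eps by construction
```

Because the two regions are rescaled by complementary factors, normalization is preserved exactly; the correction only moves probability mass across the boundary of $\mathcal{S}_j$.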
\subsection{Chance-Constrained Message Passing}
\label{sec:cc_mp}
In this section, we show that chance constraints \eqref{eq:optimal_correction} can be interpreted as auxiliary factor-nodes (with a specific node-function), and can be enforced by belief propagation in an augmented graph.
\begin{theorem}
\label{thm:message_passing}
Consider a bipartite graph $\mathcal{G} = (\mathcal{F}, \mathcal{V}, \mathcal{E})$ with a variable node $j \in \mathcal{V}$, and an associated Bethe free energy \eqref{eq:bfe} with a chance constraint \eqref{eq:chance} on the belief $q_j(x_j)$. Then, stationary points of \eqref{eq:L_q} can be obtained by belief propagation on an augmented graph $\mathcal{G}' = (\mathcal{F}', \mathcal{V}, \mathcal{E}')$, where
\begin{subequations}
\label{eq:augmented_graph}
\begin{align}
\mathcal{F}' &= \mathcal{F} \cup \{g\}\\
\mathcal{E}' &= \mathcal{E} \cup \{(j, g)\}\,,
\end{align}
\end{subequations}
and auxiliary node function
\begin{align}
\label{eq:auxiliary_node_function}
f_g(x_j) &=
\begin{cases}
\frac{1 - \epsilon}{\Phi^{(0)}_j} &\text{ if } x_j \in \mathcal{S}_j\\
\frac{\epsilon}{1 - \Phi^{(0)}_j} &\text{ otherwise.}
\end{cases}
\end{align}
\end{theorem}
\begin{proof}
See Appendix~\ref{pf:thm:message_passing}.
\end{proof}
Theorem \ref{thm:message_passing} shows that chance-constrained message passing can be seamlessly incorporated within the belief propagation framework. Chance constraints simply enter the model definition as auxiliary factors, whose factor function depends upon the incoming message (see Fig.~\ref{fig:graph}). Because the uncorrected belief \eqref{eq:q_0} is represented by the (re-normalized) incoming message $\mu_{jg}(x_j)$, this allows for a modular application of chance constraints by augmenting the original graphical model with auxiliary nodes.
\begin{figure}[h]
\hfill
\begin{center}
\begin{tikzpicture}
[node distance=20mm,auto,>=stealth']
\node[smallbox] (f_b) {$f_b$};
\node[roundbox, right of=f_b] (x_j) {$x_j$};
\node[smallbox, dashed, above of=x_j, node distance=17mm] (f_g) {$f_g$};
\node (x_i_1) at ($(f_b)+(-1.8,0.9)$) {};
\node (x_i_2) at ($(f_b)+(-1.8,-0.9)$) {};
\node[left of=f_b, node distance=1.5cm, yshift=1mm] () {$\vdots$};
\node (x_k_1) at ($(x_j)+(1.8,0.9)$) {};
\node (x_k_2) at ($(x_j)+(1.8,-0.9)$) {};
\node[right of=x_j, node distance=1.5cm, yshift=1mm] () {$\vdots$};
\path[line] (x_i_1) edge[-] (f_b);
\path[line] (x_i_2) edge[-] (f_b);
\path[line] (f_b) edge[-] node[anchor=north]{$\substack{\rightarrow\\\mu_{bj}(x_j)}$} node[anchor=south]{$\substack{\mu_{jb}(x_j)\\ \leftarrow}$} (x_j);
\path[line] (x_j) edge[-] (x_k_1);
\path[line] (x_j) edge[-] (x_k_2);
\path[dashed] (f_g) edge[-] node[anchor=east, pos=0.35]{$_{\mu_{gj}(x_j) \downarrow}$} node[anchor=west, pos=0.35]{$_{\uparrow \mu_{jg}(x_j)}$} (x_j);
\end{tikzpicture}
\end{center}
\caption{Bi-partite graph around a chance-constrained variable $x_j$, with indicated auxiliary factor $f_g$ (dashed square) and messages. Ellipses represent the continued model by an arbitrary (possibly zero) number of connected edges.}
\label{fig:graph}
\end{figure}
\subsection{Gaussian Approximation}
Since the message $\mu_{gj}(x_j)$ introduces discontinuities, the computations for dependent messages may grow prohibitively complex. For efficient computations, it can be helpful to make a Gaussian approximation $\tilde{q}_j(x_j)$ to the corrected belief $q_j^*(x_j; \eta_j=\eta_j^*)$, e.g., by moment matching. The resulting (approximate) message then follows from
\begin{align*}
\mu_{gj}(x_j) = \tilde{q}^{(n)}_j(x_j)/\mu_{jg}(x_j)\,.
\end{align*}
If the message $\mu_{jg}(x_j)$ is also Gaussian, this computation is easily performed by subtracting the canonical statistics. This procedure then resembles the expectation propagation algorithm \citep{minka2001expectation,cox2018robust}. Interestingly, the expectation propagation algorithm can also be derived in terms of Bethe free energy optimization, where the marginalization constraints \eqref{eq:marg} are replaced by moment-matching constraints \citep{zhang2017unifying}. This makes the Gaussian approximation consistent with the Lagrangian approach as presented in this paper.
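For scalar Gaussian messages, the division above amounts to subtracting natural parameters (precision and precision-weighted mean). A minimal Python sketch of this bookkeeping (function names are ours, not from any toolbox):

```python
def gauss_divide(m1, v1, m2, v2):
    """Divide N(m1, v1) by N(m2, v2) by subtracting canonical
    (natural) parameters; only valid when the result has
    positive precision, i.e. v2 > v1."""
    prec = 1.0 / v1 - 1.0 / v2   # precision difference
    pm = m1 / v1 - m2 / v2       # precision-weighted mean difference
    assert prec > 0, "division result is not a proper Gaussian"
    return pm / prec, 1.0 / prec  # (mean, variance)

def gauss_multiply(m1, v1, m2, v2):
    """Multiply two Gaussian densities by adding canonical parameters."""
    prec = 1.0 / v1 + 1.0 / v2
    pm = m1 / v1 + m2 / v2
    return pm / prec, 1.0 / prec

# Sanity check: dividing a product by one factor recovers the other factor
mp, vp = gauss_multiply(0.0, 1.0, 1.0, 2.0)
m, v = gauss_divide(mp, vp, 1.0, 2.0)  # recovers (0.0, 1.0)
```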
The approximated belief $\tilde{q}_j(x_j)$, however, renders the chance constraint \eqref{eq:chance} inexact. As a result, the approximated belief needs to be iteratively re-corrected:
\begin{align}
q^{(n)}_j(x_j) &=
\begin{cases}
\frac{1 - \epsilon}{\Phi^{(n-1)}_j} \tilde{q}^{(n-1)}_j(x_j) &\text{ if } x_j \in \mathcal{S}_j\\
\frac{\epsilon}{1 - \Phi^{(n-1)}_j} \tilde{q}^{(n-1)}_j(x_j) &\text{ otherwise,}
\end{cases} \label{eq:iterative_correction}
\end{align}
where $n$ denotes an iteration counter. This leads to the procedure summarized in Alg.~\ref{alg:chance}, and depicted in Fig.~\ref{fig:curve}.
\begin{algorithm}
\caption{Chance-constrained message passing with Gaussian approximation}
\label{alg:chance}
\begin{algorithmic}
\STATE {Given a Gaussian inbound message $\mu_{jg}(x_j)$}
\STATE {Compute the uncorrected belief $q^{(0)}_j(x_j)$ through \eqref{eq:q_0}}
\STATE {Compute the safe mass $\Phi^{(0)}_j$ through \eqref{eq:phi_0}}
\STATE {Initialize the approximated belief $\tilde{q}^{(0)}_j(x_j) = q^{(0)}_j(x_j)$}
\STATE {Initialize the iteration counter $n = 0$}
\WHILE {$\epsilon + \delta < 1 - \Phi^{(n)}_j$}
\STATE {\% \emph{Chance constraint is violated with some tolerance $\delta$}}
\STATE {Increase the counter $n \leftarrow n+1$}
\STATE {Compute the corrected belief $q^{(n)}_j(x_j)$ through \eqref{eq:iterative_correction}}
\STATE {Approximate $\tilde{q}^{(n)}_j(x_j) \approx q^{(n)}_j(x_j)$ by Gaussian moment matching}
\STATE {Compute $\Phi^{(n)}_j = \int_{\mathcal{S}_j} \tilde{q}^{(n)}_j(x_j) \d{x_j}$, the safe mass of the approximated belief}
\ENDWHILE
\RETURN {The message $\mu_{gj}(x_j) = \tilde{q}^{(n)}_j(x_j)/\mu_{jg}(x_j)$}
\end{algorithmic}
\end{algorithm}
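A minimal Python sketch of Alg.~\ref{alg:chance} for a scalar Gaussian belief and a one-sided safe region follows. Grid integration stands in for the analytic truncated-normal moments, and all names and parameter values are illustrative assumptions, not taken from the paper's implementation:

```python
import math

def gauss_pdf(x, m, v):
    return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

def chance_corrected_belief(m, v, s, eps, delta=0.01, max_iter=50):
    """Sketch of the iterative re-correction: starting from the
    Gaussian belief N(m, v) with safe region S = (s, inf), apply
    the correction and re-approximate by Gaussian moment matching
    until at most eps + delta mass remains outside S."""
    lo, hi = m - 10 * math.sqrt(v), m + 10 * math.sqrt(v)
    n_pts = 4001
    dx = (hi - lo) / (n_pts - 1)
    grid = [lo + i * dx for i in range(n_pts)]
    q = [gauss_pdf(x, m, v) for x in grid]
    phi = sum(p for x, p in zip(grid, q) if x > s) * dx  # safe mass
    for _ in range(max_iter):
        if eps + delta >= 1 - phi:  # chance constraint satisfied
            break
        # correction step: rescale the safe and unsafe regions
        qc = [((1 - eps) / phi) * p if x > s else (eps / (1 - phi)) * p
              for x, p in zip(grid, q)]
        # Gaussian moment matching of the corrected (two-piece) belief
        m = sum(x * p for x, p in zip(grid, qc)) * dx
        v = sum((x - m) ** 2 * p for x, p in zip(grid, qc)) * dx
        q = [gauss_pdf(x, m, v) for x in grid]
        phi = sum(p for x, p in zip(grid, q) if x > s) * dx
    return m, v, phi
```

For example, an unsafe belief $\mathcal{N}(0,1)$ with $s=1$ and $\epsilon=0.2$ converges in a handful of iterations to a Gaussian whose mean has shifted into the safe region.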
\begin{figure}[h]
\hfill
\begin{center}
\pgfmathdeclarefunction{gauss}{2}{%
\pgfmathparse{1/(#2*sqrt(2*pi))*exp(-((x-#1)^2)/(2*#2^2))}%
}
\pgfmathdeclarefunction{phi}{3}{%
\pgfmathparse{1-1/(1 + exp(-0.07056*((#1-#2)/#3)^3 - 1.5976*(#1-#2)/#3))}%
}
\begin{tikzpicture}
\begin{axis}[
no markers, domain=0:5, samples=100,
axis lines*=left, xlabel=$x_j$,
every axis x label/.style={at=(current axis.right of origin),anchor=west},
height=4.5cm, width=10cm,
xtick={2}, xticklabels=\empty, ytick=\empty,
enlargelimits=false, clip=false, axis on top,
grid = major
]
\addplot [fill=black!20, draw=none, domain=2:5] {gauss(3,1)} \closedcycle;
\addplot [very thick,black] {gauss(3,1)};
\draw [yshift=-0.4cm, latex-latex](axis cs:2,0) -- node [fill=white] {$\mathcal{S}_j$} (axis cs:5,0);
\node [anchor=south] (q_tilde_n_min) at (axis cs:1.0,0.11) {$\tilde{q}^{(n-1)}_j(x_j)$};
\node [anchor=south] (phi_n_min) at (axis cs:3.15,0.11) {$\Phi^{(n-1)}_j$};
\end{axis}
\end{tikzpicture}
%
\begin{tikzpicture}
\begin{axis}[
no markers, domain=0:5, samples=100,
axis lines*=left, xlabel=$x_j$,
every axis x label/.style={at=(current axis.right of origin),anchor=west},
height=4.5cm, width=10cm,
xtick={2}, xticklabels=\empty, ytick=\empty,
enlargelimits=false, clip=false, axis on top,
grid = major
]
\addplot [fill=black!20, draw=none, domain=0:2] {0.07/(1-phi(2,3,1))*gauss(3,1)} \closedcycle;
\addplot [very thick, black, domain=0:2] {(0.07/(1-phi(2,3,1)))*gauss(3,1)};
\addplot [very thick, black, domain=2:5] {((1-0.07)/phi(2,3,1))*gauss(3,1)};
\addplot [very thick, black, dashed] {gauss(3.22,0.85)};
\draw [yshift=-0.4cm, latex-latex](axis cs:2,0) -- node [fill=white] {$\mathcal{S}_j$} (axis cs:5,0);
\node [anchor=south] (eps) at (axis cs:1.75,0.01) {$\epsilon$};
\node [anchor=south] (q_n) at (axis cs:3.9,0.11) {$q^{(n)}_j(x_j)$};
\node [anchor=south] (q_tilde_n) at (axis cs:4.3,0.35) {$\tilde{q}^{(n)}_j(x_j)$};
\end{axis}
\path (current bounding box.north) ++ (0,0.5cm);
\end{tikzpicture}
\end{center}
\caption{Example of beliefs as computed by Algorithm~\ref{alg:chance}. The top figure evaluates the probability mass within the ``safe'' zone. The bottom figure applies the correction (solid curve) and approximates the corrected belief by Gaussian moment matching (dashed curve).}
\label{fig:curve}
\end{figure}
With this algorithm, we have derived a practical chance-constrained message update from first principles. The message update can be readily applied to any continuous variable that requires a chance constraint. Note, however, that when multiple chance constraints are imposed on the model, the message passing algorithm itself becomes an iterative procedure because of circular message dependencies. For example, a message incoming to an auxiliary node $g$ might (indirectly) depend on a message that exits another auxiliary node $h$. In turn, this exiting message depends on the incoming message to $h$ (Alg.~\ref{alg:chance}), which depends on the message exiting $g$, etcetera. In order to break this circular message dependency, uninformative messages can be used to initialize the algorithm.
\section{Simulations}
\label{sec:results}
In this section we simulate a drone that aims to elevate itself above a given height threshold with a preset probability, under the influence of a stochastic vertical wind. We define the drone elevation level over time by $x = \{x_0, \dots, x_t, \dots, x_L\}, x_t\in \mathbb{R}$, and actions (ascension velocity) $a = \{a_0, \dots, a_t, \dots, a_L\}, a_t\in \mathbb{R}$. A time-dependent $m_{w,t}$ defines the expected wind velocity that acts upon the agent. The discrete-time stochastic system is defined as:
\begin{align*}
w_t &\sim \N{m_{w,t}, v_w}\\
x_{t+1} &= x_t + a_t + w_t\,,
\end{align*}
where $v_w$ defines the wind velocity variance.
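As a concrete illustration (separate from the paper's ForneyLab implementation, with arbitrary parameter values), the discrete-time system can be rolled out in a few lines of Python:

```python
import math
import random

def simulate(x0, actions, m_w, v_w, rng):
    """Roll out x_{t+1} = x_t + a_t + w_t with w_t ~ N(m_w[t], v_w).
    Illustrative sketch; names and settings are our own."""
    x = [x0]
    for t, a in enumerate(actions):
        w = rng.gauss(m_w[t], math.sqrt(v_w))
        x.append(x[-1] + a + w)
    return x

rng = random.Random(1)
# With zero expected wind and zero actions, elevation is a random walk
traj = simulate(2.0, [0.0] * 5, [0.0] * 5, v_w=0.2, rng=rng)
```

In expectation, the elevation after $L$ steps is $x_0 + \sum_t (a_t + m_{w,t})$, with variance $L v_w$ accumulated from the wind noise.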
We define an agent that directly observes its elevation level and has knowledge of the statistical system properties $m_{w,t}$ and $v_w$. The agent models future states of the system with a fixed time horizon $T$. As a shorthand notation, we write the future (including current) states $\overline{x}_t = \{x_t, \dots, x_{t+T}\}$ and control variables $\overline{u}_t = \{u_t, \dots, u_{t+T-1}\}$. For notational convenience, we drop the $t$ subscript from these collections. The agent model at time $t$ is defined as:
\begin{align}
f_t(\overline{x}, \overline{u}) = \prod_{k=t}^{t+T-1} p_{x,k}(x_{k+1}|u_k, x_k) p_u(u_k)\,, \label{eq:agent_model}
\end{align}
with a respective state transition model and control prior
\begin{subequations}
\label{eq:agent_state_control}
\begin{align}
p_{x,k}(x_{k+1}|u_k, x_k) &= \N{x_{k+1} | x_k + u_k + m_{w,k}, v_w}\\
p_u(u_k) &= \N{u_k | 0, \lambda^{-1}}\,.
\end{align}
\end{subequations}
We factorize and constrain the variational posterior distribution such that \citep{van_de_laar_simulating_2019}
\begin{align}
q_t(\overline{x}_{\setminus t}, \overline{u}) = q_t(\overline{x}_{\setminus t}) \prod_{k=t}^{t+T-1} \delta(u_k - a_k)\,, \label{eq:variational_posterior}
\end{align}
where $\overline{x}_{\setminus t}$ indicates the collection of latent states (the state sequence $\overline{x}$ without the observed current state $x_t$). The goal of the agent controller then becomes to find the policy $\pi_t = \{a_t, \dots, a_{t+T-1}\}$ that minimizes the Bethe free energy
\begin{align}
F[q_t; x_t, \pi_t] &= \idotsint q_t(\overline{x}_{\setminus t}, \overline{u}) \log \frac{q_t(\overline{x}_{\setminus t}, \overline{u})}{f_t(\overline{x}, \overline{u})} \d{\overline{x}_{\setminus t}} \d{\overline{u}}\,, \label{eq:system_bfe}
\end{align}
under the normalization and marginalization constraints of \eqref{eq:norm_marg} and chance constraints
\begin{align*}
1 - \epsilon \leq \int_{\mathcal{S}} q_{x,k}(x_k) \d{x_k},\, \forall k \in \{t+1, \dots, t+T\}\,,
\end{align*}
where the safe region $\mathcal{S}=(1, \infty)$ and violation probability $\epsilon$ are identical for all future state variables.
\subsection{Graphical Model and Schedule}
As detailed in Sec.~\ref{sec:methods}, Bethe free energy minimization under chance constraints can be performed by message passing on an augmented model. The graphical representation of the augmented model is depicted in Fig.~\ref{fig:augmented_model}.
\begin{figure}[h]
\hfill
\begin{center}
\begin{tikzpicture}
[node distance=15mm,auto,>=stealth']
\node[roundbox, fill=darkgrey] (x_t) {};
\node[yshift=-6mm] at (x_t) {$x_t$};
\node[smallbox, right of=x_t, node distance=3cm] (p_x_k) {};
\node[yshift=-5mm] at (p_x_k) {$p_{x,k}$};
\node[roundbox, above of=p_x_k, node distance=2cm] (u_k) {};
\node[xshift=6mm, yshift=3mm] at (u_k) {$u_k$};
\node[roundbox, fill=darkgrey, above left of=p_x_k, node distance=2cm] (m_w_k) {};
\node[yshift=7mm] at (m_w_k) {$m_{w,k}$};
\node[roundbox, fill=darkgrey, above right of=p_x_k, node distance=2cm] (v_w_k) {};
\node[xshift=5mm, yshift=5mm] at (v_w_k) {$v_w$};
\node[smallbox, above of=u_k, node distance=12mm] (p_u_k) {};
\node[xshift=5mm] at (p_u_k) {$p_u$};
\node[roundbox, fill=darkgrey, above of=p_u_k, node distance=12mm] (lambda_k) {};
\node[xshift=6mm] at (lambda_k) {$\lambda$};
\node[roundbox, right of=p_x_k] (x_k_plus_1) {};
\node[xshift=5mm, yshift=6mm] at (x_k_plus_1) {$x_{k+1}$};
\node[smallbox, dashed, below of=x_k_plus_1] (f_x_k_plus_1) {};
\node[xshift=-8mm, yshift=1mm] at (f_x_k_plus_1) {$f_{x,k+1}$};
\path[line] (x_t) edge[-] (p_x_k);
\path[line] (u_k) edge[-] (p_x_k);
\path[line] (m_w_k) edge[-] (p_x_k);
\path[line] (v_w_k) edge[-] (p_x_k);
\path[line] (p_u_k) edge[-] (u_k);
\path[line] (lambda_k) edge[-] (p_u_k);
\path[line] (p_x_k) edge[-] (x_k_plus_1);
\path[dashed] (x_k_plus_1) edge[-] (f_x_k_plus_1);
\node[right of=x_k_plus_1, node distance=2cm] (ellipsis) {$\dots$};
\path[line] (x_k_plus_1) edge[-] (ellipsis);
\draw[rounded corners] (0.75,-2.0) rectangle (5.75,5.0);
\node at (1.7,-1.7) {$_{k=t:t+T-1}$};
\end{tikzpicture}
\end{center}
\caption{Augmented graphical representation of the agent model \eqref{eq:agent_model}. Circles and squares indicate variable- and factor-nodes respectively. Auxiliary factor-nodes \eqref{eq:auxiliary_node_function} are dashed, and dark circles indicate observed variables or fixed parameters. Ellipses indicate a continuation of the framed section until the lookahead time horizon.}
\label{fig:augmented_model}
\end{figure}
\begin{figure}[h]
\hfill
\begin{center}
\begin{tikzpicture}
[node distance=15mm,auto,>=stealth']
\node (ellipses_k) {\dots};
\node[mediumbox, right of=ellipses_k, node distance=18mm] (u_add_k) {$+$};
\node[roundbox, right of=u_add_k] (aux_k) {};
\node[mediumbox, right of=aux_k] (m_add_k) {$+$};
\node[roundbox, right of=m_add_k, node distance=18mm] (x_k_plus_1) {};
\node[yshift=7mm] at (x_k_plus_1) {$x_{k+1}$};
\node[right of=x_k_plus_1] (ellipses_k_plus_1) {\dots};
\node[mediumbox, below of=x_k_plus_1, dashed] (f_x_k_plus_1) {};
\node[xshift=9mm] at (f_x_k_plus_1) {$f_{x,k+1}$};
\node[roundbox, fill=darkgrey, below of=m_add_k] (m_w_k) {};
\node[xshift=-10mm] at (m_w_k) {$m_{w,k}$};
\node[roundbox, above of=u_add_k] (aux_u) {};
\node[mediumbox, above of=aux_u] (aux_n) {$\mathcal{N}$};
\node[roundbox, fill=darkgrey, left of=aux_n] (v_w_k) {};
\node[yshift=6mm] at (v_w_k) {$v_w$};
\node[roundbox, above of=aux_n] (u_k) {};
\node[xshift=-7mm] at (u_k) {$u_k$};
\node[mediumbox, right of=u_k] (p_u_k) {$\mathcal{N}$};
\node[roundbox, fill=darkgrey, right of=p_u_k] (lambda_k) {};
\node[xshift=6mm] at (lambda_k) {$\lambda$};
\path[line] (ellipses_k) edge[-] (u_add_k);
\path[line] (u_add_k) edge[-] (aux_k);
\path[line] (aux_k) edge[-] (m_add_k);
\path[line] (m_add_k) edge[-] (x_k_plus_1);
\path[line] (x_k_plus_1) edge[-] (ellipses_k_plus_1);
\path[dashed] (x_k_plus_1) edge[-] (f_x_k_plus_1);
\path[line] (m_add_k) edge[-] (m_w_k);
\path[line] (aux_u) edge[-] (u_add_k);
\path[line] (aux_n) edge[-] (aux_u);
\path[line] (u_k) edge[-] (aux_n);
\path[line] (p_u_k) edge[-] (u_k);
\path[line] (lambda_k) edge[-] (p_u_k);
\path[line] (v_w_k) edge[-] (aux_n);
\msg{up}{right}{ellipses_k}{u_add_k}{0.3}{1}
\msg{left}{down}{u_k}{aux_n}{0.4}{2}
\darkmsg{left}{down}{aux_n}{aux_u}{0.5}{3}
\msg{up}{right}{u_add_k}{aux_k}{0.5}{4}
\msg{up}{right}{m_add_k}{x_k_plus_1}{0.45}{5}
\msg{right}{up}{x_k_plus_1}{f_x_k_plus_1}{0.65}{6}
\msg{up}{right}{x_k_plus_1}{ellipses_k_plus_1}{0.65}{7}
\msg{down}{left}{x_k_plus_1}{ellipses_k_plus_1}{0.65}{A}
\msg{left}{down}{x_k_plus_1}{f_x_k_plus_1}{0.65}{B}
\msg{down}{left}{m_add_k}{x_k_plus_1}{0.45}{C}
\msg{up}{left}{aux_k}{m_add_k}{0.5}{D}
\msg{left}{up}{aux_u}{u_add_k}{0.5}{E}
\darkmsg{right}{up}{u_k}{aux_n}{0.4}{F}
\msg{up}{left}{u_k}{p_u_k}{0.5}{G}
\msg{down}{left}{ellipses_k}{u_add_k}{0.3}{H}
\draw[dashed] (0.9,-0.5) rectangle (5.3,3.5);
\node at (4.8,3.1) {$p_{x,k}$};
\end{tikzpicture}
\end{center}
\caption{Augmented agent model \eqref{eq:agent_model}, with $p_{x,k}$ expanded according to \eqref{eq:agent_state_control} (dashed rectangle), and indicated forward (numbers) and backward (letters) message passing schedules for optimization of \eqref{eq:system_bfe}. Circle and square nodes indicate variable- and factor-nodes respectively. Dark nodes indicate observed variables or fixed parameters, and auxiliary factor-nodes \eqref{eq:auxiliary_node_function} are dashed. Ellipses indicate a continuation of the model. Dark messages are computed by the variational update rule, see \citep{winn2005variational,dauwels_variational_2007}.}
\label{fig:schedule}
\end{figure}
The schedule comprises a forward-backward scheme, as illustrated in Fig.~\ref{fig:schedule}.
Four message updates in Fig.~\ref{fig:schedule} are of particular interest. Firstly, since \eqref{eq:variational_posterior} constrains the belief over controls to a point-mass, it follows that
\begin{align*}
\mu_{\smallcircled{2}}^{(i)}(u_k) &= \delta(u_k - a_k^{(i-1)})\,,
\end{align*}
where $i$ counts the number of schedule (forward-backward) iterations. The schedule is initialized with $a_k^{(0)} = 0$ for all $k \geq t$. Secondly, $\mu_{\smallcircled{B}}^{(i)}(x_{k+1})$ takes on the role of $\mu_{jg}(x_j)$ in Alg.~\ref{alg:chance}. Because the noise in the model is Gaussian, this message will be an (unnormalized) Gaussian as well. Therefore, by application of Alg.~\ref{alg:chance}, the third message of interest, $\mu_{\smallcircled{6}}^{(i)}(x_{k+1})$ is computed. For the initial forward pass, $\mu_{\smallcircled{B}}^{(0)}(x_{k+1}) = 1$ is considered uninformative. Fourthly, $\mu_{\smalldarkcircled{F}}^{(i)}(u_k)$ carries information upward to the control variables. Because the variational posterior is chosen to factorize between the state and control sequence \eqref{eq:variational_posterior}, the $\mu_{\smalldarkcircled{F}}^{(i)}(u_k)$ message is computed by a variational update rule as detailed in \citep{winn2005variational} and \citep{dauwels_variational_2007}.
The action for the next iteration then follows from
\begin{align*}
q_{u,k}^{(i)}(u_k) &\propto \mu_{\smalldarkcircled{F}}^{(i)}(u_k)\, \mu_{\smallcircled{G}}^{(i)}(u_k)\\
a_k^{(i)} &= \operatorname{mode} q_{u,k}^{(i)}(u_k)\,.
\end{align*}
Iterating the schedule then corresponds to an expectation maximization scheme. The expectation step of this scheme computes the $\mu_{\smalldarkcircled{F}}^{(i)}(u_k)$ message from the actions $a_k^{(i-1)}$. The maximization step then chooses the updated actions $a_k^{(i)}$ as the current MAP-estimate of $u_k$. The schedule is iterated until the policy converges.
Message passing simulations\footnote{Source code for the simulations is available for download at \url{http://biaslab.github.io/materials/cc_simulations.zip}} are performed with the ForneyLab probabilistic programming toolbox \citep{cox2019factor}, version 0.11.3.
\subsection{Control Law}
Note that the Bethe free energy of \eqref{eq:system_bfe} is still a function of the observed current elevation $x_t$. We can then evaluate the optimal action $a_t$ as a function of the current elevation $x_t$ (the control law), for a given wind profile, chance constraint and model parameters. In order to gain an intuition for controller behavior, we fix $m_{w,t}=0$ for all $t$. We plot the control law in Fig.~\ref{fig:control_law}, for varying values of the lookahead horizon $T$, chance constraint threshold $\epsilon$, wind variance $v_w$ and control prior precision $\lambda$.
\begin{figure}[h]
\hfill
\begin{center}
\resizebox{0.8\columnwidth}{!}{\includegraphics{control_law_1_0.01_0.2_1.0e-12.png}}
\end{center}
\caption{Slices of the control law for $m_{w,t}=0, \mathcal{S}=(1, \infty)$, varied around reference setting $T=1, \epsilon=0.01, v_w=0.2, \lambda=10^{-12}$ (black curves). Dashed vertical lines indicate the minimal safe elevation.}
\label{fig:control_law}
\end{figure}
The top-left diagram shows that with a growing lookahead horizon $T$, the agent starts intervening at higher elevations. With this anticipatory effect the agent prepares for events in the more distant future. The top-right diagram shows that the agent also intervenes at higher elevations with decreasing $\epsilon$. When violation of the constraint grows less desirable, the agent must intervene earlier in order to ensure that sufficient probability mass is present in the safe region. Also note that no further action is proposed beyond an intervention threshold. Once the agent is sufficiently elevated, no corrections are proposed until the agent wanders (or is forced) below the intervention threshold. The bottom-left figure shows a similar effect for growing wind velocity variance $v_w$. When the system grows more stochastic, adherence to the chance constraint is ensured by intervening at higher elevations. Finally, the bottom-right figure illustrates what happens when the chance constraint is combined with a Gaussian prior constraint on control. Increasing the control prior precision $\lambda$ penalizes immediate correction. For low precisions (low penalty on control magnitude), the slope of the control law below the intervention threshold is equal to $1$, and compensation is immediate. Control grows more robust with growing precision, at the cost of prolonged chance constraint violation.
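The unit slope and the intervention threshold admit a back-of-envelope check for $T=1$. With a single lookahead step, the predicted state is Gaussian, $x_{t+1} \sim \mathcal{N}(x_t + a_t + m_{w,t}, v_w)$, and the smallest action satisfying the chance constraint is $a_t = \max\!\big(0,\, 1 - x_t - m_{w,t} + z_{1-\epsilon}\sqrt{v_w}\big)$, where $z_{1-\epsilon}$ is the standard-normal quantile. The sketch below is our own sanity check under these assumptions (it ignores the control prior, i.e., $\lambda \to 0$) and is not the message-passing solution itself:

```python
import math
from statistics import NormalDist

def min_action(x, m_w, v_w, eps, s=1.0):
    """Smallest a with P(x + a + w > s) >= 1 - eps, w ~ N(m_w, v_w).
    One-step analytic check; not the message-passing controller."""
    z = NormalDist().inv_cdf(1 - eps)  # standard-normal quantile z_{1-eps}
    return max(0.0, s - x - m_w + z * math.sqrt(v_w))

# Reference setting eps=0.01, v_w=0.2, m_w=0: the intervention
# threshold sits at x = 1 + z_0.99 * sqrt(0.2) (around 2.04),
# with unit slope of the control law below it.
a0 = min_action(0.0, 0.0, 0.2, 0.01)
```

This reproduces both qualitative features of the reference control law: below the threshold the required correction decreases one-for-one with elevation, and above it no action is proposed.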
\subsection{Comparison Against a Goal-Driven Agent}
In order to illustrate the difference in behavior between a chance- and a goal-driven ActInf agent, we compare the results of Fig.~\ref{fig:control_law} with an ActInf agent where the chance constraint is replaced by a goal prior. We use the graphical model definition of Fig.~\ref{fig:augmented_model} and define the auxiliary node function as a fixed prior $f_{x, k+1}(x_{k+1}) = \N{x_{k+1} | m_x, \vartheta_x}$ for all $t \leq k \leq t+T-1$. We choose $m_x=2$, and the variance $\vartheta_x=0.18478$ such that the overflow of the safe region $1 - \int_{\mathcal{S}} f_{x, k+1}(x_{k+1}) \d{x_{k+1}} \approx 0.01$ resembles the situation for $\epsilon=0.01$. The message passing schedule then follows the definition of Fig.~\ref{fig:schedule}, where $\mu_{\smallcircled{6}}^{(i)}(x_{k+1})$ is no longer computed by Alg.~\ref{alg:chance} and propagates the fixed goal prior instead. Fig.~\ref{fig:control_law_reference} shows the resulting control law for $m_{w,t}=0, T=1, v_w=0.2$ and varying $\lambda$.
The results of Fig.~\ref{fig:control_law_reference} show that the control for the goal-driven agent grows more robust with increasing $\lambda$ -- similar to the control law for the chance-driven agent (Fig.~\ref{fig:control_law}, bottom right). For the smallest $\lambda$, the control law for the prior-driven agent resembles the corresponding control law for the chance-driven agent (dotted curve) only for elevations $x < 2$. For elevations $x > 2$, the goal-driven agent proposes downward corrections, while the chance-driven agent proposes no corrections. This comparison illustrates how a chance-driven agent avoids unnecessary interventions.
\begin{figure}[h]
\hfill
\begin{center}
\resizebox{0.5\columnwidth}{!}{\includegraphics{control_law_reference_1_2.0_0.18478_0.2_1.0e-12.png}}
\end{center}
\caption{Slices of the control law for a goal-driven agent with $m_{w,t}=0, T=1, m_x=2, \vartheta_x=0.18478, v_w=0.2$ with varying $\lambda$. The dashed vertical line indicates the minimal safe elevation. The black dotted curve represents the reference result ($\lambda=10^{-12}$) for the chance-driven agent (Fig.~\ref{fig:control_law}, black curves).}
\label{fig:control_law_reference}
\end{figure}
\subsection{Simulation Results}
In this section we study an active inference agent in interaction with a simulated environment. The action-perception loop is based on \citep{van_de_laar_simulating_2019} and consists of four steps at every time $t$:
\begin{enumerate}
\item \emph{Observe} the current agent elevation;
\item \emph{Infer} a policy from the current elevation and the future expected wind velocities by chance-constrained message passing;
\item \emph{Act} by selecting the first (current) action from the inferred policy;
\item \emph{Execute} the selected action in the system and advance the time index by one.
\end{enumerate}
\begin{figure}[h]
\hfill
\makebox[\textwidth][c]{
\begin{subfigure}[t]{0.65\textwidth}
\centering
\includegraphics[width=\textwidth]{sim_1_0.01_10000_0.2_1.0e-12_10.08.png}
\end{subfigure}
\begin{subfigure}[t]{0.65\textwidth}
\centering
\includegraphics[width=\textwidth]{sim_reference_1_2.0_0.18478_10000_0.2_1.0e-12_12.99.png}
\end{subfigure}}
\caption{Results for ten thousand simulations with varying wind strength over time, and $T=1, v_w=0.2, \lambda=10^{-12}$, for a chance-driven agent ($\epsilon=0.01$, left), and a goal-driven agent ($m_x=2, \vartheta_x=0.18478$, right).}
\label{fig:simulation}
\end{figure}
The results for ten thousand independent runs are plotted in Fig.~\ref{fig:simulation} for a chance-driven agent (left) and a goal-driven agent (right). The first row of diagrams plots the expected wind velocity over time, which is identical for each run. The sampled wind velocity trajectories $w_t$ do vary per run, under influence of the wind velocity variance $v_w$. For $5 \leq t < 10$ a downward draft attempts to push the drone below the minimal safe elevation (dashed). The second row plots the drone elevation trajectory for a randomly selected subset of runs. Corresponding actions are plotted in the third row. The fourth row evaluates the relative number of runs that violate the safe-zone over time.
It can be seen that both agents undertake corrective actions in order to compensate for the downward wind. However, while the chance-driven agent (left) only proposes upward corrections below the intervention threshold, the goal-driven agent (right) proposes additional downward corrections above the threshold. Furthermore, it can be seen that the maximal empirical violation for the chance-constrained agent mostly remains below the chance constraint target violation probability of $\epsilon=0.01$ (dashed), while the goal-driven agent systematically overshoots the target violation probability, i.e. violates the chance constraint. Compared to the chance-driven agent, the maximal empirical violations for the goal-driven agent are also larger. This effect can be explained in terms of the constrained beliefs. Namely, the chance-driven agent constrains the posterior beliefs, while the goal-driven agent imposes prior constraints on the model. Prior constraints may still be violated by the corresponding posterior beliefs, leading to more pronounced empirical violations.
\section{Conclusions}
\label{sec:conclusion}
In this paper, we formulated chance-constrained optimization of the Bethe free energy in terms of message passing on a factor graph. We showed that, in the factor graph representation of the generative model, chance constraints can be imposed by auxiliary factors that force (a specified portion of) the probability mass of the chance-constrained beliefs inside a designated safe-zone. Message passing on the augmented graph, with the auxiliary factor-nodes included in the graph, then automatically balances the imposed chance constraints with additional (prior) constraints on the generative model. Chance constraints can thus be interpreted as modular click-on extensions to the generative model, similar to conventional factor-nodes \citep{loeliger2004signal}, and can thus be used to complement message-passing formulations on generative neural models \citep{friston2017graphical,van2018forneylab}.
However, because the analytical result for the chance-constrained update includes an inherent discontinuity, direct application of this rule may still lead to message updates that grow prohibitively complex. To remedy this, we proposed an algorithm that approximates the resulting message with a Gaussian form. This algorithm offers a tractable formulation of chance-constrained message passing. The proposed message passing interpretation of chance constraints then vastly enhances the modularity and flexibility of chance-constrained inference, and can accelerate the search for workable models \citep{blei2014build}.
We demonstrated chance-constrained message passing in the context of active inference. We compared the simulated behavior of a chance-driven agent with a goal-driven agent, where the chance constraints are replaced by traditional prior beliefs on future outcomes. The results illustrate how the goal-driven agent continually proposes corrections, whereas the chance-driven agent ceases interventions above a threshold. Chance-constrained ActInf may thus avoid unnecessary interventions and reduce the cost of control.
The results for the chance-driven agent showed that, in the absence of additional prior constraints, the empirical chance constraint violation ratio mostly remains below the pre-set target violation probability. An added prior constraint on controls makes control more robust at the cost of prolonged chance constraint violation. Chance-constrained active inference thus weighs all imposed constraints on the generative model, allowing, e.g., for a trade-off between robust control and empirical chance constraint violation.
\subsection*{Acknowledgments}
This work was supported, in part, by GN Hearing A/S and the Swedish Research Council (under Grants 2015-04011 and 2018-03701).
\section{Introduction}
Cataclysmic variables (CVs) are semi-detached binaries, in which a white dwarf (WD) accretes from a low-mass donor star (see e.g.\ \citealt{warner}). In magnetic CVs, the WD has a magnetic field sufficiently strong to alter the accretion flow significantly. These systems are divided into two subtypes, namely polars and intermediate polars (IPs).
Polars are characterized by strong circular and linear polarization, modulated at the orbital period ($P_{orb}$), indicating WD rotation that is synchronized (or very close to synchronized) with the binary orbit. IPs have highly coherent pulsations, at periods $<P_{orb}$, in their X-ray and/or optical light curves, interpreted as the spin modulation of a WD rotating at a period much shorter than the orbit.
\citet{Patterson1979} discovered very stable optical pulsations at a period of 33.07~s in the nearby CV AE Aqr, leading him to suggest that it is an ``oblique rotator'' (a system in which the WD spin and magnetic axes are misaligned, and the accretion flow is magnetically channeled from the inner edge of a truncated accretion disc, onto the WD), the model now used to describe other IPs. \cite{deJager1994} found that the WD in AE Aqr is spinning down at a high rate ($5.6\times 10^{-14}\,{\rm s/s}$). AE Aqr was also one of the first CVs to be detected in the radio \citep{BookbinderLamb1987}, and at $\sim 5 \times 10^{16}\,{\rm erg\,s^{-1}\,Hz^{-1}}$ is amongst the most radio luminous CVs \citep[e.g.][]{Bastian1988, Barrett2017, CoppejansKnigge2020}. The radio emission is modelled as a superposition of synchrotron sources \citep{Bastian1988}.
Besides the 33-s signal, the observed behaviour for which AE Aqr is best known is its spectacular flaring. Many observers have reported large-amplitude aperiodic flaring in optical, X-ray, and radio light curves (on time scales of minutes to tens of minutes), and also in the optical emission lines of this system \citep[e.g.][]{Patterson1979, Bastian1988, Skidmore2003, Welsh1998, ChoiDotani1999, ChoiDotani2006}.
The rapid spin-down, with spin-down power greatly exceeding the radiated power, the lack of evidence for a disc, and the spectacular flares seen in AE Aqr are explained by a ``magnetic propeller'' where the rapidly rotating, strongly magnetic WD expels most of the infalling gas, preventing it from accreting \citep{EracleousHorne1996, WynnKingHorne1997}. AE Aqr remains the only CV that is modelled as a magnetic propeller, and up until recently, had the shortest known WD spin period\footnote{In the last few months, very short WD spin periods were identified in two CVs, CTCVJ2056-3014 and V1460 Her (29.6 and 38.6~s, respectively). These systems however both have at least partial discs and are accretion powered; i.e.\ apart from harbouring rapidly spinning WDs, they are similar to normal IPs, rather than to AE Aqr \citep{LopesdeOliveira2020, Ashley2020}.}.
The recently discovered LAMOST J024048.51+195226.9, also catalogued as CRTS J024048.5+195227, (hereafter J0240) may be another example of a magnetic propeller. \cite{Thorstensen2020} points out several optical properties reminiscent of AE Aqr, including large-amplitude flares on time scales down to $\sim$1 minute, weak or absent He\,{\scriptsize II}\,$\lambda$4686 emission (usually very strong in magnetic CVs, but not in AE Aqr), and emission lines that show irregular variations, with radial velocities that do not seem to trace the orbit. The system has an orbital period of 7.3 hours \citep{Drake2014, Thorstensen2020, LittlefieldGarnavich2020}.
If J0240 really is another propeller system, an observational signature would be a rapid WD spin modulation in optical and/or X-ray light curves (the light curves of \cite{Thorstensen2020} had only 23 and 30~s cadence). One would also expect it to show bright radio emission, by analogy with AE Aqr. Here we present the first radio detection of J0240, as well as optical high-speed photometry and photo-polarimetry. Our observations are described in Section~\ref{sec:observations}. The results are presented and discussed in Section~\ref{sec:results}. Finally, Section~\ref{sec:summary} summarizes our work.
\section{Observations}
\label{sec:observations}
We obtained radio and optical data of J0240 in the second half of 2020, using the MeerKAT radio interferometer, the South African Astronomical Observatory (SAAO) 1-m optical telescope, Lesedi, and the SAAO 74-inch telescope. Table~\ref{tab:obslog} gives a log of the observations.
\subsection{MeerKAT radio interferometer}
A roughly two-hour observation was taken with MeerKAT on 12 Aug 2020, as part of the ThunderKAT survey (The Hunt for Dynamic and Explosive Radio Transients using MeerKAT; \citealt{Fender2017}). The observation was done in the L-band (centered at 1284 MHz, with a bandwidth of 856 MHz covered by 4096 channels), using 59 of the 64 antennas. It started and ended with scans of the primary calibrator (J0408-6545), and alternated between the secondary calibrator (J0238+1636) and target for the rest of the observation. Visibilities were recorded every 2 seconds, and the 6 target scans were each approximately 15 minutes in length.
The data were calibrated and imaged using the OxKAT\footnote{Available from https://github.com/IanHeywood/oxkat/} pipeline \citep{oxcat}. This implements standard \textsc{casa} \citep[e.g.][]{McMullin2007} routines, as well as the SARAO tricolour flagger\footnote{https://github.com/ska-sa/tricolour/}, and the imaging packages DDFacet \citep{ddfacet} and WSCLEAN \citep{wsclean}. We found that second generation calibration (direction independent self-calibration) does not significantly improve the image quality in the region near the target, and therefore report results obtained from applying only first generation calibration. Noise and flux density measurements were performed with PyBDSF \citep{blobs}.
\subsection{Optical photometry with the Lesedi 1-m telescope}
We observed J0240 on three consecutive nights in September 2020 with the new 1-m optical telescope, Lesedi, at the SAAO site in Sutherland. We performed high-speed photometry in white light (i.e.\ filterless photometry), using the imager SHOC \citep{shoc1, shoc2}. The integration time was 5 s on all nights, and since this is a frame-transfer CCD, there was no dead-time between integrations. Only quite short ($<3$ hours) observations were possible at the northern declination of J0240, and this early in the season.
We used the TEA-Phot\footnote{https://bitbucket.org/DominicBowman/tea-phot/src/master/} code \citep{BowmanHoldsworth2019} to extract the photometry. Since no filter was used, it is not possible to precisely place these data on a standard photometric system. However, by comparing the instrumental magnitudes of several stars in the field with catalogued $B$- and $R$-band measurements, we are able to perform a very rough calibration (to within $0.1$~mag), and verify that J0240 was at about the same brightness as during the observations of \citet{Thorstensen2020}, just above 17th magnitude for most of the time\footnote{Long-term CRTS, ASAS-SN, and ATLAS photometry also show no evidence of outbursts or low states in J0240.}.
\subsection{Optical all-Stokes polarimetry with the 74-inch telescope}
Photo-polarimetry of J0240 was obtained on a single night in November 2020, over a period of $\sim$1 hour, using the SAAO 74-inch telescope with the HIgh-speed Photo-POlarimeter (HIPPO; \citealt{hippo}). This instrument performs time-resolved, simultaneous all-Stokes observations. No filter was used, implying a very broad bandpass (roughly 3500 to 9000~\AA). The polarimetric data were reduced as described in \cite{hippo}.
\begin{table}
\centering
\caption{Log of the observations of J0240. The MeerKAT radio observations used J0408-6545 as primary calibrator and J0238+1636 as secondary calibrator. For the Lesedi photometry, the integration time (and time resolution) was 5~s on all nights.}
\label{tab:obslog}
\begin{tabular}{ll}
\hline
Start Date and Time (UTC) & Total time on target (hours) \\
\hline
\multicolumn{2}{l}{\textbf{MeerKAT}} \\
2020 Aug 12 00:18:49.7 & 1.501 \\[0.2cm]
\multicolumn{2}{l}{\textbf{Lesedi + SHOC}} \\
2020 Sep 8 00:19:46.0 & 2.499 \\
2020 Sep 9 00:56:22.0 & 2.653 \\
2020 Sep 10 00:07:45.0 & 2.012 \\[0.2cm]
\multicolumn{2}{l}{\textbf{74-inch + HIPPO}} \\
2020 Nov 12 22:27:41.0 & 1.117 \\
\hline
\end{tabular}
\end{table}
\section{Results and Discussion}
\label{sec:results}
\subsection{The radio data}
The radio map of a small area at the center of the MeerKAT field, with contours overlaid, is shown in Fig.~\ref{fig:radiomap}. We detect a bright radio point source coincident with the optical position of J0240, and assume it is the CV. The integrated radio flux density of J0240 is $0.60 \pm 0.02\,{\rm mJy}$, and the RMS noise measured nearby is $9.0\,\mu {\rm Jy\, beam^{-1}}$.
Splitting the 856 MHz bandwidth into 8 frequency sub-bands, we measure an in-band spectral index of $\alpha =-0.6 \pm 0.2$ (where $S_\nu \propto \nu^\alpha$). Note, however, that there are still bandpass calibration uncertainties for MeerKAT, and also that the source is variable (see below).
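The in-band spectral index is simply the slope of $\log S_\nu$ against $\log \nu$ across the sub-bands. A minimal numpy sketch; the sub-band centre frequencies and flux densities below are synthetic (drawn from an exact $\alpha=-0.6$ power law), not the measured values:

```python
import numpy as np

# Hypothetical sub-band centres (GHz) spanning the MeerKAT L-band,
# with synthetic flux densities following S_nu = S_0 * nu^alpha exactly.
nu = np.linspace(0.91, 1.66, 8)            # 8 sub-bands (assumed centres)
alpha_true = -0.6
s_nu = 0.60 * (nu / 1.284) ** alpha_true   # mJy, normalised at 1.284 GHz

# Spectral index = slope of the log-log relation.
alpha_fit, log_s0 = np.polyfit(np.log10(nu), np.log10(s_nu), 1)
print(round(alpha_fit, 3))  # recovers -0.6 by construction
```

With real sub-band images, the scatter of the points about this fit gives the quoted uncertainty on $\alpha$.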
In the case of AE Aqr, the time averaged spectral index is positive \citep{Abada-Simon1993, Dubus2007}, but it varies rapidly, as may be expected from a changing optical depth in synchrotron emitting blobs \citep{Bastian1988}.
\begin{figure}
\includegraphics[width=\columnwidth]{colcont_36912.eps}
\caption{A $100'' \times 100''$ area of the MeerKAT radio map of the J0240 field, with north up and east to the left. The RMS noise in the vicinity of the point source at the center is $9.0\,\mu {\rm Jy\, beam^{-1}}$, and contours are at 3, 6, 9, and 12 $\times$ the RMS value. The blue cross marks the optical position of J0240, as measured by \emph{Gaia} (the size of this symbol does not denote anything). The inset in the bottom left-hand corner shows the beam shape and size ($11.42'' \times 6.11''$ with a position angle of $29.2^\circ$).}
\label{fig:radiomap}
\end{figure}
At the \emph{Gaia} Early Data Release 3 \citep{gaia, gaiadr3} distance of $620 \pm 30\,{\rm pc}$, J0240 has a specific radio luminosity of $(2.7 \pm 0.3) \times 10^{17}\,{\rm erg\,s^{-1}\,Hz^{-1}}$, placing it amongst the most radio-luminous CVs. Fig.~\ref{fig:radiolum} shows radio luminosity as a function of orbital period for CVs belonging to different classes (see \citealt{CoppejansKnigge2020} for an earlier version of this plot). Besides J0240, this includes all CVs with known distances and recent, sensitive radio observations between $\sim$1 and $12\,{\rm GHz}$. In addition, we include the WD pulsar, AR Sco \citep{Marsh2016, Marcote2017}. The highest point in this plot (belonging to the IP V1323 Her) is at slightly lower luminosity than shown in \citet{CoppejansKnigge2020}, because the distance was revised down (from 2240 to 1950~pc) in the \emph{Gaia} EDR3.
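The quoted specific luminosity follows from $L_\nu = 4\pi d^2 S_\nu$. A quick arithmetic check in cgs units (the parsec conversion is the standard one):

```python
import math

PC_CM = 3.0857e18                  # one parsec in cm
S_nu = 0.60e-3 * 1e-23             # 0.60 mJy in erg s^-1 cm^-2 Hz^-1
d = 620 * PC_CM                    # Gaia EDR3 distance in cm

L_nu = 4 * math.pi * d**2 * S_nu   # specific luminosity, erg s^-1 Hz^-1
print(f"{L_nu:.2e}")               # ~2.8e17, consistent with the quoted value
```

Propagating the $\pm 30\,{\rm pc}$ distance uncertainty (which enters squared) gives roughly the $\pm 0.3 \times 10^{17}$ quoted.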
While J0240, AE Aqr, and the two normal IPs shown in Fig.~\ref{fig:radiolum} are all similarly radio-luminous, IPs are detected at radio wavelengths at a rate of $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$10\%. \cite{Barrett2017} obtained sensitive VLA observations of more than 40 IPs and candidate IPs, including several well-studied systems at smaller distances than J0240, and (besides AE Aqr) detected only two. These two IPs, V1323 Her and Cas 1 (also known as RX J0153.3+7446) are both at large distances ($\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}} 1.5\,{\rm kpc}$), and have only low $S/N$ detections in only a small subset of the VLA observations \citep{Barrett2017}.
\begin{figure*}
\includegraphics[width=160mm]{radiolum.ps}
\caption{Radio luminosities as a function of orbital period for 42 CVs and the WD pulsar AR Sco. The CV classes are distinguished by different symbols. Some of these systems have been observed at a range of radio flux densities, which we indicate with the fine vertical bars. The error bars for J0240 are not shown, but are smaller than the symbol size. Points with error bars are the L-band observations from \citet{Hewitt2020}; other data are at higher frequencies, mainly C- and X-band, and are from \citet{Coppejans2016, Coppejans2015, Kording2008, MillerJones2011, Russell2016, Marsh2016, Marcote2017, Barrett2017, Barrett2020}. }
\label{fig:radiolum}
\end{figure*}
In order to look for radio variability, we split the observations into six 15 minute time bins, corresponding to the on-target scans, and imaged each of these epochs separately. Fig.~\ref{fig:radiolc} shows the resulting radio light curves. The bottom panel is the integrated radio flux density as a function of time, showing that J0240 varies by $>5 \sigma$, on a timescale of tens of minutes.
The top panel of Fig.~\ref{fig:radiolc} shows each time bin further split into two equal frequency bins.
\begin{figure}
\includegraphics[width=\columnwidth]{radlc.ps}
\caption{The radio light curve of J0240, constructed from the data shown in Fig.~\ref{fig:radiomap}. Each time bin is close to 15 minutes. The bottom panel shows the full L-band data, while in the top panel, this was split into 2 frequency sub-bands, centered on 1.07~GHz and 1.50~GHz.}
\label{fig:radiolc}
\end{figure}
If we assume that the radio emission is due to synchrotron radiation, and that we caught the end of a radio flare that evolved from optically thick to optically thin because of a decreasing optical depth to synchrotron self absorption, we can estimate a minimum energy following \cite{FenderBright2019}. Assuming further that the flare peaked at 0.8 mJy (at 1.5 GHz), then, for an integrated radio luminosity of $2.2 \times 10^{31}\,{\rm erg\,s^{-1}}$, the minimum energy is $1.5\times 10^{35}\,{\rm erg}$. This value corresponds to an emitting region size of $1.0 \times 10^{12}\,{\rm cm}$ and a plasma magnetic field of $0.53\,{\rm G}$. This energy is higher than the minimum energy found for a radio flare from SS Cyg by \citet{Fender2019}, but the uncertainty on our estimate is at least an order of magnitude.
\subsection{The optical high-speed photometry}
The optical light curves of J0240 are displayed in Fig.~\ref{fig:optlc}. These are all differential light curves, implying that colour differences between J0240 and comparison stars were ignored in correcting the photometry for atmospheric extinction. The system displays the rapid flickering typically seen in CVs (the observational signature of mass transfer), but these short runs show very little of the flaring reported by \cite{Thorstensen2020}. The photometry is also of fairly low quality, since it was obtained mainly in conditions of poor seeing.
Our Nyquist frequency of 0.1~Hz should be sufficiently high to allow a detection of the WD spin signal: a spin frequency above this (i.e., a period shorter than 10~s) would imply a WD above $1\,{\rm M}_\odot$ (see e.g.\ \citealt{Otoniel2020}). While this is of course possible, there is nothing to indicate that the WD in J0240 is more massive than the $\simeq 0.8\,{\rm M}_\odot$ more commonly measured for WDs in CVs (a mass that would imply a break-up spin period closer to 20~s).
\begin{figure}
\includegraphics[width=\columnwidth]{lc_lesedi.ps}
\caption{The unfiltered optical light curves of J0240, at 5-s time resolution, on 3 consecutive nights. Seeing conditions were especially poor on the third night (top panel). The very low points in that light curve are measured in frames with very bad image quality; these spurious points were removed before the Fourier transforms were calculated.}
\label{fig:optlc}
\end{figure}
Fig.~\ref{fig:fts} shows discrete Fourier transforms of our optical light curves. We detect no coherent signals. It would, however, not be at all surprising if this system is in future found to have an optical modulation similar to the 33-s signal in AE Aqr. In his optical data of AE Aqr, \cite{Patterson1979} observed an amplitude exceeding 1\% (corresponding to $2.5 \log(101/100)\simeq 0.01\,{\rm mag}$) at times during flares, but the average amplitude was only 0.2 to 0.3\%. \cite{BeskrovnayaIkhsanovBruch1995} also reported no detection in photometry spanning 4 weeks, implying a conservative upper limit amplitude of $<0.005$ mag.
Fig.~\ref{fig:ftzoom} zooms in on the high frequency end of the Fourier transform of the combined J0240 light curve (this of course has lower white noise than the Fourier transforms of individual light curves, but actually also has less sensitivity to a signal that is present for only a short while). This shows that while there is no persistently present signal at the 1\% level in these data, a 0.2 to 0.3\% signal would be well below the noise level. Our photometry therefore cannot rule out a rapidly spinning WD giving rise to a modulation very similar to the one observed in AE Aqr.
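The detectability argument can be illustrated by injecting an AE Aqr-like 33-s sinusoid into synthetic 5-s-cadence photometry and reading its amplitude off the discrete Fourier transform. The run length and noise level below are illustrative assumptions, not fits to our data:

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 5.0                                   # s, as in the Lesedi runs
t = np.arange(0, 2.5 * 3600, dt)           # one ~2.5 h run
f0 = 1.0 / 33.08                           # AE Aqr-like spin frequency (Hz)
amp_in = 0.01                              # 1% fractional amplitude
y = amp_in * np.sin(2 * np.pi * f0 * t) + rng.normal(0, 0.02, t.size)

freqs = np.fft.rfftfreq(t.size, dt)
amp = 2 * np.abs(np.fft.rfft(y)) / t.size  # single-sided amplitude spectrum

k = np.argmax(amp[1:]) + 1                 # strongest peak, skipping DC
print(freqs[k], amp[k])                    # peak near f0, amplitude near 1%
```

For this (assumed) per-point noise, a persistent 1\% signal stands well clear of the noise floor, while a 0.2--0.3\% signal would not, mirroring what Fig.~\ref{fig:ftzoom} shows for the real data.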
For the interpretation of J0240 as a magnetic propeller, it remains key to demonstrate that this system contains a rapidly spinning WD. Longer optical light curves may show this. Alternatively, X-ray or $UV$ timing would be worth obtaining (J0240 has not yet been detected at X-ray or $UV$ frequencies). The 33-s signal of AE Aqr has an amplitude $>$20\% in X-rays \citep{Patterson1980}.
\begin{figure}
\includegraphics[width=\columnwidth]{fts.ps}
\caption{Fourier transforms of the optical light curves of J0240. In the top 3 panels, Fourier transforms of the light curves taken on each of the 3 nights are shown individually, while the Fourier transform in the bottom panel is of all data combined.}
\label{fig:fts}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{ftzoom.ps}
\caption{The Fourier transform of all 3 nights of photometry combined, showing only the frequency range corresponding to periods between 60 and 10~s. Dashed horizontal lines indicate amplitudes of 0.2, 0.3, and 1\%.}
\label{fig:ftzoom}
\end{figure}
\subsection{The optical polarimetry}
There was no significant detection of polarization. Our measurement of time-averaged circular polarization is $0.10\% \pm 0.07\%$, consistent with a value of zero. For the time-averaged linear polarization, we find $1.3\% \pm 0.1\%$, consistent with an interstellar (or instrumental) origin.
Polarimetric observations of AE Aqr have yielded possible, marginal detections of circular polarization, but at levels of $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 0.1\%$ (e.g.\ \citealt{Cropper1986}; \citealt{Beskrovnaya1996}; \citealt{Butters2009}).
\section{Summary}
\label{sec:summary}
We have presented radio L-band imaging, and optical high-speed photometry and photo-polarimetry of the recently discovered CV, J0240. Our aim was to examine the similarity of J0240 to the magnetic propeller system, AE Aqr. The high radio luminosity supports the suggestion that this system may be an AE Aqr-like object. However, the most important question, which we are unable to answer, is whether it contains a rapidly spinning WD.
The main results of this study are listed below.
\begin{enumerate}
\item J0240 is detected as a bright radio source. We measure a 1.284 GHz flux density of $0.60 \pm 0.02\,{\rm mJy}$, and an in-band spectral index of $-0.6 \pm 0.2$.
\item The radio luminosity of $(2.7 \pm 0.3) \times 10^{17}\,{\rm erg\,s^{-1}\,Hz^{-1}}$ is the second highest yet reported for a CV.
\item The system varies in the radio on a time scale of tens of minutes.
\item We fail to detect any coherent signal that can be attributed to a rapidly spinning WD in our 3 nights of optical high-speed photometric observations.
\item Although we are able to search for signals with periods down to 10~s, the sensitivity of our high-speed photometry to low amplitude signals is not sufficient to rule out a WD spin modulation similar to the 33-s signal seen in AE Aqr.
\item There was no detection of linear or circular polarization in the optical.
\end{enumerate}
\section*{Acknowledgements}
MLP acknowledges financial support from the National Research Foundation (NRF) and the Newton Fund. PAW acknowledges financial support from the University of Cape Town and the NRF. We thank John Thorstensen for helpful discussions of the system that this work focusses on. Sara Motta kindly looked at the archived \emph{Swift} data of this object, to confirm that there is no X-ray detection. The MeerKAT telescope is operated by the South African Radio Astronomy Observatory (SARAO), which is a facility of the NRF, an agency of the Department of Science and Innovation. We thank the SARAO staff involved in obtaining the MeerKAT observations. We made use of the computing facilities of the Inter-University Institute for Data Intensive Astronomy (IDIA) in this research.
\section*{Data Availability}
The data presented in this article are subject to the standard data access policies of the South African Radio Astronomy Observatory and the South African Astronomical Observatory.
\section{Introduction}
\label{sec:introduction}
The idea of quantum computing arose in 1982, in a talk by Richard Feynman about the difficulty of simulating quantum mechanical systems with classical computers~\cite{Feynman1982}. Feynman suggested simulating these systems using quantum computers, i.e., controlled quantum mechanical systems able to mimic them. Since then, quantum computing and quantum information theory have continued to advance, showing that universal quantum computers could be, for some applications, far more efficient than classical computers~\cite{Shor1994}. Quantum computing will potentially have a deep impact on a variety of fields, from quantum simulation in physics and chemistry~\cite{Chiesa2018}, to machine learning~\cite{Lloyd2014,Biamonte2017,Havlicek2019,Zoufal2019,Cong2019}, artificial intelligence~\cite{Tacchino2018} and cryptography~\cite{Ekert1991,Portmann2014,Fitzsimons2017}.
Current quantum computers are noisy, characterized by a small number of qubits (5--50) with non-uniform quality and highly constrained connectivity. Such devices may be able to perform tasks which surpass the capabilities of today's most powerful classical digital computers, but noise in quantum gates limits the size of quantum circuits that can be executed reliably.
\subsection{Quantum Compilation Problem}
\label{sec:problem}
The problem of \textit{quantum compilation}, i.e., device-aware implementation of quantum algorithms, is a challenging one. A good quantum compiler must translate an input program into the most efficient equivalent of itself~\cite{Corcoles2020}, getting the most out of the available hardware. In general, the quantum compilation problem is NP-hard~\cite{Botea2018,Soeken2019}. On noisy devices, quantum compilation comprises the following tasks: gate synthesis~\cite{Kliuchnikov2016}, which is the decomposition of an arbitrary unitary operation into a quantum circuit made of single-qubit and two-qubit gates from a universal gate set; compliance with the hardware architecture, starting from an initial mapping of the virtual qubits to the physical ones, and moving through subsequent mappings by means of a clever swapping strategy; and noise awareness.
Quality indicators of the compiled quantum algorithm are, for example, circuit depth, number of gates and fidelity of quantum states~\cite{MunozCoreas2019}.
As an example of compliance with the hardware architecture, consider the problem of compiling the circuit in \REF{Fig.}{fig:map_ex_circuit1} onto a device with a coupling map such as the one shown in \REF{Fig.}{fig:map_ex_coupling}. One could choose a trivial initial mapping of virtual qubits to the physical ones, such as the one depicted in red to the left of the circuit in \REF{Fig.}{fig:map_ex_circuit1}. However, with such a mapping, the CNOT between qubits 0 and 3 could not be directly executed on the device. A more suitable mapping is instead the one shown in blue to the left of the circuit in \REF{Fig.}{fig:map_ex_circuit2}, as it enables the execution of all CNOTs onto the device at the cost of inserting only one SWAP gate.
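The effect of the initial layout can be checked mechanically: count how many two-qubit gates already sit on a coupling-map edge. A plain-Python sketch, using an \textit{ibmq\_yorktown}-like bowtie coupling map and a hypothetical CNOT list standing in for the circuit of the figure (both are assumptions for illustration):

```python
# Undirected coupling map of an ibmq_yorktown-like device (assumed edges).
coupling = {frozenset(e) for e in [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)]}

# Hypothetical CNOTs (virtual qubit pairs), including one between qubits 0 and 3.
cnots = [(0, 1), (0, 3), (2, 3), (3, 4)]

def unexecutable(layout, gates):
    """CNOTs whose mapped qubits are not adjacent on the device."""
    return [g for g in gates
            if frozenset((layout[g[0]], layout[g[1]])) not in coupling]

trivial = {q: q for q in range(5)}          # virtual i -> physical i
better = {0: 0, 1: 1, 2: 3, 3: 2, 4: 4}     # hand-picked: virtual 3 on the hub

print(len(unexecutable(trivial, cnots)))    # -> 1 (the CNOT between 0 and 3)
print(len(unexecutable(better, cnots)))     # -> 0
```

A SWAP-counting cost of this kind is what layout-selection passes minimize, either exactly on small devices or heuristically on larger ones.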
\begin{figure}
\hspace*{\fill}%
\subcaptionbox{\label{fig:map_ex_coupling}}{\includegraphics[width=.2\linewidth]{images/yorktown.pdf}}\hfill%
\subcaptionbox{\label{fig:map_ex_circuit1}}{\includegraphics[width=.4\linewidth]{images/circuit1.pdf}}\hfill%
\subcaptionbox{\label{fig:map_ex_circuit2}}{\includegraphics[width=.4\linewidth]{images/circuit2.pdf}}\hspace*{\fill}%
\caption{Compiling the 5 qubit circuit shown in \textbf{(b)} onto a 5 qubit device, namely \textit{ibmq\_yorktown}, whose coupling map is depicted in \textbf{(a)}. The compiled circuit is shown in \textbf{(c)}.}
\label{fig:mapping_example}
\end{figure}
\subsection{Our Contributions}
\label{sec:contributions}
In this paper, we present novel deterministic algorithms for compiling recurrent quantum circuit patterns in polynomial time. In particular, such patterns appear in quantum circuits that are used to compute the ground state properties of molecular systems using the variational quantum eigensolver (VQE) method together with the RyRz heuristic wavefunction Ansatz~\cite{Kandala2017}. We implemented our algorithms in a Python software denoted as PADQC (PAttern-oriented Deterministic Quantum Compiler), which we integrated with Qiskit's SABRE swapping strategy~\cite{Li2019} and compilation routine~\cite{QiskitSDK_short}, from now on denoted as Qiskit(SABRE).
We benchmarked PADQC+Qiskit(SABRE) using different quantum circuits, assuming IBM Quantum hardware. We show that our integrated solution produces -- in general -- output programs that are comparable to those obtained with state-of-art compilers, such as t$|$ket$\rangle$~\cite{Sivarajah2020}, in terms of CNOT count and CNOT depth. In particular, our solution produces unmatched results on RyRz circuits.
The paper is organized as follows. In Section \ref{sec:related}, we discuss the state of the art in quantum compiling. In Section \ref{sec:algorithms}, we present our algorithms. In Section \ref{sec:results}, we illustrate the experimental evaluation of PADQC+Qiskit(SABRE). Finally, in Section \ref{sec:conclusions}, we conclude the paper with an outline of future work.
\section{Related work}
\label{sec:related}
Recently, some noteworthy quantum compiling techniques have been proposed. Here we survey those that have been implemented in actual compilers and benchmarked.
The approach proposed by Zulehner \textit{et al.}~\cite{Zulehner2019} is to partition the circuit into layers, each layer including gates that can be executed in parallel. For each layer, a compliant CNOT mapping must be found, starting from an initial mapping obtained from the previous layer. Denoting the number of physical qubits as $m$ and the number of logical qubits as $n$, in the worst case there are $m!/(m-n)!$ possible mappings. Such a huge search space cannot be explored exhaustively. The $A^*$ search algorithm is adopted to find the least expensive swap sequence. Moreover, a lookahead strategy is adopted to minimize the additional operations needed to switch between subsequent mappings. The proposed solution is efficient in terms of running time and output depth, but may not be scalable because of the exponential space complexity of the $A^*$ search algorithm~\cite{RussellNorvig2020}.
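The size of that search space is the number of injective placements of $n$ logical qubits onto $m$ physical ones, $m!/(m-n)!$. For illustrative device and circuit sizes:

```python
import math

m, n = 20, 5                # physical and logical qubits (illustrative)
mappings = math.perm(m, n)  # m! / (m - n)! partial permutations
print(mappings)             # -> 1860480 candidate mappings for one layer
```

Even at these modest sizes, per-layer exhaustive enumeration is already in the millions, which is why heuristic search is needed.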
SABRE by Li \textit{et al.}~\cite{Li2019} is a SWAP-based bidirectional heuristic search method. It requires a preprocessing phase consisting of the following steps. First of all, the distance matrix over the coupling map is computed. Then, the directed acyclic graph that represents the two-qubit gate dependencies of the circuit is generated. A data structure denoted as $F$ (front layer) is initialized as the set of two-qubit gates without unexecuted predecessors. The preprocessing phase ends with the generation of a random initial mapping. Then, the compiling phase consists of iterating the following steps until $F$ is empty. First, all executable gates are removed from $F$ and their successors are added to $F$. Second, for those gates in $F$ that cannot be executed, the best SWAP sequence is selected using a heuristic cost function based on the distance matrix. Experimental results show that SABRE can generate hardware-compliant circuits with lower or comparable overhead, with respect to the approach proposed by Zulehner \textit{et al.}~\cite{Zulehner2019}.
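The routing loop just described can be sketched in a few dozen lines of pure Python: BFS distances on the coupling map, and a greedy choice of the SWAP that most reduces the distance of the blocked gate. This is an illustrative reconstruction under simplifying assumptions (the front layer is collapsed to the single next pending gate, and there is no lookahead or decay term), not the authors' implementation:

```python
from collections import deque

def all_dists(edges, n):
    """All-pairs shortest-path distances on the coupling map via BFS."""
    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b); adj[b].append(a)
    dist = [[None] * n for _ in range(n)]
    for s in range(n):
        dist[s][s] = 0; q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[s][v] is None:
                    dist[s][v] = dist[s][u] + 1; q.append(v)
    return dist

def route(gates, edges, n, layout):
    """Greedy SABRE-style routing; emits ('cx'|'swap', phys, phys) tuples."""
    dist = all_dists(edges, n)
    pos = dict(layout)                       # virtual -> physical
    out, i = [], 0
    while i < len(gates):
        a, b = gates[i]
        if dist[pos[a]][pos[b]] == 1:        # executable: adjacent on device
            out.append(("cx", pos[a], pos[b])); i += 1
            continue
        # Blocked: try each SWAP on an edge touching the gate's qubits and
        # keep the one minimizing the remaining distance (the heuristic).
        best = None
        for x, y in edges:
            if x not in (pos[a], pos[b]) and y not in (pos[a], pos[b]):
                continue
            trial = dict(pos)
            inv = {p: v for v, p in trial.items()}
            vx, vy = inv.get(x), inv.get(y)  # virtual qubits on x and y, if any
            if vx is not None: trial[vx] = y
            if vy is not None: trial[vy] = x
            h = dist[trial[a]][trial[b]]
            if best is None or h < best[0]:
                best = (h, (x, y), trial)
        out.append(("swap",) + best[1])
        pos = best[2]
    return out

# A distance-3 CNOT on a 4-qubit line needs two SWAPs before it can run.
demo = route([(0, 3)], [(0, 1), (1, 2), (2, 3)], 4, {i: i for i in range(4)})
print(demo)  # two SWAPs followed by the CNOT on adjacent physical qubits
```

Full SABRE additionally scores candidate SWAPs against an extended gate set and a decay factor, and runs the pass forwards and backwards to refine the initial mapping.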
In Qiskit (version 0.20)~\cite{QiskitSDK_short}, the compiling process is implemented by a customizable Pass Manager that schedules a number of different passes: layout selection, unrolling (i.e., gate synthesis), swap, gate optimization, and more. Four swap strategies are currently available: Basic, Stochastic, Lookahead and SABRE. The Stochastic strategy uses a randomized algorithm to map the input circuit to the selected coupling map, which means that a single run is not guaranteed to produce the best result.
Currently, the most advanced quantum compiler is t$|$ket$\rangle$~\cite{Sivarajah2020}, which is written in C++. The compiling process proceeds in two phases: an architecture-independent optimisation phase, which aims to reduce the size and complexity of the circuit; and an architecture-dependent phase, which prepares the circuit for execution on the target machine. The architecture-independent optimisation phase consists of peephole optimizations (targeting small circuit patterns) and macroscopic optimizations (aiming to identify high-level macroscopic structures in the circuit).
The end product of this process is a circuit that can be scheduled for execution by the runtime environment, or simply saved for later.
In Section \ref{sec:results}, we compare our PADQC+Qiskit(SABRE) integration with pure Qiskit(SABRE) and t$|$ket$\rangle$.
\section{Algorithms}
\label{sec:algorithms}
Most compiling approaches aim at providing a general-purpose compiler, able to cope with any circuit without making assumptions about its structure or characteristics. This kind of solution, although effective in many cases, may not be as efficient when facing circuits characterized by well-defined recurring sequences, i.e., \textit{patterns}, of two-qubit operators. This is particularly true if those patterns repeat many times in a circuit and are not compliant with the quantum device connectivity.
This is the case of RyRz circuits used to compute the ground state properties of molecular systems with the variational quantum eigensolver (VQE) method.
These circuits were introduced for the first time in~\cite{Kandala2017} as a heuristic hardware-efficient wavefunction Ansatz for the calculation of the electronic structure properties of small molecular systems, such as hydrogen (H$_{2}$), lithium hydride (LiH), and beryllium hydride (BeH$_2$), on a quantum computer.
In contrast to other quantum circuits inspired by classical wavefunction expansion techniques (e.g., the coupled cluster expansion~\cite{Peruzzo2014,Barkoutsos2018}), the nature of this circuit is motivated by two requirements: producing a sufficiently entangled wavefunction for the many-electron system, and optimally fitting the connectivity of the hardware at disposal. In most cases, the RyRz circuit offers a well-balanced compromise between these two requirements.
These circuits, when implemented with full entanglement (\REF{Fig.}{fig:ryrz_circuits}), are characterized by repeated sequences of a pattern that we denote as \textit{inverted CNOT cascade}.
\begin{figure}
\centering
\begin{minipage}{10cm}
\centering
\includegraphics[width=10cm]{images/ryrz.pdf}
\subcaption{}
\end{minipage}
\caption{RyRz circuit example.}
\label{fig:ryrz_circuits}
\end{figure}
In Sections~\ref{sec:inverted_cascade} and \ref{sec:nn_cnot}, we illustrate these patterns and how their features can be exploited to lay out efficient compiling algorithms. From now on, it will be assumed that the connectivity of the quantum device on which the circuit has to be compiled is similar to those featured by IBM Quantum devices~\cite{new_backends} (\REF{Fig.}{fig:coupling_maps}). \REF{Table}{tab:algos} presents an overview of the designed algorithms and their time complexity, which is computed by taking into account the worst case scenario and the running time of the subroutines.
The compilation process starts with \REF{Algorithm}{alg:patterns}, which searches for patterns of interest and transforms them so that they are more easily mappable to the coupling map. Then, \REF{Algorithm}{alg:gate_cancellation} optimizes the circuit, removing double CNOTs and double \textit{H} gates that may result from the previous transformations. Finally, \REF{Algorithm}{alg:chain} finds a suitable initial mapping for the circuit.
\begin{table}
\centering
\resizebox{15cm}{!}{
{\begin{tabular}{l|c|c}
\textbf{Algorithm} & \textbf{Subroutines} & \textbf{Time Complexity} \\\hline
\multirow{2}{*}{\textbf{1} \textsc{patterns}} & \textsc{CheckCascade} & \multirow{2}{*}{$O(g)$}\\
&\textsc{CheckInverseCascade}\\\hline
\textbf{2} \textsc{CheckCascade} & & $O(m)$\\\hline
\textbf{3} \textsc{CnotCancellation} & & $O(lm^2)$\\\hline
\textbf{4} \textsc{GateCancellation} & & $O(lm^2)$\\\hline
\multirow{2}{*}{\textbf{5} \textsc{Chain}} & \textsc{CheckForIsolated} & \multirow{2}{*}{$O(n)$}\\
&\textsc{ExpandChain}\\\hline
\textbf{6} \textsc{CheckForIsolated} & & $O(1)$\\\hline
\textbf{7} \textsc{ExpandChain} & & $O(1)$\\
\end{tabular}}}
\caption{Overview of proposed algorithms and their time complexity. Notation: $n$ is the number of qubits of the device, $m$ is the number of qubits used by the compiled circuit, $g$ is the number of gates in the circuit and $l$ the number of layers.}
\label{tab:algos}
\end{table}
\begin{figure}
\centering
\begin{tabular}{ cc }
\begin{minipage}{5cm}
\centering
\includegraphics[width=5cm]{images/tokyo.pdf}
\subcaption{}
\label{fig:coupling_maps_a}
\end{minipage} &
\begin{minipage}{5cm}
\centering
\includegraphics[width=5cm]{images/almaden.pdf}
\subcaption{}
\label{fig:coupling_maps_b}
\end{minipage}
\end{tabular}
\caption{\textbf{(a)} 20 qubits \textit{ibmq\_tokyo} and \textbf{(b)} \textit{ibmq\_almaden}~\cite{new_backends}.}
\label{fig:coupling_maps}
\end{figure}
\subsection{CNOT cascades}
\label{sec:cnot_cascade}
We are interested in the so called \textit{CNOT cascade}, shown in \REF{Fig.}{fig:cnot_cascade_decomposition_a}. This pattern plays a prominent role in several quantum algorithms such as the one used to produce GHZ states~\cite{GHZ1989,Deffner2017} as shown in \REF{Fig.}{fig:ghz}.
The coupling maps in \REF{Fig.}{fig:coupling_maps} prevent placing all CNOT gates as in the ideal GHZ circuit, i.e., building a \textit{CNOT cascade} where $n - 1$ qubits control the \textit{n}th qubit. It is nevertheless possible to turn the ideal GHZ circuit into an equivalent one characterized by a unique sequence of CNOT gates, with only a slight change to the technique discussed in a previous work~\cite{Ferrari2018}.
However, this technique works only if the aim is to produce a GHZ state starting from the $|0\rangle^{\otimes n}$ state. This paper focuses on a more general case, where a CNOT cascade could appear at any point in the circuit and no assumption can be made on the state of the system.
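The equivalence between the cascade-style GHZ construction and a nearest-neighbor one can be checked with a small statevector simulation. The sketch below is illustrative only (plain Python, big-endian qubit ordering is our own convention, not PADQC code): both gate sequences map $|000\rangle$ to the same GHZ state.

```python
from math import sqrt

def apply_h(state, q, n):
    """Apply a Hadamard to qubit q (big-endian) of an n-qubit statevector."""
    out = [0.0] * len(state)
    mask = 1 << (n - 1 - q)
    s = 1 / sqrt(2)
    for basis, amp in enumerate(state):
        if amp == 0.0:
            continue
        sign = -1.0 if basis & mask else 1.0
        out[basis & ~mask] += amp * s        # component with qubit q = 0
        out[basis | mask] += amp * s * sign  # component with qubit q = 1
    return out

def apply_cnot(state, c, t, n):
    """Apply CNOT with control c and target t."""
    out = [0.0] * len(state)
    cmask, tmask = 1 << (n - 1 - c), 1 << (n - 1 - t)
    for basis, amp in enumerate(state):
        out[basis ^ tmask if basis & cmask else basis] += amp
    return out

n = 3
init = [0.0] * 2 ** n
init[0] = 1.0  # |000>

# Cascade GHZ: H on q0, then q0 controls q1 and q2 directly.
casc = apply_cnot(apply_cnot(apply_h(init, 0, n), 0, 1, n), 0, 2, n)
# Nearest-neighbor GHZ: H on q0, then CNOT(0,1) followed by CNOT(1,2).
nn = apply_cnot(apply_cnot(apply_h(init, 0, n), 0, 1, n), 1, 2, n)

assert all(abs(a - b) < 1e-12 for a, b in zip(casc, nn))
```

Both sequences produce $(|000\rangle + |111\rangle)/\sqrt{2}$, yet only the second is executable on a linear chain of qubits without SWAPs.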
\begin{figure}
\centering
\begin{tabular}{ cc }
\begin{minipage}{2.6cm}
\centering
\includegraphics[width=2.6cm]{images/cnot_cascade.pdf}
\subcaption{}
\label{fig:cnot_cascade_decomposition_a}
\end{minipage} &
\begin{minipage}{7cm}
\centering
\includegraphics[width=7cm]{images/nearest_neighbor_cnot.pdf}
\subcaption{}
\label{fig:cnot_cascade_decomposition_b}
\end{minipage}\\
\end{tabular}
\caption{\textbf{(a)} CNOT cascade. \textbf{(b)} Decomposition of a CNOT cascade.}
\label{fig:cnot_cascade_decomposition}
\end{figure}
\begin{figure}[ht!]
\centering
\begin{tabular}{ cc }
\begin{minipage}{4.2cm}
\centering
\includegraphics[width=4.2cm]{images/ideal_ghz.pdf}
\end{minipage}
& \boldmath$\frac{\KET{0}^{\otimes n}+\KET{1}^{\otimes n}}{\sqrt{2}}$\\
\end{tabular}
\caption{GHZ circuit.}
\label{fig:ghz}
\end{figure}
A possible solution is to exploit the nearest neighbor decomposition for a uniformly controlled gate studied by Tucci~\cite{Tucci2004}. The only requirement for Tucci's decomposition is that the qubits be arranged in a linear chain.
To realize the decomposition in \REF{Fig.}{fig:cnot_cascade_decomposition_b}, \REF{Algorithm}{alg:patterns} analyzes the circuit layer by layer and, when a CNOT gate is encountered, \REF{Algorithm}{alg:check_pattern1} checks if that CNOT is the first of a CNOT cascade. Each encountered CNOT cascade is replaced by its nearest-neighbor decomposition and, at the end, \REF{Algorithm}{alg:patterns} returns a new transformed circuit. Since \REF{Algorithm}{alg:patterns} just loops over all gates in a circuit, its complexity is $O(g)$ with $g$ being equal to the number of gates of the circuit.
\begin{algorithm}
\footnotesize
\caption{\textsc{Patterns($circuit$)}\newline
\footnotesize
\textbf{Input}: a quantum circuit $circuit$\newline
\textbf{Output}: a new transformed circuit
}
\label{alg:patterns}
\begin{algorithmic}[1]
\STATE $new\_circuit \gets \emptyset$
\STATE to\_skip $\gets \emptyset$
\STATE new\_layers $\gets [\emptyset \textbf{\space for \space} 0 \leq i < |circuit\_layers|]$
\FOR{$i=0$ \TO $|circuit\_layers|$}
\IF{$i \neq 0$}
\FORALL{$g \in $ new\_layers[$i-1$]}
\STATE put \textit{g} into \textit{new\_circuit}
\ENDFOR
\ENDIF
\FORALL{$gate \in circuit\_layers[i]$}
\IF{$gate \not \in$ to\_skip}
\IF{\textit{gate} is CNOT}
\STATE transformed $\gets$ \textsc{CheckCascade($circuit\_layers$, $i$, new\_layers)}
\IF{transformed $\neq \emptyset$}
\STATE put transformed into to\_skip
\STATE \textbf{continue}
\ELSE
\STATE put $gate$ into to\_skip
\STATE put \textit{gate} into \textit{new\_circuit}
\ENDIF
\ELSE
\STATE put $gate$ into to\_skip
\STATE put \textit{gate} into \textit{new\_circuit}
\ENDIF
\ENDIF
\ENDFOR
\ENDFOR
\RETURN \textit{new\_circuit}
\end{algorithmic}
\end{algorithm}
\REF{Algorithm}{alg:check_pattern1} analyzes the circuit starting from a CNOT gate; here \textit{before} and \textit{after} are sets of gates that can be applied before and after the decomposition. If a CNOT cascade is \textit{found}, the decomposition is applied between the \textit{before} and \textit{after} gate sets; otherwise, an empty set is returned. In \REF{Algorithm}{alg:check_pattern1}, $gate_t$ is the target of $gate$ and, if $gate$ is a CNOT, $gate_c$ is the control qubit of $gate$. Since cascades are expected to be no longer than the number of qubits in the circuit, the algorithm stops if no pattern has been found after $MAX=2m$ layers have been checked. The time complexity of \REF{Algorithm}{alg:check_pattern1} is $O(m)$. Usually the number of gates $g$ is greater than $m$; thus, the complexity of \REF{Algorithm}{alg:patterns} is $O(g)$.
\begin{algorithm}
\footnotesize
\caption{\textsc{CheckCascade($layers$, $i$, $new\_layers$)}\newline
\scriptsize
\textbf{Input}: the list of layers in the circuit $layers$; the layer from where to start $i$; the list of $new\_layers$ to be added\newline
\textbf{Output}: the list of gates to skip, $\emptyset$ if no cascade was found
}
\label{alg:check_pattern1}
\begin{multicols}{2}
\begin{algorithmic}[1]
\STATE before $\gets \emptyset$, after $\gets \emptyset$, skip $\gets \emptyset$, off\_limits $\gets \emptyset$, control $\gets cnot_c$, target $\gets cnot_t$, used $\gets \emptyset$, found $\gets$ \textsf{false}, ctrls $\gets \emptyset$, put target into used, $i \gets 0$, $gate \gets circuit[0]$, $c \gets 0$, last $\gets i$
\WHILE{$i < MAX$}
\FORALL{$gate \in layers[i + c]$}
\IF{$gate \in$ skip \AND $gate_t =$ target}
\STATE $c \gets MAX$
\STATE \textbf{break}
\ELSE
\IF{$gate$ is a CNOT}
\IF{$gate_c =$ target}
\STATE $count \gets MAX$
\STATE \textbf{break}
\ELSIF{$gate_c \in$ off\_limits \OR $gate_t \in$ off\_limits}
\STATE put $gate_c$ into off\_limits
\STATE put $gate_c$ into used
\STATE put $gate_t$ into off\_limits
\STATE put $gate_t$ into used
\STATE \textbf{continue}
\ENDIF
\IF{$gate_t =$ target \AND $gate_c \not \in$ ctrls \AND $gate_c \not \in$ used}
\STATE put $gate_c$ into ctrls
\STATE put $gate_c$ into used
\STATE put $gate$ into skip
\ELSIF{$gate_t \neq$ target \AND $gate_c \neq$ target}
\IF{$gate_t \not \in$ used \AND $gate_c \not \in$ used \AND last $< c$}
\STATE last $\gets i + c$
\ELSE
\STATE put $gate_c$
\hspace{\algorithmicindent} into off\_limits
\STATE put $gate_c$
\hspace{\algorithmicindent} into used
\STATE put $gate_t$
\hspace{\algorithmicindent} into off\_limits
\STATE put $gate_t$
\hspace{\algorithmicindent} into used
\IF{last $> i + c - 1$}
\STATE last $\gets i + c - 1$
\ENDIF
\ENDIF
\ELSE
\STATE $count \gets MAX$
\STATE \textbf{break}
\ENDIF
\ELSE
\IF{$gate_t \in$ off\_limits}
\STATE \textbf{continue}
\ELSIF{$gate_t =$ target}
\STATE put $gate$ into after
\STATE put $gate$ into skip
\STATE $c \gets MAX$
\STATE \textbf{break}
\ELSIF{$gate_t \not \in$ used}
\STATE put $gate$ into before
\ELSE
\STATE put $gate$ into after
\ENDIF
\STATE put $gate$ into skip
\ENDIF
\ENDIF
\ENDFOR
\STATE $c \gets c + 1$
\ENDWHILE
\IF{$|$ctrls$|>1$}
\FORALL{g $\in$ before}
\STATE put g into $new\_layers[$last$]$
\ENDFOR
\FORALL{$x \in$ reversed(ctrls) $\cup$ target}
\STATE put $cnot_{x,x.next}$
\hspace{\algorithmicindent} into $new\_layers[$last$]$
\ENDFOR
\FORALL{$y \in$ ctrls}
\STATE put $cnot_{y,y.next}$
\hspace{\algorithmicindent} into $new\_layers[$last$]$
\ENDFOR
\FORALL{g $\in$ after}
\STATE put g into $new\_layers[$last$]$
\ENDFOR
\RETURN skip
\ENDIF
\RETURN $\emptyset$
\end{algorithmic}
\end{multicols}
\end{algorithm}
We recall that the aim is to compile circuits characterized by repeated patterns. Let us look at the circuit in \REF{Fig.}{fig:cnot_cascade_sequences_a}, where multiple CNOT cascades are repeated one after the other, acting on different target qubits.
\REF{Algorithm}{alg:patterns} outputs the circuit in \REF{Fig.}{fig:cnot_cascade_sequences_b}, which despite being correct, has an increased depth.
\begin{figure}[h!]
\centering
\begin{tabular}{ cc }
\begin{minipage}{5cm}
\centering
\includegraphics[width=5cm]{images/cnot_cascade_sequence.pdf}
\subcaption{}
\label{fig:cnot_cascade_sequences_a}
\end{minipage} &\\
\begin{minipage}{8.5cm}
\centering
\includegraphics[width=8.5cm]{images/cnot_cascade_sequence_decomposed.pdf}
\subcaption{}
\label{fig:cnot_cascade_sequences_b}
\end{minipage} &
\begin{minipage}{2.4cm}
\centering
\includegraphics[width=2.4cm]{images/cnot_cascade_sequence_decomposed_cancelled.pdf}
\subcaption{}
\label{fig:cnot_cascade_sequences_c}
\end{minipage}\\
\end{tabular}
\caption{\textbf{(a)} Circuit with multiple CNOT cascades. \textbf{(b)} Circuit after nearest-neighbor decomposition. \textbf{(c)} Circuit after CNOT cancellation.}
\label{fig:cnot_cascade_sequences}
\end{figure}
Fortunately, if two consecutive CNOT gates act on the same control and target qubit, they cancel each other, as a CNOT gate is its own inverse. \REF{Algorithm}{alg:cnot_cancellation} loops over the circuit to cancel double CNOT gates until no further cancellation can be done.
Since each \textit{layer} in the circuit has at most $m$ gates, in the worst case \REF{Algorithm}{alg:cnot_cancellation} needs $m$ iterations to cancel every CNOT pair. Its time complexity is $O(lm^2)$, where $l$ is the number of layers and usually $m \ll l$.
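The cancellation pass can be sketched in a few lines of plain Python over a layered gate list. This is an illustrative sketch, not the PADQC implementation; the encoding of gates as \textsf{('cx', control, target)} tuples is our own assumption:

```python
def cancel_double_cnots(layers):
    """Repeatedly remove CNOT pairs that appear in adjacent layers
    with the same control and target, until a fixed point is reached."""
    layers = [list(layer) for layer in layers]
    changed = True
    while changed:
        changed = False
        for i in range(len(layers) - 1):
            for gate in list(layers[i]):
                if gate[0] == "cx" and gate in layers[i + 1]:
                    layers[i].remove(gate)
                    layers[i + 1].remove(gate)
                    changed = True
    return [layer for layer in layers if layer]  # drop emptied layers

circ = [[("cx", 0, 1)], [("cx", 0, 1), ("h", 2)], [("cx", 1, 2)]]
print(cancel_double_cnots(circ))  # [[('h', 2)], [('cx', 1, 2)]]
```

The outer fixed-point loop mirrors the \textsf{changed} flag of the pseudocode: each sweep may expose new adjacent pairs, so the pass repeats until no cancellation fires.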
\begin{algorithm}
\footnotesize
\caption{\textsc{CnotCancellation($circuit$)}\newline
\footnotesize
\textbf{Input}: a quantum circuit $circuit$\newline
\textbf{Output}: a new circuit without double CNOTs
}
\label{alg:cnot_cancellation}
\begin{algorithmic}[1]
\STATE changed $\gets$ \textsf{true}
\WHILE{changed is \textsf{true}}
\STATE changed $\gets$ \textsf{false}
\FORALL{$layer \in circuit$}
\FORALL{$gate \in layer$}
\IF{$gate$ is cnot}
\IF{$cnot_{gate_c,gate_t} \in layer.next$}
\STATE remove gate from \textit{layer}
\STATE remove $cnot_{gate_c,gate_t}$ from \textit{layer.next}
\STATE changed $\gets$ \textsf{true}
\ENDIF
\ENDIF
\ENDFOR
\ENDFOR
\ENDWHILE
\end{algorithmic}
\end{algorithm}
The result of such an optimization is shown in \REF{Fig.}{fig:cnot_cascade_sequences_c}. This new optimized circuit has a much more acceptable depth, even lower than the original circuit shown in \REF{Fig.}{fig:cnot_cascade_sequences_a}.
\subsection{Inverted CNOT cascades}
\label{sec:inverted_cascade}
\begin{figure}
\centering
\includegraphics[width=5.5cm]{images/cnot_cascade_sequence_inverse.pdf}
\caption{Circuit with multiple inverse CNOT cascades.}
\label{fig:cnot_cascade_sequence_inverse}
\end{figure}
The pattern characterizing RyRz circuits, shown in \REF{Fig.}{fig:cnot_cascade_sequence_inverse}, is very similar to the one in \REF{Fig.}{fig:cnot_cascade_decomposition_a}. Indeed, this pattern can be turned into the other by inverting all of its CNOT gates, i.e., applying \textit{H} gates to both control and target qubits before and after each CNOT.
Of course, adding \textit{H} gates to invert a CNOT would alter the circuit identity; therefore, instead of a single \textit{H} gate, two \textit{H} gates are added so that they cancel each other's effects and leave the circuit identity untouched. The result of this operation is shown in \REF{Fig.}{fig:cnot_cascade_sequence_inversed_a}.
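The underlying identity, $(H \otimes H)\,\mathrm{CNOT}_{0 \to 1}\,(H \otimes H) = \mathrm{CNOT}_{1 \to 0}$, can be verified numerically. The following is a self-contained sketch (plain-Python matrices, basis order $|00\rangle, |01\rangle, |10\rangle, |11\rangle$), not PADQC code:

```python
from math import sqrt

def matmul(A, B):
    """Dense matrix product of two nested-list matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

s = 1 / sqrt(2)
H = [[s, s], [s, -s]]
# Kronecker product H (x) H as a 4x4 matrix.
HH = [[H[i // 2][j // 2] * H[i % 2][j % 2] for j in range(4)] for i in range(4)]

# CNOT with qubit 0 as control, qubit 1 as target.
CNOT_01 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]
# CNOT with qubit 1 as control, qubit 0 as target.
CNOT_10 = [[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]]

conjugated = matmul(HH, matmul(CNOT_01, HH))
assert all(abs(conjugated[i][j] - CNOT_10[i][j]) < 1e-12
           for i in range(4) for j in range(4))
```

Since $H$ is its own inverse, the extra pair of \textit{H} gates inserted next to an existing one cancels out, which is exactly what \REF{Algorithm}{alg:gate_cancellation} exploits.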
\begin{figure}
\centering
\begin{tabular}{ cc }
\multicolumn{2}{ c }{
\begin{minipage}{11cm}
\centering
\includegraphics[width=11cm]{images/cnot_cascade_sequence_inverse_inversed.pdf}
\subcaption{}
\label{fig:cnot_cascade_sequence_inversed_a}
\end{minipage}}\\
&\\
\begin{minipage}{8.5cm}
\centering
\includegraphics[width=8.5cm]{images/cnot_cascade_sequence_inverse_h.pdf}
\subcaption{}
\label{fig:cnot_cascade_sequence_inversed_b}
\end{minipage} &
\begin{minipage}{2cm}
\centering
\includegraphics[width=2cm]{images/cnot_cascade_sequence_inverse_cancelled.pdf}
\subcaption{}
\label{fig:cnot_cascade_sequence_inversed_c}
\end{minipage}
\end{tabular}
\caption{\textbf{(a)} Circuit with multiple inverse CNOT cascades after CNOT inversion. \textbf{(b)} Circuit with multiple inverse CNOT cascades after nearest-neighbor decomposition. \textbf{(c)} Circuit with multiple inverse CNOT cascades after gates cancellation.}
\label{fig:cnot_cascade_sequence_inversed}
\end{figure}
Using an algorithm similar to \REF{Algorithm}{alg:check_pattern1}, with the roles of control and target qubits exchanged, and a slight modification to \REF{Algorithm}{alg:patterns}, the circuit in \REF{Fig.}{fig:cnot_cascade_sequence_inversed_a} can be compiled into the one in \REF{Fig.}{fig:cnot_cascade_sequence_inversed_b}. \REF{Algorithm}{alg:gate_cancellation} optimizes the circuit, producing the one shown in \REF{Fig.}{fig:cnot_cascade_sequence_inversed_c}. As in the previous case, the depth of the recompiled circuit is very close to that of the original.
The time complexity of \REF{Algorithm}{alg:gate_cancellation} is $O(lm^2)$.
\begin{algorithm}
\footnotesize
\caption{\textsc{GateCancellation($circuit$)}\newline
\footnotesize
\textbf{Input}: $circuit$ a quantum circuit\newline
\textbf{Output}: a new circuit without double CNOTs and double \textit{H}
}
\label{alg:gate_cancellation}
\begin{algorithmic}[1]
\STATE changed $\gets$ \TRUE
\WHILE{changed is \textsf{true}}
\STATE changed $\gets$ \textsf{false}
\FORALL{$layer \in circuit$}
\FORALL{$gate \in layer$}
\IF{$gate$ is cnot}
\IF{$cnot_{gate_c,gate_t} \in layer.next$}
\STATE remove gate from \textit{layer}
\STATE remove $cnot_{gate_c,gate_t}$ from \textit{layer.next}
\STATE changed $\gets$ \textsf{true}
\ENDIF
\ELSIF{$gate$ is $H$}
\IF{$gate \in layer.next$}
\STATE remove $gate$ from \textit{layer}
\STATE remove $gate$ from \textit{layer.next}
\STATE changed $\gets$ \textsf{true}
\ENDIF
\ENDIF
\ENDFOR
\ENDFOR
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\subsection{Nearest neighbor CNOT sequences}
\label{sec:nn_cnot}
In the previous section we showed how to transform inverted CNOT cascades into nearest-neighbor CNOT sequences. Clearly such a sequence of gates, where every qubit $q_i$ controls $q_{i+1}$, cannot be directly executed on the coupling map in \REF{Fig.}{fig:coupling_maps_b}, as $q_{i}$ is not always connected with $q_{i+1}$.
A possible solution is to find a path in the coupling map such that every qubit $q_i$, with $1<i<n-1$ ($n$ being the number of qubits in the device), has a connection with its nearest neighbors $q_{i-1}$ and $q_{i+1}$. This is related to the problem of finding a \textit{Hamiltonian path}, i.e., a path that visits each vertex of a graph exactly once. The Hamiltonian path problem is a special case of the \textit{Hamiltonian cycle} problem and is known to be NP-complete; in fact, it can be cast as an instance of the famous \textit{traveling salesman} problem~\cite{Papadimitriou1998}.
Fortunately, one can take advantage of the features of the coupling map, such as its regular structure and the fact that every qubit is identified by a number ranging from $0$ to $n-1$. As each undirected link can be seen as a pair of ingoing and outgoing links, the path obtained resembles a \textit{chain} and will be denoted as such from now on.
\REF{Algorithm}{alg:chain} computes a chain in an undirected graph $\mathcal{G}$ starting from node $0$, where $\mathcal{C}$ is the chain initialized as empty, $\mathcal{S}$ is the set of nodes to be explored and $\mathcal{N}_x$ is the set of neighbors of node $x$. The algorithm loops over the nodes until the set of explored nodes $\mathcal{E}$ is equal to the set of nodes in $\mathcal{G}$. For every node added to $\mathcal{C}$, the \textsf{chain()} algorithm checks if the node's neighbors lead to a dead end, i.e., are isolated. If one neighbor is found to be isolated, it is added to the set of isolated nodes $\mathcal{I}$ and also to $\mathcal{E}$.
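The core of the chain search can be illustrated with a simplified greedy sketch that omits the backtracking and dead-end bookkeeping of the full algorithm (the preference for node $x+1$ reflects the numbered, regular structure of the coupling map; this is a didactic sketch, not Algorithm~5 itself):

```python
def greedy_chain(adjacency, start=0):
    """Greedily extend a chain from `start`: prefer node x+1 when it is an
    unexplored neighbor, otherwise take the lowest-numbered unexplored one."""
    chain, explored = [start], {start}
    x = start
    while True:
        candidates = [q for q in adjacency[x] if q not in explored]
        if not candidates:
            return chain
        x = x + 1 if x + 1 in candidates else min(candidates)
        chain.append(x)
        explored.add(x)

# A 2 x 3 grid coupling map, qubits numbered row-major:
# 0-1-2
# | | |
# 3-4-5
grid = {0: [1, 3], 1: [0, 2, 4], 2: [1, 5], 3: [0, 4], 4: [1, 3, 5], 5: [2, 4]}
print(greedy_chain(grid))  # [0, 1, 2, 5, 4, 3]
```

On grid-like maps this greedy strategy snakes through the rows; the full \REF{Algorithm}{alg:chain} adds backtracking and the isolated-node sets $\mathcal{I}$ and $\mathcal{E}$ to handle dead ends.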
\begin{figure}
\centering
\includegraphics[width=10cm]{images/nn_cnot.pdf}
\caption{Circuit with nearest neighbor CNOT sequences.}
\label{fig:nn_cnot_seq}
\end{figure}
\begin{algorithm}
\footnotesize
\caption{
\textsc{Chain($\mathcal{G}$, $n$)}\newline
\footnotesize
\textbf{Input}: undirected graph $\mathcal{G}$; number of qubits $n$ used by the circuit\newline
\textbf{Output}: a chain $C$ connecting at least $n$ nodes in $\mathcal{G}$
}
\label{alg:chain}
\begin{algorithmic}[1]
\STATE $\mathcal{C} \gets \emptyset$
\STATE $\mathcal{S} \gets$ all nodes of $\mathcal{G}$
\STATE put $0$ into $\mathcal{C}$
\STATE $\mathcal{S} \gets \mathcal{S}/0$
\STATE $\mathcal{E} \gets 0$
\STATE $\mathcal{I} \gets \emptyset$
\STATE $x = \mathcal{C}[|\mathcal{C}|-1]$
\STATE $last\_back\_step \gets -1$
\WHILE{$|\mathcal{E}|<|\mathcal{G}|$}
\STATE $\mathcal{N} \gets \mathcal{N}_x/\mathcal{E}$
\IF{$\mathcal{N} \neq \emptyset$}
\IF{$x+1 \in \mathcal{N}_x$}
\STATE $x \gets x+1$
\ELSE
\STATE $x \gets min(\mathcal{N}_x)$
\ENDIF
\STATE put $x$ into $\mathcal{E}$
\STATE put $x$ into $\mathcal{C}$
\STATE $\mathcal{S} \gets \mathcal{S}/x$
\IF{$|\mathcal{E}|<|\mathcal{G}|-1$}
\STATE $\mathcal{N} \gets \emptyset$
\FORALL{$q \in \mathcal{N}_x$}
\IF{$q \not\in \mathcal{E}$}
\STATE remove=\textbf{true}
\IF{$|\mathcal{N}_q|=1$ \AND $|\mathcal{E}|<|\mathcal{G}|-1$}
\STATE put $q$ into $\mathcal{E}$
\STATE $\mathcal{S} \gets \mathcal{S}/q$
\STATE put $q$ into $\mathcal{I}$
\STATE \textbf{continue}
\ENDIF
\FORALL{$r \in \mathcal{N}_q$}
\IF{$r \not\in \mathcal{E}$ \AND $r = x$}
\STATE remove=\textbf{false}
\ENDIF
\IF{remove=\textbf{true}}
\STATE put $q$ into $\mathcal{E}$
\STATE $\mathcal{S} \gets \mathcal{S}/q$
\STATE put $q$ into $\mathcal{I}$
\ENDIF
\ENDFOR
\ENDIF
\ENDFOR
\ENDIF
\ELSE
\IF{$last\_back\_step \neq \mathcal{C}[|\mathcal{C}|-2]$ \AND $|\mathcal{G}|-|\mathcal{E}| > |current - \mathcal{S}[0]|$}
\STATE \textbf{break}
\ENDIF
\STATE put $x$ into $\mathcal{I}$
\STATE $\mathcal{C} \gets \mathcal{C}/x$
\STATE $x \gets \mathcal{C}[|\mathcal{C}|-1]$
\STATE $last\_back\_step \gets x$
\ENDIF
\ENDWHILE
\IF{$|\mathcal{C}| \geq n$}
\RETURN $\mathcal{C}$
\ENDIF
\STATE \textsc{CheckForIsolated($\mathcal{G}$, $\mathcal{C}$, $\mathcal{E}$, $\mathcal{I}$)}
\STATE \textsc{ExpandChain($\mathcal{G}$, $\mathcal{C}$, $\mathcal{I}$, $n$)}
\RETURN $\mathcal{C}$
\end{algorithmic}
\end{algorithm}
After executing \REF{Algorithm}{alg:chain} on the coupling map in \REF{Fig.}{fig:coupling_maps_b}, the path obtained can be used to formulate an initial layout for the circuit in \REF{Fig.}{fig:nn_cnot_seq}, such that logical qubit $q_i$ corresponds to chain element $\mathcal{C}[i]$. \REF{Fig.}{fig:almaden_path} shows the path obtained in the coupling map highlighted in red.
Such an initial layout eliminates the need to use SWAP gates and produces a circuit with ideally no increase in depth.
\begin{figure}
\centering
\includegraphics[width=6cm]{images/almaden_path.pdf}
\caption{Qubit chain in \textit{ibmq\_almaden} highlighted in red.}
\label{fig:almaden_path}
\end{figure}
\begin{algorithm}
\footnotesize
\caption{
\textsc{CheckForIsolated($\mathcal{G}$, $\mathcal{C}$, $\mathcal{E}$, $\mathcal{I}$)}\newline
\footnotesize
\textbf{Input}: an undirected graph $\mathcal{G}$; $\mathcal{C}$ a chain of nodes in $\mathcal{G}$; $\mathcal{E}$ nodes already explored; $\mathcal{I}$ nodes left outside $\mathcal{C}$ during exploration}
\label{alg:isolated}
\begin{algorithmic}[1]
\FOR{$m=0$ \TO $|\mathcal{G}|-1$}
\IF{$m \not\in \mathcal{E}$ \AND $m \not\in \mathcal{I}$}
\FORALL{$i \in \mathcal{I}$}
\IF{$m \in \mathcal{N}_i$}
\STATE put $m$ into $\mathcal{I}$
\STATE put $m$ into $\mathcal{E}$
\STATE \textbf{break}
\ENDIF
\ENDFOR
\FORALL{$n \in \mathcal{N}_m$}
\IF{$n \in \mathcal{C}$}
\STATE put $m$ into $\mathcal{I}$
\STATE put $m$ into $\mathcal{E}$
\STATE \textbf{break}
\ENDIF
\ENDFOR
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\footnotesize
\caption{
\textsc{ExpandChain($\mathcal{G}$, $\mathcal{C}$, $\mathcal{I}$, $n$)}\newline
\footnotesize
\textbf{Input}: an undirected graph $\mathcal{G}$; $\mathcal{C}$ a chain of nodes in $\mathcal{G}$; $\mathcal{I}$ nodes left outside $\mathcal{C}$ during exploration; $n$ the number of qubits used by the circuit}
\label{alg:expand}
\begin{algorithmic}[1]
\STATE $r \gets (n-|\mathcal{C}|)$
\WHILE{$r>0$}
\FORALL{$m \in \mathcal{I}$}
\STATE $x \gets min(\mathcal{N}_m \cap \mathcal{C})$
\IF{$x \neq \emptyset$}
\STATE put $m$ into $\mathcal{C}$ after $x$
\STATE $\mathcal{I} \gets \mathcal{I}/m$
\STATE $r \gets r-1$
\STATE \textbf{break}
\ENDIF
\ENDFOR
\ENDWHILE
\end{algorithmic}
\end{algorithm}
Nodes $4$ and $15$ are left outside the chain, as it is already long enough to map the circuit in \REF{Fig.}{fig:nn_cnot_seq}, and adding these two extra qubits would require SWAP gates during compilation. For a larger circuit, those nodes would be inserted in the chain at a suitable position (node $4$ between $3$ and $8$, node $15$ between $16$ and $17$). Since it cycles through all $n$ nodes in the map, the time complexity of \REF{Algorithm}{alg:chain} is $O(n)$.
\section{Experimental results}
\label{sec:results}
We implemented the algorithms presented in Section \ref{sec:algorithms} in a quantum compiler written in Python, denoted as PADQC\footnote{Source code: \url{https://github.com/qis-unipr/padqc}}.
In the PADQC framework, quantum circuits are represented by \textsf{QCircuit} objects, which are based on the well-known formalism of Directed Acyclic Graphs (DAGs). In a DAG, nodes represent quantum gates and the edges connecting them correspond to qubits and bits. A \textsf{QCircuit} can then be easily converted into Open QASM~\cite{Cross2017open} and vice versa, allowing PADQC to interact with Qiskit.
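As a toy illustration of the interchange format (not PADQC's actual converter), a few lines of Python suffice to pull the CNOT structure out of an OpenQASM 2.0 snippet:

```python
import re

qasm = """OPENQASM 2.0;
include "qelib1.inc";
qreg q[3];
h q[0];
cx q[0],q[1];
cx q[1],q[2];
"""

# Extract the control/target indices of every two-qubit cx instruction.
pairs = re.findall(r"cx q\[(\d+)\],q\[(\d+)\];", qasm)
print(pairs)  # [('0', '1'), ('1', '2')]
```

Because both PADQC and Qiskit speak QASM, a circuit transformed by PADQC can be handed to Qiskit's routing passes without any custom glue.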
\begin{table}[h!]
\Huge
\centering
{\resizebox{\textwidth}{!}{
{\begin{tabular}{ | c | c | c | c | c | c | c | c | c | }
\cline{3-9}
\multicolumn{2}{ c }{}& \multicolumn{7}{ |c| }{CNOT Gate Count}\\
\cline{3-9}
\multicolumn{2}{c}{} & \multicolumn{1}{|c}{} & \multicolumn{3}{|c|}{\textit{ibmq\_tokyo}} & \multicolumn{3}{|c|}{\textit{ibmq\_almaden}}\\\hline
Circuit Name & n & Initial & Qiskit(SABRE) & PADQC+Qiskit(SABRE) & t$|$ket$\rangle$ & Qiskit(SABRE) & PADQC+Qiskit(SABRE) & t$|$ket$\rangle$\\ \hline
H2\_RYRZ & 4 & 30 & 30 & 21 & 30 & 60 & 21 & 70\\
LiH\_RYRZ & 12 & 330 & 788 & 101 & 898 & 1256 & 101 & 1260\\
H2O\_RYRZ & 14 & 455 & 1242 & 121 & 1301 & 1763 & 121 & 1738\\
Random\_20q\_RYRZ & 20 & 950 & 2735 & 182 & 3053 & 3976 & 201 & 4146\\
\hline
\end{tabular}}}}
\caption{Circuit CNOT gate count of the compiled quantum chemistry circuits, considering the \textit{ibmq\_tokyo} and \textit{ibmq\_almaden} architectures.}
\label{tab:cnots}
\end{table}
\begin{table}[h!]
\Huge
\centering
{\resizebox{\textwidth}{!}{
{\begin{tabular}{ | c | c | c | c | c | c | c | c | c | }
\cline{3-9}
\multicolumn{2}{ c }{}& \multicolumn{7}{ |c| }{CNOT Depth}\\
\cline{3-9}
\multicolumn{2}{c}{} & \multicolumn{1}{|c}{} & \multicolumn{3}{|c|}{\textit{ibmq\_tokyo}} & \multicolumn{3}{|c|}{\textit{ibmq\_almaden}}\\\hline
Circuit Name & n & Initial & Qiskit(SABRE) & PADQC+Qiskit(SABRE) & t$|$ket$\rangle$ & Qiskit(SABRE) & PADQC+Qiskit(SABRE) & t$|$ket$\rangle$\\ \hline
H2\_RYRZ & 4 & 21 & 21 & 21 & 21 & 60 & 21 & 70\\
LiH\_RYRZ & 12 & 69 & 488 & 101 & 575 & 648 & 101 & 703\\
H2O\_RYRZ & 14 & 81 & 675 & 121 & 813 & 846 & 121 & 875\\
Random\_20q\_RYRZ & 20 & 117 & 1361 & 182 & 1648 & 1562 & 201 & 1629\\
\hline
\end{tabular}}}}
\caption{Circuit CNOT depth of the compiled quantum chemistry circuits, considering the \textit{ibmq\_tokyo} and \textit{ibmq\_almaden} architectures.}
\label{tab:cnots_depth}
\end{table}
Using an Intel Xeon E5-2683v4 2.1GHz with 50 GB of RAM, we benchmarked the quantum compiler with different quantum circuits. In particular, we considered a few quantum chemistry circuits for the RyRz heuristic~\cite{Kandala2017} wavefunction Ans\"atz (\REF{Fig.}{fig:ryrz_circuits}), and a heterogeneous set of quantum circuits that have been used in most reference works~\cite{Li2019,Zulehner2019}. We assumed IBM Quantum hardware, namely the \textit{ibmq\_tokyo} and \textit{ibmq\_almaden} architectures. They both have 20 qubits, but their coupling maps are quite different (Fig. \ref{fig:coupling_maps}).
We compared Qiskit(SABRE) with t$|$ket$\rangle$ and Qiskit(SABRE) preceded by PADQC transformations and mapping. Given that on NISQ devices multi-qubit operations tend to have error rates and execution times an order of magnitude worse than single-qubit ones~\cite{Arute2019}, the performance indicators used are the CNOT gate count and the CNOT depth of the circuit, i.e., the depth of the circuit taking into account only CNOT gates.
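CNOT depth can be computed in a single pass over the gate list by tracking, per qubit, the number of CNOT layers seen so far (a sketch; the \textsf{('cx', c, t)} gate encoding is an assumption for illustration):

```python
def cnot_depth(gates):
    """Depth of a circuit counting only two-qubit cx gates.
    gates: sequence of (name, qubit, ...) tuples in execution order."""
    depth = {}
    for gate in gates:
        if gate[0] != "cx":
            continue  # single-qubit gates do not contribute to CNOT depth
        c, t = gate[1], gate[2]
        d = max(depth.get(c, 0), depth.get(t, 0)) + 1
        depth[c] = depth[t] = d
    return max(depth.values(), default=0)

circuit = [("h", 0), ("cx", 0, 1), ("cx", 2, 3), ("cx", 1, 2)]
print(cnot_depth(circuit))  # 2: cx(0,1) and cx(2,3) fit in the same layer
```

The same per-qubit bookkeeping, without the \textsf{cx} filter, yields the ordinary circuit depth.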
As shown in \REF{Table}{tab:cnots} and \REF{Table}{tab:cnots_depth}, the combination of PADQC and Qiskit(SABRE) outperforms both Qiskit(SABRE) alone and t$|$ket$\rangle$ on all RyRz circuits tested, regardless of the considered architecture.
We also tested PADQC on a larger set of circuits. We compiled a total of 190 circuits\footnote{Benchmark circuits QASM files: \url{https://github.com/qis-unipr/padqc/tree/master/benchmarks_qasm}}, taken from RevLib~\cite{RevLib}, Quipper~\cite{Quipper} and Scaffold~\cite{SacffCC}, plus the quantum chemistry ones. The results are summarized in \MULTIREF{Fig.}{fig:cnots}{fig:cnots_depth}, where the initial value of the considered metric is on the x-axis and the value for the compiled circuit on the y-axis.
\begin{figure*}
\centering
\begin{tabular}{ c }
\begin{minipage}{14cm}
\centering
\includegraphics[width=14cm]{images/tokyo_tket_results_cnots_zoom.pdf}
\label{fig:tokyo_cnots}
\end{minipage}
\\
\begin{minipage}{14cm}
\centering
\includegraphics[width=14cm]{images/almaden_tket_results_cnots_zoom.pdf}
\label{fig:almaden_cnots}
\end{minipage}\\
\end{tabular}
\caption{Comparing the CNOT gate count of the circuits compiled on \textit{ibmq\_tokyo} and \textit{ibmq\_almaden}.}
\label{fig:cnots}
\end{figure*}
\begin{figure*}
\centering
\begin{tabular}{ c }
\begin{minipage}{14cm}
\centering
\includegraphics[width=14cm]{images/tokyo_tket_results_cnots_depth_zoom.pdf}
\label{fig:tokyo_cnots_depth}
\end{minipage}
\\
\begin{minipage}{14cm}
\centering
\includegraphics[width=14cm]{images/almaden_tket_results_cnots_depth_zoom.pdf}
\label{fig:almaden_cnots_depth}
\end{minipage}\\
\end{tabular}
\caption{Comparing the CNOT depth of the circuits compiled on \textit{ibmq\_tokyo} and \textit{ibmq\_almaden}.}
\label{fig:cnots_depth}
\end{figure*}
When compiling on the \textit{ibmq\_tokyo} architecture, we can see, with the help of trend lines calculated using the least squares method, that the combination of PADQC and Qiskit(SABRE) not only boosts Qiskit(SABRE) performance but can also compete with t$|$ket$\rangle$. As we switch to the less connected \textit{ibmq\_almaden} architecture, PADQC can still boost Qiskit(SABRE) performance while being up to par with t$|$ket$\rangle$.
\section{Conclusion}
\label{sec:conclusions}
In this paper, we proposed novel deterministic algorithms for compiling recurrent quantum circuit patterns in polynomial time. Starting from this set of algorithms, we have implemented PADQC, which has two main features. First, it identifies CNOT cascades and exploits useful circuit identities to transform them into CNOT nearest-neighbor sequences. Second, it finds an initial mapping that can comply with circuits characterized by recurrent CNOT nearest-neighbor sequences. Finally, we integrated PADQC with Qiskit’s SABRE swapping strategy and compilation routine.
We illustrated the results of the experimental evaluation of our integrated solution using different quantum programs and assuming IBM Quantum hardware. Among others, we compiled quantum circuits that are used to compute the ground state properties of molecular systems using the VQE method together with the RyRz heuristic wavefunction Ans\"atz.
PADQC+Qiskit(SABRE), in general, produces output programs that are on par with those produced by state-of-the-art tools, in terms of CNOT count and CNOT depth. In particular, our solution produces unmatched results on RyRz circuits.
In future work, we plan to expand PADQC to other patterns of interest and look for further circuit identities, such as the one found between nearest-neighbor CNOT sequences and CNOT cascades. These identities proved to be crucial in the quest for optimal quantum compilation.
Moreover, we will investigate the possibility of integrating PADQC pattern transformation and mapping algorithms with alternative swapping strategies, which could produce an appreciable performance improvement.
While effective, CNOT count and CNOT depth are only pseudo-objectives. What really matters is the fidelity of a computation when run on actual quantum hardware.
We believe that the proposed compiler could be enhanced with noise related information (such as gate error rates and decoherence times) with the aim of finding a good trade-off between circuit depth and computation fidelity.
\section*{Acknowledgements}
This research benefited from the HPC (High Performance Computing) facility of the University of Parma, Italy.
\bibliographystyle{unsrt}
\section{Introduction}
\label{introduction}
High-frequency microstructure data has received growing attention in both academia and industry with the computerisation of financial exchanges and the increased capacity of data storage. The detailed records of order flow and price dynamics provide a granular description of short-term supply and demand, allowing the dynamics of order books to be taken into account during the modelling process.
Propelled by the publication of the benchmark dataset \citep{ntakaris2018benchmark} of high-frequency limit order book (LOB) data, there has been a growing interest in research studying LOB data. Recent works by \cite{tsantekidis2017forecasting, sirignano2019universal, zhang2019deeplob, briola2020deep} demonstrate that strong predictive performance can be obtained from modelling high-frequency LOB data, with the resulting predictions finding applications in market-making and trade execution, which involve short holding periods.
In this work, we introduce Market by Order (MBO) data for predictive modelling with deep learning algorithms. MBO data provides full resolution of the underlying market microstructure -- with both LOB data and trade sequences being derived from it.
Despite MBO data being the original raw data source, the current literature on high-frequency predictive modelling focuses predominantly on LOBs and, to the best of our knowledge, MBO data has not been used for direct predictive modelling. We show that using MBO data as an additional source of information alongside LOB data improves predictive performance, and that MBO data can inspire a range of meaningful features related to individual order positions.
A LOB is a record of all outstanding limit orders (passive orders) for an instrument at a given time point, sorted into different levels based on submitted prices. At each price level, a LOB only shows the total available quantity. However, any given price level actually consists of many individual orders with different sizes. MBO data is essentially a message-based data feed that allows us to infer the queue position of each individual order by reconstructing the order book step by step. A detailed description of MBO data and how it relates to LOB data is presented in Section~\ref{mbo}.
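The relationship between the two representations can be made concrete with a small replay sketch: per-order state (including queue priority) is kept from the message stream, and the familiar price-level LOB view is obtained by aggregation. This is an illustrative toy (real feeds carry richer message types such as modifies and executions):

```python
from collections import OrderedDict

def rebuild_book(messages):
    """Replay MBO messages into per-order and per-level views.
    messages: ('add', oid, side, price, size) or ('cancel', oid)."""
    orders = OrderedDict()  # oid -> (side, price, size); insertion order = queue priority
    for msg in messages:
        if msg[0] == "add":
            _, oid, side, price, size = msg
            orders[oid] = (side, price, size)
        elif msg[0] == "cancel":
            orders.pop(msg[1], None)
    # Aggregate individual orders into LOB price levels.
    levels = {}
    for side, price, size in orders.values():
        levels[(side, price)] = levels.get((side, price), 0) + size
    return orders, levels

msgs = [("add", 1, "bid", 99.5, 100), ("add", 2, "bid", 99.5, 50),
        ("add", 3, "ask", 100.0, 70), ("cancel", 1)]
orders, levels = rebuild_book(msgs)
print(levels)        # {('bid', 99.5): 50, ('ask', 100.0): 70}
print(list(orders))  # [2, 3]: order 2 is now first in the 99.5 bid queue
```

Note that the LOB view (\textsf{levels}) discards exactly the queue-position information that MBO retains: after the cancellation, only MBO reveals that order 2 moved to the front of its level.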
We propose a deep learning model based on MBO data, and in particular, a classification framework is adopted to predict stock price movements. In doing so, we provide a complete analysis of MBO data by carefully introducing the data structure and the components of the message-based data feed. A specific data normalisation scheme is introduced to model the level information contained in LOBs and to allow model training with multiple instruments. Our dataset consists of MBO data over a period of one year for five highly liquid instruments from the London Stock Exchange. Our testing set contains millions of samples to verify the robustness and generalisation of the results.
In our proposed models, we apply deep learning architectures including LSTMs \citep{hochreiter1997long} and Attention mechanisms \citep{bahdanau2014neural} to model the dynamics of MBO data for market predictions. Our experiments show consistent and robust results from MBO data that are comparable to models that utilise derived LOB data. We observe that predictive models based on MBO data are complementary to LOB models and we propose an ensemble approach which yields superior results. As such, we observe that MBO data adds diversification to the LOB model and improves prediction performance.
The remainder of the paper is organised as follows. After a short literature review in Section~\ref{literature}, we proceed in Section~\ref{mbo} by introducing MBO data, including data preprocessing, normalisation and labelling. Section~\ref{methodology} presents deep learning architectures. We next describe our experiments and present the results of predicting market movements from MBO data in Section~\ref{experiment}. We conclude our findings and discuss promising future extensions in Section~\ref{conclusion}.
\section{Literature}
\label{literature}
Research on high-frequency microstructure data remains largely focused on modelling the limit order book (LOB); classical works include \cite{o1995market, harris2003trading} and a review is presented in \cite{gould2013limit}. However, there is limited work on MBO data in the current literature. NASDAQ \citep{nasdaq} and CME Group \citep{cme} provide preliminary descriptions of MBO data when introducing their exchange matching engines, and the works of \cite{byrd2019abides, belcak2020fast} use MBO data for market simulation to model trading scenarios or to study latency effects. To the best of our knowledge, this paper is the first to use MBO data to predict market movements, filling this literature gap with deep learning models.
Deep Learning \citep{goodfellow2016deep} algorithms have been heavily used for predicting high-frequency microstructure data \citep{tsantekidis2017forecasting, sirignano2019universal, briola2020deep, wallbridge2020transformers}.
In particular, \cite{zhang2018bdlob, zhang2019deeplob, zhang2019extending} apply convolutional neural networks and LSTMs to model the dynamics of LOB and demonstrate accuracy improvements over linear models. Unlike traditional time-series models \citep{mills1991time, hamilton2020time} or stochastic models \citep{islam2020comparison} that assume a parametric process for the underlying time-series, deep learning methods are able to capture arbitrary nonlinear relationships without placing any specific assumptions on the input data. Our experiments also suggest that deep networks deliver better results than linear methods for modelling MBO data.
We investigate deep learning models, including LSTMs \citep{hochreiter1997long} and Attention \citep{bahdanau2014neural}, to model MBO data. Attention is used to solve the problem of diminishing performance with long input sequences by utilising information at each hidden state of a recurrent network \citep{bahdanau2014neural, dai2019transformer}, and it can be used for constructing multi-horizon forecasting models \citep{lim2020time}. Our experiments suggest that networks with a recurrent nature lead to good predictive results compared to state-of-the-art networks trained with LOB data, suggesting the potential benefits of using MBO data as an additional data source.
\section{Market by Order Data}
\label{mbo}
\subsection{Descriptions of Market by Order Data}
In general, exchanges provide high-frequency microstructure data in three tiers, namely L1, L2 and L3, offering increasingly granular information and capabilities:
\begin{itemize}
\item Level 1 (L1): L1 shows the price and quantity of the last executed trade and displays real time best bid and ask of an order book, also known as quote data;
\item Level 2 (L2): L2 data is more granular than L1, showing bids and asks at deeper levels of an order book, and it is commonly referred to as LOB data;
\item Level 3 (L3): L3 is essentially the MBO data introduced in this work and it provides even more information than L2 as it shows non-aggregated bids and asks placed by individual traders.
\end{itemize}
In this work, we focus on MBO data, a message-based data feed that allows us to observe the individual actions of market participants. Each message is an order instruction describing the action of a specific trader at a given time point. In what follows, we focus on the essential components of such messages, ignoring certain auxiliary information. Table~\ref{tb:mbo_example} shows an example sequence of MBO data, where:
\begin{itemize}
\item \textbf{Time stamp} records the time point when an instruction is given;
\item \textbf{ID} shows the unique ID for order identification which is anonymous to others;
\item \textbf{Type} indicates the order type, here limit order (Type = 1) or market order (Type = 2);
\item \textbf{Side} indicates whether an order is buy (1) or sell (2);
\item \textbf{Action} represents the specific instruction where 0 means updating the price or size for the existing order, 1 means adding a new order and 2 means cancelling an existing order. If Action = 2, the entries of Side, Price and Size are N/A as the matching engine will be able to identify and cancel the existing order using the unique ID;
\item \textbf{Price} shows the price level of the instruction;
\item \textbf{Size} shows the size (i.e. number of stocks) of the instruction.
\end{itemize}
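To make the message semantics concrete, the following sketch replays a few such messages into per-order state and then aggregates them into LOB-style price levels. The field names and example values are illustrative, not the exchange's actual schema:

```python
from collections import defaultdict

# Hypothetical MBO messages mirroring the fields above (missing fields omitted).
messages = [
    {"id": 1, "side": 1, "action": 1, "price": 68.54, "size": 8334.0},  # add
    {"id": 2, "side": 1, "action": 1, "price": 68.54, "size": 1000.0},  # add, same level
    {"id": 1, "side": 1, "action": 0, "price": 68.54, "size": 5000.0},  # update size
    {"id": 2, "action": 2},                                             # full cancel by ID
]

def replay(messages):
    """Replay MBO messages into per-order state, then aggregate to LOB levels."""
    orders = {}                    # id -> (side, price, size)
    for m in messages:
        if m["action"] == 2:       # cancel: the ID alone identifies the order
            orders.pop(m["id"], None)
        else:                      # add (1) or update (0)
            orders[m["id"]] = (m["side"], m["price"], m["size"])
    book = defaultdict(float)      # (side, price) -> total size at that level
    for side, price, size in orders.values():
        book[(side, price)] += size
    return dict(book)

print(replay(messages))  # only order 1 remains: {(1, 68.54): 5000.0}
```

This also illustrates why a LOB level hides information: the final book shows a single quantity at 68.54, while the MBO stream reveals the individual adds, updates and cancellations behind it.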
\begin{table}[!t]
\tbl{An example of a sequence of market by order data.}
{\begin{tabular}{l|llllll}
\toprule
\textbf{Time stamp} & \textbf{ID} & \textbf{Type} & \textbf{Side} & \textbf{Action} & \textbf{Price} & \textbf{Size} \\
\midrule
2018-01-02 09:21:15.717500766 & 462805645163273214 &1 & N/A & 2 & N/A & N/A \\
2018-01-02 09:21:18.585446702 & 462805645163298476 &1& 1 & 1 & 68.54 & 8334.0 \\
2018-01-02 09:21:20.680552032 & 462805645163297649 &1& 1 & 0 & 68.56 & 3227.0 \\
2018-01-02 09:21:20.944574722 & 462805645163297649 &1& N/A & 2 & N/A & N/A \\
2018-01-02 09:21:20.945483443 & 462805645163298567 &1& 2 & 1 & 68.59 & 5100.0 \\
\bottomrule
\end{tabular}}
\label{tb:mbo_example}
\end{table}
\begin{figure}[p]
\centering
\includegraphics[width=5.2in, height=4in]{mbo_1.pdf}
\includegraphics[width=5.2in, height=4in]{mbo_2.pdf}
\caption{An illustration of how MBO data updates a LOB. \textbf{Top:} An addition of a new limit order; \textbf{Middle top:} A cancellation of an existing order; \textbf{Middle bottom:} An update for a partial cancellation; \textbf{Bottom:} A marketable buy limit order that crosses the spread.}
\label{fig:mbo_mbp}
\end{figure}
A LOB updates whenever a new MBO message arrives, and this process is illustrated in Figure~\ref{fig:mbo_mbp}, where we show how an MBO message affects a LOB.
For example, if we look at the top of Figure~\ref{fig:mbo_mbp}, a new limit order (ID=46280) is added to the ask side of the order book with price at 70.04 and size of 7580.
The order book updates its status and the new order is added at the corresponding price level. In general, a LOB only shows the total available quantity at each price level, but MBO data provides extra information by showing individual behaviour. Although MBO data does not directly indicate which price level an order is added to, the normalisation scheme introduced in the next section captures this information; we not only obtain a smaller input space but also retain level information comparable with LOB data.
In addition, the usage of MBO data increases transparency and improves the understanding of order book dynamics without disclosing customer identities. Although we have access to the unique order ID, this number is generally assigned sequentially by the exchange matching engine \citep{cme} and a private link is provided to the customer, which keeps identities confidential.
Further, unlike LOB data, where we sometimes only view a limited number of price levels, MBO data allows us to observe the entire order book with full-depth information. Such granularity can improve traders' confidence in posting large orders, as they can better evaluate the potential market impact by knowing individual queue positions.
\subsection{Data Preprocessing and Normalisation}
We focus on MBO data that represents limit orders because market orders only account for a tiny percentage of total order flow.
Figure~\ref{fig:mbo_normalisation} illustrates the process of data preprocessing and normalisation. In particular, we process the MBO data for a unique ID as follows:
\begin{itemize}
\item \textbf{Side and Price:} Missing values correspond to updates and cancellations and we fill those with the corresponding values from the original order of that ID;
\item \textbf{Size:} Missing values correspond to full cancellation and we fill those with $0$ to indicate that no shares are outstanding after the action;
\item \textbf{Action:} we change Action to have values -1, 0 and 1. -1 means cancelling an order, 0 means updating price or size for the existing order and 1 means adding a new order;
\item \textbf{Change price} and \textbf{Change size}: We add these two new features to calculate the difference between entries for the price and size of a specific ID to reflect the intention of adding or decreasing positions for the given order.
\end{itemize}
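The preprocessing steps above can be sketched in pure Python as follows. The field names are illustrative, and missing (N/A) values are represented here as absent keys or `None`:

```python
def preprocess(msgs):
    """Fill missing fields from each ID's earlier state, recode Action to
    {-1, 0, 1}, and add change-price / change-size features
    (a sketch of the scheme described above)."""
    last = {}                                   # id -> (side, price, size)
    out = []
    for m in msgs:
        prev = last.get(m["id"])
        # Side/Price missing on updates and cancellations: fill from the original order.
        side = m.get("side") if m.get("side") is not None else prev[0]
        price = m.get("price") if m.get("price") is not None else prev[1]
        if m["action"] == 2:
            size = 0.0                          # full cancellation: nothing outstanding
        else:
            size = m["size"]
        action = {2: -1, 0: 0, 1: 1}[m["action"]]
        d_price = price - prev[1] if prev else 0.0
        d_size = size - prev[2] if prev else 0.0
        last[m["id"]] = (side, price, size)
        out.append({"side": side, "action": action, "price": price,
                    "size": size, "d_price": d_price, "d_size": d_size})
    return out
```

For example, an add followed by a full cancellation of the same ID yields a second record with the original side and price, size 0, action $-1$, and a change size equal to minus the original size.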
Data preprocessing is applied to every unique order ID and we then normalise the data as:
\begin{itemize}
\item \textbf{Normalised price}: (price - mid-price) / (minimum tick size $\times 100$).\footnote{Mid-price is the mean between the best ask and bid price. These reference prices and sizes, which we use below, can be obtained on the fly from the MBO data.} This calculation transforms price to tick change, representing how many ticks the price is away from the mid-price. The division by the minimum tick size is needed when we train models with multiple instruments as it maps price to a similar scale;
\item \textbf{Normalised size}: size / mid-size. Mid-size is the mean between the current best ask and bid size, defined analogously to the mid-price;
\item \textbf{Normalised change price}: change price / minimum tick size;
\item \textbf{Normalised change size}: change size / mid-size;
\item \textbf{Side and Action}: remain unchanged.
\end{itemize}
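A sketch of this normalisation for a single preprocessed message is given below. The mid-price, mid-size and minimum tick size are assumed to be tracked on the fly from the MBO stream, as noted above:

```python
def normalise(msg, mid_price, mid_size, tick):
    """Normalise one preprocessed message (illustrative field names):
    price -> tick distance from mid-price (scaled by 100 ticks),
    sizes -> multiples of the mid-size, change price -> ticks."""
    return {
        "side": msg["side"],                              # unchanged
        "action": msg["action"],                          # unchanged
        "price": (msg["price"] - mid_price) / (tick * 100),
        "size": msg["size"] / mid_size,
        "d_price": msg["d_price"] / tick,
        "d_size": msg["d_size"] / mid_size,
    }
```

For instance, with a mid-price of 68.55 and a tick of 0.01, an order at 68.56 is one tick above the mid-price and so maps to a normalised price of 0.01.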
Finally, we remove ``Time stamp'' and ``ID'', leaving 6 features in our feature space. Note that the normalised price essentially represents the price level at which the current order sits in the order book, taking level information into account.
\begin{figure}[!t]
\centering
\includegraphics[width=5.5in, height=4.0in]{data_normalisation.pdf}
\caption{An example of preprocessing and normalising the MBO data.}
\label{fig:mbo_normalisation}
\end{figure}
\subsection{Data Labelling}
In this work, we study a classification framework in which we predict future market movements as one of three classes: the market going up, staying stationary or going down. We use mid-prices to create labels and adopt the labelling method in \cite{zhang2019deeplob} to classify movements. In particular, we define
\begin{equation} \label{eq:label}
\begin{split}
&l_t = \frac{m_+(t) - m_-(t)}{m_-(t)}, \\
&m_-(t) = \frac{1}{k} \sum_{i=0}^{k-1} p_{t-i}, \\
&m_+(t) = \frac{1}{k} \sum_{i=1}^k p_{t+i},
\end{split}
\end{equation}
where $p_t$ is the mid-price at time $t$. We denote the prediction horizon as $k$; it counts arrivals of MBO data, meaning that we are working with tick time instead of clock time. To decide on the label we compare $l_t$ with a threshold $\alpha$, labelling up if $l_t>\alpha$, down if $l_t<-\alpha$ and stationary otherwise. The choice of $\alpha$ is related to the prediction horizon $k$ and we set $\alpha$ for each instrument to obtain a balanced training set. Our choices of $k$ and $\alpha$ are listed in Section~\ref{experiment} and we show that the datasets are balanced under these choices.
Note that Equation~\eqref{eq:label} introduces a smoothed labelling that leads to consistent labels, which are better suited for designing trading signals; the work of \cite{zhang2019deeplob} includes a more detailed discussion of the effects of different labelling methods.
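The labelling of Equation~\eqref{eq:label} can be sketched directly from the definitions above. Here `prices` is a list of mid-prices indexed in tick time, and the encoding of up/stationary/down as $1/0/{-1}$ is illustrative:

```python
def label(prices, t, k, alpha):
    """Three-class label from smoothed mid-prices, following the equations above:
    m_-(t) averages p_{t-k+1..t}, m_+(t) averages p_{t+1..t+k}."""
    m_minus = sum(prices[t - i] for i in range(k)) / k        # (1/k) sum_{i=0}^{k-1} p_{t-i}
    m_plus = sum(prices[t + i] for i in range(1, k + 1)) / k  # (1/k) sum_{i=1}^{k} p_{t+i}
    l = (m_plus - m_minus) / m_minus
    if l > alpha:
        return 1        # up
    if l < -alpha:
        return -1       # down
    return 0            # stationary
```

Averaging over $k$ past and $k$ future mid-prices, rather than comparing two single prices, is what makes the labels smooth and robust to one-tick noise.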
\section{Methodology}
\label{methodology}
In this section, we introduce the different deep learning algorithms studied in our work. For a single input of any time-series, we write $\bm{x}_{1:T}$, where $\bm{x}_{t}$ represents the features at time $t$ and $T$ is the length of the sequence which will later correspond to the length of the lookback of the input.
\subsection{Multilayer perceptrons (MLPs)}
MLPs are canonical neural network models in which a network is organised into a series of layers in a chain structure, each layer being a function of the layer that precedes it. We can define the hidden layer of an MLP as:
\begin{equation}
\bm{h}^{(l)} = g^{(l)} (\bm{W}^{(l)} \bm{h}^{(l-1)} + \bm{b}^{(l)} ),
\end{equation}
where $\bm{h}^{(l)} \in \mathbb{R}^{N_{l}}$ represents the $l$-th hidden layer with weights $\bm{W}^{(l)} \in \mathbb{R}^{N_l \times N_{l-1}}$ and biases $\bm{b}^{(l)} \in \mathbb{R}^{N_{l}}$. Here $g^{(l)}(\cdot)$ is the activation function that allows networks to model nonlinearities. The final output is a function of the last hidden layer and we compute objective functions to minimise errors between target outputs and estimates.
However, for MLPs, we first need to flatten $\bm{x}_{1:T}$ before feeding it to the subsequent hidden layers. Doing so breaks the time dependencies and treats features at different time stamps independently. We generally observe inferior results using MLPs and find that recurrent neural networks (RNNs) often deliver better performance. This is because an RNN acts as a memory buffer, summarising past information and recursively updating the hidden state with new observations at each time step of the input \citep{zhang2020deeppo}.
\subsection{Long Short-Term Memory (LSTMs)}
Standard RNNs suffer from vanishing or exploding gradient problems \citep{bengio1994learning}, and Long Short-Term Memory networks (LSTMs) were proposed to solve this problem by operating a gating mechanism that efficiently controls the propagation of past information \citep{hochreiter1997long}. An LSTM updates its hidden state recursively and has a cell state $\bm{c}_t$ coupled with a series of gates at each hidden state. In mathematical terms, we can write
\begin{equation}
\begin{split}
\text{Input gate:} \quad & \bm{i}_t = \sigma (\bm{W}_{i,h} \bm{h}_{t-1} +\bm{W}_{i,x} \bm{x}_{t} + \bm{b}_i ), \\
&\text{with} \ \bm{W}_{i,h} \in \mathbb{R}^{N_h \times N_h}, \bm{W}_{i,x} \in \mathbb{R}^{N_h \times N_x} \ \text{and} \ \bm{b}_i \in \mathbb{R}^{N_h}, \\
\text{Output gate:} \quad & \bm{o}_t = \sigma (\bm{W}_{o,h} \bm{h}_{t-1} +\bm{W}_{o,x} \bm{x}_{t} + \bm{b}_o ),\\
&\text{with} \ \bm{W}_{o,h} \in \mathbb{R}^{N_h \times N_h}, \bm{W}_{o,x} \in \mathbb{R}^{N_h \times N_x} \ \text{and} \ \bm{b}_o \in \mathbb{R}^{N_h}, \\
\text{Forget gate:} \quad & \bm{f}_t = \sigma (\bm{W}_{f,h} \bm{h}_{t-1} +\bm{W}_{f,x} \bm{x}_{t} + \bm{b}_f),\\
&\text{with} \ \bm{W}_{f,h} \in \mathbb{R}^{N_h \times N_h}, \bm{W}_{f,x} \in \mathbb{R}^{N_h \times N_x} \ \text{and} \ \bm{b}_f \in \mathbb{R}^{N_h},
\end{split}
\end{equation}
where $\bm{h}_{t-1}$ is the hidden state of a LSTM at time $t-1$ and $\sigma(\cdot)$ represents the sigmoid activation function. We use $\bm{W}$ and $\bm{b}$ to represent weights and biases at different gate operations. Subsequently, the current cell state and hidden state can be written as:
\begin{equation}
\begin{split}
\text{Cell state:} \quad & \bm{c}_t = \bm{f}_t \odot \bm{c}_{t-1} + \bm{i}_t \odot \text{tanh} (\bm{W}_{c,h} \bm{h}_{t-1} +\bm{W}_{c,x} \bm{x}_{t} + \bm{b}_c), \\
\text{Hidden state:} \quad & \bm{h}_t = \bm{o}_t \odot \text{tanh} (\bm{c}_t),
\end{split}
\end{equation}
where $\bm{W}_{c,h} \in \mathbb{R}^{N_h \times N_h}$, $\bm{W}_{c,x} \in \mathbb{R}^{N_h \times N_x}$, $\bm{b}_c \in \mathbb{R}^{N_h}$, $\odot$ is the element-wise product and $\text{tanh}(\cdot)$ is the hyperbolic tangent activation function. The hidden state $\bm{h}_t$ summarises the information from past states and current observations, and the gating mechanism efficiently addresses the vanishing gradient problem.
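As a minimal numerical illustration of these gate equations, the following sketch implements one LSTM step in the scalar case $N_h = N_x = 1$. The dictionary `p` of scalar weights is an illustrative stand-in for the matrices $\bm{W}$ and biases $\bm{b}$ above:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, p):
    """One scalar LSTM step following the equations above."""
    i = sigmoid(p["w_ih"] * h_prev + p["w_ix"] * x + p["b_i"])   # input gate
    o = sigmoid(p["w_oh"] * h_prev + p["w_ox"] * x + p["b_o"])   # output gate
    f = sigmoid(p["w_fh"] * h_prev + p["w_fx"] * x + p["b_f"])   # forget gate
    # cell state: forget part of the old cell, add gated new content
    c = f * c_prev + i * math.tanh(p["w_ch"] * h_prev + p["w_cx"] * x + p["b_c"])
    h = o * math.tanh(c)                                         # hidden state
    return h, c
```

Iterating this step over $\bm{x}_{1:T}$, carrying $(h, c)$ forward, is exactly the recursive summarisation of past information described above.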
\subsection{Attention Mechanism}
The Attention Mechanism \citep{bahdanau2014neural} is heavily used in machine translation and was proposed to solve the problem of diminishing performance for long input sequences. A standard LSTM calculates the final output as a function of only the last hidden state. An attention model, by contrast, uses an additional component called the context vector to assign trainable weights to all the hidden states of an input. We can write an attention mechanism for modelling a many-to-one problem as:
\begin{equation}
\bm{h}_{t} = \bm{f}_t(\bm{h}_{t-1}, \bm{x}_{t}),
\end{equation}
where $\bm{h}_{t}$ can be the hidden state from a LSTM at time $t$ for an input $\bm{x}_{1:T}$, and we define the context vector $\bm{c}_{T}$ as:
\begin{equation}
\begin{alignedat}{2}
& \text{Context vector:} &&\bm{c}_{T} = \sum_{t=1}^{T} \alpha(t, T)\bm{h}_{t}, \\
& \text{Attention weights:} \quad &&\alpha(t, T) = \frac{\exp(e(t,T))}{\sum_{s=1}^{T} \exp(e(s,T))}, \\
&\text{Score:} && e(t,T) = \bm{v}^T \text{tanh}(\bm{W}_h \bm{h}_{t}),
\end{alignedat}
\end{equation}
where $\bm{v} \in \mathbb{R}^{N_h}$ and $\bm{W}_h \in \mathbb{R}^{N_h \times N_h}$ are the trainable weights. We can then obtain the attention vector:
\begin{equation}
\bm{a}_T = \bm{f}(\bm{c}_T, \bm{h}_T) = \text{tanh}(\bm{W}_c[\bm{c}_T; \bm{h}_T]),
\end{equation}
where the final output is a function of $\bm{c}_{T}$, taking information at every hidden state into account.
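As a minimal illustration of these equations, the sketch below computes the attention weights and context vector for scalar hidden states ($N_h = 1$), with `v` and `W` standing in for the trainable weights $\bm{v}$ and $\bm{W}_h$:

```python
import math

def attention(hs, v, W):
    """Context vector over scalar hidden states:
    scores e_t = v * tanh(W * h_t), softmax weights, weighted sum."""
    scores = [v * math.tanh(W * h) for h in hs]
    mx = max(scores)                                  # subtract max for stability
    exps = [math.exp(s - mx) for s in scores]
    z = sum(exps)
    alphas = [e / z for e in exps]                    # attention weights
    context = sum(a * h for a, h in zip(alphas, hs))  # c_T
    return context, alphas
```

When all hidden states are identical the weights are uniform, and the context vector simply recovers that common value; in general the softmax concentrates weight on the states with the highest scores.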
\section{Experiments}
\label{experiment}
\subsection{Descriptions of Datasets}
Our datasets consist of MBO data for five highly liquid stocks, Lloyds (LLOY), Barclays (BARC), Tesco (TSCO), BT and Vodafone (VOD), for the entire year of 2018 from the London Stock Exchange. From the MBO data one can derive LOB data, which we use for our benchmarks and for reference prices. Our LOB dataset contains ask and bid information for an order book up to ten levels. For our modelling, we remove messages outside ten levels from the MBO data to align the timestamps of the two datasets, allowing for fair comparisons in the performance analysis. Afterwards, we train two sets of models by separately using the MBO and LOB data with the same targets. A direct comparison can then be made between the predictive performance obtained from the MBO and LOB data respectively.
For each trading day, we take the data between 08:30:00 and 16:00:00, restricting ourselves to liquid continuous trading hours, excluding any auctions. Overall, we have more than 169 million samples in our dataset and we take the first 6 months as training data, the next 3 months as validation data and the last 3 months as testing data. In the context of high-frequency microstructure data, we have more than 46 million observations in our testing set, providing sufficient scope for verifying the robustness and generalisability of model performance.
We test our models at three prediction horizons ($k = 20, 50, 100$) and list the choices of label parameter ($\alpha$) in Table~\ref{tb:setting}. We choose $\alpha$ for each instrument to have a balanced training set and the proportion of different classes is presented in Figure~\ref{fig:label_class} in Appendix~\ref{add_results}. Overall, the labels are roughly balanced for the testing set as well (noting that those were fixed on the training set). In terms of the lookback window ($T$) of the input, we take the 50 most recent updates of MBO data to form a single input and feed it to our model.
Note that we are working with tick time instead of physical clock time. In other words, the notion of a time step refers to the arrival of MBO updates. One advantage of working with tick time is that it naturally handles uneven trading volumes throughout the day. When a market opens with great volatility, we obtain more ticks and the model naturally makes faster predictions.
\begin{table}[!t]
\tbl{Label parameters ($\alpha$) for different prediction horizons and instruments (units in $10^{-4}$).}
{\begin{tabular}{l|lllll}
\toprule
& LLOY & BARC & TSCO & BT & VOD \\
\midrule
k = 20 & 0.25 & 0.35 & 0.10 & 0.40 & 0.22 \\
k = 50 & 0.50 & 0.65 & 0.70 & 0.70 & 0.45 \\
k = 100 & 0.75 & 0.95 & 1.20 & 1.00 & 0.70 \\
\bottomrule
\end{tabular}}
\label{tb:setting}
\end{table}
\subsection{Training Procedure}
For the MBO data, we study the deep learning models (MBO-MLP, MBO-LSTM and MBO-Attention) introduced in Section~\ref{methodology} along with a simple linear model (MBO-LM). We list the values of hyperparameters for the different algorithms in Table~\ref{tb:model_setting}, and gradient descent with the Adam optimiser \citep{kingma2014adam} is used for training all models. The complete search space of hyperparameters is included in Appendix~\ref{search_space} and we use a grid search to select the best hyperparameters.
For the LOB data, we include the 10 levels of a limit order book and past 50 observations as a single input. We follow the normalisation scheme in \cite{zhang2019deeplob} and both the MBO and LOB datasets share the same predictive targets, allowing a direct comparison between different models.
We choose state-of-the-art network architectures as comparison models, including LOB-LSTM \citep{sirignano2019universal}, LOB-CNN \citep{tsantekidis2017forecasting} and LOB-DeepLOB \citep{zhang2019deeplob}. The details of the network architectures and choices of hyperparameters can be found in the respective papers. We also include a linear model (LOB-LM) and a multilayer perceptron (LOB-MLP) as benchmark models.
We use the categorical cross-entropy loss as our objective function, and training is stopped when the validation loss does not decrease for more than 10 epochs. In general, it takes about 30 epochs to finish model training. TensorFlow and Keras \citep{girija2016tensorflow} are used to build all models and four NVIDIA GeForce RTX 2080 GPUs are used in our experiments.
\begin{table}[H]
\tbl{Choices of hyperparameters.}
{\begin{tabular}{l|lllllll}
\toprule
& \textbf{$\frac{\text{\# of}}{\text{layers}}$} & \textbf{$\frac{\text{\# of}}{\text{units}}$} & \textbf{$\frac{\text{Learning}}{\text{rate}}$} & \textbf{$\frac{\text{Batch}}{\text{size}}$} & \textbf{$\frac{\text{\# of}}{\text{parameters}}$} \\
\midrule
LM & - & - & 0.0001 & 128 & 903 \\
MLP & 1 & 64 & 0.0001 & 128 & 19459 \\
LSTM & 2 & 64 & 0.0001 & 128 & 51907 \\
Attention & 2 & 64 & 0.0001 & 128 & 72067 \\
\bottomrule
\end{tabular}}
\label{tb:model_setting}
\end{table}
\subsection{Experimental Results}
Table~\ref{tb:results} summarises the results for all models studied (rows) at the different prediction horizons. We use four evaluation metrics (columns) for comparison: Accuracy, Precision, Recall and F1-score. Kolmogorov-Smirnov tests \citep{massey1951kolmogorov} are used to check the statistical significance of the results, and all differences in evaluation metrics are significant.
\begin{table}[!t]
\tbl{Experimental results for different prediction horizons ($k$).}
{\begin{tabular}{l|llll}
\toprule
\textbf{Model} & \textbf{Accuracy \%} & \textbf{Precision \%} & \textbf{Recall \%} & \textbf{F1 \%} \\
\midrule
\multicolumn{5}{c}{\textbf{Prediction horizon k = 20}} \\
\midrule
MBO-LM & 41.81 & 41.16 & 41.81 & 34.97 \\
MBO-MLP & 47.12 & 46.17 & 47.12 & 46.46 \\
MBO-LSTM & 61.94 & 61.60 & 61.94 & 61.75 \\
MBO-Attention & 61.19 & 62.83 & 61.19 & 61.73 \\
\midrule
LOB-LM & 45.71 & 43.44 & 45.71 & 42.38 \\
LOB-MLP & 50.06 & 50.04 & 50.06 & 46.89 \\
LOB-LSTM & 66.09 & 67.53 & 66.09 & 66.68 \\
LOB-CNN & 63.39 & 67.31 & 63.39 & 64.64 \\
LOB-DeepLOB & 68.73 & 68.16 & 68.73 & 68.40\\
\midrule
Ensemble-MBO & 62.35 & 62.92 & 62.35 & 62.56\\
Ensemble-LOB & 67.97 & 68.74 & 67.97 & 68.31\\
Ensemble-MBO-LOB & \textbf{68.95} & \textbf{69.10} & \textbf{68.95} & \textbf{69.02}\\
\midrule
\multicolumn{5}{c}{\textbf{Prediction horizon k = 50}} \\
\midrule
MBO-LM & 41.88 & 38.42 & 41.88 & 36.57 \\
MBO-MLP & 46.39 & 43.07 & 46.39 & 42.33 \\
MBO-LSTM & 58.84 & 59.65 & 58.84 & 59.18 \\
MBO-Attention & 59.31 & 56.10 & 59.31 & 56.88 \\
\midrule
LOB-LM & 46.97 & 44.34 & 46.97 & 41.13 \\
LOB-MLP & 50.56 & 48.46 & 50.56 & 47.25 \\
LOB-LSTM & 64.49 & \textbf{64.88} & 64.49 & 64.65 \\
LOB-CNN & 64.77 & 62.55 & 64.77 & 63.26 \\
LOB-DeepLOB & 66.12 & 64.37 & 65.38 & 64.79\\
\midrule
Ensemble-MBO & 60.03 & 58.45 & 60.03 & 59.05\\
Ensemble-LOB & 65.95 & 64.72 & 65.95 & 65.23\\
Ensemble-MBO-LOB & \textbf{66.17} & 64.78 & \textbf{66.17} & \textbf{65.34}\\
\midrule
\multicolumn{5}{c}{\textbf{Prediction horizon k = 100}} \\
\midrule
MBO-LM & 41.27 & 34.23 & 41.27 & 35.05 \\
MBO-MLP & 44.19 & 42.70 & 44.19 & 40.29 \\
MBO-LSTM & 57.96 & 54.10 & 57.96 & 54.79 \\
MBO-Attention & 56.36 & 53.66 & 56.36 & 53.75 \\
\midrule
LOB-LM & 46.19 & 43.29 & 46.19 & 41.80 \\
LOB-MLP & 48.36 & 47.39 & 48.36 & 43.66 \\
LOB-LSTM & 61.27 & 58.47 & 62.82 & 57.97 \\
LOB-CNN & 61.78 & 56.91 & 61.78 & 55.40 \\
LOB-DeepLOB & 62.82 & \textbf{60.94} & 61.27 & 61.10\\
\midrule
Ensemble-MBO & 56.62 & 54.85 & 56.62 & 55.48\\
Ensemble-LOB & 63.25 & 59.41 & 63.25 & 60.56\\
Ensemble-MBO-LOB & \textbf{63.75} & 60.01 & \textbf{63.75} & \textbf{61.82}\\
\bottomrule
\end{tabular}}
\label{tb:results}
\end{table}
\begin{figure}[!t]
\centering
\includegraphics[width=5.5in, height=2in]{results_correlation.pdf}
\caption{Pearson correlation between different predictive signals for different prediction horizons ($k$). \textbf{Top:} $k=20$; \textbf{Middle:} $k=50$; \textbf{Bottom:} $k=100$.}
\label{fig:results_correlation}
\end{figure}
We observe that the models trained with LOB data are comparable to, but slightly outperform, the ones using MBO data. While, a priori, MBO data contains more information (contents of levels and trades), it is harder to model the raw messages than LOB snapshots, which can be seen as derived or handcrafted features of the MBO data. What is encouraging is that we are able to obtain comparable performance by modelling the raw messages directly. Furthermore, if we look at the Pearson correlation between predictive signals in Figure~\ref{fig:results_correlation}, we can see that predictive signals from the MBO data are less correlated with the LOB signals. This means that we are indeed able to extract different information from the MBO data. It also suggests that a combination of the two signals, from MBO and LOB data, can benefit from diversification, reducing signal variance given the low correlation.
To verify this statement, we include three ensemble models in our experiment, where Ensemble-MBO is obtained from MBO-LSTM and MBO-Attention; Ensemble-LOB is from LOB-LSTM, LOB-CNN and LOB-DeepLOB; and Ensemble-MBO-LOB combines Ensemble-MBO and Ensemble-LOB. An equal weighting scheme is used to construct the ensemble models and we observe that ensemble approaches, in general, improve predictive performance. In particular, Ensemble-MBO-LOB delivers the best performance, indicating the potential benefits of combining the MBO and LOB data.
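The equal-weight combination can be sketched as follows; model outputs are assumed to be softmax class-probability vectors (up, stationary, down):

```python
def ensemble(prob_lists):
    """Equal-weight ensemble: average the class-probability vectors
    produced by several models, then take the argmax class.
    (A sketch of the equal weighting scheme described above.)"""
    n = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__), avg
```

Averaging the probabilities of weakly correlated models reduces the variance of the combined signal, which is precisely why the low MBO/LOB correlation observed above makes the joint ensemble attractive.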
Since this work aims to study MBO data, we focus on analysing results from the models trained using the MBO data. We can see that the deep learning models outperform the simple linear model, suggesting the existence of nonlinear features in financial time-series, and networks are capable of extracting such features from the raw messages in MBO data. We observe that MBO-MLP delivers inferior results compared to the other networks. This is most likely due to the structure of the MLP, which has full connectivity between input and hidden units -- leading MLPs to often underperform other networks in financial applications with low signal-to-noise ratios. MBO-LSTM and MBO-Attention both have a recurrent structure with parameter sharing that enables hidden states to summarise past information and update with current observations. Such a process filters unnecessary input components and naturally models the propagation of order flow. This observation has also been reported by \cite{lim2019enhancing, zhang2020deep, zhang2020deeppo}, who find that networks with a recurrent nature deliver better results than MLPs when modelling financial time-series.
Figure~\ref{fig:confusion_matrix} shows the normalised confusion matrices, which help to understand how the models perform at predicting each label class. We also calculate the accuracy score for every instrument and each testing day to assess the consistency of our results; this is summarised in the whisker plots in Figure~\ref{fig:daily_acc}. Each point in a whisker plot represents the accuracy score for one testing day, and the box represents the median and interquartile range of these scores. We can see that MBO-LM and MBO-MLP have large interquartile ranges, suggesting high variance in results, while MBO-LSTM and MBO-Attention show consistent and robust results across the entire testing period. These whisker plots allow us to understand model performance on a daily basis and ensure the generalisability of our methods. In particular, we see that performance is consistent across the entire testing period and not concentrated in a few days, which could be due to noise.
\begin{figure}[!t]
\centering
\includegraphics[width=5.5in, height=1.1in]{confusion_matrix_k20.pdf}
\includegraphics[width=5.5in, height=1.1in]{confusion_matrix_k50.pdf}
\includegraphics[width=5.5in, height=1.1in]{confusion_matrix_k100.pdf}
\caption{Normalised confusion matrices for different prediction horizons ($k$). \textbf{Top:} $k=20$; \textbf{Middle:} $k=50$; \textbf{Bottom:} $k=100$. From the left to right, we display MBO-LM, MBO-MLP, MBO-LSTM and MBO-Attention.}
\label{fig:confusion_matrix}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=5.5in, height=1.6in]{daily_accuracy_k20.pdf}
\includegraphics[width=5.5in, height=1.6in]{daily_accuracy_k50.pdf}
\includegraphics[width=5.5in, height=1.6in]{daily_accuracy_k100.pdf}
\caption{Whisker plots of daily accuracy for different prediction horizons ($k$). \textbf{Top:} $k=20$; \textbf{Middle:} $k=50$; \textbf{Bottom:} $k=100$.}
\label{fig:daily_acc}
\end{figure}
\section{Conclusion}
\label{conclusion}
In this work we introduce deep learning models for Market by Order (MBO) data. To the best of our knowledge this is the first study of predictive modelling of MBO data using data-driven techniques in the academic literature. Current academic research in this direction is primarily focused on LOB data and we hope that this work helps to popularise the usage of MBO which we see as the next frontier in microstructure modelling in financial data science.
We carefully introduce the structure of MBO data and demonstrate a specific normalisation scheme that allows model training with multiple instruments using deep learning. We consider a range of deep learning architectures including MLP, LSTM and attention layers. Our dataset consists of millions of samples for highly liquid instruments from the London Stock Exchange, ensuring the consistency and generalisability of our methods.
We compare models trained using MBO and LOB data respectively. We show that we can obtain similar, but slightly inferior, performance by modelling raw MBO messages, when compared to modelling LOB data. While MBO data a priori contains more information, it is harder to model the raw messages than LOBs, which can be seen as derived features of the data. Importantly, we show that our models can extract additional information from the MBO data which is not captured by models trained on LOB data. This means that they can add additional value, as we demonstrate with an ensemble approach that combines signals from the MBO and LOB data and delivers the best performance.
In future work, we can apply MBO data to various financial applications including market-making or trade execution. Further, the work of \cite{briola2021deep} applies Reinforcement Learning (RL) algorithms to high-frequency trading, and it would be interesting to test the effectiveness of using MBO data within an RL framework.
\section*{Acknowledgement(s)}
The authors would like to thank members of the Machine Learning Research Group at the University of Oxford for their useful comments. We are most grateful to the Oxford-Man Institute of Quantitative Finance for computing support and data access.
\input{paper.bbl}
\section{Passage Retrieval Pipeline}
\label{sec:pipeline}
Our passage retrieval pipeline is shown schematically in Figure~\ref{fig1} and works as follows.
Given the original current turn query $Q$ and the conversation history $H$, we first perform query resolution, that is, we add missing context from $H$ to $Q$ to arrive at the resolved query $Q'$~\cite{DBLP:conf/sigir/VoskaridesLRKR20}.
Next, we perform initial retrieval using $Q'$ to get a list of top-$k$ passages $P$.
Finally, for each passage in $P$, we combine the scores of a reranking module and a reading comprehension module to obtain the final ranked list $R$. %
We describe each module of the pipeline below.
\begin{figure}[t]
\includegraphics[width=\textwidth]{qr_diagrams-crop.pdf}
\caption{Our passage retrieval pipeline.} \label{fig1}
\end{figure}
\subsection{Query Resolution}
\label{sec:query-res}
One important challenge in conversational passage retrieval is that the current turn query is often under-specified.
In order to address this challenge, we perform query resolution, that is, add missing context from the conversation history to the current turn query~\cite{DBLP:conf/sigir/VoskaridesLRKR20}.
We use QuReTeC, a binary term classification query resolution model, which uses BERT to classify each term in the conversation history as relevant or not, and adds the relevant terms to the original current turn query.\footnote{We refer the interested reader to the original paper for more details~\cite{DBLP:conf/sigir/VoskaridesLRKR20}.}
Due to BERT's restrictions on the number of tokens, we cannot include the responses to all the previous turn queries in the conversation history.
Thus, we include (i) all the previous turn queries and (ii) the \emph{automatic} canonical response to the previous turn query only (provided by the track organizers).
We use the QuReTeC model described in~\cite{DBLP:conf/sigir/VoskaridesLRKR20} that was trained on gold standard query resolutions derived from the CANARD dataset~\cite{elgohary2019can}.
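As a minimal illustration (not the authors' implementation), the term-classification step of QuReTeC-style query resolution can be sketched as follows, where the per-term BERT classifier is replaced by a hypothetical `is_relevant` predicate and the relevance labels are toy values:

```python
def resolve_query(current_query, history_terms, is_relevant):
    """Append conversation-history terms classified as relevant to the
    current turn query (QuReTeC-style binary term classification).

    `is_relevant` stands in for the per-term BERT classifier.
    """
    query_terms = current_query.split()
    added = [t for t in history_terms
             if is_relevant(t) and t not in query_terms]
    return " ".join(query_terms + added)

# Toy relevance labels standing in for the classifier's output.
labels = {"melania": 0, "first": 1, "lady": 1, "the": 0}
resolved = resolve_query("Do we pay her?",
                         ["the", "first", "lady", "melania"],
                         lambda t: labels.get(t, 0) == 1)
```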
\subsection{Initial Retrieval}
\label{sec:initial-retrieval}
We perform initial retrieval using BM25. We tuned the parameters on the MS MARCO passage retrieval dataset ($k_1=0.82, b=0.68$).
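As a rough sketch (assuming the standard BM25 scoring formula; production systems would use a search engine rather than this toy scorer), initial retrieval with the tuned parameters could look like:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=0.82, b=0.68):
    """Score each tokenized document against a tokenized query with BM25.

    k1 and b default to the values reported as tuned on MS MARCO.
    """
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter(t for d in docs for t in set(d))  # document frequencies
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query:
            if tf[t] == 0:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            norm = k1 * (1 - b + b * len(d) / avgdl)
            s += idf * tf[t] * (k1 + 1) / (tf[t] + norm)
        scores.append(s)
    return scores

docs = [["social", "security", "funding"], ["cooking", "recipes", "list"]]
scores = bm25_scores(["social", "security"], docs)
```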
\subsection{Re-ranking}
\label{sec:reranking}
Here, we re-rank the original ranking list obtained in the initial retrieval step.
The final ranking score is a weighted average of the Re-ranking (BERT) and Reading Comprehension scores, which we describe below. The interpolation weight $w$ is optimized on the TREC CAsT 2019 dataset~\cite{DBLP:conf/sigir/0001XKC20}.
\paragraph{Re-ranking (BERT).} We use a BERT model to get a ranking score for each passage as described in~\cite{nogueira2019passage}. We initialize BERT with \texttt{bert-large} and fine-tune it on the MS MARCO passage retrieval dataset as described in~\cite{DBLP:journals/corr/abs-2004-14652}.
\paragraph{Reading Comprehension.}
As an additional signal to rank passages we use a reading comprehension model.
The model is a RoBERTa-Large model trained to predict an answer as a text span in a given passage or ``No Answer'' if the passage does not contain the answer. It is fine-tuned on the MRQA dataset~\cite{fisch2019mrqa}.
We compute the reading comprehension score as the sum of the predicted start and end span logits: $(l_{start} + l_{end})$.
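The score combination described above can be sketched as follows (the interpolation weight and the scores are illustrative placeholders; the paper tunes $w$ on TREC CAsT 2019):

```python
def final_score(bert_score, rc_logits, w=0.5):
    """Weighted average of the BERT re-ranking score and the reading
    comprehension score (sum of predicted start/end span logits)."""
    l_start, l_end = rc_logits
    return w * bert_score + (1 - w) * (l_start + l_end)

# Rank passages by the combined score (toy numbers).
passages = {"p1": (2.0, (1.0, 0.5)), "p2": (1.5, (3.0, 2.5))}
ranked = sorted(passages, key=lambda p: final_score(*passages[p]),
                reverse=True)
```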
\section{Runs}
We submitted 3 automatic runs and 1 manual run.
Automatic runs use the raw current turn query, while the manual run uses the manually rewritten current turn query.
For all runs, we keep the top-\emph{100} ranked passages per query.
\subsection{Automatic runs}
\begin{itemize}
\item \texttt{quretecNoRerank}: Uses QuReTeC for query resolution (Section~\ref{sec:query-res}) and the initial retrieval module (Section~\ref{sec:initial-retrieval}), but does not use re-ranking (Section~\ref{sec:reranking}).
\item \texttt{quretecQR}: Uses the whole retrieval pipeline described in Section~\ref{sec:pipeline}.
\item \texttt{baselineQR}: Uses the whole retrieval pipeline but uses the \emph{automatically} rewritten version of the current turn query provided by the track organizers, instead of QuReTeC.
\end{itemize}
\subsection{Manual run}
\begin{itemize}
\item \texttt{HumanQR}: Uses the whole retrieval pipeline but uses the \emph{manually} rewritten version of the current turn query provided by the track organizers, instead of QuReTeC.
\end{itemize}
\section{Results}
\begin{table*}[t]
\centering
\caption{Experimental results on the TREC CAsT 2020 dataset. Note that apart from our submitted runs, we also report performance of the Median runs for reference (\texttt{Median-auto} and \texttt{Median-manual}).}
\label{tab:main}
\begin{tabular}{llrrrrr}
\toprule
\textbf{Run} & \textbf{Type} & \textbf{NDCG@3} & \textbf{NDCG@5} & \textbf{MAP} & \textbf{MRR} & \textbf{Recall@100}\\
\midrule
\texttt{quretecNoRerank} & Automatic & 0.171 & 0.170 & 0.107 & 0.406 & 0.285 \\
\texttt{Median-auto} & Automatic & 0.225 & 0.220 & 0.145 & - & - \\
\texttt{baselineQR} & Automatic & 0.319 & 0.302 & 0.158 & 0.556 & 0.266 \\
\texttt{quretecQR} & Automatic & \textbf{0.340} & \textbf{0.320} & \textbf{0.172} & \textbf{0.589} & \textbf{0.285} \\
\midrule
\texttt{Median-manual} & Manual & 0.317 & 0.303 & 0.201 & - & - \\
\texttt{HumanQR} & Manual & \textbf{0.498} & \textbf{0.472} & \textbf{0.270} & \textbf{0.799} & \textbf{0.408}\\
\bottomrule
\end{tabular}
\end{table*}
Table~\ref{tab:main} shows our experimental results.
First, we observe that \texttt{quretecNoRerank} underperforms \texttt{Median-auto}, thus highlighting the importance of the re-ranking module.
Also, we observe that \texttt{quretecQR}, the run that uses the whole pipeline, outperforms \texttt{Median-auto} by a large margin and also outperforms \texttt{baselineQR}, on all reported metrics.
This shows the effectiveness of QuReTeC for query resolution~\cite{DBLP:conf/sigir/VoskaridesLRKR20}.
Moreover, we see that \texttt{quretecQR} is outperformed by \texttt{HumanQR} by a large margin, which highlights the need for future work on the task of query resolution~\cite{vakulenko-2021-comparison}.
Lastly, we observe that our manual run (\texttt{HumanQR}) outperforms \texttt{Median-manual}, likely because of better (tuned) retrieval modules.
\section{Analysis}
In this section, we analyze our results using the approach introduced in \cite{DBLP:journals/corr/abs-2010-06835}.
\subsection{Quantitative analysis}
\begin{table}[t]
\centering
\caption{Error analysis when using Original, QuReTeC-resolved or Human queries.
For a given query group, if NDCG@3\textgreater0 for the query used then we mark it with $\checkmark$, otherwise we mark it with $\times$ (NDCG@3=0). }\label{tab_stats1}
\begin{tabular}{| l | ccc | c | cc |}
\toprule
\multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Error type}}} &
\multicolumn{3}{c|}{\textbf{Query}}
& \multirow{2}{*}{\textbf{\#}} & \multicolumn{2}{c|}{\multirow{2}{*}{\textbf{\%}}} \\
& \textbf{Original} & \textbf{QuReTeC}-resolved & \textbf{Human} & & & \\
\midrule
\multirow{4}{*}{Ranking error} & $\times$ & $\times$ & $\times$ & 20 & \multicolumn{1}{r|}{9.6} & \multirow{4}{*}{13.5} \\
& $\checkmark$ & $\times$ & $\times$ & 0 & \multicolumn{1}{r|}{0.0} & \\
& $\times$ & $\checkmark$ & $\times$ & 7 & \multicolumn{1}{r|}{3.4} & \\
& $\checkmark$ & $\checkmark$ & $\times$ & 1 & \multicolumn{1}{r|}{0.5} & \\
\midrule
\multirow{2}{*}{Query resolution error} & $\times$ & $\times$ & $\checkmark$ & 51 & \multicolumn{1}{r|}{24.5} & \multirow{2}{*}{25.5} \\
& $\checkmark$ & $\times$ & $\checkmark$ & 2 & \multicolumn{1}{r|}{1.0} & \\
\midrule
\multirow{2}{*}{No error} & $\times$ & $\checkmark$ & $\checkmark$ & 88 & \multicolumn{1}{r|}{42.2} &
\multirow{2}{*}{61.0} \\
& $\checkmark$ & $\checkmark$ & $\checkmark$ & 39 & \multicolumn{1}{r|}{18.8} & \\
\bottomrule
\end{tabular}
\end{table}
In our pipeline, passage retrieval performance is dependent on the performance of the query resolution module. Thus, we try to estimate the proportion of ranking and query resolution errors separately.
Specifically, we compare passage retrieval performance when using the Original queries, the QuReTeC-resolved queries or Human rewritten queries, and group queries into different types: (i) ranking error, (ii) query resolution error and (iii) no error.
In order to simplify our analysis, we first choose a ranking metric $m$ (e.g., NDCG@3) and a threshold $t$.
We define ranking errors as follows: we assume that Human rewritten queries are always well specified (i.e., they do not need query resolution), and thus poor ranking performance ($m \leq t$) when using the Human rewritten queries can be attributed to the ranking modules.
A query resolution error is one for which the Human rewritten query has performance $m > t$, but for which the QuReTeC-resolved query has performance $m \leq t$.
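Under these definitions, the per-query error classification can be sketched as:

```python
def error_type(m_original, m_quretec, m_human, t=0.0):
    """Classify a query following the analysis above: a failing Human
    rewrite (metric <= t) is attributed to the ranking modules; otherwise
    a failing QuReTeC rewrite is a query resolution error; otherwise
    there is no error."""
    if m_human <= t:
        return "ranking error"
    if m_quretec <= t:
        return "query resolution error"
    return "no error"
```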
Table~\ref{tab_stats1} shows the results of this analysis when using NDCG@3 as the ranking metric $m$ and setting the threshold to $t=0$.
Since we assume that human rewrites are always well specified, all queries with NDCG@3=0 ($\times$ in column Human) are due to errors in retrieval (13.5\%).
Among the queries for which at least one relevant passage was retrieved in the top-3 (\checkmark in column Human), we see that 61.0\% were correctly resolved by QuReTeC, and 25.5\% were not.
This shows that query resolution for conversational passage retrieval still has substantial room for improvement.
In addition, we observe that $(0+1+2+39)/208 \approx 20\%$ of the queries in the dataset do not need resolution, since using them we can retrieve at least one relevant passage in the top-3 ($\checkmark$ in column Original).
\begin{table}[t]
\centering
\caption{Error analysis when using Original, QuReTeC-resolved or Human queries. $\checkmark$ indicates that the retrieval performance (NDCG@3 or NDCG@5) reached the threshold indicated in the right columns, and $\times$ indicates that it did not reach the threshold. The numbers correspond to the number of queries in each group.}\label{tab_stats}
\begin{tabular}{| ccc | ccc | ccc |}
\toprule
\multicolumn{3}{|c|}{\textbf{Query}} & \multicolumn{3}{c|}{\textbf{NDCG@3}} & \multicolumn{3}{c|}{\textbf{NDCG@5}} \\
\textbf{Original} & \textbf{QuReTeC}-resolved & \textbf{Human} & \textbf{\textgreater 0} & \textbf{\textgreater{}= 0.5} & \textbf{= 1} & \textbf{\textgreater 0} & \textbf{\textgreater{}= 0.5} & \textbf{= 1} \\
\midrule
$\times$ & $\times$ & $\times$ & 20 & 88 & 185 & 17 & 87 & 196 \\
$\checkmark$ & $\times$ & $\times$ & 0 & 2 & 0 & 0 & 0 & 0 \\
$\times$ & $\checkmark$ & $\times$ & 7 & 3 & 1 & 3 & 3 & 3 \\
$\checkmark$ & $\checkmark$ & $\times$ & 1 & 1 & 0 & 1 & 0 & 0 \\
\midrule
$\times$ & $\times$ & $\checkmark$ & 51 & 42 & 10 & 50 & 48 & 4 \\
$\checkmark$ & $\times$ & $\checkmark$ & 2 & 1 & 0 & 0 & 2 & 0 \\
$\times$ & $\checkmark$ & $\checkmark$ & 88 & 65 & 10 & 87 & 59 & 4 \\
$\checkmark$ & $\checkmark$ & $\checkmark$ & 39 & 6 & 2 & 50 & 9 & 1 \\
\bottomrule
\end{tabular}
\end{table}
Table~\ref{tab_stats} shows the same error analysis performed for different thresholds of NDCG@3 and NDCG@5.
We observe that, as the performance threshold increases, the number of ranking errors increases, which indicates that the passage ranking modules have substantial room for improvement.
Figure~\ref{fig2} shows the same analysis for NDCG@3, for more thresholds.\footnote{The source code for this analysis, which produces the visualisation in Figure~\ref{fig2} from the run files, is available at \url{https://github.com/svakulenk0/QRQA}.}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{visualization-7.png}
\caption{Error analysis results using Original, QuReTeC-resolved and Human queries for all thresholds of NDCG@3 in $(0, 1]$ with intervals of 0.02. Passage ranking errors increase as the NDCG threshold increases (blue). The proportion of correct query resolutions (turquoise) is higher than the number of errors produced by QuReTeC (orange). \textit{Best seen in color}.} \label{fig2}
\end{figure}
\subsection{Qualitative analysis}
\begin{table}[t]
\centering
\caption{Examples where QuReTeC performs worse than Human rewrites.}\label{tab_samples_worse}
\begin{tabularx}{\textwidth}{l@{\hskip .1in}l@{\hskip .1in}Xr}
\toprule
\multicolumn{1}{c}{\textbf{qid}} & & & \multicolumn{1}{c}{\textbf{NDCG@3}} \\
\midrule
\multirow{2}{*}{101\_7} & Human & Does the public pay the First Lady of the \textit{United States}? & 0.864 \\
& QuReTeC & Do we pay the First Lady? melania trump & 0 \\ \midrule
\multirow{2}{*}{101\_8} & Human & Does the public pay Ivanka Trump? & 0.883 \\
& QuReTeC &What about Ivanka? \textit{melania melanija} trump & 0 \\ \midrule
\multirow{2}{*}{102\_5} & Human& How much money is owed to social security? & 0.704 \\
& QuReTeC &How much is owed? \textit{program} social security & 0 \\ \midrule
\multirow{2}{*}{102\_8} & Human& Can social security be fixed? & 0.413 \\
& QuReTeC & Can it be fixed? \textit{checks} social \textit{check} security & 0 \\ \midrule
\multirow{2}{*}{102\_9} & Human & How much of a \textit{tax} increase will keep social security solvent? & 1.000 \\
& QuReTeC & How much of an increase? social security & 0 \\
\bottomrule
\end{tabularx}
\end{table}
In order to gain further insights, we sample cases where using the QuReTeC-resolved queries results in better or worse retrieval performance than using the Human rewrites.
In Table~\ref{tab_samples_worse} we show examples where QuReTeC performs worse than Human rewrites.
In these cases, QuReTeC either misses relevant tokens or introduces redundant tokens.
\begin{table}[t]
\centering
\caption{Examples where QuReTeC performs better than Human rewrites.}\label{tab_samples_better}
\begin{tabularx}{\textwidth}{l@{\hskip .1in}l@{\hskip .1in}Xr}
\toprule
\multicolumn{1}{c}{\textbf{qid}} & & & \multicolumn{1}{c}{\textbf{NDCG@3}} \\
\midrule
\multirow{2}{*}{101\_9} & Human & Does the public pay Jared Kushner? & 0 \\
& QuReTeC & And Jared? \textit{ivana donald trump} & 0.296 \\
\midrule
\multirow{2}{*}{105\_3} & Human & Why was George Zimmerman acquitted? & 0 \\
& QuReTeC & Why was he acquitted? george \textit{trayvon martin} zimmerman & 0.202 \\ \midrule
\multirow{2}{*}{93\_6} & Human & What support does the franchise provide? & 0 \\
& QuReTeC & What support does it provide? \textit{king} franchise \textit{agreement} \textit{burger} & 0.521 \\ \midrule
\multirow{2}{*}{98\_7} & Human & Can you show me \textit{vegetarian} recipes with almonds? & 0 \\
& QuReTeC & Oh \textit{almonds}? Can you show me recipes with it? \textit{almonds} & 0.296 \\
\bottomrule
\end{tabularx}
\end{table}
Interestingly, there are also cases in which QuReTeC performs better than Human rewrites (see Table~\ref{tab_samples_better} for examples).
In these examples, QuReTeC introduced tokens from the conversation history that were absent from the manually rewritten queries but which helped to retrieve relevant passages.
\section{Conclusion}
We presented our participation in the TREC CAsT 2020 track.
We found that our best automatic run that uses QuReTeC for query resolution (\texttt{quretecQR}) outperforms both the automatic median run and the run that uses the rewritten queries provided by the organizers (\texttt{baselineQR}).
In addition, we found that our manual run that uses the human rewrites (\texttt{HumanQR}) outperforms our best automatic run (\texttt{quretecQR}), which, together with our analysis, highlights the need for further work on the task of query resolution for conversational passage retrieval.
\bibliographystyle{splncs04}
\section{Introduction}\label{Sec: Intro}
\IEEEPARstart{S}{ingle} photon light detection and ranging (lidar) has emerged as an important depth imaging technique prevalent in the automotive \cite{Hecht:18,lidarauto}, defence \cite{6159363} and forestry industries \cite{PIERZCHALA2018217}. This modality has the unique advantage of offering very high depth resolution \cite{manipop}, even for long-range scenes, using low-power (eye-safe) lasers \cite{Pawlikowska2017SinglephotonTI}. The technique has at its core the ability to emit light pulses and detect each single photon as it arrives, thereby obtaining a depth estimate by measuring the round-trip time of individual photons. By using a time-correlated single-photon counting (TCSPC) system, a histogram can be formed for each pixel indicating the time delay between emitted light pulses and detected photons, with a proportion of the photons originating from background or ambient light (e.g., the sun). The number of counts per time histogram bin provides information on the depth and reflectivity of the object or scene. The presence of a peak in the histogram indicates that an object is present within the range of the lidar system, and the location of this object corresponds to the location of the impulse response. If the material is semi-transparent (e.g., glass, water, camouflage) or the laser footprint is large, then multiple peaks with different intensities may exist within a single pixel \cite{manipop}. A typical example of a TCSPC histogram for a given pixel within a scene is shown in Figure \ref{fig: intro hist}.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.25]{images_final/introduction/hist_new_intro.eps}
\caption{An example of a TCSPC histogram for a pixel in a complex scene containing a semi-transparent material (camouflage) in front of a person, which give rise to the two peaks of the histogram.}
\label{fig: intro hist}
\end{figure}
\par The image restoration task reduces to inferring the positions and intensities of the peaks in the histogram for each pixel in the image. Typically, the time-correlated single-photon counting data is collected in one of two main ways: either (i) the time-stamp of each photon is recorded \cite{manipop}, or (ii) a temporal histogram, as seen in Figure \ref{fig: intro hist}, is constructed, which counts the number of photons detected per histogram bin of time-interval $\Delta \tau$ \cite{30frames,3Dstacked}. In either case, the time-correlated single-photon counting data has to be recorded, stored in memory and transferred from the chip for each pixel in the scene. The development of high-rate, high-resolution, low-power ToF image sensors is challenging due to the large data volumes required. This causes a major data-processing bottleneck on the device when the number of photons per pixel $n$ is large, the time resolution $\Delta \tau$ is fine, or the spatial resolution is high, since the space requirement, power consumption and computational burden of the depth reconstruction algorithms scale with these parameters \cite{tachellaNComms}.
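For illustration, converting recorded time-stamps into a coarse histogram (approach (ii)) might look like the following sketch, where coarse binning trades temporal resolution for a smaller data volume:

```python
import numpy as np

def build_histogram(timestamps, T, n_bins):
    """Bin discretized photon time-stamps in [0, T-1] into a coarse TCSPC
    histogram of n_bins bins (bin width T // n_bins)."""
    bin_width = T // n_bins
    return np.bincount(np.asarray(timestamps) // bin_width,
                       minlength=n_bins)

# Five time-stamps over T = 8 bins, compressed to 4 coarse bins.
h = build_histogram([0, 1, 5, 5, 7], T=8, n_bins=4)
```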
\par Various existing methods have attempted to tackle the trade-off between depth resolution and computational/space complexity. A number of papers \cite{hardware,slidingGate,128x96,128x128,128x128b} propose methods to address the trade-off between depth resolution and the complexities associated with the TCSPC histogram. Henderson et al. \cite{hardware} propose a method that employs a gated procedure to coarsely bin the detected photons, whilst Ren et al. \cite{slidingGate} develop a sliding-window approach to achieve high-resolution depth. Walker et al. \cite{128x96} calculate the depth directly from the photon time-stamps. However, in all of these approaches, the approximations formed on-chip compromise the depth resolution of the image. Della Rocca et al. \cite{128x128,128x128b} propose to collect the histograms of photon detections only when there is a significant change of activity. This reduces the data transfer, as it is only required during specific moments in time. Similarly, Hutchings et al. \cite{3Dstacked} propose a method of discarding photon detections based on activity. However, these methods can potentially remain idle when there is a small change in activity, and can also suffer from a loss of temporal resolution due to coarse histogram binning. Zhang et al. \cite{30frames} propose a method of reducing the transfer of photon detections by performing a coarse-to-fine approximation of the ToF data. At each scale, a coarse histogram is constructed with a limited number of bins. Multiple histograms of increasing resolution have to be formed, so the method has an increased total acquisition time and can also suffer from a loss of temporal resolution. In \cite{rapp2020dithered}, Rapp et al. propose a subtractive dithering scheme for SPAD arrays that increases depth resolution without increasing the overall time-stamp resolution.
\par Compressive sensing strategies have been successfully applied to lidar \cite{7178153,halimi:hal-02298998}, focusing on compressing the information across pixels. Kadambi et al. \cite{7178153} propose to exploit the sparsity of natural scenes in some representation domain (e.g. wavelet transform) to reduce signal acquisition. The depth accuracy is limited by the level of amplitude noise and the decay of the impulse response, and the method is therefore limited to the case of one surface per pixel. Furthermore, it still requires large amounts of single-photon counting data to be transferred off-chip and therefore does not tackle the inherent data-transfer bottleneck that we address in this paper. In a similar vein, Halimi et al. \cite{halimi:hal-02298998} propose an adaptive sampling strategy that is scene dependent. By building up regions of interest and data-driven depth maps in an iterative manner, they efficiently choose suitable scan positions to reduce acquisition time by up to 8 times in certain scenarios. However, the method relies on building TCSPC histograms and solving a maximization problem at each iteration of the adaptive algorithm, and therefore has limitations for real-time processing, especially when the amount of single-photon counting data is large. These compressive-sensing-based methods perform compression within the spatial domain and not, as in the case of our method, along the depth domain; they are therefore fundamentally different in practice and can still suffer from data-transfer bottlenecks. The sketch-based method proposed in this paper is complementary to the compressive sensing approaches, as one can compress along the spatial and temporal dimensions simultaneously. Another approach to reducing the data transfer of the information needed to reconstruct the lidar image is to compress the data on-chip.
As highlighted in \cite{maksymova2018review}, standard low-level data compression methods can be used to compress the data on-chip; however, these methods offer only a modest data reduction of up to $50\%$, and in some cases involve significant on-chip computation or face limitations with respect to on-chip storage.
\par In this paper, we propose a novel solution to this bottleneck of existing lidar techniques by calculating on-the-fly summary statistics of the photon time-stamps, a so-called
sketch, based on samples of the characteristic function of the ToF model. Distinct to compressive sensing, the goal here is not to recover the photon counting data but rather the underlying probability distribution. In this sense, we are estimating the probability model directly from some summary statistics and therefore our proposed framework utilises much of the theory found in the generalised method of moments \cite{hansenGeMM,gemmhall}, empirical characteristic function \cite{10.2307/2958763,10.2307/2985144} and compressive learning \cite{gribonval2020statistical,keriven2018sketching,SheehanCICA} literature. The size of the sketch scales with the degrees of freedom of the ToF model (i.e., number of
objects in depth) and not with the number of photons or the fineness of the time resolution, without sacrificing precision in depth. The sketch can be computed for each incoming
photon in an online fashion, requiring only a minimal amount of additional computation, which can be performed efficiently on-chip. The sketch can be shown to capture all the salient information of the histogram, including the ability to explicitly remove background light or dark-count effects, in a compact and data-efficient form suitable for both on-chip processing and off-chip post-processing. Furthermore, we develop a compressive lidar image reconstruction algorithm whose computational complexity depends only on the size of the sketch. Our proposed method paves the way for high-accuracy 3D imaging at fast frame rates with low power consumption. In summary, the main contributions of the paper are as follows:
\begin{itemize}
\item We propose a principled approach for compressing time-of-flight information in an online fashion without the requirement to form a histogram and without compromising depth resolution.
\item A compressive single-photon lidar algorithm is proposed which does not scale with either the number of photons or the time-stamp resolution in terms of space and time complexity.
\item The statistical efficiency, given a compression rate (or sketch size), is quantified for different single-photon lidar scenarios, showing that only limited measurements of the characteristic function are needed to achieve negligible information loss.
\end{itemize}
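As an illustrative sketch of the idea (the precise frequency design is a separate question, and the values here are arbitrary), samples of the empirical characteristic function can be accumulated online, one photon at a time, in $m$ complex accumulators:

```python
import numpy as np

def update_sketch(sketch, x, freqs, T):
    """Online update of an m-dimensional sketch with one photon
    time-stamp x: accumulate samples of the empirical characteristic
    function at the chosen frequencies."""
    return sketch + np.exp(2j * np.pi * freqs * x / T)

T = 1024
freqs = np.arange(1, 6)                 # m = 5 frequency samples
sketch = np.zeros(len(freqs), dtype=complex)
for x in [100, 102, 98, 101]:           # incoming photon time-stamps
    sketch = update_sketch(sketch, x, freqs, T)
ecf = sketch / 4                        # empirical characteristic function
```

Note that the memory footprint is fixed at $m$ complex numbers regardless of how many photons arrive or how fine the time-stamp resolution is.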
The remainder of the work is organized as follows. Section 2 details the ToF lidar acquisition systems and the ToF observation model used in single-photon lidar and also presents the idea of summary statistics used for parameter estimation. In Section 3 we detail the construction of the sketch using two different sampling schemes and we further demonstrate how our sketched lidar approach can be implemented in an online processing manner. In Section 4 we detail our proposed compressive single-photon reconstruction algorithm that has computational complexity which scales with the sketch size $m$ as well as quantifying the statistical efficiency of the estimated parameters $\theta$. Results of the compressive lidar framework are analysed on both synthetic and real datasets in Section 5. Section 6 finally summarizes our conclusions and discusses future work.
\section{Background}\label{Sec: Background}
\subsection{Photon Counting Lidar Acquisition}
Figure \ref{Fig: Lidar Device} depicts a simplified schematic of a typical lidar device. A laser emits a pulse of photons towards a scene and triggers the system clock; a single-photon avalanche diode (SPAD) is then used to detect individual photons. The SPAD consists of a reverse-biased photodiode which, in the presence of a single photon, induces an avalanche of electrical charge carriers that are directly detectable. A time-to-digital converter (TDC) then converts the signal to a digital time-stamp that updates a timing statistic in an online manner (specific details of the timing statistic are discussed later in this section).\par Conventional lidar devices typically consist of a single SPAD and a pair of scanning mirrors that raster-scan points in a scene. In general, these approaches only register the first photon in each frame; therefore, multiple laser cycles are required to build an accurate timing statistic before traversing to a new point in the scene. In high ambient conditions, a significant pile-up effect can occur due to the dead-time of the single SPAD \cite{Gyongy:20}. This problem can be alleviated by using multiple SPADs in parallel, resulting in multi-event time-stamp collection \cite{Gyongy:20}. In contrast to the first-photon approach \cite{Krstajic:15}, SPADs and TDCs acting in parallel can register multiple photons per frame, leading to a richer and larger timing statistic. Flood-light illumination can also be used instead of slower raster-scan processes. As shown in Figure \ref{Fig: Lidar Device}, multiple SPADs connect to multiple TDCs to allow efficient parallelization, where each TDC accumulates digital time-stamps into the timing statistic.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.2]{images_final/Corrections/LidarDiagram4.eps}
\caption{A schematic of a typical lidar pixel where either one or multiple SPADs and TDCs are used.}
\label{Fig: Lidar Device}
\end{figure}
\par The term timing statistic refers to the various methods of collecting and storing the time-stamps that are then transferred off-chip to construct a depth map. The most common timing statistic is a histogram, seen in Figure \ref{fig: intro hist}, that clusters the digital time-stamps into discretized bins. This method is most commonly used when the number of detected photons is large and can be viewed as a compression in itself. In recent years, however, modern lidar devices have achieved much finer depth resolution, causing a data-transfer bottleneck as the histogram can become too large to transfer off-chip. To compensate, coarser bin widths can be used to reduce the size of the histogram \cite{hardware}, creating a trade-off between depth resolution and data-transfer speed. In contrast, if the number of photons detected is small, for example in the photon-starved regime \cite{manipop, Kirmani58}, it is more efficient, from a data processing point of view, to store the specific time-stamps of all the detected photons. In general, depth estimation is carried out off-chip as part of a post-processing stage. In this paper, we propose a novel online timing statistic that circumvents the need to either construct and store a large histogram or collect each individual photon time-stamp, leading to substantial compression. As the sketch is constructed only at the timing-statistic stage in Figure \ref{Fig: Lidar Device}, any existing techniques that reduce pile-up, e.g. via (parallel) multi-event detection, can readily be used in conjunction with our proposed technique.
\subsection{Lidar Observation Model}\label{Subsec: Lidar Observation Model}
Throughout this section, both the lidar observation model and the constructed sketches are discussed in terms of a single arbitrary pixel in the scene. Let $\tau$ denote the physical time-stamp such that the discretized time-stamp is denoted $t=\frac{\tau}{\Delta\tau}$.
Then, for an arbitrary pixel, the photon count at discretized time-stamp $t=0,1,\dots,T-1$ can be modelled as a Poisson distribution \cite{poissonmodel,altmann1}:
\begin{equation}
\label{Eqn: Poisson Observ Model}
y_{t}|(r,b,t_k) \sim \mathcal{P}(r h(t-t_k)+b),
\end{equation}
where $r\geq 0$ denotes the reflectivity of the detected surface, $h(\cdot)$ is the impulse response of the system, $b$ defines the level of background photons and $t_k$ denotes the location of the $k$th surface in the pixel. The number of discretized time-stamp bins over the range of interest is denoted by $T$. For simplicity, here we assume that the integral of the impulse response $H=\sum_{t=0}^{T-1} h(t)$ is constant although the proposed approach can accommodate more complex scenarios. If the lidar system is in free running mode where multiple acquisitions of a surface/object are obtained, then the interval $[0,1,\dots,T-1]$ can be thought of as circular in the sense that time-stamp $T$ is equivalent to the time-stamp $0$.
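A minimal simulation of this Poisson observation model for a single surface, assuming an illustrative Gaussian impulse response and arbitrary parameter values, could look like:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 300                              # number of discretized time-stamp bins
t_k, r, b = 120, 50.0, 2.0           # surface position, reflectivity, background
sigma = 4.0                          # illustrative impulse-response width

t = np.arange(T)
h = np.exp(-0.5 * ((t - t_k) / sigma) ** 2)
h /= h.sum()                         # normalise so H = sum_t h(t) is fixed

# Poisson photon counts per bin: signal peak on a flat background level.
y = rng.poisson(r * h + b)
```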
\par Alternatively, one can instead model the time of arrival of the $p$th photon detected for a single pixel in the scene. We assume there are $K$ distinct reflecting surfaces within the pixel, where $\alpha_k$ and $\alpha_0$ denote the probabilities that a detected photon originated from the $k$th surface and from background sources, respectively. Furthermore, it is assumed that for a single pixel, a total of $n$ photons are detected during the whole acquisition window of the lidar device. Let $x_p=0,1,\dots,T-1$ denote the time-stamp of the $p$th photon, where $1\leq p\leq n$; then $x_p$ can be described by a mixture distribution \cite{altmann2}
\begin{equation}
\label{Eqn: Alternative mixture Observ model}
\pi(x_p|\alpha_0,...,\alpha_{K},t_1,...,t_K)= \sum^K_{k=1}\alpha_k\pi_s(x_p|t_k)+\alpha_0\pi_b(x_p),
\end{equation}
where $\sum^K_{k=0}\alpha_k=1$ and the symbol $\pi$ denotes a probability distribution over $x_p$. The distributions of the photons originating from the signal and the background are defined by $\pi_s(x_p|t)=h(x_p-t)/H$ and the uniform distribution $\pi_b(x_p)=1/T$ over $[0,1,\dots,T-1]$, respectively. In practice, the signal distribution $\pi_s$ is often modelled either using a discretized Gaussian distribution over the interval $[0,1,\dots,T-1]$ or through a data-driven impulse response measured experimentally. In Section \ref{Sec: Experiments}, we consider both.
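To make the mixture model (\ref{Eqn: Alternative mixture Observ model}) concrete, the following Python sketch draws photon time-stamps for a hypothetical pixel, assuming a discretized Gaussian impulse response; the function name, seed and parameter values are illustrative only.

```python
import numpy as np

def simulate_pixel(n, T, t_k, alpha, sigma=15.0, seed=None):
    """Draw n photon time-stamps from the mixture model: a photon comes
    from surface k with probability alpha[k] (discretized Gaussian IRF
    centred at t_k[k-1]) or from the uniform background over
    {0, ..., T-1} with probability alpha[0]."""
    rng = np.random.default_rng(seed)
    comp = rng.choice(len(t_k) + 1, size=n, p=alpha)   # 0 = background
    x = rng.integers(0, T, size=n)                     # background draws
    sig = comp > 0
    # signal draws: rounded Gaussian around the surface location, wrapped
    x[sig] = np.rint(rng.normal(np.asarray(t_k)[comp[sig] - 1], sigma)) % T
    return x

# one surface at bin 320, SBR = 1 (alpha_0 = alpha_1 = 0.5)
x = simulate_pixel(600, 1000, t_k=[320], alpha=[0.5, 0.5], seed=0)
```

Roughly half of the 600 time-stamps cluster around bin 320; the rest are spread uniformly.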
\subsection{Summary Statistics}\label{Subsec: Summary Statistics}
Our acquisition goal is to obtain parameter estimates $\theta\coloneqq(\alpha_0,\alpha_1,\dots,\alpha_K,t_1,\dots,t_K)$ of the signal model in (\ref{Eqn: Alternative mixture Observ model}), given the time-stamps of the photons detected for each pixel in the scene. Parameter estimation involves the inference of a set of parameters $\theta\in\Theta\subset{\mathbb R}^{2K+1}$ associated with a probability model $\pi(\cdot\mid\theta)$ defined over observations $\mathbf{x}\in{\mathbb R}^d$. In the case of single-photon counting lidar, the dimension is $d=1$. Typically, we observe a finite dataset $\mathcal{X}=\{\mathbf{x}_i\}_{i=1}^n$ of $n$ samples which we assume is drawn from the distribution given in (\ref{Eqn: Alternative mixture Observ model}). Maximum likelihood estimation (MLE) is a traditional parameter estimation method whereby a likelihood function associated with the finite data is maximized with respect to the model parameters, e.g.
\begin{equation}
\label{Eqn : MLE}
\hat{\theta} = \argmax_{\theta}\frac{1}{n}\sum^n_{i=1}\log \pi(\mathbf{x}_i\mid\theta).
\end{equation}
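For intuition, the MLE in (\ref{Eqn : MLE}) can be evaluated by brute force for a single-surface mixture, assuming (for illustration only) a Gaussian impulse response of known width and a known background weight $\alpha_0$:

```python
import numpy as np

# Illustrative grid-search MLE for the K = 1 mixture model, assuming a
# Gaussian IRF of width sigma and a known background weight alpha0.
T, sigma, alpha0, t_true = 1000, 15.0, 0.5, 320
rng = np.random.default_rng(1)
signal = np.rint(rng.normal(t_true, sigma, 300)) % T
x = np.concatenate([signal, rng.integers(0, T, 300)])   # SBR = 1

def log_likelihood(t1):
    # pi(x | theta) = alpha_1 * pi_s(x | t1) + alpha_0 * pi_b(x)
    pi_s = np.exp(-0.5 * ((x - t1) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return np.sum(np.log((1 - alpha0) * pi_s + alpha0 / T))

grid = np.arange(T)
t_hat = grid[np.argmax([log_likelihood(t) for t in grid])]
```

Circular wrap-around of the IRF is ignored here since the surface lies far from the interval boundaries.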
\subsubsection{Generalised Method of Moments}
In some cases, the likelihood function may admit neither a closed-form expression nor a computationally tractable approximation \cite{hansenGeMM}. The generalised method of moments \cite{hansenGeMM,gemmhall} (GeMM) is an alternative parameter estimation method where one estimates $\theta$ by matching a collection of generalised moments with an empirical counterpart computed over a finite dataset sampled from the distribution $\pi({\mathbf x}\mid \theta)$. Given a nonlinear function $g:{\mathbb R}^d\rightarrow\mathbb{C}^m$, we define the expectation constraint
\begin{equation}
\label{Eqn: GeMM constraint}
{\mathbb E} g({\mathbf x};\theta)=0,
\end{equation}
where ${\mathbb E}$ denotes the expectation with respect to the probability distribution $\pi({\mathbf x} \mid \theta)$. Typically, the GeMM estimator is obtained by minimising a quadratic cost of the empirical discrepancy with respect to $\theta$ in order to impose the moment constraints of (\ref{Eqn: GeMM constraint}). Let us define
\begin{equation}
\label{eqn: empirical GeMM}
g_n(\mathcal{X};\theta)\coloneqq \frac{1}{n}\sum^{n}_{i=1}g({\mathbf x}_i;\theta),
\end{equation}
calculated over $\mathcal{X}=\{{\mathbf x}_i\}^n_{i=1}$; a GeMM estimator then classically takes the form \cite{hansenGeMM,gemmhall}
\begin{equation}
\label{Eqn: GeMM loss}
\hat{\theta}\coloneqq \argmin_\theta g_n(\mathcal{X};\theta)^T \mathbf{W} g_n(\mathcal{X};\theta),
\end{equation}
where $\mathbf{W}$ is a symmetric positive definite weighting matrix that may depend on $\theta$.
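As a toy instance of (\ref{Eqn: GeMM loss}) with $\mathbf{W}=\mathbf{I}$, unrelated to the lidar model, one can estimate the location of a unit-variance distribution by matching its first two moments:

```python
import numpy as np

# Toy GeMM with W = I: estimate the location theta of a unit-variance
# distribution from the moment conditions
#   g(x; theta) = [x - theta, x^2 - (theta^2 + 1)].
rng = np.random.default_rng(3)
data = rng.normal(2.0, 1.0, size=5000)

def gemm_loss(theta, x):
    g_n = np.array([x.mean() - theta,                     # empirical g_n
                    (x ** 2).mean() - (theta ** 2 + 1.0)])
    return g_n @ g_n                                      # quadratic cost, W = I

grid = np.linspace(0.0, 4.0, 4001)
theta_hat = grid[np.argmin([gemm_loss(t, data) for t in grid])]
```

Here a grid search stands in for a numerical optimiser, purely to keep the example self-contained.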
\subsubsection{Compressive Learning}
Building on the concept of GeMM, compressive learning \cite{gribonval2020statistical,keriven2018sketching} utilises generalised moments of the data but with the distinct goal of reducing signal acquisition, space and time complexities. The link to GeMM is established by separating the function $g$ into the following particular form:
\begin{equation}
\label{Eqn : CL sep form GeMM}
g({\mathbf x};\theta) = \Phi({\mathbf x})-{\mathbb E}_\theta\Phi({\mathbf x}),
\end{equation}
where $\Phi:{\mathbb R}^d\rightarrow\mathbb{C}^m$ is often referred to as the feature function. The separable form decouples the measured moments, $\Phi({\mathbf x})$, from the parameters $\theta$ that are to be estimated. This is not a usual assumption in GeMM, although it may arise in particular cases. Denoting the empirical mean, or so-called sketch, as
\begin{equation}
\label{Eqn: The sketch}
{\mathbf z}_n\coloneqq\frac{1}{n}\sum^{n}_{i=1}\Phi({\mathbf x}_i),
\end{equation}
we can estimate $\theta$ solely from the sketch ${\mathbf z}_n$ by minimising
\begin{equation}
\label{Eqn: CL loss function}
\hat{\theta} = \argmin_\theta \lVert{\mathbf z}_n-{\mathbb E}_\theta\Phi({\mathbf x})\rVert^2_\mathbf{W},
\end{equation}
which is the particular compressive GeMM loss of (\ref{Eqn: GeMM loss}). In Section \ref{Sec: Sketched Lidar Reconstruction}, we explicitly define the weighting matrix $\mathbf{W}$ for compressive single-photon counting lidar. The separable form of $g$ in (\ref{Eqn : CL sep form GeMM}) allows the sketch statistic ${\mathbf z}_n$ to be formed in a single pass over the data without the need to store $\mathcal{X}$, and it can easily be updated on the fly with minimal computational cost. The sketch statistic has size $m$ (or size $2m$ if decoupled into its real and imaginary components), which fundamentally scales independently of the size of the dataset $\mathcal{X}$, namely the photon count $n$ and the binning resolution $T$ in the case of single-photon lidar.
\subsubsection{Empirical Characteristic Function}
A specific type of GeMM is empirical characteristic function (ECF) estimation \cite{10.2307/2958763,10.2307/2985144,carrasco2000generalization}, which arises when the generalized moment is chosen to be $\Phi({\mathbf x})=[e^{ {\rm i}\omega_j^T{\mathbf x}}]_{j=1}^m$, where ${\rm i}=\sqrt{-1}$ and $\{\omega_j\}_{j=1}^m$ is a discrete set of frequencies. It is of particular interest as the expectation of $\Phi$, namely $\Psi_\pi(\omega)=\mathbb{E}_\theta e^{{\rm i}\omega^T{\mathbf x}}$, is precisely the characteristic function (CF) of the probability distribution $\pi({\mathbf x}\mid \theta)$ at frequency $\omega$. The CF exists for all distributions and often has a closed-form expression. Moreover, it captures all the information of the probability distribution \cite{osomeFouriermethods}, giving a one-to-one correspondence between the CF and the probability distribution $\pi({\mathbf x}\mid\theta)$. The CF also has the favourable property that it decays in frequency, i.e. $\Psi_\pi(\omega)\rightarrow 0$ as $\omega \rightarrow \infty$, under mild conditions on the probability distribution $\pi({\mathbf x}\mid\theta)$ \cite{osomeFouriermethods,lukacs1952analytic}. For the single-depth observation model in (\ref{Eqn: Alternative mixture Observ model}) (i.e. $K=1$) and a discrete impulse response function $h$, the characteristic function of the observation model is
\begin{align}
\begin{split}
\label{Eqn: Char function Obs Model}
\Psi_\pi(\omega)\;= \;& \alpha_1\Psi_{\pi_s}(\omega)+\alpha_0\Psi_{\pi_b}(\omega)\\
\; =\; & \alpha_1\hat{h}(\omega)e^{{\rm i}\omega t_1}+\alpha_0 D_{\frac{T-1}{2}}(\omega)
\end{split}
\end{align}
\noindent where $D_n(x)=\frac{\sin((n+1/2)x)}{2\pi\sin(x/2)}$ is the Dirichlet kernel function \cite{Dirich_kernel} and $\hat{h}$ denotes the (discrete) Fourier transform of the impulse response function $h$. It should be noted that we could consider different distributions $\pi_b$, and hence CFs, to model detected photons originating from more complex background sources, for example highly scattering environments like fog. However, this is beyond the scope of this paper. \par The feature function $\Phi$ is a complex-valued function of size $m$. With regards to hardware implementation, it is often preferable and convenient to work directly with real-valued functions. The complex term $e^{{\rm i}\omega x}$ can be written as $\cos(\omega x)+{\rm i}\sin(\omega x)$, decoupling it into its real and imaginary components. As a result, the feature function can equivalently be written as a real-valued feature function $\Phi:{\mathbb R}^d\rightarrow{\mathbb R}^{2m}$, consisting of $2m$ real-valued terms obtained by stacking the real and imaginary components, e.g.
\begin{equation*}
\Phi(x)= \begin{bmatrix}\cos\left(\omega_1 x\right)\\\vdots\\\cos\left(\omega_m x\right)\\[0.5em]
\sin\left(\omega_1 x\right)\\\vdots\\ \sin\left(\omega_m x\right)
\end{bmatrix}.
\end{equation*}
For the sake of fair comparison to existing hardware-implemented methods in the literature, the results and figures presented represent a sketch of size $2m$, consisting of $2m$ real-valued measurements. The nature of the feature function, in terms of it being represented by a complex- or real-valued function, will be made clear from its context throughout the paper.
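A minimal Python sketch of the real-valued feature map above; note that for data uniform over a full period the sketch vanishes, since each non-zero frequency averages to zero under the uniform distribution:

```python
import numpy as np

def ecf_sketch(x, omegas):
    """Real-valued ECF sketch of size 2m: the empirical means of
    cos(omega_j * x) stacked on top of sin(omega_j * x)."""
    proj = np.outer(omegas, x)                  # shape (m, n)
    return np.concatenate([np.cos(proj).mean(axis=1),
                           np.sin(proj).mean(axis=1)])

T = 1000
omegas = 2 * np.pi * np.arange(1, 4) / T        # three example frequencies
z_bg = ecf_sketch(np.arange(T), omegas)         # uniform "background" data
```

The resulting sketch `z_bg` is numerically zero in all $2m=6$ entries.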
\section{Sketched Lidar}\label{Sec: Sketched Lidar}
We start with a warm up example to highlight the potential of using a sketch for single-photon lidar and to motivate the design of the sketch sampling procedure which will be discussed in Section \ref{subsec sampling schemes}.
\subsection{Compressing Single Depth Data }\label{subsec: warm up}
In the absence of photons originating from background sources and the presence of a single surface or object, the sample mean of all the photon time-stamps ($\Phi(x)=x$) is the simplest summary statistic for estimating the single location parameter $t_1$. This only holds in the noiseless case as the sample mean estimate is heavily biased toward the centre of the histogram when background photons are detected.
\par Suppose instead that we observe the cosine and sine of each photon time-stamp $x$ with angular frequency $\omega=\frac{2\pi}{T}$, namely
\begin{equation}
\label{eqn: motivate Phi}
\Phi(x)= \begin{bmatrix}\cos\left(\frac{2\pi x}{T}\right)\\[0.5em]
\sin\left(\frac{2\pi x}{T}\right)
\end{bmatrix},
\end{equation}
and denote by ${\mathbf z}_n$ the real-valued sketch of size $2$ ($m=1$) computed over the dataset $\mathcal{X}$ as in (\ref{Eqn: The sketch}). It is possible to recover an estimate of the single depth location parameter $t_1$ directly from the sketch, without recourse to the data $\mathcal{X}$, via the trigonometric sample mean
\begin{equation}
\label{eqn: motivation estim}
\hat{t}_1 = \frac{T}{2 \pi} \phase \left\{ \sum_{j=1}^n \cos \left(\frac{2\pi x_j}{T}\right) + {\rm i}\sum_{j=1}^n \sin \left(\frac{2\pi x_j}{T} \right)\right\}
\end{equation}
\noindent where $\phase$ denotes the phasor angle. As the background photons are distributed uniformly over the interval $[0,T-1]$ ($\pi_b(x)=\frac{1}{T}$), the expected moment of the photons originating from background sources is zero, ${\mathbb E}_{x\sim\pi_b}\Phi(x)=\mathbf{0}$. The resulting estimate $\hat{t}_1$ is therefore an unbiased estimator of the location parameter $t_1$. The estimator in (\ref{eqn: motivation estim}) coincides with the circular mean estimator detailed in \cite{jammalamadaka2001topics}; here the circular mean uses only the first (non-zero) frequency.
\par Throughout the paper, we consider the detection point signal-to-background ratio (SBR), defined as $\frac{\sum_{k=1}^K\alpha_k}{\alpha_0}$, and not the raw sensor SBR, which can be much lower in practice \cite{detectSBR}. We summarise the above using a simulated example: a pixel of $T=1000$ histogram bins with a detection point SBR of 1 and a total of $n=600$ photons, where the time-stamps of the photons are denoted by $\mathcal{X}=\{x_i\}^n_{i=1}$. The data were simulated using a Gaussian impulse response function with $\sigma=15$ and a true position at time-stamp $t_1=320$. Computing the sketch ${\mathbf z}_n$ from (\ref{Eqn: The sketch}) and using (\ref{eqn: motivation estim}), we obtain the sketch estimate $\hat{t}_{\text{cm}}=323.3$ and the sample mean estimate $\hat{t}=434.1$. The TCSPC histogram, along with both the circular and standard mean estimates as well as the location parameter $t_1$, is shown in Figure \ref{Fig: motivational hist}, where it is evident that the circular mean estimate does not suffer from the noise bias inherent in the sample mean.
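A minimal implementation of the circular mean estimator (\ref{eqn: motivation estim}), run on synthetic data similar to the example above (the seed and helper name are ours):

```python
import numpy as np

def circular_mean_depth(x, T):
    """Depth estimate from the m = 1 ECF sketch: the phasor angle of
    the empirical mean of exp(i * 2*pi*x / T), rescaled to bins."""
    w = 2 * np.pi / T
    z = np.mean(np.cos(w * x)) + 1j * np.mean(np.sin(w * x))
    return (np.angle(z) % (2 * np.pi)) * T / (2 * np.pi)

rng = np.random.default_rng(0)
T, t1 = 1000, 320
signal = np.rint(rng.normal(t1, 15, 300)) % T    # Gaussian IRF photons
background = rng.integers(0, T, 300)             # uniform background, SBR = 1
t_hat = circular_mean_depth(np.concatenate([signal, background]), T)
```

Despite half of the photons being background noise, `t_hat` lands close to the true depth, unlike the ordinary sample mean.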
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.25]{images_final/introduction/motivation_new.eps}
\caption{The TCSPC histogram with $t_1=320$. The circular mean estimate (yellow) and the standard mean estimate (red) superimposed. }
\label{Fig: motivational hist}
\end{figure}
\par Importantly, the sketch formed using the moment in (\ref{eqn: motivate Phi}) is equivalent to the complex-valued ECF sketch ${\mathbf z}_n=\frac{1}{n}\sum^n_{j=1}e^{{\rm i}\omega x_j}$ sampled at $\omega=\frac{2\pi}{T}$, decoupled into its real and imaginary components. In fact, the estimate $\hat{t}_1$ in (\ref{eqn: motivation estim}) is the optimal estimator for the compressive ECF sketch detailed in (\ref{Eqn: CL loss function}) (see Appendix \ref{Appendix: Circ mean}). In principle, we only need to store and transfer 2 values to accurately estimate the depth location of the object or surface, without recourse to the original photon time-stamp data. For the remainder of this section, we generalize the approach by forming a sketch (\ref{Eqn: The sketch}) of arbitrary size and sampling the ECF at multiple frequencies $\left[\omega_j\right]^m_{j=1}$. This will enable us to obtain statistically efficient estimates in the single surface case and to solve more complex lidar scenes, including several surfaces with varying intensities, where more salient information about the observation model is required.
\subsection{Sampling the ECF}\label{Subsec: Sampling the ECF}
Recall that the observation model $\pi$ in (\ref{Eqn: Alternative mixture Observ model}) is discretized over the interval $[0,T-1]$, which we can consider a sufficient sampling if the distribution in (\ref{Eqn: Alternative mixture Observ model}) is approximately band-limited. As a result, the characteristic function $\Psi_\pi(\omega)$ has a finite basis characterized by the set of frequencies
\begin{equation}
\label{Eqn: Char fun finite basis}
\mleft\{\frac{2\pi j}{T} \;\middle|\; j=0,1,\dots,T-1 \mright\}.
\end{equation}
We can generalise the approach from Section \ref{subsec: warm up} by sampling multiple frequencies from the finite basis in order to construct the ECF sketch. As is the case for the circular mean, the frequencies $\omega=\frac{2\pi j}{T}$ for $j=1,2,\dots,T-1 $ correspond to the zeros in the Dirichlet kernel function associated with the background pdf $\pi_b$ seen in (\ref{Eqn: Char function Obs Model}). We can therefore construct a sketch of arbitrary dimension $m$ that is also \textit{blind} to photons originating from background sources by avoiding the zero frequency $\omega=0$ of the finite basis. As a result, we define the set of orthogonal frequencies by
\begin{equation}
\label{eqn: Orth Set of weights}
\Omega \coloneqq \mleft\{\omega_j=\frac{2\pi j}{T} \;\middle|\; j=1,2,\dots,T-1 \mright\}.
\end{equation}
We coin this set the \textit{orthogonal frequencies} as it defines regions over the interval of the observation model's characteristic function where the signal's contribution is orthogonal to the background's contribution.
\subsubsection{Sampling Schemes}\label{subsec sampling schemes}
In order to construct a sketch, we are ultimately interested in retaining sufficient salient information of the characteristic function $\Psi_\pi$ such that we can identify and estimate the unique location and intensity parameters $\theta$ of the observation model $\pi(x\mid\theta)$ defined in (\ref{Eqn: Alternative mixture Observ model}). It was discussed in Section \ref{Sec: Background} that the CF of a probability distribution decays in frequency, i.e. $\Psi_\pi(\omega)\rightarrow 0$ as $\omega \rightarrow \infty$. Furthermore, as the observation model is discretized over the interval, we assume that the characteristic function of the observation model is approximately band-limited. A natural sampling scheme would therefore be to sample the first $m$ frequencies of the orthogonal frequencies $\Omega$ to capture the maximum energy of the CF. In other words, we could truncate the CF of the observation model whilst avoiding the zero frequency.
\par Alternatively, in \cite{gribonval2020statistical,keriven2018sketching}, provable guarantees for estimating Gaussian mixture models have been provided under certain conditions based on random sampling (cf. compressive sensing \cite{eldar_kutyniok_2012}) of the CF. It is understood that the higher frequencies of the CF may provide further information to help discriminate distributions that are close in probability space. Moreover, if the CF decays slowly in frequency, then the energy of the CF will be spread more widely throughout the set of orthogonal frequencies. We therefore provide an alternative sampling scheme whereby we randomly sample the set of orthogonal frequencies with respect to some sampling law $\Lambda$. In a similar design to the frequency sampling pattern proposed in \cite{keriven_conference}, we sample the orthogonal frequencies by
\begin{equation}
(\omega_1,\omega_2,\dots,\omega_m)\sim \Lambda_{\hat{h}},
\end{equation}
where $\Lambda_{\hat{h}}\propto\hat{h}$. To formalize, we consider the following sampling schemes in order to construct our ECF sketches:
\begin{enumerate}
\item Truncated Orthogonal Sampling: Sample the first $m$ frequencies, i.e. $j=1,2,\dots,m$, from $\Omega$.
\item Random Orthogonal Sampling: Sample the set of frequencies randomly, governed by the sampling law $\Lambda_{\hat{h}}$.
\end{enumerate}
Depending on the circumstances of the lidar device we might expect one or the other sampling scheme to perform better.
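The two schemes can be sketched as follows, with a Gaussian impulse response assumed purely for illustration:

```python
import numpy as np

def truncated_frequencies(m, T):
    """Truncated orthogonal sampling: the first m non-zero
    frequencies 2*pi*j/T, j = 1, ..., m."""
    return 2 * np.pi * np.arange(1, m + 1) / T

def random_frequencies(m, T, h_hat_mag, seed=None):
    """Random orthogonal sampling: draw m distinct non-zero frequencies
    with probability proportional to |h_hat| on the DFT grid."""
    rng = np.random.default_rng(seed)
    p = h_hat_mag[1:] / h_hat_mag[1:].sum()      # skip the zero frequency
    j = rng.choice(np.arange(1, T), size=m, replace=False, p=p)
    return 2 * np.pi * j / T

T = 1000
h = np.exp(-0.5 * ((np.arange(T) - T / 2) / 15.0) ** 2)  # example Gaussian IRF
h_hat_mag = np.abs(np.fft.fft(h / h.sum()))
w_trunc = truncated_frequencies(10, T)
w_rand = random_frequencies(10, T, h_hat_mag, seed=0)
```

Both schemes return $m$ frequencies drawn from the orthogonal set $\Omega$, so the resulting sketches remain blind to uniform background photons.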
\subsection{Practical Hardware Considerations}
\subsubsection{Online Processing}
One of the major advantages of forming a sketch $\mathbf{z}_n$, as in (\ref{Eqn: The sketch}), is that it is naturally amenable to online processing. Recall that for an arbitrary pixel in the scene, the resulting sketch that can be transferred off-chip is ${\mathbf z}_n=\frac{1}{n}\sum^n_{i=1}\Phi(x_i)$. Algorithm \ref{Alg: Online proccessing} demonstrates how the sketch for a given pixel is updated in real time during an acquisition window in which $n$ photons are detected by the SPAD array. For each photon arrival $x_j$ during the acquisition window, an intermediate sketch is accumulated along with an integer counter. Once the acquisition window is over, the resulting sketch is transferred off-chip for post-processing.
\begin{algorithm}
\caption{Sketch Online Processing}
\label{Alg: Online proccessing}
\begin{algorithmic}
\STATE \textbf{Initialisation:} $\mathbf{z}=0, n=0$
\WHILE{Acquisition Window}
\IF{New Photon Arrival $x_j$}
\STATE $\mathbf{z}\xleftarrow{} \mathbf{z} + \Phi(x_j)$
\STATE $n\xleftarrow{}n+1$
\ENDIF
\ENDWHILE
\STATE $\mathbf{z} \xleftarrow{}\mathbf{z}/n$
\ENSURE The sketch ${\mathbf z}$ is transferred off-chip for post-processing.
\end{algorithmic}
\end{algorithm}
\par This is very beneficial as all that is needed to be stored on-chip is the sketch ${\mathbf z}$ of size $2m$ and an integer counter. As such, forming the sketch in an online processing manner, as in Algorithm \ref{Alg: Online proccessing}, circumvents the need to compute and store a large histogram or store each individual photon time-stamp. Moreover, it should be noted that no further hardware is required to form the sketch and existing lidar devices can be easily adapted to implement our proposed technique.
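A software model of Algorithm \ref{Alg: Online proccessing}, with Python standing in for the on-chip logic (the class name is ours):

```python
import numpy as np

class SketchAccumulator:
    """On-chip state of the online sketch update: a running sum of
    Phi(x_j) plus an integer photon counter, normalised at read-out."""
    def __init__(self, omegas):
        self.omegas = np.asarray(omegas)
        self.z = np.zeros(2 * len(self.omegas))
        self.n = 0

    def update(self, x):                  # one photon arrival
        p = self.omegas * x
        self.z += np.concatenate([np.cos(p), np.sin(p)])
        self.n += 1

    def readout(self):                    # transfer off-chip
        return self.z / self.n

acc = SketchAccumulator([2 * np.pi / 1000])
for x in [100, 200, 300]:                 # three example time-stamps
    acc.update(x)
z = acc.readout()
```

The read-out matches the batch sketch (\ref{Eqn: The sketch}) exactly, while only $2m$ accumulators and one counter are ever stored.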
\par The computation of the sketch itself requires the calculation of the Fourier features $\cos(\omega_j x)$ and $\sin(\omega_j x)$, which would have to be computed in real time for each time-stamp $x$. However, various efficient logic-based schemes already exist for performing such computations \cite{cordic_ref}, based on either the classic CORDIC algorithms or polynomial approximations. Alternatively, in \cite{schellekens2021asymmetric}, Schellekens et al. show that in principle one can also replace the Fourier features by alternative periodic functions (e.g. square waves or triangle waves) in conjunction with random dithering. Subsequently, we will assume that we have access to sufficiently accurate sketch values and leave the details of specific hardware implementations for future work.
\section{Sketched Lidar Reconstruction} \label{Sec: Sketched Lidar Reconstruction}
\subsection{Statistical Estimation}
Once the ECF sketch is constructed using either sampling scheme, we must estimate the parameters $\theta$ of the observation model $\pi(x\mid\theta)$ solely from the sketch ${\mathbf z}_n$. In general, there is no closed-form expression for estimating $\theta$ from a sketch of arbitrary size, as there is for the circular mean estimate in (\ref{eqn: motivation estim}). It is well documented in the ECF and GeMM literature, e.g. \cite{10.2307/2958763,10.2307/1912775,gemmhall}, that a complex-valued ECF sketch ${\mathbf z}_n$ of size $m$, computed over a finite dataset $\mathcal{X}=\{x_1,\dots,x_n\}$, satisfies the central limit theorem. Formally, a sketch ${\mathbf z}_n\in\mathbb{C}^m$ converges in distribution to a Gaussian random variable
\begin{equation}
\label{Eqn: sketch central limit theorem}
{\mathbf z}_n\xrightarrow[]{\text{dist}}\mathcal{N}\big(\Psi_\pi,n^{-1}\Sigma_\theta\big),
\end{equation}
where $\Sigma_\theta\in\mathbb{C}^{m\times m}$ has entries $(\Sigma_\theta)_{ij}=\Psi_\pi(\omega_i-\omega_j)-\Psi_\pi(\omega_i)\Psi_\pi(-\omega_j)$ for $i,j=1,2,\dots,m$. The asymptotic normality result in (\ref{Eqn: sketch central limit theorem}) naturally leads to a sketch maximum likelihood estimation (SMLE) algorithm that consists of minimising the following
\begin{equation}
\label{Eqn: SMLE Scheme}
\argmin_\theta\,\, \frac{m}{2}\log\det(\Sigma_\theta)+n({\mathbf z}_n-{\mathbf z}_\theta)^T\Sigma_\theta^{-1}({\mathbf z}_n-{\mathbf z}_\theta),
\end{equation}
where for convenience we denote
${\mathbf z}_\theta=\left[\Psi_\pi(\omega_j)\right]_{j=1}^m$. For an observation model consisting of $K$ surfaces and a general impulse response function $h$, recall that
\begin{equation}
\label{eqn: general sketch analytic}
{\mathbf z}_\theta=\left[\sum^K_{k=1}\alpha_k\hat{h}(\omega_j)e^{{\rm i}\omega_jt_k}\right]_{j=1}^m
\end{equation} and $\theta=(\alpha_0,\alpha_1,\dots,\alpha_K,t_1,\dots,t_K)$. Note that we have dropped the Dirichlet kernel function on the assumption that we are using one of the proposed sampling schemes. Minimising (\ref{Eqn: SMLE Scheme}) is equivalent to minimising the compressive GeMM objective function defined in (\ref{Eqn: CL loss function}) with the weighting function chosen to be $\mathbf{W}=\Sigma_\theta^{-1}$. The weighting matrix $\mathbf{W}=\Sigma_\theta^{-1}$ is asymptotically optimal in the sense that it minimises the variance of the estimator $\hat{\theta}$ from the sketch ${\mathbf z}_n$ \cite{gemmhall}.
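To illustrate (\ref{eqn: general sketch analytic}) for $K=1$, the snippet below evaluates ${\mathbf z}_\theta$ for a Gaussian IRF (whose Fourier transform is known in closed form) and recovers the depth by a least-squares fit, i.e. the simplified choice $\mathbf{W}=\mathbf{I}$ rather than the full SMLE weighting:

```python
import numpy as np

def model_sketch(alpha1, t1, omegas, h_hat):
    """z_theta for K = 1: alpha_1 * h_hat(omega_j) * exp(i omega_j t_1)."""
    return alpha1 * h_hat(omegas) * np.exp(1j * omegas * t1)

T, sigma = 1000, 15.0
h_hat = lambda w: np.exp(-0.5 * (sigma * w) ** 2)   # FT of a Gaussian IRF
omegas = 2 * np.pi * np.arange(1, 11) / T           # m = 10 frequencies

z_obs = model_sketch(0.5, 320, omegas, h_hat)       # noiseless "observed" sketch
# least-squares fit over depth (W = I); the full SMLE uses W = Sigma^-1
grid = np.arange(T)
losses = [np.sum(np.abs(z_obs - model_sketch(0.5, t, omegas, h_hat)) ** 2)
          for t in grid]
t_hat = int(grid[np.argmin(losses)])
```

In the noiseless case the loss vanishes uniquely at the true depth; with noisy sketches the $\Sigma_\theta$-weighted objective of (\ref{Eqn: SMLE Scheme}) is preferable.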
\par In practice, $\Sigma_\theta$ depends on the very parameters $\theta$ that are to be estimated. There are various well-established methods in the GeMM and ECF literature \cite{hansenGeMM,gemmhall} that tackle the difficulty of approximating $\Sigma_\theta$ and estimating $\theta$ simultaneously. In \cite{osomeFouriermethods}, the authors use the K-L method, which iteratively estimates $\Sigma_\theta$ and $\theta$ in a two-stage procedure by fixing and updating one at a time, resulting in a computational complexity of $\mathcal{O}\left(m^3\right)$ due to inverting $\Sigma_\theta$. Some methods \cite{hansen2step} fix $\Sigma_\theta$ after only a few iterations of the K-L approach to reduce the computational complexity of the algorithm, although this typically comes at the cost of introducing sample bias \cite{HAUSMAN201145}. Occasionally, the covariance matrix is set throughout to be the identity, $\Sigma_\theta=I$, reducing (\ref{Eqn: SMLE Scheme}) to a standard least squares optimization with a computational complexity of $\mathcal{O}\left(m\right)$; however, this generally results in a less statistically efficient estimator $\hat{\theta}$ \cite{hansenGeMM}. In this paper, we estimate $\Sigma_\theta$ and $\theta$ simultaneously at each iteration. This approach is commonly referred to as the continuous updating estimator (CUE) \cite{hansen2step}; it avoids the sample bias of the two-step K-L approach \cite{HAUSMAN201145} and can often lead to more statistically efficient estimators \cite{hansenGeMM}. However, the SMLE method is not restricted to the CUE, and in certain situations practitioners may choose to sacrifice unbiasedness and asymptotic efficiency for reduced computational complexity by considering the other methods discussed.
\par The optimisation problem in (\ref{Eqn: SMLE Scheme}) is also typically non-convex and can suffer from spurious local minima. For the case of a single surface, we initialise the SMLE algorithm using the analytic circular mean solution in (\ref{eqn: motivation estim}) with minimal added computational overhead. In our experience with synthetic and real data, the circular mean estimate generally initialises the SMLE algorithm within the basin of attraction of the global minimum, hence the issues associated with non-convex optimization are circumvented. For the case of multiple surfaces, we form a coarse uniform grid across $[0,T-1]^K$ and initialise at the point of smallest SMLE loss.
\par Under the orthogonal sampling scheme, one could alternatively zero-pad the sketch, perform an inverse FFT (iFFT) and find the maximum peak to estimate the depth position of the surface. However, this approach is fundamentally different from the orthogonal truncated sketch: the iFFT method is simply a low-pass approximation of the TCSPC histogram, whereas the proposed SMLE algorithm performs nonlinear parameter fitting. As a result, the iFFT method is particularly inaccurate at distinguishing between closely spaced reflectors. In contrast to the proposed sketched lidar acquisition, the iFFT method does not take into account the particular nature of the IRF and achieves poor depth accuracy in the presence of a non-symmetric IRF (see Appendix \ref{appendix: comparison to transient imag}). Furthermore, the iFFT approach requires $\mathcal{O}(T)$ off-chip memory, in comparison to the $\mathcal{O}(m)$ of our proposed SMLE algorithm.
\subsection{Central Limit Theorem} \label{subsec: CLT}
One of the main advantages of the SMLE lidar approach following from (\ref{Eqn: sketch central limit theorem}) is that even at low photon levels (i.e. small $n$), the SMLE estimates quickly follow the central limit theorem (CLT) and the sketch provides a good approximation of its expectation. In contrast, the TCSPC histogram used by many of the estimation methods discussed in Section \ref{Sec: Intro} is a poor approximation to its expectation, as each time-stamp bin $t$ contains only a small number of photons. Thus, efficient processing of the full histogram data requires careful consideration of the underlying Poisson statistics \cite{VivekLidar}. This is illustrated in Figure \ref{fig: CLT hists}, which shows four separate histograms of the error $(\hat{t}-t_1)$ for increasing photon count $n$, along with the asymptotic Gaussian distribution from (\ref{Eqn: sketch central limit theorem}). The estimate $\hat{t}$ was obtained from a real-valued sketch of size 2 ($m=1$) using the circular mean estimate in (\ref{eqn: motivation estim}). The simulated data were the same as in the motivating example of Section \ref{subsec: warm up}, where a Gaussian IRF with $\sigma=15$ was used. The SBR was set at 1 and the total number of time-stamps was $T=1000$. The total photon count varied from $n=10$ to $n=10000$, increasing by a factor of 10 each time. For each photon count, we estimated the location parameter $t_1$ a total of 1000 times, where the data $\mathcal{X}=\{x_i\}_{i=1}^n$ were simulated independently for each trial.
\par Even at extremely low photon counts of $n=10$, the error $(\hat{t}-t_1)$ can be reasonably approximated by a Gaussian random variable centred around 0. This suggests that the estimate $\hat{t}$ quickly satisfies the central limit theorem with respect to the photon count $n$. Further analysis of the proposed SMLE algorithm in the photon-starved regime can be found in Appendix \ref{App: Photon Starved Regime}. In the large photon regime ($n=10000$), the estimation error is concentrated tightly around zero and mostly contained within 5 time-stamps. These results suggest that the sketched lidar CLT result of (\ref{Eqn: sketch central limit theorem}) holds even at low photon levels, hence the SMLE loss in (\ref{Eqn: SMLE Scheme}) is a well-justified loss to minimise. A further potential benefit of this asymptotic normality is that it permits us to directly use \textit{plug-and-play} Gaussian denoising algorithms to further improve imaging performance \cite{tachellaNComms,RappPhotonEfficient}.
\begin{figure}[ht!]
\centering
\includegraphics[width=1\linewidth]{images_final/Corrections/CLT_ALL.eps}
\caption{ Histograms of the estimation error $(\hat{t}-t_1)$ for increasing photon count $n$ where the sketched lidar estimate (circular mean) is denoted by $\hat{t}$. The expected error distribution in (\ref{Eqn: sketch central limit theorem}) is depicted in red.}
\label{fig: CLT hists}
\end{figure}
\subsection{Statistical Efficiency}\label{Subsec: Statistical Efficiency}
In this section, we calculate the theoretical statistical efficiency of the sketched lidar estimates of the parameters $\theta$ that parametrize the observation model $\pi(x\mid\theta)$ in (\ref{Eqn: Alternative mixture Observ model}), and compare them with the estimates obtained using the full data (i.e. no compression) via the relative error percentage. The relative error percentage, defined below, is a key metric that quantifies, from a statistical point of view, the relative loss of information incurred by a sketch of size $m$.
\par Statistical efficiency is a measure of the variability or quality of an unbiased estimator $\hat{\theta}$ \cite{FisherEfficiency}. The Cram\'er-Rao bound gives a lower bound on the mean squared error of $\hat{\theta}$ \cite{amari2007methods} and therefore provides a best case scenario on the variability of the parameter estimates. Given the observation model $\pi(x\mid\theta)$ and the corresponding Fisher information matrix (FIM), defined as
\begin{equation}
\label{eqn: Fisher info matrix}
\mathcal{I}_{\text{data}}(\theta)\coloneqq n {\mathbb E} \Bigg[\Bigg(\frac{\partial \log \pi(x \mid \theta)}{\partial\theta}\Bigg)\Bigg(\frac{\partial \log \pi(x \mid \theta)}{\partial\theta}\Bigg)^T\Bigg],
\end{equation}
then the optimal Cram\'er-Rao mean squared error, in terms of the full data, is defined as
\begin{equation}
\label{eqn: Optimal MSE}
\text{RMSE}_n\coloneqq\sqrt{\sum^{2K}_{k=1} [\mathcal{I}_{\text{data}}(\theta)^{-1}]_{\{kk\}}}.
\end{equation}
Equivalently, we can compute the FIM for the sketched case using the normality result stated in (\ref{Eqn: sketch central limit theorem}), where the FIM of a multivariate Gaussian distribution \cite{amari2007methods} is defined as
\begin{equation}
\label{Eqn: Sketched FIM}
\big(\mathcal{I}_{\text{sketch}}(\theta)\big)_{ij}\coloneqq n\dfrac{\partial{\mathbf z}_\theta}{\partial\theta_i}\Sigma^{-1}_{\theta_0}\dfrac{\partial{\mathbf z}_\theta}{\partial\theta_j},
\end{equation}
where ${\mathbf z}_\theta$ is the sketch defined in (\ref{eqn: general sketch analytic}). Similarly, we define the optimal sketched Cram\'er-Rao mean squared error as
\begin{equation}
\label{eqn: Optimal Sketch MSE}
\text{RMSE}_m\coloneqq\sqrt{\sum^{2K}_{k=1} [\mathcal{I}_{\text{sketch}}(\theta)^{-1}]_{\{kk\}}}.
\end{equation}
\noindent To quantify the statistical efficiency of an estimate obtained from a real-valued sketch of size $2m$, we use the relative error percentage (REP) metric, which compares the optimal sketch root mean squared error $\text{RMSE}_m$ with the corresponding full data root mean squared error $\text{RMSE}_n$:
\begin{equation}
\text{REP}\coloneqq100\Bigg(\frac{\text{RMSE}_m-\text{RMSE}_n}{\text{RMSE}_n}\Bigg).
\end{equation}
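As a quick numerical check of the definition (the input values below are hypothetical):

```python
def rep(rmse_sketch, rmse_full):
    """Relative error percentage of the sketched CRB RMSE vs the full-data one."""
    return 100.0 * (rmse_sketch - rmse_full) / rmse_full

# A sketch whose CRB RMSE is 5% above the full-data bound gives a REP of ~5,
# and a lossless sketch gives a REP of 0.
loss = rep(1.05, 1.00)
```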
Notably, the FIM of the sketched statistic in (\ref{Eqn: Sketched FIM}) scales linearly with $n$, so the REP metric is independent of the photon count. We compare the statistical efficiency of the sketched lidar estimates to the alternative compression technique of coarse binning \cite{hardware} discussed in Section \ref{Sec: Intro}. The coarse binning approach is equivalent to constructing the summary statistic
\begin{equation}
\label{eqn: Coarse bin sketch}
\Tilde{{\mathbf z}}_n =\sum^n_{i=1}\left\{\mathbbm{1}_{[(j-1)\Delta_{\tilde{m}},j\Delta_{\tilde{m}}]}(x_i)\right\}_{j=1}^{\tilde{m}},
\end{equation}
where $\Delta_{\tilde{m}}=\big\lceil\frac{T}{\tilde{m}}\big\rceil$ denotes the down-sampling factor, $\tilde{m}$ denotes the number of measurements equivalent to the real-valued sketch size (i.e., $\tilde{m}=2m$) and $\mathbbm{1}_{[(j-1)\Delta_{\tilde{m}},j\Delta_{\tilde{m}}]}(x)$ is the indicator function defined as
\begin{equation}
\label{eqn: Indicator function Coarse}
\mathbbm{1}_{[(j-1)\Delta_{\tilde{m}},j\Delta_{\tilde{m}}]}(x)\coloneqq
\begin{cases}
1 ~&\text{ if }~ x\in {\big[(j-1)\Delta_{\tilde{m}},j\Delta_{\tilde{m}}\big)},\\
0 ~&\text{ otherwise. }
\end{cases}
\end{equation}
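The summary statistic above is simply a coarse histogram of the photon time-stamps. A minimal sketch in NumPy; variable names are hypothetical:

```python
import numpy as np

def coarse_bin_sketch(timestamps, T, m_tilde):
    """Down-sample photon time-stamps in [0, T) into m_tilde coarse bins."""
    delta = int(np.ceil(T / m_tilde))          # down-sampling factor
    edges = np.arange(m_tilde + 1) * delta     # coarse bin edges
    counts, _ = np.histogram(timestamps, bins=edges)
    return counts

# Five photons, T = 50 time-stamps, m_tilde = 5 coarse bins of width 10.
z = coarse_bin_sketch(np.array([3, 7, 12, 12, 40]), T=50, m_tilde=5)
```

Every photon falls in exactly one coarse bin, so the counts sum to the photon count $n$.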
Once the coarse binning sketch has been constructed, traditional estimation methods, e.g., matched filtering \cite{logmatchedfilter} or expectation maximization \cite{EMAlg}, can be employed to estimate the parameters of the observation model.
\par Lidar scenes typically contain only 0, 1 or 2 reflectors, although in some specific applications, for example airborne lidar \cite{LidarFoliage}, tree-canopy foliage can return $K>2$ reflections. Our proposed method can handle a greater number of reflections; however, in the following experiments we only consider the typical cases $K=1,2$. Moreover, we choose the settings of the lidar scene (e.g. binning resolution, peak location, intensity) to best replicate a realistic setting, as seen in Section \ref{Subsec: Real Data}. In each experiment, we consider two different impulse response functions (IRF), exhibiting a short and a long tail, respectively. Figure \ref{Fig: eff two tailed} depicts the contrasting IRFs and the magnitudes of their corresponding characteristic functions, $\Psi_{\pi_s}(\omega)=\hat{h}(\omega)e^{{\rm i}\omega t}$. We evaluate the statistical efficiency of the sketched and coarse binning estimates using the REP as a function of the number of real measurements $2m$, and examine both the random and truncated orthogonal sampling schemes discussed in Section \ref{subsec sampling schemes}.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{images_final/Corrections/Figure4.eps}
\caption{ The CF (top) of a short (blue solid) and long (red dashed) tailed impulse response function (bottom). }
\label{Fig: eff two tailed}
\end{figure}
\subsubsection{One Surface}
We first evaluate the REP for a single peak positioned at $t_1=430$ with a window size of $T=1000$. We consider both low and high background photon count levels, where the SBR was set to 10 and 1, respectively. Figure \ref{Fig: Unimodal Efficiency} shows the REP metric as a function of the number of real measurements $2m$ for the truncated orthogonal (blue), random orthogonal (red) and coarse binning (orange) compression techniques, where the high (SBR=10) and low (SBR=1) background photon levels are denoted by a solid and dashed line, respectively. The top and bottom plots depict the short and long-tailed IRF, accordingly. We first observe that both sketched lidar sampling schemes approach $0\%$ REP as the number of real measurements increases, and only a modest number of measurements is needed to obtain a low REP. In contrast, the coarse binning approach converges slowly and its REP remains high throughout the measurement range. Importantly, we see that the two sketch sampling schemes outperform each other depending on the tail of the IRF and hence the rate of decay of the CF. For instance, the truncated scheme produces a lower REP for the short-tailed IRF, while the random sampling scheme achieves quicker convergence and a significantly lower REP throughout the measurement range for the long-tailed IRF. This can be explained by Figure \ref{Fig: eff two tailed}: the CF of the short-tailed IRF has the majority of its energy contained within the first few ($m=10$) frequencies, while the CF of the long-tailed IRF has its energy spread more widely across frequency.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{images_final/efficiency/efficiencyUnimodal2.eps}
\caption{The REP as a function of the number of real measurements ($2m$) for a single peak lidar scene.}
\label{Fig: Unimodal Efficiency}
\end{figure}
\subsubsection{Two Surfaces}
We now evaluate the REP for two peaks positioned at $(t_1,t_2)=(320,570)$ with a window size of $T=1000$. The intensities of the two peaks are $75\%$ and $25\%$, respectively, simulating an object positioned behind a semi-transparent surface. We simulate both low and high background levels, where the SBR was again set to 10 and 1, respectively. Figure \ref{Fig: Bimodal Efficiency} shows the REP metric as a function of the number of real measurements $2m$ for the truncated orthogonal (blue), random orthogonal (red) and coarse binning (orange) compression techniques, where the high (SBR=10) and low (SBR=1) background photon levels are denoted by a solid and dashed line, respectively. The top and bottom plots depict the short and long-tailed IRF, accordingly. We see the same pattern as in the single surface case: the REP remains high for the coarse binning compression technique while, in contrast, the sketched lidar converges towards a relatively low REP within a modest number of measurements. We again observe that the truncated scheme performs best on a fast decaying CF, while the random sampling scheme outperforms its truncated counterpart on a slowly decaying CF. Doubling the dimension of the parameter $\theta$ by estimating two peaks and intensities does not have a significant impact on the number of measurements needed to achieve a relatively low REP. For instance, in the high SBR (solid) scenario, the truncated orthogonal sampling scheme requires $20$ real measurements ($m=10$) to achieve a REP of less than $1\%$ in the unimodal case, compared with $24$ real measurements ($m=12$) to achieve the same level of REP in the bimodal case. These theoretical results on the statistical efficiency of the lidar sketch show that only a moderate sketch size is needed to achieve negligible loss of information.
The results are based on the asymptotic normality property discussed in (\ref{Eqn: sketch central limit theorem}), and we have seen in Section \ref{Subsec: Syn Data} that in practice this normality result holds even for small photon counts of $n=10$.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{images_final/efficiency/efficiencybimodal2.eps}
\caption{The REP as a function of the number of real measurements ($2m$) for a lidar scene with 2 surfaces.}
\label{Fig: Bimodal Efficiency}
\end{figure}
\par In coarse binning, it can be beneficial to broaden the impulse response (while keeping laser power constant) such that it covers more than a single coarse bin. This strategy can achieve (coarse) sub-bin resolution (see for example \cite{Gyongy:20}). Furthermore, Gyongy et al. \cite{Gyongy:20} proposed an algorithm that estimates the depth position continuously, in contrast to quantization limited matched filtering \cite{logmatchedfilter}. We further compare our proposed sketched lidar method to the wide pulse width coarse binning and algorithm used in \cite{Gyongy:20} for a range of SBR values. In the simulation, the photon count was set to $n=100$ and a Gaussian IRF was used. For the wide pulse width, we replicate the lidar device by setting $\sigma_1=0.4$. To compare with the narrow pulse width settings, we set $\sigma_2=5$. In both scenarios, a total of $2m=16$ coarse bins are used. For our proposed SMLE algorithm, we apply the same compression by taking a (real-valued) sketch of size 16 ($m=8$). We evaluate the depth estimation over SBR values ranging from $10^{-1}$ to $10^2$ over 250 Monte-Carlo simulations. The coarse binning CRB is calculated with the pulse width optimally selected for each SBR level.
\begin{figure}[ht!]
\centering
\includegraphics[width=1\linewidth]{images_final/Corrections/wide_width_compare_new.eps}
\caption{Comparison of the RMSE achieved by wide and narrow Gaussian pulse width coarse binning to our proposed SMLE algorithm.}
\label{Fig: narrow wide width compare}
\end{figure}
\noindent As shown in Figure \ref{Fig: narrow wide width compare}, coarse sub-bin resolution can indeed improve on coarse binning in large SBR regimes, but it still falls significantly short of the resolution obtained by our proposed sketch method, which uses the narrowest IRF with fine-scale time-stamps. For instance, at an SBR of 0.23 the wide pulse width achieves an RMSE of 264.6 bins, compared to 31.1 and 4.5 bins for narrow pulse width coarse binning and SMLE, respectively. As the pulse-width-optimised algorithm in \cite{Gyongy:20} only exhibits significant improvement in the high SBR scenario, we do not consider it further in the paper.
\section{Experiments}\label{Sec: Experiments}
\subsection{Experimental set up}
In this section, we evaluate our compressive lidar framework on synthetic and real data with increasingly complex scenes. Our method is compared with classical algorithms working on the full data space (i.e., no compression), namely matched filtering \cite{logmatchedfilter} and expectation maximization (EM) \cite{EMAlg}. Moreover, we also compare our results to the alternative compression technique of coarse binning \cite{hardware} discussed in Section \ref{Sec: Intro} and (\ref{eqn: Coarse bin sketch}). Both the matched filtering and EM algorithms estimate the location parameters using the full data, and therefore the results obtained from these methods set a benchmark for the estimation accuracy when no compression takes place. For the sake of fair comparison, we use the real-valued sketch in all the subsequent results, such that the number of real measurements is $2m$.
\subsubsection{Processing}
Depth image restoration for single-photon lidar consists of estimating a 3D point cloud from a lidar data cube containing the number of photons $n_{i,j,t}$ in pixel $(i,j)$ at time-stamp $t$, where $i=1,2,\dots,N_r$, $j=1,2,\dots,N_c$ and $t = 0,1,\dots,T-1$. We denote the average photon count per pixel by $\bar{n}$, and for each pixel $(i,j)$ of the data cube we estimate the true location and intensity parameters, denoted $t_1$ and $\alpha$, respectively. The intensity of a point in pixel $(i,j)$ of the point cloud is the number of photons in the pixel multiplied by the proportion of the signal, i.e.\ $\alpha_k \sum_{t=0}^{T-1}n_{i,j,t}$. A data driven impulse response is given for each dataset, and we obtain the characteristic function of the IRF using (\ref{Eqn: Char function Obs Model}).
\subsubsection{Evaluation Metrics}
Two different error metrics are used to evaluate the performance of our proposed sketched lidar framework. We consider the root mean squared error (RMSE) between the reconstructed image and the ground truth. Given that $t_{i,j,k}$ is the location of the $k$th peak in pixel $(i,j)$ and $\hat{t}_{i,j,k}$ the estimated counterpart, then the root mean squared error of the reconstructed image is
\begin{equation}
\text{RMSE} := \sqrt{\frac{1}{KN_rN_c}\sum_{i=1}^{N_r}\sum_{j=1}^{N_c}\sum^K_{k=1}\Big( t_{i,j,k}-\hat{t}_{i,j,k}\Big)^2}.
\end{equation}
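The RMSE above can be evaluated directly over the depth cube of peak locations. A minimal sketch in NumPy; the array names are hypothetical:

```python
import numpy as np

def image_rmse(t_true, t_hat):
    """RMSE between true and estimated peak locations over an N_r x N_c x K cube."""
    t_true = np.asarray(t_true, dtype=float)
    t_hat = np.asarray(t_hat, dtype=float)
    # Mean over all K * N_r * N_c entries, then square root.
    return float(np.sqrt(np.mean((t_true - t_hat) ** 2)))
```

For example, a single 3-bin depth error on a $2\times2$ image with $K=1$ gives $\sqrt{9/4}=1.5$ bins.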
The compression of both the sketched lidar and coarse binning approaches is measured in terms of the dimension reduction achieved by the statistic with respect to the raw TCSPC data and is quantified by the metric $\max \{\frac{2m}{T},\frac{2m}{n}\}$, which depends on the dimensions, $T$ and $n$, of the lidar scene; the number of real measurements ($2m$) is used for the sake of fair comparison.
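The compression metric is a one-liner. For instance, with the polystyrene head dimensions reported later ($T=4613$, $\bar{n}=337$), a sketch of size $m=10$ gives the compression of $0.0593$ shown in the reconstruction figure labels:

```python
def compression(m, T, n):
    """Dimension reduction of a real-valued sketch of size 2m vs raw data (T, n)."""
    return max(2 * m / T, 2 * m / n)

# Polystyrene head dataset: T = 4613 time-stamps, average photon count n = 337.
ratio = compression(10, 4613, 337)   # ~0.0593, as in the figure labels
```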
\subsection{Synthetic Data}\label{Subsec: Syn Data}
We evaluate the sketched lidar framework on a synthetic dataset simulating a pixel in a scene consisting of a single peak response. We chose the parameters that best replicate a realistic lidar scene and that are akin to the real datasets discussed in Section \ref{Subsec: Real Data}. Therefore, we set the binning resolution to $T=250$, and the impulse response was generated with a Gaussian function with $\sigma=5$. We ran a Monte-Carlo simulation with $1000$ trials to evaluate and compare the performance of our sketched lidar framework for photon counts $n\in\{100,1000\}$ with varying SBR levels and numbers of real measurements $2m$. For each trial, we uniformly chose $t_1\sim\mathcal{U}(0,249)$, and estimated $\hat{t}$ for the sketched lidar approach, the iFFT method discussed in Section \ref{Sec: Sketched Lidar Reconstruction}, as well as the alternative compression technique of coarse binning. As a reference, we computed the matched filter estimate as well as the maximum peak of the full histogram, which represent estimates over the full data (i.e., no compression). We varied the total number of real measurements between $2$ ($m=1$) and $50$ ($m=25$) and increased the SBR from $10^{-2}$ to $10^2$ on a log scale. Here we only show the results for the truncated orthogonal sampling scheme, but we observed in practice that the alternative random orthogonal sampling scheme produces similar results. Figures \ref{Fig: synthetic contour1} and \ref{Fig: synthetic contour2} show the contour plots of the RMSE levels $10\Delta \tau$ (left) and $2\Delta \tau$ (right) for $n=100$ and $n=1000$, respectively. The sketched lidar (solid blue), coarse binning (orange) and iFFT (red) methods are depicted alongside the full data approaches of matched filtering (solid black) and maximum peak estimation (green).
As discussed in Section \ref{Subsec: Statistical Efficiency}, the full data (dashed black) and the sketched (dashed blue) Cram\'er-Rao bounds are given as reference and define the lower bound of the contour plot. Both the sketched lidar and iFFT approaches converge quickly towards the full data estimate of matched filtering within 10 real measurements for both RMSE level sets and photon counts. In contrast, the coarse binning approach needs approximately 30 real measurements to match the performance of our sketched lidar method in achieving an RMSE of 10 bins. Moreover, coarse binning does not attain an RMSE of 2 for $2m\leq 50$ and hence does not appear in the right subplot of Figure \ref{Fig: synthetic contour1}. It can also be seen that, for a larger number of real measurements, the iFFT approach begins to diverge. This is because, with more measurements, the iFFT produces a less smooth linear approximation of the histogram, making it more challenging to estimate the depth position.
\begin{figure}[ht!]
\centering
\includegraphics[width=1\linewidth]{images_final/Corrections/n100_10vs2RMSE.eps}
\caption{RMSE level set contour plots for varying SBR levels and number of real measurements $2m$ for a photon count of $n=100$. The RMSE levels are $10\Delta \tau$ (left) and $2\Delta \tau$ (right). The legend is defined for both plots.}
\label{Fig: synthetic contour1}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=1\linewidth]{images_final/Corrections/n1000_10vs2RMSE.eps}
\caption{RMSE level set contour plots for varying SBR levels and number of real measurements $2m$ for a photon count of $n=1000$. The RMSE levels are $10\Delta \tau$ (left) and $2\Delta \tau$ (right). The legend is defined for both plots.}
\label{Fig: synthetic contour2}
\end{figure}
\par Figures \ref{Fig: synthetic detection contour1} and \ref{Fig: synthetic detection contour2} show the contours at which $95\%$ of peaks are detected within $10\Delta \tau$ (left) and $3\Delta \tau$ (right). Our proposed sketch method achieves the same estimation performance as the full data matched filtering approach within approximately 12 real measurements ($m=6$) for all SBR ratios and photon counts considered. In contrast, the coarse binning approach requires approximately 45 real measurements, equating to a modest compression of $0.25$, to achieve $95\%$ of detections within $10\Delta \tau$. Furthermore, the coarse binning method could not achieve $95\%$ of detections within $3\Delta \tau$ for any of the real measurement sizes considered. These initial results on synthetic lidar data for a range of SBR ratios and photon counts highlight the clear trade-off between compression and loss of temporal resolution for the coarse binning approach. In contrast, our proposed sketched lidar method avoids this trade-off and only requires a very modest sketch size to achieve the same estimation performance as matched filtering using the whole data.
\begin{figure}[ht!]
\centering
\includegraphics[width=1\linewidth]{images_final/Corrections/n100_10vs2Detection95.eps}
\caption{RMSE level set contour plots for varying SBR levels and number of real measurements $2m$ for a photon count of $n=100$ for detecting $95\%$ of peaks within the level sets of $10\Delta \tau$ (left) and $3\Delta \tau$ (right). The legend is defined for both plots.}
\label{Fig: synthetic detection contour1}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=1\linewidth]{images_final/Corrections/n1000_10vs2Detection95.eps}
\caption{RMSE level set contour plots for varying SBR levels and number of real measurements $2m$ for a photon count of $n=1000$ for detecting $95\%$ of peaks within the level sets of $10\Delta \tau$ (left) and $3\Delta \tau$ (right). The legend is defined for both plots.}
\label{Fig: synthetic detection contour2}
\end{figure}
\subsection{Real Data}\label{Subsec: Real Data}
In this section we evaluate our sketched lidar framework on two real datasets of increasing complexity: a polystyrene head imaged at Heriot-Watt University \cite{altmann1,manipop}, which consists mostly of single-peak pixels, and a scene where two humans are standing behind a camouflage net, depicted in \cite{camo,tachellaNComms}, which contains 2 objects per pixel with varying intensity.
\subsubsection{Polystyrene Head}\label{subsubsec: head dataset} The first scene consists of a polystyrene head placed 40 meters away from the lidar device. The data cube has a width and height of 141 pixels, $N_r=N_c=141$, and a total of $T=4613$ time-stamps. A total acquisition time of 100 milliseconds was used for each pixel, resulting in an average photon count of $\bar{n}=337$ with an SBR of approximately 6.82. The vast majority of pixels contain a single peak, although a minority of pixels around the borders of the head contain two peaks. The parameter set to be estimated for each pixel is $\theta=(t,\alpha)$, of dimension 2. We compare our results with the ground truth obtained from the experiment as well as with the full data matched filtering algorithm and the coarse binning compression technique. As the matched filter is the maximum likelihood estimator for a single peak, we assume each pixel has one surface for the sake of comparison. As a result, we set the SMLE algorithm to estimate a single peak; however, in practice detection algorithms, for instance the sketch-based detection scheme proposed in \cite{sheehan2021surface}, can be used to detect the number of surfaces present before estimation. The coarse binning approach is computed using matched filtering once the data cube is down-sampled.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.25]{images_final/Corrections/Figure9.eps}
\caption{ The CF (bottom) of the data driven impulse response function (top) of the polystyrene head dataset. }
\label{Fig: face char func}
\end{figure}
The data driven impulse response function and its corresponding CF, obtained from (\ref{Eqn: Char function Obs Model}), are shown in Figure \ref{Fig: face char func}. We only present the results for the truncated orthogonal sampling scheme of Section \ref{subsec sampling schemes}, but we observed in practice that the alternative random orthogonal sampling scheme produces similar results. We initialise the sketched lidar algorithm using the analytic circular mean solution in (\ref{eqn: motivation estim}).
\par Figure \ref{table: Head images} shows the reconstructed images of the sketched lidar, coarse binning and matched filter approaches, as well as the ground truth image. We first notice that our sketched lidar method faithfully reconstructs the polystyrene head scene for all sketch sizes, even for the circular mean estimate ($m=1$) in (a). In contrast, the coarse binning approach fails for all corresponding measurement sizes $\tilde{m}$, with significant staircase artifacts arising. Figure \ref{Fig: face RMSE} shows the RMSE, with respect to the ground truth, as a function of the number of real measurements ($2m$). Here we omit the small proportion of pixels that contain two peaks from the RMSE calculation, for fair comparison with the existing methods that can only estimate a single peak. We observe that our sketched lidar method produces a smaller RMSE as the measurement size increases and achieves a smaller RMSE than the LMF approach for larger measurement sizes. In comparison, the coarse binning method obtains estimates with a consistently large RMSE. This suggests that our sketched lidar approach does not sacrifice resolution for compression, a trade-off that is very apparent in the coarse binning method.
\begin{table}[ht!]
\centering
\begin{tabular}{c|c}
\textbf{\large Sketched Lidar} & \textbf{\large Coarse Binning}\\\\
\textbf{RMSE} $=10.63$ & \textbf{RMSE} $=1430.1$\\
\includegraphics[scale=0.251]{images_final/head/smle_m1_new.eps}& \includegraphics[scale=0.2]{images_final/head/coarsem1.eps} \\
Compression $= 0.0059$ & Compression $= 0.0059$\\
a) Real Measurements $=2$ ($m=1$) &b) Measurements $=2$ \\\\
\textbf{RMSE} $=8.01$ & \textbf{RMSE} $=201.6$ \\
\includegraphics[scale=0.251]{images_final/head/smle_m4_new.eps}& \includegraphics[scale=0.2]{images_final/head/coarsem4.eps} \\
Compression $= 0.0237$ & Compression $= 0.0237$\\
c) Real Measurements $=8$ ($m=4$) &d) Measurements $=8$ \\\\
\textbf{RMSE} $=7.40$ & \textbf{RMSE} $=100.9$ \\
\includegraphics[scale=0.251]{images_final/head/smle_m10_new.eps}& \includegraphics[scale=0.2]{images_final/head/coarsem10.eps} \\
Compression $= 0.0593$ & Compression $= 0.0593$\\
e) Real Measurements $=20$ ($m=10$) &f) Measurements $=20$ \\\\
\textbf{\large Matched Filter} & \textbf{\large Ground Truth}\\\\
\textbf{RMSE} $=6.87$ & \\
\includegraphics[scale=0.251]{images_final/head/cross_corr.eps}& \includegraphics[scale=0.251]{images_final/head/ground_truth_new.eps} \\
g) No Compression &h)\\
\multicolumn{2}{c}{\includegraphics[scale=0.4]{images_final/head/col_bar.eps}}\\
\multicolumn{2}{c}{ \qquad Intensity}\\
\end{tabular}
\captionof{figure}{The polystyrene head dataset lidar reconstructions of the sketched lidar and coarse binning methods for real-valued measurement sizes $2,8,20$. Both the matched filter reconstruction and the ground truth image are given for comparison. }
\label{table: Head images}
\end{table}
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.25]{images_final/head/face_rmse_new2.eps}
\caption{The RMSE as a function of the number of real measurements ($2m$) for the polystyrene head dataset.}
\label{Fig: face RMSE}
\end{figure}
\subsubsection{Humans Behind Camouflage }\label{subsubsec: camo dataset}
The second scene consists of two humans standing behind a camouflage net approximately 320 metres away from the lidar device. Further details of the scene can be found in \cite{camo,tobin2017long}. The data cube has a width and height of 32 pixels, $N_r=N_c=32$, and a total of $T=153$ time-stamps. A total acquisition time of 5.6 milliseconds was used for each pixel, resulting in an average photon count of $\bar{n}=871$ with an approximate SBR of 2.35. The vast majority of pixels have 2 surfaces (the camouflage net and a human), where the net (first peak) accounts for the larger intensity. The parameter set to be estimated for each pixel is $\theta=(t_1,t_2,\alpha_1,\alpha_2)$, of dimension 4. We compare our results with the full data EM algorithm as well as with the coarse binning compression technique. For this experiment, the coarse binning algorithm uses the EM estimate once the data cube has been down-sampled, as the matched filtering algorithm is only applicable to single-peak cases. Due to the lack of a ground truth, we compare the reconstructions of the camouflage scene to the full data EM reconstruction and match the compression of the sketched lidar framework and the coarse binning technique. The data driven impulse response function $h$ and its corresponding CF, obtained from (\ref{Eqn: Char function Obs Model}), are shown in Figure \ref{Fig: camo char func}. Again, we only present the results for the truncated orthogonal sampling scheme of Section \ref{subsec sampling schemes}, but we observed in practice that the alternative random orthogonal sampling scheme produces similar results. We uniformly sampled 10 starting points for each peak $t_1$ and $t_2$ and initialised with the smallest sketched cost function value from (\ref{Eqn: CL loss function}).
\begin{figure}[ht!]
\centering
\includegraphics[width=0.48\textwidth]{images_final/Corrections/Figure12.eps}
\caption{ The CF (bottom) of the data driven impulse response function (top) of the camouflage dataset. }
\label{Fig: camo char func}
\end{figure}
\par Figure \ref{table: Camo dataset} shows the reconstructed images of the sketched lidar, coarse binning and EM methods. Evidently, the reconstruction of our sketched lidar approach improves as the number of real measurements ($2m$) increases; for instance, the torso of the human positioned near 600 cm has greater clarity at sketch size 20 than at sketch size 4, where more spurious peaks are detected. However, the sketched lidar reconstruction for $m=2$ is still sufficient in comparison to the EM reconstruction in (g), while in contrast the coarse binning method fails to reconstruct either human for the corresponding number of measurements. The coarse binning method once again suffers from the staircase effect, as seen by the lack of width of the first human standing at position 200 cm in (f). Furthermore, the compression due to coarse binning results in poor depth accuracy, as seen by the position of the camouflage net in reconstruction (b), which has a disparity of approximately 120 cm compared to the EM reconstruction. Once again, this suggests that our sketched lidar approach does not sacrifice resolution for compression, a trade-off apparent in the coarse binning method.
\begin{table}[ht!]
\centering
\begin{tabular}{c|c}
\textbf{\large Sketched Lidar} & \textbf{\large Coarse Binning}\\
\includegraphics[scale=0.28]{images_final/camouflage/camo_new_m2.eps}& \includegraphics[scale=0.26]{images_final/camouflage/coarse4.eps} \\
Compression $= 0.0261$ & Compression $= 0.0261$\\
a) Real Measurements $=4$ ($m=2$) &b) Measurements $=4$ \\
\includegraphics[scale=0.28]{images_final/camouflage/camo_new_m7.eps}& \includegraphics[scale=0.26]{images_final/camouflage/coarse14.eps} \\
Compression $= 0.0915$ & Compression $= 0.0915$\\
c) Real Measurements $=14$ ($m=7$) &d) Measurements $=14$\\
\includegraphics[scale=0.26]{images_final/camouflage/camo_new_m10.eps}& \includegraphics[scale=0.26]{images_final/camouflage/coarse20.eps} \\
Compression $= 0.1307$ & Compression $= 0.1307$\\
e) Real Measurements $=20$ ($m=10$) &f) Measurements $=20$\\
\noalign{\vskip 5mm}
\multicolumn{2}{c}{\textbf{\large Expectation Maximization}}\\
\noalign{\vskip 5mm}
\multicolumn{2}{c}{\includegraphics[scale=0.3]{images_final/camouflage/em_new.eps}}\\
\noalign{\vskip 2mm}
\multicolumn{2}{c}{g) No Compression}\\
\multicolumn{2}{c}{\includegraphics[scale=0.35]{images_final/camouflage/intensity_bar_crop.eps}}\\
\multicolumn{2}{c}{ \qquad Intensity}\\
\end{tabular}
\captionof{figure}{The camouflage dataset lidar reconstructions of the sketched lidar and coarse binning methods for real-valued measurement sizes ($2m$) of $4,14,20$. The full data EM reconstruction is given for comparison.}
\label{table: Camo dataset}
\end{table}
\section{Conclusion}\label{Sec: Conclusion}
In this paper, we proposed a novel sketching solution to the major data processing bottleneck of single-photon lidar caused by the fine resolution of modern high-rate, high-resolution ToF image sensors. Our approach samples the characteristic function of the observation model to form online statistics whose dimensionality is proportional to the number of parameters of the model. Furthermore, we developed an efficient sketching algorithm, inspired by ECF estimation techniques, whose space and time complexity fundamentally scales with the size of the sketch $m$ and is independent of both photon count and depth resolution. Two sampling schemes are proposed that sample in regions of the characteristic function that are \textit{blind} to photons originating from background sources. As a result, our method obtains unbiased estimates of the location and intensity parameters. Our novel sketch based acquisition removes the trade-off between depth resolution and data transfer complexity that is apparent in existing methods. Here we have only considered a simple pixel-wise depth estimation method in the form of the sketched MLE. It should be straightforward to incorporate the sketched statistics into more sophisticated state-of-the-art reconstruction algorithms, such as the real-time 3D algorithm in \cite{tachellaNComms}, owing to the Gaussian nature of the sketch statistics seen in Section \ref{subsec: CLT}; however, we leave this for future work. Another line of future work would be to use the sketch statistics for other algorithmic purposes such as target detection and multi-reflection detection.
\section*{Acknowledgements}
This work was supported by the ERC Advanced grant, project C-SENSE, (ERC-ADG-2015-694888). Mike E. Davies is also supported by a Royal Society Wolfson Research Merit Award. The authors would like to thank the single-photon group at HWU
(\url{https://single-photon.com}) for the use of the datasets used in Section \ref{Subsec: Real Data}. The polystyrene head dataset in \ref{subsubsec: head dataset} and the camouflage dataset in \ref{subsubsec: camo dataset} were obtained from \cite{manipop} and \cite{tachellaNComms}, respectively.
\section*{Code Availability}
A MATLAB implementation of the SMLE algorithm is available at the repository \texttt{\url{https://gitlab.com/Tachella/sketched_lidar}}.
\bibliographystyle{IEEEtran}
\section{Introduction}
\noindent An acronym is an abbreviation formed from the initial letters of other words and pronounced as a word. The usage of acronyms in articles and speech has increased, as it avoids the effort of remembering long, complex terms. However, this increased usage has also raised the new issues of Acronym Identification (AI) and Acronym Disambiguation (AD). AI is the process of identifying which parts of a sentence constitute acronyms and their corresponding long forms, whereas AD is the process of correctly predicting the long-form expansion of an acronym given the context of its usage. AI and AD are beneficial for applications like question answering \citep{ackermann2020resolution} and definition extraction (\citet{kumar2020explainable}, \citet{singh-etal-2020-dsc}). Since both AI and AD benefit from domain knowledge, manual identification and disambiguation of acronyms by domain experts is possible; however, it is tiresome and expensive. Hence, there is a dire need to develop intelligent systems that can mimic the role of domain experts and help automate the tasks of AI and AD.
In this paper, we present our approaches for the shared tasks of Acronym Identification and Acronym Disambiguation held under the workshop on Scientific Document Understanding (SDU). We treat AI as a sequence tagging problem. We treat AD as a span prediction problem, i.e. given a sentence containing an acronym and the possible long forms of that acronym, we aim to extract from the possible expansions the span that is the most appropriate long form of the acronym as per the context of the sentence. For AI, we start the experimentation process with rule-based models. The experiments on both tasks are then extended to a Transformer-based \citep{vaswani2017attention} architecture with BERT \cite{devlin2018bert} as the backbone of the model, followed by SciBERT \cite{beltagy2019scibert}, a BERT-based model pretrained on text from scientific research papers instead of the Wikipedia corpus. In addition, for AD, we experiment with different training procedures, aiming to instill knowledge about various topics into our models.
The rest of the paper is organized as follows: related work is discussed in Section \ref{sec:rel}, followed by a brief description of the shared task datasets in Section \ref{sec:data}. The methodology and experimental settings are covered in Sections \ref{sec:methods} and \ref{sec:settings}. Sections \ref{sec:results} and \ref{sec:analysis} contain the results and discussion. Section \ref{sec:conclusion} concludes the paper and outlines the scope of future work.
\section{Related Work}
\label{sec:rel}
Initial works on AI incorporate rule-based methods. \citet{park2001hybrid} present rule-based methods for finding acronyms in free text. They make use of various patterns, text markers and linguistic cue words to detect acronyms as well as their definitions. \citet{schwartz2002simple} exploit the fact that the majority of acronyms and their long forms are found in close vicinity within a sentence, with one of them enclosed in parentheses, and thus extract short-long pairs from sentences. They also propose an algorithm for identifying correct long forms.
People have also tried to leverage web-search queries and logs to identify acronym-expansion pairs. A framework for automatic acronym extraction on a large scale was proposed by \citet{jain2007acronym}. They scrape the web for candidate sentences (those containing acronym-expansion pairs) and then identify acronym-expansion pairs using search query logs and search results. They also try to rank acronym expansions by assigning a score to expansions using various factors. \citet{taneva2013mining} target the problem of finding distinct expansions for an acronym. They make use of query click logs and clustering techniques to extract candidate expansions of acronyms and group them such that each group has a unique meaning. They then assign scores to the grouped expansions to find the appropriate expansion.
A comprehensive comparative study between rule-based and machine based methods for identifying and resolving acronyms has been done by \citet{harris2019my}. They collect data from various resources and then experiment with machine based algorithms, crowd-sourcing methods and a game based approach.
\citet{liu2017multi} treat AI as a sequence labelling problem and propose the Latent-state Neural Conditional Random Field (LNCRF) model, which is superior to CRFs in handling complex sentences by making use of nonlinear hidden layers. The incorporation of neural networks into CRFs enables learning better representations from manually created features, which helps performance.
Many works solve the AD task by creating word vectors and then using them to rank the candidates of the acronym with reference to its usage. \citet{mcinnes2011using} correlate acronym disambiguation with word sense disambiguation. They create \nth{2} order vectors of all possible long forms and of the acronym with the help of word co-occurrences. The correct long form is then identified using cosine similarity between the vectors. \citet{li2018guess} present an end-to-end pipeline for acronym disambiguation in the enterprise domain. Due to the lack of mappings from acronyms to their long forms, they first use data mining techniques to create a knowledge base. Further, they treat acronym disambiguation as a ranking problem and create ranking models using manually created features.
With the advent of deep learning, researchers have tried to create more informative word vectors for the previous approach. \citet{wu2015clinical} first use deep learning to create neural word embeddings from medical domain data. They combine the word embeddings of a sample text in different ways and then train a Support Vector Machine (SVM) classifier for each acronym. \citet{charbonnier2018using} explore acronym disambiguation in the scientific research domain. They obtain word vectors from the text of scientific research papers and create vector representations for the context of the acronym. Minimising the distance between the context vector and the vectors of the candidate expansions gives the appropriate expansion.
\citet{ciosici2019unsupervised} present an unsupervised approach for acronym disambiguation by treating it as a word prediction problem. They use word2vec \citep{mikolov2013efficient} to learn word embeddings by predicting the correct special token (a concatenation of short and long form) of a sentence. The obtained word embeddings are used to create representations of the context of the short form, and the best expansion of the short form is chosen from the candidates by minimising the distance between representations.
Many works also treat AD as a classification problem. \citet{jin2019deep} explore the usage of contextualised BioELMo word embeddings for acronym disambiguation. They train a separate BiLSTM classifier for each acronym, which outputs the appropriate expansion when a text is input. They achieve state-of-the-art performance on the PubMed dataset. \citet{li2019neural} propose a novel neural topic attention mechanism to learn better contextualised representations for medical term acronym disambiguation. They compare the performance of LSTMs with ELMo embeddings armed with different types of attention mechanisms.
An overview of the submissions made to the shared tasks of AI and AD has been done by the organizers \cite{veyseh2020acronym}.
\section{Datasets}
\label{sec:data}
\citet{veyseh-et-al-2020-what} provide the shared task participants with datasets for the AI and AD tasks, called SciAI and SciAD respectively. SciAI contains 17,506 sentences from research papers, in which the boundaries of acronyms and their long forms are labelled using the BIO format. The tag set consists of \textit{B-short}, \textit{B-long}, \textit{I-short}, \textit{I-long} and \textit{O}, with ``short'' representing the acronym and ``long'' the expansion. SciAD contains 62,441 instances covering acronyms used in the scientific domain. Each instance contains the sentence, the acronym and the correct expansion of that acronym as per its usage in the sentence. The dataset also provides a dictionary mapping acronyms to candidate long forms. Both datasets differ from existing AI and AD datasets in that they are larger and contain instances from the scientific domain (the majority of AI and AD datasets belong to the medical domain).
\section{Methodology}
\label{sec:methods}
\subsection{Models}
Since both the tasks are similar, we try out the following models for both of them and then build upon them:
\begin{itemize}
\item \textbf{BERT} : BERT, based on the Transformer architecture, consists of multi-head attention layers which apply a sequence-to-sequence transformation on the input text sequence. The training objectives of BERT make it unique: the Masked Language Model (MLM) objective learns to predict a masked token using the left and right context of the text sequence, and the Next Sentence Prediction objective learns to predict whether two sentences occur in continuation or not.
\item \textbf{SciBERT} : The Allen Institute for Artificial Intelligence (AI2) pretrained the base version of BERT on scientific text from 1.14 million research papers from Semantic Scholar, yielding SciBERT. Owing to the similarity between the domain of the shared task dataset and the SciBERT training corpus, we believe the model will be beneficial for the tasks. We use SciBERT with SciVocab in our experiments.
\end{itemize}
\subsection{AI}
\subsubsection{Problem Formulation}
We can readily identify the AI task as a NER (Named Entity Recognition) / BIO tagging task. The tags used in the above methods were short-form and long-form labels of the words in BIO format. One of the interesting experiments that we perform is to make use of ``BIOless'' tags. Keeping all other factors constant, classifiers ought to work better if the number of classes is smaller. Tagging is a token classification task; hence, the tagger should perform better if the number of tags is reduced. The following changes are carried out in the training data to obtain ``BIOless'' tags:
\begin{enumerate}
\item \textit{B-short} and \textit{I-short} tags are changed to \textit{B-short}
\item \textit{B-long} and \textit{I-long} tags are changed to \textit{B-long}
\item \textit{O} tags are unchanged.
\end{enumerate}
The models are trained and once the results are obtained, the definition of B, I and O tags viz. beginning, inside and outside, are used to reconstruct the original tags. It is done by changing the first tag in a cluster to \textit{B-short} or \textit{B-long} and the rest of them to \textit{I-short} or \textit{I-long}.
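The transformation and the subsequent reconstruction can be sketched in a few lines of Python (a minimal sketch; the function names are illustrative, not taken from our released code):

```python
def to_bioless(tags):
    """Collapse I-short/I-long into B-short/B-long; O stays unchanged."""
    return [t.replace("I-", "B-") if t.startswith("I-") else t for t in tags]

def restore_bio(tags):
    """Reconstruct BIO tags: the first tag of a contiguous cluster keeps
    its B- prefix, the remaining tags of the cluster become I-."""
    restored, prev = [], "O"
    for t in tags:
        if t != "O" and t == prev:
            restored.append(t.replace("B-", "I-"))
        else:
            restored.append(t)
        prev = t
    return restored
```

Note that two adjacent entities of the same type merge into one cluster under this reconstruction, which is the price of the reduced tag set.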
\subsubsection{Models}
We experiment with the following models/variations of the models already mentioned :
\begin{itemize}
\item \textbf{Conditional Random Fields (CRFs)} : Considering the labelling of sentences with POS (Parts Of Speech) tags, it is highly probable that a NOUN is followed by a VERB. Therefore, these kinds of tasks fall under a category which is essentially a combination of classification (classifying a word into one of the POS tags) and graphical modelling (one word influences the POS tags of other words). Thus, these tasks involve predicting a large number of variables that depend on each other as well as on other observed variables.
CRFs are a popular probabilistic method suitable for tasks such as this. They combine the ability of graphical models to compactly model multivariate data with the ability of classification methods to perform prediction using large sets of input features. For the current data, we use the following features as input:
For the \textbf{current word} -
\begin{enumerate}[label=\alph*.]
\item The lower cased version of the word
\item The last three letters of the word
\item If all characters of the word are upper case
\item If the word is title cased
\item The POS tag of the word
\item The first two characters of the POS tag of the word
\item If at least 60\% of the word's characters are upper case
\end{enumerate}
For \textbf{neighbouring words} -
\begin{enumerate}[label=\alph*.]
\item The lower cased version of the word
\item If the word is title cased
\item If all characters of the word are upper case
\item The POS tag of the word
\item The first two characters of the POS tag of the word
\end{enumerate}
\item \textbf{BERT base cased} : We use the cased base version of BERT as the backbone of our Transformer-CRF architecture.
\item \textbf{SciBERT cased} : We use the cased version of SciBERT as the backbone of our Transformer-CRF architecture.
\end{itemize}
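The CRF feature template listed above translates roughly into the following feature function (a sketch in the style of sklearn-crfsuite input; the dictionary key names are illustrative, not the exact ones we used):

```python
def word_features(sent, i):
    """Features for the i-th (word, POS) pair of a sentence, mirroring the
    feature lists above. `sent` is a list of (word, pos) tuples."""
    word, pos = sent[i]
    feats = {
        "word.lower": word.lower(),          # lower cased word
        "word[-3:]": word[-3:],              # last three letters
        "word.isupper": word.isupper(),      # all characters upper case
        "word.istitle": word.istitle(),      # title cased
        "pos": pos,                          # POS tag
        "pos[:2]": pos[:2],                  # first two characters of POS tag
        # at least 60% of the word's characters upper case
        "word.mostly_upper": sum(c.isupper() for c in word) >= 0.6 * len(word),
    }
    for offset in (-1, 1):  # neighbouring words
        j = i + offset
        if 0 <= j < len(sent):
            w, p = sent[j]
            feats.update({
                f"{offset}:word.lower": w.lower(),
                f"{offset}:word.istitle": w.istitle(),
                f"{offset}:word.isupper": w.isupper(),
                f"{offset}:pos": p,
                f"{offset}:pos[:2]": p[:2],
            })
    return feats
```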
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{Blending.pdf}
\caption{Blending for AI}
\label{fig:blend}
\end{figure*}
\subsubsection{Post Modelling Experiments}
The process of ensembling helped us obtain a major boost over the scores of the base models. We used two kinds of ensembling processes:
\begin{itemize}
\item \textbf{Majority Voting/Hard Voting} \citep{wu2006using}: The idea here is to simply go with what the majority of the models in the ensemble method are predicting. In the case of classification, the final prediction is the mode of the predictions of the participating models; similarly, in a tagging task or rather token classification, the final prediction for a given sequence is the sequence of modes of the prediction sequence of the participating models.
Assume $y$ is a label, $x$ is a token, $N$ is the total number of base taggers employed, and $T_i$ is a function that returns 1 if the prediction of the $i^{th}$ tagger is $y$, and 0 otherwise.
Then, $W(y,x)$ is said to be the \textbf{score} and is defined as:
\[
W(y,x) = \sum_{i=1}^{N} T_i(y,x)
\]
The $y$ with the highest score is chosen as the label of $x$.
\item \textbf{Blending} \citep{sikdar2017feature} : Here we depict our process of blending models (Figure \ref{fig:blend}). The whole process consists of the following 3 stages:
\begin{enumerate}[label=\alph*.]
\item The base models are trained on the training data and then predictions are made on the validation data using these.
\item The predictions obtained in the previous stage are used as the features for this stage. A CRF is fit on these features using 5-fold cross validation.
\item The five trained models obtained in the previous stage are then ensembled using majority voting to make the final prediction.
\end{enumerate}
\end{itemize}
\subsection{AD}
\subsubsection{Problem Formulation}
Many existing works on AD solve the problem either as text classification, i.e. given a text and an acronym, classifying the long form of the acronym, or by developing rich word vector representations to extract the most suitable full form out of a set of candidate long forms. We, instead, treat AD as a span prediction problem. The model predicts the span containing the correct long form from the concatenated text consisting of the acronym, the candidate long forms of that acronym and the sentence (in that order). The predicted span is then compared with the candidate long forms and the best match is chosen as per the Jaccard score.
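The matching of the predicted span back to a candidate can be sketched as follows (we assume a word-level Jaccard score here; the helper names are ours):

```python
def jaccard(a, b):
    """Word-level Jaccard similarity between two strings."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def best_candidate(predicted_span, candidates):
    """Map the model's predicted span to the closest candidate long form."""
    return max(candidates, key=lambda c: jaccard(predicted_span, c))
```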
Each approach has its own shortcomings. For the classification approach, the size of the model increases with the increase in dictionary size; training models for a large number of classes is difficult. A solution to this problem is to build individual models for acronyms, but the solution might not be feasible if there are many acronyms. For the vector based methods, achieving rich representations is difficult. As for the span prediction approach, the handling of long inputs is difficult and time-consuming. We may have to compromise on the context of the acronym in order to adjust for long sequences.
To prepare our input text for the model, we take advantage of the fact that BERT can encode a pair of sequences together. The first sequence is therefore the acronym concatenated with all possible expansions from the dictionary, and the second sequence is the input text. Since some of the input sentences are quite long, we sample tokens from the sentences. In order to feed sufficient context of the acronym into the models, we take $n/2$ space-delimited tokens to the left of the acronym and $n/2$ space-delimited tokens to the right of it, where $n$ is a hyperparameter. We find in our experiments that taking $n$ to be sufficiently large gives almost consistent performance. We fix $n$ to 120 in our experiments.
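The token sampling around the acronym can be sketched as follows (a simplified version that takes the index of the acronym token; boundary handling is our assumption):

```python
def context_window(tokens, acronym_index, n=120):
    """Keep n/2 space-delimited tokens on each side of the acronym,
    clipped at the sentence boundaries."""
    half = n // 2
    lo = max(0, acronym_index - half)
    hi = min(len(tokens), acronym_index + half + 1)
    return tokens[lo:hi]
```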
We experiment with different training approaches and pretrained weights, keeping the architecture of our model constant in all cases. The backbone of the architecture is the base version of BERT. The sequence output of the last layer of BERT (shape $(batch\_size, max\_len, 768)$) is passed through a dense layer to reduce its shape to $(batch\_size, max\_len, 2)$. The output is split into 2 parts along the \nth{2} axis to get our token-level logits for the start position and the end position. A pictorial representation of the model can be found in Figure \ref{fig:model}.
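The reshaping can be illustrated with a NumPy stand-in for the dense layer (the shapes are the point here; our actual model uses a trained layer):

```python
import numpy as np

batch_size, max_len, hidden = 4, 128, 768
sequence_output = np.zeros((batch_size, max_len, hidden))  # last BERT layer
dense_w = np.zeros((hidden, 2))                            # stand-in dense layer

logits = sequence_output @ dense_w             # (4, 128, 2)
start_logits, end_logits = np.split(logits, 2, axis=2)
start_logits = start_logits.squeeze(-1)        # (4, 128): start-position logits
end_logits = end_logits.squeeze(-1)            # (4, 128): end-position logits
```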
\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{SDU_AD.pdf}
\caption{Model Architecture for AD; SP and EP stand for Start Probability and End Probability.}
\label{fig:model}
\end{figure}
\subsubsection{Models}
We experiment with the following models:
\begin{itemize}
\item \textbf{BERT base uncased} : We use the uncased base version of BERT as the backbone of our model.
\item \textbf{SciBERT uncased} : We use the uncased version of SciBERT as the backbone of our model.
\item \textbf{SciBERT uncased with fine tuned LM} : The dataset does not contain samples for all acronym expansions. Hence, models trained only on the provided dataset may suffer when it comes to predicting unseen acronym expansions. We try to instill some knowledge of the acronym expansions into our model by fine tuning the MLM. We scrape Wikipedia for articles (using the Wikipedia API, \url{https://pypi.org/project/wikipedia/}) related to the long forms of acronyms present in the dictionary and fine tune the LM of SciBERT on this data. We then use the fine tuned model weights for the SciBERT backbone and train it for span prediction.
\item \textbf{SciBERT uncased with 2 stage training} : We train the model in 2 stages using different data. We prepare our own dataset using the articles scraped from Wikipedia, replacing occurrences of long forms of acronyms with the acronym. We first train our model on this data and then on the shared task data. This is a supervised approach to help models learn acronyms and expansions under-represented in the shared task data, as opposed to the above approach, which is unsupervised.
\end{itemize}
\subsubsection{Post Modelling Experiments}
\label{sec:post}
\begin{itemize}
\item \textbf{Ensemble} : Since our approach outputs start and end probability distributions over the entire sequence of tokens, we cannot average probabilities from models using different tokenizers. Keeping this in mind, we average the probabilities from the two best models (as per CV), i.e. SciBERT uncased and SciBERT uncased with 2 stage training. The appropriate acronym expansion is then extracted with the help of this averaged probability, which provides robustness in our predictions.
\item \textbf{Ensemble with post-processing} : We also devise a post-processing step that can rectify some of the mistakes of our models. The rule is simple: if a candidate expansion of an acronym is present in the sentence and the acronym is enclosed within parentheses in the sentence, then that candidate expansion is predicted as the expansion of the acronym. The motivation for devising this post-processing is discussed in Section \ref{sec:analysis}.
\end{itemize}
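The post-processing rule can be sketched as follows (a simplified version; the exact string matching in our pipeline may differ):

```python
import re

def postprocess(sentence, acronym, candidates, model_prediction):
    """If the acronym appears in parentheses and a candidate long form occurs
    verbatim in the sentence, trust the sentence over the model."""
    if re.search(r"\(\s*" + re.escape(acronym) + r"\s*\)", sentence):
        for cand in candidates:
            if cand.lower() in sentence.lower():
                return cand
    return model_prediction
```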
\section{Experimental Settings}
\label{sec:settings}
For the AI task, there are three kinds of experimental settings:
\begin{enumerate}[label=\alph*.]
\item The base models were trained on the training data and evaluated on the validation data.
\item For the better performing base models, we concatenate the training and validation data and perform a 5 fold cross-validation on the concatenated dataset.
\item For blending, we perform a 5 fold cross-validation on the validation data.
\end{enumerate}
For each one of the above settings, training was done for 20 epochs using early stopping with patience of 10. Model optimisation was done using BertAdam with a learning rate of 1e-3, a batch size of 16 and gradient accumulation batch size of 32.
For the AD task, we concatenate the training and validation data and perform a 5 fold stratified cross-validation on the joined dataset (stratified with respect to acronym). The folds are trained for 5 epochs using early stopping with patience of 2 and tolerance of 1e-3. Model optimisation is done using AdamW \citep{loshchilov2018fixing} with a learning rate of 2e-5 and a batch size of 32.
\section{Results}
\label{sec:results}
\subsection{AI}
\label{subsec:results_ai}
The macro F1 scores of our approaches are listed in Table \ref{table:results_AI}. For the base models, validation is done using the validation data. Only the promising models, in our case the SciBERT models, are taken through the arduous cross validation process.
It should also be noted that the folds for cross validation of the modified blending technique are extracted from the validation data, unlike the SciBERT models, which are cross validated on the combined data (train + validation); hence the two CV scores are not comparable. The other observations are enumerated as follows:
\begin{enumerate}[label=\alph*.]
\item The official baseline, though rule-based, surpasses the CRF.
\item As expected, SciBERT performs better than BERT.
\item As for the BIOless variants:
\begin{itemize}
\item CRFs see a considerable difference (0.026) between the BIOless and BIO variants. The hypothesis that ``the tagger should perform better if the number of tags is reduced'' seems to fail here. The present task of AI seems a bit too complex for CRFs, as they do not even surpass the baseline score of 0.84. Hence, it is justifiable to treat CRFs as an exception with respect to the hypothesis.
\item For all the other models/variations, BIOless is very close to (a difference of 0.0008 or 0.0002) or surpasses the BIO variant (with a relatively larger difference of 0.0084 or 0.0013).
\end{itemize}
\item Based on the test score, BIOless variants perform better than their corresponding BIO counterparts.
\item The test score clearly shows the superiority of the modified blending technique.
\end{enumerate}
\begin{table}[htbp]
\centering
\begin{tabular}{|p{3cm}|c|c|c|}
\hline
\textbf{Model} & \textbf{Val} & \textbf{CV} & \textbf{Test} \\ \hline
Baseline & 0.8546 & - & 0.8409 \\ \hline
CRF & 0.8254 & - & - \\ \hline
CRF BIOless & 0.7994 & - & - \\ \hline
BERT cased & 0.9145 & - & - \\ \hline
BERT cased BIOless& 0.9163 & - & - \\ \hline
SciBERT cased & \textbf{0.9173} & - & 0.8921 \\ \hline
SciBERT cased BIOless & 0.9165 & - & 0.9005 \\ \hline
SciBERT cased & - & \textbf{0.9075} & 0.9023 \\ \hline
SciBERT cased BIOless & - & 0.9073 & 0.9036 \\ \hline
Blending with mode ensembling & - & 0.8962 & \textbf{0.9090}\\ \hline
\end{tabular}
\caption{Results of AI task.}
\label{table:results_AI}
\end{table}
Table \ref{table:compareAI} shows a comparison of our results with the top scoring submissions of AI task.
\begin{table}[htbp]
\centering
\begin{tabular}{|c|c|}
\hline
\textbf{User / Team Name} & \textbf{Test Score} \\ \hline
zdq & \textbf{0.9330} \\ \hline
qinpersevere & 0.9311 \\ \hline
Mobius & 0.9281 \\ \hline
SciDr (Us) & 0.9090 \\ \hline
\end{tabular}
\caption{Comparison of AI results}
\label{table:compareAI}
\end{table}
\subsection{AD}
We tabulate the macro F1 scores of the models in the cross-validation and test settings in Table \ref{table:results}. The performance of SciBERT is superior to BERT owing to the similarity between the pretraining corpus and the task dataset. We also observe that the performance of SciBERT uncased and SciBERT uncased with 2 stage training is almost similar in both cross-validation and test, with the latter performing a bit better than the former, whereas the performance of the variant with the fine-tuned LM is lower. A possible reason for this observation is the difference between the source of the data used for fine tuning (Wikipedia) and the shared task data (scientific papers). The supervised usage of the extra Wikipedia data in 2 stage training, on the other hand, is beneficial, since it contains samples for acronyms under-represented in the task dataset.
\begin{table}[htbp]
\centering
\begin{tabular}{|p{3cm}|p{1.2cm}|p{1.2cm}|}
\hline
\textbf{Model} & \textbf{CV} & \textbf{Test} \\ \hline
Baseline & - & 0.6097 \\ \hline
BERT uncased & 0.7549 & 0.8980 \\ \hline
SciBERT uncased & 0.8423 & 0.9244 \\ \hline
SciBERT uncased with fine tuned LM & 0.8278 & 0.9194 \\ \hline
SciBERT uncased with 2 stage training & \textbf{0.8424} & 0.9292 \\ \hline
Ensemble & - & 0.9303 \\ \hline
Ensemble with post-processing & - & \textbf{0.9319} \\ \hline
\end{tabular}
\caption{Results of AD task.}
\label{table:results}
\end{table}
Table \ref{table:compareAD} lists the scores of the top submissions for AD task.
\begin{table}[htbp]
\centering
\begin{tabular}{|c|c|}
\hline
\textbf{User / Team Name} & \textbf{Test Score} \\ \hline
DeepBlueAI & \textbf{0.9405} \\ \hline
qwzhong & 0.9373 \\ \hline
SciDr (Us) & 0.9319 \\ \hline
del2z & 0.9266 \\ \hline
\end{tabular}
\caption{Comparison of AD results}
\label{table:compareAD}
\end{table}
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{ner_tag_vis.pdf}
\caption{A few erroneously tagged instances for AI.}
\label{fig:tag_vis}
\end{figure*}
\section{Discussion}
\label{sec:analysis}
\subsection{AI}
The best proposed method for the AI task involves the use of the following three main building blocks:
\begin{itemize}
\item SciBERT as the base model
\item BIOless variant
\item Modified blending technique or the blending method coupled with hard voting.
\end{itemize}
The reason for SciBERT performing better than the BERT model lies in the fact that the pretraining corpus is similar to our dataset. The hypothesis for using BIOless variants instead of the conventional technique seems to hold true (points c, d and e in Subsection \ref{subsec:results_ai}).
\begin{table}[htbp]
\centering
\begin{tabular}{|p{3cm}|c|c|c|}
\hline
\textbf{Model} & \textbf{F1} & \textbf{Precision} & \textbf{Recall}\\ \hline
Baseline & 0.8409 & 0.9131 & 0.7793 \\ \hline
SciBERT cased BIOless with hard voting & 0.9036 & 0.8987 & 0.9086 \\ \hline
Blending with mode ensembling & 0.9090 & 0.9097 & 0.9083\\ \hline
\end{tabular}
\caption{F1, Precision and Recall of some models used in AI Task}
\label{table:detail_score}
\end{table}
Ensembling has always helped in the domain of Machine Learning. The third block, viz. the modified blending technique, is a combination of two propitious methods, blending and hard voting, and ultimately gave the best results. The baseline method used by the organizers had a low F1, but its precision was quite good compared to the precision of the SciBERT cased BIOless model with hard voting. The natural way to employ the adroitness of the baseline model was to stack it (and some other better performing models) with the SciBERT cased BIOless model. As is visible in Table \ref{table:detail_score}, the blended model improved considerably, especially with respect to precision.
Figure \ref{fig:tag_vis} shows some of the sentences tagged incorrectly by the SciBERT model. Ideally, the analysis should have been done on the best model, but it is too complex to interpret. Looking at \textbf{DEV-297} and \textbf{DEV-42}, it is clear that the gold truths have some annotation flaws. HMM is clearly an acronym for Hidden Markov Models and still is not labelled. Similarly, RNN, CNN and WiFi are acronyms for Recurrent Neural Network, Convolutional Neural Network and Wireless Fidelity respectively, but only CNN is marked in the ground truth. Also, ``complicated neural network'' is not a full form but is used to describe the complexity of RNNs and CNNs. Our base model does well in predicting the right tags for these samples.
On the other hand, we find that in \textbf{DEV-1313} and \textbf{DEV-593}, the model has completely failed to identify the long forms, and also misidentified a few short forms. Two likely causes could be as follows:
\begin{itemize}
\item improper tokenization of the dataset
\item ``and", ``-", ``of" etc. in between long forms
\end{itemize}
\begin{table*}[htbp]
\centering
\begin{tabular}{|c|c|p{5cm}|c|c|c|}
\hline
\textbf{Id} & \textbf{Acronym} & \textbf{Text} & \textbf{Normal} & \textbf{Stage} & \textbf{Ensemble} \\ \hline
TS-633 & FM & Ultimately , once we select an FM , the ChI becomes a specific operator . & feature map & fuzzy measure & factorization machines \\ \hline
TS-811 & GS & Additionally , using WSE ( GS search ) we obtained 84.4 accuracy with an FPR of 0.157 and AUC value of 0.918 . & genetic search & google scholar 's & gold standard \\ \hline
TS-5682 & EL & Thus , with EL system ( ) , only two structures are possible for : ( i ) , and ( ii ) , . & external links & euler - lagrange & entity linking \\ \hline
\end{tabular}
\caption{Mismatch of predictions between SciBERT uncased, SciBERT uncased with 2 stage training and their soft ensemble.}
\label{table:mismatch}
\end{table*}
\subsection{AD}
The formulation of AD as a span prediction problem is quite efficient from the performance and computational expense points of view. A complete cross-validation run under the described experimental settings can be performed in 6 hours on average on an NVIDIA Tesla P100.
Speaking about the results, for the out-of-fold predictions of SciBERT uncased, we observe that the model is incorrect mainly for acronyms which do not have many occurrences in the task dataset. This motivated us to attempt instilling knowledge into our models via external data.
We first examine the differences between the test set predictions of SciBERT uncased, SciBERT uncased with 2 stage training and their ensemble (represented as Normal, Stage and Ensemble respectively) to understand the difference between the models and to find out which model is exhibiting more confidence in its prediction.
\begin{table}[htbp]
\centering
\begin{tabular}{|c|c|p{4cm}|}
\hline
\textbf{Id} & \textbf{Acronym} & \textbf{Text} \\ \hline
TS-5572 & LPP & The LPP can be briefly described as follows . \\ \hline
TS-5830 & GCN & Effect of both kernels added at end to get actual GCN output . \\ \hline
\end{tabular}
\caption{Instances lacking sufficient context for AD.}
\label{table:context}
\end{table}
We examine those samples where all three predictions are different (Table \ref{table:mismatch}). It can be observed that the predictions of SciBERT uncased seem quite appropriate as per the context, and the contributions from the Stage model change the final prediction. Overall, there are 92 instances in the test predictions where the three predictions do not all agree; these are the instances where the ensemble submission gets its test score boost.
We observe that some of the samples in the test set do not contain sufficient context which can help in acronym disambiguation. This can be an issue and it is difficult to say how the models will behave in such situations. Some of the samples are shown in Table \ref{table:context}. For the text with id \textbf{TS-5572}, the possible long forms of LPP are ``locality preserving projections" and ``load planning problem". Both the models predict one of the expansions and both the expansions seem relevant in the given context. Similar arguments can be given for the text with id \textbf{TS-5830}, where the models get confused between ``global convolution networks" and ``graph convolution networks".
Many of the instances in the test set are such that the long form expansion of the acronym is present in the text and the acronym is present within parentheses. Our models correctly predict the long form for most of these instances, but miss out on a few occasions. This motivated us to devise a post-processing for such instances, where we can directly check for such conditions and predict accordingly, overwriting the model predictions.
\section{Conclusion}
\label{sec:conclusion}
We present our approaches for Acronym Identification and Acronym Disambiguation in the scientific domain. The usage of SciBERT in both tasks is beneficial because of the similarity between the domain and the training corpus. We addressed AI as a tagging problem. Our experiments prove the usefulness of data transformation using BIOless tags, and the adroitness of blending incorporated with hard voting. We approached AD as a span prediction problem. Our experimental work demonstrates the effect of pretrained weights, external data, ensembling and post-processing. Our analysis provides some interesting insights into some of the shortcomings of the models as well as some of the flaws in the dataset annotation. For future work, we can experiment with data augmentation and observe the behaviour of the models for both AI and AD.
\section{Appendix}
The source code of our approaches for AI and AD can be found at:
\begin{itemize}
\item AI : \url{https://github.com/aadarshsingh191198/AAAI-21-SDU-shared-task-1-AI}
\item AD : \url{https://github.com/aadarshsingh191198/AAAI-21-SDU-shared-task-2-AD}
\end{itemize}
\section*{Acknowledgements}
We thank Google Colab and Kaggle for their free computational resources.
\section*{Acknowledgment}
This paper is the result of many years of work and contributions on various fronts from several of our colleagues. We thank Per Bardun, Daniel Glifberg, Antti Jaakkola, Anna Kåhre, Irfan Bekleyen, Yasin Yur, Beste Akkuzu, Oscar Ohlsson, Angelo Centonza, and Marc Mowler.
This work was partially funded by The Scientific and Technological Research Council of Turkey (TUBITAK), under the 1515 Frontier R\&D Laboratories Support Program, project no.~5169902.
This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.
\section{Background}
\label{bg}
\subsection{Mobile Networks}
\label{bg_mn}
Mobile networks have evolved from 2G in 1991 to 3G in 2001 and 4G in 2008. The most recent generation, 5G, was specified in 2018, with the first commercial deployments happening in 2020. A simplified overview of different generations of mobile networks is shown in Fig. \ref{fig:networks}. On a high level, a mobile network consists of UEs, i.e., the mobile phones that we use; the Radio Access Network (RAN), the network entities providing wireless radio access to the UEs; and the Core Network (CN), the set of network functions that, among other things, handle subscriptions, session and mobility management, and packet routing.
\begin{figure}[h]
\centering
\includegraphics[width=3.5in]{img/networks.pdf}
\caption {Simplified overview of mobile networks}
\label{fig:networks}
\end{figure}
3GPP \cite{3gpp} is the organization that standardizes mobile networks, including their architecture, protocols, and security. Mobile network operators then plan and deploy networks according to their needs, e.g., how many and what combination of base stations to install in an area. The actual deployment options of each generation, especially 5G, and the interworking between generations are complex; we will not go into their details, but briefly mention some RAN aspects relevant to this paper.
Historically, different generations of RAN offer radio access via a radio access technology (RAT) specific to that generation, i.e.,
\begin{itemize}
\item 2G RAT: GSM EDGE Radio Access (GERA)
\item 3G RAT: Universal Terrestrial Radio Access (UTRA)
\item 4G RAT: Evolved-UTRA (E-UTRA)
\end{itemize}
In 5G, however, two types of RAT can co-exist, one 4G RAT (E-UTRA) and another a new one, i.e.,
\begin{itemize} \item 5G RAT: New Radio (NR). \end{itemize}
The RAN uses base stations to offer these RATs. They are known as the Base Transceiver Station (BTS) in 2G, NodeB (NB) in 3G, Evolved NB (eNB) in 4G, and next-Generation NB (gNB) in 5G. There also exist the ng-eNB (Next-Generation eNB), which connects to a 5G core network, and the en-gNB (E-UTRA New Radio gNB), which connects to a 4G core network. These base stations support one or more cells, a cell being the smallest coverage area in which the base stations serve the UEs.
\subsection{False Base Station Attacks}
\label{bg_fbs}
False base station is a broad name for a radio device that sets out to impersonate a legitimate base station. Although the name says ``base station'', its attack capabilities have grown to also include impersonating UEs towards the mobile network. It is also known by other names such as IMSI catcher, Stingray, rogue base station, or cell site simulator. A logical illustration of false base station attacks is shown in Fig. \ref{fig:fbsd}.
\begin{figure}[h]
\centering
\includegraphics[width=3.5in]{img/fbsd.pdf}
\caption {Logical illustration of false base station attacks}
\label{fig:fbsd}
\end{figure}
One of the main attacks relates to privacy of users, in which an attacker either passively eavesdrops users' identifiers from the radio interface or actively obtains them by communicating with the UEs. The attacker then uses those identifiers to identify or track the users \cite{Shaik2016PracticalAA, Borgaonkar2018NewPT, Hussain20195GReasonerAP, Hussain2019PrivacyAT}. The attacker might also try to fingerprint user traffic \cite{Kohls2019LostTE}. Under stringent assumptions, a resourceful attacker may also exploit implementation flaws \cite{Rupprecht2020CallMM} or vulnerabilities in application layer protocols like Domain Name System (DNS) and Internet Control Message Protocol (ICMP) by altering carefully chosen parts of the data in the radio interface \cite{rupprecht-19-layer-two, Rupprecht2020IMP4GTIA}.
Another set of attacks relates to denial of service (DoS) on UEs and/or the mobile network. The attacker may do this by using certain messages that the UEs and the network accept without authentication \cite{Shaik2016PracticalAA, Kim2019TouchingTU}. The attacker may also create favorable radio conditions so that the UEs keep camping on the false base station, thereby being cut off from all incoming communications from legitimate base stations. Radio conditions created by the attacker may also trigger certain events in the legitimate network, like handover failures. Because of these events, some implementations in the network may take disruptive steps such as barring even the legitimate base stations, thereby triggering service disruption \cite{Shaik2018OnTI}.
In fraud attacks with a financial motive, the attacker may send spam or advertising SMS messages to UEs \cite{Li2017FBSRadarUF} or even try to impersonate them \cite{Chlosta2019LTESD, Rupprecht2020IMP4GTIA}. There could also be non-financial motives, in which the attacker may poison a UE's location \cite{Hussain2018LTEInspectorAS} or send public warning messages to create panic \cite{Lee2019ThisIY, Yang2019HidingIP}.
While false base stations could be used for the good of society, such as tracking down criminals or locating lost children, they could also be used to harm the functioning of society, with dire consequences like unauthorized surveillance, communication sabotage, unsolicited advertising, or even physical harm. Further, with the increase of connectivity over varieties of devices, false base stations may not just target mobile phones. Knight et al. \cite{hacking} mention using false base stations in their book on hacking connected cars.
It is also getting cheaper and easier to set up a false base station. On the hardware side, the overall cost is around a couple of thousand USD \cite{Dabrowski2014, Kohls2019LostTE, Borgaonkar2018NewPT}. On the software side, open source stacks are getting mature and popular \cite{srsLTE, oai}.
Therefore, the increased incentive for attackers due to ever-growing connectivity, together with the increased feasibility of deploying a false base station, are the main reasons why false base station attacks are more important now than ever. The topic has rightly received more attention on all fronts: media, hacker conferences, academia, standardization bodies, governments, law enforcement agencies, vendors, and operators.
\subsection{Existing Countermeasures}
\label{bg_ec}
While 2G remains the most vulnerable among all generations, several attacks have become impractical, or the level of difficulty for attackers has significantly increased, with newer generations of mobile networks. In particular, 5G has been designed with significant enhancements in terms of privacy and security against false base stations. For example, encryption of the permanent identifier (known as SUPI in 5G, equivalent to IMSI in earlier generations) makes it more difficult for false base stations to track users by eavesdropping on this identifier over the radio interface, and integrity protection of user traffic enables detection of data alteration by false base stations in the radio interface \cite{3gpp33501, nakarmiblog}. Nevertheless, attacks from false base stations are not yet completely eradicated. Some attacks are possible simply because newer networks are required to interwork with 2G networks. Other attacks may be possible merely because some security features were not activated. 3GPP is currently assessing if and what type of further enhancements can be made in 5G in terms of new protection and detection features \cite{3gpp33809}.
The feature that we deal with in detail through the rest of the paper is the detection of false base stations, whose aim is to make it significantly harder for false base stations to remain stealthy. Even though protection features are in place, detection is still important: detection is not an alternative to protection but complementary to it, and forms a fundamental function of cybersecurity \cite{NIST}.
\subsection{Related Work on False Base Station Detection}
\label{bg_rw}
We briefly mentioned existing false base station detection systems and their drawbacks in the introduction; here we give more details about them and their limitations. We classify them according to the methods they use. The first are UE-based detectors, which do the detection directly in the UEs (i.e., mobile phones). The second are crowd-sourced detectors, which collect information from a large number of UEs or dedicated sensors and do the detection in some central server independent of mobile network operators. The third are network-based detectors, which do the detection directly in the mobile network controlled by the mobile network operators.
\subsubsection{UE based Detectors}
\label{bg_ue}
Brenninkmeijer \cite{brenninkmeijer_2016} and Borgaonkar et al. \cite{Park2017WhiteStingrayEI} evaluated detector applications by setting up a test network and comparing how those applications behave. In \cite{brenninkmeijer_2016}, AIMSICD \cite{AIMSICD}, SnoopSnitch \cite{Snoopsnitch}, and Cell Spy Catcher \cite{SKIBAPPS} are analyzed. In addition to these, \cite{Park2017WhiteStingrayEI} also evaluated Darshak \cite{darshak} and GSM Spy Finder \cite{GALAN}. Both works use Android-based UEs, a software-defined radio application, and USRP hardware for the testing environment. Brenninkmeijer found that while those detectors provide some level of alarm to inform about local network configuration changes, they do not perform well even when they have high operational privileges on the UE. Therefore, the evaluated detectors were not recommended for the public. The authors of \cite{Park2017WhiteStingrayEI} state that the lack of root access prevents the detectors from detecting some of the attacks, and that in general none of the detectors were perfect at identifying false base stations. In \cite{Abodunrin2015DetectionAM}, Dare also proposed an approach that relies on SnoopSnitch and AIMSICD. Measurements were performed first in 3G mode and then in 2G mode by walking around an area while running active tests to identify abnormalities such as unusual power and cell identifiers. The authors conclude that the detections were not solid enough and suggest collaboration with mobile network operators.
Simula Research Laboratory conducted an investigation \cite{SimulaResearch} into the reported use of false base stations in Oslo by a daily newspaper \cite{norway1}. The evaluation was done on the data/alarms obtained from a specialized UE called Cryptophone \cite{cryptophone} and a measurement hardware/software called Delma. The investigation suggested that the data/alarms obtained from them had shortcomings and were therefore responsible for false alarms.
Alrashede et al. \cite{Alrashede2019IMSICD} proposed using cell fingerprinting based on cell identifiers and locations to detect the presence of false base stations. The detection could be done in mobile phones by using special software and publicly available cell information data. Their proposal was only theoretical.
\subsubsection{Crowd-sourced Detectors }
\label{bg_cd}
Dabrowski et al. \cite{Dabrowski2014} developed a dedicated stationary device equipped with a good antenna and placed it in the field. Their device scanned the area in passive mode for cell-related fingerprints to detect anomalies. They also developed a UE application, based on the Android public API, that uses baseband information and the built-in GPS receiver without root privileges. Their results showed that refinement is needed in cases where GPS reception is bad (e.g., underground) even though network coverage is good. SITCH \cite{AshWilson} and SeaGlass \cite{Ney2017SeaGlassEC} follow a similar approach offering sensor devices. Their initiative further became a project called FADe (Fake Antenna Detection Project) \cite{fade} to detect false base stations in Latin America. They used sensors installed in volunteers’ vehicles. The data gathered were aggregated into a city-wide view and analyzed to find anomalies.
Van Do et al. \cite{Do2015DetectingIU} proposed a solution that uses machine learning to detect abnormal behaviors caused by false base stations on a public data set. This study was extended with other machine learning methodologies in \cite{Thuan2016StrengtheningMN}, using a signature-based approach with different anomaly indicators like location updates, handover use cases, and the relationship between subscription and UE identifiers. Their experiments are based on a publicly available data set from Aftenposten \cite{norway2} and aim to show that there is potential in applying machine learning techniques.
FBS-Radar \cite{Li2017FBSRadarUF} uses crowdsourced data to detect and geolocate false base stations. The main threat FBS-Radar deals with is spam and fraud SMS messages launched by active attackers. The data collection is done by UEs and includes suspicious SMS messages with associated metadata, received signal strengths, cell identifiers, and MAC addresses for WiFi-connected UEs. These reports are sent to a cloud for analysis.
\subsubsection{Network-based Detectors}
\label{bg_nd}
Dabrowski et al. \cite{Dabrowski2016TheMS} discussed techniques from a mobile network point of view, e.g., detection of cipher downgrades, transmission delay, and unusual location identifiers. They used data from a network monitoring infrastructure at a real operator network.
Steig et al. \cite{Steig2016ANB} utilized measurement reports sent by UEs to 2G network. Their method analyzes the Absolute Radio Frequency Channel Number (ARFCN) and Base Station Identity Code of a neighbor cell (BSIC) from the measurement reports to identify cells belonging to a false base station.
\subsubsection{Limitations of earlier works}
\label{bg_lim}
UE-based detectors are prone to false positives because a UE, or a combination of them, cannot know the true state of the network at any given instant. A simple example: when the mobile network operator installs a new base station, all UE-based detectors will determine that it is a false base station because it was never seen before. This is an effectiveness issue.
Another drawback of UE-based detectors is that they almost always require modified UEs or privileged root access, which is not common and could be impossible for some users. This means that users must download and run a special application to collect and analyze measurements. Further, operating systems and baseband chips vary a lot among UEs, meaning that the same detector that works on some UEs may not work on others. This severely reduces the number of measurements accessible for analysis. This is a scalability issue.
In the case of crowd-sourced detectors, the UEs report the measurements to a central server, e.g., on the internet, for analysis. Otherwise, they share the same principle as UE-based detectors and hence suffer from the same effectiveness and scalability issues mentioned above.
Network-based detectors can be expected to perform better when it comes to analysis because, whereas UEs only have knowledge of their local state, a mobile network has knowledge of the global state of the system. A limitation of \cite{Dabrowski2016TheMS}, however, is that their system requires an additional pre-existing network monitoring infrastructure that can collect data from various protocols or network points and supply processed data. Such a network monitoring infrastructure may not be present or may have widely different capabilities among mobile operators, which greatly affects the detection mechanisms in \cite{Dabrowski2016TheMS}. A limitation of \cite{Steig2016ANB} is that it only covers the 2G RAT and, besides, its results have not been validated in a real operator network.
\section{Conclusion}
\label{conc}
We presented Murat, a network-based false base station detector, which is capable of detecting false base stations operating in multiple 3GPP Radio Access Technologies (RAT). By validating it in a lab experiment and a real operator trial, we showed that Murat does not require any modification to mobile phones and can work even if data is collected from only one type of RAT in mobile operators' network. These make Murat more effective than other detection systems that either rely on special software on the mobile phones or cover false base stations operating in a single RAT. We also discussed practical insights for a real-world deployment of Murat. Murat's approach was proposed to 3GPP and was adopted in the mobile network standards.
\section{Multi-RAT False Base Station Detector}
\label{murat}
\subsection{Overview}
Our multi-RAT false base station detector, which we call Murat, is illustrated in Fig. \ref{fig:murat}. It falls under the category of network-based detector and it builds on what has been briefly described in the blog post by Nakarmi et al. \cite{earlierblog}.
\begin{figure}[h]
\centering
\includegraphics[width=3.5in]{img/murat.pdf}
\caption {Murat, a multi-RAT false base station detector }
\label{fig:murat}
\end{figure}
An attacker may operate false base stations for one or more generations of mobile networks, which appear as cells with the corresponding RATs. In order to detect those false base stations, Murat comprises four main components, described below.
\subsubsection{Probes}
These are ordinary UEs. Murat capitalizes on the fact that a false base station attack is useless to an attacker if there are no UEs around it, and if UEs are present near it, then some of those UEs can be used as probes for data collection without installing any special software on them. Depending upon several factors, such as the false base station's transmission power, distance from UEs, signal strength of legitimate base stations, and radio conditions, some UEs may fall victim, while others will not. Those UEs which are sufficiently near the false base station to receive its signals but have not yet fallen victim to it observe false cells and report them, along with legitimate cells, to the operator's network. Another fact that Murat exploits is that even ordinary UEs generally support multiple RATs. For example, the majority of UEs that work on today's 4G networks can also connect to 3G and 2G. The same will likely continue in the future, i.e., newer UEs will support multiple RATs.
\subsubsection{Data Collectors}
These are entities in an ordinary RAN which can obtain the main data used in Murat, i.e., measurement report data sent by UEs. Generally, they are the legitimate base stations belonging to one or more generations which directly engage with UEs and receive reports from them using standard 3GPP procedures. There could be multiple Data collectors to collect measurement reports from different generation/RATs. However, to detect false base stations from a particular generation/RAT, it is not required that legitimate base stations from the same generation/RAT are used as Data collectors. It is so because the actual Probes are UEs that support multiple generations/RATs. Further, some other network functions could also be used as Data collectors even though they do not directly engage with UEs. For example, an operations and maintenance (O\&M) server that manages the RAN could assemble data received by one or more base stations.
\subsubsection{Auxiliary Data}
These are types of data other than the measurement reports that can be used in the detection of false base stations; they enrich or augment the measurement reports. One example of Auxiliary data is cell topology data, which contains information about base stations present in the operator's mobile network, like their locations, cells, and RAT types. The cell topology enables the mobile network to compare the UE's view of the mobile network with its own expected view. Another example is files containing customization parameters to be taken into consideration, such as priority timings for detection and thresholds related to signal properties. These parameters enable Murat to adapt or adjust its components when required. Likewise, there could be other Auxiliary data that enables the mobile network to identify that a certain phenomenon is abnormal, such as network event logs containing information about UEs being unreachable, abnormal load in base stations, and problems during cell configurations.
\subsubsection{Analyzer}
This component acts as the brain of Murat to detect false base stations and comprises multiple functions. One of its functions processes the main data (measurement reports) obtained from Data collectors together with the Auxiliary data. The Analyzer is able to take input from several Data collectors and different types of Auxiliary data. This means that even though multiple Data collectors may be used for different generations/RATs, a single Analyzer suffices. Other functions in the Analyzer can apply various strategies, e.g., based on rules or machine learning, which use the processed data and identify whether any information contained in it indicates the presence of false base stations. New strategies can be added to work on the processed data, e.g., sharing or utilizing threat intelligence to and from another operator's mobile network. It is pertinent to note that the Analyzer can be deployed either as part of the RAN or the CN according to the mobile network operator's choice, e.g., some operators may choose to deploy the Analyzer directly in base stations while others may choose to use a centralized server in their network.
In the sections below, we describe how these components of Murat work in two main steps called data collection and analysis.
\subsection{Data Collection Step}
The data collection step in Murat is a combination of procedures between its components that make the required data available to the Analyzer, as below.
\subsubsection{Data Reporting Procedures}
\label{section:meas_report}
Probes (UEs) and Data collectors (RAN) engage in these procedures, which take place in the radio interface and belong to the standard 3GPP Radio Resource Control (RRC) procedures enabling the measurement reporting mechanism. The measurement reporting mechanism is fundamental to all generations of mobile networks and is necessary even for normal operation; for example, it enables the mobile network to decide when radio conditions are such that a UE needs to be served by a different base station, possibly even by a different RAT. For brevity, we will stick to describing the measurement reporting mechanism using 4G and 5G terminology, as illustrated in Fig. \ref{fig:rrc_proc}.
\begin{figure}[h]
\centering
\includegraphics[width=3.5in]{img/rrc_proc.pdf}
\caption {RRC procedures carrying measurement configuration and reports }
\label{fig:rrc_proc}
\end{figure}
First, the RAN configures UEs in connected state for measurements using RRC messages, so-called reconfiguration and resume. These messages contain a measurement configuration that includes many parameters, two of which we briefly discuss as most relevant, namely, measurement objects and reporting configurations. Measurement objects (such as carrier frequency and cell identifiers) are the radio resources on which a UE is asked to perform measurements. A 4G network can configure UEs to do measurements on 4G/intra-frequency, 4G/inter-frequency, inter-RAT 5G/NR, 3G, and 2G. A 5G network can configure UEs to do measurements on 5G or inter-RAT 4G and 3G frequencies. Measurement objects in 4G cover more RATs than in 5G because mobility from 5G is restricted to 4G and 3G. Reporting configurations consist of reporting criteria and a format. In 5G, they also contain a reference signal type that indicates the reference signal the UE uses for beam and cell measurements. The reporting criteria are what trigger a UE to send a measurement report, which can be periodic or event-based (for example, a neighbor cell's signal getting better than some threshold). The reporting format specifies the quantities, e.g., numbers of cells, that the UE includes in the measurement report. The reporting configuration could also indicate reporting of the Cell Global Identifier (CGI) to get the full identifier of a cell. Finally, a measurement object is linked to a reporting configuration by what is called a measurement identifier, which identifies a measurement configuration. Multiple measurement identifiers can be used to link several measurement objects to several reporting configurations.
Next, the UEs measure and report according to the measurement configuration in an RRC message called measurement report. The standards mandate that this message is sent by the UE only after successful security activation. This means that the message is encrypted and integrity protected so that unauthorized parties or sniffers cannot read or modify the reports sent by the UEs. The report consists of measurement results identified with a measurement identifier so that they can be linked to the corresponding measurement configuration. As per the measurement configuration, the report may further consist of measurements on serving and neighbor cells, such as physical cell identifiers (PCI), reference signal received power (RSRP), reference signal received quality (RSRQ), and signal-to-interference-plus-noise ratio (SINR).
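To make the structure of these reports concrete, a minimal model of a parsed measurement report might look as follows; the field names are illustrative and do not correspond to the actual 3GPP ASN.1 encoding:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CellMeasurement:
    """One serving- or neighbor-cell entry from a UE measurement report."""
    pci: int                          # physical cell identifier
    rsrp_dbm: float                   # reference signal received power
    rsrq_db: float                    # reference signal received quality
    sinr_db: Optional[float] = None   # signal-to-interference-plus-noise ratio
    rat: str = "NR"                   # e.g. "NR", "E-UTRA", "UTRA", "GERA"
    cgi: Optional[str] = None         # cell global identifier, if reported

@dataclass
class MeasurementReport:
    """A measurement report linked to its configuration by meas_id."""
    meas_id: int
    serving: CellMeasurement
    neighbors: List[CellMeasurement] = field(default_factory=list)
```

Such a record is what the data processor function of the Analyzer would produce from the raw reports before any detection strategy is applied.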
As shown in Fig. \ref{fig:rrc_proc}, there is yet another procedure, called logged measurement configuration, which the RAN can use to configure capable UEs to collect logs even during idle or inactive states. For the UEs that were configured to perform logged measurements, the RAN can obtain the reports when the UEs come back to connected state by sending an RRC message called information request, and the UEs will return the reports in an information response message. We will not go into further specifics for brevity. Detailed information is available in 3GPP TS 36.331 \cite{3gpp36331} for 4G and 3GPP TS 38.331 \cite{3gpp38331} for 5G; see clause 5.5 on Measurements.
\subsubsection{Data Fetch Procedures}
The Analyzer engages in these procedures to obtain the data from Data collectors and the Auxiliary data. These procedures are rather logical and can be materialized in numerous, non-exclusive ways. One way is to use application layer protocols over TCP/IP connections, such as the SSH File Transfer Protocol (SFTP). Another way is to use a Network Attached Storage (NAS) or a database server. Tools like Rsync and Secure Copy (SCP) can also be used.
As mentioned earlier, the Analyzer can be deployed as part of the RAN. In such cases, when base stations play the role of both the Data collector and Analyzer components of Murat, these data fetch procedures become internal to the base stations.
\subsection{Analysis Step}
The Analysis step is performed by the Analyzer component of Murat. The overall goal of this step is to identify the presence of false base stations in a mobile operator's network. A high-level overview of the Analysis step is illustrated in Fig. \ref{fig:analysis_steps}. Measurement reports are the main data in the Analysis step. They are used to identify the views of the network from the UEs' perspective, e.g., how many and what types of cells they observed in a certain area. Auxiliary data, on the other hand, are used to form the expected view of the network, e.g., how many and what types of cells are in fact expected to be present in that area. These two views are compared, and if the UEs' views deviate from the expected view, such deviations are flagged.
\begin{figure}[h]
\centering
\includegraphics[width=2.5in]{img/analysis.pdf}
\caption {Overview of Analysis step}
\label{fig:analysis_steps}
\end{figure}
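The comparison between the UEs' view and the expected view can be sketched as follows, assuming cells are keyed by area and identified by (RAT, PCI) pairs (a simplification of the real cell topology data):

```python
def flag_deviations(observed_cells, expected_topology):
    """Flag cells that UEs reported but the operator never deployed.

    observed_cells:    dict mapping area -> set of (rat, pci) reported by UEs
    expected_topology: dict mapping area -> set of (rat, pci) from cell topology data
    Returns a dict mapping area -> set of unexpected (rat, pci) entries.
    """
    flagged = {}
    for area, cells in observed_cells.items():
        expected = expected_topology.get(area, set())
        unknown = cells - expected  # UEs' view deviating from the expected view
        if unknown:
            flagged[area] = unknown
    return flagged
```

A flagged entry is not by itself a verdict; it is the input to the detection strategies described next.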
One or more functions of the Analyzer component are executed in the Analysis step. Data processor function in Analyzer prepares the data for analysis by parsing the measurement reports obtained from Data collectors and Auxiliary data.
Data prepared by the data processor is then run through one or more strategies to detect false base stations. One strategy is to use a rule-based algorithm that examines selected fields in the measurement reports and flags suspicious neighbor cells as false. Some examples follow. Note that the following rules are only examples to give an idea of what type of rules may be suitable; they do not reflect universal rules suitable for all networks.
Example rules for intuition:
\begin{itemize}
\item \textbf{Setting}: Only the PCIs between 0 and 450 are allocated to legitimate cells in a region; \textbf{Rule}:\textit{ PCIs in range 0-450 $\rightarrow$ legitimate cell; otherwise $\rightarrow$ false cell }
\item \textbf{Setting}: PCIs between 400 and 410 are not allocated to any legitimate cells in a region; \textbf{Rule}:\textit{ PCIs in range 400-410 $\rightarrow$ false cell; otherwise $\rightarrow$ legitimate cell }
\item \textbf{Setting}: PCIs 312, 313, and 314 are lone cells in a remote region with no neighboring cells; \textbf{Rule}:\textit{ PCIs reported together with 312, 313, and 314 $\rightarrow$ false cell }
\item \textbf{Setting}: All cells other than 263 in a region are put to sleep during non-office hours; \textbf{Rule}:\textit{ PCIs other than 263 reported between 18:00-8:00 $\rightarrow$ false cell }
\item \textbf{Setting}: Historical data in a region show that signal strengths received by UEs are always less than -60 dBm; \textbf{Rule}:\textit{ RSRP $<$ -60 dBm $\rightarrow$ legitimate cell; otherwise $\rightarrow$ false cell }
\item \textbf{Setting}: Historical data in a region show that signal qualities received by UEs are never greater than -9 dB; \textbf{Rule}:\textit{ RSRQ $>$ -9 dB $\rightarrow$ false cell; otherwise $\rightarrow$ legitimate cell }
\item \textbf{Setting}: All 2G cells are phased out from a region; \textbf{Rule}:\textit{ 2G/GERA cells $\rightarrow$ false cell; otherwise $\rightarrow$ check other rules }
\item \textbf{Setting}: All 3G cells are phased out from a region; \textbf{Rule}:\textit{ 3G/UTRA cells $\rightarrow$ false cell; otherwise $\rightarrow$ check other rules }
\item \textbf{Setting}: Only three mobile network operators with codes 11, 12, and 13 are allowed to operate in a country; \textbf{Rule}:\textit{ mobile network code among (11,12,13) $\rightarrow$ legitimate cell; otherwise $\rightarrow$ check other rules }
\end{itemize}
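To give a concrete feel for how such rules compose, here is a minimal sketch of a rule chain over one reported neighbor cell; the dictionary keys and parameter names are illustrative, not 3GPP field names:

```python
def classify_cell(cell, params):
    """Apply example rules to one reported neighbor cell.

    Returns "false", or "unknown" when the cell should be passed on
    to other rules or strategies.
    """
    # Phased-out RATs: any such cell reported in this region is false.
    if cell["rat"] in params.get("phased_out_rats", set()):
        return "false"
    # PCI allocation: only a known range is used by legitimate cells.
    lo, hi = params.get("pci_range", (0, 1007))
    if not lo <= cell["pci"] <= hi:
        return "false"
    # Signal strength: historical RSRP in this region stays below a ceiling.
    if cell["rsrp_dbm"] >= params.get("rsrp_ceiling_dbm", 0.0):
        return "false"
    return "unknown"  # check other rules / strategies
```

The `params` dictionary plays the role of the customization parameters from the Auxiliary data, so the same rule chain can be reused across regions with different settings.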
The ranges, thresholds, and other parameters mentioned above can either be hard-coded in the rules or taken as input from customization parameters that are part of the Auxiliary data.
The rule-based strategy works particularly well when the relevant parameters are known clearly and accurately; for example, if a 2G network has been phased out, there cannot be any legitimate 2G cells visible to UEs. In those cases, this strategy is simple to implement, so the rule-based strategy can be the first step in a detection chain, before other strategies are executed to detect false base stations.
There could be other cases where rules are impractical because there are no reasonable parameters. For example, when legitimate cells in a wider region are considered, the RSRP/RSRQ values could vary from the minimum to the maximum of the allowed range, and all the allowed PCI values may be in use. In those cases, more intelligent strategies should be put in place, like ones using machine learning (ML) algorithms. Such strategies are left for future research.
In the following sections, we describe how we realized Murat and validated it through lab experiments and an operator trial.
\section{Future Research} \label{fr}
Our multi-RAT false base station detector could be augmented with additional features like the ones below. Each of them deserves a research topic of its own; nevertheless, we give some initial pointers for interested researchers.
\subsection{Smart distribution of measurements}
A rewarding feature to add would be a smart distribution of measurement load across UEs. Although the network can practically just keep its existing measurement configuration setup and analyze existing data, it could benefit more from being able to actively set up new configurations, e.g., by choosing timings and locations for the collection of particular types of measurements to best fit its detection needs. However, it is important to keep the demand on, and effect on, UEs to a minimum in terms of service interruption or battery consumption. This is where smart distribution could be beneficial, for example, by distributing the measurement reporting load across the UEs: partitioning the cells to be measured and then configuring each UE to provide measurement reports only for the cells in one of these partitions.
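As a rough sketch of such partitioning, assuming for illustration that cells can simply be split round-robin across reporting UEs:

```python
# Minimal sketch of distributing measurement load: split the cells to be
# measured across UEs so that each UE reports on only one partition.
# A real scheduler would also weigh battery state, location, and timing.
def partition_cells(cell_ids, ue_ids):
    """Round-robin assignment; returns {ue_id: [cells to measure]}."""
    assignment = {ue: [] for ue in ue_ids}
    for i, cell in enumerate(sorted(cell_ids)):
        assignment[ue_ids[i % len(ue_ids)]].append(cell)
    return assignment
```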
\subsection{Enriched measurement reports}
Another beneficial feature would be enriched measurement configuration and reporting. By enriched, we mean support for collecting new information about camped and neighbor cells that could enhance detection of false base station attacks. For example, the state of the system information broadcast messages received by UEs could be analyzed in the network to detect whether an attacker tampered with information like cell barring, support for IP Multimedia Subsystem (IMS) emergency, system information scheduling, or the neighbor cell list. Additionally, logged measurements could be enriched to support collection of the following information: the number of reject messages the UE has received; the presence of erratic radio signals not associated with any normal reference signals; the presence of signals associated with normal reference signals but without any readable system information; and the presence of signals associated with normal reference signals and with system information, but with wrong information that makes it impossible to access the network.
\subsection{Machine learning based detection}
Machine learning based approaches operating on measurement reports should be investigated. Since false base stations usually transmit at higher-than-normal signal strength in order to lure nearby UEs, they would most likely induce an inherent change in the surrounding radio environment. Hence, it is worth researching whether machine learning models could be trained to detect the disturbances that false base stations induce in terms of signal properties.
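As a toy illustration of this direction (not a method we implemented or evaluated), a per-cell baseline of historical signal strength could be monitored for strong deviations:

```python
# Illustrative sketch: flag a cell when a new RSRP sample deviates
# strongly (z-score) from the historical distribution for that cell.
# The threshold and the idea of per-cell baselines are assumptions.
from statistics import mean, stdev

def rsrp_anomaly(history, observed, z_threshold=3.0):
    """history: past RSRP samples for a cell; observed: a new sample."""
    if len(history) < 2:
        return False  # not enough data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > z_threshold
```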
\subsection{Post-detection actions}
There could also be research on reactive actions (at the protocol level or otherwise) to be taken by an operator after detection of a false base station. This may be a mix of implementation-specific and standardized features, e.g., automatic positioning and containment of detected false base stations.
\section{Introduction}
\label{intro}
Mobile networks provide a fundamental part of our everyday communication, delivering wireless internet access and services on a global scale. They are also increasingly considered critical infrastructure \cite{criticalUK, criticalUSA}. Therefore, it is crucial to protect them from potential attacks. An important attack vector is the radio interface between the mobile phone and the base stations of a mobile network. Radio devices that exploit this attack vector by impersonating a mobile network's base station towards a mobile phone are often referred to as \textit{false base stations}. These devices come in many flavors, some of which also impersonate a mobile phone towards the mobile network.
Attacks that can be performed using false base stations broadly fall into the following categories: privacy compromise of mobile phone usage, denial of service (DoS) on mobile phones, DoS on the mobile network, and fraud. The efficacy of these attacks, however, varies greatly between different generations of mobile networks, as each newer generation becomes more resilient than the earlier ones. Nevertheless, for many reasons, such as legacy networks that have outlived their time and interworking between a wide variety of networks, the mobile network industry is not fully protected against all types of attacks from false base stations. Detection of and protection against false base stations is therefore an important topic for the mobile network industry and society as a whole.
Over the past couple of years, a number of systems for detecting false base stations have been proposed and prototyped. Most of these implement a data collection capability in the mobile phone and either perform analysis on the collected data in the phone itself or send the collected data to a central server for analysis. We refer to the first type as User Equipment (UE)-based detectors, e.g., \cite{AIMSICD, Snoopsnitch, cryptophone}, and to the second type as crowd-sourced detectors, e.g., \cite{Ney2017SeaGlassEC, fade, Li2017FBSRadarUF}. What is common to these types is that they determine whether a false base station is present by making use of the view of the network from a mobile phone's perspective. This view is by its very nature limited in comparison to the view from the network's perspective. Not only does a mobile network have knowledge of the global state of the system, whereas a mobile phone only has knowledge of its local state, but the mobile network also has more knowledge of a mobile phone's state than the mobile phone itself. The mobile phone only has sufficient knowledge of its state to operate when being commanded by the network to take certain actions.
Another drawback of these detection systems is that they require modified mobile phones. This means that end users must download and run a special application to collect and analyze measurements or report the measurements to a central server on the internet for analysis. This severely reduces the number of measurements accessible for analysis.
A few false base station detectors are network-based and rely on information collected by the mobile network. However, they either require a pre-existing network monitoring infrastructure \cite{Dabrowski2016TheMS} or focus on a single 3GPP Radio Access Technology (RAT) \cite{Steig2016ANB}.
In this paper, a system addressing the issues above -- \emph{Murat} -- is proposed. Murat is a network-based system for detecting false base stations that can operate on multiple 3GPP RATs, without requiring any modification to mobile phones or separate monitoring infrastructure.
Murat builds upon our earlier work, a blog post \cite{earlierblog}. This paper extends the blog post in three ways. Firstly, it enriches the content of the blog post with background and a discussion of related work. Secondly, it elaborates the system description, which the blog post gives only briefly. Lastly, it describes how the system was validated in a real operator trial.
Murat is designed as a network functionality making use of the global state incorporating information about all connected mobile phones' states, as well as knowledge about the mobile network state and its deployment, configuration and execution history. This gives an information advantage over previously proposed systems.
The core idea underlying our system is to make use of this information advantage by comparing the view of mobile phones connected to a mobile network with the views that the mobile network intends the mobile phones to have. If there is a discrepancy between certain parts of the mobile phones' views and what the network expects, it is an indication that a false base station may be present. This strategy is made possible by the fact that mobile phones regularly report their local views to the mobile network as part of normal operation. Without this reporting, the network would not be able to maintain the wireless connections to the mobile phones.
To give a simple but illustrative example of the core idea, assume the mobile network uses five legitimate base stations configured to serve an area. Each of these base stations needs unique identifiers, e.g., to facilitate handovers of mobile phones as they move in and out of the radio coverage of different base stations. As the mobile phones move in the area, they measure signal strengths for different base stations and report these measurements to the network. While it is difficult for a single mobile phone, or even a collaborating set of mobile phones, to determine whether any of the measured base stations are legitimate or false, the mobile network has information about which base stations are configured to operate in the area, what combinations of signal strengths a mobile phone can be expected to measure, which identifiers should exist in the area, and so on. If these parameters deviate too much and too frequently with respect to some thresholds, it indicates that a false base station may be present. Depending on parameter and threshold choices, false positives and false negatives may occur, so system tuning is required for optimal performance.
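The core comparison in this example can be sketched as follows; the identifier values and the sighting threshold are illustrative, and a real deployment would tune such thresholds as discussed above.

```python
# Hedged sketch of the core idea: compare cell identifiers reported by
# UEs against those the network expects in the area, and flag an
# identifier once unexpected sightings exceed a tunable threshold.
from collections import Counter

def detect_discrepancies(reports, expected_ids, min_sightings=3):
    """reports: iterable of cell identifiers reported by UEs."""
    unexpected = Counter(r for r in reports if r not in expected_ids)
    return {cid for cid, n in unexpected.items() if n >= min_sightings}
```

Raising \texttt{min\_sightings} trades false positives for false negatives, mirroring the threshold tuning described above.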
We evaluated the effectiveness and complexity of deploying Murat by performing a lab experiment and a real operator trial. The lab experiment included a controlled setup with multiple base stations using different 3GPP RATs (2G and 4G) and a mobile phone in a Faraday's cage. This showed the important aspect that even though a mobile phone is connected to a specific 3GPP radio access technology, say 4G, it still can be configured to report measurements for other 3GPP radio access technologies, say 2G. This makes it possible to deploy Murat covering one radio access technology and still be able to detect false base stations operating on other radio access technologies. The operator trial included running Murat in a real operator's 4G network with a planted false base station built using publicly available tools. Both the lab experiment and operator trial showed efficacy of Murat on seamlessly integrating to mobile networks and being able to flag suspicious discrepancies. We proposed our approach to the 3GPP standardization organization, which develops standards for mobile networks. It has been adopted as part of the 5G specifications.
We summarize our main contributions as follows:
\begin{itemize}
\item We present a network-based system for detecting false base stations that operate on any 3GPP radio access technology (RAT), without requiring any modification to mobile phones.
\item We verified the detection properties of the system both in a controlled lab experiment and in a real operator trial. We also share insights from these that provide guidance for real-world deployments.
\item We proposed the approach to 3GPP, and they have adopted it in the mobile network standards.
\end{itemize}
The paper is structured as follows. We provide background information about mobile networks, false base stations, existing countermeasures, and what has been done in the detection area in Section \ref{bg}. We introduce our system, a multi-RAT false base station detector called Murat, in Section \ref{murat}. We describe the lab experiment in Section \ref{labex} and the operator trial in Section \ref{optr}. We continue with describing the impact of our work on standardization in Section \ref{std}. The effectiveness and limitations of Murat are discussed in Section \ref{eff}. We provide pointers to future research on this topic in Section \ref{fr}, and we conclude the paper in Section \ref{conc}.
\section{Effectiveness and Limitations}
\label{eff}
In this section, we revisit some aspects and discuss Murat's effectiveness as well as limitations that come either from Murat's design or specific implementation choices.
\subsection{Data collection}
\label{eff_dc}
Since Murat fundamentally relies on measurement reports from UEs, it functions as long as at least one or a few UEs are connected to the legitimate network. If a false base station lures all UEs so that no UE sends measurement reports to the legitimate network, then Murat will not function. We note that a false base station acting in this manner for a long period of time would result in no UEs connecting to the legitimate base stations in the area. In a relatively well populated area, this would in itself be an indication that something in the area is misbehaving and that an investigation of the cause is necessary.
Measurement reports used in Murat are part of standard 3GPP RRC messages, which are encoded in the ASN.1 Unaligned Packed Encoding Rules (UPER) format. Therefore, parsing those raw measurement reports would work gracefully even in a multi-vendor deployment of a mobile operator's network. Nevertheless, some integration effort may be necessary to handle cases where some vendors' base stations provide those reports with proprietary flavors, e.g., already-decoded measurement reports in formats like JSON or CSV, with or without compression.
Cell topology data with information on legitimate base stations is important Auxiliary data that is useful in the analysis. Even though there are some 3GPP standards [1] on data formats like XML, it is likely that different vendors have their own extensions; therefore, the cell topology parser would need to adapt accordingly. It is also likely that parsing other Auxiliary data, like customization parameters and network event logs, would need vendor-specific adaptations.
\subsection{Multi-RAT detection}
\label{eff_Multirat}
Our lab experiment and operator trial prove that the general principle of using unmodified ordinary UEs to report their views and comparing them to the network's view works. An important aspect also shown in the lab experiment is that even though a UE is connected to a specific 3GPP RAT, it can still be asked to report measurements for other 3GPP RATs. Hence, in principle, mobile network operators could deploy the Data collector component of Murat in any one generation and detect false base stations in all generations. Note that the Analyzer component of Murat can be the same regardless of which generation the Data collectors are deployed in.
Nevertheless, it is important to note that the 3GPP standards currently do not allow a 5G network to directly interwork with a 2G network, for reasons of security isolation among others. This means that a UE in a 2G network cannot directly switch to a 5G network and vice versa. Consequently, 2G measurements are not specified in a 5G network's measurement reports and vice versa. The same is true for a 3G network, but in only one direction: while a UE can switch from a 5G network to a 3G network, the opposite is not allowed. At the time of writing, only the 4G network can fully interwork with all generations. Therefore, as of today, a mobile network operator would gain optimal benefit from Murat by deploying its Data collector to collect measurement reports from 4G, since measurement reports in a 4G RAT can contain 2G, 3G, 4G, and 5G measurements. However, when the Data collector is deployed for only a single generation/RAT, there could be cases where Murat does not detect false base stations in another generation/RAT. Consider the case where the Data collector is deployed only for the 4G RAT and, in a certain area/time, all UEs support only the 2G RAT, e.g., are configured to operate in 2G-only mode. Since there are no UEs connected to the operator's 4G network in that area/time, Murat will not receive any measurement reports at all, and an operational 2G false base station will go undetected. We consider this a very rare and improbable case, since it is not likely that all users configure their UEs to use a lower-generation (and slower) network. It could also be the other way round: all UEs support (or have enabled) only the 4G RAT in a certain area/time. In this case, Murat will not get any 2G RAT measurement reports, meaning that 2G false base stations in that area/time will not be detected.
We consider this to be arguably fine, because with no UEs supporting the 2G RAT there is no victim, and the 2G false base station should therefore be almost useless to the attacker.
\subsection{Operator specific detection}
\label{eff_op}
When deployed in one operator's network, Murat detects false base stations running on frequencies operated by that operator, but not on those operated by other operators.
\subsection{Rule-based strategy}
\label{eff_rule}
The rule-based strategy of checking PCIs, which we implemented, works well in scenarios where the attacker is forced to operate its false base station with a PCI not used by the legitimate base stations (e.g., to maintain a stable attack, as indicated in \cite{rupprecht-19-layer-two}). However, it is possible in principle that a resourceful attacker uses advanced techniques to evade this rule-based detection strategy by operating on the same PCIs used by the legitimate cells while still maintaining a stable attack. In order to detect such false base stations, a different strategy utilizing other innate properties of radio communication, e.g., signal strength and quality, would be required; this is a topic for a separate study.
We want to stress that static rules are not suitable when the cell topology changes frequently, e.g., due to operators using features like automatic neighbor relations (ANR). Manual maintenance of these rules is not practical either, especially for large-scale deployments. Therefore, when a rule-based strategy is used, the rules should be implemented such that they automatically reflect up-to-date cell topology information, e.g., by updating the rules from a (near) real-time database.
\subsection{Neighbor-of-Neighbor technique}
\label{eff_neig}
The neighbor-of-neighbor technique that we implemented in the operator trial served its purpose (see Section \ref{optr}.C). However, the following points should be kept in mind.
Neighbor-of-neighbor cell information must take frequencies into account to be effective. This is because even though a PCI uniquely identifies a cell in a given area operating on a specific frequency, the same PCI may legitimately identify a different cell in the same area that operates on a different frequency. Consequently, considering any cell with a certain PCI as an acceptable neighbor on all frequencies can lead to a false negative when a false base station uses the PCI of a legitimate neighbor but on a different frequency. For example, a PCI X could be a neighbor on frequency f1 but not on frequency f2; considering PCI X to be a neighbor on f2 would lead to a false negative if a false base station uses PCI X on f2.
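In code form, the point is simply that the membership test must use the (frequency, PCI) pair rather than the PCI alone; the frequency labels and PCI values below are illustrative.

```python
# Sketch of a frequency-aware neighbor check: a PCI is only an expected
# neighbor on the specific frequency it is configured for.
def is_expected_neighbor(allowed, freq, pci):
    """allowed: set of (frequency, pci) pairs valid in the area."""
    return (freq, pci) in allowed

# Illustrative data: PCI 204 is legitimate on f1 only.
allowed = {("f1", 204), ("f2", 17)}
```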
Increasing the hop count to high values (e.g., neighbor-of-neighbor-of-neighbor or more) means that even the cells which are geographically far could end up being considered as valid neighbors. This defeats the concept of neighbor and would produce false negatives.
Further, although unlikely, it cannot be ruled out that a neighborhood contains a standalone legitimate cell that has no handover relation with any other cell whatsoever. Such standalone cells would produce false positives.
Therefore, using the actual geolocation of cells as the first choice for identifying neighbors would be a better approach.
\section{Standardization}
\label{std}
We first gave our input \cite{S3-170463} to 3GPP's group responsible for security during the study phase of 5G security. Later, when the first set of 5G specifications (called Release 15) were getting finalized, we proposed to include our approach in the formal 5G security standard. After also getting vetted \cite{S3-171568, R1-1711997, R2-1709980, R4-1711318} by other groups in 3GPP who work with RAN aspects, it was adopted into an informative annex of the 5G security specification 3GPP TS 33.501 \cite{3gpp33501}.
\section{Operator Trial}
\label{optr}
\subsection{Environment}
\label{optr_env}
After the lab experiment, we collaborated with a real operator and conducted a trial in their 4G network. The trial environment is shown in Fig. \ref{fig:op_trial}. The basic components in the operator trial are the same as in the lab experiment, since both adhere to Murat's design. We will therefore only point out the important peculiarities of the operator trial.
\begin{figure}[h]
\centering
\includegraphics[width=3.5in]{img/op_trial.pdf}
\caption {Operator trial environment}
\label{fig:op_trial}
\end{figure}
The trial was conducted in an open area, without any Faraday's cage, at a time chosen by the operator, and multiple legitimate base stations operated in the area. Users would pass by that area, meaning that the UEs (acting as Probes) were ordinary UEs. The trial had only one standalone base station acting as a "planted" false base station, used solely for the purpose of the test. The planted 4G eNB was assembled using publicly available tools. Its hardware part was a USRP B210 \cite{usrp_b210}, a software defined radio (SDR) from Ettus Research \cite{ettus}, and its software part was the open-source LTE software suite srsLTE \cite{srsLTE}. These components are popular among researchers working with false base station attacks. The planted 4G eNB was turned on a couple of times with specific PCI values that were not in use by the operator's legitimate base stations in the closest vicinity. We configured the parameters \textit{dl\_earfcn}, \textit{mcc}, \textit{mnc}, \textit{tac}, \textit{enb\_id}, \textit{cell\_id}, and \textit{phy\_cell\_id}, and executed the eNB part of srsLTE as-is; we did not make any modifications to these hardware or software components. Special care was taken when operating the planted 4G eNB: whenever it was turned on, it stayed on for only a few minutes, and it performed no attack other than announcing itself as a cell belonging to the operator's network. This was very important in order to minimize any unintentional inconvenience to nearby users.
The expected outcome of this trial was that Murat would detect all the PCI values announced by the planted 4G eNB as false, because none of them were supposed to be present in the location where it was operated.
\subsection{Data Collection Step}
\label{optr_dc}
The data reporting and fetch procedures were similar to the lab experiment, except that the measurement configuration in the operator's network was untouched. The cell trace files contained measurement reports from legitimate eNBs in the area where our planted 4G eNB was placed and during the time covering its operation. A summary of the data collection is given in Table~\ref{table:datacollection}.
\begin{table}[h]
\caption{Summary of data collection}
\label{table:datacollection}
\centering
\begin{tabular}{|c||c|}
\hline
number of eNBs & 5\\
\hline
total number of 4G cells & 36\\
\hline
total number of 4G measurement reports & 7739\\
\hline
number of unique PCIs collected & 156\\
\hline
\end{tabular}
\end{table}
\subsection{Analysis Step}
\label{optr_an}
Similar to the lab experiment, the Analysis step was performed in a separate server and a rule-based strategy was used. However, we only considered the 4G RAT and therefore it was only the reported PCIs that were checked.
Some extra treatments were also needed in the real operator network which we describe next. Fig. \ref{fig:analysis_trial} depicts the overall analysis step in the operator trial.
\begin{figure}[h]
\centering
\includegraphics[width=3.5in]{img/fbsd_detection_fr.pdf}
\caption {Illustration of Analysis step in the operator trial}
\label{fig:analysis_trial}
\end{figure}
Unlike in the lab experiment, a strategy of flagging a cell with a certain PCI whenever that PCI does not exist in the cell topology would not work. This is because the total number of PCIs available for use is small (504 in 4G) and they are only locally unique. Since a real operator network has more cells than that, all the PCIs would be used at least in some region and would appear in the cell topology. Hence, in the real operator network, the cell topology parser first needed to identify the neighbor PCIs for each cell.
The topology file that we received from the operator did not contain the actual geolocation of cells, meaning that there was no obvious metric for defining neighbor cells. However, it contained handover relations between cells. When we did an initial test of Murat using a similar topology file, considering only the cells with handover relations as legitimate neighbors, Murat flagged 63 neighbor PCIs as false neighbors. These were false positives -- we expected 0 false neighbors -- which can be explained by the fact that although handover relations can be used to identify some neighbor cells, they are not sufficient to identify all of them. Even if a neighbor cell is geographically close to a cell, it does not necessarily have a direct handover relation with that cell. In other words, it is practically possible that no direct handover relation exists between two geographically nearby cells because some other cell is closer to both of them and is the handover candidate for both.
In order to address this issue identified during initial testing, we devised a technique to identify which cells are geographically close to each other. In this technique, for each cell, we defined the neighbors to be the combination of cells with immediate handover relations plus cells reachable through handover relations after N additional hops. In other words, the N-hop neighbor-of-neighbor was also considered a neighbor. Using this approach in our initial test resulted in a sharp decrease in false positives: to 1 with N = 1, and to 0 with N = 2. We then tested this approach using the topology file for the trial and got 0 false positives with N = 1. We note that the choice of N is not strict; its effectiveness and limitations are discussed in Section \ref{eff}.E.
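A minimal sketch of this N-hop expansion over handover relations follows; the cell names are made up, and the actual implementation details in the trial may differ.

```python
# Sketch of the N-hop neighbor-of-neighbor expansion: start from a
# cell's direct handover relations and expand the set N more hops.
def expand_neighbors(handover, cell, extra_hops):
    """handover: {cell: set of cells with a direct handover relation}."""
    frontier = set(handover.get(cell, set()))
    neighbors = set(frontier)
    for _ in range(extra_hops):
        frontier = {n for c in frontier for n in handover.get(c, set())}
        frontier -= neighbors | {cell}   # keep only newly reached cells
        neighbors |= frontier
    return neighbors

# Example handover relations (illustrative cell names): A-B-C-D chain.
handover = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}
```

With \texttt{extra\_hops = 0} only direct relations count, while each increment admits one further hop, which is why large N eventually admits geographically distant cells.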
The rest of the Analysis step was to compare the parsed measurement reports against the parsed cell topology containing each cell's identified neighbor PCIs. All the PCIs that the planted 4G eNB was operating on were flagged.
\section{Lab Experiments}
\label{labex}
\subsection{Environment}
\label{labex_env}
We tested Murat in one of our test labs. The main purpose was to verify that it is possible to detect false base stations of different generations even though the network part of Murat is only implemented using measurement reports received by a 4G base station.
Our lab experiment setup consisted of an ordinary UE and a legitimate 4G network with one 4G base station (eNB) operating multiple 4G cells. The UE connected to this legitimate base station and browsed the Internet regularly.
There were two other standalone base stations (one cell each), one of which was a 4G eNB and another was a 2G BTS. They were standalone in the sense that they were not connected to any core network and were merely broadcasting cell information to announce their presence. The standalone 4G eNB was operating a 4G cell with the PCI 204 which was not used in the legitimate 4G network.
The expected outcome of this experiment was that Murat would detect the standalone 4G cell as false, because a 4G cell with PCI 204 was not supposed to be present. It was also expected that Murat would detect the standalone 2G cell as false, because the network was not operating any 2G cells.
The lab setup is shown in Fig. \ref{fig:lab_exp_setup}. We used a Faraday's cage such that the UE was physically put in the cage while radio from the base stations (technically, cells) was fed in via RF cables.
\begin{figure}[h]
\centering
\includegraphics[width=3.5in]{img/lab_exp_setup.pdf}
\caption {Lab experiment setup}
\label{fig:lab_exp_setup}
\end{figure}
In this setup, the UE was the Murat's Probe, the legitimate 4G eNB was its Data collector, and a separate server was its Analyzer. In the following sections, we describe the data collection and analysis steps in Murat.
\subsection{Data Collection Step}
\label{labex_dc}
The data collection step comprised data reporting procedures between the UE and the 4G eNB, and data fetch procedures between the 4G eNB and the Analyzer.
Standard 3GPP RRC procedures as described earlier in Section \ref{section:meas_report} were used as the data reporting procedures. Both 4G and 2G measurement configurations were sent by the 4G eNB to the UE.
Complete measurement configurations (with various information elements) from our experiments are shown in Fig. \ref{fig:4g_meas_conf} and Fig. \ref{fig:2g_meas_conf} for the sake of completeness. We do not explain all the information elements for brevity; readers are referred to \cite{3gpp36331} for details. The information elements relevant to this paper are discussed below.
Fig. \ref{fig:4g_meas_conf} shows an example 4G configuration. This configuration is identified by measurement identifier 4, which links together measurement object 1 and reporting configuration 4. Among others, the carrier frequency value 100 indicates E-UTRA Absolute Radio Frequency Channel Number (EARFCN) 100, which means band 1; and event A3 denotes the event in which a neighbor cell becomes better than the serving cell by a configured offset.
\begin{figure}[h]
\begin{center}
\begin{lstlisting}
RRC {
pdu value DL-DCCH-Message ::= {
message c1 : rrcConnectionReconfiguration : {
rrc-TransactionIdentifier 1,
criticalExtensions c1:rrcConnectionReconfiguration_r8:{
measConfig {
measObjectToAddModList {
MeasObjectToAddMod {
measObjectId 1,
measObject measObjectEUTRA : {
carrierFreq 100,
allowedMeasBandwidth mbw6,
presenceAntennaPort1 FALSE,
neighCellConfig '10'B,
offsetFreq dB0}}},
reportConfigToAddModList {
ReportConfigToAddMod {
reportConfigId 4,
reportConfig reportConfigEUTRA : {
triggerType event : {
eventId eventA3 : {
a3-Offset-6,
reportOnLeave FALSE},
hysteresis 2,
timeToTrigger ms40},
triggerQuantity rsrp,
reportQuantity both,
maxReportCells 4,
reportInterval ms480,
reportAmount r1}}},
measIdToAddModList {
MeasIdToAddMod {
measId 4,
measObjectId 1,
reportConfigId 4
}}}}}}}
\end{lstlisting}
\end{center}
\caption {Example 4G measurement configuration}
\label{fig:4g_meas_conf}
\end{figure}
Similarly, Fig. \ref{fig:2g_meas_conf} shows an example 2G configuration, in which we additionally illustrate how the 4G eNB, after learning that some 2G cell exists, can instruct the UE to measure additional information on that 2G cell. In this example, the 2G configuration identified by measurement identifier 3 requests the Cell Global Identifier (CGI) of a particular 2G cell. The CGI is the globally unique identifier of a cell, which among other fields consists of the Mobile Country Code (MCC) and the Mobile Network Code (MNC).
The eNB was configured to save the measurement reports received from the UE as log files, also called cell traces. As part of the data fetch procedure, these cell traces were fetched from the eNB using our internal tool and saved in a shared network folder accessible to the Analyzer. Additionally, we created a cell topology file in CSV format and saved it in the shared network folder. This cell topology file, which was the Auxiliary data, contained the MCC, MNC, RAT types, and cell identifiers of the legitimate cells in the network. In this setup, only the 4G RAT was legitimate.
\begin{figure}[h]
\begin{center}
\begin{lstlisting}
RRC {
pdu value DL-DCCH-Message ::= {
message c1 : rrcConnectionReconfiguration : {
rrc-TransactionIdentifier 3,
criticalExtensions c1:rrcConnectionReconfiguration_r8:{
measConfig {
measObjectToAddModList {
MeasObjectToAddMod {
measObjectId 2,
measObject measObjectGERAN : {
carrierFreqs {
startingARFCN 12,
bandIndicator dcs1800,
followingARFCNs explicitListOfARFCNs:{}},
offsetFreq 0,
ncc-Permitted '11111111'B,
cellForWhichToReportCGI {
networkColourCode '111'B,
baseStationColourCode '011'B}}}},
reportConfigToAddModList {
ReportConfigToAddMod {
reportConfigId 3,
reportConfig reportConfigInterRAT : {
triggerType periodical : {
purpose reportCGI},
maxReportCells 1,
reportInterval ms1024,
reportAmount r1}}},
measIdToAddModList {
MeasIdToAddMod {
measId 3,
measObjectId 2,
reportConfigId 3
}}}}}}}
\end{lstlisting}
\end{center}
\caption {Example 2G measurement configuration (with CGI request for a particular 2G cell)}
\label{fig:2g_meas_conf}
\end{figure}
\subsection{Analysis Step}
\label{labex_an}
This step was performed at the Analyzer, in a separate server, as follows. One function in the Analyzer parsed the cell topology, i.e., the Auxiliary data stored as a CSV file in the shared network folder. Another function used an internal tool to parse the cell traces, which were also stored in that folder. From the parsed cell traces, RRC measurement reports for the 4G RAT (E-UTRA) and the 2G RAT (GERAN) were obtained.
Similar to the measurement configurations, the complete measurement reports from our experiments are shown in Fig. 9 and Fig. 10 for the sake of completeness. However, we discuss only the information elements relevant to this paper.
An example 4G report is shown in Fig. \ref{fig:4g_meas_rep}. The \textit{measResultListEUTRA} is part of measurement identifier 4, which relates to the 4G configuration discussed earlier. The UE reported two neighbor 4G cells with their PCIs and signal properties. Next, a rule-based strategy was used to check whether the reported PCIs and RAT types were as expected: the parsed measurement reports were compared against the parsed cell topology. Since the topology did not contain any 4G cell assigned the PCI 204, the reported 4G PCI 204 was flagged as belonging to a false base station.
\begin{figure}[h]
\begin{center}
\begin{lstlisting}
RRC {
pdu value UL-DCCH-Message ::= {
message c1 : measurementReport : {
criticalExtensions c1 : measurementReport_r8 : {
measResults {
measId 4,
measResultPCell {
rsrpResult 41,
rsrqResult 33},
measResultNeighCells measResultListEUTRA : {
MeasResultEUTRA {
physCellId 204,
measResult {
rsrpResult 41,
rsrqResult 2}},
MeasResultEUTRA {
physCellId 366,
measResult {
rsrpResult 41,
rsrqResult 5
}}}}}}}}
\end{lstlisting}
\end{center}
\caption {Example 4G measurement report}
\label{fig:4g_meas_rep}
\end{figure}
Similarly, 2G measurement reports were received and, since the topology did not contain any 2G cell at all, the 2G cell in those reports was also flagged by the rule-based strategy. Fig. \ref{fig:2g_meas_rep} shows the UE's 2G report in response to the measurement configuration shown earlier in Fig. \ref{fig:2g_meas_conf}. The \textit{measResultListGERAN} is part of measurement identifier 3 and, as requested, the UE also reported the neighbor cell's full CGI, revealing which country and network codes the false base station was using for its 2G cell.
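The rule-based comparison described above can be sketched as follows. This is a minimal illustration: the CSV column names and the report dictionaries are hypothetical stand-ins, not the exact schema used by our internal tooling.

```python
import csv
import io

# Hypothetical topology file: one row per legitimate cell.
# The column names are illustrative assumptions.
TOPOLOGY_CSV = """mcc,mnc,rat,cell_id
111,11,EUTRA,366
"""

def load_topology(csv_text):
    """Return the set of (RAT, cell identifier) pairs for legitimate cells."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {(row["rat"], row["cell_id"]) for row in reader}

def flag_reports(reports, topology):
    """Flag every reported cell whose (RAT, identifier) pair is unknown."""
    return [r for r in reports if (r["rat"], r["cell_id"]) not in topology]

# Parsed measurement reports: PCIs from the 4G report, MCC-MNC from the 2G CGI.
reports = [
    {"rat": "EUTRA", "cell_id": "204"},    # unknown 4G PCI -> flagged
    {"rat": "EUTRA", "cell_id": "366"},    # matches the topology
    {"rat": "GERAN", "cell_id": "111-11"}, # no 2G cell is legitimate -> flagged
]

flagged = flag_reports(reports, load_topology(TOPOLOGY_CSV))
```

In this toy run, the unknown 4G PCI 204 and the 2G cell are flagged, while the legitimate PCI 366 is not.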
\begin{figure}[h]
\begin{center}
\begin{lstlisting}
RRC {
pdu value UL-DCCH-Message ::= {
message c1 : measurementReport : {
criticalExtensions c1 : measurementReport_r8 : {
measResults {
measId 3,
measResultPCell {
rsrpResult 44,
rsrqResult 31},
measResultNeighCells measResultListGERAN : {
MeasResultGERAN {
carrierFreq {
arfcn 12,
bandIndicator dcs1800},
physCellId {
networkColourCode '111'B,
baseStationColourCode '011'B},
cgi-Info {
cellGlobalId {
plmn-Identity {
mcc {
MCC-MNC-Digit 1,
MCC-MNC-Digit 1,
MCC-MNC-Digit 1},
mnc {
MCC-MNC-Digit 1,
MCC-MNC-Digit 1}},
locationAreaCode '0000000000000001'B,
cellIdentity '0000000001101111'B}},
measResult {
rssi 63
}}}}}}}}
\end{lstlisting}
\end{center}
\caption {Example 2G measurement report}
\label{fig:2g_meas_rep}
\end{figure}
\section{Introduction}
\label{sec:intro}
Over the past few years, we have seen an increasing concern about the protection of European users' data.
This was probably the result of the General Data Protection Regulation (GDPR) which was adopted in April 2016 and came into force in May 2018.
The main difference of this regulation compared to previous legislation is that it includes significant fines for companies which collect users' data without the users' consent or some other legal basis.
Such fines can reach up to 20 million euros, or up to 4\% of the annual worldwide turnover of the preceding financial year, whichever is greater.
As a result, several companies, and their associated websites, have started asking their visitors and users for their consent, before collecting (and processing) their data.
Such a consent has been usually collected via cookie banners, which ask users for consent and may give some choices as well.
Indeed, users may be given the choice to accept all cookies, to accept some cookies, or even to reject all cookies.
The choice is entirely up to the user, and the correct implementation of this choice is the responsibility of the website.
Although this sounds completely legal and fully straightforward, deviations have been reported in literature~\cite{fouad:hal-02567022,eijk2019impact,10.1145/3321705.3329806,9152617,utz2019informed}.
For example, some websites claim that some cookies are absolutely necessary for their operation (e.g.,\xspace~for the page to be delivered) or due to legitimate interest (e.g.,\xspace~to improve the product), and cannot be rejected by the users.
Thus, users cannot really choose to reject \emph{all} cookies: these necessary cookies cannot be rejected.
Past research studies have noticed discrepancies between what the users choose and what is registered by the website.
For example, the users may provide a negative response (i.e.,\xspace reject all cookies), but the cookie banners may register a positive one (i.e.,\xspace accept all cookies), or the cookie banners may register a positive response even before the users had the opportunity to provide any choice~\cite{9152617}.
All these previous studies focus on cookies and compliance of cookie processing with the GDPR.
In this paper, we set out to explore a slightly different question:
\begin{quote}
\emph{
If a user does not provide consent, or chooses to \textbf{reject} all cookies, do websites use other forms of tracking to track this user?
If so, what are these forms of tracking, and what is the extent of this tracking?
}
\end{quote}
Considering (i) the decreasing reliance of third-party trackers on non-permanent, erasable state such as cookies~\cite{mervis2020cookieless} and (ii) the recent advances of browser vendors against third-party cookies~\cite{cookiebot2021chrome-cookies,wilander2019tracking-prevention}, it is apparent that identifying how websites treat user consent in the case of stateless (cookie-less) tracking is timely and urgent.
We address this need and try to fill this exact gap in our understanding, by being the first to investigate GDPR compliance across the Web in cases where websites and trackers do not use cookies, or users do not accept cookies.
Sadly, our results suggest that even when users reject all cookies, websites \emph{do use other forms of tracking} to track users, and process personal data, in violation of GDPR.
Such forms of tracking include \emph{first-party ID leaking\xspace}, \emph{ID synchronization} and \emph{browser fingerprinting}.\footnote{One might think that ID synchronization is a form of tracking using cookies.
This is not really true: although ID synchronization does use (values stored in) cookies, passing such values around is done in an unorthodox manner, completely different from the way cookies are used.}
First-party ID leaking\xspace and ID synchronization are used to pass an identifier (such as a cookie) as an ``argument'' in an HTTP request to a website different from the one that planted this ID in the first place.
In fact, according to past studies~\cite{papadopoulos2019cookie,10.1145/3178876.3186060,falahrastegar2016tracking}, Web entities may share IDs they have assigned to users and help third-parties re-identify users or create universal IDs.
Browser fingerprinting~\cite{englehardt2016online,acar2014web} uses elaborate approaches to uniquely identify a user through characteristics of her device - characteristics which can be easily found by a website.
Such characteristics may include screen resolution and rendering characteristics, browser fonts and installed plugins etc.\xspace~\cite{eckersley2010unique,mayer2012third,nikiforakis2013cookieless,papadopoulos2017long}.
Combining several of these characteristics can provide a large enough number of entropy bits to uniquely identify a user.
Although these cases of user identification are considered ``personal data processing'' according to GDPR and ePrivacy~\cite{eprivacy2017} regulations, and must be visible to users, they often do not appear in request forms of consent managers deployed by modern websites.
In this study, we highlight exactly that: the lack of transparency and user consent when it comes to websites that deploy user identification techniques like ID synchronization and browser fingerprinting.
\noindent
The contributions of this work are as follows:
\begin{itemize}[leftmargin=0.5cm]
\item We propose a fully automated method for detecting browser fingerprinting on websites using the Chromium Profiler.
\item We crawl close to one million websites and record how they track users using sophisticated forms of tracking (such as first-party ID leaking\xspace, ID synchronization and browser fingerprinting) as a function of users' choices.
\item We find that: (1) More than 75\% of leaks happen despite the fact that users have chosen to reject all cookies; (2) Websites embedded with ID synchronizing third-parties force browsers to engage in several ID synchronizations (3.51\xspace per ID, on average) even before users had a chance to accept or deny consent; (3) Popular websites are more likely to disregard users’ consent choices and engage in first-party ID leaking\xspace and ID synchronization; (4) Browsers leak more information when users choose to reject all cookies than when they choose to take no action at all; (5) Our analysis of tracking per country code reveals significant discrepancies across EU countries.
\item Our methodology can be transformed into an auditing tool for regulators, stakeholders and privacy-policy makers, for verifying compliance with GDPR and users' privacy rights.
\end{itemize}
\section{Background}
\label{sec:background}
In the world of Web, cookies are used to store identifying information for a given user.
However, recent policies and regulations from browser vendors and government bodies~\cite{sameorigin,itp,cookielaw} try to control the exposure of this identifying information to third-parties and for how long.
These policies restrain the ad and tracking industry that relies on re-identifying a user for long periods to serve more targeted ads.
Some of the most popular techniques used by the third-parties include ID synchronization (e.g.,\xspace cookie synchronization~\cite{papadopoulos2019cookie,acar2014web,10.1145/3193111.3193117,olejnik2013selling}) and canvas fingerprinting~\cite{mowery2012pixel}, but also font-based fingerprinting~\cite{eckersley2010unique}, WebRTC-based fingerprinting, AudioContext fingerprinting, and Battery API fingerprinting~\cite{englehardt2016online}.
\subsection{ID Sharing}
\label{sec:csync}
\begin{figure}[t]
\centering
\includegraphics[width=0.7\columnwidth]{figs/csync.pdf}
\vspace{-0.3cm}
\caption{Example of an ID synchronization operation. Two entities match the IDs they have assigned to the same user.}
\label{fig:csync}
\end{figure}
Whenever a user visits a new website, a plethora of cookies and IDs are assigned to her, allowing first or third-parties to re-identify her across the Web and build a profile based on her browsing behavior.
These profiles can be later centralized in Data Management Platforms~\cite{10.14778/2536222.2536238}, sold by data brokers~\cite{databrokers}, or used by advertisers to bid in ad auctions~\cite{pachilakis2019no}, ad-retargeting~\cite{iordanou2019eyewnder} and cross-device tracking~\cite{solomos2019cdt-raid}.
For the different Web entities (e.g.,\xspace publishers, analytics, data brokers, advertisers, etc.\xspace) to perform such transactions, all of the different aliases (i.e.,\xspace IDs) that each entity has assigned to the same user need to be linked (i.e.,\xspace synced) together.
This would reveal that the user that the entity \texttt{A} knows as \texttt{userABC} is the same user that entity \texttt{B} knows as \texttt{user123}.
Figure~\ref{fig:csync} illustrates an example of how this ID synchronization takes place.
Assume a user browsing \texttt{website1.com} and \texttt{website2.com}, in which there are third-parties like \texttt{tracker.com} and \texttt{advertiser.com}, respectively.
Consequently, these two third-parties have the chance to assign an alias to the user and re-identify them in the future.
From now on, \texttt{tracker.com} knows the user with the ID \texttt{user123}, and \texttt{advertiser.com} knows the same user with the ID \texttt{userABC}.
Next, assume that the user lands on \texttt{website3.com}, which includes some JavaScript code from \texttt{tracker.com}. This code makes the browser issue a GET request to \texttt{tracker.com} (step 1), which responds with a REDIRECT (step 2), instructing the user's browser to issue another request, this time to its collaborator \texttt{advertiser.com}, using a specially crafted URL (step 3) in which the alias it uses (i.e.,\xspace \texttt{user123}) is piggybacked.
When \texttt{advertiser.com} receives the above request from the user it knows as \texttt{userABC}, it learns that the user whom \texttt{tracker.com} knows as \texttt{user123}, and the user \texttt{userABC} are basically the same user.
This allows the two entities to join the different aliases (e.g.,\xspace cookies, device IDs, user IDs, etc.\xspace) a user has on the Web.
In this paper, we study two types of ID sharing: (i) \emph{first-party ID leaking\xspace}, where a first-party alias (e.g.,\xspace a cookie or device ID) is leaked from the visited website to different third-parties, and (ii) \emph{third-party ID synchronization}, where third-parties link together the different third-party aliases they use for the same users.
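The redirect-based synchronization described above can be illustrated with a short sketch. The domains match the example in Figure~\ref{fig:csync}, but the endpoint and the \texttt{partner\_uid} parameter name are illustrative assumptions, not observed values.

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Steps 2-3: tracker.com answers the browser's GET request with a redirect
# whose Location header points at its collaborator and piggybacks the
# tracker's own alias for the user in a crafted URL.
def build_sync_redirect(partner_endpoint, own_alias):
    return partner_endpoint + "?" + urlencode({"partner_uid": own_alias})

# advertiser.com receives the redirected request while its own cookie
# identifies the user as userABC, and can now join the two aliases.
def link_aliases(redirect_url, own_alias):
    piggybacked = parse_qs(urlparse(redirect_url).query)["partner_uid"][0]
    return {"tracker_alias": piggybacked, "advertiser_alias": own_alias}

location = build_sync_redirect("https://advertiser.example/sync", "user123")
mapping = link_aliases(location, "userABC")
```

After the exchange, \texttt{advertiser.com} holds the mapping \texttt{user123} $\leftrightarrow$ \texttt{userABC}, which is exactly the linkage the two entities need.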
\subsection{Browser Fingerprinting}
\label{sec:canvas}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{figs/fingerprintingMethod.png}
\vspace{-0.5cm}
\caption{Canvas Fingerprinting process as part of the browser fingerprinting methodology used by popular libraries. The website can extract a fingerprint of the user's browser.}
\label{fig:canvasFingerprintint}
\end{figure}
Browser Fingerprinting is a sophisticated set of techniques, which can be used to uniquely identify browser instances without storing any information on the user side (stateless).
It can be used to detect malicious users that create multiple accounts in social networking services, or even stop deceitful orders in e-commerce platforms.
However, this technique can be abused by privacy-violating websites to track users across sites, or even to de-anonymize private sessions.
In fact, previous work~\cite{mowery2012pixel,laperdrix2016beauty} has shown that this technique provides sufficient bits of entropy to effectively track users, even through the usage of the Tor Browser.
One of the most prevalent and stealthy such fingerprinting techniques is Canvas Fingerprinting, named after the HTML canvas element introduced in HTML5.
A canvas element provides the required functionality for drawing graphics using client-side code.
Moreover, canvas fingerprinting relies on WebGL, a cross-platform JavaScript API that enables developers to render advanced graphics using shaders.
As a result, developers have access to rendering functionality that is performed on the GPU, exposed in an HTML context via the canvas element.
Figure~\ref{fig:canvasFingerprintint} demonstrates the process of canvas fingerprinting as part of browser fingerprinting.
Assume (i) a website that contains the fingerprinting code and (ii) a browser instance that can execute JavaScript code.
As a first step, the fingerprinting script creates a canvas element using the built-in interface provided by almost all modern browsers.
Next, the script renders some 2D graphics and text on the canvas.
Usually, the text that is drawn is a pangram, i.e.,\xspace it contains all the letters of the English alphabet, in order to increase the number of entropy bits.
Different font sizes and font families result in a slightly different text that can affect the final fingerprint.
As a next step, the fingerprinting script needs to extract the content of the canvas and inspect its pixel values (step 3).
This is achieved using the method \texttt{toDataURL()}, provided by the canvas object.
This method returns the Base64 encoding of the canvas' content.
Based on various factors, including fonts that are installed on the user's machine, version of OpenGL and browser's rendering engine, this string can be sufficiently different per user.
Then, the script combines this canvas fingerprint with other information, which can be used as an additional source of entropy (step 4).
This information includes, among others, the host operating system and timezone, the screen resolution, the installed plugins, the preferred language set in the browser, and the number of logical processors available on the host.
The output of the combination algorithm is a long string that uniquely identifies the specific browser instance (step 5).
Finally, the identifier is hashed, to produce a fingerprint for this specific browser (step 6) and is usually sent across the network, or even stored as a cookie.
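The combination and hashing stage (steps 4-6) can be sketched as follows. This is an illustrative reconstruction, not the code of any particular library: the attribute names and values are assumptions, and the canvas string is a truncated stand-in for the output of \texttt{toDataURL()}.

```python
import hashlib

# Steps 4-6: combine the canvas output with additional entropy sources and
# hash the combined string into a stable fingerprint.
def combine_fingerprint(canvas_data_url, attributes):
    # Deterministic ordering so the same browser always yields the same string.
    parts = [canvas_data_url] + [f"{k}={attributes[k]}" for k in sorted(attributes)]
    combined = "~~~".join(parts)                          # step 5: long unique string
    return hashlib.sha256(combined.encode()).hexdigest()  # step 6: hashed fingerprint

fp = combine_fingerprint(
    "data:image/png;base64,iVBORw0KGgo...",  # truncated stand-in for toDataURL()
    {"timezone": "Europe/Athens", "screen": "1920x1080",
     "language": "en-US", "logical_cores": 8},
)
```

The same inputs always produce the same digest, while a change in any single attribute (e.g.,\xspace a different screen resolution) yields a different fingerprint.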
\begin{figure*}[t]
\centering
\includegraphics[width=1.7\columnwidth]{figs/crawlerArchitecture}\vspace{-0.3cm}
\caption{High level overview of our crawling methodology. We use Puppeteer to instrument a web browser and automatically visit websites. The Chrome Profiler is a built-in tool used to record and analyze run-time performance by collecting callsite information and execution statistics. The Cookie Database stores all cookies set by various domains. The Consent-O-matic tool is loaded on browser startup as an extension to handle cookie consent forms. Whenever a request is issued or a response is received, the event dispatcher emits the appropriate event, which is handled by our puppeteer-based crawler.}
\label{fig:CrawlerArchitecture}
\end{figure*}
Tracking techniques need to be unnoticeable to users, to avoid raising suspicion or harming the user experience.
As such, browser fingerprinting can be performed in minimal time on any browser that supports JavaScript, by using invisible HTML elements and without requiring any privileges or permissions from the user.
Consequently, even privacy-aware advanced users that block cookies can be tracked.
Furthermore, browser fingerprinting is difficult to prevent because it relies on native functionality built into modern browsers.
Users need to either disable JavaScript or use external browser extensions.
Such extensions usually add random noise to some built-in functions, making the fingerprint different each time the same website attempts to (re)identify a user~\cite{laperdrix2017fprandom, nikiforakis2015privaricator, brave2020}.
\section{Methodology}
\label{sec:methodology}
To investigate the effect of the different options a user is provided with while visiting websites with a consent form, we leverage the Consent-O-matic tool~\cite{consentomatic}.
Consent-O-matic is the state-of-the-art browser extension to automatically detect and handle GDPR consent forms.
Whenever the extension detects a Consent Management Platform (CMP), it logs its info (e.g.,\xspace vendor, encoding, IDs).
Additionally, it can be configured to either accept or reject the different categories of data processing purposes.
In addition to this, we develop a puppeteer-based crawler that instruments a Chrome browser.
By using Consent-O-matic, the browser can automatically perform one of the following three actions when a consent form is detected:
\begin{enumerate}
\item \texttt{Accept All}\xspace: grant consent for all data processing purposes to all third-parties residing in the visited website.
\item \texttt{Reject All}\xspace: deny consent for all data processing purposes to all third-parties residing in the visited website.
\item \texttt{No Action}\xspace: avoid interacting with the form in any way.
\end{enumerate}
By using our instrumented browser, we crawl (with clean state) the landing page of the top 850K\xspace sites of Tranco list~\cite{tranco}.
This list aggregates the ranks from the lists provided by Alexa, Umbrella, and Majestic from 29.07.2020 to 27.08.2020 (pay-level domains retained)\footnote{https://tranco-list.eu/list/Q274/full}.
Whenever a CMP is detected, we crawl the given website 3 times (one for each of the different consent actions), and we store: the HTML, cookiejar, HTTP requests, HTTP responses, JS function calls and CMP info for each case.
It is important to note that we capture HTTP(S) requests and responses passively, via the emitted Chrome events, without mutating or intercepting them.
This ensures that the behavior of the website is not affected by our crawler.
An overview of our crawling methodology is illustrated in Figure~\ref{fig:CrawlerArchitecture}.
\begin{table}[t]
\footnotesize
\caption{Summary of our crawled dataset.
}\vspace{-0.3cm}
\centering
\begin{tabular}{lrr}
\toprule
\textbf{Description} & \textbf{Volume} & \textbf{\% of total} \\
\midrule
Initial set of websites & 850K\xspace & \\
Websites that errored & 219,098 & 25.78\% \\
Websites that were filtered out & \\
(pornographic or no-bots allowed) & 2,689 & 0.32\% \\
Total websites correctly parsed & 628,213\xspace & 73.90\% \\
Websites with a CMP & 27,953\xspace & 3.29\% \\
Websites with a CMP and no error & \\
in all three consent actions & 27,180\xspace & 3.20\% \\
\bottomrule
\end{tabular}
\label{tab:dataset}\vspace{-0.3cm}
\end{table}
\subsection{Data Description and Analysis}
Overall, the crawler (located in the EU) visited 850K\xspace sites from August 28\textsuperscript{th} 2020 to September 17\textsuperscript{th} 2020. The Consent-O-matic extension detected 27,953\xspace sites with a CMP (4.44\% of the successfully visited sites)\footnote{In line with related works, which report detection rates of 3\%~\cite{10.1145/3321705.3329806} and 6.2\%~\cite{9152617}.
Designing a detection tool with better accuracy is very challenging due to the heterogeneity of the various existing consent management libraries and custom solutions.}, and we collected a total of 108 GB of data for these sites.
Crawls failed on 25.78\% of the initial set of websites (due to errors, Puppeteer time-outs, inaccessible sites, or sites that did not serve EU-based users).
Table~\ref{tab:dataset} summarizes our dataset.
\point{Detecting Third-party ID Synchronization}
We perform an offline analysis on the collected data to detect ID synchronization operations.
Specifically, we examine all application-level network traffic and search for requests that contain unique IDs. For HTTP GET requests, we inspect the URL of the requests and examine their path and parameters.
For HTTP POST requests, we inspect the data stored in the request body.
We report a case of ID synchronization only if a unique ID is delivered to a domain different from the one that assigned it to the user.
This analysis is performed for both first-party and third-party set IDs, on a per-website basis.
The majority of these IDs are stored in cookies.
Thus, we parse the value of each cookie and look for strings that can be used as unique IDs.
If this value is a text string representing a JSON object, we get the values stored in key-value pairs in the object\footnote{We purposely ignore the keys found in key-value pairs of JSON objects, since these keys rely on the API which the website uses, and do not contain any useful information that can uniquely identify users.
Treating these keys as possible identifiers would result in multiple false positives.}.
If the object contains inner JSON objects, we recursively obtain all values in all nested levels.
To reduce false positives, we deliberately filter values that include consent information (e.g.,\xspace values of the keys \texttt{euconsent}, \texttt{eupubconsent}, \texttt{\_\_cmpconsent} and \texttt{\_\_cmpiab}).
As described in~\cite{9152617}, such values can be used to share user's consent across different CMPs or third-parties present on the page.
Additionally, we filter out values that are considered common and cannot be used as identifiers: strings that represent dates, timestamps, regions, locale, strings that end with a common file extension (e.g.,\xspace jpg), strings that are URLs (e.g.,\xspace start with www. or http://) and, finally, strings that are prevalent keywords.
To construct a list of such keywords, we use a simplified puppeteer-based crawler to visit over 2.5K websites, and store all cookies.
We manually inspect their values and we identify over 80 keywords that are frequently found in cookies but cannot be used for user identification.
This list includes keywords such as ``homepage'', ``undefined'', ``desktop'', ``not set'' and ``active'', among others.
We also exclude strings that have a length of 5 or fewer characters, as they do not contain enough bits of entropy to uniquely identify a user.
In addition, we see cookie values combining (with a delimiter) identifiers with non-identifying info (e.g.,\xspace timestamp, locale, etc.\xspace), for example: \emph{foo=\{userID\};15693242;en-US}.
We find less than 0.6\% of such IDs being synced with third-parties.
The last step is to detect the possible IDs in the HTTP traffic.
For each string of the previous step, we examine all HTTP requests targeting domains different than the one that set the cookie, and look for an exact string match.
We search for these possible IDs in (i) URL parameters, (ii) the body of requests and (iii) the referrer header.
We tokenize the URL parameters using both default (i.e.,\xspace \&) and custom (i.e.,\xspace ``;'') delimiters.
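The extraction and matching steps above can be condensed into the following sketch. It is a simplified illustration under stated assumptions: the keyword list is only an excerpt of the $\sim$80 keywords we identified, and the filtering is reduced to the length, keyword, and URL checks.

```python
import json
from urllib.parse import urlparse

# Excerpt of the manually identified keywords that cannot identify a user.
COMMON_KEYWORDS = {"homepage", "undefined", "desktop", "not set", "active"}

def candidate_ids(cookie_value):
    """Recursively collect cookie values that could serve as unique IDs."""
    values = []
    def walk(v):
        if isinstance(v, dict):
            for inner in v.values():  # JSON keys are ignored on purpose
                walk(inner)
        elif isinstance(v, list):
            for inner in v:
                walk(inner)
        else:
            values.append(str(v))
    try:
        walk(json.loads(cookie_value))   # JSON object: take all nested values
    except (json.JSONDecodeError, TypeError):
        values.append(cookie_value)      # plain string: take it as-is
    return [v for v in values
            if len(v) > 5                         # too little entropy otherwise
            and v.lower() not in COMMON_KEYWORDS  # prevalent non-identifying keyword
            and not v.startswith(("www.", "http://", "https://"))]  # URL value

def id_shared_with(candidate, request_url):
    """True if the candidate ID appears verbatim as a URL parameter value."""
    tokens = []
    for param in urlparse(request_url).query.split("&"):  # default delimiter
        tokens.extend(param.split(";"))                   # custom delimiter
    return any(candidate == tok.split("=", 1)[-1] for tok in tokens)
```

For instance, a cookie value \texttt{\{"uid": "abc123XYZ", "locale": "en"\}} yields the single candidate \texttt{abc123XYZ}, which is then searched for in requests towards other domains.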
\point{Detecting Browser Fingerprinting}
\label{sec:canvas_det}
As described in \cite{mowery2012pixel} and illustrated in Figure~\ref{fig:canvasFingerprintint}, browser fingerprinting techniques, such as canvas fingerprinting, can be performed using various native methods provided by the browser's run-time environment (e.g.,\xspace \texttt{fillText}).
Past work~\cite{acar2014web,raschke2018uncovering,englehardt2016online,le2017towards} has focused on monitoring these native methods along with their returned values.
By observing the sequence of function calls along with the arguments given to these functions, one can have indications of browser fingerprinting.
Additionally, searching for common arguments found in popular fingerprinting libraries can help increase the level of certainty.
We argue that this method produces multiple false positives, since websites that use these native methods or HTML elements (like the canvas element) legitimately might be marked as fingerprinting websites.
Indeed, in~\cite{raschke2018uncovering}, manual review of the results was required in order to exclude false positives.
To mitigate this, our approach focuses on a higher level of abstraction and does not examine the native (i.e.,\xspace browser's built-in) methods.
This way, we successfully disregard websites that use these methods legitimately (e.g.,\xspace the canvas element for web graphics).
Specifically, to detect browser fingerprinting, we perform JavaScript code profiling and search for specific function calls that indicate the presence of a fingerprinting library.
Our method may reduce the number of true positives, but ensures that the reported results are trustworthy.
Moreover, it can be utilized by a fully-automated crawler, without the need for manual intervention.
In particular, we analyse the open-source version of one of the most widely-used fingerprinting JavaScript libraries: FingerprintJS \cite{fingerprintjs}.
We extract the full list of functions used during the process of browser fingerprinting.
We then focus on functions that consist of multiple operations and require a significant number of execution cycles.
This ensures that they will always be sampled by the profiler.
Moreover, we ignore functions that have common names (e.g.,\xspace \texttt{map} or \texttt{isIE}) and functions that can be utilized by general purpose code to perform actions not necessarily related to fingerprinting (e.g.,\xspace \texttt{getRegularPlugins}).
As a result, we conclude that the execution of the functions \texttt{getCanvasFp}, \texttt{getWebglFp}, \texttt{Fingerprint2} and \texttt{Fingerprint2.get} indicates browser fingerprinting.
These functions indicate clear intent to fingerprint the user's browser and uniquely identify them.
Next, to fully automate the detection of browser fingerprinting, we modify our puppeteer-based crawler to start with the built-in profiler tool of the Chromium browser enabled.
This was achieved using Puppeteer's ability to create a session for the Chrome DevTools protocol~\cite{chromeDevTools}.
Additionally, we set the sampling interval of the profiler to 500 $\mu$s, which results in 2K samples per second.
The output of the Chromium profiler is a list of profile nodes.
Each node contains information about samples, in addition to a unique ID and a call frame.
Using this call frame, we extract the function name along with the URL of the JavaScript script that contains the specific function.
This enables us to search for fingerprinting functions, as well as identify the exact script that performs browser fingerprinting.
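Scanning the profiler output for these functions can be sketched as follows. The node layout follows the Chrome DevTools profiler output (a list of nodes, each with a \texttt{callFrame} holding the function name and script URL), but the sample profile below is fabricated for illustration.

```python
# Function names extracted from FingerprintJS, as discussed above.
FINGERPRINTING_FUNCTIONS = {"getCanvasFp", "getWebglFp",
                            "Fingerprint2", "Fingerprint2.get"}

def detect_fingerprinting(profile):
    """Return (function name, script URL) pairs indicating fingerprinting."""
    hits = set()
    for node in profile["nodes"]:
        frame = node["callFrame"]
        if frame["functionName"] in FINGERPRINTING_FUNCTIONS:
            hits.add((frame["functionName"], frame["url"]))
    return hits

# Fabricated sample profile: one benign node and one fingerprinting node.
example_profile = {"nodes": [
    {"id": 1, "callFrame": {"functionName": "map",
                            "url": "https://site.example/app.js"}},
    {"id": 2, "callFrame": {"functionName": "getCanvasFp",
                            "url": "https://cdn.example/fp2.min.js"}},
]}
```

The returned URL identifies the exact script performing the fingerprinting, while common function names such as \texttt{map} are ignored.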
\point{Limitation:} Although our mechanism is fully automated, we must acknowledge that our fingerprinting detection process may miss minified or obfuscated fingerprinting scripts.
\section{Analysis of Consent}
\label{sec:measurements}
In this section, we present our measurements and analyze the behavior of websites across three types of visits: when consent is (i) rejected (\texttt{Reject All}\xspace), (ii) granted (\texttt{Accept All}\xspace), and (iii) not responded to (\texttt{No Action}\xspace).
\subsection{Cookie Consent and Third-Parties}
\label{subsec:general-cookie}
First, we study how websites change their user tracking behavior depending on the consent provided (or not), via the number of third-parties they interact with.
Therefore, we measure the number of unique third-parties running a script on the websites of our dataset before and after a user consent action.
In Figure~\ref{fig:ThirdPartiesEcosystem}, we plot the results (min, 25th percentile, median, 75th percentile, max\xspace) for the three user actions.
Two-sided non-parametric Kolmogorov–Smirnov tests for the three cases demonstrated that the three distributions are statistically different, with p-value $< 10^{-10}$ ($D_{\footnotesize{(noaction,rejectall)}}$=$0.038$,
$D_{\footnotesize{(rejectall,acceptall)}}$=$0.061$,
$D_{\footnotesize{(acceptall,noaction)}}$=$0.097$).
As we can see in Fig.~\ref{fig:ThirdPartiesEcosystem}, in the \texttt{No Action}\xspace and in the \texttt{Accept All}\xspace case, there are 16\xspace and 19\xspace third-parties running in the median website, respectively.
Surprisingly, in the \texttt{Reject All}\xspace case, there were more (i.e.,\xspace~17\xspace) third-parties running in the median website than in the \texttt{No Action}\xspace case, reaching up to 29 at the 75th percentile.
This suggests that interacting with the consent manager has an impact on the number of third-parties in the visited website.
Specifically, there are more third-parties running in the median website when consent was denied.
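The two-sample $D$ statistic reported above is simply the maximum vertical gap between two empirical CDFs, and can be computed directly. The counts below are fabricated purely to illustrate the computation (our actual analysis uses the full per-website distributions, for which a library routine such as \texttt{scipy.stats.ks\_2samp} also provides the p-value).

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic D: the maximum vertical
    distance between the two empirical CDFs."""
    def ecdf(xs, t):
        # Fraction of observations less than or equal to t.
        return sum(1 for x in xs if x <= t) / len(xs)
    points = sorted(set(a) | set(b))
    return max(abs(ecdf(a, t) - ecdf(b, t)) for t in points)

# Fabricated per-website third-party counts, only to exercise the function.
reject_all = [15, 17, 17, 20, 29]
accept_all = [16, 19, 19, 22, 31]
d = ks_statistic(reject_all, accept_all)
```

By construction $0 \le D \le 1$, with $D = 0$ for identical samples and $D = 1$ for fully separated ones.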
\subsection{Sharing User IDs with Third-Parties}
\point{First-party ID leaking\xspace}
In our next experiment, we set out to explore the cases where a first-party ID (e.g.,\xspace cookie, device ID~\cite{nikiforakis2013cookieless}), previously set by the visited website, is getting leaked to third-parties.
Therefore, we measure how many first-party ID leaking\xspace operations are being performed in a website as a function of the three aforementioned user choices.
One would expect that there are \emph{no} such operations before the user makes a choice (i.e.,\xspace \texttt{No Action}\xspace), as well as when the user rejects all cookies (i.e.,\xspace \texttt{Reject All}\xspace\/).
However, as shown in Table~\ref{tbl:detected_syncs}, among the websites that present their users with a cookie consent banner, we found 14,238 of them to perform first-party ID leaking\xspace even before their users had the opportunity to register their preferences (\texttt{No Action}\xspace case).
To our surprise, when users \texttt{Reject All}\xspace cookies, first-party ID leaking\xspace only gets worse, with 15,334 of them engaging in it.
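The kind of check behind these numbers can be sketched as follows. This is a simplified illustration and not our actual detection pipeline: the \texttt{MIN\_ID\_LEN} threshold and the crude eTLD+1 helper are assumptions, and the example URLs are hypothetical (the \texttt{CN\_xid} cookie is the one discussed later in the edge cases).

```python
from urllib.parse import urlparse

MIN_ID_LEN = 8  # ignore short values that could match by chance (assumption)

def registrable_domain(host):
    # Crude eTLD+1 approximation for illustration; a real crawl would use
    # the Public Suffix List instead.
    return ".".join(host.split(".")[-2:])

def detect_leaks(first_party, cookies, requests):
    """cookies: {name: value} first-party cookies set on the visited site.
    requests: URLs issued by the browser while on the page."""
    leaks = []
    fp = registrable_domain(first_party)
    for url in requests:
        parsed = urlparse(url)
        dest = registrable_domain(parsed.netloc)
        if dest == fp:
            continue  # same-site request, not a third-party
        for name, value in cookies.items():
            if len(value) >= MIN_ID_LEN and value in parsed.query:
                leaks.append((name, dest))  # 'dest' learns the first-party ID
    return leaks

leaks = detect_leaks(
    "example.com",
    {"CN_xid": "73a4ff1f-ff45-4943-bdaa-73658b00bd42", "lang": "en"},
    ["https://tracker.example.net/sync?uid=73a4ff1f-ff45-4943-bdaa-73658b00bd42",
     "https://example.com/api?lang=en"],
)
print(leaks)  # [('CN_xid', 'example.net')]
```

Real detection must also handle encoded or hashed values, which a plain substring match misses.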
\begin{table}[t]
\caption{Number of websites detected (i) leaking their first-party user IDs and (ii) having third-parties that perform synchronizations of user IDs.
}
\label{tbl:detected_syncs}
\footnotesize
\centering\vspace{-0.3cm}
\begin{tabular}{lcc}
\toprule
{\bf Consent} & {\bf Websites engaging in } & {\bf Websites with third-party} \\
{\bf Action} & {\bf first-party ID leaking\xspace} & {\bf ID synchronization} \\
\texttt{No Action}\xspace & 14,238 (52.38\%) & 6,533 (24.03\%) \\
\texttt{Reject All}\xspace & 15,334 (56.41\%) & 7,123 (26.20\%) \\
\texttt{Accept All}\xspace & 17,764 (65.35\%) & 8,048 (29.61\%) \\ \bottomrule
\end{tabular}\vspace{-0.2cm}
\end{table}
\begin{table}[t]
\caption{Average number of unique third-parties learning a user ID. A user's browser leaks first-party IDs to 2.14\xspace third-parties and engages on 3.51\xspace synchronizations per third-party ID, on average, even before the user accepted or rejected consent.}
\label{tbl:averageUniqueThirdParties}
\centering
\footnotesize\vspace{-0.3cm}
\begin{tabular}{lccc}
\toprule
\bf ID & \textbf{\texttt{No Action}\xspace} & \textbf{\texttt{Reject All}\xspace} & \textbf{\texttt{Accept All}\xspace} \\
\midrule
First-party ID & 2.14\xspace & 2.49\xspace & 3.04\xspace \\
Third-party ID & 3.51\xspace & 3.91\xspace & 4.86\xspace \\
\bottomrule
\end{tabular}\vspace{-0.2cm}
\end{table}
\begin{table}[t]
\caption{Top-5 third-parties that learn the highest number of first-party IDs per consent action in our dataset.}
\label{tbl:topIdLeakingThirdParties}
\centering
\footnotesize\vspace{-0.3cm}
\begin{tabular}{llll}
\toprule
\bf \# & \textbf{\texttt{No Action}\xspace} & \textbf{\texttt{Reject All}\xspace} & \textbf{\texttt{Accept All}\xspace} \\
\midrule
1. & facebook.com & facebook.com & facebook.com \\
& 18.87\% & 18.29\% & 19.48\% \\
2. & google-analytics.com & google-analytics.com & google-analytics.com \\
& 18.85\% & 17.28\% & 15.99\% \\
3. & bing.com & bing.com & bing.com \\
& 9.64\% & 8.84\% & 10.27\% \\
4. & hubspot.com & doubleclick.net & doubleclick.net \\
& 6.66\% & 6.60\% & 6.82\% \\
5. & doubleclick.net & hubspot.com & hubspot.com \\
& 4.68\% & 5.86\% & 5.99\% \\
\bottomrule
\end{tabular}\vspace{-0.2cm}
\end{table}
Next, we explore the extent of these leaks.
Table~\ref{tbl:averageUniqueThirdParties} shows the average number of first-party ID leaks\xspace, as a function of the three user choices.
There are 2.14\xspace first-party ID leaks\xspace even before the user has the opportunity to accept cookies or not (\texttt{No Action}\xspace).
To make matters worse, if the user chooses to reject all cookies (\texttt{Reject All}\xspace), a first-party ID may be leaked to even more third-parties, on average (2.49\xspace).
Furthermore, in Table~\ref{tbl:averageUniqueThirdParties}, we measure the average number of third-parties that learn a first-party ID in the websites where we detected this phenomenon.
The difference between \texttt{Reject All}\xspace and \texttt{Accept All}\xspace is rather small: in the average website, choosing \texttt{Accept All}\xspace leaks first-party IDs to 3.04\xspace third-parties, while \texttt{Reject All}\xspace leaks IDs to 2.49\xspace third-parties, i.e.,\xspace~about 18\% fewer.
The difference between the two is hardly significant, implying that more than 75\% of the third-parties that will eventually learn a first-party ID do so without the user's consent!
In Table~\ref{tbl:topIdLeakingThirdParties}, we show the top-5 third-parties in our dataset that learn the most first-party IDs across all websites for each of the three consent options.
Facebook with its social plugin, Google with its analytics tracker and ad-exchange (Doubleclick) modules, and Microsoft (Bing) occupy the top positions in all three consent options.
\begin{quote}
\noindent{{\bf Finding}: Browsers leak more information when users choose to reject all cookies than when they take no action at all.
In fact, more than 75\% of the leaks happen despite the fact that users have chosen to reject all cookies.
}
\end{quote}
\begin{table}[t]
\caption{Top-5 third-parties with highest number of third-party synchronisations per consent action in our dataset.}
\label{tbl:topCookieSyncThirdParties}
\centering
\footnotesize\vspace{-0.3cm}
\begin{tabular}{llll}
\toprule
\bf \# & \textbf{\texttt{No Action}\xspace} & \textbf{\texttt{Reject All}\xspace} & \textbf{\texttt{Accept All}\xspace} \\
\midrule
1. & doubleclick.net & doubleclick.net & doubleclick.net \\
& 21.15\% & 21.47\% & 20.22\% \\
2. & everesttech.net & everesttech.net & everesttech.net \\
& 13.21\% & 12.10\% & 10.89\% \\
3. & scorecardresearch.com & facebook.com & facebook.com \\
& 10.59\% & 9.95\% & 9.61\% \\
4. & facebook.com & scorecardresearch.com & ad.gt \\
& 10.15\% & 9.61\% & 9.54\% \\
5. & taboola.com & google-analytics.com & taboola.com \\
& 9.68\% & 8.30\% & 8.49\% \\
\bottomrule
\end{tabular}\vspace{-0.2cm}
\end{table}
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{0.32\textwidth}
\includegraphics[width=1.1\linewidth]{figs/ranks_idLeaking}
\caption{Unique third-parties in ID leaking per website rank.}
\label{fig:rank}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.32\textwidth}
\includegraphics[width=1.1\linewidth]{figs/normalized_ranks_idLeaking_bars.png}
\caption{Unique third-parties in ID leaking per website rank (normalized).}
\label{fig:rank_perc}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.32\textwidth}
\includegraphics[width=1.1\linewidth]{figs/normalized_ranks_idLeaking_trend}
\caption{Unique third-parties in ID leaking per website rank (linear fit).}
\label{fig:rank_trend}
\end{subfigure}
\caption{First-party ID leaking\xspace as a function of the website's popularity.
(a) Average number of unique third-parties involved in ID leaking per rank range of the website
(b) This figure plots the same information as Fig.~\ref{fig:rank}, with the exception that all \texttt{Accept All}\xspace values are normalized to 100\%.
(c) This figure plots the same information as Fig.~\ref{fig:rank_perc}, with the \texttt{Reject All}\xspace and \texttt{No Action}\xspace points fitted with a straight line. The lines suggest an increasing trend, implying that less popular sites are more aggressive at disregarding user choices.}
\label{fig:rank_all}
\end{figure*}
\point{Third-party ID synchronization}
Apart from sharing the first-party IDs they assign to the visiting users, websites also host third-parties that (as described in Section~\ref{sec:csync}) need to synchronize the different user IDs they use for the same users, in order to merge their databases on the back-end.
This way, they can later target specific groups of users~\cite{agarwal2020stop}, sell their data~\cite{selldata}, or use these data in ad-auctions~\cite{10.1145/3131365.3131397,pachilakis2019measuring}.
This type of leakage is worse than the first-party ID leaking, since (1) it is not in the control of the websites themselves, (2) via this mechanism, third-parties \emph{that are not present on the website} can be alerted of a user's presence.
As shown in Table~\ref{tbl:detected_syncs}, from the websites that present their users with a consent manager, we found 6,533 websites hosting third-parties that conduct synchronization of IDs before users had the opportunity to register their choices (\texttt{No Action}\xspace).
If users \texttt{Reject All}\xspace cookies, then even more websites (7,123) engage in ID synchronization.
Although consistent with the finding of the previous subsection (first-party ID leaking\xspace), this fact sadly shows that websites employ sophisticated forms of tracking totally disregarding user consent preferences.
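A simplified version of how such synchronizations can be spotted in crawl data is sketched below. This is illustrative only, not our actual pipeline (see Section~\ref{sec:csync} for the mechanism): the domains and cookie value are hypothetical, and a substring match is the simplest possible heuristic.

```python
def detect_syncs(cookie_owners, requests):
    """cookie_owners: {cookie_value: third_party_domain_that_set_it}
    requests: (destination_domain, full_url) pairs observed in the crawl."""
    syncs = set()
    for dest, url in requests:
        for value, owner in cookie_owners.items():
            if dest != owner and value in url:
                # 'dest' learns the ID that 'owner' assigned to this user,
                # even though 'dest' may not be embedded in the page at all.
                syncs.add((owner, dest, value))
    return syncs

syncs = detect_syncs(
    {"11111111-2222-3333-4444-555555555555": "trackerA.example"},
    [("adx.example.org",
      "https://adx.example.org/pixel?tid=11111111-2222-3333-4444-555555555555"),
     ("trackerA.example", "https://trackerA.example/keepalive")],
)
print(sorted(syncs))
```

Requests back to the cookie's own setter are excluded, since only cross-party sharing constitutes a synchronization.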
To quantify the extent of this phenomenon as a function of the three consent choices examined, in Table~\ref{tbl:averageUniqueThirdParties} we measure the average number of unique third-parties synchronizing a user ID.
When the user takes \texttt{No Action}\xspace, their browser engages in 3.51\xspace synchronizations, on average.
This means that when the user is asked for consent under the GDPR, and before they even respond, their browser has already leaked at least one third-party ID to more than three other third-parties.
To make matters worse, if the user responds negatively and chooses to \texttt{Reject All}\xspace cookies, their cookies may get synced with even more third-parties: 3.91\xspace, on average.
In Table~\ref{tbl:topCookieSyncThirdParties}, we show the top-5 third-parties conducting the highest number of synchronizations across websites, for each of the consent options.
This time, Google's ad-exchange platform {\tt doubleclick.net} and Amazon tracker {\tt everestTech.net} are the top-2 in all three consent options.
\begin{quote}
\noindent{{\bf Finding}: Websites with embedded third-parties that synchronize the IDs they have assigned for the same user, force browsers to engage in 3.51\xspace synchronizations, on average, even before the users had any chance to accept or reject consent.}
\end{quote}
\begin{table}[t]
\caption{Websites performing browser fingerprinting.}
\label{tbl:fingerprinting}
\centering
\footnotesize\vspace{-0.3cm}
\begin{tabular}{lrr}
\toprule
\textbf{Description} & \textbf{Volume} & \textbf{\% of total} \\
\midrule
\texttt{No Action}\xspace & 279 & 1.03\% \\
\texttt{Reject All}\xspace & 285 & 1.05\% \\
\texttt{Accept All}\xspace & 330 & 1.21\% \\
In at least one consent action & 336 & 1.24\% \\
\midrule
In all 3 cases & 247 & 73.5\% \\
Only in \texttt{Accept All}\xspace case & 47 & 13.9\% \\
Only in \texttt{Reject All}\xspace case & 3 & 0.9\% \\
Wait for action & 7 & 2\% \\
\bottomrule
\end{tabular}\vspace{-0.2cm}
\end{table}
\subsection{Browser Fingerprinting}
In our next experiment, we set out to explore whether websites track users differently via browser fingerprinting, given the different user responses to the requested cookie consent.
By using the methodology presented in Section~\ref{sec:canvas_det}, we detect the number of websites performing browser fingerprinting across the different types of visit.
Table~\ref{tbl:fingerprinting} presents our findings and, as we can see, the action of the user has no significant impact on the websites' fingerprinting operations.
Specifically, 279 websites perform browser fingerprinting even before the user had the opportunity to respond to the consent request (i.e.,\xspace \texttt{No Action}\xspace).
Even worse, if the user chooses to \texttt{Reject All}\xspace cookies, even more websites engage in browser fingerprinting: 285.
Interestingly, we see that 73.5\% of the fingerprinting websites perform browser fingerprinting no matter what the user's consent action is.
In addition, we see that only 2\% of these websites wait for the user's action before starting their fingerprinting operation.
Only 13.9\% of them perform browser fingerprinting only when the user gives consent, and 0.9\% of the websites perform browser fingerprinting \emph{only if the user rejects giving consent}.
It is apparent that these websites use browser fingerprinting as a fallback mechanism in case they are not allowed (by the GDPR) to set a cookie on the user's side.
It is important to stress at this point that, based on Article 4/Recital 30~\cite{recital30}, the GDPR regulates the processing of user-identifying information, and not cookies per se.
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{0.32\textwidth}
\includegraphics[width=1.1\columnwidth]{figs/ranks_cookieSync.png}
\caption{Unique third-parties involved in ID synchronizations per website rank.}
\label{fig:cs_rank}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.32\textwidth}
\includegraphics[width=1.1\columnwidth]{figs/normalized_ranks_cookieSync_bars.png}
\caption{Unique third-parties involved in ID synchronizations per website rank (normalized).}
\label{fig:cs_rank_perc}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.32\textwidth}
\includegraphics[width=1.1\columnwidth]{figs/normalized_ranks_cookieSync_trend.png}
\caption{Unique third-parties involved in ID synchronizations per website rank (linear fit).}
\label{fig:cs_rank_trend}
\end{subfigure}
\caption{ID Synchronization as a function of the website's popularity.
(a) Average number of unique third-parties involved in synchronizations per rank range of the website.
(b) This figure plots the same information as Fig.~\ref{fig:cs_rank}, with the exception that all \texttt{Accept All}\xspace values are normalized to 100\%.
(c) This figure plots the same information as Fig.~\ref{fig:cs_rank_perc}, with the exception that all \texttt{Accept All}\xspace values are normalized to 100\%.
\texttt{Reject All}\xspace and \texttt{No Action}\xspace points have been fitted with a straight line. We see that the line suggests an increasing trend implying that less popular sites are more aggressive at disregarding user choices.}
\label{fig:cs_rank_all}
\end{figure*}
\begin{quote}
\noindent{{\bf Finding}: Although websites ask users for cookie consent, they do not take into account this consent when they perform browser fingerprinting.}
\end{quote}
\begin{figure*}[t]
\centering
\subfloat[Number of unique third-parties learning a first-party ID.]
{
\hspace{0.4cm}
\includegraphics[width=0.85\linewidth]{figs/tld_idLeaking.png}
\label{fig:tld_norm}
}
\newline
\subfloat[Normalized number of unique third-parties learning a first-party ID.]
{
\includegraphics[width=0.85\linewidth]{figs/normalized_tld_idLeaking} \label{fig:tld_perc}
}
\label{fig:tld}
\caption{Number of unique third-parties learning a first-party ID as a function of the top-level domain per country code.
(a) This figure plots the absolute values.
(b) This figure plots the same information as in (a), with the difference that the max value (\texttt{Accept All}\xspace) is normalized to 100\%.
This enables us to compare websites that have different magnitudes of leakage.
We see that websites in different domain names have very different behavior.
For example websites in \emph{.fr} make 1.5 leaks before the user gives consent, close to 2.8 leaks when the user rejects all cookies, and more than 5 leaks when the user accepts all cookies.
On the other end of the spectrum, user choices in websites in \emph{.cz} seem to have little impact: they leak to 3.7 third-parties both in cases when users choose to \texttt{Reject All}\xspace cookies, and in cases where users choose to \texttt{Accept All}\xspace cookies.}
\end{figure*}
\begin{figure*}[t]
\subfloat[Number of unique third-parties engaged in third-party ID synchronization.]
{
\hspace{0.4cm}\includegraphics[width=0.85\linewidth]{figs/tld_cookieSync.png}
\label{fig:tld_cs}
}
\newline
\subfloat[Normalized number of unique third-parties engaged in ID synchronization.]
{\centering
\includegraphics[width=0.85\linewidth]{figs/normalized_tld_cookieSync.png}
\label{fig:tld_cs_perc}
}
\label{fig:tld_cs_main}\vspace{-0.3cm}
\caption{Number of unique third-parties engaged in third-party ID synchronization as a function of the top-level domain per country code.
(a) This figure plots the absolute values.
(b) This figure plots the same information as in (a), with the difference that the max value (\texttt{Accept All}\xspace) is normalized to 100\%.
This enables us to compare sites that have different magnitudes of leakage.
As in first-party ID leaking\xspace, websites in \emph{.fr} engage in less third-party ID synchronization without the user's consent.
On the other end of the spectrum, user choices in sites in \emph{.cz} seem to have little impact: they engage in approximately 4.4 third-party ID synchronizations both when users choose to \texttt{Reject All}\xspace cookies, and in cases where users choose to \texttt{Accept All}\xspace cookies.}
\end{figure*}
\subsection{Does website popularity matter?}
In our next experiment, we explore whether a website's popularity impacts how the website respects the user's choices.
For this reason, we grouped the websites into buckets based on their popularity: the first bucket contains the top 50K websites in the Tranco list, the next bucket contains the next 50K most popular sites (i.e.,\xspace~ranking 50K-100K), etc.\xspace
In Figure~\ref{fig:rank}, we show the extent of first-party ID leaking\xspace for the different buckets for the three cases we study: \texttt{No Action}\xspace (blue bar), \texttt{Reject All}\xspace (red bar), and \texttt{Accept All}\xspace (green bar).
We see that as the popularity of the website decreases (right part of the plot), all bars tend to decrease, implying that the magnitude of tracking through first-party ID leaking\xspace decreases as well.
In Figure~\ref{fig:rank_perc}, we normalize the values (so that the \texttt{Accept All}\xspace corresponds to the same 100\% value).
We see that in this case, blue bars and red bars tend to have a slightly increasing trend to the right.
That is, less popular websites tend to be slightly more aggressive in disregarding user choices.
For example, popular websites (0, 50K) do 67\% of their first-party ID leaking\xspace before the user makes any choice, while less popular sites (400K, 450K) make 77\% of their first-party ID leaking\xspace before the user makes any choice.
In Figure~\ref{fig:rank_trend}, we show an interpolation of the results using a straight line.
In both consent actions (\texttt{No Action}\xspace and \texttt{Reject All}\xspace), we see a positive slope ($R^2$=$0.42$ and $R^2$=$0.04$, respectively).
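The straight-line fits and $R^2$ values can be computed with ordinary least squares. The following pure-Python sketch uses hypothetical bucket percentages (not the measured data) to illustrate the computation:

```python
def linear_fit(xs, ys):
    """Ordinary least squares y = slope*x + intercept, plus the
    coefficient of determination R^2 = 1 - SS_res / SS_tot."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot
    return slope, intercept, r2

buckets = list(range(9))                    # 0-50K, 50K-100K, ..., 400K-450K
pct = [67, 70, 69, 72, 71, 74, 73, 76, 77]  # hypothetical normalized %
slope, intercept, r2 = linear_fit(buckets, pct)
print(f"slope={slope:.2f}, R^2={r2:.2f}")   # positive slope -> increasing trend
```

A positive slope on the \texttt{No Action}\xspace and \texttt{Reject All}\xspace series is what indicates that less popular buckets do a larger share of their leaking before or against the user's choice.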
Similarly, in Figure \ref{fig:cs_rank_all}, we see the same trend across the popularity buckets of websites hosting synchronizing third-parties: less popular websites (towards the right-end of the figures) tend to be more aggressive at disregarding user choices.
\begin{quote}
\noindent{{\bf Finding}: Less popular websites are more aggressive at disregarding users' consent choices, engaging in first-party ID leaking\xspace and third-party ID synchronizations regardless of the user's response.}
\end{quote}
\subsection{Does the hosting country matter?}
Next, we study how websites hosted in different countries (represented by their country-code top-level domain (ccTLD)) treat user consent.
In Figure~\ref{fig:tld_norm}, we present the results for the case of first-party ID leaks.
As we can see, websites under several Europe-based ccTLDs respect the choices of the users more (i.e.,\xspace less first-party ID leaking\xspace): websites with ccTLD \emph{.fr} (France), \emph{.dk} (Denmark), \emph{.nl} (Netherlands), \emph{.at} (Austria) and \emph{.de} (Germany) leak first-party IDs to fewer third-parties than websites with non Europe-based ccTLDs (e.g.,\xspace~\emph{.uk} (UK), \emph{.ca} (Canada), \emph{.au} (Australia)), where the choices of the user have hardly any impact.
In Figure~\ref{fig:tld_perc}, we normalize the results based on the \texttt{Accept All}\xspace.
Websites on the right part of the figure tend to disrespect users' choices: the difference between \texttt{Accept All}\xspace and \texttt{Reject All}\xspace in sites ending in \emph{.cz} and \emph{.ch} seems to be negligible.
Thus, whether the user chooses \texttt{Accept All}\xspace or \texttt{Reject All}\xspace makes little difference.
On the contrary, websites on the left part of the figure seem to respect user choices more.
For example, the difference between \texttt{Accept All}\xspace and \texttt{Reject All}\xspace for \emph{.fr} websites seems to be close to 50\%.
Similarly, the difference between \texttt{No Action}\xspace and \texttt{Accept All}\xspace for \emph{.fr} websites seems to be more than 70\%.
Surprisingly, we see the ccTLD \emph{.eu} being on the right side of the figure, which means that an increased number of websites under this ccTLD are not yet compliant with the GDPR.
Thus, although not perfect, user choices for the websites on the left part of the figure have a meaningful effect, in contrast to websites on the right part.
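The per-ccTLD normalization used in these figures can be sketched as follows. The averages below are illustrative stand-ins, not the measured values (e.g.,\xspace the \emph{.fr} \texttt{Accept All}\xspace figure is assumed; the text only says "more than 5"):

```python
def normalize(per_tld):
    """per_tld: {tld: {action: avg_third_parties}}.
    Express every action relative to Accept All = 100% per ccTLD,
    so sites with different magnitudes of leakage become comparable."""
    out = {}
    for tld, v in per_tld.items():
        base = v["accept"]
        out[tld] = {action: round(100 * val / base, 1) for action, val in v.items()}
    return out

averages = {  # avg unique third-parties learning a first-party ID (illustrative)
    ".fr": {"no_action": 1.5, "reject": 2.8, "accept": 5.2},
    ".cz": {"no_action": 3.5, "reject": 3.7, "accept": 3.7},
}
print(normalize(averages))
```

Under this normalization, a ccTLD where \texttt{Reject All}\xspace sits near 100\% (like \emph{.cz} here) is one where the user's refusal makes essentially no difference.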
Similarly, in Figure \ref{fig:tld_cs_perc}, we plot the same results for third-party ID synchronization.
We see that, again, European ccTLDs (\emph{.fr}, \emph{.dk}, \emph{.nl}, \emph{.at} and \emph{.de}) tend to perform less third-party ID synchronization when no consent is given by the user, than websites with non Europe-based ccTLDs (e.g.,\xspace~\emph{.uk} (UK), \emph{.ca} (Canada), \emph{.au} (Australia), \emph{.ch} (Switzerland)), where the user choices have a much smaller impact.
Surprisingly, we see two European ccTLDs, \emph{.gr} (Greece) and \emph{.cz} (Czech Republic), not performing like other European ccTLDs.
To make matters worse, websites of \emph{.cz} perform more synchronizations when users deny giving consent.
\section{Ineffective Consent: Edge cases}
\label{subsec:edge-cases}
In our dataset, we observed 73 websites that interact with over 100 unique third-parties each, in at least one of the three types of visit.
One such example with extreme behavior is \texttt{laprovence.com}.
When a user visits the website and gives \texttt{Accept All}\xspace consent, the website interacts with 159 different third-parties and performs synchronization for multiple IDs with 59 of these parties.
We observed the values of 37 unique third-party cookies being leaked to third-parties different from the cookie's owner.
In the \texttt{Reject All}\xspace case, the website interacts with 80 third-parties, and performs synchronization for at least one ID with 16 of them.
Interestingly, when the user lands on the website with a clean session and performs \texttt{No Action}\xspace, but simply waits, the website interacts with 97 third-parties and performs synchronization with 29 of them.
Regarding first-party ID leaking, we observed that multiple websites store a cookie labeled as ``necessary'', but then proceed to leak its value to various third-parties.
For example, \texttt{harryanddavid.com} leaks the values of 28 different first-party cookies in the \texttt{Reject All}\xspace and \texttt{No Action}\xspace cases.
Also, \texttt{diariodepontevedra.es} and \texttt{asivaespana.com}, in the \texttt{Reject All}\xspace and \texttt{Accept All}\xspace cases, respectively, perform ID leaking with 38 different third-parties for more than one ID.
In addition, \texttt{camer.be} interacts with 91 unique third-parties in the \texttt{No Action}\xspace case, 94 in the \texttt{Accept All}\xspace case, and surprisingly with 131 in the \texttt{Reject All}\xspace case.
For the \texttt{Reject All}\xspace case, this website is also involved in a major third-party ID synchronization operation.
At the time of crawling, the website interacted with the third-party \texttt{taboola.com}, which stored a cookie with name \texttt{t\_gid} and value \texttt{884d05cc-335c-4226-ab94-7ab6114fef6a-\linebreak~tuct65bfbc8}.
This value was sent to 20 other third-parties.
One interesting finding is that this cookie is stored only when the user declines consent (i.e.,\xspace \texttt{Reject All}\xspace).
Similarly, \texttt{cnnturk.com} is also involved in a major third-party ID synchronization operation.
Specifically, when the user lands on the website, a third-party called \texttt{lijit.com} stores the cookie \texttt{\_ljtrtb\_42}.
The value of this cookie is then sent to 21 other third-parties.
Interestingly, this behavior is observed only after the user has interacted with the cookie consent form (i.e.,\xspace \texttt{Reject All}\xspace and \texttt{Accept All}\xspace cases).
One example value of this cookie that we observed during the \texttt{Accept All}\xspace case is \texttt{c98d9202-8774-4e11-8c90-99d9cb879930-\linebreak~tuct65c0de5}, which can be used to uniquely identify a user.
Note that \texttt{lijit.com} is an ad-serving domain, which can be found in multiple blacklists for tracking domains.
Finally, \texttt{glamour.com} leaks a unique identifier which is set as the value of a first-party cookie.
Specifically, when a user lands on the website, a cookie called \texttt{CN\_xid} is stored, with one example value being \texttt{73a4ff1f-ff45-4943-bdaa-73658b00bd42}.
Then, this value is sent to exactly 21 unique third-parties.
The third-parties that receive the value of the cookie are exactly the same for all 3 types of visits.
An interesting finding is that the third-parties that receive this value are not only domains known for advertising and analytics (e.g.,\xspace~\texttt{google-analytics.com} and \texttt{securepubads.g.doubleclick.net}), but also legitimate and mainstream websites like \texttt{vogue.com} and \texttt{wired.com}.
\section{Related Work}
\label{sec:related}
The recent increased interest of regulators and governments in the privacy rights of Internet users has resulted not only in rules like the GDPR and the California Consumer Privacy Act (CCPA), but also in an important body of research.
In~\cite{fouad:hal-02567022}, the authors investigate the legal compliance of the declared purposes of 20K collected third-party cookies.
Their findings show that purposes declared in cookie policies do not comply with the purpose specification principle in 95\% of cases.
In~\cite{dabrowski2019measuring}, authors collect cookies from the Alexa Top 100K websites and compare their cookie behavior from different vantage points, to investigate whether there are differences in cookie setting when accessing Internet services from different jurisdictions.
Additionally, they study whether cookie setting behavior has changed over time by comparing today's results with a dataset from 2016.
In~\cite{10.1145/3321705.3329806} authors perform an evaluation of the tracking performed in 2K high-traffic websites, hosted both inside and outside the EU.
Specifically, they evaluate the information presented to users and the actual tracking implemented through cookies.
Their results show that the GDPR has impacted website behavior in a truly global way.
US-based websites behave similarly to EU-based ones, while third-party opt-out services reduce the amount of tracking, even for websites which do not put any effort into respecting the GDPR.
On the other hand, they show that cookies can identify users on more than 90\% of the websites they crawled, and they encountered a large number of websites that present deceiving information, making it very difficult, if at all possible, for users to avoid being tracked.
Similar to this work, in~\cite{eijk2019impact}, authors crawl 1.5K EU, US, and Canadian websites from 18 countries and analyze the cookie notices they find.
Using a series of regression models, they find that a website's top-level domain explains a substantial portion of the variance in cookie notice metrics, but the user's vantage point does not, which means that websites follow one set of privacy rules for all their users.
In~\cite{utz2019informed}, authors study the common properties of the graphical user interface of consent notices and conduct three experiments with more than 80K unique users on a German website, to investigate the influence of notice position, type of choice, and content framing on consent.
Their results show that (i) users are more likely to interact with a notice shown in the lower left part of the screen, (ii) users are more willing to accept tracking when given a binary choice than with mechanisms that require them to allow cookie use for each category or company individually, and (iii) the widespread practice of nudging has a large effect on the choices users make.
In~\cite{urban2018unwanted} authors study the impact of the legislation on cookie syncing between third-parties.
They show that the general structure of how the entities are arranged is not affected by the GDPR, but the new regulation has a statistically significant impact on the number of connections, which shrank by 40\% in the GDPR era.
In an effort closest to ours, Matte et al.\xspace analyzed 23K European websites for violations of the GDPR and the ePrivacy Directive in implementations of cookie banners, based on the storage of consent~\cite{9152617}.
That is, they (i) capture the user's choice (consent or not), (ii) measure whether the websites register the same response as the user's choice, (iii) measure whether websites register any response \emph{before} the users click their preference.
They found that: 141 websites register positive consent even if the user has not made their choice; 236 websites nudge the users towards accepting consent by pre-selecting options; and 27 websites store a positive consent even if the user has explicitly opted out.
Performing extensive tests on 560 websites, they found at least one violation in 54\% of them.
Although our work and~\cite{9152617} share similar goals, they clearly have significant differences.
First, although~\cite{9152617} focuses on cookies as the main tracking mechanism, in this work, we focus on post-cookie tracking mechanisms including browser fingerprinting, ID leaking, and ID synchronization.
In this aspect, we explore whether sites use such post-cookie tracking mechanisms to bypass any consent the user has provided for cookies.
Second,~\cite{9152617} focuses on whether the consent manager registers the same response as the user's input.
We follow a different methodology and measure \emph{not} the response registered, but the actual tracking mechanisms that are activated when the users access a website.
\section{Discussion}
\point{GDPR compliance}
One question that comes to mind is whether these websites are in violation of the GDPR and the ePrivacy Directive.
Obviously, one cannot make such an umbrella statement for all the websites studied in this paper.
Such violations should be studied on a case-by-case basis.
Even further, each website is different, and may have a legal basis for collecting user data that goes beyond user consent.
What we identify in this paper is a \emph{disparity} between (i) what the users perceive about the collection of their data, and (ii) what some websites implement with respect to data processing.
Indeed, by being shown a cookie consent banner, users perceive that they are being asked to give their permission to the website to collect and process their data.
Even further, when they are given several choices, users feel that they are empowered to give fine-grained permission, which will obviously be taken into account.
Unfortunately, this perception of the users is completely different from what various websites implement.
In this paper, we saw that several websites collect (and share with third-parties) information about their users, even before the users had the opportunity to register their preference.
Even worse, when users stated that they would like to reject all cookies, the collection of their data intensified even further.
Indeed, each website is ultimately responsible for the consent it asks of its visitors.
However, it is not obvious whether the legal responsibility is shifted to the Consent Management Platform (CMP).
Nonetheless, considering our results, it is hard to believe that all these publishers disregard users' consent choices unintentionally (e.g.,\xspace~due to software bugs, bad developer practices or wrong integration with their CMP).
Interestingly, the existing literature, websites and blog posts about the GDPR and the changes it brought to the Internet and user tracking~\cite{solomos2020gdpr-changes} focus solely on how identifiable information stored in cookies is maintained.
However, as highlighted here, the GDPR is not only about cookies.
Instead, we aim to increase user awareness of the GDPR (non-)compliance of deployed stateless (i.e.,\xspace~cookie-less) tracking, and to influence a change in the language used in consent request statements, so that they are GDPR-compliant and closely reflect what websites actually do, in comparison to what is explained to the user.
Furthermore, our analysis of tracking per country code reveals significant discrepancies across EU and non-EU countries.
These results highlight the lack of effort from specific local governments regarding the digital privacy rights of their citizens.
Our results can motivate them to take action and increase GDPR enforcement, so that websites hosted in their countries become aligned with the rest of the EU countries with respect to GDPR compliance.
\point{Inbound vs. Outbound Information}
Although user tracking without user consent is generally undesirable, in this paper we studied some sophisticated approaches to user tracking (such as first-party ID leaking\xspace and ID synchronization) that involve not only data collection, but also data sharing with third-parties.
Indeed, both approaches provide third-parties with identifiers associated with the current user.
In this way, third-parties learn that this user has visited the specific website (even if they are not embedded in that website).
And this happens even before the user has given any permission for data collection on the cookie consent banner.
To put it simply: the website has already told third-parties that this user has just visited, while the user still makes up their mind whether to give consent for data collection or not.
Thus, the user is asked to consent to something that has already happened and that will keep happening even if the user denies consent.
\point{Edge Cases}
Someone could argue that the edge-cases studied in this paper are momentary, and cannot be held against websites as proof of non-GDPR compliance.
However, even though we acknowledge the dynamic nature of websites, we made a best effort to provide results that are repeatable across multiple crawls.
In fact, changes in the third-parties embedded in a website could change its intensity of tracking.
We anticipate that such changes are transient and infrequent, and that a high intensity of tracking is repeatable.
\point{Methodology}
The methodology we presented in this paper can be transformed into an auditing tool for regulators, stakeholders and privacy-policy makers, for verifying compliance with the GDPR, ePrivacy Directive, and users' privacy rights.
Our approach links together (i) the user consent requested by the webmaster with (ii) the actions taken by the website based on the particular consent given.
Apart from these entities, browser vendors have already shown interest in blocking bad policies on websites~\cite{cookiebot2021chrome-cookies,wilander2019tracking-prevention,dunn2020chrome-blocking-downloads} and our methodology can help towards exactly these goals.
Specifically, by following our methodology, browser vendors can detect at run-time stateless device fingerprinting attempts~\cite{brave2020} and compare these actions with given user consent.
\section{Conclusion}
\label{sec:conclusion}
Over the past couple of years, an increasing number of websites have started to present users with cookie consent banners: pop-up windows that ask for the user's permission to send/receive cookies.
Such banners provide a variety of choices including (i) accept all, (ii) reject all, and (iii) accept some cookies.
In this paper, we study whether websites that present users with cookie consent banners track their users using ``non-cookie'' approaches, including first-party ID leaking\xspace, third-party ID synchronization and browser fingerprinting.
In our experiments, we found 15,334 websites that track their users using first-party ID leaking\xspace.
Even further, this tracking happened despite the fact that users of these websites had rejected all cookies in the cookie consent banner!
In fact, most of these websites (14,238) had started the first-party ID leaking\xspace tracking even before the users had any opportunity to register their consent choice.
Therefore, we highlight a gap between what users expect to happen when they see a cookie consent banner and what several websites do as a result of users' choices.
We feel that research like this helps increase transparency on the Web and expose websites which do not correspond to users' expectations, and are non-GDPR compliant.
Future work could focus on even harder questions such as:
How should third-parties integrate with CMP prompts?
Is it intentional that some third-parties only take action on the ``reject all'' option? If yes, why?
Are some CMPs better than others with respect to GDPR compliance?
Are all these privacy violations the website's, the CMP's, or the third-party's fault?
\section{Introduction}
Offline handwritten text recognition consists of recognizing the text in a scanned document.
This task is usually performed in two steps, by two different neural networks. In the first step, the document image is cut into text regions: this is the segmentation step. Then, Optical Character Recognition (OCR) is applied to each text region image. Following the advances of the recognition process over time, segmentation has been performed on larger and larger entities, from characters in the early days to text lines more recently, gradually decreasing the amount of segmentation labels required to train the system.
As a matter of fact, producing segmentation labels by hand, in addition to transcription labels, is costly. Moreover, the use of a two-step process requires a clear definition of what a line is in a non-latent pivot format, \textit{i.e.} a line of text, in order to generate target labels. However, the definition of a text line raises several questions that prevent optimizing its detection so as to maximize recognition performance: is a text line a bounding box, a polygon, a set of pixels or a baseline? How should it be measured? Which loss should it be trained with? Given all these open questions, we claim that segmentation and recognition should be trained in an end-to-end fashion using a latent space between both stages. Indeed, this circumvents any text line definition while reducing the annotation needs.
In this paper, we propose the \modelname{} (\modelacc{}), an end-to-end recurrence-free Fully Convolutional Network (FCN) free of those issues, further reducing the need for labels in two ways. First, the proposed model performs OCR at paragraph level, so it does not need line-level segmentation labels. Second, it does not even require line breaks in the transcription labels. The proposed model completely circumvents the line segmentation problem using a very straightforward approach: the input paragraph image is analysed in a 2D fashion by a classical fully convolutional architecture, leading to a 2D latent space that is reshaped into a 1D sequential latent space. Finally, the CTC loss is simply used to align the 1D character prediction sequence with the paragraph transcription.
This paper is organized as follows. Related works are presented in Section \ref{related_work}. The proposed \modelacc{} architecture is described in Section \ref{architecture}. Section \ref{experiments} presents the experimental environment and the results. We draw conclusions in Section \ref{conclusion}.
\section{Related Works}
\label{related_work}
In the literature, multi-line text recognition is mainly carried out in two steps. First, a text region (line/word) segmentation is performed \cite{Renton2018,DHSegment,ARUNet}; then, an OCR is applied to the extracted text region images \cite{Coquenet2019,Coquenet2020,Yousef_line,Michael2019} thanks to the Connectionist Temporal Classification (CTC) \cite{CTC}. As shown in \cite{Coquenet2021}, deep neural networks perform well on both tasks separately but, when put together, errors in the segmentation stage lead to errors in the OCR stage, resulting in a higher Character Error Rate (CER).
Recently, one can notice a trend towards unified models. We can classify them into two categories: those performing a text region segmentation prior to recognition, and those without explicit segmentation.
\subsection{Segmentation-based approaches}
Segmentation-based approaches, by definition, require line or word segmentation labels in addition to the associated transcription labels; the line breaks must therefore be annotated.
Among these approaches, \cite{Carbonell2019,Carbonell2020,Chung2020} are based on object-detection methods: a Region Proposal Network (RPN), followed by a non-maximal suppression process and Region Of Interest (ROI) pooling, generates line or word bounding boxes. An OCR is then applied to these bounding boxes.
Other approaches are based on predicting start-of-line coordinates. While in \cite{Moysset2017} the line is considered horizontal, in \cite{Wigington2018,Wigington2019} lines are normalized by recurrently predicting coordinates. Finally, an OCR is applied to these lines.
\subsection{Segmentation-free approaches}
Since they do not explicitly segment the input image, segmentation-free approaches do not require any segmentation labels; the models can be trained using transcription labels only.
In \cite{Bluche2016,Coquenet2021}, the proposed models incorporate an attention mechanism to recurrently generate line features, performing a kind of implicit line segmentation: an encoder generates features from the input image; the attention process then sequentially selects the features to focus on; finally, a decoder predicts characters from these features. \cite{Bluche2017} proposed a similar approach with implicit character segmentation.
To our knowledge, only two other works have proposed segmentation-free approaches for multi-line text recognition. \cite{Schall2018} focuses on the loss to tackle the two-dimensional aspect of the task, proposing Multi-Dimensional Connectionist Classification (MDCC). Ground truth transcriptions are converted into a two-dimensional model using a Conditional Random Field (CRF), which enables jumping from one line to the next by adding a line separator label in addition to the standard CTC blank label. In \cite{Yousef2020}, the model is trained to unfold the input multi-line text image into a sequence of lines, thus forming a single large line. The task is thereby reduced to a one-dimensional problem and the model can be trained with the standard CTC loss.
\cite{Schall2018} and \cite{Bluche2017} are part of the first works proposed for multi-line text recognition, but they remain below the state of the art. While \cite{Bluche2016} requires pretraining on line-level images, \cite{Coquenet2020} requires line breaks in the transcription labels. \cite{Yousef2020} is the only model that can be trained from scratch, without any segmentation labels or line breaks in the transcription labels; as a counterpart, however, it requires some hyperparameters to be adapted to each dataset: it needs input images of fixed size and includes intermediate bilinear interpolations with fixed dimensions that are specific to each dataset.
In this work, we propose an end-to-end model trained in the same conditions, \textit{i.e.} with the paragraph transcription as the only label, without any line breaks. Instead of unfolding the input image as in \cite{Yousef2020}, we propose to train a model to both predict and align characters so as to obtain vertical separation between lines, preserving the two-dimensional nature of the task. Contrary to the work presented in \cite{Yousef2020}, the proposed model handles variable-size input images, making it flexible enough to be used on multiple datasets without modifying any hyperparameter.
\section{SPAN Architecture}
\label{architecture}
We propose an end-to-end model to perform the optical character recognition of paragraphs. We wanted to keep the original shapes of the input images in order to preserve both their ratio and their details, and to be flexible enough to adapt to a large variety of datasets. To this end, we use a Fully Convolutional Network as the encoder to analyse the 2D paragraph images. An implicit line segmentation is performed by reducing the vertical axis through row concatenation, reshaping the 2D latent space into a 1D latent space and thus acting as a collapse operator for this dimension. The training process is based on the standard CTC loss, which aligns the label sequence with the data in the 1D latent space without any need for line breaks in the annotation.
Figure \ref{fig:archi-overview} shows an overview of the model architecture: it consists of an FCN encoder which extracts the features. Then, a convolutional layer predicts the character probabilities. Finally, the rows of predictions are concatenated to obtain one single large row of predictions. This brings us back to a one-dimensional sequence alignment problem, which is handled with the standard CTC loss.
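As an illustration, this predict-then-reshape pipeline can be sketched in PyTorch. This is a minimal sketch, not the actual \modelacc{}: a toy two-layer encoder stands in for the full FCN, so the downsampling factors (4 and 2 instead of 32 and 8) and all layer sizes here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ToySPAN(nn.Module):
    """Toy stand-in for the SPAN pipeline. The real encoder is a stack of
    CB/DSCB blocks reducing H by 32 and W by 8; here, two strided
    convolutions (reducing H by 4 and W by 2) keep the example short."""
    def __init__(self, n_chars):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=(2, 2), padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=(2, 1), padding=1), nn.ReLU(),
        )
        # decision layer: per-position character (+ CTC blank) scores
        self.decoder = nn.Conv2d(64, n_chars + 1, kernel_size=5, padding=2)

    def forward(self, x):                       # x: (B, 3, H, W)
        f = self.encoder(x)                     # (B, 64, H/4, W/2)
        logits = self.decoder(f)                # (B, N+1, H/4, W/2)
        b, c, h, w = logits.shape
        # concatenate the h prediction rows into one long 1D sequence
        return logits.permute(0, 2, 3, 1).reshape(b, h * w, c)

model = ToySPAN(n_chars=79)                     # e.g. the IAM charset size
x = torch.randn(2, 3, 64, 128)                  # dummy paragraph images
out = model(x)                                  # (2, 16*64, 80)

# CTC alignment against the (line-break-free) paragraph transcription
ctc = nn.CTCLoss(blank=79)
log_probs = out.log_softmax(-1).permute(1, 0, 2)   # time-major: (T, B, N+1)
targets = torch.randint(0, 79, (2, 50))            # dummy transcriptions
loss = ctc(log_probs, targets,
           input_lengths=torch.full((2,), out.size(1)),
           target_lengths=torch.full((2,), 50))
print(out.shape, loss.item())
```

The key point is the final `reshape`: since tensors are stored row-major, the prediction rows are read left-to-right, top-to-bottom, matching natural reading order.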
\begin{figure*}[htbp!]
\centering
\begin{subfigure}[b]{\textwidth}
\centering
\includegraphics[width=\textwidth]{img/archi_overview.pdf}
\caption{Global architecture overview.}
\label{fig:archi-overview}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\linewidth]{img/encoder_overview.pdf}
\caption{FCN Encoder overview. CB: Convolution Block, DSCB: Depthwise Separable Convolution Block.}
\label{fig:encoder-overview}
\end{subfigure}
\caption{Model visualization. \ref{fig:archi-overview} presents an overview of the architecture and \ref{fig:encoder-overview} focuses on the encoder.}
\label{fig:model-overview}
\end{figure*}
\subsection{Encoder}
The purpose of the encoder is to extract features from the input images. It uses strided convolutions in order to reduce the memory consumption: it takes an input image $\tens{X} \in \mathbb{R}^{H \times W \times C}$ and outputs feature maps $\tens{f} \in \mathbb{R}^{\frac{H}{32} \times \frac{W}{8} \times 512}$, where H, W and C are respectively the height, the width and the number of channels (C=1 for a grayscale image, C=3 for an RGB image). The encoder architecture is depicted in Figure \ref{fig:encoder-overview}. It corresponds to the encoder proposed in \cite{Coquenet2021}, with the number of channels increased from 16-256 to 32-512. It is made up of a succession of Convolution Blocks (CB) and Depthwise Separable Convolution Blocks (DSCB).
CB is defined as two convolutional layers followed by instance normalization and a third convolutional layer. This third convolutional layer has a stride of $1 \times 1$ for $CB\_1$, $2 \times 2$ for $CB\_2$ to $CB\_4$ and $2 \times 1$ for $CB\_5$ to $CB\_6$.
DSCB follows the same structure as CB but the convolutional layers are replaced by Depthwise Separable Convolutions \cite{DepthSepConv} in order to reduce the number of parameters at stake. Moreover, the third DSC always has a stride of $1 \times 1$, which enables residual connections with an element-wise sum operator between the DSCBs.
For both blocks, convolutional layers have a $3 \times 3$ kernel, $1 \times 1$ padding and are followed by ReLU activations. In addition, Diffused Mix Dropout (DMD)~\cite{Coquenet2021} is used at three potential locations inside each block to reduce overfitting.
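A minimal PyTorch sketch of these two blocks follows. Two points are assumptions rather than the authors' exact design: standard dropout stands in for Diffused Mix Dropout, and the DSCB stride is placed on the second convolution (the text only fixes the third one to $1 \times 1$).

```python
import torch
import torch.nn as nn

def dsc(c_in, c_out, stride=(1, 1)):
    """Depthwise Separable Convolution: depthwise 3x3 + pointwise 1x1."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_in, 3, stride=stride, padding=1, groups=c_in),
        nn.Conv2d(c_in, c_out, 1),
    )

class ConvBlock(nn.Module):
    """CB: two 3x3 convs, instance normalization, then a third
    (possibly strided) 3x3 conv. Plain dropout stands in for DMD."""
    def __init__(self, c_in, c_out, stride=(1, 1)):
        super().__init__()
        self.conv1 = nn.Conv2d(c_in, c_out, 3, padding=1)
        self.conv2 = nn.Conv2d(c_out, c_out, 3, padding=1)
        self.norm = nn.InstanceNorm2d(c_out)
        self.conv3 = nn.Conv2d(c_out, c_out, 3, stride=stride, padding=1)
        self.act = nn.ReLU()
        self.drop = nn.Dropout(0.2)

    def forward(self, x):
        x = self.act(self.conv1(x))
        x = self.drop(self.act(self.conv2(x)))
        return self.act(self.conv3(self.norm(x)))

class DSCBlock(nn.Module):
    """DSCB: same layout with depthwise separable convolutions. The third
    conv always keeps stride 1x1, so a residual connection can be added.
    Placing the stride on the second conv is an assumption."""
    def __init__(self, c, stride=(1, 1)):
        super().__init__()
        self.conv1 = dsc(c, c)
        self.conv2 = dsc(c, c, stride=stride)
        self.norm = nn.InstanceNorm2d(c)
        self.conv3 = dsc(c, c)          # always stride 1x1
        self.act = nn.ReLU()

    def forward(self, x):
        out = self.act(self.conv1(x))
        out = self.act(self.conv2(out))
        out = self.act(self.conv3(self.norm(out)))
        return out + x if out.shape == x.shape else out

x = torch.randn(1, 3, 64, 64)
y = ConvBlock(3, 32, stride=(2, 2))(x)   # halves both spatial dimensions
z = DSCBlock(32)(y)                      # same shape: residual applies
print(y.shape, z.shape)
```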
\subsection{Decoder}
The decoder aims at predicting and aligning the probabilities of the characters and the CTC blank label for each 2D position of the features $\tens{f}$.
The decoder is made up of a single convolutional layer with kernel $5 \times 5$, stride $1 \times 1$ and padding $2 \times 2$. It outputs $N+1$ channels, $N$ being the size of the charset. Finally, the $\frac{H}{32}$ rows are concatenated to obtain the one-dimensional prediction sequence $p\in \mathbb{R}^{(\frac{H}{32} \cdot \frac{W}{8}) \times (N+1)}$ as depicted in Figure \ref{fig:reshape}. The CTC loss is then computed between this one-dimensional prediction sequence and the paragraph transcription ground truth, without line breaks.
\begin{figure}[htbp!]
\centering
\includegraphics[width=\textwidth]{img/reshape_operation.pdf}
\caption{Reshape operation and loss visualization. No computations are performed in the reshape operation, both left and right tensors represent characters and CTC blank label probabilities. The CTC loss is computed between the one-dimensional probabilities sequence and the paragraph transcription.}
\label{fig:reshape}
\end{figure}
We can highlight some important aspects about the decoder:
\begin{itemize}
\item In this work, the CTC blank label has a new function. Indeed, in standard OCR applied to text lines, the CTC blank label enables recognizing two identical successive characters and predicting ``nothing'', acting as a wildcard. Here, it is also used to separate lines in a two-dimensional context, since it allows labeling the line spacing in the input image.
\item One should notice that the prediction occurs before the reshaping to 1D, which makes it possible to take advantage of the two-dimensional context in the decision layer. This enables localizing the previous and next lines, aligning the predictions of the same text line on the same row, and separating them from the predictions of the other text lines.
\item Since the prediction rows are concatenated and processed sequentially, nothing prevents the model from predicting the beginning of a text line on one row and its end on the next one, as long as there is enough space between this text line and the following one. In Section \ref{section:results}, we show that this allows us to process inclined lines.
\end{itemize}
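To illustrate how the blank label separates lines, here is a minimal pure-Python sketch of best-path decoding (argmax per position, collapse repeats, drop blanks); the blank index, toy charset, and toy lattice are illustrative assumptions, not the actual implementation.

```python
def best_path_decode(prob_seq, charset, blank=0):
    """prob_seq: the concatenated 1D sequence of per-position probability
    vectors over (blank + charset). Greedy CTC decoding: argmax per
    position, collapse repeats, then drop blanks."""
    path = [max(range(len(p)), key=p.__getitem__) for p in prob_seq]
    out, prev = [], None
    for k in path:
        if k != prev and k != blank:
            out.append(charset[k - 1])   # labels 1..N map to the charset
        prev = k
    return "".join(out)

# Toy lattice with charset "ab" (blank = index 0): the middle blank run
# plays the role of the line spacing between two prediction rows.
charset = "ab"
probs = [
    [0.1, 0.8, 0.1],    # 'a'
    [0.1, 0.8, 0.1],    # 'a' repeated -> collapsed
    [0.9, 0.05, 0.05],  # blank (line spacing)
    [0.1, 0.1, 0.8],    # 'b'
    [0.1, 0.1, 0.8],    # 'b' repeated -> collapsed
]
print(best_path_decode(probs, charset))  # -> "ab"
```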
\section{Experimental study}
\label{experiments}
\subsection{Datasets}
We evaluate our model on three popular datasets at paragraph level: RIMES \cite{RIMES}, IAM \cite{IAM} and READ 2016 \cite{ICFHR_READ2016}.
\subsubsection{RIMES}
We used the RIMES dataset, which is made up of French handwritten paragraphs produced in mail-writing scenarios. The images are gray-scale with a resolution of 300 dpi. In the official split, 1,500 paragraphs are dedicated to training and 100 paragraphs to evaluation. The last 100 training images are used for validation so as to be comparable with the state of the art.
\subsubsection{IAM}
The IAM dataset corresponds to handwritten copies of English text passages extracted from the LOB corpus. The images are gray-scale handwritten paragraphs with a resolution of 300 dpi. In this work, we used the unofficial but commonly used split detailed in Table \ref{table:split}.
\subsubsection{READ 2016}
The READ 2016 dataset corresponds to Early Modern German handwriting. It was introduced in the ICFHR 2016 competition on handwritten text recognition and is a subset of the Ratsprotokolle collection used in the READ project. Images are in color, and we used the paragraph-level segmentation. We assume that these images also have a resolution of around 300 dpi.
Some of the experiments in Section \ref{experiments} imply pretraining on the line-level images of these three datasets. The corresponding splits are shown in Table \ref{table:split}.
\begin{table}[!h]
\caption{Datasets split in training, validation and test sets and associated charset size}
\centering
\resizebox{0.75\linewidth}{!}{
\begin{tabular}{ c c c c c c}
\hline
Dataset & Level & Training & Validation & Test & Charset size\\
\hline
\hline
\multirow{2}{*}{RIMES} & Line & 10,532 & 801 & 778 & \multirow{2}{*}{100}\\
& Paragraph & 1,400 & 100 & 100 & \\
\hline
\multirow{2}{*}{IAM} & Line & 6,482 & 976 & 2,915 & \multirow{2}{*}{79}\\
& Paragraph & 747 & 116 & 336 & \\
\hline
\multirow{2}{*}{READ 2016} & Line & 8,349 & 1,040 & 1,138 & \multirow{2}{*}{89}\\
& Paragraph & 1,584 & 179 & 197 & \\
\hline
\end{tabular}
}
\label{table:split}
\end{table}
Paragraph image examples from these three datasets are depicted in Figure \ref{fig:dataset}. The IAM layout is the most structured and regular. RIMES brings some irregularities in terms of line spacing, text inclination and horizontal text alignment. Finally, the READ 2016 dataset is more complex in terms of noise, text line separation (due to ascenders and descenders) and size variety.
\begin{figure*}[htbp!]
\centering
\begin{minipage}[c]{0.45\textwidth}
\begin{subfigure}[b]{\textwidth}
\centering
\includegraphics[width=\textwidth, frame]{img/sample_iam.png}
\caption{IAM}
\end{subfigure}
\begin{subfigure}[b]{\textwidth}
\centering
\includegraphics[width=\textwidth, frame]{img/sample_rimes.png}
\caption{RIMES }
\end{subfigure}
\end{minipage}
\hfill
\begin{minipage}[c]{0.45\textwidth}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\linewidth, frame]{img/sample_read2016.jpeg}
\caption{READ 2016}
\end{subfigure}
\end{minipage}
\caption{Paragraph image examples from the RIMES, IAM and READ 2016 datasets.}
\label{fig:dataset}
\end{figure*}
\subsection{Preprocessing}
Paragraph images are downscaled by a factor of 2 through bilinear interpolation, leading to a resolution of 150 dpi. Gray-scale images are converted into RGB images by concatenating the same values three times, for transfer learning purposes. They are then normalized (zero mean and unit variance), considering the channels independently.
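A minimal sketch of this preprocessing in PyTorch follows. For brevity the normalization statistics are computed per image here; whether the statistics are per image or dataset-wide is an assumption not settled by the text above.

```python
import torch
import torch.nn.functional as F

def preprocess(gray):
    """gray: (H, W) grayscale paragraph image at 300 dpi.
    Downscale by 2 (bilinear, ~150 dpi), repeat the channel three times
    to get an RGB-like tensor, then normalize each channel."""
    x = gray[None, None]                        # (1, 1, H, W)
    x = F.interpolate(x, scale_factor=0.5, mode="bilinear",
                      align_corners=False)      # -> (1, 1, H/2, W/2)
    x = x.repeat(1, 3, 1, 1)                    # grayscale -> 3 channels
    mean = x.mean(dim=(0, 2, 3), keepdim=True)  # per-channel statistics
    std = x.std(dim=(0, 2, 3), keepdim=True)
    return (x - mean) / (std + 1e-8)

img = torch.rand(64, 128)                       # dummy grayscale paragraph
out = preprocess(img)
print(out.shape)                                # torch.Size([1, 3, 32, 64])
```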
\subsection{Data Augmentation}
Data augmentation is applied at training time to reduce over-fitting. The augmentation techniques are applied in this order: resolution modification, perspective transformation, elastic distortion, random projective transformation (from \cite{Yousef2020}), dilation and erosion, brightness adjustment, contrast adjustment, and sign flipping. Each transformation has a probability of 0.2 of being applied. Except for perspective transformation, elastic distortion and random projective transformation, which are mutually exclusive, each augmentation technique can be combined with the others.
\subsection{Metrics}
The Character Error Rate (CER) and the Word Error Rate (WER) are used to evaluate the quality of the text recognition. They are both computed with the Levenshtein distance between the ground truth text and the predicted text at paragraph level, without line breaks. Those edit distances are then normalized by the length of the ground truth.
Other metrics are provided in the following experiments, such as the number of parameters of the models.
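These metrics can be sketched in a few lines of Python (a standard dynamic-programming edit distance; not the authors' implementation):

```python
def levenshtein(ref, hyp):
    """Edit distance between two sequences (dynamic programming)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1]

def cer(ref, hyp):
    """Character Error Rate: edit distance over characters,
    normalized by the ground-truth length."""
    return levenshtein(ref, hyp) / len(ref)

def wer(ref, hyp):
    """Word Error Rate: same, over whitespace-separated words."""
    return levenshtein(ref.split(), hyp.split()) / len(ref.split())

print(cer("hello world", "helo world"))  # 1 edit / 11 characters
print(wer("hello world", "helo world"))  # 1 edit / 2 words -> 0.5
```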
\subsection{Training details}
We used the PyTorch framework to train and evaluate our models. In all experiments, the networks are trained with the Adam optimizer with an initial learning rate of $10^{-4}$. Training is performed on a single Tesla V100 GPU (32 GB) for 2 days, with a mini-batch size of 4 for paragraph images and 16 for text line images.
\subsection{Additional information}
We do not use any post-processing, \textit{i.e.} neither a language model nor a lexicon constraint. Moreover, we only use best-path decoding to get the final predictions from the character probability lattice. We use exactly the same training configuration from one dataset to another, without model modification, except for the last layer, which depends on the charset size.
\subsection{Results}
\label{section:results}
\subsubsection{Comparison with state of the art}
In this section, we compare our approach to state-of-the-art models on the RIMES, IAM and READ 2016 datasets, at paragraph level and under the same conditions, \textit{i.e.} without a language model or lexicon constraint.
Before comparing the obtained results to the state of the art, it is important to understand the experimental conditions of each method. Table \ref{table:comparison} shows model details that should be taken into account to fairly compare the following tables of results. Quantitative metrics are computed on the IAM dataset, without automatic mixed precision (for a fair comparison of memory usage). From left to right, the columns respectively denote the architecture, the number of trainable parameters, the maximum GPU memory usage during training (data augmentation included), the minimum transcription level required, the minimum segmentation level required, the use of PreTraining (PT) on subimages, the use of specific Curriculum Learning (CL) and finally the Hyperparameter Adaptation (HA) requirements from one dataset to another.
\begin{table*}[!h]
\caption{Requirements comparison of the \modelacc{} with the state-of-the-art approaches.}
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{ l c c c c c c c c}
\hline
Architecture & \# Param. & Max memory & Transcription label & Seg. label & PT & CL & HA\\
\hline
\hline
\cite{Carbonell2019} RPN+CNN+BLSTM & & & Word & Word & \\
\cite{Chung2020} RPN+CNN+BLSTM & & & Word & Word & \\
\cite{Wigington2018} RPN+CNN+BLSTM & & & Line & Line \\
\cite{Coquenet2021} FCN+LSTM* & 2.7 M & 2.2 Gb & Paragraph + line breaks & Paragraph & Line & \ding{55} & \ding{55}\\
\cite{Bluche2017} CNN+MDLSTM** & & & Paragraph & Paragraph & Line & \checkmark & \ding{55} \\
\cite{Bluche2016} CNN+MDLSTM* & & & Paragraph & Paragraph & Line & \checkmark & \ding{55}\\
\cite{Yousef2020} GFCN & 16.4 M & 8.8 Gb & Paragraph & Paragraph & \ding{55} & \ding{55} & \checkmark\\
{[}This work - SPAN{]} FCN & 19.2 M & 5.1 Gb & Paragraph & Paragraph & Line & \ding{55} & \ding{55}\\
\hline
* with line-level attention\\
** with character-level attention\\
\end{tabular}
}
\label{table:comparison}
\end{table*}
As one can see, the models from \cite{Carbonell2019,Chung2020,Wigington2018} require transcription and segmentation labels at word or line level to be trained, which implies more costly annotations. The models from \cite{Bluche2016,Bluche2017,Coquenet2021} and the \modelacc{} are pretrained on text line images to speed up convergence and reach better results, thus also using line segmentation and transcription labels even if this is not strictly necessary. While the model from \cite{Coquenet2021} needs line breaks in the transcription annotation, \cite{Bluche2016,Bluche2017} used a specific curriculum learning method for training. In \cite{Yousef2020}, some hyperparameters must be modified from one dataset to another in order to reach optimal performance, namely the fixed input dimension and two crucial intermediate upsampling sizes. We do not have such a problem since we work with input images of variable size and only rely on the resolution to be robust to the variety of datasets. Moreover, despite a larger number of parameters (+17\% compared to \cite{Yousef2020}), the \modelacc{} requires less GPU memory, which is a critical point when training deep neural networks.
Tables \ref{table:rimes-pg}, \ref{table:iam-pg} and \ref{table:read2016-pg} show the results of the \modelacc{} compared to the state-of-the-art approaches for the RIMES, IAM and READ 2016 datasets, respectively. One can notice that we reach competitive results on these three datasets, each having its own complexities, without any hyperparameter adaptation. The results here include model pretraining on line images, but the model can be trained without pretraining, \textit{i.e.} without using any line-level annotation, while keeping competitive results, as shown in Section \ref{section:pretrain}.
\begin{table}[!h]
\caption{Comparison of the \modelacc{} results with the state-of-the-art approaches at the paragraph level on the RIMES dataset.}
\centering
\resizebox{0.7\linewidth}{!}{
\begin{tabular}{ l c c c c }
\hline
\multirow{2}{*}{Architecture} & CER (\%) & WER (\%) & CER (\%) & WER (\%) \\
& validation & validation & test & test\\
\hline
\hline
\cite{Bluche2016} & 2.5 & 12.0 & 2.9 & 12.6 \\
\cite{Wigington2018} & & & 2.1 & 9.3 \\
\cite{Coquenet2021} & \textbf{1.74} & \textbf{8.72} & \textbf{1.90} & \textbf{8.83} \\
This work - \modelacc{} & 3.56 & 14.29 & 4.17 & 15.61 \\
\hline
\end{tabular}
}
\label{table:rimes-pg}
\end{table}
\begin{table*}[!h]
\caption{Comparison of the \modelacc{} results with the state-of-the-art approaches at the paragraph level on the IAM dataset.}
\centering
\resizebox{0.7\linewidth}{!}{
\begin{tabular}{ l c c c c}
\hline
\multirow{2}{*}{Architecture} & CER (\%) & WER (\%) & CER (\%) & WER (\%) \\
& validation & validation & test & test \\
\hline
\hline
\cite{Carbonell2019}*& 13.8 & & 15.6 & \\
\cite{Chung2020} & & & 8.5 & \\
\cite{Wigington2018}& & & 6.4 & 23.2 \\
\cite{Coquenet2021}& \textbf{3.04} & \textbf{12.69} & \textbf{4.32} & \textbf{16.24} \\
\cite{Bluche2017}& & & 16.2 & \\
\cite{Bluche2016}& 4.9 & 17.1 & 7.9 & 24.6 \\
\cite{Yousef2020} & & & 4.7 & \\
This work - \modelacc{} & 3.57 & 15.31 & 5.45 & 19.83 \\
\hline
\multicolumn{5}{l}{*Results are given for page level}\\
\end{tabular}
}
\label{table:iam-pg}
\end{table*}
\begin{table}[!h]
\caption{Comparison of the \modelacc{} results with the state-of-the-art approaches at the paragraph level on the READ 2016 dataset.}
\centering
\resizebox{0.7\linewidth}{!}{
\begin{tabular}{ l c c c c}
\hline
\multirow{2}{*}{Architecture} & CER (\%) & WER (\%) & CER (\%) & WER (\%) \\
& validation & validation & test & test\\
\hline
\hline
\cite{Coquenet2021} & \textbf{3.75} & \textbf{18.61} & \textbf{3.63} & \textbf{16.75} \\
This work - \modelacc{} & 5.09 & 23.69 & 6.20 & 25.69\\
\hline
\end{tabular}
}
\label{table:read2016-pg}
\end{table}
\subsubsection{\modelacc{} prediction visualization}
Figure \ref{fig:viz} presents a visualization of the \modelacc{} prediction for an example of the RIMES test set. Character predictions are shown in red; they appear as rectangles since they are resized to fit the input image size (the feature size is $\frac{H}{32} \times \frac{W}{8}$). Combined with the receptive field effect, this explains the shift that can occur between a prediction and the corresponding text. As one can notice, text line predictions are totally aligned, or aligned by blocks; the lines are well separated by blank labels, which act as line spacing labels. This block alignment enables handling downward inclined lines, especially lines 3 and 4. Moreover, the model does not degrade in the presence of large line spacings.
\begin{figure*}[h!]
\begin{minipage}[b]{0.55\textwidth}
\includegraphics[width=\textwidth, frame]{img/RIMES-test_86_att.png}
\end{minipage}
\begin{minipage}[b]{0.45\textwidth}
\scriptsize{
Monsieur, depuis la 02 Ferries 2007 je suis\\
\\
de\\
enu maman d'un petit garçon, j'aimer\\
ais avoir\\
un rendez-vous\\
s pour sousoire à une m\\
ntuelle sont\\
\\
\\
\\
\\
\\
Je vous pris d'agrée, m\\
ansieurs, mes\\
salutation dis\\
tinguées\\
}
\end{minipage}
\par\medskip
Monsieur, depuis l\textbf{a} 02 Fe\textbf{r}rie\textbf{s} 2007 je suis de\textbf{\textit{v}}enu maman d'un petit garçon, j'aimerais avoir un rendez-vous\textbf{s} pour sous\textbf{o}ire à une m\textbf{n}tuelle s\textbf{o}nt\textbf{\textit{é. }}Je vous pris d'agrée, m\textbf{a}nsieurs, mes salutation\textbf{\textit{s}} distinguées
\caption{\modelacc{} predictions visualization for a RIMES test example. Left: 2D characters predictions are projected on the input image. Red color indicates a character prediction while transparency means blank label prediction. Right: row by row text prediction. Bottom: full text prediction where errors are shown in bold and missing letters are shown in italic.}
\label{fig:viz}
\end{figure*}
\subsubsection{Impact of pretraining}
\label{section:pretrain}
In this experiment, we highlight the impact of pretraining on the \modelacc{} results. To this end, we compare two pretraining methods at line level: one focusing only on the optical recognition task, and a second one focusing on both recognition and prediction alignment. Let us define the following training approaches:
\begin{itemize}
\item \modelacc{}-Line-R\&A: the \modelacc{} is trained with line-level images. Here, the network has to learn both the recognition and the alignment tasks.
\item Pool-Line-R: a new model is trained with line-level images to only focus on the recognition task. This network consists of the previously defined \modelacc{} encoder followed by an Adaptive MaxPooling to collapse the vertical axis; then, a convolutional layer predicts the character and blank label probabilities. This is the standard way to process text line images, as in \cite{Coquenet2020}. Since the prediction is already one-dimensional, the network does not need to handle vertical alignment.
\item \modelacc{}-Scratch: the \modelacc{} is trained directly on paragraph images without pretraining.
\item \modelacc{}-PT-R: \modelacc{} weights are initialized with Pool-Line-R ones. It is then trained with paragraph images.
\item \modelacc{}-PT-R\&A: \modelacc{} weights are initialized with \modelacc{}-Line-R\&A ones. It is then trained with paragraph images.
\end{itemize}
One can note that the vertical receptive field is bigger than line image heights. Thus, when the model switches from line images to paragraph images, the decision benefits from more context, which replaces part of the previously used padding.
Results are given in Table \ref{table:pretrain}. Focusing on the line-level section, one can notice that, as expected, we reach better results on text lines when the task is reduced to optical recognition than when it combines recognition and alignment, regardless of the dataset. This leads to a CER improvement of 0.94 point for IAM, 0.79 point for RIMES and 0.38 point for READ 2016.
Now, comparing the paragraph level approaches, one can notice that, except for the RIMES CER, pretraining leads to better results, and sometimes by far (-2.93 points of CER for READ 2016); moreover, pretraining on an easier task, \textit{i.e.} only on the optical recognition, is even more efficient.
\begin{table}[!h]
\caption{Impact of pretraining the SPAN on line images for the IAM, RIMES and READ 2016 datasets. Results are given on the test sets.}
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{ l c c c c c c}
\hline
\multirow{2}{*}{Approach}& \multicolumn{2}{c}{IAM} & \multicolumn{2}{c}{RIMES} & \multicolumn{2}{c}{READ 2016} \\
& CER (\%) & WER (\%) & CER (\%) & WER (\%) & CER (\%) & WER (\%) \\
\hline
\hline
\textbf{Line-level training} \\
Pool-Line-R & \textbf{4.82} & \textbf{18.17} & \textbf{3.02} & \textbf{10.73} & \textbf{4.56} & \textbf{21.07}\\
\modelacc{}-Line-R\&A & 5.76 & 21.33 & 3.81 & 13.80 & 4.94 & 22.19\\
\\
\textbf{Paragraph-level training} \\
\modelacc{}-Scratch & 6.46 & 23.75 & \textbf{4.15} & 16.31 & 9.13 & 36.63\\
\modelacc{}-PT-R & \textbf{5.45} & \textbf{19.83} & 4.74 & \textbf{15.55} & \textbf{6.20} & \textbf{25.69}\\
\modelacc{}-PT-R\&A & 5.78 & 21.16 & 4.17 & 15.71 & 6.62 & 27.38\\
\hline
\end{tabular}
}
\label{table:pretrain}
\end{table}
The RIMES CER value can be explained by the mismatch between the CTC loss and the Levenshtein distance: a lower CTC loss generally implies a lower CER, but this is not always true. Indeed, Figure \ref{fig:losses} shows the CTC training loss curves for the three datasets. This time, we can clearly see that, even for RIMES, pretraining on the recognition task only is more beneficial. This figure also demonstrates the convergence speed-up brought by these pretraining approaches.
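This mismatch is concrete: CER is computed from the Levenshtein (edit) distance between the reference and predicted texts, not from the CTC objective itself. A minimal sketch of CER (function names are ours, not from the paper's code):

```python
def levenshtein(ref, hyp):
    # Classic dynamic-programming edit distance between two sequences.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def cer(references, hypotheses):
    # Character Error Rate: total edit distance over total reference length.
    edits = sum(levenshtein(r, h) for r, h in zip(references, hypotheses))
    total = sum(len(r) for r in references)
    return edits / total
```

Since the CTC loss marginalizes over alignments while CER only counts character edits, two models can be ranked differently by the two measures.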
\begin{figure*}[h!]
\centering
\begin{minipage}[c]{0.45\textwidth}
\begin{subfigure}[b]{\textwidth}
\centering
\includegraphics[width=\textwidth]{img/loss_pretraining_iam.png}
\caption{IAM}
\end{subfigure}
\end{minipage}
\hfill
\begin{minipage}[c]{0.45\textwidth}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\linewidth]{img/loss_pretraining-rimes.png}
\caption{RIMES}
\end{subfigure}
\end{minipage}
\begin{minipage}[c]{0.45\textwidth}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\linewidth]{img/loss_pretraining_read.png}
\caption{READ 2016}
\end{subfigure}
\end{minipage}
\caption{Training curves comparison between the different pretraining approaches, on the RIMES, IAM and READ 2016 datasets.}
\label{fig:losses}
\end{figure*}
\subsubsection{Discussion}
As we have seen in Figure \ref{fig:viz}, the 2D prediction keeps the spatial information. As such, we can assume that the \modelacc{} could be used as a primary stage of a deeper end-to-end network handling more complex tasks such as word spotting in handwritten digitized documents. Moreover, since we are using the standard CTC loss, one can easily add a standard character or word language model to further improve the results. However, it has to be noticed that this model is limited to single-column multi-line text images due to the row concatenation operation. Moreover, the \modelacc{} can easily handle downward sloping lines but cannot handle upward ones due to the fixed reshaping order.
\section{Conclusion}
\label{conclusion}
In this paper, we proposed the \modelname{}, an end-to-end recurrence-free segmentation-free FCN model performing OCR at paragraph level. It reaches competitive results on the RIMES, IAM and READ 2016 datasets without any model architecture or training adaptation from one to another. It follows a new training approach bringing several other advantages. First, it only needs transcription labels at paragraph level (without line breaks), alleviating the need for handmade annotations, which is a critical point for a deep learning system. Second, training this model is as simple as training a line-level OCR model with the CTC loss. Finally, it can handle variable input image sizes, making it robust enough to adapt to multiple datasets; it is also able to deal with downward inclined text lines.
\section*{Acknowledgments}
The present work was performed using computing resources of CRIANN (Normandy, France) and HPC resources from GENCI-IDRIS (Grant 2020-AD011012155). This work was financially supported by the French Defense Innovation Agency and by the Normandy region.
\vspace{-0.4cm}
\begin{figure}[!h]
\centering
\includegraphics[height=1.8cm]{img/AID.jpg}
~
\includegraphics[height=1.8cm]{img/logo-region-normandie.jpg}
\label{data}
\end{figure}
\vspace{-0.5cm}
\bibliographystyle{splncs04}
\section{Introduction}
Deep neural networks (NNs) are widely used in computer vision (CV).
Their main strength lies in their ability to find complex underlying features in images.
A common method for training an NN is to minimize the cross-entropy between the empirical distribution of the training set and the probability distribution defined by the model, which is equivalent to minimizing the negative log-likelihood (i.e., maximizing the likelihood) of the training labels.
This relies on the independent and identically distributed (i.i.d.) assumption underlying basic machine learning, which states that the examples in each dataset are independent of each other and that the train and test sets are drawn from the same probability distribution~\cite{DBLP:books/daglib/0040158}.
However, if the train and test domains follow different image distributions, the i.i.d. assumption is violated, and deep learning leads to unpredictable and poor results~\cite{tan2018survey}.
This has been demonstrated by using adversarially constructed examples~\cite{Goodfellow2015ExplainingAH} or variations in the test images such as noise, blur, and JPEG compression~\cite{Hendrycks2019BenchmarkingNN}.
Authors in~\cite{DAmour2020UnderspecificationPC} even claim that any standard NN suffers from such an unpredictable distribution shift when it is deployed in the real world.
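The cross-entropy/likelihood equivalence mentioned above can be made concrete in a few lines of plain Python (a toy illustration, not the paper's training code): with a one-hot target, the cross-entropy of a softmax classifier reduces to the negative log-probability of the true class.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(logits, target):
    # With a one-hot target, cross-entropy is exactly the negative
    # log-likelihood of the true class under the model.
    probs = softmax(logits)
    return -math.log(probs[target])
```

Minimizing this loss over a training set therefore maximizes the model's likelihood of the observed labels.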
Transfer learning approaches that deal with such distribution shifts can be grouped into three main categories as depicted in Figure~\ref{fig:currentState}: a) \emph{Multiple Domains Multiple Models (MDMM)}; b) \emph{Single Domain Single Model (SDSM)}; and c) \emph{Multiple Domains Single Model (MDSM)}.
MDMM approaches treat all datasets as independent and train a respective model for each of them.
Therefore, these approaches are very costly to train, and learned knowledge cannot be transferred between datasets.
SDSM approaches train a single model on a large dataset merged from many smaller ones.
However, it is difficult to create a balanced dataset required by the NN to learn a general representation suitable for all domains.
MDSM approaches train a single model on various datasets at different stages, and can therefore transfer learned knowledge to new domains.
However, if trained with the standard cross-entropy loss, these models suffer from unpredictable and error-prone knowledge transfer and \emph{catastrophic forgetting}, where knowledge learned from previous datasets tends to be forgotten after training on the current dataset.
\begin{figure*}[!tb]
\centering
\begin{subfigure}[b]{0.27\textwidth}
\centering
\includegraphics[width=\textwidth]{img/current1.jpg}
\caption{MDMM}
\label{fig:current1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.34\textwidth}
\centering
\includegraphics[width=\textwidth]{img/current2.jpg}
\caption{SDSM}
\label{fig:current2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{img/current3.jpg}
\caption{MDSM}
\label{fig:current3}
\end{subfigure}
\caption{Categorization of domain adaptation approaches: a) Multiple Domains Multiple Models (MDMM); b) Single Domain Single Model (SDSM); and c) Multiple Domains Single Model (MDSM).}
\label{fig:currentState}
\end{figure*}
To reduce the high dependency on the training domain, pre-training methods that generate rich embedding spaces seem to be a promising research direction for CV and natural language processing (NLP).
Exploration of these embedding spaces has shown that NNs encode visually similar classes close to each other when sufficient training data is available.
Recently, the idea of training an NN with an image-independent embedding space in form of language embeddings has also been proven to be beneficial for CV tasks~\cite{DBLP:conf/eccv/JoulinMJV16,radford2learning,DBLP:journals/corr/abs-2010-00747}.
In this paper, we introduce the \emph{knowledge graph neural network} (KG-NN), a novel approach to learn a visual model using a knowledge graph (KG) and its knowledge graph embedding $\vec{h}_{KG}$ as a trainer.
More concretely, a domain-invariant embedding space using a KG and an appropriate KG embedding algorithm is constructed.
We then train KG-NN with a contrastive loss function to adapt its visual embedding to $\vec{h}_{KG}$ given by the KG.
KG-NN, therefore, learns the relevant features of the images by connecting semantically similar classes and distinguishing them from different ones.
The benefit is two-fold.
First, KG-NN will be more robust to distribution shifts since the embedding space is independent of the dataset distribution, and second, it is enriched with additional semantic data in a controlled manner.
To investigate the generalization and adaption of KG-NN in real-world scenarios, the task of visual transfer learning provides a suitable testing environment.
Transfer learning tasks consist of a source and a target dataset, differing in terms of their underlying distribution, e.g., sensors, environments, or countries.
A domain generalization task has only access to labeled source data, whereas the domain adaptation task contains a small amount of additional labeled target data.
For domain generalization - \emph{Scenario 1}, we performed two experiments: 1) object classification, where the NN is trained on the mini-ImageNet~\cite{DBLP:conf/nips/VinyalsBLKW16} dataset and evaluated on derivatives; 2) road sign recognition, where the NN is trained on the German Traffic Signs Dataset (GTSRB)~\cite{Stallkamp2012ManVC} and evaluated on the Chinese Traffic Signs Dataset (CTSD)~\cite{Yang2016TowardsRT}.
For domain adaptation - \emph{Scenario 2}, we train the NN on GTSRB and additional labeled target data from CTSD.
In both scenarios, the respective KGs are developed in Resource Description Framework (RDF) representation.
RDF provides the necessary means for an easy and flexible extension of the defined schemas and allows for enriching and interlinking entities in the KGs with complementary information from other sources.
The generality of our approach becomes apparent in the fact that it can be assigned to any of the three categories illustrated in Figure~\ref{fig:currentState} since we provide an alternative and enriched training method for NNs.
While in this paper, we only compare with approaches from the third category, our results indicate that KG-NN is significantly more accurate compared to a conventional approach based on the cross-entropy loss in any domain-changing scenario.
Our main contributions of this paper are summarized as follows:
\begin{itemize}
\item We introduce KG-NN, a neuro-symbolic approach that uses prior domain-invariant knowledge captured by a KG to train an NN.
\item We adapt a contrastive loss function to combine knowledge graph embeddings with the visual embeddings learned by the NN.
\item We evaluate the KG-NN approach in domain generalization and domain adaptation tasks on two different scenarios with respective image datasets.
\end{itemize}
The paper starts with the definition of preliminaries in Section~\ref{sec:preliminaries}.
Section~\ref{sec:knowledge graph as a trainer} presents a detailed description of KG-NN where a KG is used as a trainer.
Section~\ref{sec:evaluation} provides an evaluation on two datasets in a domain generalization and domain adaptation task.
Related work is outlined in Section~\ref{sec:related-work}.
We conclude the paper and provide an outlook on future directions in Section~\ref{sec:conclusion}.
\section{Preliminaries}
\label{sec:preliminaries}
\paragraph{Knowledge Graph.}
We adopt the definition given by Hogan et al.~\cite{DBLP:journals/corr/abs-2003-02320} where a knowledge graph is \emph{a graph of data aiming to accumulate and convey real-world knowledge, where entities are represented by nodes and relationships between entities are represented by edges}.
In its most basic form, a KG is a set of triples $G \subseteq H \times R \times T$, where $H$ is a set of (head) entities, $T \subseteq H \cup L$ is a set of entities or literal values, and $R$ is a set of relations connecting $H$ and $T$.
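This triple view of a KG has a direct in-code representation as a set of (head, relation, tail) tuples; the entities and relation names below are illustrative placeholders, not drawn from the knowledge graphs used later in the paper.

```python
# A KG as a set of (head, relation, tail) triples; entity and
# relation names are illustrative placeholders.
kg = {
    ("StopSign", "hasShape", "Octagon"),
    ("StopSign", "hasColor", "Red"),
    ("YieldSign", "hasShape", "Triangle"),
    ("YieldSign", "hasColor", "Red"),
}

def tails(kg, head, relation):
    # All tail entities reachable from `head` via `relation`.
    return {t for h, r, t in kg if h == head and r == relation}
```

Queries over such a set already expose the shared properties (e.g. both signs above have color "Red") that embedding methods later turn into proximity in vector space.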
\paragraph{Knowledge Graph Embedding.} A knowledge graph embedding $\vec{h}_{KG}$ is a representation of entities and edges of a KG in a high-dimensional vector space while preserving its latent structure~\cite{DBLP:journals/corr/abs-2003-02320}.
Related to language embeddings, we count $\vec{h}_{KG}$ as a form of a semantic embedding $\vec{h}_{s}$.
The $\vec{h}_{KG}$ is learned by a knowledge graph embedding method $KGE(\cdot)$ using entities and relations encoded in the $KG$.
Individual vectors, corresponding to the entities from the $KG$ represented in $\vec{h}_{KG}$ are denoted as $\vec{h}_{KG,a}$ with dimensionality $d_P$.
\paragraph{Visual Embedding.}
An \emph{encoder network} $E(\cdot)$ is part of the NN and maps images $\vec{x}$ to a visual embedding $\vec{h}_{v} = E(\vec{x}) \in \PazoBB{R}^{d_E}$, where the activations of the final pooling layer, and thus the representation layer, have dimensionality $d_E$, which depends on the encoder network that is used. If the encoder network is learned using a semantic embedding, we denote it as $\vec{h}_{v(s)}$.
If the semantic embedding is given by a KG we further denote the visual-semantic embedding as $\vec{h}_{v(KG)}$.
\paragraph{Visual Projection.}
A \emph{projection network} $P(\cdot)$ maps the normalized embedding vectors $\vec{h}_{v}$ into a visual projection $\vec{z} = P(\vec{h}_{v}) \in \PazoBB{R}^{d_P}$ in which it is compared with the class-label representation of the $\vec{h}_{KG}$.
For the projection network $P(\cdot)$, we use a multi-layer perceptron \cite{DBLP:books/sp/HastieFT01} with a single hidden layer, an input dimensionality $d_E$, and output vector of size $d_P$ to match the dimensionality of $\vec{h}_{KG}$.
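A minimal numpy sketch of such a projection network (one hidden layer mapping $d_E$ to $d_P$; the weights are random placeholders standing in for trained parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
d_E, d_hidden, d_P = 2048, 512, 300  # ResNet-50 pooling dim -> KG embedding dim

# Randomly initialized single-hidden-layer MLP (placeholder weights,
# not trained parameters).
W1 = rng.normal(scale=0.01, size=(d_E, d_hidden))
W2 = rng.normal(scale=0.01, size=(d_hidden, d_P))

def project(h_v):
    # Map a visual embedding h_v into the d_P-dimensional space of the
    # knowledge graph embedding.
    hidden = np.maximum(h_v @ W1, 0.0)   # ReLU hidden layer
    z = hidden @ W2
    return z / np.linalg.norm(z)         # unit-normalize for cosine similarity
```

Unit-normalizing the output makes the dot product with a class-label vector of $\vec{h}_{KG}$ a cosine similarity, matching the distance measure used in the loss below.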
\paragraph{Transfer Learning.}
A formal definition of transfer learning is presented in \cite{DBLP:conf/emnlp/RuderP17} as follows: \emph{Given a source domain $D_S$ with input data $X_S$, a corresponding source task $T_S$ with labels $Y_S$, as well as a target domain $D_T$ with input data $X_T$ and a target task $T_T$ with labels $Y_T$, the objective of transfer learning is to learn the target conditional probability distribution $P_T (Y_T | X_T )$ with the information gained from $D_S$ and $T_S$ where $D_S \neq D_T$ or $T_S \neq T_T$}.
Transfer learning with no target data at training is referred to as domain generalization, whereas supervised domain adaptation has access to a small amount of labeled target data.
\section{Knowledge Graph as a Trainer}
\label{sec:knowledge graph as a trainer}
In this section, we define the basic terminology of the KG-NN approach as well as the underlying pipeline for the realization of a transfer learning task.
The main objective of KG-NN is incorporating prior knowledge into the deep learning pipeline using a knowledge graph as a trainer.
As depicted in Figure~\ref{fig:Knowledge Graph as a Trainer overview}, the class labels of a given dataset are infused into the NN in the form of high-dimensional vectors of the knowledge graph embedding space $\vec{h}_{KG}$, instead of the standard one-hot encoded vectors.
This $\vec{h}_{KG}$ shown in Figure~\ref{fig:Knowledge Graph as a Trainer semantic} is generated from a KG using a knowledge graph embedding method $KGE(\cdot)$.
It incorporates domain-invariant relations to other classes inside or outside the dataset and therefore enriches the NN with auxiliary knowledge in an indirect manner.
To guide the adaption of the NN to the $\vec{h}_{KG}$ space, we use the \emph{contrastive knowledge graph embedding loss}.
It compares the respective outputs of the visual feature extractor with the class-label vectors of the $\vec{h}_{KG}$, forming a visual-semantic embedding space $\vec{h}_{v(KG)}$.
As a result, the learned NN projects respective images close to their representations given by the $\vec{h}_{KG}$.
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{img/contrakg_a.pdf}
\subcaption{Training abstraction of $\vec{h}_{v(KG)}$.}
\label{fig:Knowledge Graph as a Trainer overview}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{img/contrakg_b.pdf}
\subcaption{Knowledge graph embedding $\vec{h}_{KG}$.}
\label{fig:Knowledge Graph as a Trainer semantic}
\end{subfigure}
\caption{KG-NN Approach: a) the main building blocks for learning a visual-semantic embedding space $\vec{h}_{v(KG)}$ using a knowledge graph as a trainer; b) the 2D projection of the semantic-embedding $\vec{h}_{KG}$ represented in a knowledge graph.}
\label{fig:Knowledge Graph as a Trainer}
\end{figure*}
\paragraph{Contrastive Knowledge Graph Embedding Loss.}
\label{subsec:ContrastiveKnowledgeGraphEmbeddingLoss}
We derive the contrastive knowledge graph embedding loss from the supervised contrastive loss~\cite{DBLP:conf/nips/KhoslaTWSTIMLK20,DBLP:conf/icml/ChenK0H20}, which extends the multi-class N-pair loss~\cite{DBLP:conf/nips/Sohn16} or InfoNCE loss~\cite{DBLP:journals/corr/abs-1807-03748} with class label information.
Instead of contrasting images in the batch against an anchor image, we adapt the loss to contrast images of the batch against the class label representation of the $\vec{h}_{KG}$.
A batch consists of 2N training samples, two augmented versions for each of the N training images.
Within a batch, an anchor $i \in \{1, \dots, 2N\}$ is selected that corresponds to a specific class label $\vec{y}_i$ and is therefore assigned a specific embedding vector $\vec{h}_{KG,i}$ of the $\vec{h}_{KG}$.
Positive samples are all samples that correspond to the same class label as the anchor $i$.
The numerator in the loss function computes a similarity score between the anchor vector of the $\vec{h}_{KG}$, $\vec{h}_{KG,i}$, and the visual projection vector of a positive sample in the batch, $\vec{z}_{j}$.
The denominator computes the similarity score between the anchor vector of the $\vec{h}_{KG}$ and the visual projection vector of all other samples $\vec{z}_{k}$ in the batch.
We choose the cosine similarity as the distance measure in the high-dimensional space.
For each anchor $i$, there can be many positive samples, which contribute to the final loss, where $N_{\vec{y}_i}$ is their total number.
The KG-based contrastive loss function is then given by:
\begin{equation}
\mathcal{L}_{KG} = \sum^{2N}_{i=1} \mathcal{L}_{KG,i}
\label{eq:1}
\end{equation}
with
\begin{equation}
\mathcal{L}_{KG,i} = \frac{-1}{2N_{\vec{y}_i} - 1} \sum_{j = 1}^{2N} \mathds{1}_{i \neq j} \cdot \mathds{1}_{\vec{y}_i = \vec{y}_j} \cdot \log \frac{\exp{(\vec{h}_{KG,i} \cdot \vec{z}_j / \tau)}}{\sum^{2N}_{k=1} \mathds{1}_{i \neq k} \exp{(\vec{h}_{KG,i} \cdot \vec{z}_k / \tau)}}
\label{eq:2}
\end{equation}
\noindent where $\vec{z}_l = P(E(\vec{x}_l))$, $\mathds{1}_{i \neq k} \in \{0, 1\}$ is an indicator function that returns $1$ iff $i \neq k$ evaluates as true, and $\tau > 0$ is a predefined scalar temperature parameter.
During optimization of the loss function $ \mathcal{L}_{KG} $, the NN learns its weights by mapping its projection $\vec{z}_l$ to the $\vec{h}_{KG}$ space.
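The loss above can be transcribed almost directly into code. The numpy sketch below is our illustration with toy inputs; it assumes every class in the batch appears at least twice, which the two-augmented-views batch construction guarantees.

```python
import numpy as np

def kg_contrastive_loss(h_kg, z, labels, tau=0.5):
    # h_kg:   (C, d) class-label vectors from the KG embedding.
    # z:      (2N, d) visual projections of the augmented batch.
    # labels: length-2N integer class indices.
    two_n = len(labels)
    loss = 0.0
    for i in range(two_n):
        anchor = h_kg[labels[i]]                  # h_KG,i for anchor i
        sims = np.exp(z @ anchor / tau)           # exp similarities to all z_k
        denom = sims.sum() - sims[i]              # sum over all k != i
        positives = [j for j in range(two_n)
                     if j != i and labels[j] == labels[i]]
        # len(positives) == 2 * N_{y_i} - 1 (same class, excluding anchor)
        loss += -sum(np.log(sims[j] / denom) for j in positives) / len(positives)
    return loss
```

With unit-normalized vectors the dot products are cosine similarities, so the loss decreases as projections of a class align with that class's KG vector.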
\begin{figure}[tb]
\centering
\small
\includegraphics[width=\textwidth]{img/ArchitectureGeneric.jpg}
\caption{The designed pipeline consisting of five phases where a knowledge graph acts as a trainer supporting adaption and generalization:
\emph{Knowledge Graph Construction};
\emph{Knowledge Graph Embedding};
\emph{Source Domain Pre-Training};
\emph{Target Domain Pre-Training};
and \emph{Linear Layer Training}.}
\label{fig:architecture}
\end{figure}
\subsection{Adaptation to a Labeled Target Domain}
\label{subsec:Adaptation to a Labeled Target Domain}
Training robust NNs is crucial in real-world scenarios as deployment domains typically differ from the training ones.
The knowledge graph as a trainer can influence how an NN should behave in different environments by providing a stable embedding space.
However, if the domain gap is quite large, it is beneficial to fine-tune the NN on labeled data of the target domain.
We design a training pipeline to support a transfer learning scenario where a small amount of labeled target data exists.
An overview of this pipeline comprised of five consecutive phases is shown in Figure~\ref{fig:architecture}.
\paragraph{Knowledge Graph Construction.}
Knowledge graphs can represent prior knowledge encoded with rich semantics in a graph structure.
Based on the selected scenario, underlying knowledge of one or multiple domains is conceptualized and formalized into a KG.
Since KGs are manually curated by human experts, it is possible to define an underlying schema comprising multiple classes from different domains.
This joint representation of domains enables inferring relations between classes, which can then be transferred into high-dimensional vector space.
\paragraph{Knowledge Graph Embedding.}
The KG is transformed into a knowledge graph embedding space $\vec{h}_{KG}$ via a knowledge graph embedding method $KGE(\cdot)$.
There are various approaches to generate these dense vectors that encode all entities and relations within the KG~\cite{Nickel2016HolographicEO,DBLP:conf/aaai/DettmersMS018,DBLP:conf/naacl/NguyenNNP18}.
Note that KG-NN can operate on any $\vec{h}_{KG}$ generated by any $KGE(\cdot)$, as an $\vec{h}_{KG}$ only reflects similarities between entities by distances and positions in the vector space.
Thus, if entities share many properties in the KG, they are closely located in space.
\paragraph{Source Domain Pre-Training.}
We train KG-NN from scratch using the KG as a trainer and do not initialize the NN with pre-trained weights from ImageNet~\cite{DBLP:journals/ijcv/RussakovskyDSKS15}.
As the $\vec{h}_{v(KG)}$ space of KG-NN depends on the KG instead of the source dataset, KG-NN can be applied to other domains following the same semantic relations given by the KG.
This property is shown on the domain generalization task.
\paragraph{Target Domain Pre-Training.}
Small amounts of labeled target data can usually be gathered with manageable effort.
However, just fine-tuning an NN with additional target domain data using the cross-entropy loss leads to catastrophic forgetting and thus poor accuracy.
We assume that this happens because the NN tries to find a new $\vec{h}_{v}$ that fits the target domain, but differs from the embedding obtained from the source domain.
In contrast, NNs optimized on the source domain using a KG as a trainer, can simply be enriched with additional target data using the same training method.
Therefore, KG-NN pre-trained on the source domain, is retrained on the target dataset using the same $\vec{h}_{KG}$.
\paragraph{Linear Layer Training.}
For adaptation to a downstream task such as classification, we add a randomly initialized, fully-connected linear layer to the trained encoder network.
The size of the output vector depends on the number of classes.
This linear layer is trained with the default cross-entropy loss, while the parameters of the encoder network remain unchanged.
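The linear-layer phase amounts to a standard linear probe: with the encoder frozen, only a softmax classifier is fit on the encoder outputs with cross-entropy. A hedged numpy sketch (plain gradient descent stands in for the actual optimizer; only $W$ and $b$ receive updates):

```python
import numpy as np

def train_linear_probe(features, labels, n_classes, lr=0.1, epochs=200):
    # `features`: (n, d) frozen encoder outputs; only W and b are trained,
    # mirroring the linear-layer phase (encoder parameters stay fixed).
    n, d = features.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = features @ W + b
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = (probs - onehot) / n                   # cross-entropy gradient
        W -= lr * features.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b
```

Because the encoder is fixed, this is a convex problem, so probe accuracy directly reflects the quality of the learned embedding.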
\section{Experiment}
\label{sec:evaluation}
We conduct experiments on two different scenarios with multiple datasets to demonstrate the benefit of training an NN using a knowledge graph as a trainer, which leads to more accurate and more robust models in terms of the distribution shift.
We compare KG-NN with two baselines: 1) CE, which trains the NN using the supervised cross-entropy loss; and 2) SupCon, which trains the NN with the (self-)supervised contrastive loss~\cite{DBLP:conf/nips/KhoslaTWSTIMLK20}.
We chose CE, as it is typically used for training NNs, as well as SupCon, as this approach utilizes a similar contrastive loss function, however without the incorporation of prior knowledge and supervision.
CE and SupCon learn an embedding layer based on the data distribution of the source dataset, whereas KG-NN relies on the embedding given by the knowledge graph.
To qualitatively evaluate the influence of the knowledge graph embedding, we further compare against GloVe, a variation of KG-NN that uses a GloVe~\cite{DBLP:conf/emnlp/PenningtonSM14} language embedding instead of $\vec{h}_{KG}$.
All approaches use the same ResNet-50~\cite{DBLP:conf/cvpr/HeZRS16} backend as encoder network and only differ in how this encoder network is trained.
Two different scenarios are defined to analyze our approach on concrete transfer learning tasks.
\emph{Scenario 1} - we investigate the sensitivity to distribution shifts using a domain generalization task.
Therefore, we train:
a) KG-NN, CE, SupCon, and GloVe from scratch on mini-ImageNet and evaluate on its derivatives, ImageNetV2~\cite{DBLP:conf/icml/RechtRSS19}, ImageNet-R~\cite{hendrycks2020many}, ImageNet-Sketch~\cite{wang2019learning} and ImageNet-A~\cite{hendrycks2019nae};
b) KG-NN, CE, and SupCon from scratch on GTSRB, and evaluate on CTSD.
\emph{Scenario 2} - we focus on supervised domain adaptation, a more practical scenario where KG-NN, CE, and SupCon are trained on GTSRB and fully retrained on CTSD with a small amount of target data.
Note that we exclude GloVe when using GTSRB/CTSD since the language embedding does not contain a specific representation for each road sign class and therefore cannot be applied straightforwardly.
\subsection{Scenario 1 - Domain Generalization}
Domain generalization describes the task of learning generalized models on a source domain so that they can be used on unseen target domains.
Therefore, KG-NN is used without the target domain pre-training phase.
\subsubsection{Experiment 1 - Wordnet-Subset with mini-ImageNet}
\paragraph{Dataset Settings.}
As source domain, we use mini-ImageNet, a derivative of the ImageNet dataset, consisting of 60K color images of size $84 \times 84$ with 100 classes, each having 600 examples.
Compared to ImageNet, this dataset fits in memory on modern machines, making it very convenient for rapid prototyping and experimentation.
For the evaluation, we use the target domains:
ImageNetV2, which contains 10 new test images per class and closely follows the original labeling protocol;
ImageNet-R, which has art, cartoons, deviantart, etc. renditions of 200 ImageNet classes resulting in 30,000 images;
ImageNet-Sketch comprising 50,000 images, 50 images for each of the 1000 ImageNet classes; and \mbox{ImageNet-A}, which contains real-world, unmodified, and naturally occurring examples that cause machine learning models' performance to degrade significantly.
\paragraph{Knowledge Graph and KG Embedding Space.}
WordNet is a lexical database containing nouns, verbs, adjectives, and adverbs of the English language structured into respective synsets~\cite{DBLP:journals/cacm/Miller95,wordnet}.
Each synset is an underlying concept consisting of a collection of synonyms as well as its relations to other synsets.
The \emph{Mini WordNet Knowledge Graph} (MWKG) is created by extracting the respective synsets of each label from the mini-ImageNet dataset from~\cite{wordnetRDF} into RDF representation.
These synsets are grouped based on the lexical domain they pertain to, e.g. \emph{animal}, \emph{artifact}, or \emph{food}.
They are represented as classes and further described with relations such as: \emph{hypernym}, \emph{meronym}, \emph{synset-member}.
Additionally, a shallow taxonomy is established by extracting the parents of each synset including their relationships and attributes.
In total, MWKG contains 198 classes with 8 annotation properties.
We transfer MWKG into a 300-dimensional $\vec{h}_{KG}$ using the MRGCN~\cite{DBLP:journals/corr/abs-2003-12383}, which exploits the literal information in addition to classes and their relationships.
To realize that, we use the MRGCN's node classification feature to build the $\vec{h}_{KG}$ that explicitly clusters the six lexical domains: animal, artifact, communication, food, object, and plant.
\paragraph{Training Details.}
All models use a ResNet-50 backend and are pre-trained with a batch size of 1024 on the mini-ImageNet dataset.
We resize the images to $32 \times 32$ for fast prototyping.
KG-NN and SupCon are trained for 1000 epochs using their respective contrastive loss function, stochastic gradient descent (SGD) with a learning rate of 0.5, cosine annealing, and a temperature of $\tau = 0.5$.
CE is trained for 500 epochs with the cross-entropy loss and SGD with a learning rate of 0.8.
For the \emph{linear-layer phase}, we train a \emph{one-layer} MLP on top of the frozen encoder networks of KG-NN, SupCon, and CE, with the Adam optimizer and a learning rate of 0.0004.
\begin{figure}[tb]
\centering
\includegraphics[width=\textwidth]{img/miniImageNet.pdf}
\caption{Accuracy of the domain generalization task using mini-ImageNet as source and multiple derivatives as target domains. We compare KG-NN with the standard CE, SupCon, a version of our loss without auxiliary knowledge of a KG, and GloVe, a version of KG-NN using a language embedding instead of a $\vec{h}_{KG}$.}
\label{fig:mini-ImageNet experiment}
\end{figure}
\paragraph{Evaluation.}
We evaluate the models on ImageNetV2, ImageNet-R, ImageNet-Sketch, and ImageNet-A.
KG-NN outperforms CE, SupCon, and GloVe on the trained source as well as on unknown target domains as shown in Figure~\ref{fig:mini-ImageNet experiment}.
This means that KG-NN makes use of the additional semantic information.
It can be seen that CE fails particularly when the domain gap increases.
We assume that this happens due to its high specialization on the source domain.
SupCon cannot reach the performance of CE on the source dataset, however, it outperforms CE on more general target tasks.
We see that pre-training on a more generic self-supervised task helps the NN to extract more general features.
GloVe, the version of KG-NN that relies on a language embedding instead of a KG, is also outperformed by KG-NN.
We see that the performance of KG-NN depends on the quality of the embedding space, which we can control manually using different KGs or $KGE(\cdot)$s.
\subsubsection{Experiment 2 - RoadSign KG with GTSRB and CTSD}
\paragraph{Dataset Settings.}
The German Traffic Sign Dataset (GTSRB), which contains $51,970$ images of $43$ road signs, is used as the source domain, and the Chinese Traffic Sign Dataset (CTSD), which contains $6,164$ images of $58$ road signs, as the target domain.
We resize all images to a uniform size of $32\times32$ pixels.
Note that we do not cut out the road signs, but take the whole image for classification.
Both datasets contain a domain shift as they were recorded with different cameras in different countries and hence have different appearances.
\paragraph{Knowledge Graph and KG Embedding Space.}
First, we construct a small knowledge graph for traffic sign recognition (RSKG) that contains all classes of both datasets incorporated in an underlying domain ontology.
To encode the formal semantics of road signs from different countries and standards, we first develop the \emph{RoadSign} ontology.
It contains classes (e.g. RoadSign, Shape, Icon, Color), relationships (e.g. hasShape, hasIcon, hasColor) and attributes (e.g. label, textWithinSign).
The actual road signs that exist within given datasets are represented as concrete \emph{individuals}.
Note that this information is extracted from externally available road-sign standards, without accessing the datasets.
Currently, RSKG contains 18 classes, 11 object properties, 2 datatype properties, and 101 individuals.
It is important to mention that the knowledge graph can be further populated with concrete road-sign instances from other countries.
This would enrich RSKG and could help to find inter-relations between the domains.
We transfer RSKG into a 300-dimensional $\vec{h}_{KG}$ by using MRGCN~\cite{DBLP:journals/corr/abs-2003-12383} as we also want to exploit its literal information.
Therefore, we use MRGCN in the node classification task to build a $\vec{h}_{KG}$ that explicitly clusters the five classes: danger, informative, mandatory, prohibitory, and warning.
\paragraph{Training Details.}
We use the same training setting and hyperparameters as in the experiment with the mini-ImageNet dataset.
\paragraph{Evaluation.}
Figure~\ref{fig:gtsrb experiment} shows that KG-NN outperforms CE by 0.8\% on the source and by 7.1\% on the target dataset.
It can be seen that KG-NN exceeds the accuracy of SupCon by 55.0\% on GTSRB and by 35.7\% on CTSD.
SupCon with its self-supervised loss needs large datasets to form a good embedding space; however, both datasets are quite small and from the special domain of road-sign recognition.
We do not compare against a GloVe embedding, as there are no instances for specific road signs and no clear procedure on how to generate these instances from a text corpus.
Overall, KG-NN performs better and is more robust to unforeseen distribution shifts using the same amount of training data.
\begin{figure}[tb]
\centering
\includegraphics[width=0.95\textwidth]{img/GTSRB_DG.pdf}
\caption{Accuracy of the domain generalization task using GTSRB as the source and CTSD as the target domain. We compare KG-NN with the standard CE and SupCon, a version of our loss without auxiliary knowledge of a KG.}
\label{fig:gtsrb experiment}
\end{figure}
\subsection{Scenario 2 - Supervised Domain Adaptation}
Supervised domain adaptation describes the task of transfer learning that adapts models learned on a source domain to a specific labeled target domain.
We claim that an NN learned using an image-data-independent $\vec{h}_{KG}$ can adapt to new domains and new classes better as both domains use the same embedding space.
For this experiment, we use the same settings described in Experiment 2.
First, KG-NN, CE, and SupCon, are pre-trained on the source dataset.
Second, we take the encoder networks of each NN and resume training on the target dataset.
The NNs are retrained with different amounts of labeled target data.
The one-shot (58) experiment uses 58 images, one image for each class of the CTSD target dataset.
The five-shot (290) experiment uses 290 images, five images for each class of the CTSD.
The 10\% (416) experiment uses 416 images, 10\% of images of the CTSD.
The 50\% (2083) experiment uses 2,083 images, 50\% of images of the CTSD.
The 100\% (4165) experiment uses 4,165 images, 100\% of images of the CTSD target dataset.
Similar to the previous experiments, we use the \emph{linear layer phase} to adapt the pre-trained encoder network to the target task.
As shown in Figure~\ref{fig:diagram}, all experiments are evaluated on the full CTSD target dataset and on the 25 common classes of the GTSRB source dataset.
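The target-data amounts used in the five retraining runs can be summarized as follows; the counts are taken directly from the descriptions above:

```python
# CTSD target-data splits used in the supervised domain adaptation experiment
n_classes = 58
splits = {
    "one-shot": 58,    # one image per class
    "five-shot": 290,  # five images per class
    "10%": 416,
    "50%": 2083,
    "100%": 4165,      # full CTSD target training set
}
# fraction of the full target training set covered by each split
fractions = {name: count / splits["100%"] for name, count in splits.items()}
```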
\begin{figure}[tb]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{img/diagram_a.pdf}
\caption{Evaluation on CTSD}\label{fig:diagram_a}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=\textwidth]{img/diagram_b.pdf}
\caption{Evaluation on GTSRB}\label{fig:diagram_b}
\end{subfigure}
\caption{Comparison of KG-NN, SupCon, and CE on the test dataset of the target domain for five different amounts of target data: a) evaluates the NNs on the target domain; b) evaluates the same NNs on the initial source domain to reflect \emph{catastrophic forgetting} phenomena.}
\label{fig:diagram}
\end{figure}
Evaluating the approaches on the initial source domain, we find that all NNs suffer from \emph{catastrophic forgetting}, as depicted in Figure~\ref{fig:diagram_b}.
If 100\% of target data is used for training, the accuracy of CE drops from 96.1\% to 49.5\%, the accuracy of SupCon drops from 41.9\% to 37.2\%, and the accuracy of KG-NN drops from 96.9\% to 60.7\% on the source domain.
This means that KG-NN remains the best-performing model on the source domain even after retraining on the target domain, with its lead over CE increasing from 0.8\% to 11.2\%.
We think that the fixed embedding space between source and target domain helps to overcome the issue of \emph{catastrophic forgetting}.
If we compare the approaches on the target domain as illustrated in Figure~\ref{fig:diagram_a}, we see that KG-NN achieves an accuracy of 88.1\%, which is an improvement by 5.9\% over standard CE and by 23.4\% over SupCon.
Since we operate in a transfer learning setting, we introduce an additional target-only baseline.
Thus, CE is initialized with weights pre-trained on ImageNet, instead of using the source domain to pre-train the parameters of the NN.
We see that the target-only baseline suffers from the small amount of target data in $D_T$, yielding only 53.1\% accuracy, as the ImageNet initialization does not suit the task of road sign recognition well.
All approaches seem to be able to transfer some knowledge from the source domain $D_S$ to the target domain $D_T$ outperforming the target-only baseline.
However, KG-NN significantly outperforms the baseline by 35.0\%, whereas CE improves by 29.1\% and SupCon by 11.6\%.
Interestingly, with less than five target images per class, which is fewer than 7\% of target data, KG-NN surpasses the performance of the target-only baseline.
We observe that KG-NN consistently outperforms CE by approximately 10\% in accuracy.
Compared to SupCon, the accuracy difference even increases as more labeled target data becomes available.
In the one-shot scenario, KG-NN outperforms CE by 12.2\% in accuracy, in the five-shot scenario by 13.8\%, in the 10\% scenario by 11.2\%, in the 50\% scenario by 10.7\%, and on the full target dataset by 5.9\%.
In the one-shot scenario, KG-NN outperforms SupCon by 10.3\% in accuracy, in the five-shot scenario by 25.4\%, in the 10\% scenario by 25\%, in the 50\% scenario by 31.6\%, and on the full target dataset by 23.4\%.
\section{Related Work}
\label{sec:related-work}
Embedding spaces trained with the cross-entropy loss tend to be specialized embedding spaces for a particular domain.
To reduce the high dependency on the training domain, pre-training methods that generate rich embedding spaces seem to be a promising research direction for CV and NLP.
Most neuro-symbolic approaches only learn a transformation function, e.g., MLP, on top of a pre-trained $\vec{h}_{v}$.
We refer to these models as visual-semantic transformation models.
Since the weights of the visual feature extractor are an essential component of robust object recognition, recent approaches have shown that learning a visual-semantic feature extractor from scratch improves generalization capabilities and makes the NN applicable to further downstream and transfer learning tasks~\cite{radford2learning}.
We refer to these models as visual-semantic feature extractors.
\paragraph{Neural Networks improved by Knowledge Graphs}
Most of the works that combine KGs with NNs use WordNet~\cite{Wang2018ZeroShotRV}, small-scale label~\cite{DBLP:conf/cvpr/LeeFYW18,DBLP:conf/cvpr/ChenWWG19} or scene~\cite{DBLP:conf/cvpr/ChenLFG18} graphs as KG.
However, the capacity of WordNet as a lexical database is limited.
Large-scale KGs such as DBPedia~\cite{DBLP:conf/semweb/AuerBKLCI07} or ConceptNet~\cite{DBLP:conf/aaai/SpeerCH17} encode additional semantic information by using higher order relations between concepts.
Although their applications are still sparse in the visual domain, there are a few works that have shown promising results.
DBPedia is already used in the field of explainable AI~\cite{DBLP:journals/corr/abs-1901-08547,DBLP:series/ssw/LecueCPC20}, object detection~\cite{DBLP:journals/corr/abs-1908-04385}, and visual question answering~\cite{DBLP:conf/ijcai/WangWSDH17}; and ConceptNet is used for video classification~\cite{DBLP:journals/corr/abs-1711-01714} and zero-shot action recognition~\cite{DBLP:conf/aaai/GaoZX19}.
However, all approaches use the KG only as a post-validation step on a pre-trained visual feature extractor, while KG-NN learns the visual feature extractor by itself based on the KG.
\paragraph{Visual-Semantic Transformation Models} are learned via a transformation function, e.g. MLP, from a pre-trained $\vec{h}_{v}$ into $\vec{h}_{s}$.
One of the first approaches that use $\vec{h}_{s}$ with NNs is the work from Mitchell et al.~\cite{Mitchell1191}.
They use word embeddings derived from text corpus statistics to generate neural activity patterns, i.e. images.
Instead of generating images from text, Palatucci et al.~\cite{DBLP:conf/nips/PalatucciPHM09} learn a linear regression model to map neural activity patterns into word embedding space.
In their work, they improve zero-shot learning by extrapolating knowledge gathered from related classes in the $\vec{h}_{s}$ to novel classes.
Socher et al.~\cite{DBLP:conf/nips/SocherGMN13} present a model for zero-shot learning that learns a transformation function between an $\vec{h}_{v}$ space, obtained by an unsupervised feature extraction method, and an $\vec{h}_{s}$, based on an NN-based language model.
The authors trained a 2-layer NN with the MSE loss to transform the $\vec{h}_{v}$ into the word embedding of 8 classes.
Frome et al.~\cite{DBLP:conf/nips/FromeCSBDRM13} introduce the deep visual-semantic embedding model DeViSE that extends the approach from 8 known and 2 unknown classes to 1000 known classes for the image model and up to 20,000 unknown classes.
Therefore, they pre-train their visual feature extractor using ImageNet and their $\vec{h}_{s}$ based on the Word2Vec~\cite{DBLP:conf/nips/MikolovSCCD13} language model, exposed to the text of a single online encyclopedia.
In contrast to Socher et al.~\cite{DBLP:conf/nips/SocherGMN13}, DeViSE learns a linear transformation function between the $\vec{h}_{v}$ space and the $\vec{h}_{s}$ space using a combination of dot-product similarity and hinge rank loss, since the MSE distance fails in high-dimensional spaces.
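The combination of dot-product similarity and hinge rank loss can be sketched as below. This is a simplified form without DeViSE's learned linear projection; the margin and the toy embeddings are illustrative assumptions:

```python
import numpy as np

def hinge_rank_loss(v, t_label, t_others, margin=0.1):
    """DeViSE-style loss (sketch): penalize every wrong label embedding whose
    dot-product similarity to the image embedding v comes within `margin`
    of the true label's similarity."""
    s_true = float(v @ t_label)
    return sum(max(0.0, margin - s_true + float(v @ t)) for t in t_others)

v = np.array([1.0, 0.0])                     # image embedding (toy values)
t_true = np.array([1.0, 0.0])                # true label's word embedding
t_wrong = [np.array([0.0, 1.0]), np.array([0.95, 0.0])]
loss = hinge_rank_loss(v, t_true, t_wrong)   # only the near-duplicate label contributes
```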
Norouzi et al.~\cite{DBLP:journals/corr/NorouziMBSSFCD13} propose \emph{convex combination of semantic embeddings} (ConSE), a simple framework for constructing a zero-shot learning classifier.
ConSE uses a semantic word embedding model to reason about the predicted output scores of the NN-based image classifier.
To predict unknown classes, it performs a convex combination of the classes in the $\vec{h}_{s}$ space, weighted by their predicted output scores of the NN.
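ConSE's convex combination can be sketched as follows, with toy scores and embeddings (illustrative values, not from the cited work):

```python
import numpy as np

def conse_embedding(class_scores, seen_embeddings, top_k=2):
    """Predict an embedding for an unseen input as a convex combination of
    seen-class embeddings, weighted by the classifier's top-k scores."""
    idx = np.argsort(class_scores)[::-1][:top_k]
    w = class_scores[idx] / class_scores[idx].sum()  # renormalize to convex weights
    return w @ seen_embeddings[idx]

seen = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [-1.0, 0.0]])
scores = np.array([0.6, 0.3, 0.05, 0.05])            # classifier softmax scores
z = conse_embedding(scores, seen, top_k=2)
```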
Similarly, Zhang et al.~\cite{DBLP:conf/iccv/ZhangS15a} introduce the \emph{semantic similarity embedding} (SSE), which models target data instances as a mixture of seen class proportions.
SSE builds a semantic space where each novel class could be represented as a probabilistic mixture of the projected source attribute vectors of the seen classes.
Akata et al.~\cite{DBLP:journals/pami/AkataPHS16} refer to their $\vec{h}_{s}$ space transformations as label embedding methods.
They compared transformation functions from the $\vec{h}_{v}$ space to the attribute label embedding space, the hierarchy label embedding space, and the Word2Vec label embedding space, in which embedded classes can share features among themselves.
\paragraph{Visual-Semantic Feature Extractors:}
The approaches mentioned so far only learn a transformation from $\vec{h}_{v}$ to $\vec{h}_{s}$.
However, the parameters of the feature extractor are not affected by the auxiliary information.
Thus, if the feature extractor cannot detect visual features due to the domain shift problem, the performance of the final prediction suffers.
Instead of maximizing the likelihood on the output, some approaches maximize the energy (i.e. the difference between the prediction and the expected result) directly on the embedding space to learn the NN.
Hadsell et al.~\cite{Hadsell2006DimensionalityRB} introduce the contrastive loss for a \emph{siamese architecture} to learn a robust embedding space from unlabeled data.
They show that their self-supervised energy-based method can learn a lighting and rotation-invariant embedding space.
Recently, many approaches claim that training an embedding space in a self-supervised manner using the contrastive loss tends to find a more general and domain-invariant representation~\cite{DBLP:conf/icml/ChenK0H20,He2020MomentumCF}.
Furthermore, Tian et al.~\cite{Tian2020RethinkingFI} show that learning an embedding space using the contrastive loss, followed by training a supervised linear classifier on top of this representation, outperforms state-of-the-art few-shot learning methods.
Joulin et al.~\cite{DBLP:conf/eccv/JoulinMJV16} demonstrate that feature extractors trained to predict words in image captions can learn useful visual-semantic embedding spaces $\vec{h}_{v(s)}$.
Further, Radford et al.~\cite{radford2learning} proposed a simple and general pre-training of an NN with natural language supervision using a dataset of 400 million image-text pairs collected from the internet and the contrastive objective of Zhang et al.~\cite{DBLP:journals/corr/abs-2010-00747}.
To the best of our knowledge, there is no prior work that learns a visual feature extractor using a KG or its embedding space $\vec{h}_{KG}$.
We choose to use prior knowledge encoded in a knowledge graph instead of using the unstructured knowledge of a language embedding as they are highly dependent on their text corpus, inconsistent, and do not incorporate expert knowledge.
\section{Conclusion and Future Work}
\label{sec:conclusion}
In this paper, we propose KG-NN, a knowledge graph-based approach that enables NNs to learn more robust and controlled embedding spaces for transfer learning tasks.
The core idea of our approach is to use domain-invariant knowledge represented in a KG, transform it into a vector space using knowledge graph embedding algorithms, and train an NN so that its embedding space is adapted to the domain-invariant embeddings given by the KG.
Using our KG-based contrastive loss function, we force the NN to adapt its $\vec{h}_{v}$ space to the domain-invariant space $\vec{h}_{KG}$ given by the KG, thus forming $\vec{h}_{v(KG)}$.
Our experimental results show that NNs benefit from exploiting prior knowledge.
As a result, accuracy increases on known and unknown domains, and the NNs keep up with NNs trained with the cross-entropy loss despite requiring significantly less training data.
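The alignment mechanism described above can be sketched as an InfoNCE-style loss against fixed class embeddings. This is an assumed simplification, not the paper's exact loss function; only the temperature $\tau=0.5$ matches the experimental setup:

```python
import numpy as np

def kg_alignment_loss(h_v, labels, h_kg, tau=0.5):
    """Pull each visual embedding toward the fixed KG embedding of its class
    and push it away from all other class embeddings (InfoNCE-style sketch)."""
    h_v = h_v / np.linalg.norm(h_v, axis=1, keepdims=True)
    h_kg = h_kg / np.linalg.norm(h_kg, axis=1, keepdims=True)
    logits = h_v @ h_kg.T / tau                      # cosine similarities / tau
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(labels)), labels].mean()

h_kg = np.array([[1.0, 0.0], [0.0, 1.0]])            # fixed KG class embeddings (toy)
aligned = kg_alignment_loss(np.array([[1.0, 0.0], [0.0, 1.0]]), [0, 1], h_kg)
misaligned = kg_alignment_loss(np.array([[1.0, 0.0], [0.0, 1.0]]), [1, 0], h_kg)
```

A visual encoder trained with such a loss inherits the class geometry of $\vec{h}_{KG}$ rather than inventing its own.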
There are several directions of future work:
First, identifying discriminative factors to best influence the domain-invariant space.
Therefore, further investigations are needed to determine \emph{what} knowledge is relevant and should be modeled in the KG to enable transfer learning.
Second, analyzing \emph{how} the prior knowledge can be modeled and represented best, e.g., via n-ary relations or hyper-relational graphs.
Third, exploring various embedding techniques to operate on multi-modal information or Riemannian metrics to exploit hierarchical relations.
And finally, evaluating different contrasting dimensions and knowledge infusion techniques
could lead to further improvements.
We believe that the construction of task-specific knowledge graph embeddings and their combination with learned embeddings of NNs will help to build more interpretable, more robust, and more accurate machine learning models, while at the same time requiring less training data.
\section{Acknowledgement}
This publication was created as part of the research project "KI Delta Learning" (project number: 19A19013D) funded by the Federal Ministry for Economic Affairs and Energy (BMWi) on the basis of a decision by the German Bundestag.
\bibliographystyle{splncs04}
\section{Introduction}
The short time Fourier transform (STFT) has many applications, especially in signal and image processing, as it provides good time-frequency localization. Singularities across a curve, such as edges in an image, often hold the key information in multidimensional signals. To tackle such directional singularities, Candes \cite{candes1998ridgelets} first introduced the ridgelet transform, which is the wavelet transform in the Radon domain. Ridgelets are constant along ridges, or hyperplanes. Nowadays curvelets and shearlets play an important role in representing directional selectivity, as they give optimal sparse approximations for a class of bivariate functions exhibiting anisotropic features \cite{candes2004new,candes2005continuous,kutyniok2009resolution,han2020microlocal}. These directional representations relate to the wavelet transform as follows: they project a hyperplane singularity into a point singularity and then take a one-dimensional wavelet transform. We recall that the short time Fourier transform (STFT) \cite{grochenig2001foundations} of a function $f\in L^2({\mathbb R}^n)$ with respect to a window $g\in L^2({\mathbb R}^n)$ is the function $V_gf(x,\omega)=\int_{{\mathbb R}^n}f(t)\overline{g(t-x)}e^{-2\pi i\omega t}dt$, i.e.,
\begin{equation}\label{repif}
V_gf(x,\omega)=\widehat{f.T_x\bar{g}}(\omega).
\end{equation}
Since the STFT enjoys an orthogonality relation, it gives a full reconstruction of a function/distribution $f$: for $f,g,h\in L^2({\mathbb R}^n)$ and $\widehat{f}, \widehat{g}\in L^1({\mathbb R}^n)$,
\begin{equation}
f(u)=\frac{1}{\langle h,g\rangle}\int_{{\mathbb R}^n}\int_{{\mathbb R}^n}V_gf(x,\omega)M_{\omega}T_x h(u)d\omega dx,\ \ \textrm{for\ all}\ u\in {\mathbb R}^n,
\end{equation}
where $M_{\omega}$ and $T_x$ are the modulation and translation operators, respectively. But the STFT does not give directional information about the function or distribution. To obtain directional sensitivity in the time-frequency decomposition, Grafakos and Sansing \cite{grafakos2008gabor} introduced a variant of the STFT of a function $f\in L^2({\mathbb R}^n)$ using the Gabor ridge functions $e^{2\pi i m (s-t)}.g(s-t)$ on $S^{n-1}\times {\mathbb R} \times {\mathbb R}$ as
\begin{equation}
(\xi,x,\omega)\rightarrow \int_{{\mathbb R}^n}f(t)\overline{g(\xi.t-x)}e^{-2\pi i \omega (\xi.t)}dt.
\end{equation}
But this transform does not provide a full reconstruction of a function $f$; to achieve it, the authors in \cite{grafakos2008gabor} use derivatives of Gabor ridge functions, i.e., ``weighted Gabor ridge functions'', for the analysis and synthesis of $f$. Moreover, this transform loses the Fourier transform representation \eqref{repif} of the STFT, since $x\in{\mathbb R}^n$ and $\omega\in {\mathbb R}$. Modifying this idea, Giv \cite{giv2013directional} introduced the directional short-time Fourier transform and studied some useful properties such as orthogonality and a full reconstruction formula. Recently, Mejjaoli and Omri \cite{HMSO2020spectral} introduced a generalized two-wavelet multiplier using the directional STFT and studied the $L^p$ boundedness and compactness of the two-wavelet multiplier. \\
Our aim in this paper is to give a generalization of the two-wavelet multiplier on a locally compact abelian topological group $G$ associated to right $H$-translation-invariant functions. For this, as a generalization of the directional STFT, we define a transform $\mathcal{D}_H^gf(\omega,zH)$ of $f\in L^2(G)$ using a character $\omega\in \widehat{G}$ and a window $g\in L^2(G/H)$, where $H$ is a closed subgroup of $G$. In Section \ref{FoT}, we first define the STFT-like transform $\mathcal{D}_H^gf$ for $f\in L^1(G)$ and $g\in L^\infty(G/H)$ on $G/H$, which we show satisfies certain orthogonality relations. In this section we also define the generalized multiplier and the generalized two-wavelet multiplier. In Sections \ref{bound}, \ref{SchttenN} and \ref{lpbdd} we show that the generalized multiplier is bounded, of Schatten class, and $L^p$-bounded, $1\leq p\leq \infty$, for every symbol $\sigma$ in $L^p(\widehat{G}\times G/H)$, $1\leq p\leq \infty$. In Section \ref{Compt} we show that the generalized wavelet multipliers are compact operators for every symbol $\sigma$ in $L^1(\widehat{G}\times G/H)$. Finally, in Section \ref{LPOp} we define the generalized Landau-Pollak-Slepian operator and show that it is unitarily equivalent to a scalar multiple of the generalized two-wavelet multiplier.
\section{Fourier like transform on locally compact abelian topological groups associated to the coset of closed subgroup}\label{FoT}
$G$ denotes a locally compact abelian topological group with the Haar measure $dm_{G}$ and $\widehat{G}$ is the dual group of $G$ with the Haar measure $dm_{\widehat{G}}$ such that $dm_{\widehat{G}}$ is the dual measure of $dm_{G}$. Let $H$ be a closed subgroup of an LCA group $G$ with the Haar measure $dm_{H}$. The annihilator of $H$ is the set $H^{\perp}\subset \widehat{G}$ given by $H^{\perp}=\lbrace \eta\in \widehat{G}: \langle y,\eta\rangle=1\ \textrm{for\ all}\ y\in H\rbrace$. Moreover, $H^{\perp}$ is a closed subgroup of $\widehat{G}$, is topologically isomorphic with the character group of $G/H$, and we have the following:
$$(H^{\perp})^{\perp}=H\ \ \ \ \textrm{and}\ \ \ \ \widehat{H}=\widehat{G}/H^{\perp}.$$
Let $f$ be any function in $L^1(G)$. Then, according to Theorem 28.54 in \cite{EHKAR1997abstract}, the function $x\rightarrow \int_{H}f(xy)dm_{H}(y)$ depends only on the (left) coset of $H$ containing $x$ and hence defines a function on the quotient group $G/H$. Moreover, the function $R_H: xH\rightarrow \int_{H}f(xy)dm_{H}(y)$ is Haar measurable on $G/H$ and belongs to $L^1(G/H)$. We normalize the Haar measures so that
\begin{equation}\label{weilsformula}
\int_{G/H}R_H(xH)dm_{G/H}(xH)=\int_{G/H}\int_{H}f(xy)dm_{H}(y)dm_{G/H}(xH)=\int_G f(x)dm_{G}(x).
\end{equation}
For $R_H\in L^1(G/H)$, the Fourier transform of $R_H$ is defined by $$\widehat{R_H}(\chi^+)=\int_{G/H}R_H(xH)\overline{\chi^+(xH)}dm_{G/H}(xH),$$
where the character $\chi^+\in H^{\perp}$ of the group $G/H$ is defined by $\chi^+(xH)=\chi(x)$ for $\chi\in \widehat{G}$. The relation between the Fourier transform of a function on the group $G/H$ and that of a function on $G$ is the following:
\begin{align}\label{fourierslice}
\widehat{R_H}(\chi^+)&=\int_{G/H}\int_{H}f(xy)dm_{H}(y)\overline{\chi^+(xH)}dm_{G/H}(xH)\nonumber\\
&=\int_{G/H}\int_{H}f(xy)\overline{\chi(xy)}dm_{H}(y)dm_{G/H}(xH)\nonumber\\
&=\widehat{f}(\chi).
\end{align}
Before describing the operator, we first recall the short time Fourier transform (STFT) for locally compact abelian groups. Given an appropriate window $g\in L^2(G)$, the STFT of $f\in L^2(G)$ with respect to $g$ at a point $(x,\omega)\in G\times \widehat{G}$ is defined as
$$\mathcal{V}_gf(x,\omega)=\int_G f(y)\overline{g}(x^{-1}y)\overline{\omega(y)}dm_G(y)=\langle f,M_{\omega}T_xg\rangle=(f.T_x\overline{g})^{\wedge}(\omega).$$
The STFT enjoys the following orthogonality relation for $f,g\in L^2(G)$:
\begin{align}
\int_{G}\int_{\widehat{G}}|\mathcal{V}_g f(x,\omega)|^2dm_{\widehat{G}}(\omega)dm_{G}(x)=\|g\|_2^2 \|f\|_2^2.
\end{align}
For $f,g,h\in L^2(G)$ and $\widehat{f},\widehat{g}\in L^1(\widehat{G})$, we have the reconstruction formula for $f$:
\begin{align}
f(u)=\frac{1}{\langle h,g\rangle}\int_G\int_{\widehat{G}} \mathcal{V}_g f(x,\omega)M_\omega T_x h(u)dm_{\widehat{G}}(\omega) dm_G(x)\ \ \textrm{for\ all}\ u\in G.
\end{align}
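For a finite abelian group these relations can be verified numerically. The following sketch takes $G=\mathbb{Z}_N$ with counting Haar measure, so that $\widehat{G}\cong\mathbb{Z}_N$ carries the dual measure $\frac{1}{N}\times$ counting measure; the choice $N=16$ and the random $f,g$ are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)
g = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# STFT on G = Z_N: V_g f(x, k) = sum_y f(y) conj(g(y - x)) e^{-2 pi i k y / N}
V = np.array([np.fft.fft(f * np.conj(np.roll(g, x))) for x in range(N)])

# orthogonality relation: ||V_g f||^2 = ||f||^2 ||g||^2
# (dual Haar measure on the character group is (1/N) x counting measure)
lhs = (np.abs(V) ** 2).sum() / N
rhs = (np.abs(f) ** 2).sum() * (np.abs(g) ** 2).sum()

# reconstruction with h = g: f = (1/||g||^2) sum_x ifft_k(V[x, .]) * T_x g
recon = sum(np.fft.ifft(V[x]) * np.roll(g, x) for x in range(N))
recon = recon / (np.abs(g) ** 2).sum()
```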
Now, for a closed subgroup $H\subset G$, a window $g\in L^\infty(G/H)$ and $zH\in G/H$, we define the STFT-like transform of $f\in L^1(G)$ using right-$H$-translation-invariant functions on $G$ as a function on $G/H\times \widehat{G}$ as
\begin{equation}\label{Dg}
\mathcal{D}_H^gf(\omega,zH)=\int_G f(x)\overline{\omega(x)g(z^{-1}xH)}dm_G(x)=\int_{G}M_{-\omega}f(x)\overline{g(z^{-1}xH)}dm_G(x).
\end{equation}
Next we show how the transform $\mathcal{D}_H^gf(\omega,zH)$ can be written in terms of the inner product on $G/H$ in a natural way.
\begin{theorem}\label{transformip}
If $g\in L^\infty(G/H)$ and $f\in L^1(G)$, then for every $(\omega,zH)\in \widehat{G}\times G/H$
\begin{equation}
\mathcal{D}_H^gf(\omega,zH)=({f(x)\overline{g(z^{-1}xH)}})^{\wedge}(\omega)=\langle R_H(M_{-\omega}f),T_{zH}g\rangle_{G/H}=\left(R_H(M_{-\omega}f)*g\right)(zH).
\end{equation}
\end{theorem}
\begin{proof}
The result follows easily in view of Weil's formula \eqref{weilsformula} and the following:
\begin{eqnarray*}
\mathcal{D}_H^gf(\omega,zH)&=&\int_G f(x)\overline{\omega(x)g(z^{-1}xH)}dm_G(x)\\
&=&\int_{G/H}\int_{H}f(xy)\overline{\omega(xy)g(z^{-1}xyH)}dm_{H}(y)dm_{G/H}(xH)\\
&=&\int_{G/H}R_H(M_{-\omega}f)(xH)\overline{g(z^{-1}xH)}dm_{G/H}(xH)\\
&=&\left(R_H(M_{-\omega}f)*g\right)(zH),
\end{eqnarray*}
where for the functions $f_1,f_2\in L^1(G/H)$ the convolution in the quotient space is defined by
$$
(f_1*f_2)(xH)=\int_{G/H}f_1(yH)f_2(y^{-1}xH)dm_{G/H}(yH).
$$
\end{proof}
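As a finite illustration of the theorem (our choice, not from the text), take $G=\mathbb{Z}_{12}$ with counting Haar measures and $H=\{0,4,8\}$, so that $G/H\cong\mathbb{Z}_4$ and the coset of $x$ is represented by $x\bmod 4$:

```python
import numpy as np

N, nq = 12, 4                      # G = Z_12, H = {0, 4, 8}, G/H ~ Z_4
rng = np.random.default_rng(1)
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)
g = rng.standard_normal(nq) + 1j * rng.standard_normal(nq)
x = np.arange(N)

def D_direct(k, z):
    """Definition: sum_x f(x) conj(omega_k(x)) conj(g((x - z)H))."""
    omega = np.exp(2j * np.pi * k * x / N)
    return np.sum(f * np.conj(omega) * np.conj(g[(x - z) % nq]))

def D_via_RH(k, z):
    """Theorem: <R_H(M_{-omega} f), T_{zH} g> on G/H."""
    mf = f * np.exp(-2j * np.pi * k * x / N)        # M_{-omega} f
    RH = np.array([mf[x % nq == c].sum() for c in range(nq)])
    c = np.arange(nq)
    return np.sum(RH * np.conj(g[(c - z) % nq]))

direct = np.array([[D_direct(k, z) for z in range(nq)] for k in range(N)])
via_rh = np.array([[D_via_RH(k, z) for z in range(nq)] for k in range(N)])
```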
The transform $\mathcal{D}_H^gf(\omega,zH)$ can be regarded as an STFT on the quotient group $G/H$ through the following theorem:
\begin{theorem}
If $f\in L^1(G)$, $\widehat{f}\in L^1(\widehat{G})$ and $g\in L^1(G/H)\cap L^\infty(G/H)$, then the transform $\mathcal{D}_H^gf(\omega,zH)$ is the STFT of the function $T_{-\omega}\widehat{f}$ with respect to the window $\widehat{g}$, evaluated at $(0,-zH)$.
\end{theorem}
\begin{proof}
From the previous theorem we have $\mathcal{D}_H^gf(\omega,zH)=\langle R_H(M_{-\omega}f),T_{zH}g\rangle_{G/H}$. Since $f\in L^1(G)$ and $\widehat{f}\in L^1(\widehat{G})$, we have $R_H(M_{-\omega}f)\in L^1(G/H)$ and $\widehat{R_H(M_{-\omega}f)}\in L^1(H^{\perp})$. So $R_H(M_{-\omega}f)\in L^2(G/H)$. Also $g\in L^2(G/H)$. Hence, using Plancherel's theorem and \eqref{fourierslice}, we can rewrite $\mathcal{D}_H^gf(\omega,zH)$ as follows, for $\eta\in H^{\perp}$:
\begin{align*}
\mathcal{D}_H^gf(\omega,zH)&=\langle \widehat{R_H(M_{-\omega}f)}(\eta),\widehat{T_{zH}g}(\eta)\rangle_{H^{\perp}}\\
&=\langle \widehat{M_{-\omega}f}(\eta),M_{-zH}\widehat{g}(\eta)\rangle_{H^{\perp}}\\
&=\langle T_{-\omega}\widehat{f}(\eta),M_{-zH}\widehat{g}(\eta)\rangle_{H^{\perp}}.
\end{align*}
Hence proved.
\end{proof}
The transform $\mathcal{D}_H^gf(\omega,zH)$ satisfies the following orthogonality relations:
\begin{theorem}\label{orthothm}
\begin{itemize}
\item[(i)] For every directional window $g\in L^{\infty}(G/H)$, the operator $\mathcal{D}_H^g$ is bounded from $L^1(G)$ into $L^{\infty}(\widehat{G}\times G/H)$ and the operator norm satisfies
\begin{equation}\label{Dgb}
\|\mathcal{D}_H^g\|\leq \|g\|_{L^{\infty}(G/H)}.
\end{equation}
\item[(ii)] Suppose $g_1,g_2\in L^{\infty}(G/H)$ and $f_1,f_2\in L^1(G)\cap L^2(G)$. If at least one of the $g_i$'s is in $L^1(G/H)$, then
\begin{equation}\label{orthogonalitysp}
\int_{\widehat{G}\times G/H}\mathcal{D}_H^{g_1}f_1(\omega,zH)\overline{\mathcal{D}_H^{g_2}f_2(\omega,zH)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)=\langle f_1,f_2 \rangle_{L^2(G)} \langle g_2,g_1\rangle_{L^2(G/H)}.
\end{equation}
Moreover, if $g\in L^1(G/H)\cap L^\infty (G/H)$ and $f\in L^1(G)\cap L^2(G)$, then $\mathcal{D}_H^{g}f(\omega,zH)\in L^2(G/H\times\widehat{G})$ and
\begin{equation}\label{orthogonalitynorm}
\|\mathcal{D}_H^{g}f\|_{L^2(G/H\times\widehat{G})}=\|g\|_{L^2(G/H)}\|f\|_{L^2(G)}.
\end{equation}
\end{itemize}
\end{theorem}
\begin{proof}
The proof of the first part follows from Equation \eqref{Dg}.\\
Since $g_i\in L^{\infty}(G/H)$ and $f_i\in L^1(G)\cap L^2(G)$, $f_i(x)\overline{\omega(x)g_i(z^{-1}xH)}\in L^1(G)\cap L^2(G)$ for $i=1,2$. Using Plancherel's theorem we can write the following:
\begin{align*}
&\int_{\widehat{G}\times G/H}\mathcal{D}_H^{g_1}f_1(\omega,zH)\overline{\mathcal{D}_H^{g_2}f_2(\omega,zH)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
&= \int_{\widehat{G}\times G/H}({f_1(x)\overline{g_1(z^{-1}xH)}})^{\wedge}(\omega)\overline{({f_2(x)\overline{g_2(z^{-1}xH)}})^{\wedge}(\omega)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
&= \int_{{G}\times G/H} f_1(x)\overline{g_1(z^{-1}xH)}\overline{f_2(x)}g_2(z^{-1}xH)dm_{G}(x)dm_{G/H}(zH)\\
&= \int_{G} f_1(x)\overline{f_2(x)}\left(\int_{G/H}\overline{g_1(z^{-1}xH)}g_2(z^{-1}xH)dm_{G/H}(zH)\right)dm_{G}(x).
\end{align*}
Hence the proof of \eqref{orthogonalitysp} follows by noting
$$\langle g_2,g_1\rangle_{G/H} = \int_{G/H}\overline{g_1(z^{-1}xH)}g_2(z^{-1}xH)dm_{G/H}(zH),$$ since $G/H$ is a unimodular group. Equation \eqref{orthogonalitynorm} follows as a consequence of \eqref{orthogonalitysp}.
\end{proof}
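Relation \eqref{orthogonalitynorm} can be checked in the same finite setting $G=\mathbb{Z}_{12}$, $H=\{0,4,8\}$ with counting measures (an illustrative choice; the dual measure on $\widehat{G}$ is $\frac{1}{12}\times$ counting measure):

```python
import numpy as np

N, nq = 12, 4                     # G = Z_12, H = {0, 4, 8}, G/H ~ Z_4
rng = np.random.default_rng(2)
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)
g = rng.standard_normal(nq) + 1j * rng.standard_normal(nq)
x = np.arange(N)

# ||D_H^g f||^2 over G-hat x G/H, using Plancherel on G-hat in the k-variable
total = 0.0
for z in range(nq):
    h = f * np.conj(g[(x - z) % nq])                 # f . conj(T_{zH} g) on G
    total += (np.abs(np.fft.fft(h)) ** 2).sum() / N  # (1/N) sum_k |h-hat(k)|^2

lhs = total
rhs = (np.abs(f) ** 2).sum() * (np.abs(g) ** 2).sum()
```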
\begin{cor}\label{corinversion}
Suppose $g_1,g_2\in L^\infty(G/H)$ and $f\in L^1(G)\cap L^2(G)$. If at least one of the $g_i$'s is in $L^1(G/H)$ and $\langle g_2,g_1\rangle\neq 0$, then
$$f(x)=\frac{1}{\langle g_2,g_1\rangle}\int_{\widehat{G}\times G/H}\mathcal{D}_H^{g_1}f(\omega,zH)\omega(x)g_2(z^{-1}xH)dm_{\widehat{G}}(\omega)dm_{G/H}(zH).$$
Moreover, for non-zero function $g\in L^1(G/H)\cap L^\infty(G/H)$
$$f(x)=\frac{1}{\|g\|_{L^2(G/H)}^2}\int_{\widehat{G}\times G/H}\mathcal{D}_H^{g}f(\omega,zH)\omega(x)g(z^{-1}xH)dm_{\widehat{G}}(\omega)dm_{G/H}(zH).$$
\end{cor}
\begin{proof}
The proof follows by using Theorem~\ref{orthothm}.
\end{proof}
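The inversion formula of Corollary~\ref{corinversion} can likewise be verified in the finite setting $G=\mathbb{Z}_{12}$, $H=\{0,4,8\}$ (illustrative choice, counting measures, dual measure $\frac{1}{12}\times$ counting):

```python
import numpy as np

N, nq = 12, 4                       # G = Z_12, H = {0, 4, 8}, G/H ~ Z_4
rng = np.random.default_rng(3)
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)
g = rng.standard_normal(nq) + 1j * rng.standard_normal(nq)
x = np.arange(N)

# D_H^g f(omega_k, zH) for all characters k and coset representatives z
D = np.array([[np.sum(f * np.exp(-2j * np.pi * k * x / N)
                      * np.conj(g[(x - z) % nq]))
               for z in range(nq)] for k in range(N)])

# f(x) = (1/||g||^2) (1/N) sum_{k,z} D(k,z) omega_k(x) g((x - z)H)
recon = np.zeros(N, dtype=complex)
for k in range(N):
    for z in range(nq):
        recon += D[k, z] * np.exp(2j * np.pi * k * x / N) * g[(x - z) % nq]
recon /= N * (np.abs(g) ** 2).sum()
```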
\begin{rem}
The weak version of the above Corollary~\ref{corinversion} means that for every function $u$ in $L^1(G)\cap L^2(G)$ there exists a unique $f\in L^2(G)$ such that
$$\langle f,u\rangle=\frac{1}{\langle g_2,g_1\rangle}\int_{\widehat{G}\times G/H}\mathcal{D}_H^{g_1}f(\omega,zH)\overline{\mathcal{D}_H^{g_2}u(\omega,zH)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH),$$
which is nothing but the equation \eqref{orthogonalitysp}.
\end{rem}
\begin{prop}
We assume that $g\in L^1(G/H)\cap L^{\infty}(G/H)$, $f\in L^1(G)\cap L^2(G)$ and $p\in[2,\infty]$. Then we have
\begin{equation}\label{Dgp}
\|\mathcal{D}_H^gf\|_{L^p(\widehat{G}\times G/H)}\leq \|g\|_{L^p(G/H)}\|f\|_{L^{p^{\prime}}(G)}.
\end{equation}
\end{prop}
\begin{exmp}
Here we give one example where $G=\mathbb R^n$ and, for a fixed $\theta\in S^{n-1}$, $H=\lbrace x\in G: x.\theta=0\rbrace$. Any element of $G/H$ can be written as $xH=(x.\theta) \theta H$, so the quotient group is $G/H=\lbrace t\theta H,t\in \mathbb R\rbrace$. Hence the right-$H$-translation-invariant function $R_Hf$ can be written as
\begin{align*}
R_H f(xH)=R_H f(t\theta H)=\int_{H}f(t\theta h)dm_H(h)=\int_{z.\theta=t}f(z)dz=R_\theta f(t),
\end{align*}
where $R_\theta f(t)$, a function on $S^{n-1}\times \mathbb{R}$, is the Radon transform of $f$. In this case the transform $\mathcal{D}_H^{g}f(\omega,zH)$ is represented using Theorem~\ref{transformip} as
\begin{align*}
\mathcal{D}_H^{g}f(\omega,zH)=\int_{G/H}R_H(M_{-\omega}f)(xH)\overline{g(z^{-1}xH)}dm_{G/H}(xH).
\end{align*}
The subgroup $H$ of $G$ defines an equivalence relation on $G$ by $g_1\sim g_2$ if and only if $g_2^{-1}g_1\in H$, i.e., for $g_1,g_2\in \mathbb{R}^n$ and our fixed $\theta\in S^{n-1}$, $g_1\cdot\theta=g_2\cdot\theta$. Hence the quotient group $G/H$ is characterized by $\mathbb{R}$, and for $z\in \mathbb{R}$, $\mathcal{D}_H^{g}f(\omega,zH)$ is defined on $S^{n-1}\times \mathbb{R}\times \mathbb{R}^n$ as
\begin{align*}
\mathcal{D}_H^{g}f(\omega,zH)=\int_{t\in \mathbb R}R_\theta(M_{-\omega}f)(t)\overline{g(t-z)}dt=&\int_{t\in \mathbb R}\int_{x\cdot\theta=t}f(x)\overline{\omega(x)}dx\,\overline{g(t-z)}dt\\
=&\int_{t\in \mathbb R}\int_{x\cdot\theta=t}f(x)\overline{\omega(x)}\overline{g(x\cdot\theta-z)}dxdt\\
=&\int_{\mathbb{R}^n}f(x)\overline{\omega(x)}\overline{g(x\cdot\theta-z)}dx,
\end{align*}
which is the directional short-time Fourier transform of the function $f$ with respect to window $g$ \cite{giv2013directional, HMSO2020spectral}.
\end{exmp}
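To make the last identity fully explicit, one may identify $\widehat{\mathbb{R}^n}$ with $\mathbb{R}^n$ and write the characters as $\omega(x)=e^{2\pi i\xi\cdot x}$, $\xi\in\mathbb{R}^n$ (this identification is our convention here, not fixed above); the transform then takes the familiar form
\begin{align*}
\mathcal{D}_H^{g}f(\omega,zH)=\int_{\mathbb{R}^n}f(x)e^{-2\pi i\xi\cdot x}\overline{g(x\cdot\theta-z)}dx,\qquad (\theta,z,\xi)\in S^{n-1}\times\mathbb{R}\times\mathbb{R}^n.
\end{align*}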
We define the generalized multiplier and generalized two-wavelet multiplier in the following.
\begin{defn}\label{generalizedmultiplier}
Let $\sigma\in L^{\infty}(\widehat{G}\times G/H)$ and $0\neq g\in L^1(G/H)\cap L^{\infty}(G/H)$. We define the linear operator
$M_{\sigma, g}:L^2(G)\rightarrow L^2(G)$ by $$M_{\sigma, g}(f)=(D_{H}^g)^{-1}(\sigma D_{H}^gf).$$ This operator is called the generalized multiplier.
\end{defn}
\begin{defn}\label{def}
Let $u,v$ be measurable functions on $G$ and $\sigma$ a measurable function on $\widehat{G}\times G/H$. For $0\neq g\in L^1(G/H)\cap L^{\infty}(G/H)$, we define the generalized two-wavelet multiplier operator $P_{u,v,g}(\sigma)$ on $L^p(G)$, $1\leq p\leq\infty$, by
$$ P_{u,v,g}(\sigma)(f)(t)=\int_{\widehat{G}\times G/H}\sigma(\omega,zH)D_{H}^{g}(uf)(\omega,zH)g_{\omega,zH}(t)\overline{v(t)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH),$$ where $g_{\omega,zH}(x)=g(z^{-1}xH)\omega(x)$.\\
In the weak sense, for $f\in L^p(G)$, $1\leq p\leq\infty$, and $h\in L^{p^{\prime}}(G)$,
$$\langle P_{u,v,g}(\sigma)(f),h\rangle=\int_{\widehat{G}}\int_{G/H}\sigma(\omega,zH)D_{H}^{g}(uf)(\omega,zH)\overline{D_{H}^g(vh)(\omega,zH)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH).$$
\end{defn}
\begin{prop}\label{adjoint}
Let $f$ be in $L^p(G)$ and $h$ in $L^{p^{\prime}}(G)$, where $1\leq p<\infty,\ \frac{1}{p}+\frac{1}{p^{\prime}}=1$. Then $$P^{\ast}_{u,v,g}(\sigma)=P_{v,u,g}(\overline{\sigma}).$$
\end{prop}
\begin{proof}
For $f$ in $L^p(G)$ and $h$ in $L^{p^{\prime}}(G)$,
\begin{align*}
& \langle P_{u,v,g}(\sigma)(f),h\rangle_{L^2(G)}\\
& =\int_{\widehat{G}}\int_{G/H}\sigma(w,zH)D_{H}^{g}(uf)(w,zH)\overline{D_{H}^g(vh)(w,zH)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
& = \overline{\int_{\widehat{G}}\int_{G/H}\overline{\sigma(w,zH)D_{H}^{g}(uf)(w,zH)}D_{H}^g(vh)(w,zH)dm_{\widehat{G}}(\omega)dm_{G/H}(zH)}\\
& = \overline{\langle P_{v,u,g}(\overline{\sigma})(h), f\rangle}\\
& = \langle f, P_{v,u,g}(\overline{\sigma})(h)\rangle.
\end{align*}
Hence we get $$P^{\ast}_{u,v,g}(\sigma)(h)=P_{v,u,g}(\overline{\sigma})(h).$$
This completes the proof.
\end{proof}
\begin{prop}
Let $\sigma\in L^1(\widehat{G}\times G/H)\cup L^{\infty}(\widehat{G}\times G/H)$ and $u,v\in L^2(G)\cap L^{\infty}(G)$. Then $$\langle P_{u,v,g}(\sigma)(f),h\rangle_{L^2(G)}=\|g\|_{L^2(G/H)}^2\langle \overline{v}M_{\sigma,g}(uf),h\rangle_{L^2(G)}.$$
\end{prop}
\begin{proof}
In view of Definition~\ref{generalizedmultiplier} and Theorem~\ref{orthothm} we conclude
\begin{equation}
\begin{split}
\langle P_{u,v,g}(\sigma)(f),h\rangle & =\int_{\widehat{G}}\int_{G/H}\sigma(w,zH)D_{H}^{g}(uf)(w,zH)\overline{D_{H}^g(vh)(w,zH)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
& = \int_{\widehat{G}}\int_{G/H}D_{H}^g(M_{\sigma,g}(uf))(w,zH)\overline{D_{H}^g(vh)(w,zH)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
& = \|g\|_{L^2(G/H)}^2\int_{G}M_{\sigma,g}(uf)(t)\overline{(vh)(t)}dt\\
& = \|g\|_{L^2(G/H)}^2\langle \overline{v}M_{\sigma,g}(uf),h\rangle_{L^2(G)}.
\end{split}
\end{equation}\end{proof}
\section{$L^2$-boundedness of the generalized two-wavelet multiplier}\label{bound}
In this section we will show that the operators $$P_{u,v,g}(\sigma):L^2(G)\rightarrow L^2(G)$$ are bounded linear operators for all symbols $\sigma$ in $L^p(\widehat{G}\times G/H)$, $1\leq p\leq \infty$. \\
Let us assume $u,v$ are in $L^2(G)\cap L^{\infty}(G)$ such that
$$\|u\|_{L^2(G)}=\|v\|_{L^2(G)}=1.$$
\begin{prop}\label{S infty1}
Let $\sigma$ be in $L^1(\widehat{G}\times G/H)$, then the generalized two wavelet multiplier $P_{u,v,g}(\sigma)$ is in $S_{\infty}$.
\end{prop}
\begin{proof}
For all functions $f$ and $h$ in $L^2(G)$, it follows from Definition \ref{def}, Equation \eqref{Dg} and the Cauchy--Schwarz inequality that
\begin{equation}
\begin{split}
& |\langle P_{u,v,g}(\sigma)(f),h\rangle_{L^2(G)}| \\ & \leq\int_{\widehat{G}\times G/H}|\sigma(w,zH)D_{H}^g(uf)(w,zH)D_{H}^g(vh)(w,zH)|dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
& \leq \|D_H^g(uf)\|_{L^{\infty}(\widehat{G}\times G/H)}\|D_H^g(vh)\|_{{L^{\infty}(\widehat{G}\times G/H)}} \int_{\widehat{G}\times G/H}|\sigma(w,zH)|dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
& \leq \|u\|_{L^2(G)}\|f\|_{L^2(G)}\|g\|_{L^{\infty}(G/H)}^2\|v\|_{L^2(G)}\|h\|_{L^2(G)}\|\sigma\|_{L^1(\widehat{G}\times G/H)}.
\end{split}
\end{equation}
Hence, since $\|u\|_{L^2(G)}=\|v\|_{L^2(G)}=1$, $$\|P_{u,v,g}(\sigma)\|_{S_{\infty}}\leq \|\sigma\|_{L^1(\widehat{G}\times G/H)}\|g\|_{L^{\infty}(G/H)}^2.$$
\end{proof}
\begin{prop}\label{S infty}
Let $\sigma $ be in $L^{\infty}(\widehat{G}\times G/H)$, then the generalized two wavelet multiplier operator $P_{u,v,g}(\sigma)$ is in $S_{\infty}$.
\end{prop}
\begin{proof}
Here $P_{u,v,g}(\sigma):L^2(G)\rightarrow L^2(G)$.
For all functions $f$ and $h$ in $L^2(G)$ we have the following, from Definition \ref{def}, the Cauchy--Schwarz inequality and the Plancherel formula \eqref{orthogonalitynorm}:
\begin{equation}
\begin{split}
& |\langle P_{u,v,g}(\sigma)(f),h\rangle_{L^2(G)}| \\
& \leq\int_{\widehat{G}\times G/H}|\sigma(\omega,zH)D_{H}^g(uf)(\omega,zH)D_{H}^g(vh)(\omega,zH)|dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
& \leq \|D_H^g(uf)\|_{L^{2}(\widehat{G}\times G/H)}\|D_H^g(vh)\|_{{L^{2}(\widehat{G}\times G/H)}}\|\sigma\|_{L^{\infty}(\widehat{G}\times G/H)}\\
& \leq \|u\|_{L^{\infty}(G)}\|f\|_{L^2(G)}\|g\|_{L^2(G/H)}^2\|v\|_{L^{\infty}(G)}\|h\|_{L^2(G)}\|\sigma\|_{L^{\infty}(\widehat{G}\times G/H)}.
\end{split}
\end{equation}
Hence $$\|P_{u,v,g}(\sigma)\|_{S_{\infty}}\leq \|u\|_{L^{\infty}(G)}\|v\|_{L^{\infty}(G)}\|g\|_{L^2(G/H)}^2\|\sigma\|_{L^{\infty}(\widehat{G}\times G/H)}.$$\end{proof}
We can now extend the generalized two-wavelet multiplier
$$P_{u,v,g}(\sigma):L^2(G)\rightarrow L^2(G)$$
to all symbols $\sigma$ in $L^p(\widehat{G}\times G/H)$, $1\leq p\leq\infty$, as an operator in $S_{\infty}$. This is done in the following theorem.
\begin{theorem}\label{Lpbounded}
Let $\sigma\in L^p(\widehat{G}\times G/H),1\leq p\leq\infty$. Then there exists a unique bounded linear operator $P_{u,v,g}(\sigma):L^2(G)\rightarrow L^2(G)$ s.t. $$\|P_{u,v,g}(\sigma)\|_{S_{\infty}}\leq \|g\|_{L^{\infty}(G/H)}^{2/p}\|g\|_{L^2(G/H)}^{\frac{2(p-1)}{p}}(\|u\|_{L^{\infty}(G)}\|v\|_{L^{\infty}(G)})^{\frac{p-1}{p}}\|\sigma\|_{L^p(\widehat{G}\times G/H)}.$$
\end{theorem}
\begin{proof}
Let $f\in L^2(G)$ and let $T:L^1(\widehat{G}\times G/H)\cap L^{\infty}(\widehat{G}\times G/H)\rightarrow L^2(G)$ be given by $$T(\sigma):= P_{u,v,g}(\sigma)f.$$
We have from Proposition \ref{S infty1} $$\|P_{u,v,g}(\sigma)f\|_{L^2(G)}\leq \|f\|_{L^2(G)}\|g\|_{L^{\infty}(G/H)}^2\|\sigma\|_{L^1(\widehat{G}\times G/H)}$$ and from Proposition \ref{S infty} $$\|P_{u,v,g}(\sigma)f\|_{L^2(G)}\leq \|f\|_{L^2(G)}\|g\|_{L^{2}(G/H)}^2\|\sigma\|_{L^{\infty}(\widehat{G}\times G/H)}\|u\|_{L^{\infty}(G)}\|v\|_{L^{\infty}(G)}.$$
By the Riesz--Thorin interpolation theorem, $T$ may be uniquely extended to $L^p(\widehat{G}\times G/H)$, $1\leq p\leq\infty$, with $$\|T(\sigma)\|_{L^2(G)}=\|P_{u,v,g}(\sigma)f\|_{L^2(G)}.$$
So $$\|P_{u,v,g}(\sigma)f\|_{L^2(G)}\leq \|g\|_{L^{\infty}(G/H)}^{2/p}\|g\|_{L^2(G/H)}^{\frac{2(p-1)}{p}}(\|u\|_{L^{\infty}(G)}\|v\|_{L^{\infty}(G)})^{\frac{p-1}{p}}\|f\|_{L^2(G)}\|\sigma\|_{L^p(\widehat{G}\times G/H)}.$$
\end{proof}
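For completeness, we record how the constant arises; the computation below is the standard Riesz--Thorin bookkeeping, made explicit here by us. Writing $M_1=\|g\|_{L^{\infty}(G/H)}^2$ for the $L^1$-endpoint bound and $M_{\infty}=\|g\|_{L^2(G/H)}^2\|u\|_{L^{\infty}(G)}\|v\|_{L^{\infty}(G)}$ for the $L^{\infty}$-endpoint bound, the relation $\frac{1}{p}=\frac{1-\theta}{1}+\frac{\theta}{\infty}$ gives $\theta=\frac{p-1}{p}$, and hence
\begin{align*}
\|P_{u,v,g}(\sigma)\|_{S_{\infty}}\leq M_1^{1-\theta}M_{\infty}^{\theta}\|\sigma\|_{L^p(\widehat{G}\times G/H)}=\|g\|_{L^{\infty}(G/H)}^{2/p}\|g\|_{L^2(G/H)}^{\frac{2(p-1)}{p}}\big(\|u\|_{L^{\infty}(G)}\|v\|_{L^{\infty}(G)}\big)^{\frac{p-1}{p}}\|\sigma\|_{L^p(\widehat{G}\times G/H)}.
\end{align*}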
\section{Schatten class boundedness of the generalized two-wavelet multiplier}\label{SchttenN}
Our aim is to show that the linear operators $$P_{u,v,g}(\sigma):L^2(G)\rightarrow L^2(G)$$ are in the Schatten class $S_p$ for all symbols $\sigma$ in $L^p(\widehat{G}\times G/H)$, $1\leq p\leq \infty$.
\begin{prop}
Let $\sigma$ be in $L^1(\widehat{G}\times G/H)$. Then the generalized two-wavelet multiplier $P_{u,v,g}(\sigma)$ is in $S_2$ and we have $$\|P_{u,v,g}(\sigma)\|_{S_2}\leq \|g\|_{L^{\infty}(G/H)}^2\|\sigma\|_{L^1(\widehat{G}\times G/H)}.$$
\end{prop}
\begin{proof}
Let $\{ \phi_{j}:j=1,2,\cdots\}$ be an orthonormal basis for $L^2(G)$.
$$\sum_{j=1}^{\infty}\|P_{u,v,g}(\sigma)(\phi_{j})\|_{L^2(G)}^{2}=\sum_{j=1}^{\infty}\langle P_{u,v,g}(\sigma)\phi_{j}, P_{u,v,g}(\sigma)\phi_{j}\rangle .$$
Now using Definition \ref{def}, Fubini's theorem, Parseval's identity and Proposition \ref{adjoint}, we have
\begin{equation*}
\begin{split}
&\langle P_{u,v,g}(\sigma)\phi_{j}, P_{u,v,g}(\sigma)\phi_{j}\rangle\\
& =\int_{\widehat{G}\times G/H}\sigma(\omega,zH)D_H^g(u\phi _j)(\omega,zH)\overline{D_H^g(vP_{u,v,g}(\sigma)(\phi _{j}))(\omega,zH)} dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
& = \int_{\widehat{G}\times G/H}\sigma(\omega,zH)\langle u\phi _j,\overline{g_{\omega,zH}}\rangle_{L^2(G)}\overline{\langle vP_{u,v,g}(\sigma)(\phi _j), \overline{g_{\omega,zH}}\rangle_{L^2(G)}}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
& = \int_{\widehat{G}\times G/H}\sigma(\omega,zH)\langle\phi_j, \overline{ug_{\omega,zH}}\rangle\overline{\langle P_{u,v,g}(\sigma)(\phi_j), \overline{vg_{\omega,zH}}\rangle}dm_{\widehat{G}}(\omega)dm_{G/H}(zH).
\end{split}
\end{equation*}
So
\begin{align*}
&\sum_{j=1}^{\infty}\|P_{u,v,g}(\sigma)(\phi_{j})\|_{L^2(G)}^{2}\\
&= \sum_{j=1}^{\infty}\int_{\widehat{G}\times G/H}\sigma(\omega,zH)\langle\phi_j, \overline{ug_{\omega,zH}}\rangle\overline{\langle P_{u,v,g}(\sigma)(\phi_j), \overline{vg_{\omega,zH}}\rangle}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
& = \int_{\widehat{G}\times G/H}\sigma(\omega,zH)\sum_{j=1}^{\infty}\langle\phi_j, \overline{ug_{\omega,zH}}\rangle\langle P_{u,v,g}(\sigma)^{\ast}(\overline{vg_{\omega,zH}}),\phi_j \rangle dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
& = \int_{\widehat{G}\times G/H}\sigma(\omega,zH)\langle P_{u,v,g}(\sigma)^{\ast}(\overline{vg_{\omega,zH}}),\sum_{j=1}^{\infty}\phi_{j}\langle \overline{ug_{\omega,zH}},\phi_{j}\rangle\rangle dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
& = \int_{\widehat{G}\times G/H}\sigma(\omega,zH)\langle P_{u,v,g}(\sigma)^{\ast}(\overline{vg_{\omega,zH}}),\overline{ug_{\omega,zH}}\rangle_{L^2(G)} dm_{\widehat{G}}(\omega)dm_{G/H}(zH).
\end{align*}
Thus
\begin{equation*}
\begin{split}
&\sum_{j=1}^{\infty}\|P_{u,v,g}(\sigma)(\phi_{j})\|_{L^2(G)}^{2}\\
& \leq \|g\|_{L^{\infty}(G/H)}^2\int_{\widehat{G}\times G/H}|\sigma(\omega,zH)|\|P_{u,v,g}(\sigma)^{\ast}\|_{S_{\infty}}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
& \leq \|g\|_{L^{\infty}(G/H)}^4\|\sigma\|_{L^1(\widehat{G}\times G/H)}^2,
\end{split}
\end{equation*}
where we used $\|ug_{\omega,zH}\|_{L^2(G)},\|vg_{\omega,zH}\|_{L^2(G)}\leq\|g\|_{L^{\infty}(G/H)}$, which follows from $\|u\|_{L^2(G)}=\|v\|_{L^2(G)}=1$. Hence $\|P_{u,v,g}(\sigma)\|_{S_2}\leq \|g\|_{L^{\infty}(G/H)}^2\|\sigma\|_{L^1(\widehat{G}\times G/H)}$.
\end{proof}
\begin{prop}
Let $\sigma$ be a symbol in $L^p(\widehat{G}\times G/H), 1\leq p <\infty.$ Then the generalized two-wavelet multiplier $P_{u,v,g}(\sigma)$ is compact.
\end{prop}
\begin{proof}
We know that $L^1(\widehat{G}\times G/H)\cap L^{\infty}(\widehat{G}\times G/H)$ is dense in $L^p(\widehat{G}\times G/H)$, $1\leq p <\infty$. Let $\sigma\in L^p(\widehat{G}\times G/H)$; then there exists a sequence $\{\sigma_n\}_{n\in\mathbb{N}}\subset L^1(\widehat{G}\times G/H)\cap L^{\infty}(\widehat{G}\times G/H)$ such that $\sigma _{n}\rightarrow\sigma$ in $L^{p}(\widehat{G}\times G/H)$ as $n\rightarrow\infty$.\\
Now by Theorem \ref{Lpbounded}
\begin{align*}
&\|P_{u,v,g}(\sigma_n)-P_{u,v,g}(\sigma)\|_{B(L^2(G))}
=\|P_{u,v,g}(\sigma_n-\sigma)\|_{B(L^2(G))}\\
& \leq \|g\|_{L^{\infty}(G/H)}^{2/p}\|g\|_{L^2(G/H)}^{\frac{2(p-1)}{p}}(\|u\|_{L^{\infty}(G)}\|v\|_{L^{\infty}(G)})^{\frac{p-1}{p}}\|\sigma_{n}-\sigma\|_{L^p(\widehat{G}\times G/H)}.
\end{align*}
So $P_{u,v,g}(\sigma_n)\rightarrow P_{u,v,g}(\sigma)$ in $B(L^2(G))$. Hence $P_{u,v,g}(\sigma)$ is compact.
\end{proof}
\begin{theorem}\label{S1}
Let $\sigma$ be in $L^1(\widehat{G}\times G/H)$. Then $P_{u,v,g}(\sigma):L^2(G)\rightarrow L^2(G)$ is in $S_1$.
\end{theorem}
\begin{proof}
Since $\sigma\in L^1(\widehat{G}\times G/H)$, the operator $P_{u,v,g}(\sigma)$ is in $S_2$. So there exists an orthonormal basis $\{\phi_j:j=1,2,3,\cdots\}$ for the orthogonal complement of the kernel of $P_{u,v,g}(\sigma)$ consisting of eigenvectors of $|P_{u,v,g}(\sigma)|$, and an orthonormal set $\{\psi_j:j=1,2,3,\cdots\}$ in $L^2(G)$ such that $$P_{u,v,g}(\sigma)(f)=\sum_{j=1}^{\infty}s_j\langle f,\phi_j\rangle_{L^2(G)}\psi_{j},$$
where $s_j, j=1,2,3,\cdots$ are positive singular values of $P_{u,v,g}(\sigma)$ corresponding to $\phi_j$.\\
Then $$\|P_{u,v,g}(\sigma)\|_{S_1}=\sum_{j=1}^{\infty}s_j=\sum_{j=1}^{\infty}\langle P_{u,v,g}(\sigma)(\phi_j),\psi_j\rangle_{L^2(G)}.$$
Now by Bessel's inequality, the Cauchy--Schwarz inequality, Fubini's theorem and the fact that $\|u\|_{L^2(G)}=\|v\|_{L^2(G)}=1,$ we get
\begin{align*}
& \sum_{j=1}^{\infty}\langle P_{u,v,g}(\sigma)(\phi_j),\psi_j\rangle_{L^2(G)}\\
& =\sum_{j=1}^{\infty}\int_{\widehat{G}\times G/H}\sigma(\omega,zH)D_{H}^g(u\phi_j)(\omega,zH)\overline{D_H^g(v\psi_j)(\omega,zH)} dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
& = \int_{\widehat{G}\times G/H}\sigma(\omega,zH)\sum_{j=1}^{\infty}\langle\phi_j,\overline{ug_{\omega,zH}}\rangle_{L^2(G)}\langle\overline{vg_{\omega,zH}},\psi_{j}\rangle_{L^2(G)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
& \leq \int_{\widehat{G}\times G/H}|\sigma(\omega,zH)|\Big(\sum_{j=1}^{\infty}|\langle\phi_j,\overline{ug_{\omega,zH}}\rangle_{L^2(G)}|^2\Big)^{1/2}\\
& \Big(\sum_{j=1}^{\infty}|\langle\overline{vg_{\omega,zH}},\psi_{j}\rangle_{L^2(G)}|^2\Big)^{1/2}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
& \leq \int_{\widehat{G}\times G/H}|\sigma(\omega,zH)|\|ug_{\omega,zH}\|_{L^2(G)}\|vg_{\omega,zH}\|_{L^2(G)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
& \leq \|\sigma\|_{L^1(\widehat{G}\times G/H)}\|g\|_{L^{\infty}(G/H)}^2.
\end{align*}
\end{proof}
\begin{cor}
For $\sigma\in L^1(\widehat{G}\times G/H)$, we have the following trace formula:
$$\text{tr}(P_{u,v,g}(\sigma))=\int_{\widehat{G}\times G/H}\sigma(\omega,zH)\langle vg_{\omega,zH},ug_{\omega,zH}\rangle_{L^2(G)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH).$$
\end{cor}
\begin{proof}
Let $\{ \phi_k:k=1,2,\cdots\}$ be an orthonormal basis for $L^2(G)$. By Theorem \ref{S1}, the generalized two wavelet multiplier $P_{u,v,g}(\sigma)$ is in $S_1$.\\
Now by the definition of the trace, Fubini's theorem and Parseval's identity, we have
\begin{align*}
& \text{tr}(P_{u,v,g}(\sigma))\\
& = \sum_{k=1}^{\infty}\langle P_{u,v,g}(\sigma)(\phi_k),\phi_k\rangle_{L^2(G)}\\
& =\sum_{k=1}^{\infty}\int_{\widehat{G}\times G/H}\sigma(\omega,zH)\langle \phi_{k},\overline{ug_{\omega,zH}}\rangle_{L^2(G)}\overline{\langle \phi_{k},\overline{vg_{\omega,zH}}\rangle_{L^2(G)}}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
& =\int_{\widehat{G}\times G/H}\sigma(\omega,zH)\sum_{k=1}^{\infty}\langle \phi_{k},\overline{ug_{\omega,zH}}\rangle_{L^2(G)}\overline{\langle \phi_{k},\overline{vg_{\omega,zH}}\rangle_{L^2(G)}}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
& = \int_{\widehat{G}\times G/H}\sigma(\omega,zH)\langle\overline{ug_{\omega,zH}},\overline{vg_{\omega,zH}}\rangle_{L^2(G)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH).
\end{align*}\end{proof}
We now give a result showing that $$P_{u,v,g}(\sigma):L^2(G)\rightarrow L^2(G)$$ is in $S_{p}$ for $\sigma\in L^p(\widehat{G}\times G/H)$, $1\leq p\leq \infty$.
\begin{cor}
Let $\sigma\in L^p(\widehat{G}\times G/H), 1\leq p\leq\infty$. Then the generalized two-wavelet multiplier $P_{u,v,g}(\sigma):L^2(G)\rightarrow L^2(G)$ is in $S_p$ and we have $$\|P_{u,v,g}(\sigma)\|_{S_p}\leq\|g\|_{L^{\infty}(G/H)}^{2/p}\|g\|_{L^{2}(G/H)}^{\frac{2(p-1)}{p}}(\|u\|_{L^{\infty}(G)}\|v\|_{L^{\infty}(G)})^{\frac{p-1}{p}}\|\sigma\|_{L^p(\widehat{G}\times G/H)}.$$
\end{cor}
\begin{proof}
The proof follows from Theorem \ref{S1}, Proposition \ref{S infty} and the interpolation theorem in \cite{wong}.\end{proof}
\section{$L^p$-boundedness of the generalized two-wavelet multiplier for $1\leq p\leq\infty$}\label{lpbdd}
Our aim is to show that the linear operators $$P_{u,v,g}(\sigma):L^p(G)\rightarrow L^p(G)$$ are bounded for all symbols $\sigma$ in $L^r(\widehat{G}\times G/H)$, $1< r\leq \infty$, for all $p\in [\frac{2r}{r+1},\frac{2r}{r-1}]$, and for $r=1$, $p\in [1,\infty].$
Let us assume $0\neq g\in L^{\infty}(G/H)$ for this section.
\begin{prop}\label{L1}
Let $\sigma$ be in $L^1(\widehat{G}\times G/H)$, $u\in L^{\infty}(G)$ and $v\in L^1(G)$. Then the generalized two-wavelet multiplier
$$P_{u,v,g}(\sigma):L^1(G)\rightarrow L^1(G)$$ is a bounded operator and
we have $$\|P_{u,v,g}(\sigma)\|_{B(L^1(G))}\leq \|u\|_{L^{\infty}(G)}\|v\|_{L^1(G)}\|g\|_{L^{\infty}(G/H)}^2\|\sigma\|_{L^1(\widehat{G}\times G/H)}.$$
\end{prop}
\begin{proof}
We know that $$P_{u,v,g}(\sigma)(f)(t)=\int_{\widehat{G}\times G/H}\sigma(\omega,zH)D_{H}^g(uf)(\omega,zH)g_{\omega,zH}(t)\overline{v(t)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH).$$
Now by Equations \eqref{Dg} and \eqref{Dgb},
\begin{align*}
& \|P_{u,v,g}(\sigma)(f)\|_{L^1(G)}\\
& \leq \int_{\widehat{G}\times G/H}\int_{G}|\sigma(\omega,zH)||D_{H}^g(uf)(\omega,zH)||g_{\omega,zH}(t)v(t)|dm_{\widehat{G}}(\omega)dm_{G/H}(zH)dt\\
& \leq \int_{\widehat{G}\times G/H}\int_{G}|\sigma(\omega,zH)||\langle uf,g_{\omega,zH}\rangle_{L^2(G)}|\|g\|_{L^{\infty}(G/H)}|v(t)|dm_{\widehat{G}}(\omega)dm_{G/H}(zH)dt\\
& \leq \|g\|_{L^{\infty}(G/H)}^2 \|f\|_{L^1(G)}\|u\|_{L^{\infty}(G)}\int_{\widehat{G}\times G/H}|\sigma(\omega,zH)|dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\int_{G}|v(t)|dt\\
& = \|g\|_{L^{\infty}(G/H)}^2 \|f\|_{L^1(G)}\|u\|_{L^{\infty}(G)}\|\sigma\|_{L^1(\widehat{G}\times G/H)}\|v\|_{L^1(G)}.
\end{align*}
\end{proof}
\begin{prop}\label{L infty}
Let $\sigma$ be in $L^1(\widehat{G}\times G/H)$, $u\in L^1(G)$ and $v\in L^{\infty}(G)$. Then the generalized two-wavelet multiplier
$$P_{u,v,g}(\sigma):L^{\infty}(G)\rightarrow L^{\infty}(G)$$ is a bounded operator and we have
$$\|P_{u,v,g}(\sigma)\|_{B(L^{\infty}(G))}\leq \|u\|_{L^{1}(G)}\|v\|_{L^{\infty}(G)}\|g\|_{L^{2}(G/H)}^2\|\sigma\|_{L^1(\widehat{G}\times G/H)}.$$
\end{prop}
\begin{proof}
Let $f$ be in $L^{\infty}(G)$. Then by Equations \eqref{Dgb} and \eqref{Dg} we have
\begin{align*}
& |P_{u,v,g}(\sigma)(f)(t)| \\
& \leq\int_{\widehat{G}\times G/H}|\sigma(\omega,zH)||D_{H}^g(uf)(\omega,zH)||g_{\omega,zH}(t)||\overline{v(t)}|dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
& \leq \|u\|_{L^{1}(G)}\|v\|_{L^{\infty}(G)}\|g\|_{L^{2}(G/H)}^2\|\sigma\|_{L^1(\widehat{G}\times G/H)}\|f\|_{L^{\infty}(G)}.
\end{align*}
Taking the supremum over $\|f\|_{L^{\infty}(G)}\leq 1$, we obtain
$$\|P_{u,v,g}(\sigma)\|_{B(L^{\infty}(G))}\leq \|u\|_{L^{1}(G)}\|v\|_{L^{\infty}(G)}\|g\|_{L^{2}(G/H)}^2\|\sigma\|_{L^1(\widehat{G}\times G/H)}.$$\end{proof}
\begin{theorem}\label{th}
Let $u, v\in L^1(G)\cap L^{\infty}(G).$ Then for all $\sigma\in L^1(\widehat{G}\times G/H),$ there exists a unique bounded linear operator
$$P_{u,v,g}(\sigma):L^p(G)\rightarrow L^p(G),~~~~~ 1\leq p\leq\infty,$$ s.t. $$\|P_{u,v,g}(\sigma)\|_{B(L^p(G))}\leq \|u\|_{L^1(G)}^{1/p^{\prime}}\|v\|_{L^1(G)}^{1/p}\|u\|_{L^{\infty}(G)}^{1/p}\|v\|_{L^{\infty}(G)}^{1/p^{\prime}}\|g\|_{L^{\infty}(G/H)}\|\sigma\|_{L^1(\widehat{G}\times G/H)}.$$
\end{theorem}
\begin{proof}
We know
$$P_{u,v,g}(\sigma):L^{1}(G)\rightarrow L^1(G)$$ is the adjoint of
$$P_{v,u,g}(\overline{\sigma}):L^{\infty}(G)\rightarrow L^{\infty}(G).$$
Now by Proposition \ref{L1}, Proposition \ref{L infty} and the interpolation theorem, for $1\leq p\leq \infty$, $$\|P_{u,v,g}(\sigma)\|_{B(L^p(G))}\leq \|u\|_{L^1(G)}^{1/p^{\prime}}\|v\|_{L^1(G)}^{1/p}\|u\|_{L^{\infty}(G)}^{1/p}\|v\|_{L^{\infty}(G)}^{1/p^{\prime}}\|g\|_{L^{\infty}(G/H)}\|\sigma\|_{L^1(\widehat{G}\times G/H)}.$$
\end{proof}
We give another version of the $L^p$-boundedness in Theorem \ref{th}.
\begin{theorem}
Let $\sigma$ be in $L^1(\widehat{G}\times G/H)$, $u$ and $v$ in $L^1(G)\cap L^{\infty}(G)$. Then there exists a unique bounded linear operator $P_{u,v,g}(\sigma):L^p(G)\rightarrow L^p(G)~~~1\leq p\leq \infty$ s.t.
$$\|P_{u,v,g}(\sigma)\|_{B(L^p(G))}\leq\max\Big(\|u\|_{L^1(G)}\|v\|_{L^{\infty}(G)},\|u\|_{L^{\infty}(G)}\|v\|_{L^1(G)}\Big)\|g\|_{L^{\infty}(G/H)}^2\|\sigma\|_{L^1(\widehat{G}\times G/H)}.$$
\end{theorem}
\begin{proof}
From Definition \ref{def},
\begin{align*}
& P_{u,v,g}(\sigma)(f)(t)\\ &=\int_{\widehat{G}\times G/H}\sigma(\omega,zH)D_{H}^g(uf)(\omega,zH)g_{\omega,zH}(t)\overline{v(t)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
& = \int_{\widehat{G}\times G/H}\sigma(\omega,zH)\int_G u(s)f(s)\overline{g_{\omega,zH}(s)}\,ds\, g_{\omega,zH}(t)\overline{v(t)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH).
\end{align*}
So the integral operator is
$$P_{u,v,g}(\sigma)(f)(t)=\int_{G}N(t;s)f(s)ds,$$ with the kernel $$N(t;s)=\int_{\widehat{G}\times G/H}\sigma(\omega,zH) u(s)\overline{g_{\omega,zH}(s)}g_{\omega,zH}(t)\overline{v(t)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH).$$
Now
\begin{equation}
\begin{split}
\int_{G}|N(t;s)|dt & \leq \int_{G}\int_{\widehat{G}\times G/H}|\sigma(\omega,zH)| |u(s)||\overline{g_{\omega,zH}(s)}||g_{\omega,zH}(t)||\overline{v(t)}|dm_{\widehat{G}}(\omega)dm_{G/H}(zH)dt\\
& \leq \|u\|_{L^{\infty}(G)}\|g\|_{L^{\infty}(G/H)}\|g\|_{L^{\infty}(G/H)}\|v\|_{L^{1}(G)}\|\sigma\|_{L^{1}(\widehat{G}\times G/H)}.
\end{split}
\end{equation}
Similarly
$$\int_{G}|N(t;s)|ds\leq \|u\|_{L^{1}(G)}\|g\|_{L^{\infty}(G/H)}\|g\|_{L^{\infty}(G/H)}\|v\|_{L^{\infty}(G)}\|\sigma\|_{L^{1}(\widehat{G}\times G/H)}.$$
Thus by Schur's test \cite{Fo}, we conclude that $P_{u,v,g}(\sigma):L^p(G)\rightarrow L^p(G)$ is bounded and $$\|P_{u,v,g}(\sigma)\|_{B(L^p(G))}\leq\max\Big(\|u\|_{L^1(G)}\|v\|_{L^{\infty}(G)},\|u\|_{L^{\infty}(G)}\|v\|_{L^1(G)}\Big)\|g\|_{L^{\infty}(G/H)}^2\|\sigma\|_{L^1(\widehat{G}\times G/H)}.$$
\end{proof}
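The form of Schur's test used above may be recorded as follows (a standard formulation, stated here for the reader's convenience): if $\sup_{t}\int_{G}|N(t;s)|ds\leq C_1$ and $\sup_{s}\int_{G}|N(t;s)|dt\leq C_2$, then for $1\leq p\leq\infty$
\begin{align*}
\Big\|\int_{G}N(\cdot\,;s)f(s)ds\Big\|_{L^p(G)}\leq C_1^{1/p^{\prime}}C_2^{1/p}\|f\|_{L^p(G)}\leq \max(C_1,C_2)\|f\|_{L^p(G)},
\end{align*}
since the weighted geometric mean satisfies $C_1^{1/p^{\prime}}C_2^{1/p}\leq \max(C_1,C_2)^{1/p^{\prime}+1/p}=\max(C_1,C_2)$. This is exactly how the maximum appears in the bound above.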
\begin{prop}\label{p>1}
Let $\sigma$ be in $L^1(\widehat{G}\times G/H)$, $v$ in $L^p(G)$ and $u$ in $L^{p^{\prime}}(G)$ for $1< p\leq\infty$. Then the generalized two-wavelet multiplier $P_{u,v,g}(\sigma): L^p(G)\rightarrow L^p(G)$ is a bounded linear operator and we have $$\|P_{u,v,g}(\sigma)\|_{B(L^p(G))}\leq \|u\|_{L^{p^{\prime}}(G)}\|v\|_{L^p(G)}\|g\|_{L^{\infty}(G/H)}^2\|\sigma\|_{L^1(\widehat{G}\times G/H)}.$$
\end{prop}
\begin{proof}
For any $f\in L^p(G)$, consider the linear functional $I_{f}:L^{p^{\prime}}(G)\rightarrow\mathbb{C}$ defined by $$I_{f}(h)=\langle h,P_{u,v,g}(\sigma)(f)\rangle_{L^2(G)}.$$
Now from Definition \ref{def}, Equation \eqref{Dg} and H\"older's inequality, we have
\begin{align*}
&|\langle P_{u,v,g}(\sigma)(f),h\rangle_{L^2(G)}|\\
& \leq \int_{\widehat{G}\times G/H}|\sigma(\omega,zH)||D_{H}^g(uf)(\omega,zH)||D_{H}^g(vh)(\omega,zH)|dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
& = \int_{\widehat{G}\times G/H}|\sigma(\omega,zH)||\langle uf, g_{\omega,zH}\rangle_{L^2(G)}||\langle vh,g_{\omega,zH}\rangle_{L^2(G)}|dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
& \leq \int_{\widehat{G}\times G/H}|\sigma(\omega,zH)||\langle u, \overline{f}g_{\omega,zH}\rangle_{L^2(G)}||\langle h,\overline{v}g_{\omega,zH}\rangle_{L^2(G)}|dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
& \leq \int_{\widehat{G}\times G/H}|\sigma(\omega,zH)|\|u\|_{L^{p^{\prime}}(G)}\|v\|_{L^p(G)}\|g\|_{L^{\infty}(G/H)}^2\|f\|_{L^p(G)}\|h\|_{L^{p^{\prime}}(G)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
& \leq \|\sigma\|_{L^1(\widehat{G}\times G/H)}\|u\|_{L^{p^{\prime}}(G)}\|v\|_{L^p(G)}\|g\|_{L^{\infty}(G/H)}^2\|f\|_{L^p(G)}\|h\|_{L^{p^{\prime}}(G)}.
\end{align*}
By the Riesz representation theorem, $$\|P_{u,v,g}(\sigma)(f)\|_{L^p(G)}=\|I_{f}\|_{B({L^{p^{\prime}}(G)})}.$$
So $$|I_{f}(h)|\leq \|\sigma\|_{L^1(\widehat{G}\times G/H)}\|u\|_{L^{p^{\prime}}(G)}\|v\|_{L^p(G)}\|g\|_{L^{\infty}(G/H)}^2\|f\|_{L^p(G)}\|h\|_{L^{p^{\prime}}(G)}.$$ Then
$$\|I_{f}\|_{B({L^{p^{\prime}}(G)})}\leq \|\sigma\|_{L^1(\widehat{G}\times G/H)}\|u\|_{L^{p^{\prime}}(G)}\|v\|_{L^p(G)}\|g\|_{L^{\infty}(G/H)}^2\|f\|_{L^p(G)},$$ and hence
$$\|P_{u,v,g}(\sigma)\|_{B({L^{p}(G)})}\leq \|\sigma\|_{L^1(\widehat{G}\times G/H)}\|u\|_{L^{p^{\prime}}(G)}\|v\|_{L^p(G)}\|g\|_{L^{\infty}(G/H)}^2.$$
\end{proof}
We give the result for $p=1$ of Proposition \ref{p>1}.
\begin{theorem}
Let $\sigma$ be in $L^1(\widehat{G}\times G/H)$, $v$ in $L^p(G)$ and $u$ in $L^{p^{\prime}}(G)$ for $1\leq p\leq\infty$. Then the generalized two-wavelet multiplier $P_{u,v,g}(\sigma): L^p(G)\rightarrow L^p(G)$ is a bounded linear operator and we have $$\|P_{u,v,g}(\sigma)\|_{B(L^p(G))}\leq \|u\|_{L^{p^{\prime}}(G)}\|v\|_{L^p(G)}\|g\|_{L^{\infty}(G/H)}^2\|\sigma\|_{L^1(\widehat{G}\times G/H)}.$$
\end{theorem}
\begin{proof}
This follows from Proposition \ref{p>1} and Proposition \ref{L1}.
\end{proof}
Let us consider $0\neq g\in L^1(G/H)\cap L^{\infty}(G/H)\subset L^q(G/H)$, $1<q<\infty$, for the rest of this section.
\begin{theorem}
Let $\sigma$ be in $L^r(\widehat{G}\times G/H)$, $r\in [1,2]$, and $u,v\in L^1(G)\cap L^{\infty}(G).$ Then there exists a unique bounded linear operator $ P_{u,v,g}(\sigma):L^p(G)\rightarrow L^p(G)$ for all $p\in [r,r^{\prime}]$ and we have $$\|P_{u,v,g}(\sigma)\|_{B(L^p(G))}\leq c_1^tc^{1-t}_2\|\sigma\|_{L^r(\widehat{G}\times G/H)}, \qquad \frac{t}{r}+\frac{1-t}{r^{\prime}}=\frac{1}{p}.$$
\end{theorem}
\begin{proof}
Consider the bilinear map $$I:(L^1(\widehat{G}\times G/H)\cap L^2(\widehat{G}\times G/H))\times(L^1(G)\cap L^2(G))\rightarrow L^1(G)\cap L^2(G),$$
$$(\sigma, f)\mapsto P_{u,v,g}(\sigma)(f).$$
By Proposition \ref{L1} for $\sigma\in L^1(\widehat{G}\times G/H), f\in L^1(G)$,
$$\|I(\sigma,f)\|_{L^1(G)}=\|P_{u,v,g}(\sigma)(f)\|_{L^1(G)}\leq \|f\|_{L^1(G)}\|u\|_{L^{\infty}(G)}\|v\|_{L^1(G)}\|g\|_{L^{\infty}(G/H)}^2\|\sigma\|_{L^1(\widehat{G}\times G/H)}.$$
By Proposition \ref{S infty}, for $\sigma\in L^2(\widehat{G}\times G/H)$, $f\in L^2(G)$,
\begin{equation*}
\begin{split}
\|I(\sigma, f)\|_{L^2(G)}& =\|P_{u,v,g}(\sigma)(f)\|_{L^2(G)}\leq \|f\|_{L^2(G)}\Big(\|u\|_{L^{\infty}(G)}\|v\|_{L^{\infty}(G)} \Big)^{1/2}\\
& \|\sigma\|_{L^2(\widehat{G}\times G/H)}\|g\|_{L^{\infty}(G/H)}\|g\|_{L^2(G/H)}.
\end{split}
\end{equation*}
By multilinear interpolation theory, we get a unique bounded operator $$I:L^r(\widehat{G}\times G/H)\times L^r(G)\rightarrow L^r(G)$$ such that
$$\|I(\sigma, f)\|_{L^r(G)}\leq c_1\|f\|_{L^r(G)}\|\sigma\|_{L^r(\widehat{G}\times G/H)},$$
where $$c_1=\Big( \|g\|_{L^{\infty}(G/H)}^2\|u\|_{L^{\infty}(G)}\|v\|_{L^1(G)}\Big)^{\theta}\Big( \|g\|_{L^{2}(G/H)}^2\|g\|_{L^{\infty}(G/H)}\|u\|_{L^{\infty}(G)}\|v\|_{L^{\infty}(G)}\Big)^{\frac{1-\theta}{2}},$$ and $\frac{\theta}{1}+\frac{1-\theta}{2}=\frac{1}{r}.$\\
By definition of $I$, $$\|P_{u,v,g}(\sigma)\|_{B(L^r(G))}\leq c_1 \|\sigma\|_{L^r(\widehat{G}\times G/H)}.$$
Also, $P_{v,u,g}(\overline{\sigma})$ is the adjoint of $P_{u,v,g}(\sigma)$, so $P_{u,v,g}(\sigma)$ is a bounded linear operator on $L^{r^{\prime}}(G)$ with operator norm
$$\|P_{u,v,g}(\sigma)\|_{B(L^{r^{\prime}}(G))}=\|P_{v,u,g}(\overline{\sigma})\|_{B(L^{r}(G))}\leq c_2\|\sigma\|_{L^r(\widehat{G}\times G/H)},$$ where $$c_2=\Big( \|g\|_{L^{\infty}(G/H)}^2\|u\|_{L^{1}(G)}\|v\|_{L^{\infty}(G)}\Big)^{\theta}\Big( \|g\|_{L^{2}(G/H)}^2\|g\|_{L^{\infty}(G/H)}\|u\|_{L^{\infty}(G)}\|v\|_{L^{\infty}(G)}\Big)^{\frac{1-\theta}{2}}.$$
By the interpolation theorem, for $p\in [r,r^{\prime}]$,
$$\|P_{u,v,g}(\sigma)\|_{B(L^p(G))}\leq c_1^t c_2^{1-t}\|\sigma\|_{L^r(\widehat{G}\times G/H)}.$$
\end{proof}
\begin{theorem}
Let $\sigma$ be in $L^r(\widehat{G}\times G/H)$, $r\in[1,2)$, and $u,v\in L^r(G)\cap L^{\infty}(G).$ Then there exists a bounded linear operator
$P_{u,v,g}(\sigma): L^p(G)\rightarrow L^p(G)$ for all $p\in[r,r^{\prime}]$ and we have
\begin{align*}
&\|P_{u,v,g}(\sigma)\|_{B(L^p(G))}\\
& \leq \|g\|_{L^{\infty}(G/H)}\|g\|_{L^{r^{\prime}}(G/H)}\Big( \|u\|_{L^r(G)}\|v\|_{L^{\infty}(G)}\Big)^t\Big( \|v\|_{L^r(G)}
\|u\|_{L^{\infty}(G)}\Big)^{1-t}\|\sigma\|_{L^r(\widehat{G}\times G/H)},
\end{align*}
where $t=\frac{r-p}{p(r-2)}$.
\end{theorem}
\begin{proof}
For any $f\in L^{r^{\prime}}(G)$ and $h\in L^r(G)$
\begin{align*}
& |\langle P_{u,v,g}(\sigma)(f),h\rangle_{L^2(G)}|\\
& \leq \int_{\widehat{G}\times G/H}|\sigma(\omega,zH)||D_{H}^g(uf)(\omega,zH)||D_{H}^g(vh)(\omega,zH)|dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
& \leq \|D_{H}^g(uf)\|_{L^{\infty}(\widehat{G}\times G/H)}\|D_{H}^g(vh)\|_{L^{r^{\prime}}(\widehat{G}\times G/H)}\|\sigma\|_{L^r(\widehat{G}\times G/H)}.
\end{align*}
By Equation \eqref{Dg}
$$\|D_{H}^g(uf)\|_{L^{\infty}(\widehat{G}\times G/H)}\leq \|g\|_{L^{\infty}(G/H)}\|u\|_{L^r(G)}\|f\|_{L^{r^{\prime}}(G)}.$$
By Equation \eqref{Dgp}
$$\|D_{H}^g(vh)\|_{L^{r^{\prime}}(\widehat{G}\times G/H)}\leq \|g\|_{L^{r^{\prime}}(G/H)}\|v\|_{L^{\infty}(G)}\|h\|_{L^{r}(G)}.$$
So
\begin{align*}
&|\langle P_{u,v,g}(\sigma)(f),h\rangle_{L^2(G)}|\\
&\leq \|\sigma\|_{L^r(\widehat{G}\times G/H)}\|g\|_{L^{\infty}(G/H)}\|u\|_{L^r(G)}\|f\|_{L^{r^{\prime}}(G)}\|g\|_{L^{r^{\prime}}(G/H)}\|v\|_{L^{\infty}(G)}\|h\|_{L^{r}(G)}.
\end{align*}
Thus $$\|P_{u,v,g}(\sigma)\|_{B(L^{r^{\prime}}(G))}\leq \|\sigma\|_{L^r(\widehat{G}\times G/H)}\|g\|_{L^{\infty}(G/H)}\|u\|_{L^r(G)}\|g\|_{L^{r^{\prime}}(G/H)}\|v\|_{L^{\infty}(G)}. $$
By duality, the same argument with the roles of $u$ and $v$ interchanged bounds $\|P_{u,v,g}(\sigma)\|_{B(L^{r}(G))}$, and interpolation yields the stated estimate for $p\in[r,r^{\prime}]$.
\end{proof}
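For completeness, the value of $t$ in the statement can be checked from the interpolation relation $\frac{1}{p}=\frac{t}{r^{\prime}}+\frac{1-t}{r}$, which interpolates between the exponent $r^{\prime}$ (where $t=1$, the case treated in the proof) and $r$ (where $t=0$):
\begin{align*}
\frac{1}{p}=t\Big(\frac{1}{r^{\prime}}-\frac{1}{r}\Big)+\frac{1}{r}=t\,\frac{r-2}{r}+\frac{1}{r}
\quad\Longrightarrow\quad
t=\frac{r}{r-2}\Big(\frac{1}{p}-\frac{1}{r}\Big)=\frac{r-p}{p(r-2)}.
\end{align*}
Note that for $r\in[1,2)$ and $p\in[r,r^{\prime}]$ both $r-2$ and $r-p$ are non-positive, so $t\in[0,1]$.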
\section{Compactness of generalized two wavelet multipliers }\label{Compt}
Our aim is to show that the linear operators $$P_{u,v,g}(\sigma):L^p(G)\rightarrow L^p(G)$$ are compact for all symbols $\sigma$ in $L^1(\widehat{G}\times G/H)$.
\begin{prop}\label{pr}
Under the same hypothesis of Theorem \ref{th}, the generalized two wavelet multiplier $P_{u,v,g}(\sigma):L^1(G)\rightarrow L^1(G)$ is compact.
\end{prop}
\begin{proof}
Let $f_{n}\in L^1(G)$ be such that $f_n \rightarrow 0$ weakly in $L^1(G)$. We need to show that $P_{u,v,g}(\sigma)(f_n)\rightarrow 0$ in $L^1(G)$.
\begin{small}
$$
\|P_{u,v,g}(\sigma)(f_n)\|_{L^1(G)} \leq \int_{\widehat{G}\times G/H}\int_{G}|\sigma(\omega,zH)||\langle f_n,g_{\omega,zH}u\rangle_{L^2(G)}||g_{\omega,zH}(t)v(t)|\,dt\,dm_{\widehat{G}}(\omega)dm_{G/H}(zH).$$
\end{small}
Now $$|\sigma(\omega,zH)||\langle f_n,g_{\omega,zH}u\rangle_{L^2(G)}||g_{\omega,zH}(t)v(t)|\leq C\|g\|_{L^{\infty}(G/H)}^2\|u\|_{L^{\infty}(G)} |\sigma(\omega,zH)||v(t)|,$$ where $C=\sup_{n}\|f_n\|_{L^1(G)}<\infty$, since weakly convergent sequences are bounded; moreover, $\langle f_n,g_{\omega,zH}u\rangle_{L^2(G)}\rightarrow 0$ pointwise, since $g_{\omega,zH}u\in L^{\infty}(G)$.
By the dominated convergence theorem we conclude
$$\lim_{n\rightarrow\infty}\|P_{u,v,g}(\sigma)f_n\|_{L^1(G)}=0.$$
\end{proof}
\begin{theorem}
Under the hypothesis of Theorem \ref{th}, the bounded operator $$P_{u,v,g}(\sigma): L^p(G)\rightarrow L^p(G)$$ is compact for $1\leq p\leq\infty$.
\end{theorem}
\begin{proof}
We know that $P_{u,v,g}(\sigma): L^{\infty}(G)\rightarrow L^{\infty}(G)$ is the adjoint operator of $P_{u,v,g}(\sigma): L^1(G)\rightarrow L^1(G)$, which is compact by Proposition \ref{pr}. Hence, by the interpolation theorem for compact operators, compactness on $L^1(G)$ and on $L^{\infty}(G)$ extends to compactness of $P_{u,v,g}(\sigma): L^p(G)\rightarrow L^p(G)$ for $1< p<\infty$.
\end{proof}
\section{Landau-Pollak-Slepian operator}\label{LPOp}
Suppose, for $i=1,2$, that $C_i$, $D$ and $\Omega$ are compact neighbourhoods of the identity elements of $G$, $G/H$ and $\widehat{G}$, respectively. Define the operators $Q_R$ and $P_{R_i}$ as
\begin{align*}
Q_R:L^2(G/H\times \widehat{G})\rightarrow L^2(G/H\times \widehat{G})\ \ \textrm{and}\ \ P_{R_i}:L^2(G/H\times \widehat{G})\rightarrow L^2(G/H\times \widehat{G})
\end{align*}
by
\begin{align}
Q_Rg(xH,\omega)=\chi_{D\times \Omega}(xH,\omega)g(xH,\omega)
\end{align}
and
\begin{align}
P_{R_i}g(xH,\omega)=\mathcal{D}_H^g\left(\chi_{C_i}(\mathcal{D}_H^g)^{-1}g\right)(xH,\omega).
\end{align}
Now we present some properties of the above operator in the next proposition.
\begin{prop}
The operators $Q_R$ and $P_{R_i}$ are self-adjoint projections.
\end{prop}
\begin{proof}
For $h,\phi\in L^2(G/H\times \widehat{G})$, we have
\begin{align*}
\langle P_{R_i}h,\phi\rangle = \langle \mathcal{D}_H^g\left(\chi_{C_i}(x)(\mathcal{D}_H^g)^{-1}h\right),\phi\rangle
\end{align*}
Now we can write the following in view of Theorem~\ref{orthothm}
\begin{align*}
\langle P_{R_i}h,\phi\rangle =& \|g\|^2_{L^2(G/H)}\langle \chi_{C_i}(x)(\mathcal{D}_H^g)^{-1}h,(\mathcal{D}_H^g)^{-1}\phi\rangle \\
=& \|g\|^2_{L^2(G/H)}\langle (\mathcal{D}_H^g)^{-1}h,\chi_{C_i}(x)(\mathcal{D}_H^g)^{-1}\phi\rangle\\
=& \langle h,\mathcal{D}_H^g\left(\chi_{C_i}(x)(\mathcal{D}_H^g)^{-1}\phi\right) \rangle\\
=& \langle h,P_{R_i}\phi\rangle.
\end{align*}
Hence the operator $P_{R_i}$ is self-adjoint. Moreover, $P_{R_i}$ is a projection, since
\begin{align*}
\langle P_{R_i}^2h,\phi\rangle &= \langle P_{R_i}h,P_{R_i}\phi\rangle\\
&= \langle\mathcal{D}_H^g\left(\chi_{C_i}(x)(\mathcal{D}_H^g)^{-1}h\right),\mathcal{D}_H^g\left(\chi_{C_i}(x)(\mathcal{D}_H^g)^{-1}\phi\right)\rangle\\
&= \|g\|^2_{L^2(G/H)}\langle \chi_{C_i}(x)(\mathcal{D}_H^g)^{-1}h,\chi_{C_i}(x)(\mathcal{D}_H^g)^{-1}\phi\rangle\\
&= \|g\|^2_{L^2(G/H)}\langle \chi_{C_i}(x)(\mathcal{D}_H^g)^{-1}h,(\mathcal{D}_H^g)^{-1}\phi\rangle\\
&= \langle \mathcal{D}_H^g\left(\chi_{C_i}(x)(\mathcal{D}_H^g)^{-1}h\right),\phi\rangle\\
&= \langle P_{R_i}h,\phi\rangle.
\end{align*}
Similarly, $Q_R$ is a self-adjoint projection: it is multiplication by the real-valued function $\chi_{D\times \Omega}$, and $\chi_{D\times \Omega}^2=\chi_{D\times \Omega}$.
\end{proof}
The linear operator $P_{R_2} Q_R P_{R_1}: L^2(G/H\times \widehat{G})\rightarrow L^2(G/H\times \widehat{G})$ is called the generalized Landau-Pollak-Slepian operator. Now we will obtain the relation between the generalized Landau-Pollak-Slepian operator and the generalized two-wavelet multiplier.
\begin{theorem}
Let $u$ and $v$ be the functions on $G$ defined by
\begin{align*}
u=\frac{1}{\sqrt{|C_1|}}\chi_{C_1}(x)\ \ \ \textrm{and}\ \ \ v=\frac{1}{\sqrt{|C_2|}}\chi_{C_2}(x)
\end{align*}
then the generalized Landau-Pollak-Slepian operator
$$P_{R_2} Q_R P_{R_1}: L^2(G/H\times \widehat{G})\rightarrow L^2(G/H\times \widehat{G})$$
is unitarily equivalent to a scalar multiple of the generalized two-wavelet multiplier
$$P_{u,v,g}\left( \chi_{D\times \Omega}\right): L^2(G)\rightarrow L^2(G).$$
In fact
$$P_{R_2} Q_R P_{R_1}=\frac{\alpha(R_1,R_2)}{\|g\|_{L^2(G/H)}^2}\ \mathcal{D}_H^g\left(P_{u,v,g}\left( \chi_{D\times \Omega}\right)\right){\mathcal{D}_H^g}^{-1}$$
where $\alpha(R_1,R_2)=\sqrt{|C_1||C_2|}$ and $|C_i|$ denotes the Haar measure of $C_i$.
\end{theorem}
\begin{proof}
Clearly, $\|u\|_{L^2(G)}=1=\|v\|_{L^2(G)}$. We have
\begin{align*}
\langle P_{u,v,g}\left( \chi_{D\times \Omega}\right)f_1,f_2\rangle _{L^2(G)}=\int_{D\times \Omega}D_{H}^{g}(uf_1)(\omega,zH)\overline{D_{H}^g(vf_2)(\omega,zH)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH),
\end{align*}
where
\begin{align*}
D_{H}^{g}(uf_1)(\omega,zH) &=\int_G u(x)f_1(x)\overline{\omega(x)g(z^{-1}xH)}dm_G(x)\\
&=\frac{1}{\sqrt{|C_1|}}\int_{C_1} f_1(x)\overline{\omega(x)g(z^{-1}xH)}dm_G(x)\\
&=\frac{1}{\sqrt{|C_1|}}P_{R_1}\left(D_{H}^{g}(f_1)\right)(\omega,zH).
\end{align*}
Hence we can write
\begin{align*}
& \langle P_{u,v,g}\left( \chi_{D\times \Omega}\right)f_1,f_2\rangle _{L^2(G)}\\
&=\frac{1}{\sqrt{|C_1||C_2|}}\int_{D\times \Omega}P_{R_1}\left(D_{H}^{g}(f_1)\right)\overline{P_{R_2}\left(D_{H}^{g}(f_2)\right)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
&=\frac{1}{\sqrt{|C_1||C_2|}}\int_{G/H\times \widehat{G}}Q_RP_{R_1}\left(D_{H}^{g}(f_1)\right)\overline{P_{R_2}\left(D_{H}^{g}(f_2)\right)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\
&=\frac{1}{\sqrt{|C_1||C_2|}}\langle Q_RP_{R_1}\left(D_{H}^{g}(f_1)\right),P_{R_2}\left(D_{H}^{g}(f_2)\right)\rangle_{L^2(G/H\times \widehat{G})}\\
&=\frac{1}{\sqrt{|C_1||C_2|}}\langle P_{R_2}Q_RP_{R_1}\left(D_{H}^{g}(f_1)\right),D_{H}^{g}(f_2)\rangle_{L^2(G/H\times \widehat{G})}\\
&= \frac{\|g\|_{L^2(G/H)}^2}{\sqrt{|C_1||C_2|}}\langle \left(D_{H}^{g}\right)^{-1}P_{R_2}Q_RP_{R_1}\left(D_{H}^{g}(f_1)\right),f_2\rangle_{L^2(G)}.
\end{align*}
Since $f_1,f_2\in L^2(G)$ are arbitrary, this yields the asserted identity.
\end{proof}
\section{Introduction}
Dielectric barrier discharges (DBDs) are plasma discharges incorporating at least one layer of dielectric material separating the two electrodes. The dielectric barrier limits the charge transfer and thus the current flow, typically producing a non-thermal plasma at atmospheric conditions. This non-thermal nature allows for the efficient generation of reactive species, thereby providing multiple possibilities in biomedical, surface, and industrial applications \cite{Brandenburg2017,HHKim2004}. DBDs are classifiable into two main categorical descriptors: volumetric and surface DBDs. Volume dielectric barrier discharges (VDBDs) have both a gas gap and a dielectric barrier present between the two electrodes, producing either homogeneous or filamentary plasmas depending on the conditions \cite{Kogelschatz2010}. Surface dielectric barrier discharges (SDBDs), on the other hand, have only the dielectric layer directly separating the two electrodes; a plasma is thereby only able to ignite along the surface of the dielectric. Due to the possibility of having a thin structure, SDBDs may have particularly low flow resistance and are therefore commonly researched for gas treatment or flow control purposes \cite{Brandenburg2017,Moreau2007,Mueller2007,Corke2010,HHKim2004}. SDBDs have the capability of being built in many unique geometrical configurations ranging in symmetry, providing either a single axis or multiple axes for plasma propagation. They may also allow for either a single phase (anodic or cathodic) plasma, or a dual phase ignition process.
Throughout the 1990s SDBDs have been well investigated as potential actuators for gas flow control \cite{Brandenburg2017,HHKim2004,Moreau2007,Corke2010}. For such purposes an asymmetric geometry, where one electrode is offset from the opposite electrode and possibly completely submerged by the dielectric, is typically used \cite{Corke2010,Akishev2012,Audier2014,Biganzoli2012,Debien2012,GAO2017,Peng2019,Xiahua2016,Soloviev2017,Starikovskii2009,Unfer2010,Che2012,Hu2018,Shao2013,Soloviev2018,Opaits2008,Sato2019}. Much effort has been put into controlling the plasma behaviors, such as densities and surface charge deposition, and their corresponding aerodynamic effects from said SDBD configurations \cite{Opaits2008,Corke2010,Opaits2012,Audier2014,Sato2019}. It has also been shown that AC and pulsed waveforms can significantly modulate the plasma profiles (at positive and negative voltage phases) \cite{Akishev2012,Audier2014,Biganzoli2012,Che2012,Debien2012,Hu2018,Soloviev2017,Soloviev2018,Starikovskii2009,Unfer2010}.
In recent years, SDBDs have undergone extensive investigation for gas purification in industrial and environmental protection applications \cite{Brandenburg2017,Mueller2007,HHKim2004}. Absolutely calibrated two-wavelength emission spectroscopy has been used in order to characterize a symmetric SDBD under tailored voltage waveforms \cite{Offerhaus2017,Offerhaus2018,Offerhaus2019}. The waveform under experimental investigation is a damped sine wave with a period of multiple $\mu$s and adjustable peak-to-peak voltage, pulsed in the kHz regime. Additional emission spectroscopy, absorption spectroscopy, and Fourier transform infrared (FTIR) spectroscopy methods have also been used to measure various species densities and chemical modifications of cystine. Furthermore, flame ionization detectors, gas chromatography-mass spectroscopy, and ion energy analyzer quadrupole mass spectroscopy are all being used to investigate and characterize the conversion of volatile organic compounds into non-harmful and non-toxic compounds \cite{Schuecke2020}. In addition, pre-heating of the gas and catalyst coatings are being investigated for higher conversion efficiencies \cite{Schuecke2020,Peters2021}.
In many applications, like chemical processing and gas purification, the interaction between a plasma and a catalyst yields synergistic effects resulting in enhanced performance \cite{HHKim2004,HHKim1999}. As such, various structures of catalytic material are often inserted into traditional DBD reactors including, but not limited to: spheres, honeycombs, 3D fibre deposition structures, and coatings of the dielectric barrier itself \cite{Zhang2018,HHKim1999}. The synergistic effect is obtained via two primary methods. Firstly, the altered geometry along with tailored voltage waveforms influences the discharge characteristics \cite{Brandenburg2017,HHKim2004,Zhang2018,HHKim2016,Zhang2015}. Secondly, the plasma distribution determines the effective contact area of the catalyst, thereby altering the morphology and work function of the catalyst \cite{Neyts2014,Zhang2017}. This places great importance on generating a controllable plasma density and spatial distribution \cite{Brandenburg2017,HHKim2004,Zhang2018,HHKim2016,Shang2019}.
The above studies, although very interesting, were mostly based on experiments of submerged SDBDs where the plasma discharge is confined to one side of the dielectric plate, providing investigations only into a single phase ignition process \cite{Akishev2012,Audier2014,Biganzoli2012,Corke2010,Debien2012,GAO2017,Moreau2007,Opaits2012,Peng2019,Xiahua2016,Shang2019,Soloviev2017,Starikovskii2009}. That is to say that only either an anodic or cathodic phase plasma is present, but never both simultaneously. This single phase nature limits the effective volume and surface area of the plasma, which defines the effective catalytic surface area exposed to the plasma species in plasma enhanced catalysis. As such, the catalyst performance is potentially limited to a great extent in a single phase SDBD. In gas treatment conditions, an SDBD electrode system is very likely to be placed along the central plane parallel to the gas flow in order to minimize flow resistance and increase the treatment volume. Under these conditions, it is very clear that utilizing an SDBD electrode system which ignites on both sides of the dielectric plate will improve the treatment volume, and as such the efficiency of the process.

Unfortunately, most theoretical investigations utilizing circuit models \cite{Pipa2012,Peeters2014,Pipa2020_PowerDBDEQC}, global models, molecular dynamics models \cite{Neyts2014}, fluid models \cite{Che2012,Peng2019,Soloviev2018}, and even particle-in-cell/Monte Carlo collision (PIC/MCC) models \cite{Zhang2015,Zhang2017,Zhang2018} of (S)DBDs and packed bed reactors provide limited insights into the underlying mechanisms of the plasma propagation \cite{Mujahid2018,Mujahid2020,mujahid2020Propagation}. No contributions on the theoretical investigation of a dual phase symmetric SDBD could be found by the authors, pointing to a significant lack of knowledge of such configurations. The inherent mechanisms behind the evolution of the plasma discharge in asymmetric, and even more so symmetric, SDBDs are still not fully understood. It is not yet clear how simultaneous positive and negative surface streamers (above and below the dielectric) can interact with each other, and to what extent, if any, they enhance one another. It is not clear how the streamers respond to tailored voltage waveforms, nor what the optimized conditions are for generating large treatment volumes. It is unknown to what extent the surface streamers interact with an active surface such as a catalyst. These are crucial pieces of information to ensure good plasma enhanced catalysis performance. Additionally, many experiments, such as optical emission spectroscopy, still have open questions as to whether the results are more representative of the streamer bulk or the highly dynamic streamer head. These concerns demand a more detailed simulation of the dynamic behavior of the positive and negative streamers in a dual phase symmetric SDBD during the ignition process.
\begin{figure}[t]
\centering
\includegraphics[width=0.4425\textwidth]{NegativStreamer_Initial.eps}
\caption{Schematic detailing the negative streamer formed via an anode oriented electron avalanche.}
\label{fig:NegativeStreamer}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.4425\textwidth]{PositivStreamer_Initial.eps}
\caption{Schematic detailing the positive streamer, which forms via a cathode oriented propagation front.}
\label{fig:PositiveStreamer}
\end{figure}
Therefore, in the present work we computationally investigate the plasma propagation of a symmetric, dual phase SDBD, hereby referred to as the twin SDBD, under various voltage waveform conditions. The particular geometry of the twin SDBD ensures that both an anodic and cathodic phase plasma are simultaneously ignited, separated by the dielectric barrier, and physically symmetric about the metallic electrodes. The symmetric geometry does not only give rise to a higher plasma surface coverage, but also enables a direct comparison between the positive streamers on the anode side and the negative streamers on the cathode side, as well as the interaction between the two. The numerical investigations are carried out by means of a 2D PIC/MCC simulation software known as VSim, a multi-physics simulation tool which combines the Finite-Difference Time-Domain (FDTD), PIC, and Charged Fluid (Finite Volume) methods for simulating electrical gas discharges \cite{NIETER2004}. The insights provided by this work are not only applicable to the twin SDBD and similar geometries, but also to other SDBD geometries, asymmetric ones included, via a deeper understanding of streamer propagation and form.
To provide a basis for understanding the streamer dynamics in a twin SDBD, which will be revealed in this work, we briefly recall the fundamentals of positive and negative streamer dynamics in a DBD. A negative streamer, see \cref{fig:NegativeStreamer}, ignites through an anode oriented electron avalanche: electrons, which are accelerated against the direction of the electric field, collide with the background gas. Ionization takes place, causing an exponential growth of electrons and ions and creating a quasineutral bulk plasma that propagates from the cathode to the anode. A positive streamer, see \cref{fig:PositiveStreamer}, is also created via electron collisions, but is somewhat more complex. The cathode oriented, positively charged streamer head attracts electrons, which cause ionization in front of the streamer head, resulting in an ionization wave. This ionization wave propagates from the anode to the cathode, leaving behind a quasineutral bulk plasma. Branches may form from the streamer head, creating additional ionization waves; branching is more readily observed in gas mixtures that are susceptible to self induced photo ionization. On short timescales, a few nanoseconds and less, a feature very similar to a low pressure sheath forms: the positive streamer head floats above the cathode due to an absence of available electrons, thus creating a region with a very strong electric field. Given an appropriate amount of time, the positive ions do reach the cathode due to their own velocities. At the dielectric(s), any charges that reach the surface adhere to it and charge it. These surface charges repel incoming like charges along the surface, causing both positive and negative streamers to spread out. Due to the lightweight electrons, this effect is more prominent in negative streamers; however, the floating nature of positive streamers can also facilitate a similar effect. For a deeper understanding we refer the reader to Nijdam \textit{et al.} and to Zhang \textit{et al.} \cite{Nijdam2020,Zhang2021}, where the dynamics of positive and negative streamers of a VDBD via PIC/MCC simulations are detailed.
This paper is structured as follows: First, in \cref{Model} the computational model and geometry are described. Following this, in \cref{Results} the results of the various simulations are presented: the DC results in sub-\cref{SingleStreamers,DualStreamers}, and the AC results in sub-\cref{ACStreamers}. Finally, in \cref{Conclusion} our closing remarks and conclusions are discussed.
\DIFdelbegin \section{\DIFdel{Experimental setup and computational model}}
\DIFdelbegin \subsection{\DIFdel{Experiments under investigation}}
\DIFdelbegin \DIFdel{The geometry to be simulated is chosen to resemble that of the twin SDBD electrode intended for use in gas treatment applications and was first experimentally presented in \mbox
\cite{Offerhaus2017} }\hspace{0pt
and subsequently in \mbox
\cite{Offerhaus2018,Offerhaus2019,Kogelheide2019,Schuecke2020}}\hspace{0pt
. Under experimental operation, the twin SDBD electrode is placed inside a sealed chamber with synthetic quartz windows for optical observations. The chamber is regulated at $1\,$atm of pressure with a controllable feed gas mixture. The twin SDBD may be ignited by various tailored high voltage pulsed waveforms. The pulsed nature of the waveforms allows for higher instantaneous plasma powers which are responsible for chemical activity, but lower averaged powers which are responsible for material failure.
\DIFdel{The novel electrode configuration utilizes two nickel coated metallic grids printed onto an alpha alumina oxide ($\alpha$-Al$_2$O$_3$) ceramic plate acting as the dielectric barrier. The grid structures are symmetrically printed on both of the normal faces of the ceramic plate where one side is powered by the voltage waveform of choice, while the other is grounded. Due to this symmetry, all edges of the metallic traces on both top and bottom are susceptible to surface plasma ignition. Thus, positive and negative streamers are simultaneously ignited, which thereby warrants the name ``twin SDBD''. These two streamer phases are physically separated by the dielectric barrier; however, under bipolar voltage conditions the phases periodically switch. A computer generated graphical example of the electrode when ignited is shown in \mbox
\cref{fig:Electrode}}\hspace{0pt
. This unique structure is different from VDBDs or submerged SDBDs which typically only allow for a single phase plasma ignition. Furthermore, this physical and electrical symmetry allows for a larger plasma surface coverage of the dielectric barrier, thereby ensuring a higher treatment efficiency \mbox
\cite{Offerhaus2017,Offerhaus2018,Offerhaus2019,Kogelheide2019,Schuecke2020}}\hspace{0pt
. Lastly, the structure allows for the simultaneous experimental measurement of various plasma properties of both the positive and negative streamer phases. Most intriguingly, the two opposite phased plasma streamers are able to affect one another during ignition, potentially altering the spatial size and form of the streamers.}\DIFdelend
\DIFdel{The nickel coated metallic electrodes of the twin SDBD form a $10\,$mm square lattice totalling $150\,$mm x $50\,$mm in size. The metallic traces are $0.450\,$mm wide and extend approximately $0.020\,$mm above the surface of the dielectric. The dielectric thickness is $0.635\,$mm. The shape and size of the lattice were chosen based on the ease of manufacturing as well as the reproducibility of measurements. The metallic trace itself has a cross sectional profile similar to a re-curve bow. A cross sectional photo of the electrode profile has been taken with a scanning electron microscope and is shown in \mbox
\cref{fig:SEM}}\hspace{0pt
.}
\addtocounter{section}{-1
\DIFdelend \DIFaddbegin \section{\DIFadd{Computational model}}
\DIFaddend \label{Model}
\begin{figure}[t]
\centering
\includegraphics[width=0.4425\textwidth]{SDBD_Elektrode.png}
\caption{\DIFdelFL{3D} Computer generated graphic showing the physical structure of the SDBD electrode under consideration. A metallic lattice (\DIFaddFL{dark grey} structure) is printed symmetrically on both the top (visible) and bottom (hidden) faces of the Al$_2$O$_3$ dielectric barrier (\DIFdelFL{white} \DIFaddFL{light grey} material). Due to the strong curvature of the electric field lines when under operation, the plasma (purple structure) ignites along the edges of the metallic lattice.}
\label{fig:Electrode}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.4425\textwidth]{NeuElektrode_GitterLinie0001.jpg}
\caption{SEM image of electrode cross section. The bulk, homologous material is the Al$_2$O$_3$ dielectric. The hump like structure with larger grains is the metallic electrode trace. \DIFdelFL{A thin nickel coating is also slightly visible along the outside of the trace.}}
\label{fig:SEM}
\end{figure}
\DIFaddbegin \DIFadd{The geometry to be simulated is chosen to resemble that of the twin SDBD electrode intended for use in gas treatment applications and was first experimentally presented in \mbox
\cite{Offerhaus2017} }\hspace{0pt
and subsequently in \mbox
\cite{Offerhaus2018,Offerhaus2019,Kogelheide2019,Schuecke2020}}\hspace{0pt
. The authors refer the readers to these references for a detailed description of the twin SDBD system in question. It is important to reiterate that this device consists of a dielectric plate, with metallic grids placed on the surface of the dielectric on both sides. A computer rendered sketch of the system can be seen in \mbox
\cref{fig:Electrode}}\hspace{0pt
. These grids serve as electrodes. The system is built with both a geometric and electrical symmetry, such that both a positive and negative streamer are simultaneously ignited on either side of the dielectric under sufficiently high voltage conditions, which thereby warrants the name ``twin SDBD''. The metallic traces of the electrode system have been imaged with a scanning electron microscope for a more accurate depiction of the electrodes within the simulations. An example image of the cross sectional view of the metallic traces can be seen in \mbox
\cref{fig:SEM}}\hspace{0pt
, which shows the curved nature of the metallic traces located on the dielectric, which is included in the simulation.} \DIFaddend
\subsection{Particle in Cell/Monte Carlo Collision model}
A 2D PIC/MCC model is used to study the plasma propagation of the twin SDBD based on the \DIFdelbegin \DIFdel{VSIM }\DIFdelend \DIFaddbegin \DIFadd{VSim }\DIFaddend simulation software \cite{NIETER2004}. \DIFdelbegin \DIFdel{VSIM }\DIFdelend \DIFaddbegin \DIFadd{VSim }\DIFaddend is widely used and has been validated \cite{NIETER2004,Zhang2015,Zhang2017}. \DIFaddbegin \DIFadd{As these investigations took place under conditions similar to those presented here (atmospheric pressure DBDs, nanosecond timescales and micrometer length scales), we operate under the assumption that our model is also valid. Additionally, the usage of PIC/MCC simulations to investigate the COST-Jet at atmospheric pressure yields realistic results that agree well with experiments \mbox
\cite{Bischoff2018,Korolov2019,Korolov2020}}\hspace{0pt
, proving that PIC/MCC models can indeed be used at atmospheric conditions. }\DIFaddend The PIC/MCC simulations performed in \DIFdelbegin \DIFdel{VSIM }\DIFdelend \DIFaddbegin \DIFadd{VSim }\DIFaddend are based on an explicit solver and \DIFdel{an electrostatic method} \DIFadd{the electrostatic approximation of Maxwell's equations}, which were described in detail in \cite{Birdsall1991}. The PIC/MCC model takes advantage of accounting for the detailed kinetic behavior of charged particles which \DIFdelbegin \DIFdel{is }\DIFdelend \DIFaddbegin \DIFadd{may be }\DIFaddend important for the evolution of \DIFdelbegin \DIFdel{plasma streamers }\DIFdelend \DIFaddbegin \DIFadd{electron avalanches and branching mechanisms, and therefore, the plasma streamer profiles}\DIFaddend . Air at atmospheric pressure is used as the discharge gas, with a constant density of background molecules, $80\,\%$ N$_2$ and $20\,\%$ O$_2$, at 300$\,$K. Free electrons, N$_2^+$, O$_2^+$ and O$_2^-$ ions are traced throughout the simulation, which are represented as super-particles, i.e. one super-particle corresponds to a certain number of real particles defined by their numerical weighting, initially starting at $20\cdot10^3$ real particles per super particle \cite{Birdsall1991}. \DIFdelbegin \DIFdel{Elastic, excitation, ionization, and attachment collisions of electrons with O$_2$ and N$_2$ gas molecules make up the considered reaction mechanisms as explained in more detail by \mbox
\cite{Zhang2017}}\hspace{0pt
. The corresponding cross sections and threshold energies are adopted from the LXCat database and literature \mbox
\cite{LiebermannAndLichtenberg,Furman2002,A_V_Phelps1999,PANCHESHNYI2012,LXCATdatabase}}\hspace{0pt
. At the surface of the dielectric barrier, only electron absorption is considered, }\textit{\DIFdel{i.e.}}
\DIFdel{no electron reflection or surface electron emission is considered.}\DIFdelend
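For illustration, the super-particle bookkeeping described above can be sketched as follows; the data layout and function names are our own illustrative choices and not the VSim implementation, with only the weighting of $20\cdot10^3$ real particles per super-particle taken from the text:

```python
# Illustrative super-particle bookkeeping; NOT the VSim implementation.
from dataclasses import dataclass

@dataclass
class SuperParticle:
    species: str   # e.g. "e-", "N2+", "O2+", "O2-"
    x: float       # position components (m)
    y: float
    vx: float      # velocity components (m/s)
    vy: float
    weight: float  # number of real particles represented

INITIAL_WEIGHT = 20e3  # real particles per super-particle (from the text)

def real_particle_count(particles):
    """Total number of real particles represented by the super-particles."""
    return sum(p.weight for p in particles)

electrons = [SuperParticle("e-", 0.0, 0.0, 0.0, 0.0, INITIAL_WEIGHT)
             for _ in range(5)]
print(real_particle_count(electrons))  # 100000.0
```

The weight field is what the adaptive merger described below manipulates; all other fields evolve through the push and collision steps.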
\begin{figure}[t]
\centering
\includegraphics[width=0.4425\textwidth]{MCC_PIC_Flowdens.eps}
\caption{Logic flow diagram of the PIC/MCC algorithm. One complete loop of the flow diagram represents one time step of the PIC/MCC code. During each time step of the simulation, all sub algorithms are performed: particles are pushed, merged, collided, and generated; the densities are determined and the electric forces are computed.}
\label{fig:ModelFlow}
\end{figure}
In order to numerically initiate the plasma discharge, a uniform distribution of seed electrons is placed within the free space of the simulated geometry. These seed electron super-particles have a density corresponding to $1\cdot10^{15}\,$m$^{-3}$. Realistically, seed electrons are present due to cosmic radiation and environmental photo-ionization producing background electrons, as well as remaining charges from previous plasma discharges. \DIFaddbegin \DIFadd{The initial electron density was chosen as such in order to increase the initial weighting of the super-particles, and thereby the simulation speed. The high initial density increases the speed of the initial electron avalanches and streamer breakdown. As seen later on, maximum achieved densities are on the order of $1\cdot10^{22}\,$m$^{-3}$, which is much higher than the initial density; therefore, the final profiles and mechanisms would not change if a lower initial density were chosen. Thus, the high initial density serves to increase the simulation speed while not altering the results of the simulations. }\DIFaddend It should be noted that the usage of uniform seed electrons does not consider local effects of previous discharges.
As the plasma streamers evolve, the particle number of each considered species will rapidly increase due to the ionization avalanches. To account for this and to reduce the computation time, the weight of each super-particle is adaptive. A merger algorithm conserving both momentum and energy will combine same species super-particles when the number of said super-particles exceeds a threshold value of 10 super-particles respective to each cell of the simulation mesh. As the particle numbers only increase within the considered simulated time, no de-merger algorithm is implemented. This adaptive weight and merger algorithm is described in more detail in \cite{Zhang2017}.
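As an illustrative sketch of such a merge (not the actual algorithm of the cited literature), same-species super-particles, represented here as hypothetical (weight, vx, vy) tuples, can be combined into two new super-particles such that total weight, momentum, and kinetic energy are all conserved; the direction of the residual velocity spread is chosen arbitrarily along x:

```python
import math

def merge_same_species(particles):
    """Merge same-species super-particles (weight, vx, vy) into two new
    super-particles, conserving total weight, momentum, and kinetic energy.
    Sketch only: the velocity spread is placed along x for simplicity, and
    the actual VSim merger algorithm differs in its details."""
    W = sum(w for w, vx, vy in particles)
    vcx = sum(w * vx for w, vx, vy in particles) / W   # mean (drift) velocity
    vcy = sum(w * vy for w, vx, vy in particles) / W
    msq = sum(w * (vx**2 + vy**2) for w, vx, vy in particles) / W  # mean square speed
    spread = math.sqrt(max(msq - (vcx**2 + vcy**2), 0.0))
    return [(W / 2, vcx + spread, vcy), (W / 2, vcx - spread, vcy)]

merged = merge_same_species([(1.0, 1.0, 0.0), (1.0, -1.0, 0.0), (2.0, 0.0, 0.0)])
# total weight 4.0, zero net momentum, and the input kinetic energy preserved
```

Two output particles are needed because a single merged particle at the drift velocity would conserve momentum but lose the thermal part of the kinetic energy.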
\DIFaddbegin
\DIFadd{Elastic, excitation, ionization, and attachment collisions of electrons with O$_2$ and N$_2$ gas molecules make up the considered reaction mechanisms as explained in more detail by \mbox
\cite{Zhang2017}}\hspace{0pt
. The corresponding cross sections and threshold energies are adopted from the LXCat database and literature \mbox
\cite{LiebermannAndLichtenberg,Furman2002,A_V_Phelps1999,PANCHESHNYI2012,LXCATdatabase}}\hspace{0pt
. At the surface of the dielectric barrier, only electron absorption is considered, }\textit{\DIFadd{i.e.}} \DIFadd{no electron reflection or surface electron emission is considered. As reported in \mbox
\cite{Zhang2015,Zhang2017}}\hspace{0pt
, the inclusion of secondary electron emission (SEE) surface coefficients does not significantly alter the form of the simulated positive streamers, due to the floating nature of the streamer head. The negative streamer, however, propagates along the surface of the dielectric barrier, and as such, SEE coefficients would be more critical. The inclusion of SEE coefficients would theoretically increase the number of ``background'' electrons available for streamer propagation, and as such the streamers would propagate faster; however, their forms should not strongly change. Additionally, due to the lower electric fields of the negative streamer and the very short considered timescales, the effect of ion induced SEE would be very limited within this investigation.}\DIFaddend
\begin{figure*}[t]
\centering
\subfloat{
\label{fig:Geom(a)}
\includegraphics[width=0.885\textwidth]{GeoLarge.eps}}
\\
\subfloat{
\label{fig:Geom(b)}
\includegraphics[width=0.885\textwidth]{GeoSmall.eps}}
\caption{Schematic of the simulation regimes. Subfigure (a) and (b) correspond to the DC and AC simulated geometries respectively. The color scale corresponds to the different materials as follows: I) air ($80\,\%$ N$_2$ and $20\,\%$ O$_2$), II) Al$_2$O$_3$ dielectric, III) grounded electrode, IV) powered electrode. The boxed in regions denoted with (i) correspond to the regions that are presented in greater detail for the rest of the publication.}
\label{fig:Geometry}
\end{figure*}
With each successive time step of the model, the particle pusher, particle merger, and Monte Carlo collision algorithms for all particle species follow in succession. After the collisions, a new electron super-particle is added to the simulation regime, the density of each cell is calculated, and Poisson's equation is solved in order to obtain the electric forces applied to each particle, after which the cycle repeats. A diagram of the general flow is shown in \cref{fig:ModelFlow}.
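The cycle can be summarized by the following skeleton, where every sub-step is a trivial one-dimensional stand-in rather than the VSim algorithms; only the ordering of the loop follows the flow diagram:

```python
# Skeleton of one cycle of the PIC/MCC loop; the sub-steps are trivial
# one-dimensional stand-ins (NOT the VSim algorithms), only the ordering
# of the loop follows the flow diagram.

QM_E = -1.7588e11  # electron charge-to-mass ratio q/m in C/kg

def push(particles, e_field, dt):
    """Advance velocity, then position (explicit push)."""
    out = []
    for x, v in particles:
        v = v + QM_E * e_field * dt
        out.append((x + v * dt, v))
    return out

def pic_mcc_step(particles, e_field, dt):
    particles = push(particles, e_field, dt)   # 1) particle pusher
    # 2) adaptive-weight merger and 3) Monte Carlo collisions act here (omitted)
    particles.append((0.0, 0.0))               # 4) one new seed electron per step
    # 5) charge deposition and Poisson solve would update e_field here (omitted)
    return particles, e_field

parts, ef = pic_mcc_step([(0.0, 1.0)], 0.0, 2e-13)
print(len(parts))  # 2
```

The time step of $2\cdot10^{-13}\,$s from the text is used in the example call; with zero field the single test particle simply drifts and one seed electron is appended.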
\subsection{Simulated geometry}
\label{Sim Geometry}
The geometry to be simulated is a cross section of the twin SDBD described \DIFdelbegin \DIFdel{above in \mbox
\cref{Electrode Geometry}}\hspace{0pt
}\DIFdelend \DIFaddbegin \DIFadd{ in \mbox
\cite{Offerhaus2017,Offerhaus2018,Offerhaus2019,Kogelheide2019,Schuecke2020}}\hspace{0pt
, and shown in \mbox
\cref{fig:Electrode}}\hspace{0pt
}\DIFaddend . The twin SDBD simultaneously produces positive and negative phased plasma streamers along the edges of the metallic traces; however, the two phases are separated by the \DIFaddbegin \DIFadd{Al$_2$O$_3$ }\DIFaddend dielectric barrier. On either side of the dielectric barrier, ignition on opposite edges of the respective metallic trace can be considered as two individual but same-phased streamers. Two different simulation geometries, referred to as geometry(a) and geometry(b), are considered in order to appropriately resolve the interaction of both the same-phased and respectively opposite-phased plasma streamers. Simulation geometry(a) and simulation geometry(b) are presented in \cref{fig:Geometry}. In total, geometry(a) contains a 2D plane that is $9.6\,$mm x $1.2\,$mm in Cartesian X and Y coordinates. The plane is uniformly divided into square cells with unit length of $2.4\,\mu$m resulting in a square lattice of 4000 x 500 cells. \DIFaddbegin \DIFadd{The grid size was chosen based on the Courant limit, $c\cdot dt<dx$, where $c$ is the speed of light, $dt$ is the time step, and $dx$ is the grid size. }\DIFaddend Geometry(b) utilizes the same grid cell size, but uses only 1000 x 500 cells resulting in a \DIFaddbegin \DIFadd{total }\DIFaddend width of $2.4\,$mm. \DIFdelbegin \DIFdel{However, for }\DIFdelend \DIFaddbegin \DIFadd{For }\DIFaddend ease of comparison, results from a zoomed in region of size 500 x 500 cells from both simulated geometries are presented for the rest of the paper. The respective regions are outlined by a dashed \DIFdelbegin \DIFdel{lined }\DIFdelend \DIFaddbegin \DIFadd{line }\DIFaddend and annotated with $(i)$ in \cref{fig:Geometry}.
\begin{figure*}[t]
\centering
\includegraphics[width=0.885\textwidth]{Efield_arrows.eps}
\caption{Electric field distribution of the simulated electrode geometries for an applied $+8\,$kV and $-8\,$kV potential in (a) and (b) respectively. \DIFdelbeginFL \DIFdelFL{Absolute }\DIFdelendFL \DIFaddbeginFL \DIFaddFL{The }\DIFaddendFL magnitude of the electric field is plotted on a linear intensity color scale\DIFdelbeginFL \DIFdelFL{. The }\DIFdelendFL \DIFaddbeginFL \DIFaddFL{, where the }\DIFaddendFL threshold value for the minimum intensity \DIFdelbeginFL \DIFdelFL{scale }\DIFdelendFL is chosen to be $1\cdot10^{6}\,$V/m. The normalized direction of the electric field is shown via the vector \DIFdelbeginFL \DIFdelFL{arrows}\DIFdelendFL \DIFaddbeginFL \DIFaddFL{field}\DIFaddendFL .}
\label{fig:EField}
\includegraphics[width=0.885\textwidth]{Potential_arrows.eps}
\caption{Electric potential distribution of the simulated electrode geometries for an applied $+8\,$kV and $-8\,$kV potential in (a) and (b) respectively. The \DIFdelbeginFL \DIFdelFL{magnitude of the }\DIFdelendFL electric potential is plotted on a linear intensity color scale. \DIFdelbeginFL \DIFdelFL{The }\DIFdelendFL \DIFaddbeginFL \DIFaddFL{Additionally, the }\DIFaddendFL normalized direction of the electric field is shown via the vector \DIFdelbeginFL \DIFdelFL{arrows}\DIFdelendFL \DIFaddbeginFL \DIFaddFL{field}\DIFaddendFL .}
\label{fig:EPotential}
\end{figure*}
Firstly, to investigate the interactivity of two same-phase streamers, positive-positive or negative-negative, two anodes and two cathodes are included in simulation geometry(a). The two same-phase electrodes are simulated with the same potential under DC \DIFdelbegin \DIFdel{condtions }\DIFdelend \DIFaddbegin \DIFadd{conditions }\DIFaddend and are separated in the X-direction by $9.5\,$mm, corresponding to the distance between the edges of two neighboring and parallel metallic traces of the physical electrode. In order to minimize the computational time, the X boundaries of geometry(a) correspond to the vertical center lines of the metallic traces. Simulation geometry(a) may be seen in \cref{fig:Geom(a)}. Later, in \DIFdelbegin \DIFdel{\mbox
\cref{SingleStreamers} }\hspace{0pt
}\DIFdelend \DIFaddbegin \DIFadd{\mbox
\cref{SingleStreamers,DualStreamers} }\hspace{0pt
}\DIFaddend it is deduced that minimal interactivity is observed between two same-phase streamers. This is due to the limited spatial propagation of the plasma streamers on the considered timescales. Therefore, it is appropriate to simulate a section centered about just one metallic trace under the same timescales, thus a second simulation geometry is investigated. In simulation geometry(b) only one set of electrodes \DIFdelbegin \DIFdel{are considered, are }\DIFdelend \DIFaddbegin \DIFadd{is considered, is }\DIFaddend only simulated under AC conditions, and \DIFdelbegin \DIFdel{are }\DIFdelend \DIFaddbegin \DIFadd{is }\DIFaddend centered about the X-axis with the walls being $1.2\,$mm away from either side of their center line. \DIFaddbegin \DIFadd{Concerns about the reduced simulation domain having an effect on the calculated electric field strengths are mitigated by the rapid decrease of the field strength with distance from the electrodes. The usage of Neumann boundary conditions additionally improves the accuracy, as the simulation walls are not forced to a specific potential. }\DIFaddend Simulation geometry(b) may be seen in \cref{fig:Geom(b)}.
Both considered geometries of the 2D PIC/MCC model represent a cross sectional view of the electrode structure, where the anodes and cathodes are separated along the Y-axis by the dielectric barrier. The dielectric is located in the middle of the Y-axis, was chosen to be $0.500\,$mm thick and spans the whole X-direction\DIFdelbegin \DIFdel{. }\DIFdelend \DIFaddbegin \DIFadd{, and is simulated with a dielectric constant of 9. }\DIFaddend In this representation, the Z-direction would equate to the length (or width) of the physical electrode setup but is mathematically treated as constant/homogeneous. This results in a simulation regime that is most valid for a planar section in the middle of any grid structure. In both geometries, the electrode structure itself is a geometrical composition of multiple tangent arcs resulting in a ``hump'' like structure. This electrode structure is used to approximate the real geometric structure of the metallic traces which can be seen in \cref{fig:SEM}. It should be noted that the simulated aspect ratios of the electrode thickness and width to the dielectric thickness are significantly different from reality; however, this was chosen as such in order to avoid numerical issues which would arise from using an appropriately sized simulation grid for realistic aspect ratios. Furthermore, the reduced dielectric thickness of the simulations versus the actual electrode configuration should not lead to any major differences in the interpretations of this paper, as it is the surface of the dielectric that plays a much more important role. By using a reduced dielectric thickness, we are able to increase the number of computational cells available for the plasma propagation, without increasing the entire simulation domain.
Particle densities and electric fields are resolved using a cutting-cell technique in order to handle the irregular geometry\DIFdelbegin \DIFdel{. }\DIFdelend \DIFaddbegin \DIFadd{, through contributions of neighboring cells. The authors refer the reader to references \mbox
\cite{Smithe2008,Meierbachtol2015,loverich2010} }\hspace{0pt
for more information. }\DIFaddend Neumann boundary conditions are used in all directions to ensure a smooth electric potential distribution at the boundaries of the simulation walls. The timesteps are non adaptive and fixed at $2\cdot10^{-13}\,$s. \DIFdelbegin
\DIFdelend Similar to \cite{Likhanskii2010}, a single new electron super-particle is randomly added to the simulation domain at each timestep in order to account for random events such as cosmic radiation, photo-ionization, \textit{etc.} as described in \cite{Ebert2006,E_M_van_Veldhuizen2002,Qiu2017}. These random events are beyond the scope of the available \DIFdelbegin \DIFdel{VSIM }\DIFdelend \DIFaddbegin \DIFadd{VSim }\DIFaddend functions. The seed electrons, both background and newly loaded, are sufficient to support streamer propagation while not interfering with the plasma bulk, as they are far fewer in number than the generated plasma. The generated plasma density profile is also much smaller in extent than the simulation domain in both considered geometries.
\subsection{Waveform variation}
\label{Waveform}
In all considered simulations and both geometries, the electrode(s) above the dielectric barrier are treated as the powered electrode(s) while the bottom electrode(s) are held constant at $0\,$V. This choice is arbitrary and due to the physical symmetry of the system would provide only mirrored results if the \DIFdelbegin \DIFdel{the }\DIFdelend opposite choice, either inverse polarity and/or choice of powered electrode\DIFaddbegin \DIFadd{, }\DIFaddend were made. Initially, a constant positive $8\,$kV potential is applied to geometry(a), thus the two powered electrodes take the role of the anodes while the bottom two are the cathodes. The initial electric field distribution can be seen in \cref{fig:EField}(a) and the initial potential distribution can be seen in \cref{fig:EPotential}(a). \DIFaddbegin \DIFadd{Within both figures, the magnitudes of the presented quantity are shown via the color scale, and the normalized direction of the electric field is additionally presented for further clarity. The normalized direction is presented as a vector field, where the X and Y directions of the vectors are the normalized X and Y values of the electric field at that grid cell. Naturally, the magnitude of the electric field is obtained from the square root of the sum of the X and Y components squared: $E_{mag} = \sqrt{E_X^2 + E_Y^2}$.
}\DIFaddend
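The magnitude and normalization used for these arrow plots can be written compactly, for example, as:

```python
import math

def field_magnitude_and_direction(ex, ey):
    """E_mag = sqrt(Ex^2 + Ey^2) and the normalized direction vector used
    for the arrow plots (a zero field is returned as direction (0, 0))."""
    mag = math.hypot(ex, ey)
    if mag == 0.0:
        return 0.0, (0.0, 0.0)
    return mag, (ex / mag, ey / mag)

mag, (nx, ny) = field_magnitude_and_direction(3e6, 4e6)
print(round(mag), round(nx, 3), round(ny, 3))  # 5000000 0.6 0.8
```

The example field components are arbitrary illustrative values of order the $1\cdot10^{6}\,$V/m threshold quoted above.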
First, in order to investigate solely the role of the positive streamers, only the top half of the simulation area is seeded with the initial electrons. Likewise, the bottom half is subsequently seeded in a second simulation in order to solely investigate the negative streamers. Third, both halves are identically seeded thereby investigating the interplay and differences of both discharges igniting simultaneously under the DC voltage conditions. These three conditions are applied to geometry(a) only. Lastly, a varying voltage waveform is investigated.
\begin{figure}[t]\centering
\includegraphics[width=0.4425\textwidth]{ACVoltage.eps}
\caption{Applied voltage waveform of the AC simulations. Dashed lines labeled a through f at 0.8, 0.9, 1.0, 1.7, 1.8, and 2.0$\,$ns respectively represent the timestamps at which results are presented in \DIFdelFL{ \mbox
\cref{DualStreamers}}\hspace{0pt}
\DIFaddFL{ \mbox
\cref{ACStreamers}}\hspace{0pt}
.}
\label{fig:ACPulseform}
\end{figure}
Geometry(b) is only investigated under the AC conditions shown in \cref{fig:ACPulseform}. Under these conditions, the role of the anode and cathode switches twice, thereby giving insights into the extreme dynamics of fast voltage streamer switching. Initially, the applied voltage potential sharply rises within $0.1\,$ns to the $8\,$kV maximum which is then held constant for $0.7\,$ns. During this time, the anode is located on the top side of the dielectric barrier. At $0.8\,$ns, the voltage is decreased at the same rate, $80\,$kV/ns, reaching the minimum applied voltage of $-8\,$kV at $1\,$ns, making the top side of the dielectric barrier the cathode. Again, this minimum value is held constant for $0.7\,$ns until switching back to the positive $8\,$kV potential, again switching the location of the anode and cathode. Without considering any plasma propagation, the base electric field distributions for both a positive and negative applied potential are shown in \cref{fig:EField} and the equivalent potential distribution can be seen in \cref{fig:EPotential}.
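The waveform can be sketched as a piecewise-linear function; the plateau and ramp times are reconstructed from the description above, and this is of course not the VSim input format:

```python
def applied_voltage_kV(t_ns):
    """Applied AC waveform in kV (times in ns), reconstructed from the text:
    80 kV/ns linear ramps between +8 kV and -8 kV plateaus."""
    ramps = [(0.0, 0.1, 0.0, 8.0),   # initial rise to +8 kV
             (0.8, 1.0, 8.0, -8.0),  # ramp down to -8 kV
             (1.7, 1.9, -8.0, 8.0)]  # ramp back up to +8 kV
    v = 0.0
    for t0, t1, v0, v1 in ramps:
        if t_ns < t0:
            return v                 # on the plateau before this ramp
        if t_ns <= t1:
            return v0 + (v1 - v0) * (t_ns - t0) / (t1 - t0)
        v = v1                       # past this ramp: sit on its end value
    return v

print(applied_voltage_kV(0.5), applied_voltage_kV(1.2))  # 8.0 -8.0
```

At the midpoint of each ramp (e.g. $0.9\,$ns or $1.8\,$ns) the function crosses zero, consistent with the $80\,$kV/ns slope.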
\DIFaddbegin \DIFadd{All conditions are simulated for up to a maximum of $2\,$ns, thereby only revealing the inception phase of the streamers. The insights revealed in \mbox
\cref{Results} }\hspace{0pt
are consistent with other PIC/MCC models investigating DBD streamers in structured and porous catalytic surfaces \mbox
\cite{Zhang2018,Zhang2018Porous}}\hspace{0pt
, which are also simulated on ns timescales. Additionally, the phenomenon of a floating positive surface discharge is also observed in various fluid models \mbox
\cite{Babaeva2016,Yan2014}}\hspace{0pt
. Therefore, the authors believe the results presented throughout this paper, even given the short time scales, are reasonable. The results reported below are meant for a qualitative understanding of the streamer dynamics in a twin SDBD. General conclusions for more natural voltage waveforms, such as continuous sine waves, can also be drawn, although they could warrant further studies considering a real RF source. However, the results obtained in this work are particularly relevant for tailored voltage waveforms, a hot topic of current research that is trending towards shorter pulses and steeper rise times.}\DIFaddend
\section{Results and Discussion}
\label{Results}
\subsection{Single Streamer Dynamics} \label{SingleStreamers}
\begin{figure*}[t]
\centering
\includegraphics[width=0.885\textwidth]{ne_up_log.eps}
\caption{Spatial profiles of the electron density plotted on a logarithmic intensity scale of the positive streamer simulations with constant voltage. Sub figures (a) and (b) correspond to the timestamps of 0.2 and $1.0\,$ns, respectively. Features of importance are labeled, where the annotations are as follows: I) positively charged streamer head leading to streamer propagation, II) shaded region showing location of electron depletion\DIFaddFL{, \textit{i.e.} sheath like feature}, III) potential/failed positive streamer branch.}
\label{fig:DC PositiveStreamer}
\end{figure*}
Under the 8$\,$kV DC conditions with seed electrons present only on the anodic side of geometry(a), the propagation of an anodic phased plasma streamer, also known as a positive streamer, is simulated and presented in \cref{fig:DC PositiveStreamer}. The initial electric field distribution is shown in \cref{fig:EField}(a) and the initial electric potential distribution is shown in \cref{fig:EPotential}(a). Under these conditions, a cathode oriented, positively charged streamer head is able to form that can move freely from the metallic anode to the dielectric surface.
\begin{figure*}[t]
\centering
\includegraphics[width=0.885\textwidth]{ne_down_log.eps}
\caption{Spatial profiles of the electron density plotted on a logarithmic intensity scale of the negative streamer simulations with constant voltage. Sub figures (a) and (b) correspond to the timestamps of 0.2 and $1.0\,$ns, respectively. Features of importance are labeled, where the annotations are as follows: I) positively charged region leading to \DIFaddFL{positive } streamer \DIFaddFL{like } propagation, II) shaded region showing location of electron depletion\DIFaddFL{, \textit{i.e.} sheath like feature.}}
\label{fig:DC NegativeStreamer}
\end{figure*}
The streamer structure is anchored to the anode just above where the highest electric fields are located. It would be expected that the anchoring would take place at the location of the highest electric field; however, under these conditions this is located at the intersection of the electrode and the dielectric surface. At this point, and immediately next to it, due to the strong curvature of the electric field, electrons do not have enough space to gain sufficient energy for ionization. Multiple executions of the simulation produce anchor positions at the same location; furthermore, the anchor position is also at a symmetrical position on the opposite anode, which is not presented in \cref{fig:DC PositiveStreamer}. This suggests that the anchor is positioning itself based on the strong curvature of the anode, and not through the randomness of the ionization events. Indeed, when looking at the curvature of the simulated electrode, it appears that the plasma anchors next to the point of strongest curvature. \DIFaddbegin \DIFadd{Under no conditions did the simulated positive streamers extend significantly in the X-direction, such that interactions between the two positive streamers do not need to be considered.
}\DIFaddend
At $0.2\,$ns the positive streamer has advanced $0.12\,$mm, corresponding to a propagation speed of $0.62\,$mm/ns. By the end of the simulated time, $1.0\,$ns, the streamer had largely stopped propagating. The positive streamer had reached a propagation distance of $0.31\,$mm, resulting in an averaged speed of $0.31\,$mm/ns. The actual instantaneous speed of the streamer would be significantly slower at this time, as the average includes the faster propagation of the early streamer. It was observed via multiple test executions that these propagation speeds and distances were highly dependent on the initial background electron density. With lower initial densities, the simulated streamer propagates a shorter distance. Likewise, larger background densities would result in faster speeds and longer propagation distances.
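Using the rounded distances quoted above, the averaged speeds can be recovered directly; the early interval comes out at $0.6\,$mm/ns, consistent with the quoted $0.62\,$mm/ns from the unrounded data, and the later interval makes the slowdown explicit:

```python
def avg_speed(d1_mm, t1_ns, d0_mm=0.0, t0_ns=0.0):
    """Mean propagation speed over an interval, in mm/ns."""
    return (d1_mm - d0_mm) / (t1_ns - t0_ns)

early = avg_speed(0.12, 0.2)             # 0 to 0.2 ns
overall = avg_speed(0.31, 1.0)           # 0 to 1.0 ns
late = avg_speed(0.31, 1.0, 0.12, 0.2)   # 0.2 to 1.0 ns: the slowdown
print(round(early, 2), round(overall, 2), round(late, 4))  # 0.6 0.31 0.2375
```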
Initially the positive streamer began to propagate along the electric field lines at an angle offset from the surface of the dielectric barrier. The positive streamer head, which is not directly visible in \cref{fig:DC PositiveStreamer}, forms in front of the streamer and along the bottom side between the bulk plasma and the dielectric barrier. The streamer head is annotated in \cref{fig:DC PositiveStreamer} with an arrow labeled (I). Between the dielectric barrier and the positively charged streamer head is located a sheath like region\DIFaddbegin \DIFadd{, annotated via (II), }\DIFaddend where free electrons are attracted to the streamer head; however, they do not have enough space in order to promote further propagation towards the dielectric. Therefore, the only direction possible is outwards along the X- and positive Y-directions, towards the center of the simulated area. As the streamer continues to propagate along this direction, the \DIFdelbegin \DIFdel{applied }\DIFdelend electric field gets weaker proportional to \DIFdel{$1/r^2$} \DIFadd{$1/r$ (in 2D) or $1/r^2$ (in 3D)}, where $r$ is the distance from the electrode. Thus the positive streamer is able to advance in a somewhat straight line, parallel to the initial trajectory, which is at some angle to the dielectric surface; under these presented conditions this trajectory angle was determined to be $20.6^\circ$. The further the streamer propagates, the more space is available for propagation into the negative Y-direction, towards the dielectric surface. Therefore, in \cref{fig:DC PositiveStreamer}(b), \DIFdelbegin \DIFdel{two potential branches }\DIFdelend \DIFaddbegin \DIFadd{a potential branch }\DIFaddend had begun to take shape\DIFdelbegin \DIFdel{; however they are }\DIFdelend \DIFaddbegin \DIFadd{, annotated with (III); however it is }\DIFaddend not able to fully develop. 
As the cathode is located underneath the positive streamer, the streamer head can only form there; therefore, no branching occurs above the streamer bulk.
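The $1/r$ versus $1/r^2$ field decay noted above follows from elementary electrostatics, under the idealizing assumption that the powered electrode may be approximated as a line charge (the 2D cross-section) or as a point charge (in 3D):

```latex
% Idealized electrode fields: line charge (2D cross-section) vs. point charge (3D)
\begin{equation*}
E_{\mathrm{2D}}(r) = \frac{\lambda}{2\pi\varepsilon_0 r},
\qquad
E_{\mathrm{3D}}(r) = \frac{q}{4\pi\varepsilon_0 r^2},
\end{equation*}
```

where $\lambda$ is the line charge density, $q$ the point charge, and $\varepsilon_0$ the vacuum permittivity; the simulated 2D geometry therefore sees the slower $1/r$ decay.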
Due to the location of the failed branch in \cref{fig:DC PositiveStreamer}(b)(III), it would be extremely difficult to observe experimentally; it is noticeable within these simulations because of the kinetic nature of PIC/MCC models. Naturally, without experimental evidence, the reader might question whether such branching truly forms at these orientations. The authors believe that the simulations are indeed accurate in predicting these features.
In \cref{fig:DC NegativeStreamer} the same simulation conditions are presented, except the initial seed electrons are on the cathode side of the dielectric barrier, thus the negative streamer is simulated. The seed electrons are still accelerated in the opposite direction of the electric field lines shown in \cref{fig:EField}(a). An electron avalanche directed towards the anode initiates the discharge. Under these conditions the electrons are pushed towards the dielectric, where they begin to collect on and charge the surface of the dielectric. A positively charged spatial region forms next to the cathode, but is unable to anchor to it, instead floating at some distance away.
\begin{figure*}[t]
\centering
\includegraphics[width=0.885\textwidth]{ne_all_log.eps}
\caption{Spatial profiles of the electron density plotted on a logarithmic intensity scale of the dual streamer simulations with constant voltage. Sub-figures (a) and (b) correspond to the timestamps of $0.2$ and $1.0\,$ns, respectively. Features of importance are labeled, where the annotations are as follows: I) positively charged region/streamer head, II) location of electron depletion, \textit{i.e.} sheath like feature, III) potential/failed positive streamer branch.}
\label{fig:DC BothStreamers}
\end{figure*}
Newly created background electrons are pushed away from the cathode. Simultaneously, the electrons are attracted towards the positively charged region. Outside of the sheath region between the two, marked via an arrow labeled (II) in \cref{fig:DC NegativeStreamer}, these two directions are opposite one another. Only a very small number of electrons are sufficiently accelerated towards the positive charges with enough energy to cause ionization. Therefore, minimal propagation of the negative streamer parallel to the cathode surface takes place, as depicted via (I). Newly created background and avalanche electrons that reach the dielectric surface, instead of the positively charged spatial region, help to promote the propagation of the negative streamer along the surface of the dielectric in the X-direction, away from the cathode and towards the center of the simulation area. However, no distinctly visible negatively charged streamer head is directly observable.
At $0.2\,$ns the negative streamer has advanced $0.077\,$mm, corresponding to a propagation speed of $0.39\,$mm/ns. By the end of the simulated time, $1.0\,$ns, the streamer had largely stopped propagating. The negative streamer had reached a propagation distance of $0.25\,$mm, resulting in an averaged speed of $0.25\,$mm/ns. The actual instantaneous speed of the streamer at this timestamp would be significantly lower, as the average includes the faster propagation of the early streamer. As with the positive streamer, lower and higher initial electron densities result in shorter and longer propagation distances, respectively. Furthermore, under no conditions did the two simulated negative streamers next to both cathodes extend far enough in the X-direction for interactions between the two negative streamers to need consideration.
\subsection{Dual Streamer Dynamics - DC}
\label{DualStreamers}
\begin{figure*}[t]
\centering
\includegraphics[width=0.885\textwidth]{Charge_all.eps}
\caption{Spatial profiles of the charge disparity plotted on a diverging intensity scale of the dual streamer simulations with constant voltage. Sub-figures (a) and (b) correspond to the timestamps of $0.2$ and $1.0\,$ns, respectively. Features of importance are labeled with arrows, where the annotations are as follows: I) positively charged region/streamer head, II) surface charges which are visually hidden by the mask of the dielectric barrier, III) potential/failed positive streamer branch.}
\label{fig:DC BothStreamersCharge}
\end{figure*}
Presented in \cref{fig:DC BothStreamers,fig:DC BothStreamersCharge} is the complete DC scenario, where seed electrons are present on both the anodic and cathodic sides of the dielectric barrier. The same positive $8\,$kV DC voltage is used. Comparing \cref{fig:DC PositiveStreamer}(a), \cref{fig:DC NegativeStreamer}(a), and \cref{fig:DC BothStreamers}(a), small differences are observed at $0.2\,$ns. Primarily, the sizes and overall densities of both the positive and negative streamers have increased. The positive streamer has advanced $0.15\,$mm while the negative streamer has advanced $0.088\,$mm away from the anodes and cathodes, respectively. By $1.0\,$ns both streamers have significantly increased in size and average density compared to \cref{fig:DC PositiveStreamer}(b) and \cref{fig:DC NegativeStreamer}(b). Failed branches on the positive streamer are still present. The positive streamer has advanced a total of $0.41\,$mm while the negative streamer advanced a total of $0.27\,$mm. \Cref{tab:SizeAndSpeed} summarizes the streamer thickness, length, propagation angle, and propagation speed for the positive and negative streamers under all three simulation conditions. The propagation angle is determined as the angle at which the positive streamer propagates away from the dielectric surface, and is treated as $0^\circ$ for the negative streamer. The streamer length and thickness are measured along the axes parallel and perpendicular, respectively, to the streamer propagation direction.
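One plausible reading of this length and thickness measurement (a hypothetical post-processing sketch, not the authors' actual analysis script) is to project the coordinates of all grid cells above a chosen electron density threshold onto axes parallel and perpendicular to the propagation direction:

```python
import math

# Hypothetical extraction sketch: estimate streamer length and thickness by
# projecting the cells above a density threshold onto axes parallel and
# perpendicular to the propagation direction (angle in degrees).
def streamer_extent(cells, angle_deg, cell_size):
    """cells: iterable of (ix, iy) grid indices above the density threshold.
    Returns (length, thickness) along/across the propagation direction."""
    a = math.radians(angle_deg)
    para = [ix * cell_size * math.cos(a) + iy * cell_size * math.sin(a)
            for ix, iy in cells]
    perp = [-ix * cell_size * math.sin(a) + iy * cell_size * math.cos(a)
            for ix, iy in cells]
    # extent = span of the projections plus one cell to count both boundary cells
    length = max(para) - min(para) + cell_size
    thickness = max(perp) - min(perp) + cell_size
    return length, thickness

# A flat 5x2 cell patch at 0 degrees with 1 um cells: length 5 um, thickness 2 um
print(streamer_extent([(x, y) for x in range(5) for y in range(2)], 0.0, 1.0))
```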
\begin{table*}[t]
\centering
\begin{tabular}{|c|l|c c|c c|c|c c|}
\hline
\multirow{2}{*}{Time} & \multirow{2}{*}{DC Streamer} & \multicolumn{2}{c|}{Thickness [$\mu$m]} & \multicolumn{2}{c|}{Length [$\mu$m]} & {Angle }[{$^\circ$}] & \multicolumn{2}{c|}{Speed [$\frac{\mu\mathrm{m}}{\mathrm{ns}}$]}\\
& & {Average }& {Maximum }& {Average }& {Maximum }& & {Propagation }& {Lateral }\\
\hline \hline
\rowcolor{gray!25}
\cellcolor{white} & {Positive }& {38.39 }& {49.20 }& {123.4 }& {170.4 }& {20.60 }& {617.1 }& {577.7}\\
\rowcolor{white}
\cellcolor{white} & {Negative }& {63.66 }& {133.2 }& {77.70 }& {158.4 }& {-- }& {-- }& {388.5}\\
\rowcolor{gray!25}
\cellcolor{white} & {Full (+) }& {40.27 }& {54.00 }& {149.72 }& {206.4 }& {14.80 }& {748.61 }& {723.77}\\
\rowcolor{white}
\cellcolor{white}\multirow{-4}{*}{$0.2\,$ns} & {Full (-) }& {64.63 }& {128.4 }& {88.20 }& {166.8 }& {-- }& {-- }& {441.0}\\
\hline
\rowcolor{gray!25}
\cellcolor{white} & {Positive }& {60.07 }& {76.80 }& {305.5 }& {410.4 }& {13.30 }& {305.5 }& {297.3}\\
\rowcolor{white}
\cellcolor{white} & {Negative }& {79.10 }& {115.2 }& {247.9 }& {372.0 }& {-- }& {-- }& {247.9}\\
\rowcolor{gray!25}
\cellcolor{white} & {Full (+) }& {65.92 }& {80.40 }& {409.17 }& {516.0 }& {10.50 }& {409.17 }& {402.31}\\
\rowcolor{white}
\cellcolor{white}\multirow{-4}{*}{$1.0\,$ns} & {Full (-) }& {97.66 }& {157.2 }& {271.0 }& {429.6 }& {-- }& {-- }& {271.0}\\
\hline
\end{tabular}
\caption{Extracted average and maximum streamer thicknesses and lengths of the DC streamer simulations at both output timestamps of $0.2\,$ns and $1.0\,$ns. The thickness and length are treated as the sum of cells perpendicular and parallel to the streamer propagation direction, respectively. The direction of the negative streamers is treated as parallel to the dielectric surface, while the angle of incidence of the positive streamers is determined in post analysis. The propagation speed is determined as the length of the streamer divided by the elapsed time. The lateral speed is the X-component of the propagation speed.}
\label{tab:SizeAndSpeed}
\end{table*}
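As a consistency check on \cref{tab:SizeAndSpeed}, the tabulated lateral speeds can be recovered as the X-projection of the propagation speed, $v_{\mathrm{lat}} = v_{\mathrm{prop}}\cos\theta$; the sketch below verifies this for the positive streamer rows, with values copied from the table and agreement to within the table's rounding.

```python
import math

# Lateral (X) speed as the projection of the propagation speed onto the
# dielectric surface: v_lat = v_prop * cos(theta), theta = propagation angle.
def lateral_speed(v_prop_um_ns: float, angle_deg: float) -> float:
    return v_prop_um_ns * math.cos(math.radians(angle_deg))

# (propagation speed [um/ns], angle [deg], tabulated lateral speed [um/ns])
positive_rows = [(617.1, 20.6, 577.7), (748.61, 14.8, 723.77),
                 (305.5, 13.3, 297.3), (409.17, 10.5, 402.31)]
for v, theta, lat_table in positive_rows:
    assert abs(lateral_speed(v, theta) - lat_table) < 0.2  # agrees after rounding
```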
On the anodic side of the dielectric, the positively charged streamer head of the positive streamer is facing the dielectric surface, which can be seen as the red charges in \cref{fig:DC BothStreamersCharge}. This positively charged area acts as a virtual anode that leads to an enhanced electric field in both the X- and Y-directions below the dielectric surface on the cathodic side. Additionally, the positive streamer has a high charge density. The enhanced field and high density promote the expansion of the negative streamer along the surface of the dielectric in the X-direction. The negative streamer thus charges the surface of the dielectric even more. These negative surface charges along the dielectric barrier on the cathodic side act as a virtual cathode, enhancing the electric field in both the X- and Y-directions above the dielectric. Thus, the negative streamer also facilitates an easier expansion of the positive streamer in the X-direction. Here it is clear that both streamers work together in a unison that increases the effective plasma surface coverage and volume of both streamers. Naturally, the electric field decays with distance from the electrodes, such that the positive and negative streamers are eventually no longer able to expand any further, even with their cooperative effect being considered. Therefore, as with the single phase streamer simulations, interactions with the neighboring discharges on the right-hand side of the simulation domain do not need to be taken into consideration.
In essence, the positive streamer and the negative streamer work together to promote propagation. Both streamers are acting against the potential energy barrier of ionization and the ever decreasing electric field strength. Therefore, with the simultaneous ignition of both positive and negative streamers in a twin SDBD system, the surface coverage and plasma volume are significantly increased when compared to a submerged symmetric SDBD system. When comparing the average lengths of the single and dual streamers in \cref{tab:SizeAndSpeed}, the positive streamer sees an increase of the propagation length by $17.6 - 25.3\,\%$ and the negative streamer sees an increase of $8.5 - 11.9\,\%$ when both streamers are simultaneously ignited.
\subsection{Dual Streamer Dynamics - AC}
\label{ACStreamers}
Due to the minimal extension of the plasma into the free space above and below the dielectric surface in the simulations discussed in \cref{SingleStreamers,DualStreamers}, the simulated area was shifted horizontally to be centered about a single electrode pair, and reduced in width. Under this geometry, geometry (b), a bipolar AC square voltage profile with fast rise and short pulse times is simulated, shown in \cref{fig:ACPulseform}. Seed electrons are placed both above and below the dielectric barrier. Under such conditions, during the first positive pulse the plasma propagates near identically to the DC case discussed in \cref{DualStreamers,fig:DC BothStreamers,fig:DC BothStreamersCharge}. However, here it is observed that two near-mirror discharges simultaneously propagate about the horizontal center axis of both the anode and cathode. For reasons of consistency, only the right half of the simulated area is shown, as seen in \cref{fig:Geom(b)}. If shown, minimal differences between the left and right discharges would be seen, which may be attributed to the stochastic nature of the PIC/MCC code and the random seed electrons implemented each time step. Additionally, the implemented rising time of the voltage waveform from $0\,$V to $+8\,$kV within $0.1\,$ns does not contribute many differences, except perhaps a slightly reduced overall density and propagation distance. The electron density distribution, positive ion density distribution, \textit{i.e.} the summation of N$_2^+$ and O$_2^+$ ions, charge disparity distribution, and electric field magnitude and direction are shown in \cref{fig:AC_Dens,fig:AC_ionDens,fig:AC_Charge,fig:AC_EField}, respectively. Sub-figures (a) through (f) of each correspond to identical timestamps of interest, shown with respect to the voltage waveform in \cref{fig:ACPulseform}.
\begin{figure*}[p]
\centering
\includegraphics[width=0.885\textwidth]{ne3x2.eps}
\caption{Spatial profiles of the electron density plotted on a logarithmic intensity scale at six chosen time stamps of the multi streamer simulations with switching voltage. Sub-figures (a) through (f) correspond to the timestamps of 0.8, 0.9, 1.0, 1.7, 1.8, and 2.0$\,$ns, respectively. The applied voltages are respectively written within the electrode profiles. Features of importance are labeled with arrows, where the annotations are as follows: I) positively charged region leading to streamer propagation, II) shaded region showing location of electron depletion, \textit{i.e.} sheath like feature, III) potential/failed/completed positive streamer branch.}
\label{fig:AC_Dens}
\end{figure*}
\begin{figure*}[p]
\centering
\includegraphics[width=0.885\textwidth]{N2O2ions.eps}
\caption{Spatial profiles of the positive ion density, \textit{i.e.} the summation of N$_2^+$ and O$_2^+$ ions, plotted on a logarithmic intensity scale at six chosen time stamps of the multi streamer simulations with switching voltage. Sub-figures (a) through (f) correspond to the timestamps of 0.8, 0.9, 1.0, 1.7, 1.8, and 2.0$\,$ns, respectively. The applied voltages are respectively written within the electrode profiles.}
\label{fig:AC_ionDens}
\end{figure*}
\begin{figure*}[p]
\centering
\includegraphics[width=0.885\textwidth]{charge3x2.eps}
\caption{Spatial profiles of the charge disparity plotted on a diverging intensity scale at six chosen time stamps of the multi streamer simulations with switching voltage. Sub-figures (a) through (f) correspond to the timestamps of 0.8, 0.9, 1.0, 1.7, 1.8, and 2.0$\,$ns, respectively. The applied voltages are respectively written within the electrode profiles. Features of importance are labeled with arrows, where the annotations are as follows: I) positively charged region leading to streamer propagation, II) surface charges which are visually hidden by the mask of the dielectric barrier, III) potential/failed/completed positive streamer branch.}
\label{fig:AC_Charge}
\end{figure*}
\begin{figure*}[p]
\centering
\includegraphics[width=0.885\textwidth]{Efield3x2.eps}
\caption{Spatial profiles of the absolute value of the electric field plotted on a linear intensity scale, as well as directional arrows, at six chosen time stamps of the multi streamer simulations with switching voltage. Sub-figures (a) through (f) correspond to the timestamps of 0.8, 0.9, 1.0, 1.7, 1.8, and 2.0$\,$ns, respectively. The applied voltages are respectively written within the electrode profiles. The cut off value for the minimum intensity scale (white) is chosen as $10^6\,$V/m. The direction of the electric field is shown via the normalized vector field as discussed in \cref{Waveform}.}
\label{fig:AC_EField}
\end{figure*}
Between $0.8\,$ns and $1.0\,$ns the applied voltage is reduced; at $0.9\,$ns the applied voltage is $0\,$V, after which the roles of the anode and cathode switch. Due to the polarity switch the electric field is reversed, thus the electrons move in the opposite directions. Free electrons present in the streamer above the dielectric move away from the now metallic cathode. Likewise, electrons from the bulk of the streamer below the dielectric move towards the now anode. Electrons along the surface of the dielectric remain attached and do not move. At $1.0\,$ns the voltage on the cathode has reached its minimum value of $-8\,$kV, where it stays constant for a further $0.7\,$ns, after which a second polarity switch takes place. All the while, the positive ion densities very closely follow the electron density profiles.
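For orientation, the switching sequence described above can be summarized as a piecewise-linear waveform; the breakpoints below are an illustrative reconstruction inferred from the quoted timestamps (rise completed by $0.1\,$ns, zero crossings at $0.9\,$ns and $1.8\,$ns), not the exact profile of \cref{fig:ACPulseform}:

```python
# Illustrative piecewise-linear reconstruction of the bipolar square waveform
# (t in ns, V in kV); breakpoint times/values inferred from the text, not exact.
BREAKPOINTS = [(0.0, 0.0), (0.1, 8.0), (0.8, 8.0),
               (1.0, -8.0), (1.7, -8.0), (1.9, 8.0), (2.0, 8.0)]

def applied_voltage(t_ns: float) -> float:
    """Linearly interpolate the applied voltage between the breakpoints."""
    for (t0, v0), (t1, v1) in zip(BREAKPOINTS, BREAKPOINTS[1:]):
        if t0 <= t_ns <= t1:
            return v0 + (v1 - v0) * (t_ns - t0) / (t1 - t0)
    raise ValueError("time outside the simulated 0-2 ns window")

print(round(applied_voltage(0.9), 6))  # 0.0 kV at the first polarity switch
print(round(applied_voltage(1.8), 6))  # 0.0 kV at the second polarity switch
```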
\subsubsection{$1^{st}$ Polarity Shift - Positive to Negative Streamer} \hfill\\
Focusing on the top half of the simulation regime follows the shift from a positive streamer to a negative streamer. As the voltage drops on the top electrode from $+8\,$kV to $0\,$V between $0.8\,$ns and $0.9\,$ns, sub-figures (a) and (b) respectively of \cref{fig:AC_Dens,fig:AC_ionDens,fig:AC_Charge,fig:AC_EField}, the electrons are not accelerated as strongly as before. The electrons relax and shift a little inwards towards the streamer bulk and the positively charged streamer head. The plasma volume slightly shrinks and the overall electron density becomes more confined and increases. The positively charged streamer head reduces in thickness and disparity, \textit{i.e.} becomes more quasi neutral. As the electrons are only weakly, or no longer, attracted to the metallic anode, a positive space charge builds up at the streamer anchor on the anode. These two effects respectively lead to the electric field strength reducing between the streamer and the dielectric surface, and a very strong electric field between the anode and the streamer anchor. At $0.9\,$ns the quasi neutral streamer has a slight net positive charge, and thus takes on the role of a virtual anode along the boundaries of the streamer, meaning that the electric field above the streamer has reversed in the X-direction, but not the Y-direction.
Between $0.9\,$ns and $1.0\,$ns, sub-figures (b) and (c) respectively, the applied voltage is negative, thus the metallic electrode is now the cathode and the dielectric surface is the anode. Due to the reversed electric field, electrons within the streamer begin falling to the dielectric surface. Along the way the remaining positive charges in the streamer head are flooded with electrons such that no charge disparity is noticeable, as it can be seen that the positive ion densities between the positive streamer and the dielectric do not change between these time steps. During this process, the electric field between the streamer and the dielectric surface completely reverses in both the X- and Y-directions. Naturally, falling electrons starting at locations where the streamer began to branch off but could not expand would reach the dielectric surface first. As the electrons are accelerated towards the dielectric surface, further ionization events take place, creating new ions and electron avalanches. The electrons that first reach the dielectric charge the surface and repel other electrons into the X-direction away from the cathode, increasing the plasma propagation length. The original streamer is now acting like a negative streamer. By $1.7\,$ns, sub-figure (d), the negative streamer has charged the top surface of the dielectric and almost doubled the lateral length of the original positive streamer.
Between $0.9\,$ns and $1.0\,$ns, the electrons near the streamer anchor/tail completely break away from the metallic cathode as they are pushed away from it; however, the positive ions do not move. This results in a net positive charge being left behind. Thus a new positively charged streamer head forms between the cathode and the streamer bulk, both above and below the streamer. Along with this new streamer head forms an extremely high electric field in the local proximity, oriented away from the positive charges towards the cathode.
Newly created electrons above the cathode and the streamer bulk, which is acting as an anode, are attracted to the streamer head and a small branch begins to form. This branch is shown in \cref{fig:AC_Dens,fig:AC_Charge}(c) with arrows labeled (III), and is also visible in \cref{fig:AC_ionDens}(c). As the simulation progresses in time, new electrons are continuously attracted towards this branch, gain energy, and eventually cause ionization. A cathode directed positively charged streamer head propagates along and floats above the cathode. A near-mirror branch simultaneously forms on the other side of the metallic grid, which is not shown. Due to the positively charged streamer heads leading both of these branches, they repel one another. Therefore, neither branch is able to reach the other. By $1.7\,$ns, sub-figure (d), the branch has completely developed. Through multiple executions, it has been observed that this branching does not take place if the applied voltage, and thus the electric field between the streamer head and cathode, is too low.
It should be noted that the initial branching has a very similar structure to the positively charged spatial region of the negative streamers in \cref{fig:DC NegativeStreamer,fig:AC_Dens,fig:AC_ionDens,fig:AC_Charge}(a), and \cref{fig:DC BothStreamers}(b). One could expect that, given a high enough voltage, the positive space charges would continue to wrap around the cathode in the same manner as the branching in \cref{fig:AC_Dens,fig:AC_Charge}(c) and (d). Therefore, the branching should not be considered as solely limited to the polarity switches, but rather as encouraged by them. As with the positive streamer in the DC case, discussed in \cref{SingleStreamers}, the authors believe these simulated branching mechanisms are accurate, even given the difficulty of experimentally observing them.
\subsubsection{$1^{st}$ Polarity Shift - Negative to Positive Streamer} \hfill\\
Focusing now on the bottom half of the simulation regime tracks the shift of the negative streamer to a positive streamer between $0.8\,$ns and $1.0\,$ns. During this time, the applied voltage is switched from $+8\,$kV to $-8\,$kV; however, the bottom electrode is held at a constant $0\,$V. As the applied voltage changes polarity, the bottom electrode also switches roles, now becoming the anode. Unlike the top half, the relaxation of the electric field causes a small shift in the bulk electrons which leads to a large increase in the streamer size, as the electrons are pushed away from the dielectric and towards the metallic electrode. Similar to the top half, the average electron density slightly increases and the charge disparity in the positively charged streamer head near the now anode reduces. This eventually leads to the streamer attaching to the anode, seen in sub-figure (c), as electrons are freely absorbed by it. This motion also leads to the creation of a strong positive ion density at the anchor position, as seen in \cref{fig:AC_ionDens}(c).
Furthermore, a small positive space charge forms between the negatively charged dielectric surface and the bulk plasma as the electrons are pushed away from the dielectric surface, but the positive ions do not move. However, the electrons that had attached to the surface do not desorb within the simulation; neither are electrons emitted due to surface field emission or via ion induced secondary electron emission, nor are electrons reflected. The newly formed positively charged head and the negative surface charges form a very high electric field in a very thin sheath like structure between the streamer and the dielectric surface by $1.0\,$ns. The positively charged streamer head is floating above the surface, which is acting as the cathode; however, due to the original proximity of the bulk plasma to the surface and the surface charges, the streamer head remains very close to the surface.
The proximity of the streamer head limits the ability of the streamer to propagate into the X-direction. As electrons are continuously pushed away from the dielectric surface, the thickness of the streamer head and consequently of the sheath like region increase. Eventually, near the ``tip'' of this region along the X-direction, newly generated electrons outside of the plasma bulk are sufficiently attracted towards the positive charges. This leads to the streamer head curling around the ``tip'' of the streamer bulk, providing a virtual anode for further newly created electrons to be attracted to. Sufficiently energetic electrons will promote propagation further into the X-direction, extending the plasma. This propagation also significantly extends in the Y-direction away from the dielectric surface, as electrons created near the surface will not gain enough energy for ionization. This causes the streamer to properly float above the dielectric surface, which can be seen at $1.7\,$ns in sub-figure (d), as expected of a cathode directed positively charged streamer head. The increased propagation length is not as significant as for the streamer on the top half of the dielectric, due to the limiting effect exhibited by the surface streamer. If the surface of the dielectric were not considered as a pure absorber, then the emission and reflection features would provide an additional electron source that would promote the expansion and propagation of the streamer after the voltage had switched.
\subsubsection{$2^{nd}$ Polarity Shift} \hfill\\
Between $1.7\,$ns and $1.9\,$ns, the applied voltage begins to switch again, this time rising from $-8\,$kV to $+8\,$kV. At $1.8\,$ns, the second polarity change occurs. Due to limited computational resources, the simulation was not executed for a second full positive cycle, and was instead ended at $2.0\,$ns. During this polarity switch, the same changes in the positive and negative streamers are observed as before.
On the bottom half of the simulation, the shift from a positive streamer at $1.7\,$ns, sub-figure (d), to a negative streamer is observed. When the applied voltage is $0\,$V at $1.8\,$ns, sub-figure (e), it can be seen that the floating positively charged streamer head is beginning to be flooded, while a new positively charged streamer head is forming near the metallic cathode. By $2.0\,$ns, sub-figure (f), the streamer bulk has mostly reached the dielectric surface again, has expanded further in the X-direction, and a new positive streamer branch forms near the cathode. It is expected that this branch would behave like the one discussed above.
On the top half of the simulation, not only is the shift from a negative to positive streamer observed, but also the beginning of the collapse of the positive streamer branch. As already explained and expected, at $1.8\,$ns the positively charged streamer heads of both the negative streamer and the streamer branch near the metallic electrode are flooded by electrons moving towards the new anode. At $2.0\,$ns the anchor of the main streamer on the anode is fully formed; however, the electrons within the branch have a further distance to travel and have not yet reached the anode. As the polarity switches at $1.8\,$ns, electrons near the dielectric surface are repelled away and a floating positively charged streamer head forms. At $2.0\,$ns this streamer head is beginning to wrap around the large streamer bulk to promote further expansion in both the X- and Y-directions, away from the metallic anode and the dielectric surface respectively.
\subsubsection{\DIFadd{Summary of Polarity Shifts}} \hfill\\
\DIFadd{During both polarity shifts, similar and important events take place on the respective positive and negative streamers. The Negative streamer is initially attached to the anodic dielectric surface, and floating away from the metallic cathode. As the polarity changes, the electrons reverse in direction, attaching to the metallic anode and forming a positively charged streamer head near the dielectric surface. Newly created electrons are quickly attracted to the streamer head and as such allow for the now positive streamer to further propagate into the X- and and Y-directions, thereby increasing the volume and overall density of the streamer. The positive streamer is initially floating away from the cathodic dielectric surface, and attached to the metallic anode. As the polarity changes, electron avalanches are instigated and rush towards the dielectric surface, thereby drastically increasing the plasma density, volume, propagation length, and surface coverage. Additionally, as a positively charged streamer head and sheath like region form near the metallic cathode, newly created electrons are able to instigate an additional positive streamer branch that floats above the metallic cathode. This branching feature also drastically increased the electron density and volume. Given a high enough initial voltage, it is expected that this positive streamer branch could form on the negative streamer before any polarity switching occurs.}
\DIFadd{The increase in plasma densities, volume, and surface coverage are expected to be directly beneficial to various applications such as plasma enhanced catalysis and gas treatment. In plasma enhanced catalysis, the dielectric surface will typically be coated with a catalyst, such that any increase in surface coverage directly increases the active area of the catalyst. Additionally, any increase in plasma volume and density will naturally increase the radical densities which are available to react with either the catalytic surface and or the treatment gas that the plasma is ignited in, thus directly affecting the efficiency of the process}\DIFaddend .
\section{Conclusions and Outlook}
\label{Conclusion}
In this work, the plasma streamer propagation of a twin SDBD setup was modeled in dry air under DC and AC voltage operation by means of a PIC/MCC code. The AC driving voltage waveform corresponded to a nanosecond square waveform with sub-nanosecond risetimes. The twin SDBD geometry, being fully exposed and symmetric about the dielectric layer, promotes both positive and negative streamer discharges to ignite simultaneously along the edges of both the anode and the cathode. This symmetry has not been theoretically investigated extensively, leaving open the question of, among others, how the streamers affect one another. In order to provide insight into this question, multiple scenarios were simulated. First, the propagation of a positive streamer and a negative streamer were simulated individually under identical DC conditions. Second, both streamers were allowed to propagate using the same DC conditions, thereby providing insight into the interplay of the two streamers. However, the main focus of the paper is on how the streamers interact and change under AC conditions; therefore, a short multi-nanosecond bipolar square pulse is used to approximate said conditions.
It was first shown that both the positive and negative streamers behave as expected under DC conditions. Both streamers form a quasineutral bulk. The positive streamer forms and propagates via a floating, cathode directed, positively charged streamer head, while the negative streamer propagates via an electron avalanche along the surface of the dielectric barrier. The negative streamer also forms a positive space charge that floats above the metallic cathode. The floating positive space charges of both the positive and negative streamer must float, as new electrons introduced between the cathode and said space charges are not able to gain enough energy for new ionization events. It was then shown that the interaction of both streamers under DC conditions does not significantly alter the propagation methods, but that the positive streamer ``pulls'' the negative streamer while simultaneously being ``pushed'' by the negative streamer, effectively increasing the surface coverage and the densities of the plasma streamers. The speed of propagation of both streamers differs when simulated individually versus simultaneously. The positively charged streamer head of the positive streamer propagates away from the anode, providing enhanced electric fields that the negative charges of the avalanche of the negative streamer then follow. Likewise, the negative streamer charges the dielectric surface, which then helps to push the positively charged streamer head of the positive streamer further away from the anode.
Next, the interactions of the two streamers under switching voltage conditions were investigated. The fast polarity switching of the applied voltage causes significant changes in the streamers. The switch from a positive streamer to a negative streamer, and vice versa, was observed to cause a significant increase in both plasma size and density due to effects similar to those that take place during the DC scenario. It was also observed that additional positive streamer branches are able to form between the negative streamer and the cathode under the given conditions. The initial branching structure is very similar to structures that formed on the negative streamer during DC conditions and during the AC conditions before any voltage switches. It is therefore hypothesized that the voltage switching allows a branch to form more easily, but that branching is still subject to some minimum necessary applied voltage for a given set of geometrical conditions.
Overall, an electrode geometry allowing two oppositely-phased plasmas to ignite simultaneously is beneficial with respect to plasma size and density. The two fully exposed electrodes create strongly curled electric fields that promote the ignition of plasma streamers near the surface of the dielectric. The simultaneous ignition of the streamers enhances the lateral electric fields, causing the streamers to propagate further away from the metallic electrodes than they would if one electrode were submerged. This effect is enhanced even further if the applied voltage is able to switch polarities quickly, before the streamers have a chance to self-extinguish; however, such a fast voltage profile is difficult to achieve experimentally, which the reader should keep in mind when comparing any numerical results from this paper. Nonetheless, the enhanced electric fields also allow the plasma to achieve higher densities, which is desirable in many applications.
In plasma enhanced catalysis applications, one might want to coat the dielectric surface with a catalyst. An enhanced plasma propagation length would directly correlate with an increased surface area of the catalyst that is directly affected by the plasma, leading to a potentially enhanced efficiency. In gas treatment applications, an increased plasma density is typically desirable in order to increase the rate of molecular fragmentation and/or purification. Future experimental measurements and theoretical and/or numerical investigations of the electrode geometry could optimize an electrode system for a given set of applications. Additional simulations of a porous catalytic coating attached to the dielectric surface would provide further insight into plasma enhanced catalysis applications.
\section{Acknowledgements}
This work is supported by the German Research Foundation (DFG) with the Collaborative Research Centre CRC1316 projects A4 and A5 and the Scientific Research Foundation from Dalian University of Technology, DUT19RC(3)045, and the National Science Foundation of China Grant No. 12020101005.
\section*{ORCID iDs}
\begin{table}[h]
\begin{tabular}{l p{4cm}}
Q. Z. Zhang: & \url{https://orcid.org/0000-0002-5726-0829} \\
R. T. Nguyen-Smith: & \url{https://orcid.org/0000-0002-5755-4595}\\
F. Beckfeld: & \url{https://orcid.org/0000-0001-8605-2634}\\
Y. Liu: & \url{https://orcid.org/0000-0002-2680-1338}\\
T. Mussenbrock: & \url{http://orcid.org/0000-0001-6445-4990} \\
J. Schulze: & \url{https://orcid.org/0000-0001-7929-5734}\\
\end{tabular}
\end{table}
\printbibliography
\end{document}
\section{Introduction}
Dielectric barrier discharges (DBDs) are plasma discharges incorporating at least one layer of dielectric material separating the two electrodes. The dielectric barrier limits the charge transfer and thus the current flow, typically producing a non-thermal plasma at atmospheric conditions. This non-thermal nature allows for the efficient generation of reactive species, thereby providing multiple possibilities in biomedical, surface, and industrial applications \cite{Brandenburg2017,HHKim2004}. DBDs are classifiable into two main categories: volume and surface DBDs. Volume dielectric barrier discharges (VDBDs) have both a gas gap and a dielectric barrier present between the two electrodes, producing either homogeneous or filamentary plasmas depending on the conditions \cite{Kogelschatz2010}. Surface dielectric barrier discharges (SDBDs), on the other hand, have only the dielectric layer directly separating the two electrodes; a plasma is thereby only able to ignite along the surface of the dielectric. Due to the possibility of having a thin structure, SDBDs may have particularly low flow resistance and are therefore commonly researched for gas treatment or flow control purposes \cite{Brandenburg2017,Moreau2007,Mueller2007,Corke2010,HHKim2004}. SDBDs can be built in many unique geometrical configurations of varying symmetry, providing either a single axis or multiple axes for plasma propagation. They may also allow for either a single phase (anodic or cathodic) plasma, or a dual phase ignition process.
Throughout the 1990s, SDBDs were thoroughly investigated as potential actuators for gas flow control \cite{Brandenburg2017,HHKim2004,Moreau2007,Corke2010}. For such purposes an asymmetric geometry, where one electrode is offset from the opposite electrode and possibly completely submerged by the dielectric, is typically used \cite{Corke2010,Akishev2012,Audier2014,Biganzoli2012,Debien2012,GAO2017,Peng2019,Xiahua2016,Soloviev2017,Starikovskii2009,Unfer2010,Che2012,Hu2018,Shao2013,Soloviev2018,Opaits2008,Sato2019}. Much effort has been put into controlling the plasma behaviors, such as densities and surface charge deposition, and their corresponding aerodynamic effects in said SDBD configurations \cite{Opaits2008,Corke2010,Opaits2012,Audier2014,Sato2019}. It has also been shown that AC and pulsed waveforms can significantly modulate the plasma profiles at positive and negative voltage phases \cite{Akishev2012,Audier2014,Biganzoli2012,Che2012,Debien2012,Hu2018,Soloviev2017,Soloviev2018,Starikovskii2009,Unfer2010}.
In recent years, SDBDs have undergone extensive investigation for gas purification in industrial and environmental protection applications \cite{Brandenburg2017, Mueller2007,HHKim2004}. Absolutely calibrated two wavelength emission spectroscopy has been used in order to characterize a symmetric SDBD under tailored voltage waveforms \cite{Offerhaus2017,Offerhaus2018,Offerhaus2019}. The waveform under experimental investigation is a damped sine wave with a period of multiple $\mu$s and an adjustable peak to peak voltage, pulsed in the kHz regime. Additional emission spectroscopy, absorption spectroscopy, and Fourier transform infrared (FTIR) spectroscopy methods have also been used to measure various species densities and chemical modifications of cystine. Furthermore, flame ionization detectors, gas chromatography-mass spectrometry, and ion energy analyzer quadrupole mass spectrometry are all being used to investigate and characterize the conversion of volatile organic compounds into non-harmful and non-toxic compounds \cite{Schuecke2020}. Additionally, the inclusion of pre-heating of the gas and of catalyst coatings is being investigated for higher conversion efficiencies \cite{Schuecke2020,Peters2021}.
In many applications, like chemical processing and gas purification, the interaction between a plasma and a catalyst yields synergistic effects resulting in enhanced performance \cite{HHKim2004,HHKim1999}. As such, various structures of catalytic material are often inserted into traditional DBD reactors including, but not limited to: spheres, honeycombs, 3D fibre deposition structures, and coatings of the dielectric barrier itself \cite{Zhang2018,HHKim1999}. The synergistic effect is obtained via two primary methods. Firstly, the altered geometry along with tailored voltage waveforms influences the discharge characteristics \cite{Brandenburg2017,HHKim2004,Zhang2018,HHKim2016,Zhang2015}. Secondly, the plasma distribution determines the effective contact area of the catalyst, thereby altering the morphology and work function of the catalyst \cite{Neyts2014,Zhang2017}. This places great importance on generating a controllable plasma density and spatial distribution \cite{Brandenburg2017,HHKim2004,Zhang2018,HHKim2016,Shang2019}.
The above studies, although very interesting, were mostly based on experiments with submerged SDBDs, where the plasma discharge is confined to one side of the dielectric plate, providing investigations only into a single phase ignition process \cite{Akishev2012,Audier2014,Biganzoli2012,Corke2010,Debien2012,GAO2017,Moreau2007,Opaits2012,Peng2019,Xiahua2016,Shang2019,Soloviev2017,Starikovskii2009}. That is to say that only either an anodic or a cathodic phase plasma is present, but never both simultaneously. This single phase nature limits the effective volume and surface area of the plasma, which in turn defines the effective catalytic surface area exposed to the plasma species in plasma enhanced catalysis. As such, the catalyst performance is potentially limited to a great extent in a single phase SDBD. In gas treatment conditions, an SDBD electrode system is very likely to be placed along the central plane parallel to the gas flow in order to minimize flow resistance and increase the treatment volume. Under these conditions, it is very clear that utilizing an SDBD electrode system which ignites on both sides of the dielectric plate will improve the treatment volume, and as such the efficiency of the process.
Unfortunately, most theoretical investigations utilizing circuit models \cite{Pipa2012,Peeters2014,Pipa2020_PowerDBDEQC}, global models, molecular dynamics models \cite{Neyts2014}, fluid models \cite{Che2012,Peng2019,Soloviev2018}, and even particle-in-cell/Monte Carlo collision (PIC/MCC) models \cite{Zhang2015,Zhang2017,Zhang2018} of (S)DBDs and packed bed reactors provide limited insights into the underlying mechanisms of the plasma propagation \cite{Mujahid2018,Mujahid2020,mujahid2020Propagation}. No contributions on the theoretical investigation of a dual phase symmetric SDBD could be found by the authors, pointing to a significant lack of knowledge of such configurations. The inherent mechanisms behind the evolution of the plasma discharge in asymmetric, and even more so in symmetric, SDBDs are still not fully understood. It is not yet clear how simultaneous positive and negative surface streamers (above and below the dielectric) interact with each other, and to what extent, if any, they enhance one another. It is not clear how the streamers respond to tailored voltage waveforms, nor what the optimized conditions are for generating large treatment volumes. It is unknown to what extent the surface streamers interact with an active surface such as a catalyst. These are crucial pieces of information to ensure good plasma enhanced catalysis performance. Additionally, many experiments, such as optical emission spectroscopy, still face open questions as to whether their results are more representative of the streamer bulk or of the highly dynamic streamer head. These concerns demand a more detailed simulation of the dynamic behavior of the positive and negative streamers in a dual phase symmetric SDBD during the ignition process.
\begin{figure}[t]
\centering
\includegraphics[width=0.4425\textwidth]{NegativStreamer_Initial-eps-converted-to.pdf}
\caption{Schematic detailing the negative streamer formed via an anode oriented electron avalanche.}
\label{fig:NegativeStreamer}
\end{figure}
Therefore, in the present work we computationally investigate the plasma propagation of a symmetric, dual phase SDBD, hereby referred to as the twin SDBD, under various voltage waveform conditions. The particular geometry of the twin SDBD ensures that both an anodic and a cathodic phase plasma are simultaneously ignited, separated by the dielectric barrier, and physically symmetric about the metallic electrodes. The symmetric geometry does not only give rise to a higher plasma surface coverage, but also enables a direct comparison between the positive streamers on the anode side and the negative streamers on the cathode side, as well as the interaction between the two. The numerical investigations are carried out by means of the 2D PIC/MCC simulation software VSim, a multi-physics simulation tool which combines the Finite-Difference Time-Domain (FDTD), PIC, and charged fluid (finite volume) methods for simulating electrical gas discharges \cite{NIETER2004}. The insights provided by this work are applicable not only to the twin SDBD and similar geometries, but also to other SDBD geometries, asymmetric ones included, via a deeper understanding of the streamer propagation and form.
\begin{figure}[t]
\centering
\includegraphics[width=0.4425\textwidth]{PositivStreamer_Initial-eps-converted-to.pdf}
\caption{Schematic detailing the positive streamer, which forms via a cathode oriented propagation front.}
\label{fig:PositiveStreamer}
\end{figure}
To provide a basis for understanding the streamer dynamics in a twin SDBD, which will be revealed in this work, we briefly recall the fundamentals of positive and negative streamer dynamics in a DBD. A negative streamer, see \cref{fig:NegativeStreamer}, ignites through an anode oriented electron avalanche: electrons, which are accelerated against the direction of the electric field, collide with the background gas. Ionization takes place, causing an exponential growth of electrons and ions and creating a quasineutral bulk plasma that propagates from the cathode to the anode. A positive streamer, see \cref{fig:PositiveStreamer}, is also created via electron collisions, but is somewhat more complex. The cathode oriented, positively charged streamer head attracts electrons, which cause ionization in front of the streamer head, resulting in an ionization wave. This ionization wave propagates from the anode to the cathode, leaving behind a quasineutral bulk plasma. Branches may form from the streamer head, creating additional ionization waves; branching is more readily observed in gas mixtures that are susceptible to self induced photo ionization. On short timescales, a few nanoseconds and less, a feature very similar to a low pressure sheath forms: the positive streamer head floats above the cathode due to an absence of available electrons, thus creating a region with a very strong electric field. Given an appropriate amount of time, the positive ions do reach the cathode due to their own velocities. At the dielectric(s), any charges that reach the surface adhere to and charge it. These surface charges repel incoming like charges along the surface, causing both positive and negative streamers to spread out. Due to the low mass of the electrons, this effect is more prominent in negative streamers; however, the floating nature of positive streamers can also facilitate a similar effect. For a deeper understanding we refer the reader to Nijdam \textit{et al.} and to Zhang \textit{et al.} \cite{Nijdam2020,Zhang2021}, where the dynamics of positive and negative streamers in a VDBD are detailed via PIC/MCC simulations.
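The exponential electron multiplication at the heart of both streamer types can be illustrated with a one-line Townsend-style estimate. The coefficient value below is purely illustrative (our own rough order of magnitude, not a value used in the simulations of this work):

```python
import math

# Illustrative Townsend-style estimate of avalanche growth (not a quantity
# computed by the simulations). alpha_eff is an ASSUMED effective ionization
# coefficient; ~5e5 1/m is only a rough order of magnitude for air at
# atmospheric pressure in strong streamer-head fields.

def avalanche_electrons(n0, alpha_eff, x):
    """Electron number after an avalanche of length x (m),
    starting from n0 electrons: n(x) = n0 * exp(alpha_eff * x)."""
    return n0 * math.exp(alpha_eff * x)

print(f"growth over 100 um: {avalanche_electrons(1.0, 5e5, 100e-6):.3g}x")
```

Even over a tenth of a millimeter, such an estimate gives an enormous multiplication factor, which is why a quasineutral bulk forms within fractions of a nanosecond.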
\begin{figure}[t]
\centering
\includegraphics[width=0.4425\textwidth]{SDBD_Elektrode.png}
\caption{Computer generated graphic showing the physical structure of the SDBD electrode under consideration. A metallic lattice (dark grey structure) is printed symmetrically on both the top (visible) and bottom (hidden) faces of the Al$_2$O$_3$ dielectric barrier (light grey material). Due to the strong curvature of the electric field lines when under operation, the plasma (purple structure) ignites along the edges of the metallic lattice.}
\label{fig:Electrode}
\end{figure}
This paper is structured as follows: First in \cref{Model} the computational model and geometry are described. Following this, in \cref{Results} the results of the various simulations are presented: the DC results in sub-\cref{SingleStreamers,DualStreamers}, and the AC results in sub-\cref{ACStreamers}. Finally, in \cref{Conclusion} our closing remarks and conclusions are discussed.
\section{Computational model}
\label{Model}
\begin{figure}[t]
\centering
\includegraphics[width=0.4425\textwidth]{NeuElektrode_GitterLinie0001.jpg}
\caption{SEM image of the electrode cross section. The bulk, homogeneous material is the Al$_2$O$_3$ dielectric. The hump-like structure with larger grains is the metallic electrode trace.}
\label{fig:SEM}
\end{figure}
The geometry to be simulated is chosen to resemble that of the twin SDBD electrode intended for use in gas treatment applications, first experimentally presented in \cite{Offerhaus2017} and subsequently in \cite{Offerhaus2018,Offerhaus2019,Kogelheide2019,Schuecke2020}. The authors refer the reader to these references for a detailed description of the twin SDBD system in question. It is important to reiterate that this device consists of a dielectric plate with metallic grids placed on both surfaces of the dielectric. A computer rendered sketch of the system can be seen in \cref{fig:Electrode}. These grids serve as electrodes. The system is built with both geometric and electrical symmetry, such that both a positive and a negative streamer are simultaneously ignited on either side of the dielectric under any sufficiently high applied voltage, which thereby warrants the name ``twin SDBD''. The metallic traces of the electrode system have been imaged with a scanning electron microscope for a more accurate depiction of the electrodes within the simulations. An example image of the cross sectional view of the metallic traces can be seen in \cref{fig:SEM}, which shows the curved nature of the metallic traces located on the dielectric; this curvature is included in the simulation.
\subsection{Particle in Cell/Monte Carlo Collision model}
\label{Sim Model}
\begin{figure}[t]
\centering
\includegraphics[width=0.4425\textwidth]{MCC_PIC_Flowdens-eps-converted-to.pdf}
\caption{Logic flow diagram of the PIC/MCC algorithm. One complete loop of the flow diagram represents one time step of the PIC/MCC code. During each successive time step of the simulation, all sub-algorithms are performed: particles are pushed, merged, collided, and generated; the densities are determined and analyzed for electrical forces.}
\label{fig:ModelFlow}
\end{figure}
A 2D PIC/MCC model is used to study the plasma propagation of the twin SDBD based on the VSim simulation software \cite{NIETER2004}. VSim is widely used and has been validated \cite{NIETER2004,Zhang2015,Zhang2017}. As these investigations took place under conditions similar to those presented here (atmospheric pressure DBDs, nanosecond timescales, and micrometer length scales), we operate under the assumption that our model is also valid. Additionally, PIC/MCC simulations of the COST-Jet at atmospheric pressure yield realistic results that agree well with experiments \cite{Bischoff2018,Korolov2019,Korolov2020}, proving that PIC/MCC models can indeed be used at atmospheric conditions. The PIC/MCC simulations performed in VSim are based on an explicit solver and the electrostatic approximation of Maxwell's equations, which are described in detail in \cite{Birdsall1991}. The PIC/MCC model has the advantage of accounting for the detailed kinetic behavior of the charged particles, which may be important for the evolution of electron avalanches and branching mechanisms, and therefore for the plasma streamer profiles. Air at atmospheric pressure is used as the discharge gas, with a constant density of background molecules, $80\,\%$ N$_2$ and $20\,\%$ O$_2$, at $300\,$K. Free electrons and N$_2^+$, O$_2^+$, and O$_2^-$ ions are traced throughout the simulation and are represented as super-particles, i.e. one super-particle corresponds to a certain number of real particles defined by its numerical weighting, initially starting at $20\cdot10^3$ real particles per super-particle \cite{Birdsall1991}.
In order to numerically initiate the plasma discharge, a uniform distribution of seed electrons is placed within the free space of the simulated geometry. These seed electron super-particles have a density corresponding to $1\cdot10^{15}\,$m$^{-3}$. Realistically, seed electrons are present due to cosmic radiation and environmental photo-ionization producing background electrons, as well as due to remaining charges from previous plasma discharges. The initial electron density was chosen in order to increase the initial weighting of the super-particles, and thereby the simulation speed. The high initial density increases the speed of the initial electron avalanches and streamer breakdown. As seen later on, the maximum achieved densities are on the order of $1\cdot10^{22}\,$m$^{-3}$, which is much higher than the initial density; therefore, the final profiles and mechanisms would not change if a lower initial density were chosen. Thus, the high initial density serves to increase the simulation speed while not altering the results of the simulations. It should be noted that the usage of uniform seed electrons does not consider local effects of previous discharges.
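As a back-of-envelope check (our own estimate, not a figure from the simulations), the stated seed density and initial weighting imply the following initial super-particle count for geometry(a), assuming a unit depth for the 2D model in the third dimension:

```python
# Back-of-envelope estimate (ours, not from the paper): the number of seed
# super-particles the initial loading implies for geometry(a). The unit
# depth of the 2D model in the third dimension is an ASSUMPTION.
seed_density = 1e15                    # seed electron density, m^-3
weight = 20e3                          # real electrons per super-particle
lx, ly, depth = 9.6e-3, 1.2e-3, 1.0    # geometry(a) extent and assumed depth, m

n_super = seed_density * lx * ly * depth / weight
print(f"implied seed super-particles: {n_super:.3g}")
```

This illustrates why the elevated seed density is computationally convenient: a lower, more realistic density at the same weighting would require proportionally more super-particles, or a lower weighting, to resolve the same statistics.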
As the plasma streamers evolve, the particle number of each considered species will rapidly increase due to the ionization avalanches. To account for this and to reduce the computation time, the weight of each super-particle is adaptive. A merger algorithm conserving both momentum and energy will combine same species super-particles when the number of said super-particles exceeds a threshold value of 10 super-particles respective to each cell of the simulation mesh. As the particle numbers only increase within the considered simulated time, no de-merger algorithm is implemented. This adaptive weight and merger algorithm is described in more detail in \cite{Zhang2017}.
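A minimal sketch of a pairwise merge step may clarify the bookkeeping involved. This is our own simplified illustration, not the actual algorithm of \cite{Zhang2017}: merging two super-particles into one conserves weight and momentum but not kinetic energy, so a complete scheme must additionally redistribute the returned energy deficit, e.g. by merging into two particles rather than one:

```python
import numpy as np

# Simplified illustration of a pairwise super-particle merge (NOT the
# scheme used in the paper). Weight and momentum are conserved exactly;
# the kinetic-energy deficit is returned so that a complete algorithm
# can redistribute it and conserve energy as well.

def merge_pair(w1, v1, w2, v2):
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    w = w1 + w2                      # total weight conserved
    v = (w1 * v1 + w2 * v2) / w      # momentum-conserving velocity
    e_before = 0.5 * (w1 * (v1 @ v1) + w2 * (v2 @ v2))
    e_after = 0.5 * w * (v @ v)
    return w, v, e_before - e_after  # energy deficit (>= 0), per unit mass

w, v, deficit = merge_pair(1.0, [1.0, 0.0], 1.0, [-1.0, 0.0])
print(w, v, deficit)  # head-on pair: momenta cancel, all energy becomes deficit
```

The head-on example makes the need for energy redistribution obvious: the merged particle is at rest, so naive merging would silently cool the plasma.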
Elastic, excitation, ionization, and attachment collisions of electrons with O$_2$ and N$_2$ gas molecules make up the considered reaction mechanisms, as explained in more detail in \cite{Zhang2017}. The corresponding cross sections and threshold energies are adopted from the LXCat database and the literature \cite{LiebermannAndLichtenberg,Furman2002,A_V_Phelps1999,PANCHESHNYI2012,LXCATdatabase}. At the surface of the dielectric barrier, only electron absorption is considered, \textit{i.e.} no electron reflection or surface electron emission is considered. As reported in \cite{Zhang2015,Zhang2017}, the inclusion of secondary electron emission (SEE) surface coefficients does not significantly alter the form of the simulated positive streamers, due to the floating nature of the streamer head. The negative streamer, however, propagates along the surface of the dielectric barrier, and as such SEE coefficients would be more critical there. The inclusion of SEE coefficients would theoretically increase the number of ``background'' electrons available for streamer propagation, such that the streamers would propagate faster; their forms, however, should not change strongly. Additionally, due to the lower electric fields of the negative streamer and the very short considered timescales, the effect of ion induced SEE would be very limited within this investigation.
\begin{figure*}[t]
\centering
\subfloat{
\label{fig:Geom(a)}
\includegraphics[width=0.885\textwidth]{GeoLarge-eps-converted-to.pdf}}
\\
\subfloat{
\label{fig:Geom(b)}
\includegraphics[width=0.885\textwidth]{GeoSmall-eps-converted-to.pdf}}
\caption{Schematic of the simulation regimes. Subfigures (a) and (b) correspond to the DC and AC simulated geometries, respectively. The color scale corresponds to the different materials as follows: I) air ($80\,\%$ N$_2$ and $20\,\%$ O$_2$), II) Al$_2$O$_3$ dielectric, III) grounded electrode, IV) powered electrode. The boxed-in regions denoted with (i) correspond to the regions that are presented in greater detail throughout the rest of the publication.}
\label{fig:Geometry}
\end{figure*}
With each successive time step of the model, the particle pusher, particle merger, and Monte Carlo collision algorithms are executed in succession for all particle species. After the collisions, a new electron super-particle is added to the simulation regime, the density of each cell is calculated, and Poisson's equation is solved in order to obtain the electric forces acting on each particle, after which the cycle repeats. A diagram of the general flow is shown in \cref{fig:ModelFlow}.
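The ordering of the sub-algorithms can be summarized in a structural sketch; the function names are placeholders of our own choosing (not VSim's API), and the stubs only record the call order so the skeleton is runnable:

```python
# Structural sketch of one PIC/MCC cycle as described above. The function
# names are our own placeholders (not VSim's API); the stubs only record
# the call order so the skeleton is runnable.

calls = []

def push(particles, grid, dt): calls.append("push")
def merge(particles, threshold): calls.append("merge")
def monte_carlo_collisions(particles): calls.append("collide")
def inject_seed_electron(particles): calls.append("seed")
def deposit_charge_density(particles, grid): calls.append("deposit")
def solve_poisson(grid): calls.append("poisson")
def update_efield(grid): calls.append("efield")

def pic_mcc_step(particles, grid, dt):
    """One complete loop of the flow diagram."""
    push(particles, grid, dt)                # advance positions and velocities
    merge(particles, threshold=10)           # cap super-particles per cell
    monte_carlo_collisions(particles)        # elastic/excitation/ionization/attachment
    inject_seed_electron(particles)          # one new electron super-particle
    deposit_charge_density(particles, grid)  # cell densities
    solve_poisson(grid)                      # electrostatic approximation
    update_efield(grid)                      # forces for the next push

pic_mcc_step(particles=None, grid=None, dt=1e-12)
print(calls)
```

The key design point is that the field solve happens after all particle operations, so every push uses forces consistent with the charge distribution of the previous step.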
\subsection{Simulated geometry}
\label{Sim Geometry}
The geometry to be simulated is a cross section of the twin SDBD described in \cite{Offerhaus2017,Offerhaus2018,Offerhaus2019,Kogelheide2019,Schuecke2020} and shown in \cref{fig:Electrode}. The twin SDBD simultaneously produces positive and negative phased plasma streamers along the edges of the metallic traces; however, the two phases are separated by the Al$_2$O$_3$ dielectric barrier. On either side of the dielectric barrier, ignition on opposite edges of the respective metallic trace can be considered as two individual but same-phased streamers. Two different simulation geometries, referred to as geometry(a) and geometry(b), are considered in order to appropriately resolve the interaction of both the same-phased and the opposite-phased plasma streamers. Simulation geometry(a) and simulation geometry(b) are presented in \cref{fig:Geometry}. In total, geometry(a) contains a 2D plane that is $9.6\,$mm $\times$ $1.2\,$mm in Cartesian X and Y coordinates. The plane is uniformly divided into square cells with a unit length of $2.4\,\mu$m, resulting in a square lattice of 4000 $\times$ 500 cells. The grid size was chosen based on the Courant limit, $c\cdot dt<dx$, where $c$ is the speed of light and $dx$ is the grid size. Geometry(b) utilizes the same grid cell size, but uses only 1000 $\times$ 500 cells, resulting in a total width of $2.4\,$mm. For ease of comparison, results from a zoomed-in region of 500 $\times$ 500 cells from both simulated geometries are presented for the rest of the paper. The respective regions are outlined by a dashed line and annotated with $(i)$ in \cref{fig:Geometry}.
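The stated grid numbers are internally consistent, as a quick check shows; the time step bound is our own deduction from the Courant limit, not a quoted simulation parameter:

```python
# Consistency check of the stated grid numbers (the dt bound is our own
# deduction from the Courant limit c*dt < dx, not a quoted parameter).
c = 299_792_458.0    # speed of light, m/s
dx = 2.4e-6          # grid spacing, m

dt_max = dx / c      # upper bound on the explicit time step
print(f"dt must stay below {dt_max:.3g} s")

assert abs(4000 * dx - 9.6e-3) < 1e-12   # geometry(a) width
assert abs(1000 * dx - 2.4e-3) < 1e-12   # geometry(b) width
```

The resulting bound of a few femtoseconds shows why the simulated durations of only a couple of nanoseconds already require on the order of $10^5$ explicit steps.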
\begin{figure*}[t]
\centering
\includegraphics[width=0.885\textwidth]{Efield_arrows-eps-converted-to.pdf}
\caption{Electric field distribution of the simulated electrode geometries for an applied $+8\,$kV and $-8\,$kV potential in (a) and (b) respectively. The magnitude of the electric field is plotted on a linear intensity color scale, where the threshold value for the minimum intensity is chosen to be $1\cdot10^{6}\,$V/m. The normalized direction of the electric field is shown via the vector field.}
\label{fig:EField}
\includegraphics[width=0.885\textwidth]{Potential_arrows-eps-converted-to.pdf}
\caption{Electric potential distribution of the simulated electrode geometries for an applied $+8\,$kV and $-8\,$kV potential in (a) and (b) respectively. The electric potential is plotted on a linear intensity color scale. Additionally, the normalized direction of the electric field is shown via the vector field.}
\label{fig:EPotential}
\end{figure*}
Firstly, to investigate the interaction of two same-phase streamers, positive-positive or negative-negative, two anodes and two cathodes are included in simulation geometry(a). The two same-phase electrodes are simulated at the same potential under DC conditions and are separated in the X-direction by $9.5\,$mm, corresponding to the distance between the edges of two neighboring, parallel metallic traces of the physical electrode. In order to minimize the computational time, the X boundaries of geometry(a) correspond to the vertical center lines of the metallic traces. Simulation geometry(a) may be seen in \cref{fig:Geom(a)}. Later, in \cref{SingleStreamers,DualStreamers}, it is deduced that minimal interaction is observed between two same-phase streamers. This is due to the limited spatial propagation of the plasma streamers on the considered timescales. It is therefore appropriate to simulate a section centered about just one metallic trace on the same timescales, and thus a second simulation geometry is investigated. Simulation geometry(b) considers only one set of electrodes, is simulated only under AC conditions, and is centered about the X-axis, with the walls $1.2\,$mm away from either side of its center line. Concerns about the reduced simulation domain affecting the calculated electric field strengths are mitigated by the field strength naturally decreasing rapidly with the square of the distance from the electrodes. The use of Neumann boundary conditions additionally improves the accuracy, as the simulation walls are not forced to a specific potential. Simulation geometry(b) may be seen in \cref{fig:Geom(b)}.
Both considered geometries of the 2D PIC/MCC model represent a cross-sectional view of the electrode structure, where the anodes and cathodes are separated along the Y-axis by the dielectric barrier. The dielectric is located in the middle of the Y-axis, is $0.500\,$mm thick, extends across the whole X-direction, and is simulated with a dielectric constant of 9. In this representation, the Z-direction would equate to the length (or width) of the physical electrode setup but is mathematically treated as constant/homogeneous. This results in a simulation regime that is most valid for a planar section in the middle of any grid structure. In both geometries, the electrode structure itself is a geometrical composition of multiple tangent arcs resulting in a ``hump''-like structure. This structure approximates the real geometry of the metallic traces, which can be seen in \cref{fig:SEM}. It should be noted that the simulated aspect ratios of the electrode thickness and width to the dielectric thickness are significantly different from reality; however, this was chosen in order to avoid the numerical issues that would arise from using an appropriately sized simulation grid for realistic aspect ratios. Furthermore, the reduced dielectric thickness of the simulations versus the actual electrode configuration should not lead to any major differences in the interpretations of this paper, as it is the surface of the dielectric that plays the much more important role. By using a reduced dielectric thickness, we are able to increase the number of computational cells available for the plasma propagation without increasing the entire simulation domain.
Particle densities and electric fields are resolved using a cutting-cell technique, through contributions of neighboring cells, in order to handle the irregular geometry. The authors refer the reader to references \cite{Smithe2008,Meierbachtol2015,loverich2010} for more information. Neumann boundary conditions are used in all directions to ensure a smooth electric potential distribution at the boundaries of the simulation walls. The timesteps are non-adaptive and fixed at $2\cdot10^{-13}\,$s. Similar to \cite{Likhanskii2010}, a single new electron super-particle is randomly added to the simulation domain at each timestep in order to account for random events such as cosmic radiation, photo-ionization, \textit{etc.}, as described in \cite{Ebert2006,E_M_van_Veldhuizen2002,Qiu2017}. These random events are beyond the scope of the available VSim functions. The seed electrons, both the background and the newly loaded ones, are sufficient to support streamer propagation while not interfering with the plasma bulk, as they are far fewer in number than the generated plasma. The generated plasma density profile is also much smaller than the simulation domain in both considered geometries.
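The random seed-electron loading described above amounts to drawing one uniformly distributed position per timestep; a minimal sketch, in which the domain bounds and the dictionary representation of a super-particle are illustrative only (not the VSim loader):

```python
import random

# Illustrative domain bounds matching geometry(a): 9.6 mm x 1.2 mm (in meters).
X_MAX, Y_MAX = 9.6e-3, 1.2e-3

def load_seed_electron(rng=random):
    """Return one new electron super-particle at a uniformly random
    position in the simulation domain (sketch; not the VSim loader)."""
    return {"x": rng.uniform(0.0, X_MAX),
            "y": rng.uniform(0.0, Y_MAX),
            "species": "electron"}

# One new super-particle per timestep, e.g. over 1000 timesteps:
seeded = [load_seed_electron() for _ in range(1000)]
```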
\subsection{Waveform variation}
\label{Waveform}
In all considered simulations and both geometries, the electrode(s) above the dielectric barrier are treated as the powered electrode(s), while the bottom electrode(s) are held constant at $0\,$V. This choice is arbitrary and, due to the physical symmetry of the system, would provide only mirrored results if the opposite choice, either inverted polarity and/or choice of powered electrode, were made. Initially, a constant positive $8\,$kV potential is applied to geometry(a); thus the two powered electrodes take the role of the anodes while the bottom two are the cathodes. The initial electric field distribution can be seen in \cref{fig:EField}(a) and the initial potential distribution can be seen in \cref{fig:EPotential}(a). Within both figures, the magnitude of the presented quantity is shown via the color scale, and the normalized direction of the electric field is additionally presented for further clarity. The normalized direction is presented as a vector field, where the X and Y directions of the vectors are the normalized X and Y values of the electric field at that grid cell. Naturally, the magnitude of the electric field is obtained from the square root of the sum of the squared X and Y components: $E_{mag} = \sqrt{E_X^2 + E_Y^2}$.
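The magnitude and the normalized direction used in the vector-field plots follow directly from the X and Y components of the field; a short numpy sketch (the function name is ours):

```python
import numpy as np

def field_magnitude_and_direction(Ex, Ey):
    """Return |E| = sqrt(Ex^2 + Ey^2) and the unit vectors used for
    the normalized arrow plots (zero-field cells are left as zero)."""
    mag = np.sqrt(Ex**2 + Ey**2)
    safe = np.where(mag > 0.0, mag, 1.0)  # avoid division by zero
    return mag, Ex / safe, Ey / safe

# Two example cells: a (3, 4) field vector and a zero-field cell.
Ex = np.array([3.0, 0.0])
Ey = np.array([4.0, 0.0])
mag, ux, uy = field_magnitude_and_direction(Ex, Ey)
print(mag)  # [5. 0.]
```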
First, in order to investigate solely the role of the positive streamers, only the top half of the simulation area is seeded with the initial electrons. Second, the bottom half is instead seeded in a subsequent simulation in order to investigate solely the negative streamers. Third, both halves are identically seeded, thereby investigating the interplay and differences of both discharges igniting simultaneously under the DC voltage conditions. These three conditions are applied to geometry(a) only. Lastly, a varying voltage waveform is investigated.
\begin{figure}[t]
\centering
\includegraphics[width=0.4425\textwidth]{ACVoltage-eps-converted-to.pdf}
\caption{Applied voltage waveform of the AC simulations. Dashed lines labeled a through f at 0.8, 0.9, 1.0, 1.7, 1.8, and 2.0$\,$ns respectively represent the timestamps at which results are presented in \cref{DualStreamers}.}
\label{fig:ACPulseform}
\end{figure}
Geometry(b) is only investigated under the AC conditions shown in \cref{fig:ACPulseform}. Under these conditions, the roles of the anode and cathode switch twice, thereby giving insights into the extreme dynamics of fast voltage switching on the streamers. Initially, the applied voltage sharply rises within $0.1\,$ns to the $8\,$kV maximum, which is then held constant for $0.7\,$ns. During this time, the anode is located on the top side of the dielectric barrier. At $0.8\,$ns, the voltage is decreased at the same rate, $80\,$kV/ns, reaching the minimum applied voltage of $-8\,$kV at $1\,$ns and making the top side of the dielectric barrier the cathode. Again, this minimum value is held constant for $0.7\,$ns until switching back to the positive $8\,$kV potential, again switching the locations of the anode and cathode. Without considering any plasma propagation, the base electric field distributions for both a positive and a negative applied potential are shown in \cref{fig:EField}, and the equivalent potential distributions can be seen in \cref{fig:EPotential}.
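The applied waveform of \cref{fig:ACPulseform} is piecewise linear and can be written down explicitly; a sketch in ns and kV, using the $0.1\,$ns ($80\,$kV/ns) ramps and hold times given above:

```python
def applied_voltage_kV(t_ns):
    """Piecewise-linear AC waveform: 0 -> +8 kV within 0.1 ns, hold to
    0.8 ns, fall at 80 kV/ns to -8 kV at 1.0 ns, hold to 1.7 ns, then
    rise back to +8 kV at 1.9 ns (sketch of the waveform in the text)."""
    if t_ns < 0.1:
        return 80.0 * t_ns
    if t_ns < 0.8:
        return 8.0
    if t_ns < 1.0:
        return 8.0 - 80.0 * (t_ns - 0.8)
    if t_ns < 1.7:
        return -8.0
    if t_ns < 1.9:
        return -8.0 + 80.0 * (t_ns - 1.7)
    return 8.0

print(applied_voltage_kV(0.5))  # 8.0 (first positive plateau)
```

At $t=0.9\,$ns the function passes through $0\,$V, consistent with the polarity-switch midpoint discussed in the results.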
All conditions are simulated for up to a maximum of $2\,$ns, thereby only revealing the inception phase of the streamers. The insights revealed within \cref{Results} are consistent with other PIC/MCC models investigating DBD streamers on structured and porous catalytic surfaces \cite{Zhang2018,Zhang2018Porous}, which are also simulated on ns timescales. Additionally, the phenomenon of a floating positive surface discharge is also observed in various fluid models \cite{Babaeva2016,Yan2014}. Therefore, the authors believe the results presented throughout this paper, even given the short timescales, are reasonable. The results reported below are meant for a qualitative understanding of the streamer dynamics in a twin SDBD. General conclusions for more natural voltage waveforms, such as continuous sine waves, can be drawn, and could warrant further studies considering a real RF source. However, the results obtained in this work are particularly relevant for tailored voltage waveforms, a hot topic of current research that is trending towards shorter pulses and steeper rise times.
\section{Results and Discussion}
\label{Results}
\subsection{Single Streamer Dynamics} \label{SingleStreamers}
\begin{figure*}[t]
\centering
\includegraphics[width=0.885\textwidth]{ne_up_log-eps-converted-to.pdf}
\caption{Spatial profiles of the electron density plotted on a logarithmic intensity scale of the positive streamer simulations with constant voltage. Sub figures (a) and (b) correspond to the timestamps of 0.2 and $1.0\,$ns, respectively. Features of importance are labeled, where the annotations are as follows: I) positively charged streamer head leading to streamer propagation, II) shaded region showing location of electron depletion, \textit{i.e.} sheath like feature, III) potential/failed positive streamer branch.}
\label{fig:DC PositiveStreamer}
\end{figure*}
Under the $8\,$kV DC conditions with seed electrons present only on the anodic side of geometry(a), the propagation of an anodic phased plasma streamer, also known as a positive streamer, is simulated and presented in \cref{fig:DC PositiveStreamer}. The initial electric field distribution is shown in \cref{fig:EField}(a) and the initial electric potential distribution is shown in \cref{fig:EPotential}(a). Under these conditions, a cathode-oriented, positively charged streamer head forms that is able to move freely from the metallic anode to the dielectric surface.
\begin{figure*}[t]
\centering
\includegraphics[width=0.885\textwidth]{ne_down_log-eps-converted-to.pdf}
\caption{Spatial profiles of the electron density plotted on a logarithmic intensity scale of the negative streamer simulations with constant voltage. Sub figures (a) and (b) correspond to the timestamps of 0.2 and $1.0\,$ns, respectively. Features of importance are labeled, where the annotations are as follows: I) positively charged region leading to positive streamer like propagation, II) shaded region showing location of electron depletion, \textit{i.e.} sheath like feature.}
\label{fig:DC NegativeStreamer}
\end{figure*}
The streamer structure is anchored to the anode just above where the highest electric fields are located. It would be expected that the anchoring would take place at the location of the highest electric field; however, under these conditions this is located at the intersection of the electrode and the dielectric surface. At this point, and immediately next to it, due to the strong curvature of the electric field, electrons do not have enough space to gain sufficient energy for ionization. Multiple executions of the simulation produce anchor positions at the same location; furthermore, the anchor is also found at the symmetrical position on the opposite anode, which is not presented in \cref{fig:DC PositiveStreamer}. This suggests that the anchor positions itself based on the strong curvature of the anode, and not through the randomness of the ionization events. Indeed, when looking at the curvature of the simulated electrode, the plasma appears to anchor next to the strongest curvature. Under no conditions did the simulated positive streamers extend a significant amount in the X-direction, such that interactions between the two positive streamers do not need to be considered.
At $0.2\,$ns the positive streamer has advanced $0.12\,$mm, corresponding to a propagation speed of $0.62\,$mm/ns. By the end of the simulated time, $1.0\,$ns, the streamer had largely stopped propagating. The positive streamer had reached a propagation distance of $0.31\,$mm, resulting in an average speed of $0.31\,$mm/ns. The actual instantaneous speed of the streamer would be significantly slower at this timestamp, as the average includes the faster propagation of the early streamer. It was observed via multiple test executions that these propagation speeds and distances were highly dependent on the initial background electron density. With lower initial densities, the simulated streamer propagates a shorter distance; likewise, larger background densities result in faster speeds and longer propagation distances.
Initially, the positive streamer began to propagate along the electric field lines at an angle offset from the surface of the dielectric barrier. The positive streamer head, which is not directly visible in \cref{fig:DC PositiveStreamer}, forms in front of the streamer and along the bottom side, between the bulk plasma and the dielectric barrier. The streamer head is annotated in \cref{fig:DC PositiveStreamer} with an arrow labeled (I). Between the dielectric barrier and the positively charged streamer head lies a sheath-like region, annotated via (II), where free electrons are attracted to the streamer head; however, they do not have enough space to promote further propagation towards the dielectric. Therefore, the only direction possible is outwards along the X- and positive Y-directions, towards the center of the simulated area. As the streamer continues to propagate along this direction, the electric field weakens proportionally to $1/r$ (in 2D) or $1/r^2$ (in 3D), where $r$ is the distance from the electrode. Thus the positive streamer is able to advance in a somewhat straight line, parallel to the initial trajectory, which is at some angle to the dielectric surface; under the presented conditions this trajectory angle was determined to be $20.6^\circ$. The further the streamer propagates, the more space is available for propagation in the negative Y-direction, towards the dielectric surface. Therefore, in \cref{fig:DC PositiveStreamer}(b), a potential branch had begun to take shape, annotated with (III); however, it is not able to fully develop. Since the cathode is located underneath the positive streamer, the streamer head can only form there; therefore, no branching occurs above the streamer bulk.
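The $1/r$ versus $1/r^2$ weakening of the field ahead of the streamer can be made concrete with the textbook scalings for a line charge (the 2D cross section) and a point charge (3D); the prefactors below are arbitrary normalizations chosen for illustration:

```python
def E_2d(r, E1=1.0):
    """Field of a line charge (2D cross section): E ~ 1/r,
    normalized so that E_2d(1) == E1. Prefactor is arbitrary."""
    return E1 / r

def E_3d(r, E1=1.0):
    """Field of a point charge (3D): E ~ 1/r^2, same normalization."""
    return E1 / r**2

# Doubling the distance halves the 2D field but quarters the 3D field.
print(E_2d(2.0), E_3d(2.0))  # 0.5 0.25
```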
Due to the location of the failed branch in \cref{fig:DC PositiveStreamer}(b)(III), it would be extremely difficult to observe experimentally, and it is noticeable within these simulations because of the kinetic nature of PIC/MCC models. Naturally, without experimental evidence, the reader might question whether branching really forms at these orientations. The authors believe that the simulations are indeed accurate in predicting these features.
In \cref{fig:DC NegativeStreamer} the same simulation conditions are presented, except that the initial seed electrons are on the cathode side of the dielectric barrier; thus the negative streamer is simulated. The seed electrons are still accelerated in the direction opposite to the electric field lines shown in \cref{fig:EField}(a). An electron avalanche directed towards the anode initiates the discharge. Under these conditions the electrons are pushed towards the dielectric, where they begin to collect on and charge its surface. A positively charged spatial region forms next to the cathode but is unable to anchor to it, instead floating at some distance away.
\begin{figure*}[t]
\centering
\includegraphics[width=0.885\textwidth]{ne_all_log-eps-converted-to.pdf}
\caption{Spatial profiles of the electron density plotted on a logarithmic intensity scale of the dual streamer simulations with constant voltage. Sub figures (a) and (b) correspond to the timestamps of 0.2 and $1.0\,$ns, respectively. Features of importance are labeled, where the annotations are as follows: I) positively charged region/streamer head, II) location of electron depletion, \textit{i.e.} sheath-like feature, III) potential/failed positive streamer branch.}
\label{fig:DC BothStreamers}
\end{figure*}
Newly created background electrons are pushed away from the cathode. Simultaneously, the electrons are attracted towards the positively charged region. Outside of the sheath region between the two, marked via an arrow labeled (II) in \cref{fig:DC NegativeStreamer}, these two directions oppose one another. Only a very small number of electrons is accelerated towards the positive charges with enough energy to cause ionization. Therefore, minimal propagation of the negative streamer parallel to the cathode surface takes place, as depicted via (I). Newly created background and avalanche electrons that reach the dielectric surface, instead of the positively charged spatial region, help to promote the propagation of the negative streamer along the surface of the dielectric, in the X-direction away from the cathode and towards the center of the simulation area. However, no distinctly visible negatively charged streamer head is directly observable.
At $0.2\,$ns the negative streamer has advanced $0.077\,$mm, corresponding to a propagation speed of $0.39\,$mm/ns. By the end of the simulated time, $1.0\,$ns, the streamer had largely stopped propagating. The negative streamer had reached a propagation distance of $0.25\,$mm, resulting in an average speed of $0.25\,$mm/ns. The actual instantaneous speed of the streamer would be significantly slower at this timestamp, as the average includes the faster propagation of the early streamer. As with the positive streamer, lower and higher initial electron densities result in shorter and longer propagation distances, respectively. Furthermore, under no conditions did the two simulated negative streamers next to the cathodes extend a significant amount in the X-direction, such that interactions between the two negative streamers do not need to be considered.
\subsection{Dual Streamer Dynamics - DC}
\label{DualStreamers}
\begin{figure*}[t]
\centering
\includegraphics[width=0.885\textwidth]{Charge_all-eps-converted-to.pdf}
\caption{Spatial profiles of the charge disparity plotted on a diverging intensity scale of the dual streamer simulations with constant voltage. Sub figures (a) and (b) correspond to the timestamps of 0.2 and $1.0\,$ns, respectively. Features of importance are labeled with arrows, where the annotations are as follows: I) positively charged region/streamer head, II) surface charges which are visually hidden by the mask of the dielectric barrier, III) potential/failed positive streamer branch.}
\label{fig:DC BothStreamersCharge}
\end{figure*}
Presented in \cref{fig:DC BothStreamers,fig:DC BothStreamersCharge} is the complete DC scenario, where seed electrons are present on both the anodic and cathodic sides of the dielectric barrier. The same positive $8\,$kV DC voltage is used. Comparing \cref{fig:DC PositiveStreamer}(a), \cref{fig:DC NegativeStreamer}(a), and \cref{fig:DC BothStreamers}(a), a small difference is observed at $0.2\,$ns: the sizes and overall densities of both the positive and negative streamers have increased. The positive streamer has advanced $0.15\,$mm while the negative streamer has advanced $0.088\,$mm away from the anodes and cathodes, respectively. By $1.0\,$ns both streamers have significantly increased in size and average density compared to \cref{fig:DC PositiveStreamer}(b) and \cref{fig:DC NegativeStreamer}(b). Failed branches on the positive streamer are still present. The positive streamer has advanced a total of $0.41\,$mm while the negative streamer has advanced a total of $0.27\,$mm. \Cref{tab:SizeAndSpeed} summarizes the streamer thickness, length, propagation angle, and propagation speed for the positive and negative streamers under all three simulation conditions. The propagation angle is determined as the angle at which the positive streamer propagates away from the dielectric surface, and is treated as $0^\circ$ for the negative streamer. The streamer length and thickness are the extents of the streamer parallel and perpendicular to the propagation direction, respectively.
On the anodic side of the dielectric, the positively charged streamer head of the positive streamer faces the dielectric surface, which can be seen as the red charges in \cref{fig:DC BothStreamersCharge}. This positively charged area acts as a virtual anode that leads to an enhanced electric field in both the X- and Y-directions below the dielectric surface, on the cathodic side. Additionally, the positive streamer has a high charge density. The enhanced field and high density promote the expansion of the negative streamer along the surface of the dielectric in the X-direction. The negative streamer thus charges the surface of the dielectric even more. These negative surface charges along the dielectric barrier on the cathodic side act as a virtual cathode, enhancing the electric field in both the X- and Y-directions above the dielectric. Thus, the negative streamer also facilitates an easier expansion of the positive streamer in the X-direction. Here it is clear that both streamers work together in unison, increasing the effective plasma surface coverage and volume of both streamers. Naturally, the electric field reduces proportionally to the square of the distance from the electrodes, such that the positive and negative streamers are eventually no longer able to expand any further, even with their cooperative effect considered. Therefore, as with the single-phase streamer simulations, interactions with the neighboring discharges on the right-hand side of the simulation domain do not need to be taken into consideration.
\begin{table*}[t]
\centering
\begin{tabular}{|c|l|c c|c c|c|c c|}
\hline
\multirow{2}{*}{Time} & \multirow{2}{*}{DC Streamer} & \multicolumn{2}{c|}{Thickness [$\mu$m]} & \multicolumn{2}{c|}{Length [$\mu$m]} & Angle [$^\circ$] & \multicolumn{2}{c|}{Speed [$\frac{\mu\mathrm{m}}{\mathrm {ns}}$]}\\
& & Average & Maximum & Average & Maximum & & Propagation & Lateral \\
\hline \hline
\rowcolor{gray!25}
\cellcolor{white} & Positive & 38.39 & 49.20 & 123.4 & 170.4 & 20.60 & 617.1 & 577.7\\
\rowcolor{white}
\cellcolor{white} & Negative & 63.66 & 133.2 & 77.70 & 158.4 & -- & -- & 388.5\\
\rowcolor{gray!25}
\cellcolor{white} & Full (+) & 40.27 & 54.00 & 149.72 & 206.4 & 14.80 & 748.61 & 723.77\\
\rowcolor{white}
\cellcolor{white}\multirow{-4}{*}{$0.2\,$ns} & Full (-) & 64.63 & 128.4 & 88.20 & 166.8 & -- & -- & 441.0\\
\hline
\rowcolor{gray!25}
\cellcolor{white} & Positive & 60.07 & 76.80 & 305.5 & 410.4 & 13.30 & 305.5 & 297.3\\
\rowcolor{white}
\cellcolor{white} & Negative & 79.10 & 115.2 & 247.9 & 372.0 & -- & -- & 247.9\\
\rowcolor{gray!25}
\cellcolor{white} & Full (+) & 65.92 & 80.40 & 409.17 & 516.0 & 10.50 & 409.17 & 402.31\\
\rowcolor{white}
\cellcolor{white}\multirow{-4}{*}{$1.0\,$ns} & Full (-) & 97.66 & 157.2 & 271.0 & 429.6 & -- & -- & 271.0\\
\hline
\end{tabular}
\caption{Extracted average and maximum streamer thicknesses and lengths of the DC streamer simulations at both output timestamps of $0.2\,$ns and $1.0\,$ns. The thickness and length are treated as the sum of cells perpendicular and parallel to the streamer propagation direction, respectively. The direction of the negative streamers is treated as parallel to the dielectric surface, while the angle of incidence of the positive streamers is determined in post analysis. The propagation speed is determined as the length of the streamer divided by the elapsed time. The lateral speed is the X-component of the propagation speed.}
\label{tab:SizeAndSpeed}
\end{table*}
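Since the lateral speed is the X-component of the propagation speed, the tabulated positive-streamer values in \cref{tab:SizeAndSpeed} should satisfy lateral $\approx$ propagation $\cdot\cos(\text{angle})$; a quick consistency check of the tabulated values:

```python
import math

# (propagation speed [um/ns], angle [deg], tabulated lateral speed [um/ns])
# for the positive streamers in tab:SizeAndSpeed.
rows = [
    (617.1,  20.6, 577.7),    # Positive, 0.2 ns
    (748.61, 14.8, 723.77),   # Full (+), 0.2 ns
    (305.5,  13.3, 297.3),    # Positive, 1.0 ns
    (409.17, 10.5, 402.31),   # Full (+), 1.0 ns
]

for v_prop, angle_deg, v_lat in rows:
    recomputed = v_prop * math.cos(math.radians(angle_deg))
    # Agreement to within the rounding of the tabulated values.
    print(f"{recomputed:.1f} vs {v_lat}")
```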
In essence, the positive and negative streamers work together to promote propagation. Both streamers are acting against the potential energy barrier of ionization and the ever decreasing electric field strength. Therefore, with the simultaneous ignition of both positive and negative streamers in a twin SDBD system, the surface coverage and plasma volume are significantly increased when compared to a submerged symmetric SDBD system. When comparing the average lengths of the single and dual streamers in \cref{tab:SizeAndSpeed}, the positive streamer sees an increase of the propagation length by $17.6 - 25.3\,\%$ and the negative streamer sees an increase of $8.5 - 11.9\,\%$ when both streamers are simultaneously ignited.
\subsection{Dual Streamer Dynamics - AC}
\label{ACStreamers}
Due to the minimal extension of the plasma into the free space above and below the dielectric surface in the simulations discussed in \cref{SingleStreamers,DualStreamers}, the simulated area was shifted horizontally to be centered about a single electrode pair and reduced in width. Under this geometry, geometry(b), a bipolar AC square voltage profile with fast rise and short pulse times is simulated, shown in \cref{fig:ACPulseform}. Seed electrons are placed both above and below the dielectric barrier. Under such conditions, during the first positive pulse the plasma propagates nearly identically to the DC case discussed in \cref{DualStreamers,fig:DC BothStreamers,fig:DC BothStreamersCharge}. However, here it is observed that two near-mirror discharges simultaneously propagate about the horizontal center axis of both the anode and cathode. For reasons of consistency, only the right half of the simulated area is shown, as seen in \cref{fig:Geom(b)}. If shown, minimal differences between the left and right discharges would be seen, attributable to the stochastic nature of the PIC/MCC code and the random seed electrons implemented each timestep. Additionally, the implemented rise of the voltage waveform from $0\,$V to $+8\,$kV within $0.1\,$ns introduces few differences, except perhaps a slightly reduced overall density and propagation distance. The electron density distribution, positive ion density distribution, \textit{i.e.} the summation of N$_2^+$ and O$_2^+$ ions, charge disparity distribution, and electric field magnitude and direction are shown in \cref{fig:AC_Dens,fig:AC_ionDens,fig:AC_Charge,fig:AC_EField}, respectively. Sub-figures (a) through (f) of each correspond to identical timestamps of interest, shown with respect to the voltage waveform in \cref{fig:ACPulseform}.
\begin{figure*}[p]
\centering
\includegraphics[width=0.885\textwidth]{ne3x2-eps-converted-to.pdf}
\caption{Spatial profiles of the electron density plotted on a logarithmic intensity scale at six chosen time stamps of the multi streamer simulations with switching voltage. Sub figures (a) through (f) correspond to the timestamps of 0.8, 0.9, 1.0, 1.7, 1.8, and 2.0$\,$ns, respectively. The applied voltages are respectively written within the electrode profiles. Features of importance are labeled with arrows, where the annotations are as follows: I) positively charged region leading to streamer propagation, II) shaded region showing location of electron depletion, \textit{i.e.} sheath-like feature, III) potential/failed/completed positive streamer branch.}
\label{fig:AC_Dens}
\end{figure*}
\begin{figure*}[p]
\centering
\includegraphics[width=0.885\textwidth]{N2O2ions-eps-converted-to.pdf}
\caption{Spatial profiles of the positive ion density, \textit{i.e.} the summation of N$_2^+$ and O$_2^+$ ions, plotted on a logarithmic intensity scale at six chosen time stamps of the multi streamer simulations with switching voltage. Sub figures (a) through (f) correspond to the timestamps of 0.8, 0.9, 1.0, 1.7, 1.8, and 2.0$\,$ns, respectively. The applied voltages are respectively written within the electrode profiles.}
\label{fig:AC_ionDens}
\end{figure*}
\begin{figure*}[p]
\centering
\includegraphics[width=0.885\textwidth]{charge3x2-eps-converted-to.pdf}
\caption{Spatial profiles of the charge disparity plotted on a diverging intensity scale at six chosen time stamps of the multi streamer simulations with switching voltage. Sub figures (a) through (f) correspond to the timestamps of 0.8, 0.9, 1.0, 1.7, 1.8, and 2.0$\,$ns, respectively. The applied voltages are respectively written within the electrode profiles. Features of importance are labeled with arrows, where the annotations are as follows: I) positively charged region leading to streamer propagation, II) surface charges which are visually hidden by the mask of the dielectric barrier, III) potential/failed/completed positive streamer branch.}
\label{fig:AC_Charge}
\end{figure*}
\begin{figure*}[p]
\centering
\includegraphics[width=0.885\textwidth]{Efield3x2-eps-converted-to.pdf}
\caption{Spatial profiles of the absolute value of the electric field plotted on a linear intensity scale, together with directional arrows, at six chosen time stamps of the multi streamer simulations with switching voltage. Sub figures (a) through (f) correspond to the timestamps of 0.8, 0.9, 1.0, 1.7, 1.8, and 2.0$\,$ns, respectively. The applied voltages are respectively written within the electrode profiles. The cut-off value for the minimum of the intensity scale (white) is chosen as $1\cdot10^{6}\,$V/m. The direction of the electric field is shown via the normalized vector field, as discussed in \cref{Waveform}.}
\label{fig:AC_EField}
\end{figure*}
Between $0.8\,$ns and $1.0\,$ns the applied voltage is reduced; at $0.9\,$ns the applied voltage is $0\,$V, after which the roles of the anode and cathode switch. Due to the polarity switch the electric field is reversed, and thus the electrons move in the opposite direction. Free electrons present in the streamer above the dielectric move away from the now metallic cathode. Likewise, electrons from the bulk of the streamer below the dielectric move towards the new anode. Electrons along the surface of the dielectric remain attached and do not move. At $1.0\,$ns the voltage on the cathode has reached its minimum value of $-8\,$kV, where it stays constant for a further $0.7\,$ns, after which a second polarity switch takes place. All the while, the positive ion densities very closely follow the electron density profiles.
\subsubsection{$1^{st}$ Polarity Shift - Positive to Negative streamer} \hfill\\
Paying attention to the top half of the simulation regime focuses on the shift from a positive streamer to a negative streamer. As the voltage on the top electrode drops from $+8\,$kV to $0\,$V between $0.8\,$ns and $0.9\,$ns, sub-figures (a) and (b) respectively of \cref{fig:AC_Dens,fig:AC_ionDens,fig:AC_Charge,fig:AC_EField}, the electrons are not accelerated as strongly as before. The electrons relax and shift slightly inwards, towards the streamer bulk and the positively charged streamer head. The plasma volume slightly shrinks while the overall electron density becomes more concentrated and increases. The positively charged streamer head reduces in thickness and disparity, \textit{i.e.} becomes more quasi-neutral. As the electrons are no longer as strongly attracted to the metallic anode, a positive space charge builds up at the streamer anchor on the anode. These two effects respectively lead to a reduced electric field strength between the streamer and the dielectric surface, and a very strong electric field between the anode and the streamer anchor. At $0.9\,$ns the quasi-neutral streamer has a slight net positive charge and thus takes on the role of a virtual anode along the boundaries of the streamer, meaning that the electric field above the streamer has reversed in the X-direction, but not the Y-direction.
Between $0.9\,$ns and $1.0\,$ns, sub-figures (b) and (c) respectively, the applied voltage is negative; the metallic electrode is now the cathode and the dielectric surface acts as the anode. Due to the reversed electric field, electrons within the streamer begin falling to the dielectric surface. Along the way, the remaining positive charges in the streamer head are flooded with electrons such that no charge disparity is noticeable; indeed, the positive ion densities between the positive streamer and the dielectric do not change between these time steps. During this process, the electric field between the streamer and the dielectric surface completely reverses in both the X- and Y-directions. Naturally, falling electrons starting at locations where the streamer began to branch off but could not expand reach the dielectric surface first. As the electrons are accelerated towards the dielectric surface, further ionization events take place, creating new ions and electron avalanches. The electrons that first reach the dielectric charge the surface and repel other electrons in the X-direction away from the cathode, increasing the plasma propagation length. The original streamer now acts like a negative streamer. By $1.7\,$ns, sub-figure (d), the negative streamer has charged the top surface of the dielectric and almost doubled the lateral length of the original positive streamer.
Between $0.9\,$ns and $1.0\,$ns, the electrons near the streamer anchor/tail completely break away from the metallic cathode as they are pushed away from it; however, the positive ions do not move. This results in a net positive charge being left behind. Thus a new positively charged streamer head forms between the cathode and the streamer bulk, both above and below the streamer. Along with this new streamer head, an extremely high electric field forms in the local proximity, oriented away from the positive charges towards the cathode.
Newly created electrons above the cathode and the streamer bulk, which is acting as an anode, are attracted to the streamer head and a small branch begins to form. This branch is shown in \cref{fig:AC_Dens,fig:AC_Charge}(c) with arrows labeled (III), and is also visible in \cref{fig:AC_ionDens}(c). As the simulation progresses in time, new electrons are continuously attracted towards this branch, gain energy, and eventually cause ionization. A cathode directed positively charged streamer head propagates along and floats above the cathode. A near mirror branch simultaneously forms on the other side of the metallic grid, which is not shown. Due to the positively charged streamer heads leading both of these branches, they repel one another. Therefore, neither branch is able to reach the other. By $1.7\,$ns, sub-figure (d), the branch has completely developed. Through multiple executions, it has been observed that this branching does not take place if the applied voltage, and thus electric field between the streamer head and cathode, is too low. It should be noted that the initial branching has a very similar structure to the positively charged spatial region of the negative streamers in \cref{fig:DC NegativeStreamer,fig:AC_Dens,fig:AC_ionDens,fig:AC_Charge}(a), and \cref{fig:DC BothStreamers}(b). One could expect that given a high enough voltage, the positive space charges would continue to wrap around the cathode in the same manner as the branching in \cref{fig:AC_Dens,fig:AC_Charge}(c) and (d). Therefore, the branching should not be considered as solely limited to the polarity switches, but rather that they are encouraged by the polarity switches. As with the positive streamer in the DC case, discussed in \cref{SingleStreamers}, the authors believe these simulated branching mechanisms are accurate, even given the difficulty of experimentally observing them.
\subsubsection{$1^{st}$ Polarity Shift - Negative to Positive Streamer} \hfill\\
Focusing now on the bottom half of the simulation regime tracks the shift of the negative streamer to a positive streamer between $0.8\,$ns and $1.0\,$ns. During this time, the applied voltage is switched from $+8\,$kV to $-8\,$kV; however, the bottom electrode is held at a constant $0\,$V. As the applied voltage changes polarity, the bottom electrode also switches roles, now becoming the anode. Unlike the top half, the relaxation of the electric field causes a small shift in the bulk electrons which leads to a large increase in the streamer size, as the electrons are pushed away from the dielectric and towards the metallic electrode. Similar to the top half, the average electron density slightly increases and the charge disparity in the positively charged streamer head near the new anode reduces. This eventually leads to the streamer attaching to the anode, seen in sub-figure (c), as electrons are freely absorbed by it. This motion also leads to the creation of a strong positive ion density at the anchor position, as seen in \cref{fig:AC_ionDens}(c).
Furthermore, a small positive space charge forms between the negatively charged dielectric surface and the bulk plasma as the electrons are pushed away from the dielectric surface, but the positive ions do not move. However, the electrons that had attached to the surface do not desorb within the simulation; neither are electrons emitted via surface field emission or ion-induced secondary electron emission, nor are electrons reflected. The newly formed positively charged head and the negative surface charges form a very high electric field in a very thin sheath-like structure between the streamer and the dielectric surface by $1.0\,$ns. The positively charged streamer head floats above the surface, which acts as the cathode; however, due to the original proximity of the bulk plasma to the surface and the surface charges, the streamer head remains very close to the surface.
The proximity of the streamer head limits the ability of the streamer to propagate in the X-direction. As electrons are continuously pushed away from the dielectric surface, the thickness of the streamer head and consequently of the sheath-like region increase. Eventually, near the ``tip'' of this region along the X-direction, newly generated electrons outside of the plasma bulk are sufficiently attracted towards the positive charges. This leads to the streamer head curling around the ``tip'' of the streamer bulk, providing a virtual anode for further newly created electrons to be attracted to. Sufficiently energetic electrons promote propagation further in the X-direction, extending the plasma. This propagation also extends significantly in the Y-direction away from the dielectric surface, as electrons created near the surface do not gain enough energy for ionization. This causes the streamer to properly float above the dielectric surface, which can be seen at $1.7\,$ns in sub-figure (d), as expected of a cathode directed positively charged streamer head. The increase in propagation length is not as significant as for the streamer on the top half of the dielectric, due to the limiting effect that the surface streamer exhibited. If the surface of the dielectric were not treated as a pure absorber, the emission and reflection features would provide an additional electron source that would promote the expansion and propagation of the streamer after the voltage had switched.
\subsubsection{$2^{nd}$ Polarity Shift} \hfill\\
Between $1.7\,$ns and $1.9\,$ns, the applied voltage begins to switch again, this time rising from $-8\,$kV to $+8\,$kV. At $1.8\,$ns, the second polarity change occurs. Due to limited computational resources, the simulation was not executed for a second full positive cycle, and was instead ended at $2.0\,$ns. During this polarity switch, the same changes in the positive and negative streamers are observed.
On the bottom half of the simulation, the shift from a positive streamer at $1.7\,$ns, sub-figure (d), to a negative streamer is observed. When the applied voltage is $0\,$V at $1.8\,$ns, sub-figure (e), it can be seen that the floating positively charged streamer head is beginning to be flooded, while a new positively charged streamer head is forming near the metallic cathode. By $2.0\,$ns, sub-figure (f), the streamer bulk has mostly reached the dielectric surface again, has expanded further in the X-direction, and a new positive streamer branch forms near the cathode. This branch is expected to behave as the one discussed above.
On the top half of the simulation, not only is the shift from a negative to a positive streamer observed, but also the beginning of the collapse of the positive streamer branch. As already explained and expected, at $1.8\,$ns the positively charged streamer heads of both the negative streamer and the streamer branch near the metallic electrode are flooded by electrons moving towards the new anode. At $2.0\,$ns the anchor of the main streamer on the anode is fully formed; however, the electrons within the branch have a further distance to travel and have not yet reached the anode. As the polarity switches at $1.8\,$ns, electrons near the dielectric surface are repelled away and a floating positively charged streamer head forms. At $2.0\,$ns this streamer head is beginning to wrap around the large streamer bulk to promote further expansion in the X- and Y-directions, away from the metallic anode and dielectric surface respectively.
\subsubsection{Summary of Polarity Shifts} \hfill\\
During both polarity shifts, similar and important events take place on the respective positive and negative streamers. The negative streamer is initially attached to the anodic dielectric surface, and floats away from the metallic cathode. As the polarity changes, the electrons reverse direction, attaching to the metallic anode and forming a positively charged streamer head near the dielectric surface. Newly created electrons are quickly attracted to the streamer head and thus allow the now positive streamer to propagate further in the X- and Y-directions, thereby increasing the volume and overall density of the streamer. The positive streamer initially floats away from the cathodic dielectric surface, and is attached to the metallic anode. As the polarity changes, electron avalanches are instigated and rush towards the dielectric surface, thereby drastically increasing the plasma density, volume, propagation length, and surface coverage. Additionally, as a positively charged streamer head and sheath-like region form near the metallic cathode, newly created electrons are able to instigate an additional positive streamer branch that floats above the metallic cathode. This branching feature also drastically increases the electron density and volume. Given a high enough initial voltage, it is expected that this positive streamer branch could form on the negative streamer before any polarity switching occurs.
The increases in plasma density, volume, and surface coverage are expected to be directly beneficial to various applications such as plasma enhanced catalysis and gas treatment. In plasma enhanced catalysis, the dielectric surface will typically be coated with a catalyst, such that any increase in surface coverage directly increases the active area of the catalyst. Additionally, any increase in plasma volume and density will naturally increase the radical densities which are available to react with either the catalytic surface or the treatment gas in which the plasma is ignited, thus directly affecting the efficiency of the process.
\section{Conclusions and Outlook}
\label{Conclusion}
In this work, the plasma streamer propagation of a twin SDBD setup was modeled in dry air under DC and AC voltage operation by means of a PIC/MCC code. The AC driving voltage waveform corresponded to a nanosecond square waveform with sub-nanosecond risetimes. The twin SDBD geometry, being fully exposed and symmetric about the dielectric layer, promotes both positive and negative streamer discharges to ignite simultaneously along the edges of both the anode and cathode. This symmetry has not been theoretically investigated extensively, leaving open the question of, among others, how the streamers affect one another. In order to provide insight into this question, multiple scenarios were simulated. First, the propagation of a positive streamer and a negative streamer were simulated individually under identical DC conditions. Second, both streamers were allowed to propagate using the same DC conditions, thereby providing insight into the interplay of the two streamers. However, the main focus of the paper is on how the streamers interact and change under AC conditions; therefore, a short multi nanosecond duration bipolar square pulse is used to approximate said conditions.
It was first shown that both the positive and negative streamers behave as expected under DC conditions. Both streamers form a quasi-neutral bulk. The positive streamer forms and propagates via a floating cathode directed positive streamer head, while the negative streamer propagates via an electron avalanche along the surface of the dielectric barrier. The negative streamer also forms a positive space charge that floats above the metallic cathode. The floating positive space charges of both the positive and negative streamer must float, as new electrons introduced between the cathode and said space charges are not able to gain enough energy for new ionization events. It was then shown that the interaction of both streamers under DC conditions does not significantly alter the propagation methods, but that the positive streamer ``pulls'' the negative streamer while simultaneously being ``pushed'' by the negative streamer, effectively increasing the surface coverage and the densities of the plasma streamers. The speed of propagation of both streamers differs when individually simulated versus when simultaneously simulated. The positively charged streamer head of the positive streamer propagates away from the anode, providing an enhanced electric field that the negative charges of the avalanche of the negative streamer then follow. Likewise, the negative streamer charges the dielectric surface, which then helps to push the positively charged streamer head of the positive streamer further away from the anode.
Next, the interactions of the two streamers under switching voltage conditions were investigated. The fast polarity switching of the applied voltage causes significant changes in the streamers. The switches from a positive streamer to a negative streamer, and vice versa, were observed to cause a significant increase in both plasma size and density due to effects similar to those that take place during the DC scenario. It was also observed that additional positive streamer branches are able to form between the negative streamer and cathode under the given conditions. The initial branching structure is very similar to structures that formed on the negative streamer during DC conditions and during the AC conditions before any voltage switches. Therefore, it is hypothesized that the voltage switching allows a branch to form more easily, but is still subject to some minimal necessary applied voltage for a given set of geometrical conditions.
Overall, an electrode geometry allowing for two oppositely-phased plasmas to simultaneously ignite is beneficial with respect to plasma size and density. The two fully exposed electrodes create strongly curled electric fields that promote the ignition of plasma streamers near the surface of the dielectric. The simultaneous ignition of the streamers enhances the lateral electric fields, causing the streamers to propagate further away from the metallic electrodes than they would if one electrode were submerged. This effect is even further enhanced if the applied voltage is able to quickly switch polarities before the streamers have a chance to self extinguish; however, such a fast voltage profile is difficult to achieve experimentally, and the reader should keep this in mind when comparing any numerical information from this paper. Nonetheless, the enhanced electric fields also allow the plasma to achieve higher densities, which is desirable in many applications.
In plasma enhanced catalysis applications, one might want to coat the dielectric surface with a catalyst. An enhanced plasma propagation length would directly correlate to an increased surface area of the catalyst that is directly affected by the plasma, leading to a potentially enhanced efficiency. In gas treatment applications, an increased plasma density is typically desirable in order to increase the rate of molecular fragmentation and/or purification. Future experimental measurements and theoretical and/or numerical investigations of the electrode geometry could optimize an electrode system for a given set of applications. Additional simulations of a porous catalytic coating attached to the dielectric surface would provide further insight into plasma enhanced catalysis applications.
\section{Acknowledgements}
This work is supported by the German Research Foundation (DFG) with the Collaborative Research Centre CRC1316 projects A4 and A5 and the Scientific Research Foundation from Dalian University of Technology, DUT19RC(3)045, and the National Science Foundation of China Grant No. 12020101005.
\color{black}
\newpage
\section*{ORCID iDs}
\begin{table}[h]
\begin{tabular}{l p{4cm}}
Q. Z. Zhang: & \url{https://orcid.org/0000-0002-5726-0829} \\
R. T. Nguyen-Smith: & \url{https://orcid.org/0000-0002-5755-4595}\\
F. Beckfeld: & \url{https://orcid.org/0000-0001-8605-2634}\\
Y. Liu: & \url{https://orcid.org/0000-0002-2680-1338}\\
T. Mussenbrock: & \url{http://orcid.org/0000-0001-6445-4990} \\
J. Schulze: & \url{https://orcid.org/0000-0001-7929-5734}\\
\end{tabular}
\label{tab:my_label}
\end{table}
\printbibliography
\end{document}
\section{Introduction}
~
Superstring theories on non-compact curved backgrounds
are receiving a great deal of attention.
A well-defined description of these string vacua by
irrational superconformal field theories (SCFT's) is an important
and challenging problem.
Recently, considerable progress has been made
in the study of the ${\cal N}=2$ Liouville theory or
the $SL(2;\mbox{{\bf R}})/U(1)$ Kazama-Suzuki supercoset theory,
based on the method of modular bootstrap and
the exact description of D-branes in terms of boundary states
\cite{ES-L,ASY,IPT,ES-BH,IKPT,IPT2,ASY2,FNP,NST,Hosomichi,Nakayama2}.
It should also be mentioned that attempts
at the conformal (boundary) bootstrap in these ${\cal N}=2$ systems
are given in \cite{ASY,IPT2,ASY2,Hosomichi}.
In this paper we investigate the superstring vacua of the type
$$
\prod_{j=1}^{N_L} \{\mbox{${\cal N}=2$ Liouville}\}_j\otimes
\prod_{i=1}^{N_M}\{\mbox{${\cal N}=2$ minimal model}\}_i
$$
which in general contain more than one ${\cal N}=2$ Liouville field
coupled to ${\cal N}=2$ minimal models.
A suitable orbifolding procedure is imposed
as in the Gepner construction \cite{Gepner} in order to ensure the space-time SUSY.
If one uses the T-duality (mirror symmetry) relating the ${\cal N}=2$ Liouville theory
to the $SL(2;\mbox{{\bf R}})/U(1)$ Kazama-Suzuki model \cite{FZZ2,GK,HK1},
the above models are equivalent to
$$
\prod_{j=1}^{N_L} \{\mbox{$SL(2;\mbox{{\bf R}})/U(1)$ supercoset}\}_j
\otimes
\prod_{i=1}^{N_M}\{\mbox{${\cal N}=2$ minimal model}\}_i.
$$
These models are expected to describe non-compact Calabi-Yau manifolds
where we obtain non-gravitational space-time theories
due to the Liouville mass gap.
The earlier studies on such models are given in {\em e.g.}
\cite{MV,GV,OV,ABKS,GKP,GK,Pelc,ES1,Lerche,LLS,ES2,HK2,ncCY-others}.
Many related topics and a detailed list of literature
are found in a review paper \cite{Nakayama}.
A basic idea in the Gepner construction is the orbifolding
with respect to the $U(1)_R$-charge of ${\cal N}=2$
superconformal algebra (SCA). One way to impose charge-integrality
is to consider spectral-flow orbits as in \cite{EOTY}: by
using flow-invariant orbits we can systematically construct
conformal blocks of the theory.
In our previous paper \cite{ES-L}, we have considered
${\cal N}=2$ Liouville theory with rational central charges and
introduced extended characters which are defined by an infinite sum
over spectral flows of irreducible ${\cal N}=2$ characters.
We have shown that
\begin{itemize}
\item Extended characters have discrete
and finite spectra of $U(1)_R$-charges, although
they may have continuous spectra of conformal weights.
\item They are closed under modular transformations.
\end{itemize}
We have also noticed that
these characters naturally appear in the
torus partition functions of $SL(2;\mbox{{\bf R}})/U(1)$ Kazama-Suzuki models
\cite{ES-BH} (see also \cite{IKPT}), which are
T-dual to the ${\cal N}=2$ Liouville theories.
In this paper we use extended characters together with
irreducible characters of minimal models
as basic building blocks of our construction.
In the following we especially study models
with $N_M=1$ and $1\leq N_L \leq 3$,
which are interpreted geometrically as ALE fibrations
over (weighted) projective spaces \cite{Lerche,HK2}.
We find a very
interesting aspect of the massless spectrum in these models:
in the case of ${\cal N}=2$ Liouville theory coupled to minimal models
there exist only $(c,c)$ or $(a,a)$-type massless states
in the CY 3 and 4-folds and no
$(c,a)$ or $(a,c)$ massless states appear. Thus the
theory possesses only complex structure deformations
and no deformations of K\"ahler structure.
On the other hand, if we use the $SL(2;\mbox{{\bf R}})/U(1)$ description,
the theory possesses only $(c,a)$ and $(a,c)$-type massless states
and the moduli of K\"{a}hler structure deformations.
Thus the space-time has the characteristic feature
of a conifold type singularity which is deformed (resolved) by
the ${\cal N}=2$ Liouville ($SL(2;\mbox{{\bf R}})/U(1)$) theory.
In the case of models describing non-compact K3 surfaces
or smoothed ADE type singularity,
on the other hand,
the same number of $(a,c),(c,a)$ and $(c,c),(a,a)$ states
appear in accord with the
${\cal N}=4$ world-sheet supersymmetry.
This paper is organized as follows: in section 2
we present a brief review on the irreducible and
extended characters in the $SL(2;\mbox{{\bf R}})/U(1)$ Kazama-Suzuki models
following \cite{ES-BH}. In section 3 we study the closed string
sector of our non-compact models. We analyze
the torus partition functions, elliptic genera and the massless
spectra of closed string states. We study models with
$N_M=1$ and $1\leq N_L \leq 3$, and find the interesting
characteristics of their massless spectra as mentioned above.
In section 4, we study the open string sector of our models. We
focus on the BPS compact branes and evaluate the cylinder amplitudes.
We compare the spectra of BPS compact branes with
those of massless moduli determined in section 3. We find that
some of the BPS branes (cycles) are not
associated with massless moduli as noticed previously
in the case of singular CY manifolds \cite{GVW,GKP,Pelc}.
We also derive the general formula of open string Witten
indices and prove the conjecture of \cite{Lerche}.
We further construct boundary states for a class
of non-BPS D-branes which are extensions of
``unstable B-branes'' in the
$SU(2)$-WZW model \cite{MMS}.
Contrary to the flat case, non-BPS branes including
RR-components (but with vanishing RR-charges)
also exist. This type of brane could be identified
with the ones studied recently in \cite{Kutasov2} using
the DBI action.
Section 5 is devoted to a summary and discussions.
We present in Appendix E some consistency checks
of our modular transformation formulas with the known
results about the higher level Appell functions \cite{Pol,STT}.
In the following we mainly use the language of
$SL(2;\mbox{{\bf R}})/U(1)$ supercoset theory
rather than the ${\cal N}=2$ Liouville theory for the sake of convenience.
However, later in section 3 we identify results of CFT analyses as
describing the deformed geometries based on ${\cal N}=2$ Liouville theory.
~
\newpage
\section{Preliminaries}
~
We start with a brief review on the conformal blocks
and their modular properties of $SL(2;\mbox{{\bf R}})/U(1)$ Kazama-Suzuki model.
More complete arguments are given in \cite{ES-BH}
(see also \cite{IKPT}).
~
\subsection{Branching Functions in $SL(2;\mbox{{\bf R}})/U(1)$ Kazama-Suzuki model}
~
The Kazama-Suzuki model for $SL(2;\mbox{{\bf R}})/U(1)$
is defined as the coset CFT
\begin{eqnarray}
\frac{SL(2;\mbox{{\bf R}})_{\kappa}\times SO(2)_1}{U(1)_{-(\kappa-2)}}~,
\end{eqnarray}
which is an ${\cal N}=2$ SCFT
with $\hat{c}(\equiv c/3)=1+2/k$, ($k\equiv \kappa-2$).
The coset characters are defined by the following branching relation
\begin{eqnarray}
\chi_{\xi}\left(\tau,\frac{2}{k}z+w\right)\frac{
{\theta}_3\left(\tau,\frac{k+2}{k}z+w\right)}{\eta(\tau)}
=\sum_{m}\chi^{(\msc{NS})}_{\xi,m}(\tau,z)
\frac{q^{-\frac{m^2}{k}}e^{2\pi i mw}}{\eta(\tau)}~,
\label{branching 0}
\end{eqnarray}
where $\xi$ labels irreducible representations of
$\widehat{SL}(2;\mbox{{\bf R}})_{\kappa}$ and $\chi_{\xi}(\tau,u)$ denotes its
character.
We can identify the branching functions
$\chi^{(\msc{NS})}_{\xi,m}(\tau,z)$ with the irreducible characters of
${\cal N}=2$ SCA as follows:
\begin{itemize}
\item {\bf for the continuous series $\xi =\hat{\cal C}_{p,m}$ : }
\begin{eqnarray}
{\chi_{\msc{\bf c}}}^{(\msc{NS})}_{\,p,m}(\tau,z) &=&
q^{\frac{p^2}{2}+\frac{m^2}{k}}\,
e^{2\pi i \frac{2m}{k}z} \,\frac{{\theta}_3(\tau,z)}{\eta(\tau)^3}~,
\label{branching C}
\end{eqnarray}
which are the massive characters of ${\cal N}=2$ SCA.
The highest-weight state has the conformal dimension and
$U(1)$-charge
\begin{eqnarray}
&& h= \frac{p^2}{2}+\frac{1}{4k}+\frac{m^2}{k}~, ~~~ Q=\frac{2m}{k}~.
\end{eqnarray}
\item {\bf for the discrete series $\xi=\hat{\cal D}^+_j$ : }
\begin{eqnarray}
{\chi_{\msc{\bf d}}}_{\,j,m=j+n}^{(\msc{NS})}(\tau,z)
&=& \frac{q^{\frac{j+n^2+2nj}{k}-\frac{1}{4k}}
e^{2\pi i \frac{2(j+n)}{k}z}}{1+e^{2\pi i z}q^{n+1/2}}\,
\frac{{\theta}_3(\tau,z)}{\eta(\tau)^3}~, ~~~ ({}^{\forall}n\in\mbox{{\bf Z}})~,
\label{branching D}
\end{eqnarray}
which are the $n$-step
spectral flow of massless matter characters. The $n$-step flow
is generated by an operator $U_n \equiv e^{in\Phi}$
where $\Phi$ denotes the zero mode of a scalar field
of the ${\cal N}=2$ $U(1)$ current $J\equiv i \partial \Phi$.
The unitarity requires the condition \cite{BFK,DPL}
\begin{eqnarray}
0<j<\frac{\kappa}{2}\left(\equiv \frac{k+2}{2}\right)~.
\label{unitarity bound}
\end{eqnarray}
The highest-weight states have the following quantum numbers;
\begin{eqnarray}
&&h= \frac{2j\left(n+\frac{1}{2}\right)+n^2}{k}~,~~~ Q= \frac{2(j+n)}{k}~,~~~
(n\geq 0) ~,
\label{vacuum 1} \\
&& h= \frac{-\left(k-2j\right)\left(n+\frac{1}{2}\right)+n^2}{k}~,
~~~ Q= \frac{2(j+n)}{k}-1~,~~~ (n<0)~.
\label{vacuum 2}
\end{eqnarray}
They are given explicitly by $(j^+_0)^n\ket{j,j}\otimes \ket{0}_{\psi}$
($n\geq 0$), $(j^-_{-1})^{|n|-1}\ket{j,j}\otimes \psi^-_{-1/2}\ket{0}_{\psi}$
($n<0$) respectively
(here $|j,j\rangle$ denotes
the lowest weight state of bosonic
$SL(2;\mbox{{\bf R}})$ algebra and $\ket{0}_{\psi}$ denotes the fermion Fock vacuum).
The highest weight representations $\hat{\cal D}^-_j$ merely yield the same type of
characters and we need not take them into account.
\item {\bf for the identity representation $\xi= \mbox{id}$ :}
\begin{eqnarray}
\chi^{(\msc{NS})}_{0,n}(\tau,z)
&=&
q^{-\frac{1}{4k}} \frac{(1-q)q^{\frac{n^2}{k}+n-\frac{1}{2}}
e^{2\pi i \left(\frac{2n}{k}+1\right)z}}
{\left(1+e^{2\pi i z}q^{n+1/2}\right)\left(1+e^{2\pi i z}q^{n-1/2}\right)}\,
\frac{{\theta}_3(\tau,z)}{\eta(\tau)^3}~,
\label{branching Id}
\end{eqnarray}
which are the spectrally flowed graviton characters.
The vacuum states are summarized as
\begin{description}
\item [$n=0$] : the vacuum is $\ket{0,0}\otimes \ket{0}_{\psi}$ with
$h=Q=0$.
\item [$n\geq 1$] : the vacuum is $(j^+_{-1})^{n-1}\ket{0,0} \otimes
\psi^+_{-1/2}\ket{0}_{\psi}$, which has the quantum numbers
\begin{eqnarray}
h= \frac{n^2}{k}+n-\frac{1}{2}~,~~~ Q= \frac{2n}{k}+1~.
\end{eqnarray}
\item [$n\leq -1$] : the vacuum is $(j^-_{-1})^{|n|-1}\ket{0,0} \otimes
\psi^-_{-1/2}\ket{0}_{\psi}$, which has the quantum numbers
\begin{eqnarray}
h= \frac{n^2}{k}-n-\frac{1}{2}~,~~~ Q= \frac{2n}{k}-1~.
\end{eqnarray}
\end{description}
\end{itemize}
~
\subsection{Extended Characters}
~
From now on, we shall concentrate on models with a rational level
$k=N/K$ ($N,K\in \mbox{{\bf Z}}_{>0}$).
We define the extended characters by taking the mod $N$ spectral flow
sums of irreducible characters.
In the following definitions, $m$ is assumed to be an integral
parameter within the range $\displaystyle -NK \leq m < NK$.
\begin{itemize}
\item {\bf continuous representation (`extended massive character') : }
We define
\begin{eqnarray}
{\chi_{\msc{\bf c}}}^{(\msc{NS})}(p,m;\tau,z)& \equiv& \sum_{n\in N\msc{{\bf Z}}}\,
{\chi_{\msc{\bf c}}}^{(\msc{NS})}_{\,p,m/2K +n} (\tau,z)
\equiv q^{\frac{p^2}{2}} \Th{m}{NK}\left(\tau,\frac{2z}{N}\right)\,
\frac{{\theta}_3(\tau,z)}{\eta(\tau)^3}~,
\label{chi c}
\end{eqnarray}
which has the highest-weight state with
\begin{eqnarray}
h=\frac{p^2}{2}+\frac{m^2+K^2}{4NK}~,~~~ Q=\frac{m}{N}~.
\label{vacua chi c}
\end{eqnarray}
\item {\bf discrete representation (`extended massless matter character'): }
We define (with the reparameterization $j\equiv s/(2K)$, $s\in \mbox{{\bf Z}}$)
\begin{eqnarray}
{\chi_{\msc{\bf d}}}^{(\msc{NS})}(s,m;\tau,z) \equiv
\left\{
\begin{array}{ll}
\sum_{n\in N\msc{{\bf Z}}}\,{\chi_{\msc{\bf d}}}^{(\msc{NS})}_{\,\frac{s}{2K},\frac{m}{2K}+n}(\tau,z)&
~~ m \equiv s~(\mbox{mod}~2K) \\
0 &~~ m\not\equiv s ~(\mbox{mod}~2K)
\end{array}
\right.
\label{chi d}
\end{eqnarray}
where the unitarity condition \eqn{unitarity bound} imposes
\begin{eqnarray}
1\leq s \leq N+2K-1~,~~~ (s\in \mbox{{\bf Z}})~.
\label{range s unitarity}
\end{eqnarray}
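Indeed, the bound \eqn{range s unitarity} follows directly by substituting $j=s/(2K)$ and $k=N/K$ into the unitarity condition \eqn{unitarity bound}:
\begin{eqnarray}
0<\frac{s}{2K}<\frac{k+2}{2}=\frac{N+2K}{2K}
~~\Longleftrightarrow~~ 0<s<N+2K~,
\nonumber
\end{eqnarray}
which for $s\in\mbox{{\bf Z}}$ is precisely \eqn{range s unitarity}.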
The vacuum vectors for ${\chi_{\msc{\bf d}}}^{(\msc{NS})}(s,m=s+2Kr)$,
($\displaystyle -\frac{N}{2}\leq r < \frac{N}{2}$) are characterized by
\begin{eqnarray}
&& h=\frac{Kr^2+\left(r+\frac{1}{2}\right)s}{N}~, ~~
Q = \frac{s+2Kr}{N}~, ~~~ \left(0\leq r < \frac{N}{2}\right)\nonumber\\
&& h=\frac{Kr^2-\left(r+\frac{1}{2}\right)(N-s)}{N}~, ~~
Q = \frac{s-N+2Kr}{N}~, ~~~ \left(-\frac{N}{2}\leq r <0\right)~.
\label{vacua chi d}
\end{eqnarray}
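These quantum numbers are simply \eqn{vacuum 1} and \eqn{vacuum 2} rewritten with $j=s/(2K)$, $n=r$ and $k=N/K$; for instance, for $0\leq r<\frac{N}{2}$ one finds
\begin{eqnarray}
h=\frac{2j\left(n+\frac{1}{2}\right)+n^2}{k}
=\frac{Kr^2+\left(r+\frac{1}{2}\right)s}{N}~,~~~
Q=\frac{2(j+n)}{k}=\frac{s+2Kr}{N}~,
\nonumber
\end{eqnarray}
and \eqn{vacuum 2} similarly reproduces the $-\frac{N}{2}\leq r<0$ case.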
\item{\bf identity representation (`extended graviton character'): }
We define
\begin{eqnarray}
\chi_0^{(\msc{NS})}(m;\tau,z) \equiv
\left\{
\begin{array}{ll}
\sum_{n\in N\msc{{\bf Z}}}\,\chi^{(\msc{NS})}_{0,\frac{m}{2K}+n}(\tau,z)&
~~ m \in 2K\mbox{{\bf Z}} \\
0 &~~ m\not\in 2K\mbox{{\bf Z}}
\end{array}
\right.
\label{chi 0}
\end{eqnarray}
The vacua for $\chi_0^{(\msc{NS})}(m=2Kr;\tau,z)$,
$\displaystyle \left(-\frac{N}{2}\leq r < \frac{N}{2}\right)$
are given as
\begin{eqnarray}
&& h=Q=0~, ~~~ (r=0)~, \nonumber\\
&& h=\frac{Kr^2}{N}+|r| -\frac{1}{2}~,~~ Q=\frac{2Kr}{N}+\mbox{sgn}(r)~,
~~~ (r\neq 0)~.
\label{vacua chi 0}
\end{eqnarray}
\end{itemize}
The extended characters of other spin structures are defined by the
1/2-spectral flow;
\begin{eqnarray}
&& \chi_*^{(\widetilde{\msc{NS}})}(*,m;\tau,z) \equiv e^{-i\pi \frac{m}{N}}\,
\chi_*^{(\msc{NS})}\left(*,m;\tau,z+\frac{1}{2}\right)~, \nonumber\\
&& \chi^{(\msc{R})}_{*}(*,m+K;\tau,z) \equiv
q^{\frac{\hat{c}}{8}} e^{i\pi \hat{c}z}\,
\chi^{(\msc{NS})}_{*}\left(*,m;\tau,z+\frac{\tau}{2}\right)~, \nonumber\\
&& \chi^{(\widetilde{\msc{R}})}_{*}(*,m+K;\tau,z) \equiv
e^{-i\pi \frac{m}{N}} q^{\frac{\hat{c}}{8}} e^{i\pi \hat{c} z}\,
\chi^{(\msc{NS})}_{*}\left(*,m;\tau,z+\frac{\tau}{2}+\frac{1}{2}\right)~.
\label{extended os}
\end{eqnarray}
Note that extended characters of
discrete and identity representations in R and $\widetilde{\mbox{R}}$-sectors
take non-zero values
only if $m\equiv s-K~(\mbox{mod}\, 2K)$, $m \in K(2\mbox{{\bf Z}}+1)$, respectively.
The quantum numbers
of the NS and R vacua are related by
\begin{eqnarray}
&& h^{(\msc{R})}(*,m+K) = h^{(\msc{NS})}(*,m+K) + \frac{1}{8} \equiv
h^{(\msc{NS})}(*,m)+\frac{1}{2}Q^{(\msc{NS})}(m)+\frac{\hat{c}}{8}~, \nonumber\\
&& Q^{(\msc{R})}(m+K) = Q^{(\msc{NS})}(m+K) +\frac{1}{2} \equiv
Q^{(\msc{NS})}(m)+ \frac{\hat{c}}{2}~,
\label{relation NS R}
\end{eqnarray}
where $h^{(\msc{NS})}(*,m)$, $Q^{(\msc{NS})}(m)$ are those
given in \eqn{vacua chi c}, \eqn{vacua chi d} and \eqn{vacua chi 0}.
Useful properties of the extended characters \eqn{chi c}, \eqn{chi d}
and \eqn{chi 0} are summarized in Appendix C.
The non-compactness of $SL(2;\mbox{{\bf R}})/U(1)$ model
leads to an IR divergence in the torus partition function.
One may introduce an IR cut-off $\epsilon$; the regularized partition
function then contains a piece consisting of continuous representations
and another consisting of discrete representations \cite{HPT}.
Since the continuous representations describe string modes
propagating in the bulk, their contributions are proportional
to the volume of target space $V(\epsilon)$,
while the discrete representations describe
localized string states and their contributions
are volume independent.
Leading terms in the infinite volume limit
are given by continuous representations
\cite{ES-BH};
\begin{eqnarray}
&&\lim_{\epsilon\,\rightarrow\,+0} \frac{Z(\tau;\epsilon)}{V(\epsilon)} \propto
\frac{1}{2}\sum_{\sigma}\,
\int_0^{\infty} dp\,
\sum_{w\in \msc{{\bf Z}}_{2K}}\,\sum_{n\in \msc{{\bf Z}}_N}\,
{\chi_{\msc{\bf c}}}^{(\sigma)}(p, Kn+Nw;\tau,0)
{\chi_{\msc{\bf c}}}^{(\sigma)}(p, -Kn+Nw;-\bar{\tau},0)~. \nonumber\\
&&
\label{MI part fn}
\end{eqnarray}
Here $\sigma$ denotes the spin structure and the above partition function
is modular invariant.
The quantum numbers $n$, $w$ are identified with the KK momenta and
winding modes along the circle of the Euclidean cigar geometry
with an asymptotic radius $\sqrt{2k} \equiv \sqrt{2N/K}$ of
the $SL(2;\mbox{{\bf R}})/U(1)$-coset theory \cite{2DBH}.
If one considers the $\widetilde{\mbox{R}}$-part of the partition function,
contributions of continuous representations drop out and only the
discrete representations survive.
They give rise to a volume-independent finite result.
This is nothing but the Witten index;
\begin{eqnarray}
&& Z^{(\widetilde{\msc{R}})}(\tau) =
\sum_{s=K}^{N+K} \,\sum_{w\in\msc{{\bf Z}}_{2K}}\, \sum_{n\in \msc{{\bf Z}}_N}\,
a(s) \, {\chi_{\msc{\bf d}}}^{(\widetilde{\msc{R}})}(s, Kn+Nw;\tau,0)
{\chi_{\msc{\bf d}}}^{(\widetilde{\msc{R}})}(s, -Kn+Nw;-\bar{\tau},0)~, \nonumber\\
&& \hspace{3cm}
a(s) \equiv
\left\{
\begin{array}{ll}
1& ~~K+1 \leq s \leq N+K-1 \\
\frac{1}{2} & ~~ s=K,N+K
\end{array}
\right. ~~.
\label{discrete part fn}
\end{eqnarray}
It is important to note that the quantum number $s$ runs over
the range \cite{ES-BH,IKPT};
\begin{eqnarray}
K \leq s \leq N+K~,
\label{range s}
\end{eqnarray}
which is strictly smaller than \eqn{range s unitarity} if $K\neq 1$
(see also \cite{HPT}).
This range is consistent with the modular transformation formulas
\eqn{S discrete}, \eqn{S graviton}.
~
\section{Non-compact Cosets Coupled to Minimal Models}
~
We now turn to the main subject of this paper.
Let us study the superconformal system defined as
\begin{eqnarray}
\left\lbrack
L_{N_1,K_1} \otimes \cdots \otimes L_{N_{N_L}, K_{N_L}} \otimes
M_{k_1} \otimes \cdots \otimes M_{k_{N_M}}
\right\rbrack_{\msc{$U(1)$-projection}}~,
\label{nc gepner}
\end{eqnarray}
where $L_{N,K}$ denotes the $SL(2;\mbox{{\bf R}})/U(1)$
Kazama-Suzuki model with $k=N/K$ ($\hat{c}= 1+2K/N$) and
$M_k$ denotes the level $k$ ${\cal N}=2$ minimal model with
$\hat{c}=k/(k+2)$.
We impose the criticality condition for the case of a
target manifold of (complex) dimension $\mbox{\bf n}$
\begin{eqnarray}
\sum_{i=1}^{N_M} \frac{k_i}{k_i+2} + \sum_{j=1}^{N_L}
\left(1+\frac{2K_j}{N_j}\right) = \mbox{\bf n}~, ~~~ \mbox{\bf n}=2,3,4~.
\label{criticality}
\end{eqnarray}
Since $\hat{c}>1$ for each $SL(2;\mbox{{\bf R}})/U(1)$-sector,
it is obvious that $N_L \leq \mbox{\bf n}-1$ holds.
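As a quick sanity check, the criticality condition \eqn{criticality} can be verified with exact rational arithmetic. The following sketch (Python, purely illustrative and not part of the construction) checks it for the $\hat{c}=3$ and $\hat{c}=4$ examples studied later in this section.

```python
from fractions import Fraction

def central_charge(minimal_levels, liouville_pairs):
    """Total hat{c} of M_{k_1} x ... x L_{N_1,K_1} x ... as an exact fraction.

    minimal_levels : list of levels k_i of the N=2 minimal models
    liouville_pairs: list of (N_j, K_j) for the SL(2,R)/U(1) sectors
    """
    c = sum((Fraction(k, k + 2) for k in minimal_levels), Fraction(0))
    c += sum((1 + Fraction(2 * K, N) for (N, K) in liouville_pairs), Fraction(0))
    return c

n = 5  # any n >= 3 works in both checks below

# hat{c}=3 example: M_{n-2} (x) L_{2n,1} (x) L_{2n,1}
assert central_charge([n - 2], [(2 * n, 1), (2 * n, 1)]) == 3

# hat{c}=4 example: M_{n-2} (x) L'_{4n,n+2} (x) L'_{4n,n+2}
assert central_charge([n - 2], [(4 * n, n + 2), (4 * n, n + 2)]) == 4
```

Both identities hold for arbitrary $n$, since $\frac{n-2}{n}+2(1+\frac{1}{n})=3$ and $\frac{n-2}{n}+2(1+\frac{n+2}{2n})=4$.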
We expect that the $U(1)$-charge projection
yields consistent superstring vacua describing non-compact
$CY_{\msc{\bf n}}$ compactifications with $d$-dimensional
Minkowski space $(d=10-2\mbox{\bf n})$.
Note that the periodicity of extended characters
depends on the choice of $N_j$, $K_j$, not only on the ratio
$N_j/K_j$.
We shall thus adopt the notation $L_{N_j,K_j}$ to indicate
which extended characters are used,
although only the ratio $N_j/K_j$ parameterizes the $SL(2;\mbox{{\bf R}})/U(1)$
supercoset.
For simplicity we here assume that each pair
$N_j$, $K_j$ is relatively prime for every $j=1,\ldots, N_L$.
We set
\begin{eqnarray}
N \equiv \mbox{L.C.M.}\left\{k_i+2, \, N_j\right\}~,~~~i=1,
\cdots,N_M, ~~~ j=1,\cdots,N_L~,
\end{eqnarray}
and then the required $U(1)$-projection is reduced to
the $\mbox{{\bf Z}}_N$-orbifoldization.
We introduce the notations
\begin{eqnarray}
\mu_i,\nu_j \in \mbox{{\bf Z}}_{>0}~,~~ N=\mu_i (k_i+2)= \nu_j N_j~,
\label{mu nu}
\end{eqnarray}
for later convenience.
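The data $N$, $\mu_i$, $\nu_j$ in \eqn{mu nu} are fixed by elementary arithmetic; a minimal illustrative sketch (Python, with a hypothetical choice of levels for the test):

```python
import math

def orbifold_data(minimal_levels, liouville_Ns):
    """Return (N, mu_i, nu_j) with N = L.C.M.{k_i+2, N_j} and
    N = mu_i (k_i+2) = nu_j N_j, as in the text."""
    N = math.lcm(*(k + 2 for k in minimal_levels), *liouville_Ns)
    mu = [N // (k + 2) for k in minimal_levels]
    nu = [N // Nj for Nj in liouville_Ns]
    return N, mu, nu

# e.g. k_1 = 2, k_2 = 4 (so k_i+2 = 4, 6) and N_1 = 10
N, mu, nu = orbifold_data([2, 4], [10])
assert (N, mu, nu) == (60, [15, 10], [6])
```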
~
\subsection{Toroidal Partition Functions : Continuous Part of
Closed String Spectra}
~
We first analyse the closed string sector. Only the continuous part of the
closed string spectrum contributes to the modular invariant partition
function (per unit volume), and it should be interpreted as the propagating
modes in the non-compact Calabi-Yau space.
A more detailed argument was given in \cite{ES-BH} for
the case $N_L=1$.
Let us start by assuming
\begin{itemize}
\item diagonal modular invariance in each $M_{k_i}$-sector,
\item the partition function \eqn{MI part fn} for each $L_{N_j,K_j}$-sector,
\end{itemize}
before performing the $\mbox{{\bf Z}}_N$-orbifoldization.
As in the standard treatment of orbifolds, the $\mbox{{\bf Z}}_N$-projection
must be accompanied by the twisted sectors generated by the
spectral flows.
The integral spectral flows act on each character as the shifts
of the angular variable; $z\,\mapsto\,z+a\tau+b$ ($a,b \in \mbox{{\bf Z}}_N$),
and thus
the relevant conformal blocks are defined as the flow invariant
orbits \cite{EOTY};
\begin{eqnarray}
&&{\cal F}^{(\msc{NS})}_{I,\sp,\msc{\bf w}}(\tau,z) \equiv
\frac{1}{N}\sum_{a,b\in \msc{{\bf Z}}_{N}}\, q^{\frac{\msc{\bf n}}{2}a^2}e^{2\pi i \msc{\bf n} a z}\,
\prod_{i=1}^{N_M}
\ch{(\msc{NS})}{\ell_i,m_i}
(\tau,z+a\tau+b) \, \nonumber\\
&& \hspace{4cm}\times
\prod_{j=1}^{N_L}
{\chi_{\msc{\bf c}}}^{(\msc{NS})}(p_j,K_jn_j+N_jw_j;\tau,z+a\tau+b)~, \nonumber\\
&&\tilde{{\cal F}}^{(\msc{NS})}_{I,\sp,\msc{\bf w}}(-\bar{\tau},\bar{z}) \equiv
\frac{1}{N}\sum_{a,b\in \msc{{\bf Z}}_{N}}\, \bar{q}^{\frac{\msc{\bf n}}{2}a^2}
e^{2\pi i \msc{\bf n} a \bar{z}}\,
\prod_{i=1}^{N_M}
\ch{(\msc{NS})}{\ell_i,m_i}
(-\bar{\tau},\bar{z}+a\bar{\tau}+b) \, \label{cF cont} \\
&& \hspace{4cm}\times
\prod_{j=1}^{N_L}
{\chi_{\msc{\bf c}}}^{(\msc{NS})}(p_j,-K_jn_j+N_jw_j;-\bar{\tau},
-\bar{z}-a\bar{\tau}-b) \equiv
{\cal F}^{(\msc{NS})}_{I,\sp,-\msc{\bf w}}(-\bar{\tau},\bar{z})~,
\nonumber\\
&& I= \left((\ell_1,m_1),\ldots, (\ell_{N_M},m_{N_M}),
n_1,\ldots, n_{N_L}\right)~, ~~~ \mbox{\bf p}=(p_1,\ldots,p_{N_L})~,~~~
\mbox{\bf w}=(w_1,\ldots,w_{N_L})~,
\nonumber
\end{eqnarray}
where $\ch{(\msc{NS})}{\ell_i,m_i}(\tau,z)$ denotes the character of
the minimal
model $M_{k_i}$ and
${\chi_{\msc{\bf c}}}^{(\msc{NS})}(p_j,K_jn_j+N_jw_j;\tau,z)$ is the extended
character \eqn{chi c} of the $SL(2;\mbox{{\bf R}})/U(1)$ theory $L_{N_j,K_j}$.\footnote
{In the right-moving sector of the $SL(2;\mbox{{\bf R}})/U(1)$ theory we have chosen the angular dependence $-\bar{z}$. This is in order to make our convention for the quantum numbers $n_j,w_j$ consistent with those
given by the cigar geometry of the 2-dimensional black hole.}
The summation $\displaystyle \frac{1}{N}\sum_{b \in \msc{{\bf Z}}_N} \, *$ imposes
the constraint on the $U(1)$-charge
\begin{eqnarray}
\sum_{i=1}^{N_M}\,\frac{m_i}{k_i+2} +
\sum_{j=1}^{N_L}\,\frac{K_j n_j}{N_j} \in \mbox{{\bf Z}}~,
\label{U(1) charge constraint}
\end{eqnarray}
and we automatically have ${\cal F}_{*}^{(\msc{NS})} \equiv 0$ unless
\eqn{U(1) charge constraint} is satisfied.
The conformal blocks of other spin structures are defined by
the $1/2$-spectral flows;
\begin{eqnarray}
&&{\cal F}^{(\widetilde{\msc{NS}})}_{I,\sp,\msc{\bf w}}(\tau,z) \equiv
{\cal F}^{(\msc{NS})}_{I,\sp,\msc{\bf w}}\left(\tau,z+\frac{1}{2}\right) , \,\,\,
{\cal F}^{(\msc{R})}_{I,\sp,\msc{\bf w}}(\tau,z) \equiv
q^{\frac{\msc{\bf n}}{8}} e^{i\pi \msc{\bf n} z}
{\cal F}^{(\msc{NS})}_{I,\sp,\msc{\bf w}}\left(\tau,z+\frac{\tau}{2}\right) ~, \nonumber\\
&&{\cal F}^{(\widetilde{\msc{R}})}_{I,\sp,\msc{\bf w}}(\tau,z) \equiv
q^{\frac{\msc{\bf n}}{8}} e^{i\pi \msc{\bf n} z}
{\cal F}^{(\msc{NS})}_{I,\sp,\msc{\bf w}}\left(\tau,z+\frac{\tau}{2}+\frac{1}{2}\right) ~.
\label{cF cont os}
\end{eqnarray}
By construction the conformal blocks ${\cal F}^{(\sigma)}_{I,\sp,\msc{\bf w}}$
have the following symmetry (${}^{\forall}a,{}^{\forall}b\in \mbox{{\bf Z}}$)
\begin{eqnarray}
q^{\frac{\msc{\bf n}}{2}a^2}e^{2\pi i \msc{\bf n} a z}
{\cal F}^{(\msc{NS})}_{I,\sp,\msc{\bf w}}(\tau,z+a\tau+b)
&=& {\cal F}^{(\msc{NS})}_{I,\sp,\msc{\bf w}}(\tau,z)~, \nonumber\\
q^{\frac{\msc{\bf n}}{2}a^2}e^{2\pi i \msc{\bf n} a z}
{\cal F}^{(\widetilde{\msc{NS}})}_{I,\sp,\msc{\bf w}}(\tau,z+a\tau+b)
&=& (-1)^{\msc{\bf n} a} {\cal F}^{(\widetilde{\msc{NS}})}_{I,\sp,\msc{\bf w}}(\tau,z) ~, \nonumber\\
q^{\frac{\msc{\bf n}}{2}a^2}e^{2\pi i \msc{\bf n} a z}
{\cal F}^{(\msc{R})}_{I,\sp,\msc{\bf w}}(\tau,z+a\tau+b)
&=& (-1)^{\msc{\bf n} b} {\cal F}^{(\msc{R})}_{I,\sp,\msc{\bf w}}(\tau,z) ~, \nonumber\\
q^{\frac{\msc{\bf n}}{2}a^2}e^{2\pi i \msc{\bf n} a z}
{\cal F}^{(\widetilde{\msc{R}})}_{I,\sp,\msc{\bf w}}(\tau,z+a\tau+b)
&=& (-1)^{\msc{\bf n} (a+b)} {\cal F}^{(\widetilde{\msc{R}})}_{I,\sp,\msc{\bf w}}(\tau,z) ~.
\label{s f inv}
\end{eqnarray}
Taking the diagonal modular invariance for the spin structures,
we obtain the partition function (as a non-linear
$\sigma$-model)
\begin{eqnarray}
&&Z(\tau,z) = e^{-2\pi \msc{\bf n} \frac{\left(\msc{Im} \, z\right)^2}{\tau_2}}\,
\frac{1}{2N}\sum_{\sigma}\,\sum_{I,\msc{\bf w}}\,\int d^{N_L}\mbox{\bf p}\,
{\cal F}^{(\sigma)}_{I,\sp,\msc{\bf w}}(\tau,z)
{\cal F}^{(\sigma)}_{I,\sp,-\msc{\bf w}}(-\bar{\tau},\bar{z})~.
\label{MI part fn sigma model}
\end{eqnarray}
The overall factor $1/N$ is necessary to avoid the overcounting of
states.
One can easily check the modular invariance of this partition function
using the modular properties
of $L_{N_j,K_j}$, $M_{k_i}$ given in appendices.
A crucial point is
the fact that the sum over the spectral flow
$z\,\mapsto\,z+a\tau+b$ in \eqn{cF cont}
preserves modular invariance.
Incorporating the flat space-time $\mbox{{\bf R}}^{d-1,1}$
(with $\displaystyle \frac{d}{2}+\mbox{\bf n}=5$), we also obtain the supersymmetric
conformal blocks of superstring vacua
\begin{eqnarray}
\frac{1}{\tau_2^{\frac{d-2}{4}}\eta(\tau)^{d-2}}\,\sum_{\sigma}\,
\epsilon(\sigma)\,
\left(\frac{{\theta}_{\lbrack \sigma \rbrack}(\tau)}{\eta(\tau)}\right)^{\frac{d-2}{2}}
\, {\cal F}^{(\sigma)}_{I,\sp,\msc{\bf w}}(\tau,0)~,
\label{SUSY conf block}
\end{eqnarray}
where ${\theta}_{\lbrack \sigma \rbrack}$ denotes ${\theta}_3$, ${\theta}_4$, ${\theta}_2$,
$i{\theta}_1$ for $\sigma = \mbox{NS}, \widetilde{\mbox{NS}}, \mbox{R}, \widetilde{\mbox{R}}$ respectively, and we
set $\epsilon(\mbox{NS})=\epsilon(\widetilde{\mbox{R}})=+1$, $\epsilon(\widetilde{\mbox{NS}})=\epsilon(\mbox{R})=-1$.
The conformal blocks \eqn{SUSY conf block} actually vanish for
arbitrary $\tau$ \cite{HS}.
One can choose a large variety of modular invariants as consistent
conformal theories. For example, one may take general modular
invariants of the types given in \cite{GQ} with respect to
$\mbox{\bf w} \in \mbox{{\bf Z}}_{2K_1}\times \cdots \times \mbox{{\bf Z}}_{2K_{N_L}}$.
It turns out that some of the familiar non-compact Calabi-Yau spaces
do not correspond to the simplest choice of modular invariant
\eqn{MI part fn sigma model},
and we have to use a somewhat more non-trivial form of modular invariant.
A typical example exhibiting this peculiar feature is
the singular $CY_3$ of
$A_{n-1}$-type ($CY_3(A_{n-1})$) \cite{GVW,GKP}.
The conformal blocks for this model presented in
\cite{ES1} are written (with suitable change of notations) as
\begin{eqnarray}
&& \hspace{-1cm}
{\cal F}^{(\msc{NS})}_{\ell,w}(\tau,z) = \sum_{m\in \msc{{\bf Z}}_{4n}}\,
\ch{(\msc{NS})}{\ell,m}(\tau,z) \, \frac{\Th{-(n+2)m+2nw}{2n(n+2)}
\left(\tau,\frac{z}{n}\right)}{\eta(\tau)}~,
~~~ \ell +2w \in 2\mbox{{\bf Z}}~, ~~(w \in \frac{1}{2}\mbox{{\bf Z}}_{4(n+2)})~, \nonumber\\
&&
\label{ES conf-block CY3}
\end{eqnarray}
where we omitted the factor depending on the `Liouville momentum' $p$.
These are identified with the branching functions of the coset CFT: ~
$\displaystyle \frac{SU(2)_{n-2}\times SO(4)_1}{U(1)_{n+2}}$.
At first glance, \eqn{ES conf-block CY3} seems to fit the formula
\eqn{cF cont} with $N=2n$, $K=n+2$.
However, {\em half-integral} values of $w$ are now allowed with the constraint
\begin{eqnarray}
m+2w \in 2\mbox{{\bf Z}}~.
\label{cond m w CY3}
\end{eqnarray}
This condition may be interpreted
as some kind of orbifoldization; it makes it possible
to pair each of the primary states of the minimal model $M_{n-2}$ with
those of $L_{2n,n+2}$ so as to yield a physical state with
an integral $U(1)$-charge. As we shall see later,
under this condition \eqn{cond m w CY3} we obtain the massless spectrum
expected for the singular $CY_3$ of $A_{n-1}$-type,
as well as the correct open string Witten indices.
We denote the $SL(2;\mbox{{\bf R}})/U(1)$-sector defined this way as
$L'_{2n,n+2}$ from now on.
A similar example which we will later study
is a model with two Liouville fields
$N_L=2$ and $N_M=1$;
\begin{eqnarray}
\hat{c}=4~,~~~k_1=n-2~,~~~ N_1=N_2=4n~,~~~K_1=K_2=n+2~.
\end{eqnarray}
It is possible to show that the following conformal blocks give a consistent
superstring vacuum;
\begin{eqnarray}
&& \hspace{-1cm}
{\cal F}^{(\msc{NS})}_{\ell,m,m_j,w_j, p_j} (\tau,z)
= \frac{1}{4n}\sum_{a\in \msc{{\bf Z}}_{4n}}\, \ch{(\msc{NS})}{\ell,m-2a}(\tau,z)
\,
\prod_{j=1,2} \left\{
{\chi_{\msc{\bf c}}}^{(\msc{NS})}_{(4n,n+2)}(p_j, (n+2)m_j+4nw_j+2(n+2)a;\tau,z) \right. \nonumber\\
&& \hspace{2cm} \left. +
{\chi_{\msc{\bf c}}}^{(\msc{NS})}_{(4n,n+2)}(p_j, (n+2)(m_j+4n)+4nw_j+2(n+2)a;\tau,z)
\right\}~ \nonumber\\
&&
\equiv \frac{1}{2n} \sum_{a\in \msc{{\bf Z}}_{2n}}\, \ch{(\msc{NS})}{\ell,m-2a}(\tau,z)
\,\prod_{j=1,2}
{\chi_{\msc{\bf c}}}^{(\msc{NS})}_{(2n, \frac{n+2}{2})}
\left(p_j, \frac{n+2}{2} m_j+2nw_j+(n+2)a;\tau,z\right)
~,~~\mbox{(if $n$ is even)}~, \nonumber\\
&&
\label{cF CY3 fiber}
\end{eqnarray}
with
\begin{eqnarray}
m \in \mbox{{\bf Z}}_{2n}~,~~ m_j \in \mbox{{\bf Z}}_{4n}~, ~~ w_j \in \frac{1}{4}\mbox{{\bf Z}}_{4(n+2)}~,
~~ m_j+4w_j \in 2\mbox{{\bf Z}}~, ~~ \sum_{j=1,2}(m_j+4w_j) \in 4\mbox{{\bf Z}}~,
\label{m CY3 fiber}
\end{eqnarray}
and the $U(1)$-charge condition
\begin{eqnarray}
2m+m_1+m_2 \in 2n\mbox{{\bf Z}}~.
\label{U(1) cond CY3 fiber}
\end{eqnarray}
We here use the notation ${\chi_{\msc{\bf c}}}^{(*)}_{(N,K)}(*,*;\tau,z)$
with the parameters $N$, $K$ written explicitly.
By careful calculations it is possible to show
that the conformal blocks \eqn{cF CY3 fiber}
are in fact closed under modular transformations in a manner
consistent with the non-trivial restrictions
\eqn{m CY3 fiber}, \eqn{U(1) cond CY3 fiber}.
The coefficients of S-transformation include the factors
$$
\frac{1}{\sqrt{4n}} e^{-2\pi i \frac{m_jm_j'}{4n}} \cdot
\frac{1}{\sqrt{4(n+2)}}e^{2\pi i \frac{4 w_j w_j'}{n+2}}
$$
in each $L'_{4n,n+2}$-sector, and we can construct modular invariants
in the standard way.
Hence this model yields a consistent string vacuum with
the choice of spectrum \eqn{m CY3 fiber}.
We denote the system defined this way as $L'_{4n,n+2}$.
All the primaries in $M_{n-2}$ can again find partners in
two $L'_{4n,n+2}$-sectors.
We will later identify this vacuum
as a non-compact $CY_4$ with a singular $CY_3$ fibered over $\mbox{{\bf C}} P^1$.
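The S-transformation factors quoted above are, up to phases, discrete Fourier kernels, whose unitarity underlies the standard construction of modular invariants. A quick numerical check of this unitarity (an illustrative Python sketch; the matrix size $M$ below stands for $4n$ or $4(n+2)$):

```python
import cmath

def dft_matrix(M, sign=-1):
    """The kernel S[m][m'] = exp(sign * 2*pi*i*m*m'/M) / sqrt(M)
    appearing in the S-transformation coefficients."""
    w = cmath.exp(sign * 2j * cmath.pi / M)
    return [[w ** (m * mp) / M ** 0.5 for mp in range(M)] for m in range(M)]

M = 8  # e.g. M = 4n with n = 2
S = dft_matrix(M)

# verify S S^dagger = identity, i.e. the kernel is unitary
for a in range(M):
    for b in range(M):
        v = sum(S[a][c] * S[b][c].conjugate() for c in range(M))
        assert abs(v - (1 if a == b else 0)) < 1e-10
```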
~
\subsection{Elliptic Genera : Discrete Part of Closed String Spectra}
~
Let us turn to the discrete spectrum in the closed
string Hilbert space. It describes localized string excitations
and is of basic importance since massless states appear in this sector.
A useful quantity that captures the BPS states
is the elliptic genus, and we try to evaluate it
for general models \eqn{nc gepner}. It is basically a generalization
of the analysis given in \cite{ES-BH} to the case $N_L \geq 1$.
We consider flow invariant orbits consisting of
the discrete characters \eqn{chi d} in place of \eqn{cF cont};
\begin{eqnarray}
&& {\cal G}^{(\msc{NS})}_{I,\ss,\msc{\bf w}}(\tau,z) \equiv
\frac{1}{N}\sum_{a,b\in \msc{{\bf Z}}_{N}}\, q^{\frac{\msc{\bf n}}{2}a^2}e^{2\pi i \msc{\bf n} a z}\,
\prod_{i=1}^{N_M}
\ch{(\msc{NS})}{\ell_i,m_i}
(\tau,z+a\tau+b) \, \nonumber\\
&& \hspace{4cm}\times
\prod_{j=1}^{N_L}
{\chi_{\msc{\bf d}}}^{(\msc{NS})}(s_j,K_jn_j+N_jw_j;\tau,z+a\tau+b)~,
\label{cG} \\
&&\tilde{{\cal G}}^{(\msc{NS})}_{I,\ss,\msc{\bf w}}(-\bar{\tau},\bar{z}) \equiv
\frac{1}{N}\sum_{a,b\in \msc{{\bf Z}}_{N}}\, \bar{q}^{\frac{\msc{\bf n}}{2}a^2}
e^{2\pi i \msc{\bf n} a \bar{z}}\,
\prod_{i=1}^{N_M}
\ch{(\msc{NS})}{\ell_i,m_i}
(-\bar{\tau},\bar{z}+a\bar{\tau}+b) \, \nonumber\\
&& \hspace{4cm}\times
\prod_{j=1}^{N_L}
{\chi_{\msc{\bf d}}}^{(\msc{NS})}(s_j,-K_jn_j+N_jw_j;-\bar{\tau},
-\bar{z}-a\bar{\tau}-b) \nonumber \\
&&\hspace{2cm} \equiv {\cal G}^{(\msc{NS})}_{I,\widehat{\msc{\bf s}},\widehat{\msc{\bf w}}}
(-\bar{\tau},\bar{z})~,
\nonumber\\
&& I= \left((\ell_1,m_1),\ldots, (\ell_{N_M},m_{N_M}),
n_1,\ldots, n_{N_L}\right)~, ~~~ \mbox{\bf s}=(s_1,\ldots,s_{N_L})~,~~~
\mbox{\bf w}=(w_1,\ldots,w_{N_L})~, \nonumber\\
&& \widehat{\mbox{\bf s}}=(N_1+2K_1-s_1, \ldots, N_{N_L}+2K_{N_L}-s_{N_L})~,~~~
\widehat{\mbox{\bf w}}=(1-w_1,\ldots,1-w_{N_L})~,
\nonumber
\end{eqnarray}
The last equality follows from the charge conjugation relations
\eqn{charge conjugation massless}.
In the limit $z\rightarrow 0$ we obtain the Witten index
\begin{eqnarray}
\lim_{z\,\rightarrow\, 0} \, {\cal G}^{(\widetilde{\msc{R}})}_{I,\ss,\msc{\bf w}}(\tau,z) \equiv
{\cal I}_{I,\ss,\msc{\bf w}} ~,
\label{cG WI}
\end{eqnarray}
which can be evaluated by using the formulas \eqn{WI minimal}
and \eqn{Witten index}.
The elliptic genus is then written as
\begin{eqnarray}
&&{\cal Z}(\tau,z) = \frac{1}{N} \sum_{I,\ss,\msc{\bf w}}\,\a(\mbox{\bf s}){\cal I}_{I,\widehat{\msc{\bf s}},
\widehat{\msc{\bf w}}}
{\cal G}^{(\widetilde{\msc{R}})}_{I,\ss,\msc{\bf w}}(\tau,z)~,
\label{elliptic genus}
\end{eqnarray}
where we set
\begin{eqnarray}
&& \a(\mbox{\bf s}) = \prod_{j=1}^{N_L} \,a(s_j)~, ~~~
a(s_j) = \left\{
\begin{array}{ll}
1&~~ K_j+1 \leq s_j \leq N_j+K_j-1 \\
\frac{1}{2} & ~~ s_j=K_j,\, N_j+K_j.
\end{array}
\right. ~~
\end{eqnarray}
In the cases of $CY_3$ $(\mbox{\bf n}=3)$ the elliptic genera
are shown to have a particularly simple form;
\begin{eqnarray}
{\cal Z}(\tau,z) &=& \frac{\chi}{2}\frac{{\theta}_1(\tau,2z)}{{\theta}_1(\tau,z)}~,
\label{elliptic genus CY3} \\
\chi &= & \frac{1}{N}\sum_{I,\ss,\msc{\bf w}}\, \a(\mbox{\bf s}) {\cal I}_{I,\ss,\msc{\bf w}}
{\cal I}_{I,\widehat{\msc{\bf s}},\widehat{\msc{\bf w}}} ~.
\end{eqnarray}
Let us next try to exhibit more explicit forms of
elliptic genera.
To this end it is useful to recall the formula for the
elliptic genus of the minimal model \cite{Witten-E2}
\begin{eqnarray}
{\cal Z}_{k}(\tau,z)=
\sum_{\ell=0}^{k}\,\ch{(\widetilde{\msc{R}})}{\ell,\ell+1}(\tau,z)
=- \sum_{\ell=0}^{k}\,\ch{(\widetilde{\msc{R}})}{\ell,-(\ell+1)}(\tau,z)
=\frac{{\theta}_1(\tau, \frac{k+1}{k+2}z)}
{{\theta}_1(\tau,\frac{1}{k+2} z)}~.
\label{elliptic genus minimal}
\end{eqnarray}
The corresponding formula for the $L_{N,K}$-sector is written as
\cite{ES-BH}\footnote
{The overall sign is opposite to that of \cite{ES-BH}.}
\begin{eqnarray}
{\cal Z}_{N,K}(\tau,z) &\equiv& \sum_{s=K}^{N+K}\,
a(s) {\chi_{\msc{\bf d}}}^{(\widetilde{\msc{R}})}(s,s-K;\tau,z) \nonumber\\
&\equiv & \left\lbrack{\cal K}_{2NK}\left(\tau,\frac{z}{N},0
\right) - \frac{1}{2}\Th{0}{NK}
\left(\tau,\frac{2z}{N}\right) \right\rbrack \,
\frac{i{\theta}_1(\tau,z)}{\eta(\tau)^3}~,
\label{cZ N K}
\end{eqnarray}
where ${\cal K}_{\ell}(\tau,\nu,\mu)$ is the level $\ell$ Appell function
\cite{Pol,STT} defined by
\begin{eqnarray}
{\cal K}_{\ell}(\tau,\nu,\mu) \equiv \sum_{m\in \msc{{\bf Z}}}\,
\frac{e^{i\pi m^2 \ell \tau +2\pi i m \ell\nu}}
{1-e^{2\pi i (\nu+\mu+m\tau)}}~.
\label{Appell}
\end{eqnarray}
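Since the summand in \eqn{Appell} decays rapidly for $\mbox{Im}\,\tau>0$, the Appell function can be evaluated numerically by truncating the sum over $m$. A minimal sketch (illustrative Python; the sample point $\tau=2i$, $\nu=0.1$ is an arbitrary choice in the upper half plane):

```python
import cmath

def appell_K(ell, tau, nu, mu, cutoff):
    """Truncation of the level-ell Appell function
    K_ell(tau,nu,mu) = sum_m e^{i pi m^2 ell tau + 2 pi i m ell nu}
                       / (1 - e^{2 pi i (nu + mu + m tau)}),
    summing over |m| <= cutoff."""
    total = 0j
    for m in range(-cutoff, cutoff + 1):
        num = cmath.exp(1j * cmath.pi * m * m * ell * tau
                        + 2j * cmath.pi * m * ell * nu)
        den = 1 - cmath.exp(2j * cmath.pi * (nu + mu + m * tau))
        total += num / den
    return total

# the Gaussian factor makes the truncation converge very quickly
tau, nu = 2j, 0.1
v5 = appell_K(2, tau, nu, 0.0, 5)
v10 = appell_K(2, tau, nu, 0.0, 10)
assert abs(v5 - v10) < 1e-12
```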
The following identity is quite useful;
\begin{eqnarray}
\sum_{s=K}^{N+K-1} \, e^{2\pi i \frac{(s-K)b}{N}}\,
{\chi_{\msc{\bf d}}}^{(\widetilde{\msc{R}})}(s,s-K+2Ka;\tau,z) &=&
q^{\frac{K}{N} a^2} y^{\frac{2K}{N}a}\,
{\cal K}_{2NK}\left(\tau,\frac{z+a\tau+b}{N},0\right)\,
\frac{i{\theta}_1(\tau,z)}{\eta(\tau)^3}~,
\nonumber\\
&& \hspace{4cm} ~~~ (a,b \in \mbox{{\bf Z}}_N)~,
\label{relation chid cK}
\end{eqnarray}
or conversely,
\begin{eqnarray}
{\chi_{\msc{\bf d}}}^{(\widetilde{\msc{R}})}(s,s-K+2Ka;\tau,z) &=& \frac{1}{N}
\sum_{b \in \msc{{\bf Z}}_N} \, e^{-2\pi i \frac{(s-K)b}{N}}\,
q^{\frac{K}{N} a^2} y^{\frac{2K}{N}a}\,
{\cal K}_{2NK}\left(\tau,\frac{z+a\tau+b}{N},0\right)\,
\frac{i{\theta}_1(\tau,z)}{\eta(\tau)^3}~, \nonumber\\
&& \hspace{3.5cm} ~~~ (a \in \mbox{{\bf Z}}_N,~~ K\leq s \leq N+K-1)~.
\label{relation chid cK 2}
\end{eqnarray}
One may regard these relations as a non-compact analogue of the formula
\eqn{elliptic genus minimal}.
More details on the relation between extended characters and Appell functions
are discussed in Appendix E.
Our goal is to derive the ``orbifold forms'' of elliptic genera
like those given in \cite{KYY}.
To this end we have to slightly modify \eqn{cZ N K}
except for the cases of $N_M=N_L=1$ treated in \cite{ES-BH},
so as to correctly reproduce \eqn{elliptic genus}.
We define
\begin{eqnarray}
\widehat{{\cal Z}}_{N,K}(\tau,z) &\equiv& \sum_{s=K}^{N+K}\,a(s)
{\chi_{\msc{\bf d}}}^{(\widetilde{\msc{R}})}(s,s-K-2Kn(s);\tau,z) \nonumber\\
&\equiv& \left\lbrack \frac{1}{N}\sum_{s=K}^{N+K-1}\,\sum_{b\in \msc{{\bf Z}}_N}\,
e^{-2\pi i \frac{(s-K)b}{N}} q^{\frac{K}{N}n(s)^2}y^{-\frac{2K}{N}n(s)}\,
{\cal K}_{2NK}\left(\tau, \frac{z-n(s)\tau+b}{N},0\right)
\right.
\nonumber\\
&& \hspace{2cm} \left.
-\frac{1}{2} \Th{0}{NK}\left(\tau,\frac{2z}{N}\right)
\right\rbrack \, \frac{i{\theta}_1(\tau,z)}{\eta(\tau)^3}~,
\label{hat cZ N K}
\end{eqnarray}
where $n(s)$ is defined uniquely by the condition
\begin{eqnarray}
&& Kn(s) \equiv s-K ~(\mbox{mod}\, N)~,~~~ n(s) \in \mbox{{\bf Z}}_N~.
\label{ns}
\end{eqnarray}
(This is well-defined for each $s$, since
we are assuming that $N$ and $K$ are relatively prime.)
In the special case $K=1$, we simply have $n(s)= s-1$.
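Since $N$ and $K$ are relatively prime, the defining condition \eqn{ns} is solved by the modular inverse of $K$ modulo $N$; an illustrative sketch:

```python
def n_of_s(s, N, K):
    """Solve K*n(s) = s - K (mod N) for n(s) in Z_N,
    assuming gcd(N, K) = 1 so that K is invertible mod N."""
    return ((s - K) * pow(K, -1, N)) % N

# the special case K = 1 reduces to n(s) = s - 1 (mod N)
assert n_of_s(4, 7, 1) == 3

# general case: check the defining congruence over the range K <= s <= N+K-1
N, K = 7, 3
for s in range(K, N + K):
    assert (K * n_of_s(s, N, K) - (s - K)) % N == 0
```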
The elliptic genus \eqn{elliptic genus} is now rewritten as
the orbifold form;
\begin{eqnarray}
{\cal Z}(\tau,z) &=& \frac{1}{N} \sum_{a,b\in\msc{{\bf Z}}_N}\, (-1)^{(N_M+N_L)(a+b)}
q^{\frac{\msc{\bf n}}{2}a^2} y^{\msc{\bf n} a}\,
\prod_{i=1}^{N_M} {\cal Z}_{k_i}(\tau,z+a\tau+b)\, \prod_{j=1}^{N_L}
\widehat{{\cal Z}}_{N_j,K_j}(\tau,z+a\tau+b)~. \nonumber\\
&&
\label{elliptic genus orbifold}
\end{eqnarray}
For example, in the case of compactification
on $ALE(A_{n-1})$ spaces ({\em i.e.}
$N_M=N_L=1$, $k=n-2$, $N=n$, $K=1$),
the formula \eqn{elliptic genus orbifold} is reduced to
\begin{eqnarray}
{\cal Z}_{ALE(A_{n-1})}(\tau,z) &=& \sum_{\ell=0}^{n-2}\, \sum_{r\in \msc{{\bf Z}}_n}\,
\ch{(\widetilde{\msc{R}})}{\ell,\ell+1-2r}(\tau,z) \, {\chi_{\msc{\bf d}}}^{(\widetilde{\msc{R}})}(\ell+2,
\ell+3-2(\ell+2)+2r;\tau,z) \nonumber\\
&=& \sum_{\ell=0}^{n-2}\, \sum_{r\in \msc{{\bf Z}}_n}\,
\ch{(\widetilde{\msc{R}})}{\ell,-(\ell+1)-2r}
(\tau,z)\, {\chi_{\msc{\bf d}}}^{(\widetilde{\msc{R}})}(\ell+2,\ell+1+2r;\tau,z)~,
\label{elliptic genus ALE}
\end{eqnarray}
which reproduces the one given in \cite{ES-BH}.
~
\subsection{Massless Closed String Spectra}
~
It is an important task to analyze the massless closed string spectrum.
We can solve this problem in a manner similar to that used in
the compact Gepner models, since we have already constructed
the conformal blocks in the closed string sector.
Massless states correspond to the (anti-) chiral primary states
of conformal weights $h=\tilde{h}=1/2$.
As we discuss below,
basic aspects of massless spectra in the non-compact models
are summarized as follows:
\begin{itemize}
\item In the cases of $\hat{c}\neq 2$, there exists
at most one chiral primary of the $(a,c)$ (or $(c,a)$)-type
with $h=\tilde{h}=1/2$ in each spectral flow orbit \eqn{cG},
and none of the $(c,c)$ (or $(a,a)$)-type exist \footnote
{Of course, in mirror models where $L_{N_j,K_j}$-sectors
are realized by ${\cal N}=2$ Liouville theories,
the situation is reversed: no chiral primaries of the $(c,a)$ and $(a,c)$-types exist, while
$(c,c)$ and $(a,a)$-types are possible.}.
\item In the cases of $\hat{c}=2$, we have at most a quartet
of the $(c,c)$, $(a,a)$, $(c,a)$ and $(a,c)$-type primaries
with $h=\tilde{h}=1/2$ in each spectral flow orbit.
\end{itemize}
This fact implies that at our non-compact Gepner points for
$CY_3$ or $CY_4$ moduli space there exist
only the deformations of K\"{a}hler structure but not the deformations of
complex structure. On the other hand,
in the case of $K3$ surfaces ($\hat{c}=2$)
the superconformal symmetry is extended to ${\cal N}=4$
and the above massless states compose
the spin (1/2,1/2) representation of $SU(2)_L \times SU(2)_R$
of the ${\cal N}=4$ SCA. From the space-time point of view
the quartet corresponds to a scalar (tensor)-multiplet of $(1,1)$ ($(2,0)$)
SUSY in 6 dimensions.
Let us consider the $(a,c)$-type massless states:
the analysis for the $(c,a)$-type is parallel.
We start by working on the left-moving sector.
The anti-chiral states are described by the conditions;
\begin{list}{}
\item $M_{k_i}$-sector :
\begin{eqnarray}
m_i= -\ell_i~, ~~(0\leq \ell_i \leq k_i)~,
~~~ \mbox{$U(1)$-charge }~ Q_i= -\frac{\ell_i}{k_i+2}~,
\label{anti-chiral min}\end{eqnarray}
\item $L_{N_j,K_j}$-sector :
\begin{eqnarray}
&& s_j=K_jn_j+N_jw_j+2K_j~,~~(K_j \leq s_j \leq N_j+K_j)~,
~~~
m_j=K_jn_j+N_jw_j~, \nonumber\\
&& \mbox{$U(1)$-charge }~ Q_j= \frac{K_jn_j+N_jw_j-N_j}{N_j}~,
\label{anti-chiral coset}\end{eqnarray}
\end{list}{}
and we must impose
\begin{eqnarray}
-\sum_i \frac{\ell_i}{k_i+2} + \sum_j \frac{K_jn_j+N_jw_j-N_j}{N_j} =-1~.
\label{cond anti-chiral}
\end{eqnarray}
We now recall that the left-right pairing of states in our construction
is given as follows (see \eqn{cG});
\begin{equation}
\begin{array}{ccc}
\mbox{left-moving} & \null & \mbox{right-moving} \\
(\ell_i,m_i) &\Longleftrightarrow &(\tilde{\ell}_i=\ell_i,\tilde{m}_i=m_i)\\
(s_j,m_j=K_jn_j+N_jw_j) & \Longleftrightarrow &
(\tilde{s}_j=N_j+2K_j-s_j,\tilde{m}_j=K_jn_j-N_jw_j+N_j)
\label{left-right pairing}
\end{array}
\end{equation}
Thus, before applying the spectral flow, the right-moving state
corresponding to (\ref{anti-chiral min}), (\ref{anti-chiral coset})
has the following quantum numbers
\begin{eqnarray}
(\tilde{\ell}_i, \tilde{m}_i)
=(\ell_i, -\ell_i) ~, ~~~ (\tilde{s}_j, \tilde{m}_j)
= (-K_jn_j-N_jw_j+N_j, K_jn_j-N_jw_j+N_j)~.
\label{right mover 1}
\end{eqnarray}
They have the total $U(1)$-charge
\begin{eqnarray}
-\sum_i \frac{\ell_i}{k_i+2} + \sum_j \frac{K_jn_j-N_jw_j+N_j}{N_j}~,
\end{eqnarray}
which is an integer because of the constraint \eqn{cond anti-chiral}.
Now, we look for chiral or anti-chiral
primaries with $\tilde{h}=1/2$ in the orbit of spectral flow starting from
the states \eqn{right mover 1}.
The flow $\bar{z}\,\rightarrow\,\bar{z}+\bar{\tau}$
acts on the quantum numbers as
\begin{eqnarray}
\tilde{m}_i~\longrightarrow~\tilde{m}_i-2~, ~~~ \tilde{m}_j~\longrightarrow~\tilde{m}_j+2K_j~,
\end{eqnarray}
and it has periodicities $k_i+2$ and $N_j$ respectively in the $M_{k_i}$ and
$L_{N_j,K_j}$-sectors.
We find that only the orbits satisfying
the condition
\begin{eqnarray}
{}^{\exists} r \in \mbox{{\bf Z}}_N~, ~ \mbox{s.t.}~
\left\{
\begin{array}{l}
\ell_i \equiv r~(\mbox{mod}\, k_i+2)~, ~~ {}^{\forall}i \\
n_j \equiv r ~ (\mbox{mod}\, N_j)~,~~ {}^{\forall}j
\end{array}
\right.
\label{r constraint}
\end{eqnarray}
contains a (unique) chiral state
with the total $U(1)$-charge $Q_{\msc{tot}}=1$;
\begin{eqnarray}
(\tilde{\ell}_i,\tilde{m}_i)=(\ell_i,\ell_i)~,~~~
(\tilde{s}_j,\tilde{m}_j)= (-K_j n_j +N_j(1-w_j),-K_j n_j +N_j(1-w_j))~,
~({}^{\forall}i,{}^{\forall}j).
\label{chiral state}
\end{eqnarray}
It also contains
a (unique) anti-chiral state
with $Q_{\msc{tot}}=1-\hat{c}$;
\begin{eqnarray}
&&(\tilde{\ell}_i,\tilde{m}_i)=(\ell_i,\ell_i+2) \cong
(k_i-\ell_i, \ell_i-k_i)~,~~~ \nonumber\\
&&(\tilde{s}_j,\tilde{m}_j)
= (-K_j n_j +N_j(1-w_j),-K_j (n_j+2) +N_j(1-w_j))~,
~({}^{\forall}i,{}^{\forall}j)~.
\label{anti-chiral state}
\end{eqnarray}
In fact, \eqn{chiral state}, \eqn{anti-chiral state}
are generated by the spectral flows
$\bar{z}\,\rightarrow\,\bar{z}-r \bar{\tau}$,
$\bar{z}\,\rightarrow\,\bar{z}-(r+1)\bar{\tau}$
from \eqn{right mover 1} respectively if \eqn{r constraint} holds.
Note that in the cases of $\hat{c}=3,4$
only \eqn{chiral state} yields a massless state, while both of
\eqn{chiral state}, \eqn{anti-chiral state} become
massless states at $\hat{c}=2$.
Therefore, the spectrum of massless closed string states
is given by the solutions $(\ell_i, n_j, w_j) $ ($0\leq \ell_i \leq
k_i$, $n_j\in \mbox{{\bf Z}}_{N_j}$, $w_j \in \mbox{{\bf Z}}_{2K_j}$)
of the constraints
\begin{eqnarray}
&& \left(\sum_i \frac{\ell_i}{k_i+2}-\sum_j\frac{K_j n_j}{N_j}\right)
= 1+ \sum_j(w_j-1) ~,
\label{charge constraint}
\\
&& -K_j \leq K_jn_j+N_jw_j \leq N_j-K_j~,
\label{s constraint}
\end{eqnarray}
as well as \eqn{r constraint}.
The second condition \eqn{s constraint} follows from the constraint
$K_j \leq s_j \leq N_j+K_j$.
Obviously, the counting of $(c,a)$-type chiral states can be
carried out in the same way, yielding an equal number
of massless states. We have thus shown the characteristic features of the
massless spectrum announced above.
We now present concrete examples that have clear geometrical
interpretations.
~
\noindent
{\bf 1. Cases of one Liouville field $N_L=1$ : }
We first present examples
with $N_M=N_L=1$.
The condition \eqn{r constraint} simply yields $\ell_1=n_1(\equiv \ell)$
in these cases.
~
\begin{description}
\item[1.1. ALE($A_{n-1}$) : $M_{n-2} \otimes L_{n,1}$]
~
In this case
the constraint \eqn{charge constraint} simply gives $w_1=0$, and
\eqn{s constraint} is equivalent to
\begin{eqnarray}
-1 \leq \ell \leq n-1~.
\end{eqnarray}
We thus conclude that each (anti-)chiral primary state
in the range $0\leq \ell \leq n-2$ in $M_{n-2}$
can be paired up into massless states
of the $(c,c)$, $(a,a)$, $(c,a)$ and
$(a,c)$-types in the $M_{n-2} \otimes L_{n,1}$ theory.
\item[1.2. $CY_4$ ($A_{n-1}$) : $M_{n-2}\otimes L_{n,n+1}$]
~
The condition \eqn{charge constraint} is solved as
\begin{eqnarray}
\ell = -w_1~, ~~~ w_1 \in \mbox{{\bf Z}}_{2(n+1)}.
\end{eqnarray}
\eqn{s constraint} then gives
\begin{eqnarray}
-(n+1) \leq \ell \leq -1~,
\end{eqnarray}
which has no solution in the range $0\leq \ell \leq n-2$.
Therefore, we have no massless states in this case.
\item[1.3. $CY_3$ ($A_{n-1}$) : $M_{n-2}\otimes L'_{2n,n+2}$]
~
This case is non-trivial.
We have $N_1=2n$, $K_1=n+2$, which are not necessarily relatively prime.
As addressed before, we must
allow the half-integral winding numbers $w_1$, and impose the constraint
$n_1+2w_1 \in 2\mbox{{\bf Z}}$.
The constraint \eqn{charge constraint}
now leads to
\begin{eqnarray}
\ell = -2w_1~, ~~~
\left\{
\begin{array}{ll}
w_1\in \mbox{{\bf Z}}_{2(n+1)}~ & ~~ \ell~:~ \mbox{even} \\
w_1\in \frac{1}{2} + \mbox{{\bf Z}}_{2(n+1)}~ & ~~ \ell~:~\mbox{odd}
\end{array}
\right.
\end{eqnarray}
and \eqn{s constraint} gives
\begin{eqnarray}
-(n+2) \leq 2\ell \leq n-2~.
\end{eqnarray}
We thus find that the (anti-)chiral states in $M_{n-2}$
with
\begin{eqnarray}
\ell = 0, 1, \ldots, \left\lbrack \frac{n-2}{2} \right\rbrack
\label{massless singular CY3}
\end{eqnarray}
produce massless states of the $(c,a)$ and $(a,c)$-types.
\end{description}
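The counting in the three examples above can be checked mechanically. The following Python sketch (not part of the construction itself; the ranges simply transcribe \eqn{s constraint} intersected with the minimal-model range $0\leq \ell \leq n-2$) reproduces the counts $n-1$, $0$ and $\left\lbrack \frac{n-2}{2}\right\rbrack+1$:

```python
# Illustrative check of the massless-state counting in the N_L = 1 examples.
# The ranges encode \eqn{s constraint} intersected with 0 <= l <= n-2
# coming from the minimal-model sector M_{n-2}.

def count_massless(n, case):
    """Count (anti-)chiral primaries l of M_{n-2} surviving the constraints."""
    labels = range(0, n - 1)          # 0 <= l <= n-2
    if case == "ALE":                 # 1.1 : -1 <= l <= n-1
        return sum(1 for l in labels if -1 <= l <= n - 1)
    if case == "CY4":                 # 1.2 : -(n+1) <= l <= -1
        return sum(1 for l in labels if -(n + 1) <= l <= -1)
    if case == "CY3":                 # 1.3 : -(n+2) <= 2l <= n-2
        return sum(1 for l in labels if -(n + 2) <= 2 * l <= n - 2)
    raise ValueError(case)

for n in range(2, 10):
    assert count_massless(n, "ALE") == n - 1
    assert count_massless(n, "CY4") == 0
    assert count_massless(n, "CY3") == (n - 2) // 2 + 1
```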
~
As already mentioned in \cite{ES-BH}, these aspects of massless states
in the above three examples are consistent with the spectra of
{\em normalizable\/} chiral operators describing the moduli of vacua
discussed in \cite{GVW,GKP,Pelc}. Especially,
the third example correctly reproduces the spectrum of
scaling operators in the ${\cal N}=2$ $SCFT_4$ of Argyres-Douglas points
\cite{AD}.
We also point out that the massless
spectra here are consistent with the ones deduced from the ``LSZ poles"
in correlation functions presented in \cite{AGK}.
~
\noindent
{\bf 2. Cases of two Liouville fields $N_L=2$ : }
\begin{description}
\item[2.1. $\hat{c}=3$, $M_{n-2}\otimes L_{2n,1} \otimes L_{2n,1}$ : ]
~
The criticality condition is satisfied as
\begin{eqnarray}
\hat{c} = \frac{n-2}{n} + \left(1+\frac{1}{n}\right)+
\left(1+\frac{1}{n}\right) = 3~.
\end{eqnarray}
This type of superconformal system was first studied in \cite{Lerche}
and proposed as the superstring vacuum corresponding to the low energy
regime of the Seiberg-Witten theory with $SU(n)$ gauge group without matter.
Geometrically the theory is supposed to describe the space-time of
the $ALE(A_{n-1})$-fibration over $\mbox{{\bf C}} P^1$, or of the $n$ NS5-branes
wrapped around $\mbox{{\bf C}} P^1$ in the T-dual picture.
We have two possibilities of satisfying \eqn{r constraint};
\begin{itemize}
\item (i) $\ell_1=n_1=n_2 \equiv r$, $0\leq r \leq n-2$,
\item (ii) $\ell_1+n=n_1=n_2 \equiv r$, $n\leq r \leq 2n-2$.
\end{itemize}
In the case (i), \eqn{charge constraint} gives $w_1+w_2=1$,
and \eqn{s constraint} leads to
\begin{eqnarray}
-1 \leq r +2n w_i \leq 2n-1~,~~(i=1,2)~.
\label{constraint ex 2}
\end{eqnarray}
There are no solutions to these constraints.
In the case (ii), \eqn{charge constraint} gives $w_1+w_2=0$.
If (and only if) we set $w_1=w_2=0$, \eqn{constraint ex 2}
is satisfied for arbitrary $n\leq r \leq 2n-2$.
We thus find that
\begin{eqnarray}
\ell_1=0,1,\ldots, n-2~,~~
n_1=n_2=\ell_1+n~,~~ w_1=w_2=0~,~
\Longleftrightarrow~\mbox{massless states}~.
\end{eqnarray}
The $(c,a)$-type chiral fields also give an equal number of
massless states. These are identified as the
moduli $u_2,\ldots, u_{n}$ in the $SU(n)$ SW theory.
In particular, the marginal deformation for $\ell_1=0$
corresponds to the size of base $\mbox{{\bf C}} P^1$ as suggested in \cite{HK2}.
\item[2.2. $\hat{c}=3$, $M_{n-2} \otimes L_{n\mu,K_1} \otimes
L_{n\mu,K_2}$, $K_1+K_2=\mu$, $\mbox{G.C.D}\{K_i\}=1$ : ]
~
This is a natural generalization of the example {\bf 2.1}.
The criticality condition is satisfied as
\begin{eqnarray}
\hat{c} \equiv \left(1-\frac{2}{n}\right)
+ \left(1+\frac{2K_1}{N_1}\right)
+ \left(1+\frac{2K_2}{N_2}\right) =
3 + \frac{2}{N}\left(-\mu + K_1+K_2\right) =3~.
\end{eqnarray}
In this case, using the relation $K_1+K_2=\mu$, we again obtain
\begin{eqnarray}
\ell_1=0,1,\ldots, n-2~,~~
n_1=n_2=\ell_1+n~,~~ w_1=w_2=0~,~
\Longleftrightarrow~\mbox{massless states}~.
\end{eqnarray}
This type of string vacuum is identified as the non-compact $CY_3$
with the structure of an $ALE(A_{n-1})$-fibration over
the weighted projective space $W\mbox{{\bf C}} P^1 \left\lbrack K_1,K_2\right\rbrack$.
\item[2.3. $\hat{c}=4$, $M_{n-2} \otimes L'_{4n,n+2} \otimes L'_{4n,n+2}$ : ]
~
We next consider a more subtle example.
The criticality condition is satisfied as
\begin{eqnarray}
\hat{c} = \frac{n-2}{n} + \left(1+\frac{n+2}{2n}\right)+
\left(1+\frac{n+2}{2n}\right) = 4~.
\end{eqnarray}
Similarly to the case of $CY_3 (A_{n-1})$, we have to allow
$\displaystyle w_j \in \frac{1}{4}\mbox{{\bf Z}}$ and assume \eqn{m CY3 fiber}
in each $L'_{4n,n+2}$-sector
in order to obtain the expected spectrum.
As in the example {\bf 2.1}, massless states are possible only for
\begin{eqnarray}
\ell_1+n = n_1=n_2 = r~, ~~~ 0\leq r \leq 2n-2~, ~~~ r+4w_i=0~,
\end{eqnarray}
and \eqn{s constraint} gives us
\begin{eqnarray}
\ell_1 = 0, 1, \ldots, \left\lbrack \frac{n-2}{2} \right\rbrack
~ \Longleftrightarrow~ \mbox{massless states}.
\end{eqnarray}
This spectrum is the same as $CY_3$ ($A_{n-1}$).
We propose that this model is identified as
$CY_3 (A_{n-1})$-fibration on $\mbox{{\bf C}} P^1$.
The generalizations similar to the example {\bf 2.2}
are straightforward; $M_{n-2} \otimes L'_{2n\mu, K_1(n+2)} \otimes
L'_{2n\mu, K_2(n+2)}$, $K_1+K_2=\mu$, $\mbox{G.C.D}\{K_i\}=1$.
This generalization is expected to describe the $CY_3(A_{n-1})$-fibration over
$W\mbox{{\bf C}} P^1 \left\lbrack K_1,K_2\right\rbrack$ and we again obtain
the same massless spectrum \eqn{massless singular CY3}.
\end{description}
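The criticality conditions quoted in the examples {\bf 2.1} and {\bf 2.3} can be verified with exact rational arithmetic. The helper below is an illustrative check only, using $\hat{c}=k/(k+2)$ for the minimal model $M_k$ and $\hat{c}=1+2K/N$ for the factor $L_{N,K}$, as in the text:

```python
# Exact-arithmetic sanity check of the criticality conditions in
# examples 2.1 and 2.3 (illustrative only).
from fractions import Fraction as F

def c_hat_minimal(k):          # \hat{c} of the minimal model M_k (here k = n-2)
    return F(k, k + 2)

def c_hat_liouville(N, K):     # \hat{c} of the SL(2;R)/U(1) factor L_{N,K}
    return 1 + F(2 * K, N)

for n in range(2, 12):
    # example 2.1 : M_{n-2} x L_{2n,1} x L_{2n,1}  ->  \hat{c} = 3
    assert c_hat_minimal(n - 2) + 2 * c_hat_liouville(2 * n, 1) == 3
    # example 2.3 : M_{n-2} x L'_{4n,n+2} x L'_{4n,n+2}  ->  \hat{c} = 4
    assert c_hat_minimal(n - 2) + 2 * c_hat_liouville(4 * n, n + 2) == 4
```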
~
\noindent
{\bf 3. Cases of three Liouville fields $N_L=3$ : }
The $\hat{c}=4$ vacua are the only possibility for these cases.
\begin{description}
\item[3.1. $M_{n-2} \otimes L_{3n,1}\otimes L_{3n,1} \otimes L_{3n,1}$ : ]
~
The criticality condition is satisfied as
\begin{eqnarray}
\hat{c}= \frac{n-2}{n} + 3\times \left(1+ \frac{2}{3n}\right) =4~.
\end{eqnarray}
The massless states are possible only for
\begin{eqnarray}
\ell_1+2n = n_1=n_2=n_3 = r~, ~~~ 2n \leq r \leq 3n-2~, ~~~ w_1=w_2=w_3=0~.
\end{eqnarray}
\eqn{s constraint} gives us
\begin{eqnarray}
\ell_1= 0, 1, \ldots, n-2~ \Longleftrightarrow~ \mbox{massless states}~,
\label{massless ALE fiber P2}
\end{eqnarray}
which is the same spectrum as that of $ALE(A_{n-1})$.
However, in contrast to the ALE case, we can only have the $(a,c)$ and
$(c,a)$-type massless chiral states.
This model is identified as the $ALE(A_{n-1})$-fibration over $\mbox{{\bf C}} P^2$
\cite{HK2}.
The generalization similar to the example {\bf 2.2} is also straightforward;
$M_{n-2} \otimes L_{n\mu,K_1} \otimes L_{n\mu, K_2} \otimes L_{n\mu,K_3}$,
$K_1+K_2+K_3=\mu$, $\mbox{G.C.D}\{K_i\}=1$.
This model is identified as the $ALE(A_{n-1})$-fibration over
$W \mbox{{\bf C}} P^2 \left\lbrack K_1,K_2,K_3\right\rbrack$ and
we again obtain the same massless spectrum \eqn{massless ALE fiber P2}.
\end{description}
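As a further consistency check of example {\bf 3.1} and its generalization, $\hat{c}=4$ can be verified with exact rationals. The particular choice $(K_1,K_2,K_3)=(1,2,4)$, $\mu=7$ below is an arbitrary illustration satisfying $K_1+K_2+K_3=\mu$:

```python
# Illustrative check that the N_L = 3 models above are critical, i.e.
# \hat{c} = 4, using \hat{c} = k/(k+2) for M_k and 1 + 2K/N for L_{N,K}.
from fractions import Fraction as F

def c_hat(n, liouville_factors):
    """Total \hat{c} of M_{n-2} tensored with the factors L_{N,K}."""
    total = F(n - 2, n)
    for (N, K) in liouville_factors:
        total += 1 + F(2 * K, N)
    return total

for n in range(2, 10):
    # 3.1 : M_{n-2} x L_{3n,1} x L_{3n,1} x L_{3n,1}
    assert c_hat(n, [(3 * n, 1)] * 3) == 4
    # generalization with K_1+K_2+K_3 = mu; e.g. (K_1,K_2,K_3) = (1,2,4)
    mu = 7
    assert c_hat(n, [(n * mu, K) for K in (1, 2, 4)]) == 4
```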
~
\subsection{Notes on Geometrical Interpretations}
~
Let us here clarify our geometrical interpretation of
the string vacua considered above
as non-compact Calabi-Yau spaces.
We first recall the familiar CY/LG correspondence:
\begin{eqnarray}
X_1^{r_1}+ \cdots + X_{n+2}^{r_{n+2}} =0~,~~~
\mbox{in}~ W\mbox{{\bf C}} P_{n+1} \left\lbrack \frac{1}{r_1}, \ldots, \frac{1}{r_{n+2}}
\right\rbrack~,
\label{CY general}
\end{eqnarray}
defines a Calabi-Yau $n$-fold if $\displaystyle \sum_{i=1}^{n+2} \frac{1}{r_i} =1$,
and is equivalent to the LG orbifold
defined by the superpotential
$W(\mbox{{\bf X}}_i) \equiv \mbox{{\bf X}}_1^{r_1} + \cdots + \mbox{{\bf X}}_{n+2}^{r_{n+2}}$,
where $\mbox{{\bf X}}_i$ denotes the chiral superfields.
If some of the $r_i$ are negative, the corresponding Calabi-Yau space
becomes non-compact.
In fact, such an LG interpretation was
the starting point for the CFT descriptions of
non-compact Calabi-Yau spaces \cite{GV,OV,GKP,Lerche},
and was further refined from the viewpoint of mirror symmetry in
\cite{HK2}.
As an illustration, we consider the example {\bf 2.1} :
$\hat{c}=3$, $M_{n-2}\otimes L_{2n,1} \otimes L_{2n,1}$.
The corresponding LG model is given as \cite{Lerche}
\begin{eqnarray}
W= \mbox{{\bf X}}^n + \mbox{{\bf Y}}_1^{-2n} + \mbox{{\bf Y}}_2^{-2n}~,
\label{LG Lerche}
\end{eqnarray}
which describes the non-compact CY space
\begin{eqnarray}
X^n + Y_1^{-2n} + Y_2^{-2n} + w_1^2 + w_2^2 =0~,~~~ \mbox{in}~
W \mbox{{\bf C}} P^4 \left\lbrack 2, -1, -1, n, n \right\rbrack~.
\label{CY Lerche}
\end{eqnarray}
This formula has the structure of an $ALE(A_{n-1})$-fibration over $\mbox{{\bf C}} P^1$.
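For a quasi-homogeneous hypersurface in a weighted projective space, the Calabi-Yau condition amounts to the sum of the weights being equal to the degree of the defining polynomial. A quick numerical check for \eqn{CY Lerche}, treating the negative weights formally as in the text (an illustrative aside):

```python
# Check that \eqn{CY Lerche} is quasi-homogeneous of degree 2n with the
# weights [2, -1, -1, n, n], and that the Calabi-Yau condition
# (sum of weights = degree) holds.

def is_calabi_yau(weights, degree):
    return sum(weights) == degree

for n in range(1, 10):
    weights = [2, -1, -1, n, n]
    # degrees of the monomials X^n, Y_1^{-2n}, Y_2^{-2n}, w_1^2, w_2^2:
    monomial_degrees = [n * 2, (-2 * n) * (-1), (-2 * n) * (-1), 2 * n, 2 * n]
    assert all(d == 2 * n for d in monomial_degrees)
    assert is_calabi_yau(weights, 2 * n)
```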
Following \cite{GKP}, one may
rewrite \eqn{LG Lerche} as the Liouville form \footnote
{Here we absorbed the cosmological constants $\mu_1$, $\mu_2$
by shifting zero-modes of $\mbox{{\bf X}}_1$, $\mbox{{\bf X}}_2$.};
\begin{eqnarray}
W= \mbox{{\bf X}}^n + e^{-\frac{1}{{\cal Q}} \msc{{\bf X}}_1} + e^{-\frac{1}{{\cal Q}} \msc{{\bf X}}_2}~,
\label{LG Lerche 2}
\end{eqnarray}
where we set ${\cal Q}=\sqrt{1/n}$.
This realization
amounts to expressing the
$L_{2n,1}$-sectors as the ${\cal N}=2$ Liouville theories, and the linear
dilaton is given by
\begin{eqnarray}
\Phi = -\frac{{\cal Q}}{2} \Re\,\left(\mbox{{\bf X}}_1+\mbox{{\bf X}}_2\right)~.
\end{eqnarray}
Equivalently, one may rewrite it as
\begin{eqnarray}
W= \mbox{{\bf X}}^n + e^{-n \msc{{\bf Z}}} (e^{\msc{{\bf Y}}}+ e^{-\msc{{\bf Y}}}) ~,
\label{LG Lerche 3}
\end{eqnarray}
where we set $\mbox{{\bf X}}_1=n{\cal Q} \mbox{{\bf Z}} + {\cal Q} \mbox{{\bf Y}}$, $\mbox{{\bf X}}_2= n{\cal Q} \mbox{{\bf Z}} -{\cal Q} \mbox{{\bf Y}}$.
In this parameterization the linear dilaton is
along the $\mbox{{\bf Z}}$-direction;
\begin{eqnarray}
\Phi = -\Re\,\mbox{{\bf Z}}~.
\end{eqnarray}
One can directly recover the geometry of $ALE(A_{n-1})$-fibration over
$\mbox{{\bf C}} P^1$;
\begin{eqnarray}
e^{Y}+ e^{-Y} + X^n+w_1^2+w_2^2=0~,
\end{eqnarray}
by integrating out the chiral superfield $\mbox{{\bf Z}}$ (with the rescaling
$\mbox{{\bf X}}\, \rightarrow\, e^{-\msc{{\bf Z}}}\mbox{{\bf X}}$) \cite{HK2}.
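The substitution $\mbox{{\bf X}}_1=n{\cal Q} \mbox{{\bf Z}} + {\cal Q} \mbox{{\bf Y}}$, $\mbox{{\bf X}}_2= n{\cal Q} \mbox{{\bf Z}} -{\cal Q} \mbox{{\bf Y}}$ taking \eqn{LG Lerche 2} into \eqn{LG Lerche 3} can be verified numerically at sample points (a trivial but reassuring check):

```python
# Numerical check that the change of variables turns the Liouville-form
# superpotential terms of \eqn{LG Lerche 2} into those of \eqn{LG Lerche 3}.
import math

def lhs(n, Q, Z, Y):
    X1 = n * Q * Z + Q * Y
    X2 = n * Q * Z - Q * Y
    return math.exp(-X1 / Q) + math.exp(-X2 / Q)   # terms of \eqn{LG Lerche 2}

def rhs(n, Z, Y):
    return math.exp(-n * Z) * (math.exp(Y) + math.exp(-Y))  # \eqn{LG Lerche 3}

for n in (2, 3, 5):
    Q = math.sqrt(1.0 / n)   # {\cal Q} = sqrt(1/n) as in the text
    for Z in (0.3, 1.1):
        for Y in (-0.7, 0.4):
            assert abs(lhs(n, Q, Z, Y) - rhs(n, Z, Y)) < 1e-12
```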
Similarly, the example {\bf 3.1}:
$\hat{c}=4$, $M_{n-2}\otimes L_{3n,1} \otimes L_{3n,1} \otimes L_{3n,1}$
can be identified as the LG theory with
\begin{eqnarray}
W= \mbox{{\bf X}}^n + e^{-n \msc{{\bf Z}}} (e^{\msc{{\bf Y}}_1}+ e^{\msc{{\bf Y}}_2}+e^{-\msc{{\bf Y}}_1-\msc{{\bf Y}}_2}) ~,
\label{LG P2 fibration}
\end{eqnarray}
where we have again the linear dilaton $\Phi = - \Re\, \mbox{{\bf Z}}$.
This model is shown to describe the $ALE(A_{n-1})$ fibration
over $\mbox{{\bf C}} P^2$ \cite{HK2}.
Other examples are also identified in a similar
manner, leading to the geometrical interpretations mentioned above.
For instance, the model {\bf 2.2} :
$\hat{c}=3$, $M_{n-2}\otimes L_{n\mu,K_1}\otimes L_{n\mu,K_2}$
($K_1+K_2=\mu$) is identified with the LG theory with
\begin{eqnarray}
W= \mbox{{\bf X}}^n + e^{-n \msc{{\bf Z}}} (e^{\msc{{\bf Y}} / K_1}+ e^{-\msc{{\bf Y}} / K_2}) ~, ~~~
\Phi = -\Re\,\mbox{{\bf Z}}~.
\label{LG WP1}
\end{eqnarray}
It corresponds to the non-compact Calabi-Yau geometry
\begin{eqnarray}
e^{Y / K_1}+ e^{-Y /K_2} + X^n+w_1^2+w_2^2=0~,
\end{eqnarray}
which has the structure of $ALE(A_{n-1})$-fibration over
$W\mbox{{\bf C}} P^1 \left\lbrack K_1,K_2\right\rbrack$.
~
\section{D-branes in Non-compact Models}
~
\subsection{Cardy States for Compact BPS D-branes}
~
We next study the open string sectors in the non-compact Gepner models.
It is well-known that the ${\cal N}=2$ superconformal symmetry allows
two types of boundary conditions \cite{OOY} \footnote
{A slightly different convention is often used
in literature; A-type brane, for instance, is defined by
$(G^{\pm}_r-i \eta \tilde{G}^{\mp}_{-r})\ket{B;\eta}=0~,~ (\eta =\pm 1)~.$
The relation to our convention is given by
$ \ket{B;+1}= \ket{B}~,~ \ket{B;-1}= (-1)^{F_R} \ket{B}~.$
};
\begin{eqnarray}
&& \mbox{\bf A-type}~~~ :~
(J_n-\tilde{J}_{-n})\ket{B}=0 ~,~~~(G^{\pm}_r-i\tilde{G}^{\mp}_{-r})\ket{B}=0~,
\label{A-type} \\
&& \mbox{\bf B-type}~~~ :~
(J_n+\tilde{J}_{-n})\ket{B}=0 ~,~~~(G^{\pm}_r-i\tilde{G}^{\pm}_{-r})\ket{B}=0~,
\label{B-type}
\end{eqnarray}
which are compatible with the ${\cal N}=1$ superconformal symmetry
\begin{eqnarray}
(L_n-\tilde{L}_{-n})\ket{B}=0~,~~~ (G_r-i\tilde{G}_{-r})\ket{B}=0~,
\label{N=1 gluing}
\end{eqnarray}
where $G=G^++G^-$ is the ${\cal N}=1$ supercurrent.
In the following we shall concentrate on the BPS D-branes
with compact world-volumes,
which play a fundamental role in the non-compact
Calabi-Yau manifolds.
Compact branes in $L_{N_j,K_j}$-sectors are described by the `class 1'
Cardy states in the classification of \cite{ES-L}, namely,
the ones associated to the extended graviton representations.
It can be shown that the consistency with the left-right pairing of the
closed string spectrum (\ref{left-right pairing})
allows
{\em only\/} the B-type boundary condition
for these compact branes \cite{RibS,IPT2,FNP}.
We will begin our analysis
by summarizing characteristic aspects of boundary states
in each sector\footnote
{Here we use notations slightly different from \cite{ES-L}.}.
We assume in the following that
all the Ishibashi states satisfy the B-type boundary condition.
~
\noindent
{\bf 1. minimal sector $M_k$ : } (see {\em e.g.} \cite{RS})
Let $\dket{\ell,m}^{(\msc{NS})}$
($\dket{\ell,m}^{(\msc{R})}$) be
the Ishibashi states in the NS (R) sector
characterized by the orthogonality condition ($\sigma = \mbox{NS},\, \mbox{R}$)
\begin{eqnarray}
&& \hspace{-1.3cm} {}^{(\sigma)}\dbra{\ell,m} e^{-\pi T H^{(c)}}
e^{2 \pi i z J_0}
\dket{\ell',m'}^{(\sigma')}
= \epsilon_{\sigma}\delta_{\sigma,\sigma'}
\left(\delta_{\ell,\ell'}\delta_{m,m'}
+\delta_{\ell,k-\ell'}\delta_{m,m'+k+2}
\right) \, \ch{(\sigma)}{\ell,m}(iT,z),
\end{eqnarray}
where $H^{(c)}=L_0+\tilde{L}_0-{c\over 12}$
is the closed string Hamiltonian
and $\ch{(\msc{NS})}{\ell,m}(\tau,z)$ ($\ch{(\msc{R})}{\ell,m}(\tau,z)$)
denotes the NS (R) character of the ${\cal N}=2$
minimal model for the primary field with
$\displaystyle h=\frac{\ell(\ell+2)-m^2}{4(k+2)}$, $\displaystyle Q=\frac{m}{k+2}$~
($\displaystyle h=\frac{\ell(\ell+2)-m^2}{4(k+2)}+\frac{1}{8}$,
$\displaystyle Q= \frac{m}{k+2}\pm \frac{1}{2}$). (See Appendix B.)
We have introduced an extra phase factor $\epsilon_{\sigma}=+1,-1$ for
$\sigma=\mbox{NS}, \mbox{R}$ respectively for convenience in imposing
the GSO projection for supersymmetric D-branes.
We also set $\dket{\ell,m}^{(\msc{NS})}=0$
($\dket{\ell,m}^{(\msc{R})}=0$), if $\ell+m \in 2\mbox{{\bf Z}}+1$ ($\ell+m \in 2\mbox{{\bf Z}}$).
The Cardy states are expressed as follows
($\sigma=\mbox{NS}, ~ \mbox{or} ~ \mbox{R}~$, and we set $L+M \in 2\mbox{{\bf Z}}$);
\begin{eqnarray}
&&\ket{L,M}^{(\sigma)} = \sum_{\ell=0}^k\,\sum_{m\in \msc{{\bf Z}}_{2(k+2)}}\,
C_{L,M}(\ell,m)\dket{\ell,m}^{(\sigma)}~, ~~~
C_{L,M}(\ell,m) = \frac{S^{L,M}_{\ell,m}}{\sqrt{S^{0,0}_{\ell,m}}}~,
\label{minimal Cardy states}
\end{eqnarray}
where $S^{\ell,m}_{\ell',m'}$ denotes the modular coefficients of
$\ch{(\msc{NS})}{\ell,m}(\tau,z)$ \eqn{minimal S}.
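The Cardy coefficients \eqn{minimal Cardy states} are easy to tabulate. The sketch below assumes the standard modular matrix of the ${\cal N}=2$ minimal model, $S^{L,M}_{\ell,m} \propto \sin\left(\pi(L+1)(\ell+1)/(k+2)\right) e^{i\pi Mm/(k+2)}$, whose normalization may differ from \eqn{minimal S} by an overall constant:

```python
# Illustrative tabulation of the Cardy coefficients
# C_{L,M}(l,m) = S^{L,M}_{l,m} / sqrt(S^{0,0}_{l,m}) of
# \eqn{minimal Cardy states}, with an assumed (standard) modular matrix.
import cmath
import math

def S(k, L, M, ell, m):
    """Assumed modular coefficient (up to overall normalization)."""
    return (math.sin(math.pi * (L + 1) * (ell + 1) / (k + 2))
            * cmath.exp(1j * math.pi * M * m / (k + 2)))

def cardy_coeff(k, L, M, ell, m):
    return S(k, L, M, ell, m) / math.sqrt(S(k, 0, 0, ell, m).real)

# e.g. k = 3, the Cardy state |L,M> = |1,1> (L + M even, as required);
# ch^{(NS)}_{l,m} vanishes for l + m odd, so those labels are skipped:
k = 3
coeffs = {(ell, m): cardy_coeff(k, 1, 1, ell, m)
          for ell in range(k + 1)
          for m in range(2 * (k + 2))
          if (ell + m) % 2 == 0}
```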
~
\noindent
{\bf 2. $SL(2;\mbox{{\bf R}})/U(1)$-sector $L_{N,K}$ : }
The relevant formulas of boundary states in this sector
are given in \cite{ES-L}.
The (B-type) Ishibashi states corresponding
to continuous and discrete representations
are characterized by the relations
\begin{eqnarray}
&& \hspace{-1cm}
{}^{(\sigma)}\dbrac{p,m} e^{-\pi T H^{(c)}} e^{ 2\pi i z J_0}
\dketc{p',m'}^{(\sigma')} = \epsilon_{\sigma}
\delta_{\sigma,\sigma'} \delta_{m,m'}^{(2NK)}
\delta(p-p')\, {\chi_{\msc{\bf c}}}^{(\sigma)}(p,m;iT,z) ~, \nonumber\\
&& \hspace{8cm} (p, p'>0)~,
\label{Ishibashi cont}\\
&& \hspace{-1cm}
{}^{(\sigma)}\dbrad{s,m} e^{-\pi T H^{(c)}} e^{ 2 \pi i z J_0}
\dketd{s',m'}^{(\sigma')} = \epsilon_{\sigma}
\delta_{\sigma,\sigma'} \delta_{m,m'}^{(2NK)}
\delta_{s,s'}\, {\chi_{\msc{\bf d}}}^{(\sigma)}(s,m;iT,z) ~,
\label{Ishibashi dis}
\end{eqnarray}
where the range of $s$ is $K+1 \leq s \leq N+K-1$.
We need not introduce Ishibashi states at the boundary values
$s=K,\,N+K$, as discussed in \cite{ES-L}.
We here set
\begin{eqnarray}
&& \hspace{-1cm}
\dketd{s,m}^{(\msc{NS})} = 0~,~\mbox{unless } s-m \in 2K \mbox{{\bf Z}}~, ~~~
\dketd{s,m}^{(\msc{R})} = 0~,~\mbox{unless } s-m \in K(2\mbox{{\bf Z}}+1)~.
\end{eqnarray}
The Cardy states necessary for our analysis are the class 1 states
given in \cite{ES-L} ($R\in \mbox{{\bf Z}}_N$)\footnote
{Recall that our closed string Hilbert space for the
$L_{N,K}$-piece includes the twisted sectors
generated by spectral flows unlike the `cigar CFT'
given in \cite{RibS,IPT2,FNP}.
Note that the D0-brane of cigar CFT corresponds to the boundary state
$
\sum_{R\in \msc{{\bf Z}}_{N}}
\ket{R}^{(\sigma)}
$
in our notation,
where the summation over $R \in \mbox{{\bf Z}}_{N}$
eliminates the twisted sectors.
}
;
\begin{eqnarray}
&& \hspace{-1cm}
\ket{R}^{(\sigma)} = \sum_{s=K+1}^{N+K-1}\, \sum_{m\in \msc{{\bf Z}}_{2NK}}\,
C^{(\sigma)}_R(s,m) \dketd{s,m}^{(\sigma)} +
\sum_{m\in \msc{{\bf Z}}_{2NK}}\, \int_0^{\infty}dp\, \Psi^{(\sigma)}_R(p,m)
\dketc{p,m}^{(\sigma)}~,
\label{Cardy L}\\
&& \hspace{-1cm}
C_R^{(\msc{NS})}(s,m) = C_R^{(\msc{R})}(s,m) = \left(\frac{2}{N}\right)^{1/2}
e^{-2\pi i \frac{Rm}{N}} \sqrt{\sin \left(\frac{\pi (s-K)}{N}\right)}~,
\label{C R} \\
&& \hspace{-1cm}
\Psi_R^{(\sigma)}(p,m) = \left(\frac{2^3 K}{N^3}\right)^{1/4}
e^{-2\pi i \frac{Rm}{N}}\,
\frac{\Gamma\left(\frac{1}{2}+\frac{m-\nu(\sigma)K}{2K}
+i\sqrt{\frac{N}{2K}}p\right)
\Gamma\left(\frac{1}{2}-\frac{m-\nu(\sigma)K}{2K}+i\sqrt{\frac{N}{2K}}p\right)}
{\Gamma\left(i\sqrt{\frac{2K}{N}}p\right) \Gamma(1+i\sqrt{\frac{2N}{K}}p)}~,
\label{Psi R}
\end{eqnarray}
where the symbol $\nu(\sigma)$ means $\nu(\mbox{NS})=0$, $\nu(\mbox{R})=1$.
Cylinder amplitudes of the class 1 Cardy states produce characters of identity representations;
\begin{eqnarray}
&& e^{\pi \hat{c} \frac{z^2}{T}}\cdot {}^{(\msc{NS})}\bra{R} e^{-\pi T H^{(c)}}
e^{ 2 \pi i z J_0} \ket{R'}^{(\msc{NS})}
= \chi_0^{(\msc{NS})}(2K(R'-R); it , z') ~, \\
&& e^{\pi \hat{c} \frac{z^2}{T}}\cdot {}^{(\msc{R})}\bra{R} e^{-\pi T H^{(c)}}
e^{2 \pi i z J_0} \ket{R'}^{(\msc{R})}
= \chi_0^{(\widetilde{\msc{NS}})}(2K(R'-R); it , z') ~, \\
&& \hspace{3cm} (T \equiv 1/t~,~~~ z'= -it z)~.
\nonumber
\end{eqnarray}
~
The desired Cardy states for our non-compact Gepner model
\eqn{nc gepner} should be constructed as
\begin{eqnarray}
&& \ket{B; \{L_i, M_i\}, \{R_i\}; \pm} =
\ket{B; \{L_i, M_i\}, \{R_i\}}^{(\msc{NS})} \pm
\ket{B; \{L_i, M_i\}, \{R_i\}}^{(\msc{R})}~, \nonumber\\
&& \ket{B; \{L_i,M_i\}, \{R_i\}}^{(\sigma)} = {\cal N}\,
P_{\msc{closed}}\, \left\lbrack \prod_i \ket{L_i,M_i}^{(\sigma)}
\otimes \prod_j \ket{R_j}^{(\sigma)}\right\rbrack~.
\label{Cardy ncg}
\end{eqnarray}
In the first equation $\pm$ refers to branes and anti-branes, respectively.
${\cal N}$ is an overall normalization constant
determined by the Cardy condition (its explicit value is not
important for our analysis), and
$P_{\msc{closed}}$ means the projection to the closed string Hilbert
space determined in our previous analysis. Namely, $P_{\msc{closed}}$
imposes the following two constraints on the Ishibashi states
$\dket{\ell_i,m_i}^{(\sigma)}$, $\dketc{p_j,K_jn_j+N_jw_j}^{(\sigma)}$,
and $\dketd{s_j,K_jn_j+N_jw_j}^{(\sigma)}$;
\begin{itemize}
\item Integrality of the total $U(1)$-charge in the $\mbox{NS}$-sector
;
\begin{eqnarray}
\sum_{i}\frac{m_i}{k_i+2} + \sum_j\frac{K_jn_j}{N_j} \in \mbox{{\bf Z}}~.
\label{U(1) integrality}
\end{eqnarray}
\item The consistency with the B-type boundary condition,
which gives essentially the same condition as \eqn{r constraint};
\begin{eqnarray}
{}^{\exists}r \in \mbox{{\bf Z}}_N~~ \mbox{s.t.}~
m_i \equiv -r ~(\mbox{mod}\, k_i+2)~, ~~~ n_j \equiv r ~ (\mbox{mod}\, N_j)~,~~
({}^{\forall}i,{}^{\forall}j)~.
\label{r constraint 2}
\end{eqnarray}
\end{itemize}
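The two constraints imposed by $P_{\msc{closed}}$ can be sketched as a simple membership test. The data layout below is hypothetical (label lists per sector), and for illustration we take the range of $r$ to be $\mbox{{\bf Z}}_N$ with $N$ realized as a least common multiple:

```python
# Schematic sketch of the projection P_closed on Ishibashi-state labels:
# (i) integrality of the total U(1)-charge, \eqn{U(1) integrality};
# (ii) existence of a common r, \eqn{r constraint 2}.
from fractions import Fraction as F
from functools import reduce
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def survives_projection(minimal, liouville):
    """minimal: list of (k_i, m_i); liouville: list of (N_j, K_j, n_j)."""
    # (i) total U(1)-charge must be an integer
    charge = sum(F(m, k + 2) for (k, m) in minimal) \
           + sum(F(K * n, N) for (N, K, n) in liouville)
    if charge.denominator != 1:
        return False
    # (ii) some r with m_i = -r (mod k_i+2) and n_j = r (mod N_j);
    # the search range N is taken as an lcm here (a convention chosen
    # for this sketch).
    N = reduce(lcm, [k + 2 for (k, _) in minimal] +
                    [Nj for (Nj, _, _) in liouville])
    return any(all((m + r) % (k + 2) == 0 for (k, m) in minimal) and
               all((n - r) % Nj == 0 for (Nj, _, n) in liouville)
               for r in range(N))

# ALE(A_{n-1}) with n = 4, i.e. M_2 x L_{4,1}: the labels m_1 = -1, n_1 = 1
# satisfy both constraints (with r = 1):
assert survives_projection([(2, -1)], [(4, 1, 1)])
```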
(\ref{r constraint 2}) is derived as follows:
due to the left-right pairing (\ref{left-right pairing}) in our construction,
a typical state in the closed string spectrum has the form
\begin{equation}
\begin{array}{l}
|\ell_i,m_i\rangle\otimes |\tilde{\ell_i}=\ell_i,\tilde{m_i}=m_i\rangle:
\hskip2cm \mbox{in the } M_{k _i}\mbox{-sector},\\
|s_j,m_j=K_jn_j+N_jw_j\rangle\otimes|\tilde{s_j}
=N_j+2K_j-s_j,\tilde{m_j}=K_jn_j-N_jw_j+N_j\rangle: \\
\hskip9cm \mbox{in the }L_{N_j,K_j} \mbox{-sector}
\end{array}
\label{initial state}
\end{equation}
The corresponding B-type boundary state has the quantum numbers
\begin{equation}
\begin{array}{l}
|\ell_i,m_i\rangle\otimes |\tilde{\ell_i}=\ell_i,\tilde{m_i}=-m_i\rangle, \\
|s_j,m_j=K_jn_j+N_jw_j\rangle\otimes|\tilde{s_j}=N_j+2K_j-s_j,\tilde{m_j}=-K_jn_j-N_jw_j+N_j\rangle.
\label{B-type state}\end{array}
\end{equation}
(\ref{B-type state}) can be obtained from (\ref{initial state}) by a spectral flow
in the right-moving sector if (\ref{r constraint 2}) is obeyed.
The analysis of massive Ishibashi states is similar.
Because of the second constraint \eqn{r constraint 2},
the Cardy states actually depend only on the sum of labels
\begin{eqnarray}
M \equiv \sum_i \mu_i M_i + 2 \sum_j \nu_j K_j R_j~ \in \mbox{{\bf Z}}_{2N}~,
\label{def M}
\end{eqnarray}
and thus we shall write them as $\ket{B;\{L_i\}, M ; \pm}$,
$\ket{B;\{L_i\}, M}^{(\sigma)}$ from here on.
It is now possible to count the numbers of compact D-branes
and compare them with those of massless states
in the models with $N_M=1$, $1\leq N_L \leq 3$.
See Table 1. We note that some of the D-branes
(cycles) do not have corresponding massless states, and thus the number
of D-branes exceeds that of the massless moduli.
This is because the ``would be" moduli
which are non-normalizable in the non-compact geometry
do not appear as massless states in the closed string spectrum.
\vskip1cm
{\small
\hspace{-2cm}
\begin{tabular}{|c|c|c|c|}
\hline
Models & Geometric Identification & No. of massless &
No. of basic \\
& & states & vanishing cycles \\
\hline
$M_{n-2} \otimes L_{n,1}$ & $ALE$ ($A_{n-1}$) & $n-1$ & $n$
\\ \hline
$M_{n-2}\otimes L'_{2n,n+2}$ & $CY_3$ ($A_{n-1}$) &
$\left\lbrack \frac{n-2}{2}\right\rbrack +1$
& $n$ \\ \hline
$M_{n-2}\otimes L_{n,n+1}$ & $CY_4$ ($A_{n-1}$) & 0 & $n$
\\ \hline
$M_{n-2}\otimes L_{2n,1} \otimes L_{2n,1}$ &
$ALE$ ($A_{n-1}$) fibration over $\mbox{{\bf C}} P^1$ & $n-1$ & $2n$ \\ \hline
$M_{n-2} \otimes L_{n\mu,K_1} \otimes L_{n\mu,K_2}$,
& $ALE$ ($A_{n-1}$) fibration &
& \\
$K_1+K_2=\mu$,
$\mbox{G.C.D}\{K_i\}=1$
& over $W \mbox{{\bf C}} P^1 \lbrack K_1,K_2 \rbrack$ & $n-1$ & $ \mu n$ \\
\hline
$M_{n-2} \otimes L'_{4n,n+2} \otimes L'_{4n,n+2}$ &
$CY_3$ ($A_{n-1}$) fibration over $\mbox{{\bf C}} P^1$
& $\left\lbrack \frac{n-2}{2}\right\rbrack +1 $ & $2n$ \\ \hline
$M_{n-2} \otimes L'_{2n\mu,K_1(n+2)} \otimes L'_{2n\mu,K_2(n+2)}$,
& $CY_3$ ($A_{n-1}$) fibration & & \\
$K_1+K_2=\mu$,
$\mbox{G.C.D}\{K_i\}=1$
& over $W \mbox{{\bf C}} P^1 \lbrack K_1,K_2 \rbrack$
& $\left\lbrack \frac{n-2}{2}\right\rbrack +1 $
& $ \mu n$ \\
\hline
$M_{n-2}\otimes L_{3n,1} \otimes L_{3n,1} \otimes L_{3n,1} $ &
$ALE$ ($A_{n-1}$) fibration over $\mbox{{\bf C}} P^2$ & $n-1$ & $3n$ \\
\hline
$M_{n-2} \otimes L_{n\mu,K_1} \otimes L_{n\mu,K_2}\otimes L_{n\mu,K_3} $,
& $ALE$ ($A_{n-1}$) fibration &
& \\
$K_1+K_2+K_3=\mu$,
$\mbox{G.C.D}\{K_i\}=1$
& over $W \mbox{{\bf C}} P^2 \lbrack K_1,K_2,K_3 \rbrack$ & $n-1$ & $ \mu n$ \\
\hline
\end{tabular}
}
\begin{center}
Table 1
\end{center}
(``No. of basic vanishing cycles'' means the number of
compact BPS branes $\ket{B; L, M}$ with $L=0$.
They are not necessarily homologically independent.
Generic cycles with $L \neq 0$ are expressed as
superpositions of the basic ones.)
~
For the sake of later use let us
derive the `charge integrality' condition for the R-sector.
This is obtained from (\ref{U(1) integrality}) by a 1/2-spectral flow:
first, the $U(1)$-charges of the $M_{k_i}$
and $L_{N_j,K_j}$-sectors are shifted under the flow as
\begin{eqnarray}
&&{m_i\over k_i+2}\, \Longrightarrow \,
{1\over 2}+{m_i'\over k_i+2}~, \hskip1cm (m_i'\equiv m_i-1)\\
&&{K_jn_j\over N_j}\, \Longrightarrow \,
{1\over 2}+{K_jn_j'\over N_j}~, \hskip1.5cm (n_j'\equiv n_j+1)~.
\label{U(1)-charge R sector}
\end{eqnarray}
At the same time the total $U(1)$-charge is shifted by $\hat{c}/2$.
Thus the condition for charge integrality in the R-sector becomes
\begin{equation}
\sum_i{m_i'\over k_i+2}+\sum_j{K_jn_j'\over N_j}\in \mbox{{\bf Z}}+{\gamma\over 2}~,
\label{recall}\end{equation}
where $\gamma$ is defined as
\begin{equation}
\gamma=\hat{c}-(N_M+N_L)~.
\label{gamma}
\end{equation}
As it turns out, our models have different characteristics
depending on whether the parameter $\gamma$ is even or odd.
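For orientation, the parity of $\gamma=\hat{c}-(N_M+N_L)$ for the models of Table 1 is easily tabulated (the $(\hat{c}, N_M, N_L)$ data are read off from the examples above):

```python
# Parity of gamma = c_hat - (N_M + N_L) for the models discussed above.
models = {
    "ALE(A_{n-1})":            (2, 1, 1),
    "CY3(A_{n-1})":            (3, 1, 1),
    "CY4(A_{n-1})":            (4, 1, 1),
    "ALE fibration over CP^1": (3, 1, 2),
    "CY3 fibration over CP^1": (4, 1, 2),
    "ALE fibration over CP^2": (4, 1, 3),
}
for name, (c_hat, N_M, N_L) in models.items():
    gamma = c_hat - (N_M + N_L)
    parity = "even" if gamma % 2 == 0 else "odd"
    print(f"{name:28s} gamma = {gamma} ({parity})")
```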
~
\noindent
{\bf A comment on the $\hat{c}=2$ case : }
Since we showed that the $(c,c)$ and $(a,a)$-type chiral primaries
exist in the $\hat{c}=2$ case, one may suppose that
the A-type compact branes also exist in the $\hat{c}=2$ theory.
However, this is not the case. In fact, the $(a,a)$-type
chiral primary states
\eqn{anti-chiral state} contain in each $M_{k_i}$-sector a state of the type
\begin{eqnarray}
(\ell_i,-\ell_i)_L \otimes (k_i-\ell_i,\ell_i-k_i)_R.
\end{eqnarray}
Thus we cannot define the A-type Ishibashi
state except for the special case $\ell_i=k_i/2$.
Therefore, generic $\hat{c}=2$ theories
do not have a sufficient number of
A-type Ishibashi states to construct compact A-branes.
Strictly speaking, there is an exception;
$L_{2,1}$ (with no $M_k$ factor), which
corresponds to the Eguchi-Hanson space, topologically equivalent to
$T^*S^2$. In this case we can construct the A-type boundary state
$\ket{B;O}_A$ for compact brane as well as the B-type $\ket{B;O}_B$,
both of which
are associated to the ${\cal N}=4$ massless character of $\ell=0$ \cite{ET}.
This fact seems to contradict the geometrical interpretation,
since only one cycle $\cong S^2$ exists in
the Eguchi-Hanson space. However, this apparent puzzle
is resolved by analyzing cylinder amplitudes.
The B-B and A-A overlaps become (in the NS sector)
\begin{eqnarray}
&& \hskip-2cm \braB{B;O} e^{-\pi T H^{(c)}} \ketB{B;O} =
\braA{B;O} e^{-\pi T H^{(c)}} \ketA{B;O} =
\ch{{\cal N}=4}{0}(\ell=0;it,0)~, ~(T\equiv 1/t)
\end{eqnarray}
as expected.
Here $\ch{{\cal N}=4}{0}(\ell=0;\tau,z)$ is the ${\cal N}=4$ massless character
of $\ell=0$ \cite{ET}.
On the other hand, the A-B overlap is evaluated as follows;
\begin{eqnarray}
\braA{B;O} e^{-\pi T H^{(c)}} \ketB{B;O} =
\chi_{(-,+)}(p=i/2;it) - \int_0^{\infty} dp\, \frac{2}{\cosh \pi p}\,
\chi_{(-,+)}(p; it)~, ~ (t \equiv 1/T)
\label{compact A-B}
\end{eqnarray}
where $\chi_{(-,+)}(p;\tau)$ is the twisted ${\cal N}=2$ character
defined in \eqn{twisted massive}. Twisted character appears
due to the difference in the boundary conditions.
We have used the fact that the absolute value squared of
boundary wave function becomes
\begin{eqnarray}
2 \sinh \pi p' \tanh \pi p' = 2\left(\cosh \pi p' -
\frac{1}{\cosh \pi p'}\right)~,
\end{eqnarray}
and also used a contour deformation technique, which yields the first
term in \eqn{compact A-B}.
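The hyperbolic identity used for the boundary wave function, $2\sinh \pi p \tanh \pi p = 2(\cosh \pi p - 1/\cosh \pi p)$, is elementary ($\sinh^2 = \cosh^2 - 1$) and easily confirmed numerically:

```python
# Numerical check of the hyperbolic identity used for the absolute value
# squared of the boundary wave function.
import math

for p in (0.1, 0.5, 1.3, 2.7):
    lhs = 2 * math.sinh(math.pi * p) * math.tanh(math.pi * p)
    rhs = 2 * (math.cosh(math.pi * p) - 1 / math.cosh(math.pi * p))
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
```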
Note that the second term in \eqn{compact A-B}
appears with a negative sign, which means that the open channel
amplitude includes negative norm states. This implies
that compact A and B-branes are mutually incompatible,
and we discard the A-brane as in the other cases.
In conclusion, our brane spectrum matches
the geometrical expectation also in this case.
~
\subsection{Cylinder Amplitudes}
~
Now, let us analyze cylinder amplitudes ending on the compact BPS branes
in various models.
We assume the parameter $\gamma$ \eqn{gamma}
to be an even integer $\gamma \in 2\mbox{{\bf Z}}$ for the time being.
The calculation
of various amplitudes simplifies under this assumption.
We start our analysis by working on the $\mbox{NS}$-sector amplitudes;
\begin{eqnarray}
&& \hspace{-1cm} Z^{(\msc{NS})} (\{L_i\}, M | \{L_i'\}, M') (it)
\equiv {}^{(\msc{NS})}\bra{B;\{L_i\},M} e^{-\pi T H^{(c)}}
\ket{B;\{L_i'\},M'}^{(\msc{NS})},~
(T\equiv 1/t).
\label{NS overlap}
\end{eqnarray}
The calculation is quite similar to the cylinder amplitudes for the
B-branes in the compact Gepner models (see {\em e.g.} \cite{RS,BDLR}).
The non-trivial point is the treatment of the projection operator
$P_{\msc{closed}}$.
The following formulas are useful in imposing
the second constraint \eqn{r constraint 2};
\begin{eqnarray}
&& \sum_{\stackrel{a\in \msc{{\bf Z}}_{2(k+2)}}{L+a \in 2\msc{{\bf Z}}}}\,
e^{-2\pi i \frac{am}{2(k+2)} }\, \ch{(\msc{NS})}{L,a}
\left(-\frac{1}{\tau}, \frac{z}{\tau}\right) \nonumber\\
&& \hspace{1cm}
= e^{i\pi \frac{k}{k+2}\frac{z^2}{\tau}}\sum_{\ell=0}^{k}
\sin\left(\frac{\pi(L+1)(\ell+1)}{k+2}\right) \,
\left\lbrack \ch{(\msc{NS})}{\ell,m}(\tau,z)
+ (-1)^L \ch{(\msc{NS})}{\ell,m+k+2}(\tau,z)\right\rbrack~.
\label{identity M k ch}
\end{eqnarray}
\begin{eqnarray}
&& \sum_{R\in \msc{{\bf Z}}_N}\, e^{2\pi i \frac{R K n}{N}} \,
\chi_0^{(\msc{NS})}\left(2KR;-\frac{1}{\tau}, \frac{z}{\tau}\right) \nonumber\\
&& \hspace{1cm}
= e^{i\pi \hat{c}_L \frac{z^2}{\tau}} \sqrt{\frac{N}{2K}}
\sum_{w\in \msc{{\bf Z}}_{2K}}\,
\left\lbrack
\int_0^{\infty}dp \, \frac{\sinh\left(\pi\sqrt{\frac{2K}{N}}p\right)
\sinh\left(\pi \sqrt\frac{2N}{K} p\right)}
{\left|\cosh \pi \left(\sqrt{\frac{N}{2K}}p
+i \frac{Kn+Nw}{2K} \right)\right|^2}
{\chi_{\msc{\bf c}}}^{(\msc{NS})}(p,Kn+Nw;\tau,z)
\right. \nonumber\\
&& \hspace{1cm}
\left. + \sum_{s=K+1}^{N+K-1}\, 2 \sin \left(\frac{\pi(s-K)}{N}\right)\,
{\chi_{\msc{\bf d}}}^{(\msc{NS})}(s,Kn+Nw;\tau,z)\right\rbrack~, ~(\hat{c}_L= 1+\frac{2K}{N})~.
\label{identity L ch}
\end{eqnarray}
We obtain (up to an overall normalization)
\begin{eqnarray}
&&Z^{(\msc{NS})}(\{L_i\}, M | \{L_i'\}, M') (it)
\propto \sum_{\ell_i} \,
\sum_{\stackrel{a_i \in \msc{{\bf Z}}_{2(k_i+2)}}{a_i\equiv \ell_i \,
(\msc{mod}\, 2)}}\,
\sum_{a_j'\in \msc{{\bf Z}}_{N_j}} \, \sum_{a \in \msc{{\bf Z}}_N}\,
\sum_{r\in \msc{{\bf Z}}_{2N}}\, \frac{1}{N} \cdot
\frac{1}{2N} \nonumber\\
&& \hspace{2cm}\times \exp \left\lbrack
2\pi i \frac{r}{2N} \left\{
M'-M + \sum_i \mu_i a_i +2\sum_j \nu_j K_j a_j'
\right\}
\right\rbrack \nonumber\\
&& \hspace{2cm} \times \prod_i \prod_j \, N_{L_i,L_i'}^{\ell_i}\,
\ch{(\msc{NS})}{\ell_i,a_i+2a}(it,0)\,
\chi_0^{(\msc{NS})}(2K_j(a_j'-a);it,0) \nonumber\\
&& = \sum_{\ell_i} \,
\sum_{\stackrel{a_i \in \msc{{\bf Z}}_{2(k_i+2)}}{a_i\equiv \ell_i \,
(\msc{mod}\, 2)}}\,
\sum_{a_j'\in \msc{{\bf Z}}_{N_j}} \, \sum_{a \in \msc{{\bf Z}}_N}\, \frac{1}{N} \,
\delta^{(2N)}\left(M'-M + \sum_i \mu_i a_i +2\sum_j \nu_j K_j a_j'
\right)\nonumber\\
&& \hspace{2cm} \times \prod_i \prod_j \,N_{L_i,L_i'}^{\ell_i}\,
\ch{(\msc{NS})}{\ell_i,a_i+2a}(it,0)\,
\chi_0^{(\msc{NS})}(2K_j(a_j'-a);it,0)~.
\label{cylinder amplitude 1}
\end{eqnarray}
Here $N_{L_i,L_i'}^{\ell_i}$ are the fusion coefficients of
$SU(2)_{k_i}$;
\begin{eqnarray}
N_{L_i,L_i'}^{\ell_i}=
\left\{
\begin{array}{ll}
1& ~~ |L_i-L_i'|\leq \ell_i \leq \min\lbrack L_i+L_i',\,
2k_i-L_i-L_i'\rbrack~ \mbox{and }
\ell_i \equiv |L_i-L_i'|~ (\mbox{mod}\, 2)\\
0& ~~ \mbox{otherwise}
\end{array}
\right. ~~.
\end{eqnarray}
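The selection rules above translate directly into a small helper (an illustrative implementation of the $SU(2)_k$ fusion coefficients):

```python
# SU(2)_k fusion coefficients N_{L,L'}^{ell}, implemented from the
# selection rules quoted above (0 <= L, L', ell <= k).

def fusion_su2k(k, L, Lp, ell):
    in_range = abs(L - Lp) <= ell <= min(L + Lp, 2 * k - L - Lp)
    parity = (ell - abs(L - Lp)) % 2 == 0
    return 1 if (in_range and parity) else 0

# e.g. at level k = 4, the product 2 x 2 contains ell = 0, 2, 4:
assert [ell for ell in range(5) if fusion_su2k(4, 2, 2, ell)] == [0, 2, 4]
```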
The integrality of total $U(1)$-charge is ensured by the $a$-summation,
while the $a_i$ and $a_j'$ summations impose the constraint
\eqn{r constraint 2} via the relations \eqn{identity M k ch},
\eqn{identity L ch}. Thanks to our assumption
\begin{eqnarray}
2\sum_i \mu_i - 2\sum_j \nu_jK_j = -N\gamma \in 2N \mbox{{\bf Z}}~,
\end{eqnarray}
we may make the shifts $a_i\,\rightarrow\, a_i+2a$,
$a_j'\,\rightarrow\,a_j'-a$ in the factor $\delta^{(2N)}(\cdots)$.
The $a$-summation then decouples and we obtain a simpler form
of the amplitude
\begin{eqnarray}
&& Z^{(\msc{NS})}(\{L_i\}, M | \{L_i'\}, M') (it)
\propto \sum_{\ell_i} \,
\sum_{\stackrel{a_i \in \msc{{\bf Z}}_{2(k_i+2)}}{a_i\equiv \ell_i \,
(\msc{mod}\, 2)}}\,
\sum_{a_j'\in \msc{{\bf Z}}_{N_j}} \,
\delta^{(2N)}\left(M'-M + \sum_i \mu_i a_i +2\sum_j \nu_j K_j a_j'
\right) \nonumber\\
&& \hspace{1cm}
\times \, \prod_i \prod_j \,N_{L_i,L_i'}^{\ell_i}\,
\ch{(\msc{NS})}{\ell_i,a_i}(it,0)\,
\chi_0^{(\msc{NS})}(2K_ja_j';it,0)~.
\label{cylinder amplitude 2}
\end{eqnarray}
In the cases $\gamma \in 2\mbox{{\bf Z}}+1$ the $a$-summation does not decouple
in general and the calculation becomes more involved.
We next consider the open string Witten indices defined by
\begin{eqnarray}
&& I(\{L_i\},M|\{L_i'\},M') = {}^{(\msc{R})}\bra{B;\{L_i\},M}
e^{-\pi T H^{(c)}} e^{-i\pi J_0}
\ket{B;\{L_i'\},M'}^{(\msc{R})} ~.
\label{osWI}
\end{eqnarray}
Under the assumption $\gamma \in 2\mbox{{\bf Z}}$,
the $U(1)$-charge condition for R-sector
has the same form as \eqn{U(1) integrality}.
In order to evaluate the amplitudes
we only have to replace the NS-characters
$\ch{(\msc{NS})}{*,*}(it,0)$, $\chi_0^{(\msc{NS})}(*;it,0)$
by the $\widetilde{\mbox{R}}$-characters, that is,
the Witten indices (up to signs)
\begin{eqnarray}
&& \ch{(\widetilde{\msc{R}})}{\ell_i,m_i}(it;0) = \delta^{(2(k_i+2))}(m_i-(\ell_i+1))
- \delta^{(2(k_i+2))}(m_i+(\ell_i+1))~,~\nonumber\\
&&
\chi_0^{(\widetilde{\msc{R}})}(2K_jr_j;it,0) = \delta^{(N_j)}\left(r_j-\frac{1}{2}\right)
- \delta^{(N_j)}\left(r_j+\frac{1}{2}\right)~,
\end{eqnarray}
and also replace the summation $\displaystyle \sum_{a\in \msc{{\bf Z}}_N}*$ with
$\displaystyle \sum_{a \in \frac{1}{2}+\msc{{\bf Z}}_N}*$ because of
the insertion of $e^{-i\pi J_0}$.
We finally obtain
\begin{eqnarray}
&& \hspace{-5mm}
I(\{L_i\}, M | \{L_i'\}, M') \propto
\sum_{\ell_i}\, \sum_{\alpha_i=\pm 1} \,
\sum_{\beta_j=\pm 1} \, \delta^{(2N)}\left(
M'-M + \sum_i \mu_i \alpha_i(\ell_i+1) + \sum_j \nu_j K_j \beta_j
+ \frac{N}{2}\gamma \right) \nonumber\\
&& \hspace{5cm} \times \prod_i \prod_j \, N_{L_i,L_i'}^{\ell_i}\,
\mbox{sgn}(\alpha_i) \mbox{sgn}(\beta_j)~.
\label{osWI2}
\end{eqnarray}
The sector in which all of $L_i$ and $L_i'$ equal 0 plays the basic role.
Following \cite{BDLR},
$\lbrack I_0\rbrack_{M,M'} \equiv I(\{L_i=0\},M| \{L_i'=0\},M')$ can be
concisely expressed by introducing
a cyclic operator $g$ defined by the action
\begin{eqnarray}
g~:~M ~\longmapsto~ M+2~,
\label{operator g}
\end{eqnarray}
which satisfies $g^N=1$.
Then we obtain from \eqn{osWI2}
\begin{eqnarray}
I_0 &=& \prod_i \prod_j\, (g^{\frac{\mu_i}{2}}-g^{-\frac{\mu_i}{2}} )\,
(g^{\frac{\nu_j K_j}{2}} - g^{-\frac{\nu_j K_j}{2}}) \, g^{\frac{N}{4}\gamma}
\nonumber\\
&\propto & \prod_i \prod_j\, (1-g^{-\mu_i})\, (1-g^{\nu_j K_j})~,~~~
(\mbox{up to overall sign})~.
\label{osWI3}
\end{eqnarray}
In this way we can derive
the formula conjectured in \cite{Lerche}, which generalizes
the one for the B-branes in the compact Gepner models \cite{RS,BDLR}.
We here emphasize that the formula \eqn{osWI3} is correct only in the case
with even $\gamma$. It is easy to check that, under the assumption
$\gamma \in 2\mbox{{\bf Z}}$, the formula \eqn{osWI3}
has the correct symmetry, namely\footnote
{One way to confirm the result \eqn{symmetry I 0}
is to take the T-dual so that $L_{N_j,K_j}$-sectors
are realized as ${\cal N}=2$ Liouville theories.
There, the compact branes are A-branes corresponding to
middle dimensional cycles.},
\begin{eqnarray}
&& I_0^t = I_0~, ~~~\mbox{for}~ \hat{c}=2,4~, \nonumber\\
&& I_0^t = -I_0 ~, ~~~\mbox{for}~ \hat{c}=3~.
\label{symmetry I 0}
\end{eqnarray}
Let us next discuss the odd $\gamma$ cases that
are somewhat more complicated.
We here present two examples.
~
\noindent
{\bf 1. $CY_3$ ($A_{n-1}$) : }
In this case we have $\gamma=3-2=1$.
As was already discussed,
we adopt half-integral values of the winding number
\begin{eqnarray}
w_1 \in \frac{1}{2}\mbox{{\bf Z}}_{4(n+2)}~, ~~~ n_1+2w_1 \in 2\mbox{{\bf Z}}~.
\label{w half-integer}
\end{eqnarray}
It is convenient to parameterize
\begin{eqnarray}
&& k_1+2=N_1=N=n~, ~ K_1=\frac{n+2}{2}~, ~~~ (\mbox{for even $n$})~, \nonumber\\
&& k_1+2=n~,~ N_1=N=2n~,~ K_1=n+2~, ~~~ (\mbox{for odd $n$})~,
\end{eqnarray}
so that $K_1$ and $N_1/2$ (not $N_1$) are relatively prime.
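The coprimality claim behind this parameterization is easy to verify: for even $n$, $\gcd((n+2)/2,\,n/2)=\gcd(1,\,n/2)=1$, and for odd $n$, $\gcd(n+2,\,n)=\gcd(2,\,n)=1$. A quick check (the range of $n$ is illustrative only):

```python
from math import gcd

# Check the coprimality claim for both parities of n
for n in range(2, 200):
    if n % 2 == 0:
        K1, N1 = (n + 2) // 2, n         # even n: k_1+2 = N_1 = N = n
    else:
        K1, N1 = n + 2, 2 * n            # odd n: N_1 = N = 2n, K_1 = n+2
    assert gcd(K1, N1 // 2) == 1         # K_1 and N_1/2 are relatively prime
```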
We first consider the even $n$ case.
An important difference from the previous analysis is in
the $U(1)$-charge condition for R-sector;
\begin{eqnarray}
\frac{m}{n}+ \frac{(n+2)n_1+2nw_1}{2n} \in \mbox{{\bf Z}} + \frac{\gamma}{2} =
\mbox{{\bf Z}}+ \frac{1}{2}~,
\end{eqnarray}
which leads to an extra insertion of $e^{i\pi a}$ in the amplitude.
Secondly, the ``$\displaystyle w\in \frac{1}{2}\mbox{{\bf Z}}$-rule'' \eqn{w half-integer}
gives rise to a replacement $a_1'\,\rightarrow\,2a_1'$.
We thus obtain
\begin{eqnarray}
&& \hspace{-1cm} I (L, M | L', M')
\propto \sum_{\ell} \,
\sum_{\stackrel{a_1 \in \msc{{\bf Z}}_{2n}}{a_1\equiv \ell \,
(\msc{mod}\, 2)}}\,
\sum_{a_1'\in \msc{{\bf Z}}_{n}} \, \sum_{a \in \msc{{\bf Z}}_n}\,
\sum_{r\in \msc{{\bf Z}}_{2n}}\, \frac{1}{n} \cdot
\frac{1}{2n}
\, \exp \left\lbrack
2\pi i \frac{r}{2n} \left\{
M'-M + a_1 +(n+2) 2a_1'
\right\} \right\rbrack \nonumber\\
&& \hspace{3cm} \times e^{i\pi a}\, N_{L,L'}^{\ell}\,
\ch{(\widetilde{\msc{R}})}{\ell,a_1+2a}(it,0)\,
\chi_0^{(\widetilde{\msc{R}})}((n+2)(2a_1'-a);it,0) ~.
\label{osWI CY3 1}
\end{eqnarray}
Since only the terms with $\displaystyle
2a_1'-a = \pm \frac{1}{2}$ contribute,
we may replace the factor $e^{i\pi a}$ with
$e^{-i\frac{\pi}{2} \beta}$ ($\displaystyle \frac{\beta}{2}\equiv 2a_1'-a$,
$\beta=\pm 1$),
which cancels out the factor $\mbox{sgn}(\beta)$ in \eqn{osWI2}.
The summation over $a$ is again decoupled.
We finally obtain the formula
\begin{eqnarray}
&& I(L, M | L', M') \propto
\sum_{\ell}\, \sum_{\alpha=\pm 1} \,
\sum_{\beta=\pm 1} \, \delta^{(2n)}\left(
M'-M + \alpha(\ell+1) + \beta \right) \, N_{L,L'}^{\ell}\,
\mbox{sgn}(\alpha) ~.
\label{osWI CY3 2}
\end{eqnarray}
In the $L=0$ sector,
we have
\begin{eqnarray}
I_0\propto (1-g^{-1})(1+g) = g-g^{-1}~,
\label{osWI CY3 3}
\end{eqnarray}
which is anti-symmetric, as expected.
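As a consistency check, one can evaluate \eqn{osWI CY3 2} directly in the $L=L'=0$ sector and compare with \eqn{osWI CY3 3}. Representing $g^k$ by the matrix $\delta^{(2n)}(M'-M+2k)$ on the even labels $M$, the $\alpha,\beta$-sum indeed reproduces $g-g^{-1}$ (the value $n=6$ below is an arbitrary illustration):

```python
import numpy as np

n = 6                                    # illustrative rank

def delta(x, mod):                       # delta^{(mod)}(x)
    return 1 if x % mod == 0 else 0

# even labels M = 0, 2, ..., 2n-2; [g^k] represented as delta^{(2n)}(M'-M+2k)
G = np.array([[delta(2*j - 2*i + 2, 2*n) for j in range(n)] for i in range(n)])

# L = L' = 0 sector of (osWI CY3 2): ell = 0, N_{0,0}^0 = 1
I0 = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(n):
        for alpha in (1, -1):
            for beta in (1, -1):
                I0[i, j] += alpha * delta(2*j - 2*i + alpha + beta, 2*n)

assert (I0 == G - G.T).all()             # I0 = g - g^{-1}, as in (osWI CY3 3)
assert (I0.T == -I0).all()               # anti-symmetric, as expected
```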
In the odd $n$ case the story becomes more complicated.
Since the label of the Cardy states $R_1$ in $L'_{2n,n+2}$-sector
runs over the range $R_1 \in \mbox{{\bf Z}}_{2n}$, it appears that
there exist twice as many BPS branes.
However, the proper Cardy states in $L'_{2n,n+2}$
have to be compatible with \eqn{w half-integer} and are given as
$
\ket{R_1}^{(\msc{R})} + \ket{R_1+n}^{(\msc{R})}.
$
We thus still have the same number of compact branes,
and one may restrict the label $M$ to the range $M\in \mbox{{\bf Z}}_{4n} \cap 2\mbox{{\bf Z}}$.
Rewriting $M/2$ as $M$, we obtain the same formula for the Witten indices
\eqn{osWI CY3 2} (and \eqn{osWI CY3 3}).
~
\noindent
{\bf 2. $\hat{c}=4$, $M_{n-2}\otimes L'_{4n,n+2} \otimes L'_{4n,n+2}$ : }
This model has been identified as the $CY_3(A_{n-1})$-fibration over
$\mbox{{\bf C}} P^1$ and we have $\gamma = 4-3=1$.
We should again apply the
``$\displaystyle w\in \frac{1}{4}\mbox{{\bf Z}}$ rule'' \eqn{m CY3 fiber}
to the $L'_{4n,n+2}$-sectors, and make the replacements
$a_i'\,\rightarrow\, 4a_i'$.
A similar calculation leads to
\begin{eqnarray}
&& I(L, M | L', M') \propto
\sum_{\ell}\, \sum_{\alpha=\pm 1} \,
\sum_{\beta_1=\pm 1}\sum_{\beta_2=\pm 1}
\nonumber\\
&& \hspace{2cm} \times
\delta^{(4n)}\left(
M'-M + 2\alpha(\ell+1) + \beta_1 + \beta_2\right) \, N_{L,L'}^{\ell}\,
\mbox{sgn}(\alpha) \mbox{sgn}(\beta_1)~,
\label{osWI CY4 1}
\end{eqnarray}
and also in the $L=0$ sector,
\begin{eqnarray}
I_0\propto (1-g^{-2})(1-g)(1+g) = 2-g^2-g^{-2}~.
\label{osWI CY4 2}
\end{eqnarray}
Note that the factor $\mbox{sgn}(\beta_2)$ was canceled
by $e^{i\pi a}$ in the same way as the first example,
yielding a contribution $1+g$ in place of $1-g$
in \eqn{osWI CY4 2}.
This fact makes \eqn{osWI CY4 2} symmetric, as it should be.
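A direct evaluation of \eqn{osWI CY4 1} at $L=L'=0$ confirms this: the resulting matrix is proportional to $2-g^2-g^{-2}$ and manifestly symmetric (here with $n=5$ as an arbitrary illustration; the overall sign is irrelevant because of the proportionality):

```python
import numpy as np

n = 5                                    # illustrative
m = 2 * n                                # even labels M = 0, 2, ..., 4n-2

def delta(x, mod):                       # delta^{(mod)}(x)
    return 1 if x % mod == 0 else 0

# [g^k] represented as delta^{(4n)}(M'-M+2k) on the even labels M
G = np.array([[delta(2*j - 2*i + 2, 4*n) for j in range(m)] for i in range(m)])

# L = L' = 0 sector of (osWI CY4 1): ell = 0, N_{0,0}^0 = 1
I0 = np.zeros((m, m), dtype=int)
for i in range(m):
    for j in range(m):
        for a in (1, -1):
            for b1 in (1, -1):
                for b2 in (1, -1):
                    I0[i, j] += a * b1 * delta(2*j - 2*i + 2*a + b1 + b2, 4*n)

E = np.eye(m, dtype=int)
G2 = np.linalg.matrix_power(G, 2)
assert (I0 == -(2*E - G2 - G2.T)).all()  # proportional to (osWI CY4 2)
assert (I0.T == I0).all()                # symmetric, as it should be
```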
~
\subsection{Comments on Non-compact BPS Branes}
~
We can similarly investigate aspects of non-compact BPS branes.
The analysis is straightforward but more cumbersome technically.
We thus restrict ourselves to making some comments about non-trivial points.
We shall concentrate on the non-compact branes associated to
the massive representations in each of $L_{N_j,K_j}$-sectors
(`class 2' in the classification in \cite{ES-L}),
and focus on the NS-sector.
The first fact which is in contrast to the case of compact branes is
that {\em both} the A and B-type boundary conditions are possible.
This is because the $U(1)$-charges of massive representations
are uncorrelated with their conformal weights.
The desired Cardy states describing non-compact branes
are now constructed in a similar manner
to \eqn{Cardy ncg}: all we have to do is to take
the products of Cardy states in each sector and to let the
projection $P_{\msc{closed}}$ act on them.
The Cardy states in each of the $M_{k_i}$-sectors
are given as in \eqn{minimal Cardy states}
(here we specify the A and B-type
boundary conditions by subscripts);
\begin{eqnarray}
&&\ket{L_i,M_i}_A = \sum_{\ell_i} \sum_{m_i} \, e^{i\pi \frac{\ell_i}{2}}
C_{L_i,M_i}(\ell_i,m_i)
\, \dket{\ell_i,m_i}_A ~, \label{minimal Cardy A} \\
&&\ket{L_i,M_i}_B = \sum_{\ell_i} \sum_{m_i} \,
C_{L_i,M_i}(\ell_i,m_i)
\, \dket{\ell_i,m_i}_B ~. \label{minimal Cardy B}
\end{eqnarray}
Note that the A-B overlap of Ishibashi states yields
the twisted minimal characters (see Appendix D), and
the extra phase factor $e^{i\pi \frac{\ell_i}{2}}$
in \eqn{minimal Cardy A} is necessary for the consistency with
the modular bootstrap. (See the modular transformation formulas
\eqn{modular twisted minimal}, and
recall the fact that the identity brane $\ket{B;O}$ is a B-brane.)
The Cardy states in the $L_{N_j,K_j}$-sectors are more non-trivial.
Since the branes are non-compact, we should allow the continuous
spectrum of $U(1)$-charge in the open channel, while that in the closed
channel should still be discrete. We thus have to
introduce a one-parameter deformation of the extended massive
characters \eqn{extended massive w}
\begin{eqnarray}
&& \hspace{-2cm}
{\chi_{\msc{\bf c}}}^{(\msc{NS})}(p,m;\tau,z,w) = q^{\frac{p^2}{2}} \Th{m}{NK}
\left(\tau,\frac{2z}{N}- \frac{w}{NK}\right) \,
\frac{{\theta}_{3}(\tau,z)}{\eta(\tau)^3}~, ~~~ (w \in \mbox{{\bf R}}~, ~~0\leq w < 2NK)~,
\end{eqnarray}
and the associated Ishibashi states $\dket{p,m,\alpha}_A$,
$\dket{p,m,\alpha}_B$ ($\alpha \in \mbox{{\bf R}}$, $0\leq \alpha < 2NK$)
by the relations;
\begin{eqnarray}
\hspace{-1cm}
{}_A\dbra{p,m,\alpha} e^{-\pi T H^{(c)}} e^{2 \pi i z J_0 } \dket{p',m',\alpha'}_A
&=& {}_B\dbra{p,m,\alpha} e^{-\pi T H^{(c)}} e^{2 \pi i z J_0 }
\dket{p',m',\alpha'}_B
\nonumber\\
&=& \delta(p-p') \delta_{m,m'}^{(2NK)} {\chi_{\msc{\bf c}}}^{(\msc{NS})}(p,m;\tau,z,\alpha'-\alpha)
~, ~~~(p,p'>0)
\nonumber\\
\hspace{-1cm}
{}_A\dbra{p,m,\alpha} e^{-\pi T H^{(c)}} e^{2\pi i z J_0 } \dket{p',m',\alpha'}_B
&=& {}_B\dbra{p,m,\alpha} e^{-\pi T H^{(c)}} e^{2 \pi i z J_0 }
\dket{p',m',\alpha'}_A
\nonumber\\
&=& \delta(p-p') \delta_{m,0}^{(2NK)} \delta_{m',0}^{(2NK)}\,
\chi_{(+,-)}(p;iT)~,
~~~(p,p'>0)
\end{eqnarray}
where $\chi_{(+,-)}(p;\tau)$ is the twisted massive character defined in
\eqn{twisted massive}.
By these definitions
the Ishibashi states $\dket{p,m,\alpha}_A$, $\dket{p,m,\alpha}_B$ are explicitly
constructed as the spectral flow sums of irreducible Ishibashi states, and
the parameter $\alpha$ expresses the relative phase attached to each
irreducible one.
The desired pieces of A and B-type Cardy states in the $L_{N_j,K_j}$-sector
are now given as follows \cite{RibS,ES-L,ASY,IPT2,FNP};
they have different boundary wave functions:
\begin{eqnarray}
&& \ket{P_j,\alpha_j}_B = \sqrt{\frac{2}{N_jK_j}} \int_0^{\infty}dp\, \sum_m\,
\cos(2\pi P_j p) f(p,m)\, \dket{p,m,\alpha_j}_B~,
\label{Cardy massive B} \\
&& \ket{P_j,\alpha_j}_A = \frac{1}{\sqrt{2N_jK_j}} \int_0^{\infty}dp\, \sum_m\,
\left(e^{2\pi iP_jp}+ (-1)^m e^{-2\pi i P_j p}\right)
f(p,m)\, \dket{p,m,\alpha_j}_A~,
\label{Cardy massive A}
\end{eqnarray}
where we set
\begin{eqnarray}
f(p,m) \equiv \frac{1}{\Psi_O^{(\msc{NS})}(-p,m)}
= \left(\frac{N^3}{2^3 K}\right)^{\frac{1}{4}}\,
\frac
{\Gamma\left(-i\sqrt{\frac{2K}{N}}p\right)
\Gamma\left(1-i \sqrt{\frac{2N}{K}}p\right)}
{\Gamma\left(\frac{1}{2}+\frac{m}{2K}-i \sqrt{\frac{N}{2K}}p\right)
\Gamma\left(\frac{1}{2}-\frac{m}{2K}-i \sqrt{\frac{N}{2K}}p\right)
}~.
\label{f p m}
\end{eqnarray}
~
We present a few comments:
~
\noindent
{\bf 1. }
The B-type Cardy state \eqn{Cardy massive B} is determined
from the modular bootstrap, while the A-type \eqn{Cardy massive A} is not.
This is because the identity brane is now a B-brane and hence
the modular bootstrap is not powerful enough to determine the A-type Cardy
state. Nevertheless, \eqn{Cardy massive A}, which is called `class $2'$'
in \cite{FNP}, has been constructed in
\cite{RibS,IPT2} as the `descent' of the $AdS_2$-brane \cite{BP}
in the Euclidean $AdS_3$ \cite{PST}.\footnote
{In the recent paper \cite{Hosomichi} the boundary wave function
of A-type \eqn{Cardy massive A}
has been also derived by the boundary bootstrap
approach based on the perturbative analysis in the dual
${\cal N}=2$ Liouville theory.}
One can check that the difference of coefficients
in \eqn{Cardy massive B} and \eqn{Cardy massive A}
is consistent with reflection amplitudes of
the $SL(2;\mbox{{\bf R}})/U(1)$-coset model \cite{Teschner-reflection,GK}.
In the cigar models \eqn{Cardy massive B} would correspond to non-compact
D2-branes (partially wrapping the cigar) \cite{FNP},
while \eqn{Cardy massive A} would correspond to non-compact D1-branes
\cite{RibS,IPT2}.
The continuous parameter $\alpha_j$ would express the angular positions
and Wilson lines of D1, D2-branes respectively.
However, one should keep in mind that the backgrounds here have
different geometries from the cigar due to the orbifolding procedure,
and thus the classical DBI analysis on the cigar as in \cite{RibS}
cannot be simply applied to our case.
~
\noindent
{\bf 2. }
Computations of cylinder amplitudes are straightforward but
more complicated than the compact brane case.
In the simplest example $M_{N-2}\otimes L_{N,1}$
some analysis has been already done in \cite{NST}\footnote
{The analysis in \cite{NST} corresponds to the case with discrete $\alpha$.}.
It is important that all the open string amplitudes
appearing in the A-A or B-B type overlaps are expanded
by the products of minimal characters and extended massive
characters with the {\em continuous\/} $U(1)$-charges
${\chi_{\msc{\bf c}}}^{(*)}(p_j,\omega_j;it,0)$, $\omega_j \in \mbox{{\bf R}}$, $0\leq \omega_j < 2N_jK_j$.
(See the modular transformation formula \eqn{S cont 2}.)
For the B-branes
the projection $P_{\msc{closed}}$ acts in the same way as
for the compact branes, namely it imposes \eqn{U(1) integrality} and
\eqn{r constraint 2}, and hence the B-branes
can depend only on one parameter along the $U(1)$-direction.
As for the A-branes, on the other hand, $P_{\msc{closed}}$
only imposes the $U(1)$-charge integrality \eqn{U(1) integrality}
and not the second constraint \eqn{r constraint 2}.
If $K_j >1$, $P_{\msc{closed}}$ further gives rise to the summation
over the shifts $\omega_j\, \rightarrow\, \omega_j +2N_j l_j$
($l_j \in \mbox{{\bf Z}}_{K_j}$) in the open channel character
${\chi_{\msc{\bf c}}}^{(*)}(p_j,\omega_j;it,0)$, since the A-type Ishibashi states
$\dketc{p_j,m_j}_A$ exist only for $m_j= K_j n_j$ ($n_j \in
\mbox{{\bf Z}}_{2N_j}$).
The A-B (or B-A)-type overlaps are expressed by the products of
the twisted ${\cal N}=2$ characters $\chi_{(-,+)}(p;it)$ \eqn{twisted
massive} and $\chi_{L\,(-,+)}(it)$ \eqn{twisted minimal}.
We do not have spectral flow sums in the open channel in this case,
since only the $U(1)$-neutral states contribute to the twisted characters.
The Cardy condition is expected to be satisfied with suitable
spectral densities (after subtracting the IR divergences)
in all these cases.
~
\noindent
{\bf 3. }
We still have a possibility to construct other types of
non-compact BPS branes based on the `class 3' boundary states
\cite{ES-L},
which are associated with the massless matter representations
in the $L_{N_j,K_j}$-sectors.
In the cigar models
it has been pointed out \cite{Eguchi-strings04,FNP} that
they could describe the D2-branes found in \cite{RibS,IPT2}
covering the whole cigar.
However, as is suggested from a detailed analysis
of cylinder amplitudes performed in \cite{FNP},
it seems difficult for these class 3 branes to satisfy the Cardy condition,
as far as we insist on {\em unitary representations}
in the open string channel.
We leave this subtle problem to future works.
~
\subsection{Non-BPS Branes}
~
To close this section we discuss the Cardy states for non-BPS D-branes
that exhibit some interesting properties.
To avoid unessential complexity we shall concentrate on
the simplest case of $N$ NS5 branes (or $ALE(A_{N-1})$) described by
$M_{N-2} \otimes L_{N,1}$,
and only consider the compact branes. Extensions to
more general models and the cases of non-compact branes
should be straightforward.
The simplest non-BPS branes are of course obtained in the same way as
in the flat backgrounds (see {\em e.g.} \cite{Sen-review}),
that is, by projecting out the RR-components of boundary states and
by multiplying the remaining NSNS-components by $\sqrt{2}$.
These branes are constructed using the descent
relation and the $\mbox{{\bf Z}}_2$-orbifolding
acting by the space-time fermion number.
Since we now possess the $\mbox{{\bf Z}}_N$-symmetries in the $U(1)$-charge
sector, we may construct more non-trivial non-BPS branes
by using
the $\mbox{{\bf Z}}_N$-orbifolding procedure.
This type of non-BPS branes may be
regarded as the natural extension of the ``unstable B-branes'' in
the $SU(2)$-WZW model presented in \cite{MMS}.
The basic prescription for their construction is summarized as follows
(we focus on the NSNS-sector for the time being);
\begin{enumerate}
\item Start with the compact BPS brane \eqn{Cardy ncg}
\begin{eqnarray}
&& \ket{B;L,M}_B = {\cal N} P_{\msc{closed}} \left\lbrack
\ket{L,M}_{M_{N-2}} \otimes \ket{R=0}_{L_{N,1}}\right\rbrack~, ~~~\nonumber\\
&& \hspace{3cm}
(L+M\in 2\mbox{{\bf Z}})~,
\label{compact BPS NS5}
\end{eqnarray}
which includes the B-type Cardy states for both sectors.
Consider the ``wrong-dimensional'' BPS branes $\ket{B;L,M}'_{AB}$,
which are defined by formally reversing the boundary condition
in the $M_{N-2}$-sector of \eqn{compact BPS NS5}.
The subscript $AB$ indicates that we take the A and B-type
boundary conditions for the $M_{N-2}$
and $L_{N,1}$-sectors, respectively.
\item Sum up $\ket{B;L,M}'_{AB}$ over the spectral flows in the
open string channel, namely, we define
\begin{eqnarray}
&& \ket{B;L}_{AB} = \sum_{r\in \msc{{\bf Z}}_N}\, \ket{B;L,L+2r}'_{AB}~,
\label{nbps compact}
\end{eqnarray}
which should be the desired boundary states of non-BPS branes.
\end{enumerate}
It is important to note that, although the `wrong BPS brane'
$\ket{B;L,M}'_{AB}$ is not compatible with the charge-integrality condition,
\eqn{nbps compact}
is consistent because the sum over $r\in \mbox{{\bf Z}}_N$ projects out
states with fractional $U(1)$-charges. They actually consist of
Ishibashi states with integral $U(1)$-charges {\em separately\/}
in each sector, $M_{N-2}$ and $L_{N,1}$.
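The projection mechanism here is simply the orthogonality of characters of $\mbox{{\bf Z}}_N$: summing the phases $e^{2\pi i r m/N}$ over $r \in \mbox{{\bf Z}}_N$ vanishes unless the fractional charge $m/N$ is an integer. A numerical sketch (the value of $N$ is arbitrary):

```python
import cmath

N = 7                                    # illustrative
for m in range(-2 * N, 2 * N + 1):       # m/N: fractional part of the U(1)-charge
    s = sum(cmath.exp(2j * cmath.pi * r * m / N) for r in range(N))
    if m % N == 0:
        assert abs(s - N) < 1e-9         # integral charge: all N terms add up
    else:
        assert abs(s) < 1e-9             # fractional charge: projected out
```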
They do not satisfy the ${\cal N}=2$ boundary conditions \eqn{A-type}
or \eqn{B-type},
and at most preserve the ${\cal N}=1$ superconformal symmetry.
The modular bootstrap relation characterizing
this brane is written as
\begin{eqnarray}
&& {}_B \bra{B;O} e^{-\pi T H^{(c)}} \ket{B;L}_{AB}
= \chi_{L\,(-+)}(it) \, \widehat{\chi_{\msc{\bf G}}}(it,0)~,
\label{mb nbps 1}
\end{eqnarray}
which fixes the overall normalizations in \eqn{nbps compact}.
Here $\chi_{L\,(-+)}(\tau)$ is the twisted minimal character
in $M_{N-2}$-sector defined in \eqn{twisted minimal}, and
we have introduced a function
\begin{eqnarray}
&& \hspace{-5mm}
\widehat{\chi_{\msc{\bf G}}}(\tau,z) \equiv \sum_{r\in \msc{{\bf Z}}_{N}}\, \chi_{\msc{\bf G}}(2r;\tau,z)
= q^{-\frac{1}{4N}}\sum_{n\in \msc{{\bf Z}}}\,
\frac{(1-q)q^{\frac{n^2}{N}+n-\frac{1}{2}}
e^{2\pi i z \left(\frac{2}{N}n+1\right)}}
{\left(1+e^{2\pi i z}q^{n+\frac{1}{2}}\right)
\left(1+e^{2\pi i z}q^{n-\frac{1}{2}}\right)}\,
\frac{{\theta}_3(\tau,z)}{\eta(\tau)^3}~,
\label{chig hat}
\end{eqnarray}
which is
the sum of the irreducible graviton character
over all spectral flows.
Note that \eqn{nbps compact}
is naturally regarded as the $\mbox{{\bf Z}}_N$-extension of the non-BPS branes in
the flat background \cite{Sen-review};
\begin{eqnarray}
\ket{Dp}_{\msc{non-BPS}} = \frac{1}{\sqrt{2}} \left(\ket{Dp}'+
\overline{\ket{Dp}}'\right)~,
\label{nbps brane flat}
\end{eqnarray}
where $\ket{Dp}'$ ($\overline{\ket{Dp}}'$) expresses the boundary state
for the `wrong-dimensional' BPS (anti-)$Dp$-brane.
Since each term on the R.H.S. is not compatible with the GSO condition,
the boundary state $\ket{Dp}_{\msc{non-BPS}}$ cannot be decomposed into
constituent branes. In the same sense our non-BPS brane
\eqn{nbps compact}
is irreducible and not decomposable to constituent
boundary states.
It is straightforward to work out cylinder amplitudes
and one can show that the Cardy condition is always satisfied
for the non-BPS brane \eqn{nbps compact}.
Namely, all the overlaps are interpreted as
correct open string one-loop amplitudes.
The overlaps with compact BPS branes are easy to evaluate
in a similar manner to \eqn{mb nbps 1}, and we also find
\begin{eqnarray}
{}_{AB}\bra{B;L_1} e^{-\pi T H^{(c)}} \ket{B;L_2}_{AB}
= \sum_L N^L_{L_1,L_2}\, \widehat{\ch{}{L}}(it,0) \,
\widehat{\chi_{\msc{\bf G}}}(it,0)~,
\label{overlap nbps}
\end{eqnarray}
where $\widehat{\chi_{\msc{\bf G}}}(it,0)$ is defined in \eqn{chig hat},
and also we set
\begin{eqnarray}
\widehat{\ch{}{\ell}}(\tau,z) \equiv \sum_{r \in \msc{{\bf Z}}_{N}}\,
\ch{(\msc{NS})}{\ell,\ell+2r}(\tau,z) =
\sum_{s\in \msc{{\bf Z}}_{N-2}}\, c^{(N-2)}_{\ell,\ell+2s}(\tau)\,
\Th{2\ell + 2N s}{N-2}\left(\frac{\tau}{2N}, \frac{z}{N}\right)~.
\label{ch hat}
\end{eqnarray}
Here $c^{(k)}_{\ell,m}(\tau)$ denotes the level $k$ string function of
$SU(2)$.
Note that the open string channel includes states with fractional
$U(1)$-charges even in the self-overlap cases,
suggesting that this boundary state really
describes non-BPS D-branes.
To clarify the non-BPS nature it is important to examine the RR-sectors
of boundary states.
It is easy to construct the RR-counterpart of \eqn{nbps compact}.
However, we have to take account of
the compatibility with the charge integrality
and GSO projection together with the Minkowski part $\mbox{{\bf R}}^{5,1}$.
Let us now recall the formula for $U(1)$-charges in the R-sector
(\ref{U(1)-charge R sector}):
\begin{eqnarray}
&&M_{N-2} \mbox{-sector}: ~ {1\over 2}+{m'\over N},\hskip1.5cm
L_{N,1} \mbox{-sector}: ~ {1\over 2}+{n' \over N}~.
\end{eqnarray}
As we noted above, the sum over $r$ in (\ref{nbps compact}) forces
the fractional parts of the $U(1)$-charges $m'/N,n'/N$ to vanish and hence
$U(1)$-charge becomes $1/2$ in each R-sector.
When the boundary condition is flipped from B to A-type
in the $M_{N-2}$-sector,
$U(1)$-charge of the right mover
changes from $1/2$ to $-1/2$ and there is a net change of $U(1)$-charge
by $1$. Then a compensating change
must happen in the flat sector along $\mbox{{\bf R}}^{5,1}$.
One has to shift the dimension of the brane by one, obtaining a brane with the wrong dimension
$|Dp'\rangle'$ ($p'$ is even (odd) for type IIA (IIB) string theory).
Note that in the NS5-brane background a BPS brane has odd dimensions
extended in the direction transverse to NS5-brane
and thus has odd (even) dimensions
extended along the NS5-branes in type IIA (IIB) theory.
The non-BPS brane is then
given by
\begin{eqnarray}
&& \ket{Dp'}^{'(\msc{NS})} \otimes \ket{B;L}_{AB}^{(\msc{NS})}
\pm \ket{Dp'}^{'(\msc{R})} \otimes \ket{B;L}_{AB}^{(\msc{R})}
~.
\label{nbps 2nd type}
\end{eqnarray}
In the present example it is also possible to construct a non-BPS brane
of the type known in flat space-time
\begin{eqnarray}
&& \sqrt{2} \ket{Dp}^{(\msc{NS})} \otimes \ket{B;L}_{AB}^{(\msc{NS})}~.
\label{nbps 1st type}
\end{eqnarray}
Here $p$ is odd (even) in the type IIA (IIB) theory.
The second one \eqn{nbps 1st type} describes
an overall wrong-dimensional brane without RR-component and
the overall factor $\sqrt{2}$ is necessary for the same reason as
in the flat case.
The first one \eqn{nbps 2nd type} is more interesting and
characteristic for this conformal system. It is an overall {\em correct\/}
dimensional brane
and has a non-vanishing RR-component.
However, the boundary wave functions for the RR-ground states
always vanish, implying that they carry no RR-charges
(vanishing periods, in other words).
In fact, as is obvious from the construction, they have the vanishing
$\mbox{{\bf Z}}_N$-brane charge valued in the twisted K-group
(see {\em e.g.} \cite{twisted K}).
These branes also break the space-time SUSY completely,
and one can show
that their self-overlaps always include open string tachyons.
In fact they always contain a contribution of $M_{N-2}$-sector
\begin{eqnarray}
\frac{1}{2}
\left(\ch{(\msc{NS})}{0,2}(it,0)- \ch{(\widetilde{\msc{NS}})}{0,2}(it,0)\right)
\equiv \frac{1}{2}
\left(\ch{(\msc{NS})}{N-2,N-2}(it,0)+\ch{(\widetilde{\msc{NS}})}{N-2,N-2}(it,0)\right)~,
\end{eqnarray}
in the open string amplitudes, where
the second term originates from the RR-boundary states.
In the second equality we used the formula \eqn{field identification}.
It yields the leading IR behavior $\sim e^{-2\pi t (h-1/2)}$
with
\begin{eqnarray}
h= \frac{1}{2} - \frac{1}{N}~.
\label{tachyon nbps compact}
\end{eqnarray}
We have thus found open string tachyon modes
for any value $N\geq 2$.
We also remark that the involution $L\,\rightarrow\, N-2-L$
flips the sign in front of the RR-component in \eqn{nbps 2nd type}
as in the BPS branes.
Hence it is actually enough to consider only the plus sign in
\eqn{nbps 2nd type}.
~
We finally make a few comments:
~
\noindent
{\bf 1.}
In the recent paper \cite{Kutasov2}
D-brane configurations in NS5-backgrounds breaking/not breaking
the space-time SUSY have been investigated in detail
by means of the DBI action and an interesting geometrical interpretation of
tachyon condensation in the non-BPS branes has been proposed
in the Little String Theory (LST) \cite{GK}.
Among other things, it is claimed that the BPS branes
lying along the NS5-branes and breaking the space-time
SUSY completely should be interpreted as the non-BPS branes in LST.
It will be interesting to compare this type of branes with our boundary states
for the compact non-BPS branes of the first type
\eqn{nbps 2nd type}. It is shown in \cite{Kutasov2} that
the open string modes describing the positions of these D-branes
transverse to NS5 become tachyonic and have the mass squared\footnote
{Although the $S^1$-compactification is taken in \cite{Kutasov2},
the tachyon mass given there
does not depend on the compactification radius. }
\begin{eqnarray}
\alpha' M^2_T = -\frac{1}{N}~.
\end{eqnarray}
This coincides precisely with our calculation of tachyon mass
\eqn{tachyon nbps compact}.
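The agreement follows from the standard open-string relation between conformal weight and mass, $\alpha' M^2 = h - \frac{1}{2}$ for an NS-sector mode: applied to \eqn{tachyon nbps compact},

```latex
\alpha' M_T^2 \;=\; h - \frac{1}{2}
\;=\; \left(\frac{1}{2} - \frac{1}{N}\right) - \frac{1}{2}
\;=\; -\frac{1}{N}~.
```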
~
\noindent
{\bf 2. }
We can also consider extensions
to more general models and also the case of non-compact branes.
The consistency with the GSO projection
in RR-sector is the only non-trivial point. For instance, let us consider
the models with $N_M=1$, $N_L \geq 1$, which are identified with
the ALE fibrations as we addressed before, and focus on the compact
non-BPS branes.
We then find
\begin{itemize}
\item If $\gamma (\equiv \hat{c}-N_M-N_L)$ is even,
we have a similar situation as
in the NS5 case above.
Namely, we have two types of non-BPS branes with/without
the RR-components such as \eqn{nbps 2nd type},
\eqn{nbps 1st type}.
\item If $\gamma$ is odd, boundary states of the type \eqn{nbps 2nd type}
are not allowed, since the charge integrality condition in the R-sector (\ref{recall})
is not satisfied
(recall that the L.H.S. of (\ref{recall}) vanishes due to the $r$-summation),
and the R-sector does not appear in the boundary state.
We have only the non-BPS branes without RR-components as
in the flat background.
\end{itemize}
~
\section{Conclusions}
~
In this paper we have discussed various aspects of
string theory compactification on non-compact Calabi-Yau manifolds
by making use of $SL(2;\mbox{{\bf R}})/U(1)$ supercoset theories
coupled to ${\cal N}=2$ minimal models.
We have used the extended characters of ${\cal N}=2$ SCA for
the $SL(2;\mbox{{\bf R}})/U(1)$ theory together with
the irreducible characters for the minimal model
and determined the closed string massless spectrum, open string Witten index
and (non-) BPS boundary states.
An important aspect of the massless spectrum is the following:
at our non-compact Gepner points
Calabi-Yau 3 and 4-folds possess only $(a,c)$ or $(c,a)$-type
massless states,
while $(c,c)$ or $(a,a)$-type states are absent in the spectrum.
Thus the theory possesses only K\"ahler structure deformations.
In the T-dual ${\cal N}=2$ Liouville description
the theory possesses only complex structure deformations
corresponding to the special Lagrangian cycles.
This is the characteristic feature
of the space-time with a conifold singularity and thus our models
describe generalized conifold singularities in CY 3 and 4-folds:
${\cal N}=2$ Liouville theories
describe their deformations while $SL(2;\mbox{{\bf R}})/U(1)$ theories
describe their resolutions.
On the other hand, in the case of a K3 surface our models
possess equal numbers of $(a,c)$, $(c,a)$, $(a,a)$ and $(c,c)$ states,
which is the characteristic feature
of ADE type (hyperK\"{a}hler) singularities.
We have also studied the spectrum of D-branes in our non-compact models:
only the B-type branes are allowed as compact branes (or
only the A-branes are possible in the ${\cal N}=2$ Liouville theory)
and the open string Hilbert space describing compact branes
consists of extended graviton representations of $SL(2;\mbox{{\bf R}})/U(1)$-sector and
representations of the minimal sector.
We have compared the spectra of compact branes
with those of the massless states in the closed string sector.
Some of the BPS branes (homology cycles) are
not associated with massless states:
corresponding space-time fields are frozen
due to non-normalizability of the wave function.
We have also seen that the
cylinder amplitudes of
$\widetilde{\mbox{R}}$-sector reproduce expected
intersection numbers of vanishing cycles.
The Cardy states describing non-BPS D-branes are also discussed.
They are expressed by boundary states breaking the ${\cal N}=2$
superconformal symmetry and identified as
extensions of the ``unstable B-branes" in the $SU(2)$-WZW model.
They may possess (massive) RR-components, in contrast to non-BPS branes in the
flat background.
Geometry of the deformed side of the conifold is relatively easy to study
due to the absence of quantum corrections and one can
study the Lagrangian cycles using Liouville theory.
On the other hand, possible quantum corrections make
the geometry of resolved conifold difficult to understand.
Even the dimensionality of the branes may not be well-defined.
It is interesting to see if the description
by means of the $SL(2;\mbox{{\bf R}})/U(1)$ theory will help
our understanding of the geometry of resolved conifold.
\section*{Acknowledgments}
\indent
Y. S. would like to thank N. Ishibashi, Y. Nakayama and Y. Satoh
for valuable comments and discussions.
The research of T. E. and Y. S. is partially supported by
Japanese Ministry of Education,
Culture, Sports, Science and Technology.
\newpage
\section{Introduction}
The recent measurements of the Cosmic Microwave Background
fluctuations, supernovae Ia, redshift surveys, clusters of galaxies
and other probes suggest a `concordance' model in which the universe
is flat and contains
approximately 4\% baryons, 26\% cold dark matter and 70\% dark
energy. It remains to be seen if this model will survive future tests,
but in any case there are many challenges ahead to understand the
formation of galaxies, and how they trace the mass distribution in the
non-linear regime. Redshift surveys provide an important
bridge between `linear cosmology' and the more complex processes of
galaxy formation.
Two main strategies have been implemented in mapping the local universe:
whole-sky `shallow' surveys (e.g. IRAS) and `deep' surveys
over limited parts of the sky (e.g. 2dFGRS, SDSS).
The Table below summarises the properties of the
main new surveys:
2dFGRS\footnote{http://www.mso.anu.edu.au/2dFGRS/},
SDSS\footnote{http://www.sdss.org/} + LRG\footnote
{Another part of the SDSS is the `Luminous Red Galaxies' (LRG)
with median redshift ${\bar z} \sim 0.5$.
An extension of the survey to higher redshift is now underway utilising
2dF.},
2MASS\footnote{http://www.ipac.caltech.edu/2mass/}
/6dFGS\footnote{http://www.mso.anu.edu.au/6dFGS/},
DEEP2\footnote{http://deep.berkeley.edu/}, and
VIRMOS\footnote{http://www.astrsp-mrs.fr/virmos/}.
\begin{center}
\begin{tabular}{|l||c|c|c|}
\hline
Survey
&number of galaxies
&median redshift
&angular coverage (sq. deg)
\\
\hline
2dFGRS & 230k & 0.1 & $\sim$ 1,800\\
SDSS & 1000k & 0.1 & $\sim$ 10,000\\
2MASS-2MRS & 25k & 0.02 & $\sim$ 40,000\\
2MASS-6dFGS & 150k & 0.05 & $\sim$ 20,000\\
DEEP2 & 65k & $\sim 1$ & 3.5 \\
VIRMOS &150k & $\sim 1$ & 16 \\
\hline
\end{tabular}
\end{center}
Each strategy has its pros and cons.
The whole-sky surveys have given a useful `full picture' of
the local cosmography
and they have allowed us to predict the local velocity field assuming
that light roughly traces mass.
The complete picture depends on careful mapping of the
Zone of Avoidance (ZoA),
as discussed in detail at the Proceedings of this Cape Town (2004)
meeting and at the previous two
ZoA conferences in Paris (1994) and Mexico (2000).
The deep limited-sky surveys are very useful for statistical studies
such as the power spectrum.
Both types of surveys pose challenges for quantifying the web
of cosmic structure.
Using simulations Bond, Kofman \& Pogosyan (1996) coined the
term `cosmic web' and argued that a filament-dominated structure was
already present in the overdensity fields of the initial Gaussian
fluctuations, and was then amplified over a Hubble time by non-linear
gravitational dynamics. In the new era of large redshift surveys
(see the Table) and huge simulations
the next important step is to quantify this `cosmic web' using various
novel statistical measures beyond the traditional methods
(e.g. Martinez \& Saar 2002; Lahav \& Suto 2004 for reviews)
and to identify `Great
Attractors', `Great Walls', `Zeldovich pancakes' and voids. This will
allow us to understand the role of initial conditions vs.
non-linear gravitational evolution and to constrain cosmological
models and scenarios for biased galaxy formation.
\section{Results from the 2dF Galaxy Redshift Survey}
Redshift surveys in the 1980s and the 1990s (e.g.\ the CfA, IRAS and Las
Campanas surveys) measured redshifts
of thousands to tens of thousands of galaxies.
Multifibre technology now allows us to measure redshifts of
millions of galaxies.
The Anglo-Australian 2
degree Field Galaxy Redshift Survey\footnote{The 2dFGRS Team comprises:
I.J. Baldry, C.M. Baugh, J. Bland-Hawthorn, T.J. Bridges, R.D. Cannon,
S. Cole,
C.A. Collins,
M. Colless,
W.J. Couch, N.G.J. Cross, G.B. Dalton, R. DePropris, S.P. Driver,
G. Efstathiou, R.S. Ellis, C.S. Frenk, K. Glazebrook, E. Hawkins,
C.A. Jackson,
O. Lahav, I.J. Lewis, S.L. Lumsden, S. Maddox,
D.S. Madgwick, S. Moody, P. Norberg, J.A. Peacock, B.A. Peterson,
W. Sutherland, K. Taylor.
For more details on the survey and resulting publications see http://www.mso.anu.edu.au/2dFGRS/}
(2dFGRS)
measured redshifts for 230,000 galaxies
selected from the APM catalogue. The survey is now complete and publicly available.
The median redshift of the
2dFGRS is ${\bar z} \sim 0.1$,
down to an
extinction corrected magnitude limit of $b_J<19.45$ (Colless et al. 2001).
A sample of this size allows large-scale structure statistics
to be measured with very small random errors.
Here we summarize some recent results
from the 2dFGRS on clustering and galaxy biasing.
Comprehensive recent reviews are given by Colless (2003) and Peacock (2003).
\subsection{The power spectrum of 2dF galaxies}
An initial estimate of the convolved, redshift-space power spectrum of the
2dFGRS has been determined (Percival et al. 2001)
for a sample of 160,000 redshifts.
On scales $0.02<k<0.15 \,h\,{\rm Mpc}^{-1}$,
where $H_0 = 100 h$ km/sec/Mpc,
the data are
robust and the shape of the power spectrum is not affected by
redshift-space or non-linear effects, though the amplitude
is increased by redshift-space distortions.
Percival et al. (2001), Efstathiou
et al. (2002) and Lahav et al. (2002) compared the
2dFGRS and CMB
power spectra, and concluded that they are consistent with each other.
A key assumption in deriving cosmological parameters from redshift surveys is that
the biasing parameter,
defined as the ratio of
galaxy to matter power spectra,
is constant, i.e. scale independent.
On scales of
$0.02 < k < 0.15 \,h\,{\rm Mpc}^{-1}$
the fluctuations are close
to the linear regime, and there are theoretical reasons
(e.g. Fry 1996; Benson et al. 2000)
to expect that on large scales
the biasing parameter
should tend to a constant and close to unity at the present epoch.
This is supported by the derived biasing close to unity by combining
2dFGRS with the CMB (Lahav et al. 2002) and by the
study of the bi-spectrum of the 2dFGRS alone (Verde et al. 2002).
The 2dFGRS power spectrum (Figure 1) was fitted in Percival et al. (2001)
over the above range in $k$,
assuming scale-invariant primordial
fluctuations and a $\Lambda$-CDM cosmology, for
four free parameters: ${\Omega_{m}} h$, ${\Omega_{b}}/{\Omega_{m}}$, $h$
and the redshift space $\sigma^S_{8{\rm g}}$.
The amplitudes
of the linear-theory rms fluctuations are traditionally labeled $\sigma_{8{\rm m}}$
in mass and $\sigma_{8{\rm g}}$ in galaxies, defined on $8 h^{-1}$ Mpc spheres.
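The amplitude $\sigma_8$ has a standard definition which we recall for reference (this formula is supplied here for completeness, not taken from the text): it is the rms linear density fluctuation smoothed with a top-hat window of radius $R = 8 h^{-1}$ Mpc,
\begin{eqnarray*}
\sigma_8^2 = \int_0^{\infty} \frac{k^2 \, dk}{2 \pi^2} \, P(k) \, W^2(kR),
\qquad W(x) = \frac{3 (\sin x - x \cos x)}{x^3},
\end{eqnarray*}
with $P(k)$ the mass or galaxy power spectrum as appropriate.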
Assuming a Gaussian prior on the Hubble constant $h=0.7\pm0.07$ (based
on Freedman et al. 2001) the shape of the recovered spectrum within
the above $k$-range was used to yield 68 per cent confidence limits on
the shape parameter ${\Omega_{m}} h=0.20 \pm 0.03$, and the baryon fraction
${\Omega_{b}}/{\Omega_{m}}=0.15 \pm 0.07$, in accordance with the popular
`concordance' model (e.g. Bahcall et al. 1999; Lahav \& Liddle 2004).
For fixed `concordance model' parameters $n=1, {\Omega_{m}} = 1 - {\Omega_\Lambda} = 0.3$,
$\Omega_{\rm b} h^2 = 0.02$ and a Hubble constant $h=0.70$,
the amplitude of 2dFGRS galaxies in redshift space is $\sigma_{8{\rm
g}}^S (L_s,z_s) \approx 0.94$ (at the survey's effective luminosity and redshift).
Recently the SDSS team presented their results for the power spectrum
(Tegmark et al. 2003a,b; Pope et al. 2004), and they found good agreement
with the 2dFGRS gross shape of the power spectrum.
Pope et al. (2004) emphasize that SDSS alone cannot
break the degeneracy between ${\Omega_{m}} h$ and ${\Omega_{b}}/{\Omega_{m}}$
because the baryon oscillations are not resolved given
the window function of the survey.
\subsection {Upper limits on the neutrino mass}
Solar, atmospheric, and reactor neutrino experiments have confirmed
neutrino oscillations, implying that neutrinos have non-zero mass, but
without pinning down their absolute masses. While it is established
that the effect of neutrinos on the evolution of cosmic structure is
small, the upper limits derived from large-scale structure could help
significantly to constrain the absolute scale of the neutrino masses.
Elgar\o y et al. (2002) used the 2dFGRS power spectrum (Figure 1) to
provide an upper limit $m_{\nu,\rm tot} < 2.2\;{\rm eV}$,
i.e. approximately 0.7 eV for each of the three neutrino flavours, or
phrased in terms of their contribution to the matter density,
$\Omega_{\nu} / \Omega_{\rm m} < 0.16$.
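These numbers can be cross-checked with the standard relation between the total neutrino mass and the neutrino density (a textbook formula quoted here for convenience, not from the text),
\begin{eqnarray*}
\Omega_{\nu} h^2 \approx \frac{m_{\nu,\rm tot}}{93\;{\rm eV}};
\end{eqnarray*}
with $\Omega_{\nu} = 0.16 \times 0.3 \approx 0.048$ and $h = 0.7$ this gives
$m_{\nu,\rm tot} \approx 93 \times 0.048 \times 0.49 \approx 2.2$ eV,
consistent with the quoted limit.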
The WMAP team (Spergel et al. 2003) reported an improved
limit of $m_{\nu,\rm tot} < 0.71\;{\rm eV}$ (95\% CL).
Actually the main neutrino signature comes from
the 2dFGRS and the Lyman $\alpha$ forest which were combined with the
WMAP data. The main contribution of WMAP is that it constrains better the
other parameters involved, e.g. ${\Omega_{m}}$ (see also Hannestad 2003 and
Tegmark et al. 2003b for similar results from SDSS+WMAP).
Despite the uncertainties involved, it is remarkable that the results from
redshift surveys give upper limits which are lower than
those deduced from laboratory experiments, e.g. tritium decay.
\begin{figure}
\plotone{pk_mdm.eps}
\label{pk_mdm}
\caption{The observed
2dFGRS power spectrum (in redshift space and convolved with the
survey window function; Percival et al. 2001) contrasted
with models.
The three models are
the old Cold Dark Matter model
($\Omega_{\rm m}=1$, $\Omega_\nu=0$, $h=0.45$, $n=0.95$)
which poorly fits the data,
the `concordance' model
$\Lambda {\rm CDM}$ ($\Omega_{\rm m}=0.3$, $\Omega_\Lambda=0.7$,
$\Omega_\nu = 0$, $h=0.7$, $n=1.0$)
and Mixed Dark Matter
($\Omega_{\rm m}=1$, $\Omega_\nu=0.2$, $h=0.45$, $n=0.95$),
all with $\Omega_{\rm b}h^2 = 0.024$.
The models were normalized to each
data set separately, but otherwise these are assumed models, not
formal best fits.
Only the range $ 0.02 < k < 0.15 \,h\,{\rm Mpc}^{-1}$ is used
in the present linear-theory analysis.
These scales of $k$
roughly correspond to CMB harmonics $200 < \ell < 1500$
in a flat ${\Omega_{m}} = 0.3$ universe.
From Elgar\o y \& Lahav (2003).}
\end{figure}
\section{Wiener reconstruction of 2dFGRS}
Wiener filtering is a well-known technique and it has been applied to
many fields in astronomy. For example, the method was used to
reconstruct the angular distribution over the whole sky including the
ZoA (Lahav et al. 1994), the real-space density, velocity and
gravitational potential fields of the 1.2-Jy IRAS (Fisher et
al. 1995). The Wiener filter was also applied to the reconstruction
of the maps of the cosmic microwave background temperature
fluctuations. A detailed formalism of the Wiener filtering method as
it pertains to the large-scale structure reconstruction can be found
in Zaroubi et al. (1995). The Wiener filter is optimal in the sense
that the variance between the derived reconstruction and the
underlying true density field is minimised. As opposed to ad hoc
smoothing schemes, the Wiener filtering is
determined by the data. In the limit of high signal-to-noise, the
Wiener filter modifies the observed data only weakly, whereas it
suppresses the contribution of the data contaminated by shot noise.
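The behaviour described above can be sketched in a few lines of code. The following 1-D toy implementation (our illustration, with constant signal and noise power assumed known, not the actual pipeline of the papers cited) applies the filter $S/(S+N)$ mode by mode in Fourier space:

```python
import numpy as np

def wiener_filter_1d(data, signal_power, noise_power):
    """Apply the Wiener filter S/(S+N) to each Fourier mode of `data`.

    For simplicity the signal and noise power are taken constant
    (white spectra); in a real reconstruction they vary with k.
    """
    modes = np.fft.fft(data)
    wf = signal_power / (signal_power + noise_power)
    return np.fft.ifft(wf * modes).real

x = np.sin(np.linspace(0.0, 2.0 * np.pi, 128))

# High signal-to-noise: the filter modifies the data only weakly.
almost_identity = wiener_filter_1d(x, signal_power=1e6, noise_power=1.0)
assert np.allclose(almost_identity, x, atol=1e-4)

# Noise-dominated: the filter suppresses the data towards zero.
suppressed = wiener_filter_1d(x, signal_power=1.0, noise_power=1e6)
assert np.abs(suppressed).max() < 1e-4
```

The two assertions illustrate the limits stated in the text: near-identity for high signal-to-noise, strong suppression for shot-noise-dominated data.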
Erdogdu et al. (2004) reconstructed
the underlying density field of the Two-degree Field
Galaxy Redshift Survey (2dFGRS) for the redshift range
$0.04 < z < 0.20$
using the Wiener filtering method.
They used a
variable smoothing technique with two different effective resolutions:
5 and 10 $h^{-1}$ Mpc at the median redshift of the survey.
They identified all
major superclusters and voids in 2dFGRS. In particular, they found
two large superclusters and two large local voids.
One of the two large superclusters is shown in Figure 2.
The full set of
colour maps can be viewed on the World Wide Web at
http://www.ast.cam.ac.uk/$\sim$pirin.
For comparison see the catalogue of superclusters derived by Einasto et al.
(2003) from the SDSS.
\begin{figure}
\plotone{2dF_wf_fig13.ps}
\label{pirin1}
\caption{Wiener reconstruction of a 2dFGRS redshift shell
$0.107 < z < 0.108$
for $5 h^{-1}$ Mpc cells.
(a) the redshift-space density field weighted by the selection function and the
angular mask; (b) the Wiener filtered density field
in redshift space; (c) the Wiener filtered density field
in real space, i.e. after correction for redshift distortion.
The contours are spaced at $\Delta \delta=0.5$
with solid/dashed lines denoting positive/negative contours;
the heavy solid lines correspond to $\delta=0$.
The connectivity of structure across the field is striking.
The Supercluster centred at $RA \approx 2^\circ; Dec \approx -31^\circ$
is one of the two largest superclusters in the 2dFGRS, and it contains
20 Abell clusters and approximately 80
smaller groups.
From Erdogdu et al. (2004).}
\end{figure}
\section{Discussion: Future studies of the cosmic web}
We motivate the great need for new approaches to
analysis of redshift surveys and simulations by some illustrative examples:
\noindent
(i) Consider two images, say of a cat and a dog.
If we take Fourier transforms of both, and swap the
amplitudes and phases, we will still be able to recognise the
cat in the image which retains its original phases, even if it has
the amplitudes from the dog's original image!
Phase information is thrown away in
the commonly used power spectrum (or its Fourier
transform, the two-point correlation function).
Two realisations of the galaxy distribution may have the same
power spectrum, but they may look very different due to phase correlations.
These phase correlations are expected to be due to the non-linear effects
in the evolution of the gravitational instability
and would arise even
if the primordial fluctuations were purely Gaussian.
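This thought experiment is easy to carry out numerically. The sketch below (our illustration, with random arrays standing in for the cat and dog images) keeps the Fourier amplitudes of one array and the phases of the other:

```python
import numpy as np

def swap_phases(a, b):
    """Combine the Fourier amplitudes of `a` with the Fourier phases of `b`.

    For real inputs the hybrid spectrum is conjugate-symmetric, so the
    inverse transform is real up to numerical noise.
    """
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    hybrid_spectrum = np.abs(A) * np.exp(1j * np.angle(B))
    return np.fft.ifft2(hybrid_spectrum).real

# Toy stand-ins for the two images (hypothetical random data).
rng = np.random.default_rng(0)
cat = rng.standard_normal((64, 64))
dog = rng.standard_normal((64, 64))

hybrid = swap_phases(cat, dog)

# The hybrid retains the power spectrum of `cat` but the phase
# (morphological) information of `dog`.
assert np.allclose(np.abs(np.fft.fft2(hybrid)), np.abs(np.fft.fft2(cat)))
```

The assertion makes the point of item (i) concrete: the power spectrum is unchanged even though the phases, and hence the appearance, come entirely from the other image.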
\noindent
(ii) The 2-degree-Field Galaxy Redshift Survey (2dFGRS) power spectrum is consistent with the
low density ($\Omega_{\rm m} = 0.3$) Cold Dark Matter model (Percival et al. 2001).
Volume averaged $p$-point correlation functions
up to order $p=6$ have recently been calculated
(Baugh et al. 2004)
and in particular on small scales they are sensitive to the appearance
of two rich superclusters in the 2dFGRS volume, one of them is shown
in Figure 2. Visual inspection of the Abell
catalogue suggests that such superclusters are quite common. However,
they seem less common in $\Lambda$-CDM simulations which nevertheless
do agree with the 2dFGRS power spectrum ($p=2$) statistic. It is
important to know whether the $\Lambda$-CDM simulations pass the test of
high order moments, whether they require strong biasing in high density
regions and whether 2dFGRS is a fair sample of the nearby universe.
\noindent
(iii) Recent studies of higher moments, e.g. the three point
correlation function (or its Fourier transform, the bi-spectrum)
in both 2dFGRS and SDSS
showed that they set important constraints on biasing, although the
interpretation is model dependent (e.g. Verde et al. 2002; Kayo et
al. 2004).
\noindent
\bigskip
It is timely to address these issues now for a number of reasons:
\noindent
$\bullet$
For the first time the surveys are large enough to extract
volume limited subsets. Some of the statistics (e.g. minimal
spanning tree, percolation and Minkowski functionals)
could not be applied effectively
to flux limited surveys, where the mean separation between
observed objects varies with distance from the observer.
With volume limited samples we can apply these and new methods
easily, and contrast data with simulations on equal footing.
\noindent
$\bullet$
The new surveys are also large enough now to sub-divide the galaxies
by colour or spectral type. Recent studies of 2dFGRS and SDSS
show the bimodality of galaxy populations in colour or related properties
(e.g. Madgwick et al. 2002;
Kauffmann et al. 2004)
and that clustering patterns of `red' and `blue' galaxies
are quite different on scales smaller than 10 $h^{-1}$Mpc
(e.g. Zehavi et al. 2003; Madgwick et al. 2003,
Wild et al. 2004).
\noindent
$\bullet$
We can now probe the evolution of clustering patterns with redshift,
given DEEP2 and VIRMOS at redshift ${\bar z} \sim 1$ compared
to 2dFGRS and SDSS
at $ {\bar z} \sim 0.1$.
\noindent
$\bullet $
The N-body simulations (e.g. Virgo)
are now well advanced in resolution and volume to allow
us to produce 2dF-like or SDSS-like samples.
\section{Acknowledgements}
I thank
the conference organisers for the hospitality in Cape Town,
and \O ystein Elgar\o y and Pirin Erdogdu
for their contribution to the work presented here.
I acknowledge a PPARC Senior Research Fellowship.
\section{Introduction}
\setcounter{equation}{0}
The motivation to develop the results in the present paper
stems from a desire to have available a source of
examples of vertex operator algebras which can be used to
test ideas. The most familiar types of vertex operator algebras
tend to have a number of features in common. A large portion
of the
literature (in both mathematics and physics)
concerned with vertex
operator algebras $V$ assumes that
$V$ is of \emph{CFT-type}, that is the underlying Fock space has the shape
\begin{equation}\label{eq: CFT-type}
V = \mathbf{C1} \oplus V_1 \oplus V_2 \oplus \cdots
\end{equation}
Another common assumption is that $V$ is \emph{self-dual}
in the
sense that the dual module $V'$ is isomorphic to the
adjoint module
$V$. Deviation from any of these assumptions could be considered
somewhat pathological.
\vspace{.15 in}
It is a natural question to ask
whether there are significant classes of vertex operator algebras
exhibiting such pathologies.
We are particularly interested in the
case in which the vertex operator algebra $V$
is nevertheless well-behaved from the representation-theoretic
point-of-view. By this
we generally have in mind that $V$ is
\emph{rational} in the sense of \cite{DLM3},
that is, the module
category $V$-Mod is semisimple.
Closely related to this is the condition that $V$ is \emph{regular}
in the sense of \cite{DLM1}. By
\cite{ABD}, this is the same as requiring that $V$
is both rational and $C_2$-cofinite (cf. \cite{Z}). A case of
particular interest
involves \emph{holomorphic} vertex operator algebras,
which by definition are rational and for which the adjoint module
is the unique simple module.
\vspace{.15 in}
Consider, then, the following pathologies that
a vertex operator algebra $V$ might exhibit:
\begin{eqnarray*}
&&(i) \ V \ \mbox{has nonzero negative weight spaces;} \\
&&(ii) \ V \ \mbox{is not self-dual as $V$-module;} \\
&&(iii)\ \mbox{the zero weight space is degenerate: dim} V_0 \geq 2.
\end{eqnarray*}
It is easily seen (see below for the easy proof) that (i)
$\Rightarrow$ (iii), so that there are
six remaining possible combinations of (i)-(iii) which may occur. We will show
in Section 5 that
there are regular, simple vertex operator algebras of all six types.
Indeed, it will be clear from our
construction that such vertex operator algebras \emph{exist in profusion}.
\vspace{.15 in}
A related issue that arises is concerned with the question of the
\emph{finiteness}
of the number of vertex operator algebras satisfying various
conditions. Among other things, we construct examples of the following:
\begin{eqnarray*}
&&(1) \mbox{ infinitely many nonisomorphic regular, simple vertex operator} \\
&& \mbox{ algebras with the \emph{same} partition function} \
Z_V(q) = \mbox{Tr}_V q^{L(0) - c/24} \\
&&\mbox{ and \emph{equivalent} module categories;} \\
&&(2) \mbox{ for any integer $c$ divisible by $8$, infinitely many
nonisomorphic } \\
&&\mbox{ holomorphic vertex operator algebras of central charge $c$}.
\end{eqnarray*}
One may arrange for the vertex operator algebras in (1) to be
holomorphic, for example,
or that none of them are self-dual.
Thus there is no finiteness theorem for holomorphic vertex operator
algebras, say, of a given central charge, that is analogous to
Minkowski's finiteness
theorem for positive-definite self-dual lattices.
Our examples \emph{are} consistent with the possibility of a
finiteness theorem for
(rational, say) vertex operator algebras with given values of both
central charge $c$ and \emph{effective} central charge $\tilde{c}$
(cf. \cite{DM1}).
\vspace{.15 in}
The method of construction for all of these examples is not difficult. It
involves taking a vertex operator algebra with known properties (we
use lattice theories (\cite{LL}) as they are particularly accessible)
and \emph{shifting} the Virasoro element while at the same time retaining
the overall space of vertex operators. For this reason, we
call this general class of theories
\emph{shifted vertex operator algebras}. This idea was already used
in \cite{DLinM} to a limited extent, and by Matsuo and Nagatomo \cite{MN},
who considered shifting the conformal vector in
the Heisenberg vertex operator algebra.
As far as we know however, there is presently no systematic
discussion of the properties
of shifted vertex operator algebras in the literature. That is what we
carry out here.
\vspace{.15 in}
As we have suggested, once the basic
construction is presented, the implementation
in explicit examples
is not difficult. Nevertheless, the type of pathological vertex
operator algebras
that can result seems not to be well-known.
We discuss the case of lattice theories in Section 2, showing
how a shifted lattice
theory gives rise to a particular type of vertex algebra that we call
a \emph{$\mathbf{C}$-graded vertex operator algebra},
or more generally a $k$-graded vertex operator algebra, $k$ a
subgroup of $\mathbf{C}$. Vertex operator algebras in the usual sense
correspond to the case $k = \mathbf{Z}$.
Sections 3-5 are devoted to elucidating the properties of
shifted lattice theories in the case $k = \mathbf{Z}$, and among
other things we
construct the various examples alluded to above. In the final Section
6, we show how
our constructions can be put in a more abstract context, so that one
can define shifted vertex operator algebras in a quite general setting.
\vspace{.15 in}
\section{$\mathbf{C}$-graded Vertex Operator Algebras}
\setcounter{equation}{0}
We begin with the
\vspace{.15 in}
\noindent
{\sc Definition:} A \textit{\mbox{$\mathbf{C}$}-graded vertex
operator algebra} is a
quadruple $(V, Y, \mathbf{1}, \omega)$ consisting of a $\mathbf{C}$-graded
complex linear space (the \emph{Fock space})
\begin{equation}\label{eq: linspac}
V = \bigoplus_{ r \in \mathbf{C}}V_r,
\end{equation}
a linear map
\begin{eqnarray}\label{eq: Ymap}
Y: &V& \rightarrow ({\rm End} V)[[z,z^{-1}]], \nonumber \\
&v& \mapsto Y(v, z) = \sum_{n \in \mathbf{Z}}v_n z^{-n-1}
\end{eqnarray}
together with a pair of distinguished states $\mathbf{1}, \omega \in V$
(the \emph{vacuum} and \emph{Virasoro} elements respectively). The
following axioms are imposed:\\
(a) For any $u,v\in V,$ $u_nv=0$ if $n$ is sufficiently large. \\
(b) Grading: Each homogeneous subspace $V_r$ has finite dimension, and
the $\mathbf{C}$-grading is truncated from below in the following sense:
the implication
\begin{eqnarray}\label{eq: grade}
V_r \neq 0 \Rightarrow \mbox{Re}(r) \geq | \mbox{Im}(r)|
\end {eqnarray}
holds for \textit{all but finitely many} $r$. (Here, Re($r$) and
Im($r$) refer to real and imaginary
parts of the complex number $r$.)\\
(c) Locality: For any pair of states $u, v \in V,$ there exists a nonnegative
integer $n$ such that
$$(z_1-z_2)^n[Y(u, z_1),Y(v,z_2)]=0.$$
(d) Creativity: If $v \in V$ then
\begin{eqnarray*}
Y(v, z)\mathbf{1} = v + \sum_{n < -1}v(n)\mathbf{1}z^{-n-1}.
\end{eqnarray*}
(e) Conformality: The state $\omega$ is a Virasoro element. That is,
if $Y(\omega, z) = \sum L(n)z^{-n-2}$
then there is a complex number $c$ (central charge) such that
\begin{equation}\label{eq: Virasororeln}
[L(m), L(n)] = (m-n)L(m+n) + \frac{(m^3-m)}{12}\delta_{m, -n}c \, \mbox{Id}.
\end{equation}
Moreover
\begin{eqnarray*}
\frac{d}{dz}Y(v, z) = Y(L(-1)v, z)
\end{eqnarray*}
for $v \in V$, and $L(0)$ is a semisimple operator such that
\begin{eqnarray*}
V_r = \{ v \in V | L(0)v = rv \}.
\end{eqnarray*}
A $\mathbf{C}$-graded vertex operator algebra is therefore a vertex
algebra with a particular kind
of grading induced by the $L(0)$ operator. Note that from (\ref {eq:
grade}), there are only finitely many
values of $r$ for which $V_r \neq 0$ and Re($r$) $< 0$. In particular, if
the grading is \emph{real} (i.e. $V_r \neq 0 \Rightarrow r \in
\mathbf{R}$), then the
truncation condition simply says that $V_r = 0$ for all small enough
$r$. If the grading is \emph{integral}
(i.e. $V_r \neq 0 \Rightarrow r \in \mathbf{Z}$), then $V$ is just a vertex
operator algebra in
the usual sense. For an additive subgroup $k \subseteq \mathbf{C}$,
we say that $V$ is
\emph{$k$-graded} in case $V$ is $\mathbf{C}$-graded and $V_r \neq 0
\Rightarrow r \in k$.
\section{Shifted Lattice Theories}
\setcounter{equation}{0}
In this Section we construct a large number of $\mathbf{C}$-graded
vertex operator algebras by the method of \emph{shifting}
the Virasoro element of a lattice vertex operator algebra $V_L$.
We will see that this idea leads to a rich
source of examples of $\mathbf{C}$-graded vertex operator algebras. The basic
set-up, which we discuss below in some detail, is presented in
Section 4 of \cite{DLinM}.
\vspace{.15 in}
Let $L$ be a positive-definite even lattice of rank $l$, with inner product
\begin{eqnarray*}
(\ , \ ): L \times L \rightarrow \mathbf{Z}.
\end{eqnarray*}
Let $H = \mathbf{C} \otimes L$
be the corresponding complex linear space equipped with the $\mathbf{C}$-linear
extension of $( \ , \ )$. In fact, it is useful to introduce
some related spaces, and for this purpose we recall that
the $\mathbf{Z}$-dual, or \emph{dual lattice} of $L$, is defined by
\begin{eqnarray*}
L^o = \{ f \in \mathbf{R} \otimes L \ | \ (f, \alpha) \in \mathbf{Z}, \
\mbox{all} \ \alpha \in L \}.
\end{eqnarray*}
Then for a (nonzero) additive subgroup $k \subseteq
\mathbf{C}$, set $H_k = k \otimes L^o \subseteq H$.
\vspace{.15 in}
Let $(M, Y, \mathbf{1},
\omega_L)$ be the free bosonic vertex operator algebra based on $H$, and \\
$(V_L, Y, \mathbf{1}, \omega_L)$ the
corresponding lattice vertex operator algebra. Both of these theories
have central charge $l$. The
Fock space for $V_L$ is
\begin{equation}\label{eq: latticefockspace}
V_L = M(1) \otimes \mathbf{C}[L]
\end{equation}
where $\mathbf{C}[L]$ is the group algebra of $L$. For more
background we refer the reader to \cite{FLM}.
\vspace{.15 in}
For a state $h \in H \subseteq V_L$ we set
\begin{equation}\label{eq: shiftedvirasoro}
\omega_h = \omega_L + h(-2)\mathbf{1}.
\end{equation}
Note that $\omega_h \in (V_L)_2$. We are going to consider the quadruple
\begin{equation}\label{eq: shiftedvir}
(V_L, Y, \mathbf{1}, \omega_h),
\end{equation}
which we denote by $V_{L, h}$ when it is convenient.
This means that we are looking at the Fock space (\ref{eq: latticefockspace})
equipped with the \emph{same} set of fields as $V_L$ and the \emph{same}
vacuum state. However
the Virasoro state has been shifted, and consequently the conformal structure
and grading are modified.
\vspace{.15 in}
{\sc Theorem 3.1}: The quadruple (\ref {eq: shiftedvir}) is a
$\mathbf{C}$-graded
vertex operator algebra with central charge
\begin{equation}\label{eq: shiftedcc}
c_h = l - 12(h, h).
\end{equation}
If $h \in H_k$ for an additive subgroup $k \subseteq \mathbf{C}$,
then $V_{L, h}$ is $k$-graded. In particular, if $h \in L^o$
then $V_{L, h}$ is a vertex operator algebra.
\vspace{.15 in}
{\sc Proof:} Note that parts of this result are already contained in
\cite{DLinM} and \cite{MN}.
Since $V_L$ is a vertex operator algebra, and in view of
the fact that the Fock space, fields and vacuum state of $V_{L, h}$
are identical to those for $V_L$, it is evident
that the locality and creativity axioms for $V_{L, h}$ hold. So we only need
check the Virasoro and grading axioms.
\vspace{.15 in}
That $\omega_h$ is a Virasoro element is well-known (loc.\ cit.). We run
through the details to get the central charge. There are the
following relations:
\begin{equation}\label{eq: heisenbergrelns}
[h(m), h(n)] = m \delta_{m, -n}(h, h) \mbox{Id},
\end{equation}
\begin{equation}\label{eq: primaryfld}
L(m)h = 0 \ \mbox{for} \ m \geq 1, \ L(0)h = h.
\end{equation}
Moreover,
\begin{eqnarray*}
(h(-2)\mathbf{1})(n) = (L(-1)h)(n) = -nh(n-1).
\end{eqnarray*}
So if we set
\begin{equation}\label{eq: shiftedomegaops}
Y(\omega_h, z) = \sum_{n \in \mathbf{Z}} L_h(n)z^{-n-2},
\end{equation}
then
\begin{equation}\label{eq: shiftedLs}
L_h(n) = L(n) - (n+1)h(n).
\end{equation}
Now using (\ref {eq: Virasororeln}), (\ref{eq: heisenbergrelns}) and
(\ref{eq: primaryfld})
we calculate that
\begin{eqnarray*}
[L_h(m), L_h(n)] = (m-n)L_h(m+n) + \frac{(m^3-m)}{12}\delta_{m, -n}(l
- 12(h, h))\mbox{Id}.
\end{eqnarray*}
So the central charge is indeed $l - 12(h, h)$. It is an important
feature of (\ref{eq: shiftedLs})
that
\begin{eqnarray*}
L_h(-1) = L(-1),
\end{eqnarray*}
from which it is clear that the derivative property also holds in $V_{L, h}$.
\vspace{.15 in}
We turn to consideration of the grading. Obviously from (\ref {eq:
shiftedLs}) we have
\begin{eqnarray*}
L_h(0) = L(0) - h(0).
\end{eqnarray*}
Now $h(0)$ is a semisimple operator on $V_L$ with action
\begin{eqnarray*}
h(0) : u \otimes e^{\alpha} \mapsto (h, \alpha) u \otimes e^{\alpha}
\end{eqnarray*}
for $u \in M(1), \alpha \in L$. Then clearly $L_h(0)$ is also semisimple
and if $u \in M(1)_n$ then
\begin{eqnarray*}
L_h(0) : u \otimes e^{\alpha} \mapsto (n - (h, \alpha)) u \otimes e^{\alpha}.
\end{eqnarray*}
So the eigenspace $(V_{L, h})_r$ for $L_h(0)$ with eigenvalue $r$ is spanned by
states $u\otimes e^{\alpha}$ with $u \in M(1)_n$ and $n \geq 0$, $\alpha
\in L$, and satisfying
\begin{equation}\label{eq: eigenstates}
n + \frac{1}{2}(\alpha, \alpha) - (h, \alpha) = r .
\end{equation}
Let us write
\begin{equation}\label{eq: realineq}
r = x + iy, \; x, y \in \mathbf{R},
\end{equation}
\begin{equation}\label{eq: cmpxineq}
h = a + ib, \; a, b \in \mathbf{R}\otimes L.
\end{equation}
Then (\ref {eq: eigenstates}) tells us that
\begin{equation}\label{eq: realandim}
n + \frac{1}{2}(\alpha, \alpha) - (a, \alpha) = x, \qquad
-(b, \alpha) = y.
\end{equation}
Now because $( \ , )$ is positive-definite, all but finitely many
$\alpha \in L$ satisfy $(\alpha, \alpha) \geq 4(a \pm b, a\pm b)$.
In this case, an application of the Schwarz inequality leads to
\begin{eqnarray*}
&&(a, \alpha) \pm (b, \alpha) = (a \pm b, \alpha)
\leq |(a \pm b, \alpha)| \\
&\leq& \sqrt{(a \pm b, a \pm b)(\alpha, \alpha)}
\leq \frac{1}{2}(\alpha, \alpha),
\end{eqnarray*}
so that
\begin{eqnarray*}
|y| = |(b, \alpha)| \leq \frac{1}{2}(\alpha, \alpha) - (a, \alpha) =
x-n \leq x.
\end{eqnarray*}
This proves that $V_{L, h}$ satisfies the truncation condition (\ref
{eq: grade}).
In order to complete the proof of Theorem 3.1, it remains to show that
each of the eigenspaces $(V_{L, h})_r$ has finite dimension.
But this follows from what we have already done. Indeed
the first equation of (\ref {eq: realandim}) can be written in the form
\begin{equation}\label{eq: newvareq}
\frac{1}{2}(\alpha - a, \alpha - a) = x - n + \frac{1}{2}(a, a).
\end{equation}
Because $( \ , \ )$ is positive-definite and $L$ is discrete,
for fixed $x, a$ there
are only finitely many choices for $\alpha \in L$ and $0 \leq n \in \mathbf{Z}$
which satisfy (\ref {eq: newvareq}). Finally, if
$h \in H_k$ for an additive subgroup $k \subseteq \mathbf{C}$,
it follows from (\ref {eq: eigenstates}) that $V_{L, h}$
is $k$-graded. This completes the proof of the Theorem. $\Box$
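\vspace{.15 in}
As a concrete illustration of the Theorem (an example we supply here for definiteness), let $L = \mathbf{Z}\alpha$ with $(\alpha, \alpha) = 2$, so that $l = 1$ and $L^o = \mathbf{Z}\frac{1}{2}\alpha$. Taking $h = \frac{1}{2}\alpha \in L^o$ gives $(h, h) = \frac{1}{2}$, so $V_{L, h}$ is a $\mathbf{Z}$-graded vertex operator algebra of central charge
\begin{eqnarray*}
c_h = 1 - 12 \cdot \frac{1}{2} = -5.
\end{eqnarray*}
By (\ref {eq: eigenstates}), the state $u \otimes e^{m\alpha}$ with $u \in M(1)_n$ has weight $n + m^2 - m \geq 0$, with equality precisely for $u = \mathbf{1}$ and $m \in \{0, 1\}$. Thus $\dim (V_{L, h})_0 = 2$, so this example exhibits pathology (iii) of the Introduction but not pathology (i).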
\vspace{.15 in}
We refer to the $\mathbf{C}$-graded vertex operator algebras
$V_{L, h}$ as \emph{shifted} vertex operator algebras.
\vspace{.15 in}
There is a more precise approach to the grading on
the vertex operator algebras $V_{L, h}$, for which we consider
the corresponding \emph{partition functions}.
For a $\mathbf{C}$-graded vertex operator algebra (\ref {eq: linspac}) of
central charge $c$, we define the partition
function to be the usual formal $q$-expansion
\begin{equation}\label{eq: partfunc}
Z_V(q) = \mbox{Tr}_V q^{L(0) - c/24}.
\end{equation}
For example, it is well-known that we have
\begin{equation}\label{eq: lattpartfunc}
Z_{V_L}(q) = \frac{\theta_L(q)}{\eta(q)^l},
\end{equation}
where $\eta(q)$ is Dedekind's \emph{eta function}
\begin{eqnarray*}
\eta(q) = q^{1/24}\prod_{n=1}^\infty(1 - q^n),
\end{eqnarray*}
and $\theta_L(q)$ is the \emph{theta function} of $L$.
For any coset $C = L - h \subseteq H$ of $L$ in $H$ we define Fock spaces
\begin{eqnarray*}
\mathbf{C}[L-h] = \oplus_{\alpha \in L} \mathbf{C}e^{\alpha - h},
\end{eqnarray*}
\begin{eqnarray*}
V_{L-h} = M(1) \otimes \mathbf{C}[L-h],
\end{eqnarray*}
and the formal sum
\begin{eqnarray*}
\theta_C(q) &=& \sum_{f \in C}q^{(f, f)/2} = \sum_{\alpha \in L}
q^{(\alpha-h, \alpha-h)/2} \\
&=& q^{(h, h)/2}\sum_{\alpha \in L}q^{(\alpha, \alpha)/2 - (h, \alpha)}.
\end{eqnarray*}
$\theta_L(q)$ is just the case when $C = L$. Of particular interest
are the cosets $L - \lambda$ which are contained in the \emph{dual
lattice} $L^o$.
We know from \cite{D1} that for $\lambda \in L^o$, the Fock space
$V_{L- \lambda}$ naturally carries the structure of a simple $V_L$-module.
Moreover as $L - \lambda$ ranges over the elements of $L^o/L$,
we obtain in this way each simple $V_L$-module exactly once.
One defines the partition function
of the Fock spaces $V_{L-h}$ by the obvious extension of
(\ref {eq: partfunc}),
and we have (generalizing (\ref {eq: lattpartfunc}))
\begin{equation}\label{eq: shiftpartfunc}
Z_{V_{L-h}}(q) = \frac{\theta_{L-h}(q)}{\eta(q)^l}.
\end{equation}
Let us now consider the partition function for the $\mathbf{C}$-graded
vertex operator algebra $V_{L, h}$. We see from Theorem 3.1 and its proof that
\begin{eqnarray*}
Z_{V_{L,h}}(q) &=& \mbox{Tr}_{V_{L,h}} q^{L_h(0) - c_h/24}\\
&=& \mbox{Tr}_{V_{L}} q^{L(0) - h(0) - l/24 + (h, h)/2}\\
&=& Z_M(q) \mbox{Tr}_{\mathbf{C}[L]}q^{L(0) - h(0) + (h, h)/2}\\
&=& Z_M(q) \sum_{\alpha \in L} q^{(\alpha-h, \alpha-h)/2}\\
&=&\frac{\theta_{L-h}(q)}{\eta(q)^l}.
\end{eqnarray*}
Using (\ref {eq: shiftpartfunc}), we have proved
\vspace{.15 in}
{\sc Proposition 3.2:} The partition function of the shifted lattice vertex
operator algebra satisfies
\begin{equation}\label{shiftpartfuncidentity}
Z_{V_{L,h}}(q) = Z_{V_{L-h}}(q).
\end{equation} $\Box$
\vspace{.15 in}
This result has several interesting corollaries. Suppose
first that $h \in L^o$.
Then $V_{L,h}$ is a ($\mathbf{Z}$-graded)
vertex operator algebra by Theorem 3.1. On the other hand, by Proposition
3.2 and the discussion that preceded it, the partition function of $V_{L, h}$
is identical to that of the simple $V_L$-module $V_{L-h}$. It is now
easy to establish
\vspace{.15 in}
{\sc Theorem 3.3:} Let $L$ be a (nonzero) positive-definite even
lattice of rank $l$,
and let $N$ be any simple $V_L$-module. Then there
are infinitely many pairwise nonisomorphic ($\mathbf{Z}$-graded)
vertex operator algebras,
\emph{all} of which have the \emph{same} partition function as $N$.
\vspace{.15 in}
{\sc Proof:} Let $N = V_{L-\lambda}$ for some $\lambda \in L^o$.
We have only to consider the vertex operator algebras $V_{L,
h}$ where $h$ ranges over an infinite sequence of elements in the coset
$L - \lambda$
such that $(h, h)$ is strictly increasing. By (\ref {eq: shiftedcc}),
no two of these vertex operator algebras
are isomorphic, since the corresponding central charges are strictly
decreasing. On the
other hand, we have just explained that each of them has
partition function equal to $Z_N(q)$. $\Box$
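The counting in this proof can be illustrated concretely (our own sketch; we assume the lattice-case central charge formula $c_h = l - 12(h, h)$ from (\ref {eq: shiftedcc}), and $N = 3$ is an arbitrary choice):

```python
from fractions import Fraction

# Illustration of the proof of Theorem 3.3 for the rank-one lattice
# L = Z.alpha with (alpha, alpha) = 2N and the coset L - lambda,
# lambda = alpha/(2N).  Sketch only; assumes c_h = l - 12(h, h).

N = 3
lam = Fraction(1, 2 * N)                 # lambda, in units of alpha

def norm(t):
    """(h, h) for h = t.alpha with (alpha, alpha) = 2N."""
    return 2 * N * t * t

# h = m.alpha - lambda runs through the coset L - lambda with (h, h)
# strictly increasing, so the central charges strictly decrease and the
# shifted algebras V_{L, h} are pairwise nonisomorphic.
ts = [Fraction(m) - lam for m in range(1, 6)]
norms = [norm(t) for t in ts]
charges = [1 - 12 * n for n in norms]    # c_h = l - 12(h, h) with l = 1

assert all(a < b for a, b in zip(norms, norms[1:]))
assert all(a > b for a, b in zip(charges, charges[1:]))
print([str(c) for c in charges])
```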
\section{Modules over $V_{L, h}$}
\setcounter{equation}{0}
In this Section we continue the discussion of the modules and
partition functions
for the vertex operator algebras $V_{L, h}$ $(h \in L^o)$ begun in $\S3$.
Recall from $\S3$
that for a vertex operator algebra
$V$ and $V$-module $N$ the partition function is
\begin{eqnarray*}
Z_{N,V}(q) = Z_N(q) = \mbox{Tr}_N q^{L(0) - c/24}.
\end{eqnarray*}
For what follows, we refer the reader to \cite{DLM2} for results and
terminology
concerning $V$-modules.
\vspace{.15 in}
{\sc Theorem 4.1:} Let $h \in L^o$. There are equivalences between
the categories
of weak, admissible and ordinary $V_L$-modules, and the categories
of weak, admissible and ordinary $V_{L, h}$-modules, respectively. In
particular, $V_{L, h}$ is a \emph{regular}
vertex operator algebra.
\vspace{.15 in}
{\sc Proof:} Suppose that $(N, Y_N)$ is any weak, admissible, or
ordinary module for $V_L$. Thus
for $v \in V_L, Y_N(v, z)$ is the field on $N$ determined by $v$. We turn
$N$ into a (weak) $V_{L, h}$-module in the obvious way. Namely, take the
\emph{same} fields $Y_N(v, z)$, now with $v$ considered as a state in
$V_{L, h}$.
It is clear from the definition of module (over $V_L$) that in this way,
$N$ indeed becomes a weak $V_{L, h}$-module. In the same way, because $V_L$ and
$V_{L, h}$ share the same set of fields, a weak $V_{L, h}$-module
is also a weak $V_L$-module. In this way, the equivalence of the categories
of weak modules
for $V_L$ and $V_{L, h}$ follows.
\vspace{.15 in}
Now take $N$ to be a simple $V_L$-module, so that $N = V_{L-\lambda}$
for some $\lambda \in L^o$. Then certainly $N$ is an irreducible module
for the operators spanned by Fourier modes of states in $V_{L, h}$. Moreover,
the argument used to establish Proposition 3.2 shows that
\begin{eqnarray*}
Z_{N, V_{L, h}}(q) = Z_{V_{L-\lambda-h}}(q) =
\frac{\theta_{L-\lambda-h}(q)}{\eta(q)^l}.
\end{eqnarray*}
In particular, $N$ is indeed a module over $V_{L, h}$. Finally, let
$N$ be a simple
module for $V_{L, h}$. Then $N$ is a simple weak module
for $V_L$, and by Theorem 3.16 of \cite{DLM2}, $N$ is therefore a
simple (ordinary)
module for $V_L$. All parts of the Theorem now follow easily from what we have
established, together with the result \cite{DLM2} that $V_L$ is
a regular vertex operator algebra. $\Box$
\vspace{.15 in}
From the proof of Theorem 4.1, we see that the object map of the
categorical equivalence
\begin{equation}\label{eq: functor}
F : V_L\mbox{-Mod} \longrightarrow V_{L, h}\mbox{-Mod}
\end{equation}
is just an identification $F(N) = N$.
\vspace{.15 in}
Suppose that
$L = L^o$ is self-dual. This is equivalent \cite{D1} to the assertion
that $V_L$ has
a \emph{unique} simple module, namely $V_L$ itself. Indeed $V_L$
is \emph{holomorphic}. That is, it is regular and has a unique simple module.
By Theorem 4.1, we can conclude that $V_{L, h}$ is also a holomorphic
vertex operator algebra whenever $h \in L$.
\vspace{.15 in}
{\sc Theorem 4.2:} Let $c$ be any integer divisible by $8$. Then there
are infinitely
many pairwise nonisomorphic, holomorphic, $C_2$-cofinite vertex
operator algebras of central charge $c$.
\vspace{.15 in}
\noindent
{\sc Remark:} By results from \cite{Z} and \cite{DLM3}, we know that if $V$ is
holomorphic
and $C_2$-cofinite, then we necessarily have $8|c$.
\vspace{.15 in}
{\sc Proof:} Let $d$ be a positive integer satisfying
\begin{eqnarray*}
24d + c > 0.
\end{eqnarray*}
For each integer $r \geq d$, let $(L_r, h_r)$ be a pair consisting of
a self-dual, even lattice $L_r$ of rank $24r + c$ and an element $h_r \in L_r$
which satisfies $(h_r, h_r) = 2r$. Such a sequence of pairs is easily
constructed:
for example, we can take $L_r$ to be the orthogonal direct sum of
$3r+\frac{c}{8}$
copies of the $E_8$ root lattice. From the discussion preceding the
statement of
Theorem 4.2, it follows that each of the shifted vertex operator algebras
$V_{L_r, h_r}$ is holomorphic, and from (\ref{eq: shiftedcc}) we see that
the central charge of each $V_{L_r, h_r}$ is $c$.
\vspace{.15 in}
Next we assert that if $r \neq s$ then $V_{L_r, h_r}$ and $V_{L_s, h_s}$
are \emph{not} isomorphic as vertex operator algebras. Indeed, by
(\ref {shiftpartfuncidentity}),
the partition function of $V_{L_r, h_r}$ coincides with that of
$V_{L_r}$ itself,
and hence has the shape
\begin{eqnarray*}
Z_{V_{L_r, h_r}}(q) = q^{-r-c/24}(1 + \cdots).
\end{eqnarray*}
Our assertion follows from this. Finally, the $C_2$-cofiniteness condition
for a vertex operator algebra $V$ refers to the finiteness of the
codimension of the
subspace $C_2(V)$ of $V$ spanned by states $u(-2)v$ for $u, v \in V$.
In view of the fact
that $V_L$ and $V_{L, h}$ share the same states and fields,
it is evident that we have $C_2(V_L) = C_2(V_{L, h})$. Since one knows
(\cite{DLM3}, Proposition 12.5) that $V_L$ is $C_2$-cofinite, so too are
the vertex operator algebras $V_{L, h}$. This completes the proof of
the Theorem.
$\Box$
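The leading behaviour used in this proof can be checked numerically in the smallest case (our own sketch; $L$ is the $E_8$ root lattice, $h$ a simple root, so $(h, h) = 2$ and $r = 1$; searching a small coefficient box suffices because the positive definite quadratic term dominates for large $\beta$):

```python
import itertools

# The lattice contribution to the weight of 1 (x) e^beta in the shifted
# algebra is (beta, beta)/2 - (h, beta); its minimum over L should be
# -(h, h)/2 = -r = -1, attained at beta = h, which gives the leading
# term q^(-r - c/24) of the partition function.  Sketch only.

# Gram matrix of E8 from its Dynkin diagram (Bourbaki numbering).
edges = [(1, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 8), (2, 4)]
G = [[2 if i == j else 0 for j in range(8)] for i in range(8)]
for a, b in edges:
    G[a - 1][b - 1] = G[b - 1][a - 1] = -1

def ip(x, y):
    """Inner product of coefficient vectors with respect to G."""
    return sum(x[i] * G[i][j] * y[j] for i in range(8) for j in range(8))

h = (1, 0, 0, 0, 0, 0, 0, 0)             # h = alpha_1, so (h, h) = 2
assert ip(h, h) == 2

min_wt = min(ip(b, b) // 2 - ip(h, b)
             for b in itertools.product(range(-1, 2), repeat=8))
print(min_wt)                             # -1
```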
\vspace{.15 in}
Next we consider the question of \emph{duality} of modules. Recall
\cite{B}, \cite{FHL}
that if $V$ is a vertex operator algebra and $(N, Y_N)$ a $V$-module,
then the restricted
dual $N'$ of $N$ becomes a $V$-module if we inflict upon it the
fields $Y_N'(v, z)$ \emph{adjoint}
to $Y_N(v, z)$. Precisely, the adjoint is defined for $v \in V$ using
\begin{eqnarray*}
Y_N(v, z)^{\dagger} = Y_N(e^{zL(1)}(-z^{-2})^{L(0)}v, z^{-1}).
\end{eqnarray*}
Then $(N', Y_N')$ is a $V$-module, called the \emph{dual} module of $N$.
For $f \in N', v \in V, n \in N$ and $\langle \ , \rangle: N' \otimes N
\rightarrow \mathbf{C}$ the canonical pairing, we have
\begin{eqnarray*}
\langle Y_N'(v, z)f, n\rangle = \langle f, Y_N(v, z)^{\dagger}n\rangle.
\end{eqnarray*}
We say that $N$ is \emph{self-dual} in case there is an
isomorphism of $V$-modules $N \cong N'$. This applies, in particular,
to the adjoint module $V$ itself. For example,
a holomorphic vertex operator algebra is necessarily self-dual.
\vspace{.15 in}
{\sc Proposition 4.3:} The equivalence of categories (\ref{eq: functor})
preserves dualities. That is, we have $F(N') = F(N)'$ for any $V_L$-module
$N$.
\vspace{.15 in}
{\sc Proof:} We have already seen that any $V_L$-module is \emph{ipso facto}
a $V_{L, h}$-module with the same set of fields. It follows from what we have
said that the adjoint operators are also the same, and the
Proposition follows immediately.
$\Box$
\vspace{.15 in}
Now we know that for a lattice vertex operator algebra $V_L$,
the dual of the simple $V_L$-module $V_{L - \lambda}$ ($\lambda \in L^o$)
is $V_{L + \lambda}$. Hence, the same is true if we regard this
pair as modules over a shifted vertex operator algebra $V_{L, h}$, $h \in L^o$.
Then by Propositions 3.2 and 4.3, the partition
functions for $V_{L, h}$ and its dual satisfy
\begin{eqnarray*}
Z_{V_{L, h}}(q) = Z_{V_{L-h}}(q), \\
Z_{V_{L, h}'}(q) = Z_{V_{L+h}}(q).
\end{eqnarray*}
From this and Theorem 4.1, the following is immediate:
\vspace{.15 in}
{\sc Theorem 4.4:} Let $L$ be an even lattice and $h \in L^o$. Then
the shifted vertex operator algebra $V_{L, h}$ is self-dual if, and only if,
$2h \in L$. $\Box$
\section{Types of simple vertex operator algebras}
\setcounter{equation}{0}
In this Section we consider various types of simple vertex operator
algebras. The
attributes of $V$ that concern us here are related to the nature of the maps
in the following commutative diagram:
\begin{eqnarray*}
\begin{array}{ccccc}
&&V_{-1}&\stackrel{L(1)}{\longleftarrow}&V_0 \\
&&\downarrow&&\downarrow \\
V_{-1}& \stackrel{L(1)}{\longleftarrow} &V_0
&\stackrel{L(1)}{\longleftarrow}& V_1 \\
\downarrow& & \downarrow& & \\
V_0&\stackrel{L(1)}{\longleftarrow}&V_1&&
\end{array}
\end{eqnarray*}
where all vertical maps are $L(-1)$.
\vspace{.15 in}
{\sc Lemma 5.1}\cite{L}: Let $V$ be a simple vertex operator algebra. Then
\begin{equation}\label{eq: codim}
\mbox{dim} \ V_0/L(1)V_1 \leq 1.
\end{equation}
{\sc Proof:} Li proved \cite{L} that for
a vertex operator algebra $V$, the space Hom$_V(V, V')$
of $V$-module maps of the adjoint module $V$ into the dual
module $V'$ has dimension equal to that of $V_0/L(1)V_1$. But
if $V$ is also simple then by Schur's Lemma, the Hom space
in question has dimension at most $1$. The Lemma is proved. $\Box$
\vspace{.15 in}
{\sc Lemma 5.2}: Suppose that $V$ is a vertex operator algebra. Then
\begin{eqnarray*}
\mbox{dim} \ V_0 > \mbox{dim} \ V_{-1}.
\end{eqnarray*}
In particular, if $V_0 = \mathbf{C}\mathbf{1}$ then $V$ has no
nonzero negative weight spaces.
\vspace{.15 in}
{\sc Proof:} We know \cite{DLinM} that $L(-1): V_n \rightarrow V_{n+1}$
is an injection
as long as $n \neq 0$. It therefore suffices to
show that $\mathbf{1} \notin L(-1)V_{-1}$. Suppose
that $v \in
V_{-1}$ satisfies $L(-1)v = \mathbf{1}$. Bearing in mind that $L(1)$
and $L(-1)$
generate an algebra $S \cong sl_2$ of operators on $V$,
we see that $v$ generates an
$S$-submodule for which $\mathbf{1}$
is the \emph{unique} highest weight vector
(up to scalars). The
structure theory of $sl_2$-modules shows that this is not possible.
$\Box$
\vspace{.15 in}
\noindent
{\sc Definition:} Suppose that $V$ is a simple vertex operator
algebra. We say that $V$ has \emph{type
$I$} if $V_0 = L(1)V_1$ and \emph{type II} if $L(1)V_1 \neq V_0$;
\emph{type A}
if dim$V_0 > 1$ and \emph{type B} if dim$V_0 = 1$; \emph{type +} if
$V_{-1} = \{0\}$
and \emph{type -} if $V_{-1} \neq \{0\}$. In other words, $V$ has
type $II$ or type $I$
according to whether it is self-dual or not; it has type $B$ or type
$A$ according to whether
the vacuum is nondegenerate or not; and type $-$ or $+$ according to whether
it has nonzero negative weight spaces or not.
\vspace{.15 in}
One can combine these qualities to obtain eight types
of simple vertex operator algebras in all.
Not all of them exist, however. It is a consequence of Lemma 5.2 that
there can be \emph{no} simple vertex operator algebras of types $IB-$
or $IIB-$.
We present a table of the possibilities. Lemma 5.1 shows that the
only two possibilities for
the first column are $0$ and $1$.
\vspace{.15 in}
\noindent
\( \begin{array}{cccccc}
\hspace{2.5 cm} \underline{ \mbox{codim}L(1)V_1}&
\underline{\mbox{dim}V_{-1}}& \underline{\mbox{dim}V_0}&
\underline{\mbox{Type}}&
\underline{\mbox{Exist?}}
\\
\hspace{3 cm}0&>0&>1&IA-& \mbox{Yes} \\
\hspace{3 cm}0& >0&1&IB-&\mbox{No}\\
\hspace{3 cm}0&0&>1&IA+& \mbox{Yes} \\
\hspace{3 cm}0&0&1&IB+& \mbox{Yes}\\
\hspace{3 cm}1&>0&>1&IIA-& \mbox{Yes}\\
\hspace{3 cm}1&>0&1&IIB-& \mbox{No}\\
\hspace{3 cm}1&0&>1&IIA+& \mbox{Yes}\\
\hspace{3 cm}1&0&1&IIB+&\mbox{Yes}
\end{array} \)
\vspace{.15 in}
\noindent
The ``existence'' column indicates whether a given type exists or not.
We have already explained
nonexistence for types $IB-$ and $IIB-$.
We next
give some further explicit constructions of shifted lattice
vertex operator algebras which will establish the existence of examples of
each of the remaining six types. From what we have already proved in
previous Sections,
all of our examples are simple, $C_2$-cofinite, regular vertex
operator algebras.
\vspace{.15 in}
\noindent
{\sc Example 1:} Let $N \geq 2$ and $0\leq k \leq 2N$ be integers.
Let $L = \mathbf{Z}\alpha$
be the $1$-dimensional lattice spanned by $\alpha$ where $(\alpha,
\alpha) = 2N$, and
take $h = k\alpha/2N \in L^o$. Thus $V_{L, h}$ is a vertex operator
algebra. We will discuss
the properties of $V_{L, h}$ as $k$ varies within the indicated range.
\vspace{.15 in}
\noindent
(i) $k = 0$. Type $IIB+$. Here $V_{L, h}$ is simply the lattice
theory $V_L$, which is
self-dual, has no nonzero negative weight spaces, and has nondegenerate vacuum.
\vspace{.15 in}
\noindent
(ii) $0<k<N$. Type $IB+$. In this case $2h \notin L$, so $V_{L, h}$
is not self-dual
by Theorem 4.4. If $\beta = m\alpha \in L$ and $u \in H$, then when
considered as
a state in $V_{L, h}$, we have
\begin{eqnarray*}
wt(u \otimes e^{\beta}) = wt(u) + m(mN-k).
\end{eqnarray*}
This follows from (\ref {eq: eigenstates}). Since $0<k<N$, we have
$m(mN-k) \leq 0$ if, and only if, $m=0$. It follows that the vacuum space of
$V_{L, h}$ is nondegenerate, so that $V_{L, h}$ has the type stated.
\vspace{.15 in}
There are
three other essentially different cases, each of which can be analyzed
in the same way. We merely state the result in each case:
\vspace{.15 in}
\noindent
(iii) $k=N$. Type $IIA+$.
\vspace{.15 in}
\noindent
(iv) $N<k<2N$. Type $IA-$.
\vspace{.15 in}
\noindent
(v) $k=2N$. Type $IIA-$.
\vspace{.15 in}
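The sign pattern of the lattice contribution $m(mN-k)$ underlying this case analysis can be tabulated directly (our own sketch; $N = 4$ is an arbitrary illustrative choice, and only the lattice part of the weight is computed here):

```python
# Sanity check of the case analysis in Example 1: for L = Z.alpha with
# (alpha, alpha) = 2N and h = k.alpha/2N, the lattice contribution to
# wt(u (x) e^(m.alpha)) is m(mN - k).  Sketch only.

def lattice_weights(N, k, ms=range(-3, 4)):
    """m -> m(mN - k), the lattice part of the shifted weight."""
    return {m: m * (m * N - k) for m in ms}

N = 4

# (ii) 0 < k < N: only m = 0 gives a nonpositive contribution, so the
# vacuum is nondegenerate and no negative weights arise (type IB+).
w = lattice_weights(N, 1)
assert all(wt > 0 for m, wt in w.items() if m != 0)

# (iii) k = N: m = 1 also contributes 0 (degenerate vacuum) and no
# contribution is negative (type IIA+).
w = lattice_weights(N, N)
assert w[1] == 0 and all(wt >= 0 for wt in w.values())

# (iv) N < k < 2N: m = 1 contributes N - k < 0, a negative weight (type IA-).
assert lattice_weights(N, N + 1)[1] < 0

# (v) k = 2N: m = 1 contributes -N < 0 and m = 2 contributes 0 (type IIA-).
w = lattice_weights(N, 2 * N)
assert w[1] == -N and w[2] == 0
print("case analysis consistent")
```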
In this way we have constructed five of the six possible types of
vertex operator algebras. Only type $IA+$ remains
to be accounted for. We obtain this type in the next example.
\vspace{.15 in}
\noindent
{\sc Example 2:} We take $L$ to be the root lattice of type $A_l$
with $l \geq 2$. Let $h \in L^o$ be such that $L + h$ generates $L^o/L$.
As this latter quotient has order $l+1 \geq 3$ then $V_{L, h}$
is not self-dual by Theorem 4.4. A calculation only slightly more
complicated that the one above shows that $V_{L, h}$ has
no nonzero negative weight spaces, that is it has type $+$.
Moreover the zero weight space is spanned by states of the form
$\mathbf{1} \otimes e^{\beta}$ where $\beta \in L$ is either
$0$ or a connected sum
\begin{eqnarray*}
\beta = \alpha_1 + \alpha_2 + \cdots + \alpha_t, \quad 1\leq t \leq l,
\end{eqnarray*}
where $\alpha_1, \ldots, \alpha_l$ is a fundamental system of roots.
So $V_{L, h}$ has the asserted type $IA+$; indeed we have shown that the
zero weight space has dimension $l+1$.
It follows that for all integers $n\geq 3$, there are (simple) vertex operator
algebras $V$ of type $IA+$ for which the zero weight space $V_0$
has dimension $n$.
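The count of zero weight states in this example can be verified numerically (our own sketch for $l = 3$, taking $h = \lambda_1$, the fundamental weight dual to $\alpha_1$, which generates $L^o/L \cong \mathbf{Z}_{l+1}$; a small coefficient box suffices since the quadratic term dominates for large $\beta$):

```python
import itertools

# Zero-weight states in Example 2 for the A_l root lattice.  The weight
# of 1 (x) e^beta is (beta, beta)/2 - (h, beta), and for
# beta = sum_j c_j alpha_j with h = lambda_1 we have (h, beta) = c_1.

l = 3
# Gram matrix of A_l = its Cartan matrix.
G = [[2 if i == j else (-1 if abs(i - j) == 1 else 0) for j in range(l)]
     for i in range(l)]

def wt(c):
    """Shifted weight of 1 (x) e^beta for beta with coefficients c."""
    bb = sum(c[i] * G[i][j] * c[j] for i in range(l) for j in range(l))
    return bb // 2 - c[0]               # (beta, beta)/2 - (h, beta)

zero = [c for c in itertools.product(range(-3, 4), repeat=l) if wt(c) == 0]
print(len(zero))                         # 4, i.e. l + 1
```

The four solutions are $\beta = 0$ and the connected sums $\alpha_1$, $\alpha_1 + \alpha_2$, $\alpha_1 + \alpha_2 + \alpha_3$, matching the dimension $l+1$ found above.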
\section{An Abstract Approach}
\setcounter{equation}{0}
We have made use of Proposition 3.2 mainly in the case that $h \in L^o$.
In general it says that the partition function of the shifted vertex operator
algebra $V_{L, h}$ coincides with that of a twisted $V_L$-module,
namely the one determined by the coset $L - h$ in $H/L$. (See \cite{DM2}
for twisted modules over lattice theories in the case of automorphisms
of finite order.) In this Section we show how to
extend this to a general setting. The main idea is to utilize a
certain operator $\Delta( \ , z)$ introduced by Li \cite{L}, and which we
have used elsewhere \cite{DLM4}.
\vspace{.15 in}
We work with the following set-up: $V$ is a vertex operator algebra
of central charge $c$ and
$h \in V_1$ satisfies the following conditions:
\begin{eqnarray*}
(i)&&h \ \mbox{is a primary state, i.e.} \ L(n)h = 0, n \geq 1; \\
(ii)&&h(0) \ \mbox{is semisimple with real eigenvalues}; \\
(iii) &&h(n)h = 0 \ \mbox{for} \ 0 \leq n \neq 1, \mbox{and} \
h(1)h \in \mathbf{C1}; \\
(iv)&&[h(m), h(n)] = m \delta_{m, -n}h(1)h \mbox{Id}.
\end{eqnarray*}
The same argument
given during the proof of Theorem 3.1 shows that
$\omega_h = \omega - h(-2) \mathbf{1}$ is
a Virasoro element in $V$ of central charge
\begin{eqnarray*}
c_h = c - 12h(1)h,
\end{eqnarray*}
where we identify $h(1)h$ with the scalar multiple of $\mathbf{1}$
to which it is equal. We continue to use
the notation (\ref {eq: shiftedomegaops}), whence (\ref {eq:
shiftedLs}) still holds.
\vspace{.15 in}
\noindent
Now set
\begin{eqnarray*}
\Delta(h, z) = z^{h(0)}\mbox{exp}\{ - \sum_{k \geq 1} \frac{h(k)}{k}
(-z)^{-k}\}.
\end{eqnarray*}
\vspace{.15 in}
{\sc Proposition 6.1:} Suppose that $(M, Y_M)$ is a
$V$-module. Then
$(M, Y_{M, h})$ is an $e^{- 2 \pi i h(0)}$-twisted
$V$-module, where we set
\begin{eqnarray*}
Y_{M, h}(v, z) = Y_M(\Delta(h, z)v, z) \ \mbox{for} \ v \in V. \ \ \ \Box
\end{eqnarray*}
\vspace{.15 in}
If the eigenvalues of $h(0)$ are \emph{rational},
this result has been
proved in \cite{L}. But the same proof works in our more general
setting if we use the definition of twisted module given in \cite{DLinM}
for an automorphism whose eigenvalues lie on the unit circle.
\vspace{.15 in}
We will prove an analog of Proposition 3.2, namely
\vspace{.15 in}
\noindent
{\sc Proposition 6.2:} We have
\begin{eqnarray*}
Z_{(M, Y_{M, -h})}(q) = \mbox{Tr}_Mq^{L_h(0) - c_h/24}.
\end{eqnarray*}
\vspace{.15 in}
{\sc Proof:} The reader may verify that
\begin{eqnarray*}
\Delta( - h, z)\omega = \omega - hz^{-1} + \frac{1}{2}h(1)hz^{-2},
\end{eqnarray*}
so that the corresponding zero mode operator is
\begin{eqnarray*}
L_{\Delta, - h}(0) = L(0) - h(0) + \frac{1}{2}h(1)h.
\end{eqnarray*}
It follows that
\begin{eqnarray*}
Z_{(M, Y_{M, -h})}(q) &=& \mbox{Tr}_Mq^{L_{\Delta, -h}(0) - c/24} \\
&=& \mbox{Tr}_M q^{ L(0) -
h(0) + \frac{1}{2}h(1)h - c/24} \\
&=& \mbox{Tr}_M q^{L_h(0) - c_h/24},
\end{eqnarray*}
as required. $\Box$
\vspace{.15 in}
Now assume that $h(0)$ has \emph{integral} eigenvalues, so that $e^{2
\pi i h(0)}$
is the \emph{trivial} automorphism of $V$. Then $(M, Y_{M, h})$ is
a $V$-module by Proposition 6.1, and in particular it is $\mathbf{Z}$-graded.
Taking $V = M$, Proposition 6.2 together with the argument used in the
proof of Theorem 3.1 show that $(V, Y, \omega_h, \mathbf{1})$
is a vertex operator algebra of central charge $c_h$. We denote it by
$V_h$. Evidently, $V_h$ is
a type of shifted vertex operator algebra that includes
the shifted lattice theories considered earlier. Furthermore,
we see as in the proof of Theorem 4.1 that $V$ and $V_h$
have equivalent module categories (of various types). For example,
if $V$ is rational then so too is $V_h$, and the set of partition
functions of the simple modules for the two
vertex operator algebras agree. In particular, in this case the effective
central charge is an invariant, i.e. it is the same for each vertex
operator algebra.
We end the paper with a natural question. Having constructed vertex
operator algebras exhibiting various pathologies by modifying the
conformal vector of a `nice' vertex operator algebra, one can ask
whether all such pathologies arise in this way.
\vspace{.15 in}
\noindent
{\sc Question:} Suppose $V=(V,Y,{\bf 1},\omega)$ is a simple
$\mathbf{Z}$-graded rational vertex operator algebra. Is it true
that there exists $h\in V_1$ satisfying conditions (i)-(iv)
above,
and such that $(V,Y,{\bf 1}, \omega_h)$ is of CFT type?
\section{Introduction}
The Wisconsin H$\alpha$ Mapper (WHAM) has provided the first large-scale
survey of the distribution and kinematics of ionized interstellar
hydrogen, covering the sky north of declination $-30\arcdeg$ with an
angular resolution of about 1$\arcdeg$ and a velocity resolution of 12 km
s$^{-1}$ within approximately $\pm 100$ km s$^{-1}$ of the LSR (Haffner et
al. 2003). This survey shows interstellar H$\alpha$ emission filling the
sky, with loops, filaments, and other large emission enhancements
superposed on a more diffuse background. However, in addition to these
large-scale features, the survey also reveals numerous small H$\alpha$
emission regions that have angular sizes comparable to or less than WHAM's
1$\arcdeg$ diameter beam. In narrow ($\approx 20$ km
s$^{-1}$) velocity interval maps, these WHAM point sources stand out
as intensity enhancements in a single beam (or two adjacent beams) within
a region of fainter diffuse emission.
Below we briefly describe our procedure for identifying and characterizing
these enhancements, and we list the resulting flux, radial velocity,
and line width of the H$\alpha$ emission, along with any previously
cataloged nebulosity or hot star that may be associated with the region.
The nature of most of these emission regions is unknown.
\section{Identification of ``WHAM Point Sources''}
The enhancements were identified through a systematic search ``by eye''
through the entire data cube of the WHAM survey. This consisted of
examining regions of the sky approximately 100 to 400 square degrees in
size within narrow (20 to 30 km s$^{-1}$) radial velocity intervals
centered between $-90$ km s$^{-1}$ and $+90$ km s$^{-1}$ (LSR). To
minimize confusion with structure within bright, larger scale emission
features near the Galactic midplane, we confined the search to Galactic
latitudes $|$b$| > 10\arcdeg$. We also avoided the radial velocity
interval $-15$ to $+15$ km s$^{-1}$ in directions toward the
Orion-Eridanus bubble, where relatively bright high latitude H$\alpha$
emission features make the identification of ``point sources''
unreliable. A less subjective identification program was also carried
out, which calculated for each of the approximately 37,000 survey spectra
the difference between the spectrum in a given direction and the average
spectrum of that direction's nearest neighbors. Directions with an
enhancement were then selected based upon whether the difference spectrum
exhibited an emission feature that was significantly greater than the
scatter in the intensities of the nearest neighbors within the velocity
range of the feature. This second method yielded a factor of ten more
``point source'' identifications. However, a cursory examination
revealed that the vast majority of these were false positives associated
with small angular scale fluctuations within the diffuse H$\alpha$
background. We concluded that confidence in the identification of a true
enhancement above the background required an examination ``by eye'' of a
relatively large ($\sim 10\arcdeg \times 10\arcdeg$) region of the
surrounding sky, not just the six nearest neighbors. This allowed
us to select only those enhancements that stood out most clearly
against the background and was thus the more conservative approach. The
survey was examined by two of us (VC and RJR) independently, and the good
agreement between the two results suggests that the enhancement
identifications are robust, with H$\alpha$ surface brightnesses measured
down to about 0.3 R (1 R = $10^6/4\pi$ photons cm$^{-2}$ s$^{-1}$
sr$^{-1}$), corresponding to an H$\alpha$ flux of about
$1 \times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$ for sources subtending 1$\arcdeg$. While
this H$\alpha$ flux limit is not particularly low for a planetary nebula
search, the surface brightness limit, corresponding to an emission measure
of about 1 cm$^{-6}$ pc, is well below that of most planetary nebula
searches. The sensitivity of WHAM is further enhanced over low spectral
resolution imaging in cases where the enhancement is Doppler shifted with
respect to the often higher surface brightness emission associated with
the ubiquitous warm ionized component of the interstellar medium (see
below).
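The quoted conversions can be checked with a short back-of-the-envelope calculation (our own sketch; the CGS constants and the warm-gas temperature $T = 8000$ K behind the emission-measure relation EM $\approx 2.75\,T_4^{0.9}\,I_{{\rm H}\alpha}$ are assumptions, not values taken from the survey):

```python
import math

# Convert an H-alpha surface brightness of 0.3 R into an energy flux for
# a source filling a 1-degree-diameter beam, and into an emission
# measure.  Back-of-the-envelope sketch; constants are CGS.

h_planck = 6.626e-27          # erg s
c_light = 2.998e10            # cm/s
lam_Ha = 6563e-8              # cm
E_photon = h_planck * c_light / lam_Ha           # erg per H-alpha photon

I_R = 0.3                                        # surface brightness in R
intensity = I_R * 1e6 / (4 * math.pi)            # photons cm^-2 s^-1 sr^-1

theta = math.radians(0.5)                        # 1-degree-diameter beam
omega = math.pi * theta ** 2                     # solid angle in sr

flux = intensity * omega * E_photon              # erg cm^-2 s^-1
em = 2.75 * (0.8 ** 0.9) * I_R                   # cm^-6 pc at T_4 = 0.8

print(f"flux ~ {flux:.1e} erg cm^-2 s^-1, EM ~ {em:.1f} cm^-6 pc")
```

This gives a flux of order $10^{-11}$ erg cm$^{-2}$ s$^{-1}$ and an emission measure just under 1 cm$^{-6}$ pc, consistent with the limits quoted above.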
The H$\alpha$ flux, radial velocity, and line width associated with each
enhancement was measured by subtracting from the spectrum toward the
enhancement the average spectrum toward the nearest neighbors, and then
fitting the resultant H$\alpha$ emission line with a Gaussian profile.
Examples are presented in Figures 1 through 4, which show for four
relatively faint enhancements the velocity interval beam map of an area
surrounding the enhancement, the spectra in the source direction and its
nearest neighbors, and the difference spectrum with the best-fit Gaussian
and residuals. The intensity enhancements in these examples range from
1.3~R (Fig. 1) down to 0.5~R (Figs. 2 \& 3) and, depending upon the
brightness of the diffuse H$\alpha$ background, produce moderate (Fig. 2)
to small (Fig. 4) increases in the total H$\alpha$ intensity on the sky.
The high spectral resolution of the WHAM survey has made possible the
detection of sources (e.g., Fig. 4) that would be masked by the H$\alpha$
background and its variation in maps of total H$\alpha$ intensity. For
these examples, there are no associations with previously reported nebulae
(i.e., listed either in SIMBAD or in Cahn, Kaler, and Stanghellini 1992).
In one of these examples (Fig. 2), the enhanced emission appears to be
associated with a hot evolved low mass star (DA white dwarf), while for
the other three no cataloged hot star is associated with the ionized
region (see \S3).
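The extraction-and-fit procedure just described can be sketched on synthetic spectra (our own illustration; no WHAM data are used, and the velocity grid, noise level, and line parameters are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the measurement procedure: subtract the average of the six
# nearest-neighbor spectra from the on-source spectrum, then fit the
# residual H-alpha line with a Gaussian.  All numbers are synthetic.

rng = np.random.default_rng(0)
v = np.linspace(-100, 100, 133)               # LSR velocity grid, km/s

def gaussian(v, area, v0, fwhm):
    sigma = fwhm / 2.3548                     # FWHM = 2 sqrt(2 ln 2) sigma
    return area / (sigma * np.sqrt(2 * np.pi)) \
        * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

background = gaussian(v, 40.0, 5.0, 45.0)     # diffuse WIM emission
source = gaussian(v, 8.0, -40.0, 25.0)        # the point-source enhancement

on_source = background + source + rng.normal(0, 0.01, v.size)
neighbors = [background + rng.normal(0, 0.01, v.size) for _ in range(6)]

diff = on_source - np.mean(neighbors, axis=0)  # difference spectrum

popt, _ = curve_fit(gaussian, v, diff, p0=[5.0, -30.0, 20.0])
area, v0, fwhm = popt
print(f"area={area:.2f}, v0={v0:.1f} km/s, FWHM={fwhm:.1f} km/s")
```

The fit recovers the injected flux, radial velocity, and line width even though the enhancement sits on a brighter diffuse background, which is the situation illustrated in Figures 1 through 4.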
Seven of the enhancements occupy two adjacent pixels on the sky (e.g.,
Fig. 4), rather than being confined to a single WHAM beam. With one
exception (WPS 6), we have assumed that these are situations in which the
emission region is located near the edge of a beam, extending into the
second beam. In these cases, we summed the two spectra, and the
coordinates of the enhancement refer to the mean position of the two
beams. For the two-pixel source WPS 6 at $l = 33\fdg8$, b $= -22\fdg1$
and $l = 34\fdg1$, b $= -21\fdg2$, the results in Table 1 are listed
separately for each beam because there is a significant velocity shift
associated with the enhancement between the two directions (Fig. 5),
suggesting two independent sources (see \S4).
\section{Results}
Table 1 lists in order of increasing Galactic longitude the identified
``WHAM point sources'' (WPS) and the results of the Gaussian fits to the
difference spectra. Following the WPS number are the Galactic coordinates
of the center of the WHAM survey beam (for single pixel sources) plus the
H$\alpha$ flux, radial velocity, and line width (FWHM) for each of the
enhancements. The enhancements WPS 6-1 and WPS 6-2, while in adjacent
pixels, are treated as two separate H~II regions (see \S 4), while WPS 65
is a planetary nebula exhibiting two resolved H$\alpha$ velocity
components (Recillas-Cruz \& Pi\c{s}mi\c{s} 1981). The H$\alpha$ flux is
based on a calibration using those enhancements identified as planetary
nebulae (see Table 2) and for which H$\beta$ fluxes and reddening
measurements have been published (Cahn et al 1992). The errors in the
fitted parameters are dominated by the uncertainty in the baseline of the
difference spectrum and/or by the scatter in the data points of the
spectrum (e.g., see Figs. 1 - 4). Errors due to baseline uncertainty were
estimated by fitting each difference spectrum multiple times with
different fixed baselines. Errors due to scatter in the spectral data
points were determined by the standard deviation calculation carried out
by the least-squares Gaussian fitting program. The listed errors for each
parameter represent the largest uncertainties determined by these methods.
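The baseline part of this error budget can be sketched as follows (synthetic data again; the set of trial baseline offsets is an illustrative choice, not the values used for the actual survey spectra):

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the baseline error estimate: refit the same (synthetic)
# difference spectrum with several fixed baseline offsets and take the
# spread of the fitted parameters as the uncertainty.

rng = np.random.default_rng(1)
v = np.linspace(-100, 100, 133)

def gaussian(v, area, v0, fwhm):
    sigma = fwhm / 2.3548
    return area / (sigma * np.sqrt(2 * np.pi)) \
        * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

# Difference spectrum with an uncertain constant baseline of 0.02.
diff = gaussian(v, 5.0, 20.0, 30.0) + 0.02 + rng.normal(0, 0.01, v.size)

fits = []
for baseline in (0.0, 0.02, 0.04):            # plausible baseline levels
    popt, _ = curve_fit(gaussian, v, diff - baseline, p0=[4.0, 15.0, 25.0])
    fits.append(popt)

spread = np.ptp(fits, axis=0)                 # range of each parameter
print("spread in (area, v0, FWHM):", np.round(spread, 2))
```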
Table 2 lists for each enhancement the celestial coordinates (2000.0) of
the center of the beam (or the mean position for two-pixel sources), the
name of any cataloged nebulae near that direction, the name of a candidate
ionizing star (if there is no cataloged nebula), the spectral type of the
star, and the off-set of the nebula or star from the center of the beam.
The resources for the ionizing star and nebula searches were SIMBAD and
the planetary nebula catalog by Cahn et al (1992). When no cataloged
nebula was listed within 40$\arcmin$ of the beam center, a search for an
ionizing star was carried out on SIMBAD to a radius of $60\arcmin$.
\section{Discussion}
Of the 85 H$\alpha$ enhancements identified, more than half (44) are not
associated
with any previously cataloged nebula, and of these, fifteen are associated
with hot evolved low mass stars, including one DO and seven DA white
dwarfs, three SdO, and two SdB stars (Table 2). This is a potential source
of new information about the nature of these evolved stars and their
evolution (Tweedy \& Kwitter 1994). For example, because of WHAM's large
beam, some of these enhancements could be associated with large planetary
nebulae in very late stages of their evolution having surface brightnesses
that are too faint to have been detected in earlier searches. The
identification of the nebulosity with the star is most certain for those
stars within the WHAM beam (i.e., angular offsets $ < 30\arcmin$).
However, because large, highly evolved planetary nebulae can be offset
significantly from their ionizing stars (Tweedy \& Napiwotzki 1994;
Borkowski, Sarazin, \& Soker 1990; Reynolds 1985), we have considered
stars located up to 1$\arcdeg$ from the directions listed in the Tables.
We confirm the earlier detection of ionized gas associated with the DO
white dwarf PG 0108+101 (Reynolds 1987) and provide improved kinematic
information about that H~II region.
Twenty-nine emission regions could not be associated with either a
cataloged nebula or hot star. This could be the result of incompleteness
in the SIMBAD listings, or it could indicate another kind of nebulosity.
These enhancements have a mean line width near 27 km s$^{-1}$,
significantly smaller than that (38 km s$^{-1}$) of the cataloged
planetary nebulae. This is illustrated in Figure 6, which compares
histograms of line widths for three categories of WHAM point sources:
enhancements not associated with any cataloged nebula or evolved hot star,
enhancements near a hot low mass star, and enhancements associated with
cataloged planetary nebulae (the six regions associated with massive O and B stars
are excluded from these histograms). We found no associations with
supernova remnants or Herbig-Haro objects, although WPS 11 and WPS 21 are
within $37\arcmin$ and $48\arcmin$ of the two high Galactic latitude
molecular clouds, MBM 50 and MBM 46, respectively, which could harbor star
formation activity. However, the narrow line widths appear to rule out
such shock excited sources, as well as any association with emission line
stars, which exhibit line widths in excess of 60 -- 100 km s$^{-1}$ (e.g.,
Hartigan et al 1987; Hamann \& Persson 1992a,b). It is tempting to
speculate that these emission regions are associated with the most evolved
planetary nebulae, those whose expansion has been halted by interactions
with the ambient interstellar medium (Tweedy \& Napiwotzki 1994; Reynolds
1985), or those whose gas has thinned to such an extent that it is the
ambient interstellar medium itself that has become the primary H~II region
(Borkowski, Sarazin, \& Soker 1990). Followup, high angular resolution
imaging of these regions could help to discriminate between the
possibilities (Soker, Borkowski, \& Sarazin 1991). Figures 7 and 8 show
corresponding histograms for radial velocity and H$\alpha$ flux. No clear
differences between the three categories of enhancements are apparent in
these distributions. In Figures 6, 7, and 8, the V$_{LSR}$ and FWHM used
for WPS 65 are the flux weighted average radial velocity (i.e., +19 km
s$^{-1}$) and the separation (i.e., 37 km s$^{-1}$) of the two velocity
components, respectively.
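The line-width argument can be made quantitative with a simple estimate (our own sketch; $T \approx 8000$ K is a typical warm-ionized-medium temperature, assumed here rather than measured, and we combine the thermal width in quadrature with the 12 km s$^{-1}$ instrumental resolution):

```python
import math

# Thermal Doppler FWHM for hydrogen at T ~ 8000 K, combined with the
# instrumental resolution, for comparison with the ~27 km/s mean width
# of the uncataloged enhancements.  Sketch only.

k_B = 1.381e-16              # erg/K
m_H = 1.673e-24              # g

def thermal_fwhm(T):
    """Thermal Doppler FWHM of a hydrogen line, in km/s."""
    return math.sqrt(8 * math.log(2) * k_B * T / m_H) / 1e5

w_thermal = thermal_fwhm(8000.0)                 # ~19 km/s
w_total = math.sqrt(w_thermal ** 2 + 12.0 ** 2)  # add instrumental width
print(f"thermal {w_thermal:.1f} km/s, expected total {w_total:.1f} km/s")
```

The resulting $\approx 23$ km s$^{-1}$ is close to the 27 km s$^{-1}$ mean of the uncataloged sources, leaving little room for the large nonthermal broadening expected from shocks or emission line stars.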
Six of the enhancements appear to be small H~II regions associated with
massive late O and early B type stars. The region near the B1~V star
HD~191639 (WPS 6) was detected in two pixels and exhibits a significant
(21 km s$^{-1}$) radial velocity difference between the two directions
(Fig. 5). This suggests either peculiar, small-scale kinematic variations
within the region or the existence of two independent H~II regions closely
spaced on the sky. In this latter case, because the B star has a radial
velocity of $-7 \pm 5$ km s$^{-1}$ (Wilson 1953), the emission region
produced by the B star would be more likely associated with the
enhancement (WPS 6-1) at $-10 \pm 2$ km s$^{-1}$ toward $l = 33\fdg8$, b
$= -22\fdg1$, while the emission (WPS 6-2) at $+11$ km s$^{-1}$ toward $l
= 34\fdg1$, b $= -21\fdg2$ would have no identified source of ionization
(for Figs. 6, 7, \& 8, we have assumed this latter case). Enhancements
WPS 23, 60, 68, and 72 have late B stars (B6~V, B9, B9, and B8/B9II,
respectively) located $34\arcmin$ to $43\arcmin$ away. Because the Lyman
continuum fluxes from such late type B stars are predicted to be orders of
magnitude weaker than the fluxes of the early B stars discussed above, we
have concluded that these associations are coincidences.
\section{Summary and Conclusions}
From the WHAM sky survey we have identified and measured the fluxes,
radial velocities, and line widths for 85 regions of H$\alpha$ enhancement
at Galactic latitudes $|$b$| > 10\arcdeg$ that appear to subtend
approximately one degree or less on the sky. Most of these ionized
regions have not been previously reported as emission nebulae, and their
nature is unknown. A next step is to carry out additional observations to
determine the morphology of these emission regions and their sources of
ionization. This will include spectra of [O~III] $\lambda$5007, [N~II]
$\lambda$6584, and [S~II] $\lambda$6716 to explore the ionization and
excitation state of the gas, as well as observations using WHAM's
``imaging mode'' to obtain deep, very narrow band (30 km s$^{-1}$) images
of these enhancements at an angular resolution of about $3\arcmin$ within
a $1\arcdeg$ field of view (Reynolds et al 1998).
\acknowledgments
We thank an anonymous referee for helpful comments. This work was funded
by the National Science Foundation through grants AST96-19424 and
AST02-04973. The WHAM survey was funded by the National Science Foundation
through grants AST 91-22701 and AST 96-19424 with assistance from the
University of Wisconsin's Graduate School, Department of Astronomy, and
Department of Physics. This research has made use of the SIMBAD database,
operated at CDS, Strasbourg, France.
\section{Introduction}
\setcounter{equation}{0}
Recently, the superstring measure to two loop order and for even
spin structure was computed from first principles \cite{I,II,III,IV,V,dp02,adp}.
The construction relies on a careful treatment of supermoduli, chiral
splitting and finite-dimensional gauge fixing determinants, and builds
on earlier work in this direction \cite{dp88, dp89}. Although intermediate
calculations are complex and intricate, the final form of the superstring
measure turns out to be very simply expressed in terms of a new
modular object, denoted by $\Xi _6 [\delta] (\Omega)$ in \cite{IV}.
\medskip
At present, no analogous derivation is available to 3-loop order and beyond.
Some of the special simplicity of genus 2 does carry over to genus 3,
in that no Schottky relations need to be imposed on the period matrix.
The structure of supermoduli, however, becomes considerably more
complex and, at present, the calculation appears formidable.
\medskip
Therefore, the simplicity of the ultimate form of the two-loop superstring measure
raises the question as to whether the genus 3 superstring measure might
have a comparatively simple form in terms of natural modular objects.
Constraints from holomorphicity, modular invariance, and physical
factorization will provide powerful restrictions on any candidate measures.
The precise form of the 2-loop measure gives a drastic constraint on the
separating degeneration limits of the 3-loop measure.\footnote{The
constraints of modular invariance were used along these lines to
guess the bosonic string measure to 2- and 3-loops in
\cite{moore} and \cite{bkmp} respectively. A general theory based on
constraints from modular invariance and physical factorization was
developed in \cite{deg}.}
\medskip
In this paper, we take a first step in the degeneration approach to the
superstring measure by formulating a precise Ansatz for
the 3-loop measure and verifying that it satisfies the correct
factorization conditions when the worldsheet degenerates.
Our Ansatz for the (chiral) superstring measure $d\mu[\Delta](\Omega^{(3)})$
can be described as follows. Set
\begin{equation}
\label{Ansatz}
d\mu[\Delta](\Omega^{(3)})
=
{\vartheta[\Delta](0,\Omega^{(3)})^4
\,
\Xi_6[\Delta](\Omega^{(3)})
\over
8\pi^4\,\Psi_9(\Omega^{(3)})}
\
\prod_{I\leq J}d\Omega_{IJ}^{(3)}
\end{equation}
Here $\Delta$ is a fixed even spin structure,
$\Omega^{(3)}=\{\Omega_{IJ}^{(3)}\}$ is the period matrix of the genus 3 worldsheet,
$\Psi_9(\Omega^{(3)})^2$ is the modular form
$\Psi_{18}(\Omega^{(3)})$ of weight 18 constructed in
\cite{igusa}, and the measure
$\Psi_9(\Omega^{(3)})^{-1}\prod_{I\leq J} d\Omega_{IJ}^{(3)}$
has been shown to be holomorphic in \cite{bkmp}. The key term $\Xi_6[\Delta](\Omega^{(3)})$
is to be determined by the following constraints:
\medskip
({\it i}) $\Xi_6[\Delta](\Omega^{(3)})$ is holomorphic in
$\Omega^{(3)}$ on the Siegel upper half space;
\medskip
({\it ii}) $\Xi_6[\Delta](\Omega^{(3)})$ is a modular covariant form
of weight 6 in the sense that, under modular transformations sending $\Omega_{IJ}^{(3)}\to\tilde\Omega_{IJ}^{(3)}=
(A\Omega^{(3)}+B)(C\Omega^{(3)}+D)^{-1}$, $\Delta\to\tilde\Delta$, we have
\begin{equation}
\Xi_6[\tilde\Delta](\tilde\Omega^{(3)})
=
\epsilon(\Delta,M)^4
{\rm det}\,(C\Omega^{(3)}+D)^6
\,
\Xi_6[\Delta](\Omega^{(3)}),
\end{equation}
where $\epsilon(\Delta,M)$ is the same phase factor as in the modular
transformation for $\vartheta$-constants.
\medskip
({\it iii}) In the degeneration $t\to 0$,
where the worldsheet separates into
a genus $1$ and a genus $2$ surface of period matrices $\Omega^{(1)}$ and $\Omega^{(2)}$ respectively, we must have
\begin{equation}
\lim_{t\to 0} \Xi_6[\Delta](\Omega^{(3)})
=
\eta(\Omega^{(1)})^{12}
\
\Xi_6[\delta](\Omega^{(2)}),
\end{equation}
where $\Xi_6[\delta](\Omega^{(2)})$ is the main new factor in the genus $2$ superstring measure found in \cite{I,IV}.
\medskip
The constraint ({\it iii}) on the degeneration limit of $\Xi_6[\Delta](\Omega^{(3)})$
is a consequence of the factorization properties of string amplitudes.
To establish it, we require a precise formula for the degeneration
limit of the measure $\Psi_9(\Omega^{(3)})^{-1}\prod d\Omega_{IJ}^{(3)}$,
a formula which is also one of the main results of this paper
(see Theorem 1 below).
\medskip
We should stress that the condition ({\it iii}) is very restrictive, since it applies to an {\it arbitrary} separating degeneration. Thus we have to expect $\Xi_6[\Delta](\Omega^{(3)})$ to be built of sums of many terms,
different groups of which would tend to 0 in different limits.
\medskip
The original expression for $\Xi_6[\delta](\Omega^{(2)})$ derived in \cite{I,II,III,IV} depended very much on the fact that the worldsheet had genus $2$. Since then, two alternative expressions have been found which can extend to higher genus \cite{dp04}.
A characterizing feature of these two expressions is that one
of them is a sum over fourth powers of $\vartheta$-constants
of {\it triplets} of spin structures,
while the other is a sum of second powers of
$\vartheta$-constants of {\it sextets} of spin structures.
The key to determining which $N$-tuplets $\{\delta_i\}$ of spin structures should contribute to $\Xi_6[\delta](\Omega)$ turns out to be the notion of total asyzygies. Recall that
to any triplet of spin structures $\{\delta_1,\delta_2,\delta_3\}$ is associated a modular invariant sign, namely the product
\begin{eqnarray}
e(\delta _1, \delta _2, \delta_3)=\<\delta_1|\delta_2\>\,\<\delta_2|\delta_3\>
\<\delta_3|\delta_1\>
\end{eqnarray}
of relative signatures
$\< \delta |\epsilon \> = \exp 4 \pi i (\delta ' \epsilon '' - \epsilon ' \delta '')$.
A triplet of spin structures is said to be syzygous or asyzygous, depending on whether $e$ is $+1$ or $-1$. The criteria for which triplets or sextets should contribute to $\Xi_6[\delta](\Omega)$ turn out to be entirely
expressible in terms of asyzygies
(see \S 5.1 below). Once the criteria for which triplets or sextets to include have been identified, one needs to find
phase assignments $\epsilon(\delta;\{\delta_i\})$ with which to
sum the contributions of the various sextets. The phase assignments have to be consistent with modular invariance, which fixes them all up to a global phase.
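These signs are easy to experiment with numerically. The following Python sketch is ours, not from the text: a spin structure $\delta=(\delta'|\delta'')$ is encoded as a pair of $\{0,1\}$-vectors $(a,b)$ with $\delta'=a/2$, $\delta''=b/2$, and we compute the relative signatures, the triplet sign $e$, and the totally asyzygous test used later for sextets.

```python
from itertools import combinations

# Encoding: a spin structure delta = (delta'|delta'') is stored as a pair
# (a, b) of {0,1}-vectors, with delta' = a/2 and delta'' = b/2.

def rel_sig(delta, eps):
    """Relative signature <delta|eps> = exp 4*pi*i (delta'.eps'' - eps'.delta'');
    in this encoding it is the sign (-1)^(a.d + c.b), which equals (-1)^(a.d - c.b)."""
    (a, b), (c, d) = delta, eps
    return (-1) ** (sum(x * y for x, y in zip(a, d)) + sum(x * y for x, y in zip(c, b)))

def e_sign(d1, d2, d3):
    """Modular-invariant sign e(d1,d2,d3); -1 means the triplet is asyzygous."""
    return rel_sig(d1, d2) * rel_sig(d2, d3) * rel_sig(d3, d1)

def totally_asyzygous(deltas):
    """True if every sub-triplet of the given tuple of spin structures is asyzygous."""
    return all(e_sign(*t) == -1 for t in combinations(deltas, 3))
```

For instance, the triplet formed by the three even spin structures of genus $1$ is asyzygous in this convention.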
\medskip
These alternative descriptions of $\Xi_6[\delta](\Omega)$ suggest
several possible generalizations to genus 3, all involving
summations over monomials in $\vartheta[\Delta_i]$. They are listed in
\S 5.2, where we also describe in detail their viability as Ans\"atze
for the genus 3 superstring chiral measure
$\Xi_6[\Delta](\Omega^{(3)})$. The net outcome is the following:
\medskip
$\bullet$ A first Ansatz is in terms of sums of products of three fourth powers
only, such as $\vartheta [\Delta_{i_1}]^4 \vartheta [\Delta_{i_2}]^4\vartheta [\Delta_{i_3}]^4$.
Using in particular the degeneration formulas
of \cite{dp04}, we show that none of this form exists which
satisfies the criteria ({\it ii}) and ({\it iii}).
More generally, the criterion ({\it iii}), requiring the appearance of $\eta(\Omega^{(1)})$, effectively prevents the rule for which $N$-tuplets to
be included to remain the same for all genera.
\medskip
$\bullet$ Next, we consider Ans\"atze involving sums of second powers,
such as $\prod_{j=1}^6\vartheta[\Delta_{i_j}]^2$.
In genus $2$, the sextets which contribute to $\Xi_6[\delta](\Omega)$
can be characterized by the condition
of $\delta$-admissibility (see \S 5.1). This condition makes sense for
all genera, but in genus $3$, the set of such sextets
(called $\Delta$-admissible by extension) breaks up into many orbits
under the subgroup of modular transformations fixing a given spin
structure $\Delta$. One particularly important orbit is the set of sextets
which do not contain $\Delta$, and which are {\it totally asyzygous},
in the sense that any of their sub-triplets is asyzygous.
We refer to the other orbits as {\it partially asyzygous}.
The partially asyzygous orbits do not appear to have as simple a description as the orbit of totally asyzygous sextets,
although they can be identified by computer analysis.
\medskip
The partially asyzygous orbits turn out not to be viable candidates for $\Xi_6[\Delta](\Omega^{(3)})$: computer analysis reveals that many
of them do not admit consistent phase
assignments $\epsilon(\Delta;\{\Delta_i\})$. Even when they do,
their degeneration limits do not satisfy the criterion ({\it iii}) listed above.
Thus we rule them out as Ans\"atze for
$\Xi_6[\Delta](\Omega^{(3)})$.
\medskip
We found the criterion of totally asyzygous sextets to be much more compelling: its key property is that the genus $2$ sextets
obtained by factorization from a totally asyzygous genus 3 sextet
automatically satisfy the key condition of admissibility in genus 2 (see Lemma 1 in section \S 6.1). Furthermore, although these genus 2 sextets may be admissible but not $\delta$-admissible, Lemma 2 in section \S 6.2 shows that the contributions of such sextets sum up to 0 if they are assigned phases consistent with modular invariance. Thus the Ansatz in terms of totally asyzygous sextets would satisfy the degenerating condition ({\it iii}) if phase assignments exist which are consistent with ({\it ii}).
However, perhaps surprisingly, such a consistent phase assignment
does not exist and ({\it ii}) cannot be satisfied. A simple example is provided in section \S 6.2.2.
\medskip
$\bullet$ Another possible Ansatz could be in terms of sums of
products of twelve first powers of $\vartheta$, such as
$\prod_{i=1}^{12}\vartheta[\Delta_i](0,\Omega^{(3)})$.
The criterion for which dozens $\{\Delta_i\}_{1\leq i\leq 12}$
to include is difficult to guess from the genus 2 case.
There are no consistent phase assignments if the dozens are
assumed to consist of a pair of totally asyzygous sextets,
and more generally, no consistent sign assignment appears to be possible.
\medskip
Thus, we are led to believe that no candidate for
$\Xi _6 [\Delta] (\Omega ^{(3)})$ exists which is a polynomial in $\vartheta$.
On the other hand, consistent modular covariant assignments
$\epsilon(\Delta;\{\Delta_i\},\{\Delta_i'\})$ do exist for suitable bilinear
combinations of pairs of totally asyzygous
sextets of $\vartheta [\Delta _i]^2$.
This suggests that only $\Xi_6[\Delta](\Omega^{(3)})^2$ is a
polynomial in $\vartheta$-constants.
We find that, for a suitable integer normalization factor $N$,
and a suitable choice of multiplicities $N_{pq}$ of the orbits
${\cal Q}_{pq}$ of pairs $\{\Delta_i,\Delta_i'\}$ of totally asyzygous sextets under the subgroup of $Sp(6,{\bf Z})$ fixing $\Delta$, the expression
\begin{equation}
\label{Ansatz1}
\Xi_6[\Delta](\Omega^{(3)})^2
=
{1\over 2^8N}\sum_{pq}N_{pq}
\sum_{(\{\Delta_i\},\{\Delta_i'\})\in{\cal Q}_{pq}}
\epsilon(\Delta;\{\Delta_i\},\{\Delta_i'\})
\prod_{i=1}^6\vartheta^2[\Delta_i]
\prod_{i=1}^6\vartheta^2[\Delta_i']
\end{equation}
does satisfy all the conditions implied by ({\it i})-({\it iii}) for the square of $\Xi_6[\Delta](\Omega^{(3)})$, for arbitrary separating degeneration limits. In particular, it is a highly non-trivial
result that in any separating degeneration limit of this form to
a genus 2 and a genus 1 surface, the limit becomes a perfect square.
In general, these expressions will not admit holomorphic square roots
away from the separating degeneration limit.
If there exists a specific choice of multiplicities $N_{pq}$ (not all $0$)
which guarantees the existence of a holomorphic square root, then
(\ref{Ansatz1}) will single out a compelling candidate for the genus $3$
superstring measure. The existence of such a holomorphic square
root is known to occur at genus 3 in at least one other instance,
namely the modular form $\Psi _9 (\Omega ^{(3)})
= \prod _{\Delta\ even} \vartheta [\Delta] (0,\Omega ^{(3)}) ^ {1\over 2}$, which is known
to be the (unexpectedly) holomorphic square root of $\Psi _{18} (\Omega ^{(3)})$.
\medskip
The remainder of this paper is organized as follows.
In section 2, the general criterion for physical factorization is spelled out
for the superstring measure. In section 3, the factorization properties
of the bosonic factors in the genus 3 measure are derived.
In section 4, the construction of the genus 3 superstring measure is
formulated as a degeneration problem. In section 5, the consistency
with criteria {\it (i), (ii), (iii)} above of various candidates is analyzed
and (\ref{Ansatz1}) is constructed.
\newpage
\section{Factorization of the superstring measure}
\setcounter{equation}{0}
The main goal of this section is to derive the
precise degeneration constraints which the 3-loop superstring measure
must satisfy when a separating cycle in the worldsheet $\Sigma^{(3)}$ is pinched to a point, and $\Sigma^{(3)}$ separates into a torus $\Sigma^{(1)}$ and a genus $2$ surface
$\Sigma^{(2)}$.
\subsection{Geometric picture of factorization}
We begin with the geometric description of the moduli space of
Riemann surfaces near the divisor of surfaces with nodes, as provided by the following well-known construction \cite{fay}.
\medskip
Let $\Sigma^{(1)}$ and $\Sigma^{(2)}$ be two Riemann surfaces
of genus $h_1$ and $h_2$, let $p_1\in \Sigma^{(1)}$, $p_2\in \Sigma^{(2)}$ be two given points, and let $|z_1|<1$,
$|z_2|<1$ be local coordinates on $\Sigma^{(1)}$ and $\Sigma^{(2)}$ which are centered at $p_1$ and $p_2$
respectively. Let ${\cal S}$ be the surface given by
${\cal S}=\{(X,Y,t);\ XY=t\ ,|X|<1,\,|Y|<1,\,|t|<1\}$, and construct the
fibration ${\cal C}$ of surfaces over the unit disk $\{t;|t|<1\}$
given by
\begin{equation}
{\cal C}=\{(z_1,t);z_1\in\Sigma^{(1)},\ |z_1|>|t|\}
\ \cup\
{\cal S}
\ \cup\
\{(z_2,t);z_2\in\Sigma^{(2)},\ |z_2|>|t|\},
\end{equation}
with the following identifications
\begin{eqnarray}
&&
(z_1,t)\sim (z_1,{t\over z_1},t)\ \ {\rm for}\ z_1\in\Sigma^{(1)},
\ |t|<|z_1|<1
\nonumber\\
&&
(z_2,t)\sim ({t\over z_2},z_2,t)\ \ {\rm for}\ z_2\in\Sigma^{(2)}, \ |t|<|z_2|<1.
\end{eqnarray}
For each $t\not=0$, the fiber of ${\cal S}$ above $t$
can be identified with the annulus
$A_t=\{X;|t|<|X|<1\}$. Thus
the fiber of ${\cal C}$ above $t$
is a regular surface $\Sigma_t$ of genus $h=h_1+h_2$,
which can be covered by the three overlapping charts $\Sigma^{(1)}\setminus
\{|z_1|\leq|t|\}$, $A_t$, and $\Sigma^{(2)}\setminus \{|z_2|\leq|t|\}$, with the identifications
\begin{equation}
z_1\,\sim\, X\,\sim\, {t\over z_2},
\ \ \
{\rm for}\ \ |t|<|z_1|,|z_2|<1.
\end{equation}
\subsection{Physical picture of factorization}
In the physical picture, we view the surface $\Sigma_t$ rather as the disjoint union
\begin{equation}
\Sigma_t
=
\Sigma_{in}^{(1)}
\
\cup
\
A_t
\
\cup
\Sigma_{out}^{(2)}
\end{equation}
where we have set $\Sigma_{in}^{(1)}=\Sigma^{(1)}\setminus\{|z_1|<1\}$,
and
$\Sigma_{out}^{(2)}=\Sigma^{(2)}\setminus\{|z_2|<1\}$.
In a given conformal field theory, the surfaces with boundary $\Sigma_{in}^{(1)}$,
$\Sigma_{out}^{(2)}$ define two states $\<\Sigma_{in}^{(1)}|$
and $|\Sigma_{out}^{(2)}\>$. To make contact with the Hamiltonian
picture, we can use the exponential map
$\xi\to X=t^{ {1\over 2}}e^\xi$ to identify the annulus $A_t$ with a cylinder
\begin{equation}
\{\xi=\xi_0+i\xi_1;\ 0\leq\xi_1\leq 2\pi,
\ - {1\over 2}\ln{1\over|t|}<\xi_0< {1\over 2}\ln{1\over|t|}\}.
\end{equation}
Now the operators for time and space translations are the
Hamiltonian $H=L_0+\bar L_0$ and the momentum operator
$P=L_0-\bar L_0$
\footnote{Since all conformal anomalies ultimately cancel,
we can ignore the contribution of the central charge
when we map the annulus into the cylinder.}.
If we view $\xi_0$ as ``time", and $\xi_1$ as ``space",
then the shift in time and the shift in space corresponding
to the cylinder are given respectively by the length of the cylinder and the phase shift in $\xi$ as the point $X$
moves on a straight line from $X=|t|$ to $X=1$.
This gives
$-\ln |t|$ for the shift in time
and $\arg (t) $ for the shift in space, since
$e^\xi=|t|^{ {1\over 2}}e^{-i {1\over 2}\arg(t)}$
and $e^\xi=|t|^{- {1\over 2}}e^{i {1\over 2}\arg(t)}$ are the points
on the cylinder corresponding to $X=|t|$
and $X=1$. The cylinder then corresponds to the following
operator insertion
\begin{equation}
\exp \bigg (i\arg(t)(L_0-\bar L_0) \bigg )
\,
\exp \bigg (\ln (|t|) (L_0+\bar L_0) \bigg )
=
t^{L_0}\,\bar t^{\bar L_0}
\end{equation}
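(The equality of the two exponential factors with $t^{L_0}\,\bar t^{\bar L_0}$ is simply the splitting of $\ln t$ into modulus and phase, spelled out here for convenience:
\begin{equation}
\ln|t|\,(L_0+\bar L_0) + i\,\arg(t)\,(L_0-\bar L_0)
=
\big(\ln|t|+i\arg(t)\big)\,L_0+\big(\ln|t|-i\arg(t)\big)\,\bar L_0
=
(\ln t)\,L_0+(\ln\bar t)\,\bar L_0.)
\end{equation}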
and hence the partition function ${\cal Z}_t$ corresponding to the surface
$\Sigma_t$ is given by
\begin{equation}
{\cal Z}_t
=
\<\Sigma_{in}^{(1)}|\ t^{L_0}\,\bar t^{\bar L_0}\ |\Sigma_{out}^{(2)}\>
\end{equation}
To obtain the degenerating limit $t\to 0$, we insert a
basis of states $|\psi_\alpha\>$
diagonalizing $t^{L_0}\,\bar t^{\bar L_0}$
\begin{equation}
{\cal Z}_t
=
\sum_\alpha\ \<\Sigma_{in}^{(1)}|\psi_\alpha\>\,
\<\psi_\alpha|\ t^{L_0}\,\bar t^{\bar L_0}\ |\psi_\alpha\>
\,
\<\psi_\alpha|\Sigma_{out}^{(2)}\>
\end{equation}
The descendant states $|\psi_\alpha\>$ contribute lower order terms in the limit $t\to 0$. To identify the leading contribution, we thus need to consider only primary states.
In the case of string propagation, before the GSO projection,
the state with lowest $m^2$
is the tachyon with $m^2=-2$. By momentum conservation,
its momentum must be $k^\mu=0$ (it is not on-shell, but intermediate states do not have to be on-shell). Since the vertex
for tachyon emission with momentum $0$ is just the identity,
the leading term for ${\cal Z}_t$ is given by
\begin{equation}
{\cal Z}_t
=
{\cal Z}^{(1)}\cdot t^{-2}\bar t^{-2}\cdot
{\cal Z}^{(2)}
+
O(|t|^{-3})
\end{equation}
where ${\cal Z}^{(1)}$ and ${\cal Z}^{(2)}$ are the partition functions for the surfaces $\Sigma^{(1)}$ and $\Sigma^{(2)}$.
\medskip
To deal with spin structures, we start from surfaces $\Sigma^{(i)}$ with canonical homology bases $A_I^{(i)}$, $B_I^{(i)}$, $\#(A_I^{(i)}\cap B_J^{(i)})=\delta_{IJ}$,
$\#(A_I^{(i)}\cap A_J^{(i)})=0$, $\#(B_I^{(i)}\cap B_J^{(i)})=0$
for $1\leq I,J\leq h_i$. Then the combined bases give a canonical
basis for the genus $h_1+h_2$ surface $\Sigma_t$. With this choice of homology bases,
a spin structure $\Delta$ can be identified with an assignment
of either $0$ or $1/2$ to each homology cycle of $\Sigma_t$,
and hence with a pair $(\delta_1,\delta_2)$, with $\delta_i$ a spin structure on the surface $\Sigma^{(i)}$
\begin{equation}
\Delta=\pmatrix{\delta_2\cr\delta_1}.
\end{equation}
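Recall that the parity of a half-integer characteristic $\Delta=(\Delta'|\Delta'')$ is the sign $\exp 4\pi i\,\Delta'\cdot\Delta''$; since the pairing is block diagonal in the combined homology basis, the parity of $\Delta$ is the product of the parities of $\delta_1$ and $\delta_2$. A minimal numerical check (same illustrative encoding of characteristics as $\{0,1\}$-vectors as used earlier; the function names are ours):

```python
from itertools import product

def parity(delta):
    """+1 for an even spin structure, -1 for an odd one, where delta = (a, b)
    encodes (delta'|delta'') = (a/2 | b/2) and the parity is (-1)^(a.b)."""
    a, b = delta
    return (-1) ** sum(x * y for x, y in zip(a, b))

def glue(d1, d2):
    """Spin structure on the genus h1+h2 surface obtained by concatenating the
    characteristics on the two components."""
    (a1, b1), (a2, b2) = d1, d2
    return (a1 + a2, b1 + b2)  # tuple concatenation

def all_spin_structures(h):
    """All 2^(2h) half-integer characteristics in genus h."""
    return [(a, b) for a in product((0, 1), repeat=h)
                   for b in product((0, 1), repeat=h)]
```

Multiplicativity of the parity, $e(\Delta)=e(\delta_1)\,e(\delta_2)$, follows because $\Delta'\cdot\Delta''=\delta_1'\cdot\delta_1''+\delta_2'\cdot\delta_2''$.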
In a conformal field theory where the fields are world sheet fermions requiring
a spin structure, the preceding degeneration formula becomes
\begin{equation}
{\cal Z}_t[\Delta]
=
{\cal Z}^{(1)}[\delta_1] \cdot t^{-2}\bar t^{-2}\cdot
{\cal Z}^{(2)}[\delta_2]
+
{\cal O}(|t|^{-3}).
\end{equation}
\subsection{Factorization of the genus $3$ superstring measure}
We formulate now the precise degeneration constraint for the superstring
measure when the worldsheet $\Sigma=\Sigma_t$ is of genus $h=3$ and
degenerates into two surfaces $\Sigma^{(1)}$ and $\Sigma^{(2)}$
of genus $h_1=1$ and $h_2=2$.
\medskip
We shall assume that, at loop order $h$, the vacuum-to-vacuum superstring amplitude is of the form
\begin{equation}
\label{fullintegral}
{\cal A}
=
\sum_{\Delta,\bar\Delta}
c_{\Delta,\bar\Delta}\int_{{\cal M}_h}({\rm det}\, {\rm Im}\,\Omega^{(h)})^{-5}
\
d\mu[\Delta](\Omega^{(h)})
\,\wedge
\,
\overline{d\mu[\bar\Delta](\Omega^{(h)})}
\end{equation}
where $c_{\Delta,\bar\Delta}$ are suitable phases, and the sum
over the spin structures $\Delta,\bar\Delta$ corresponds to the GSO projection, which projects out the tachyon and produces space-time supersymmetry. The space ${\cal M}_h$ is the moduli space of Riemann surfaces of genus $h$.
We always fix a homology basis, and view each Riemann surface as
characterized by its
period matrix $\Omega^{(h)}=\{\Omega_{IJ}^{(h)}\}_{1\leq I,J\leq h}$.
The form $d\mu[\Delta](\Omega)$ is a $(3h-3,0)$ holomorphic form
on ${\cal M}_h$, transforming under modular transformations
in such a way that the full expression above is modular invariant. It is called the (chiral) superstring measure at
genus $h$.
\medskip
Near $t=0$, the $3h-3$ moduli parametrizing $\Sigma_t$ can be chosen to be the $3h_1-3$ and $3h_2-3$ moduli for the surfaces
$\Sigma^{(1)}$ and $\Sigma^{(2)}$, together with the 3 parameters
$p_1,p_2$ and $t$. The degeneration formulas derived above for
conformal field theory suggest imposing the following degeneration
constraint for the chiral superstring measure
\begin{equation}
\label{degeneration}
d\mu[\Delta](\Omega)
=
d\mu[\delta_1](\Omega^{(1)}) \, \wedge \, {dt\over t^2} \,
\wedge \, d\mu[\delta_2](\Omega^{(2)}) \, \wedge \, dp_1\wedge dp_2
+ {\cal O}(t^{-1})
\end{equation}
As usual, these formulas hold for $h_1,h_2\geq 2$. When $h_1=1$,
the counting is slightly different, since $p_1$ and its differential are no longer relevant due to translation invariance on the torus. This is actually the case of main interest in the
present paper, so we make the above formula more explicit in this
case: the moduli for $\Sigma^{(1)}$ is then a single parameter
$\Omega^{(1)}$, and the superstring measure for one-loop
is $\vartheta^4[\delta_1](\Omega^{(1)})/2^5\pi^4\eta^{12}(\Omega^{(1)})$
(see e.g. \cite{IV}, eq. (8.2)). Thus the degeneration constraint for
the chiral superstring measure at genus $h$ when the worldsheet
separates into a torus $\Sigma^{(1)}$ and a genus $h-1$ surface $\Sigma^{(2)}$ is given by
\begin{equation}
\label{superdeg}
d\mu[\Delta](\Omega)
=
{\vartheta^4[\delta_1](\Omega^{(1)})
\over
2^5\pi^4\eta^{12}(\Omega^{(1)})}\,d\Omega^{(1)} \, \wedge \, {dt\over t^2}
\, \wedge \, d\mu[\delta_2](\Omega^{(2)}) \, \wedge \, dp_2 + {\cal O}(t^{-1}).
\end{equation}
\subsection{Factorization of the genus $3$ bosonic string measure}
Although this paper is mainly concerned with the genus $3$ superstring measure and its degeneration limit, we take the opportunity to discuss also similar issues for the bosonic
string, partly as a check later on our method. The measure for the bosonic string in the critical dimension is of the form
\begin{equation}
{\cal A}=\int_{{\cal M}_h}({\rm det}\, {\rm Im}\,\Omega)^{-13}\
d\mu_B(\Omega)\,\wedge\,\overline{d\mu_B(\Omega)}
\end{equation}
where $d\mu_B(\Omega)$ is holomorphic. Because the intermediate state of lowest mass is still the tachyon, the measure
$d\mu_B(\Omega)$ satisfies the same
degeneration constraint as in (\ref{degeneration}).
When the worldsheet $\Sigma$ degenerates into a torus $\Sigma^{(1)}$ and a surface of genus $2$,
the degeneration constraint can be written as
\begin{equation}
\label{bosonic}
d\mu_B(\Omega)
=
{d\Omega^{(1)}
\over
(2\pi)^{12}\eta^{24}(\Omega^{(1)})} \, \wedge \, {dt\over t^2} \,
\wedge \, d\mu_B(\Omega^{(2)}) \, \wedge \, dp_2 + {\cal O}(t^{-1})
\end{equation}
where $(2\pi)^{-12}\eta^{-24}(\Omega^{(1)})\,d\Omega^{(1)}$ is the genus $1$ bosonic string measure,
with the conventions of \cite{dp88} and the normalization
$d^2\Omega^{(1)}/(8\pi^2\, {\rm Im}\,\Omega^{(1)})^2$
for the $SL(2,{\bf R})$ invariant measure on the Siegel upper half space.
\newpage
\section{The measure
$\prod_{I\leq J}d\Omega_{IJ}^{(3)}/\Psi_9(\Omega^{(3)})$ in genus $3$}
\setcounter{equation}{0}
An important feature of the chiral superstring measure
$d\mu[\Delta](\Omega^{(h)})$ is that it is a holomorphic $(3h-3,0)$ form.
To find it, we begin by constructing a natural holomorphic $(3h-3,0)$ form $d\mu_B(\Omega^{(h)})$ on ${\cal M}_h$
(later identified with the chiral bosonic measure, but this is not essential for our considerations), so that the problem of finding $d\mu[\Delta](\Omega^{(h)})$
reduces to that of finding
the density $d\mu[\Delta]/d\mu_B$. In genera $h=2$ and $h=3$,
we can exploit the fact that ${\cal M}_h$ and the Siegel upper
half space of symmetric matrices with positive imaginary part
have the same dimension, and henceforth we consider only these
cases.
\subsection{The modular forms $\Psi_{18}(\Omega^{(3)})$ and $\Psi_{10}(\Omega^{(2)})$}
Recall that on a surface $\Sigma$ of genus $h$, there are
$2^{2h}$ spin structures, of which $2^{h-1}(2^h+1)$ are even
and $2^{h-1}(2^h-1)$ are odd. The parity of a spin structure
$\Delta$ corresponds to the parity in $\zeta$ of the $\vartheta$-function $\vartheta[\Delta](\zeta,\Omega^{(h)})$,
which is also the parity of the number of independent
holomorphic spinors of spin structure $\Delta$. The properties
of $\vartheta$-functions which we need can be found in \cite{IV},
\S 2.1-\S 2.3 and \cite{adp}, Appendix B. For convenience, we
restate here the transformations of spin structures $\Delta\to
\tilde\Delta$ and $\vartheta$-constants $\vartheta[\Delta](0,\Omega^{(h)})\to
\vartheta[\tilde\Delta](0,\tilde\Omega^{(h)})$ under modular
transformations
\begin{equation}
\label{modulartransformation}
\tilde\Omega^{(h)}=(A\Omega^{(h)}+B)(C\Omega^{(h)}+D)^{-1},
\qquad M=\pmatrix{A &B\cr C& D\cr}\in Sp(2h,{\bf Z}).
\end{equation}
If we write $\Delta=(\Delta'|\Delta'')$
and $\tilde\Delta=(\tilde\Delta'|\tilde\Delta'')$,
they are given by
\begin{equation}
\pmatrix{\tilde\Delta'\cr\tilde\Delta''}
=
\pmatrix{D&-C\cr -B&A\cr}\pmatrix{\Delta'\cr\Delta''}
+
{1\over 2}
{\rm diag}\,\pmatrix{CD^T\cr AB^T}
\end{equation}
and by
\begin{equation}
\vartheta[\tilde\Delta](0,\tilde\Omega)
=
\epsilon(\Delta,M)
\,
{\rm det}(C\Omega^{(h)}+D)^{ {1\over 2}}
\,
\vartheta[\Delta](0,\Omega^{(h)}),
\end{equation}
where $\epsilon(\Delta,M)$ is an eighth root of unity, which depends
on both the spin structure $\Delta$ and the modular
transformation $M$. There is no simple closed formula for
$\epsilon(\Delta,M)$, but its values for $h=2$ on generators of
$Sp(4,{\bf Z})$ can be found in \cite{IV}, \S 2.3.
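The counts $2^{h-1}(2^h+1)$ even and $2^{h-1}(2^h-1)$ odd quoted at the start of this subsection can be verified by direct enumeration, using the standard fact that $\Delta=(\Delta'|\Delta'')$ is even precisely when $\exp 4\pi i\,\Delta'\cdot\Delta''=+1$. A small Python sketch (function name ours):

```python
from itertools import product

def spin_structure_counts(h):
    """Return (even, odd) counts of half-integer characteristics in genus h.
    A characteristic (a|b)/2 with a, b in {0,1}^h is even iff a.b is even."""
    even = odd = 0
    for a in product((0, 1), repeat=h):
        for b in product((0, 1), repeat=h):
            if sum(x * y for x, y in zip(a, b)) % 2 == 0:
                even += 1
            else:
                odd += 1
    return even, odd
```

In genus $3$ this reproduces the $36$ even and $28$ odd spin structures used throughout.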
\medskip
The above transformation for $\vartheta$-constants should be compared
with the defining transformation law for modular forms
$\Phi(\Omega)$ of a given weight $w$
\begin{equation}
\Phi(\tilde\Omega^{(h)})
=
{\rm det}(C\Omega^{(h)}+D)^w\,\Phi(\Omega^{(h)})
\end{equation}
which does not involve roots of unity
such as $\epsilon(\Delta,M)$. Nevertheless,
the following natural form can be defined using the even $\vartheta$-constants
\begin{equation}
\label{Psi}
\Psi_{2^{h-1}(2^h+1)k}(\Omega^{(h)})
=
\prod_{\Delta\ even}\vartheta^{2k}[\Delta](0,\Omega^{(h)})
\end{equation}
It has been shown by Igusa \cite{igusa} that in genus $h=2$ and $h=3$,
$\Psi_{2^{h-1}(2^h+1)k}(\Omega^{(h)})$ are modular forms of weight
$2^{h-1}(2^h+1)k$ when $k=1$ and $k=1/2$ respectively.
\medskip
Let these forms be denoted by $\Psi_{10}(\Omega^{(2)})$ and $\Psi_{18}(\Omega^{(3)})$ respectively. It is well-known that the form
$\Psi_{10}(\Omega^{(2)})$ has no zero inside the moduli space of
Riemann surfaces of genus $2$, while the form $\Psi_{18}(\Omega^{(3)})$
vanishes exactly to second order along
the variety of hyperelliptic surfaces of genus $3$
\cite{bkmp}. Indeed, $\Psi_{2^{h-1}(2^h+1)}(\Omega^{(h)})$ vanishes
if and only if a $\vartheta$-constant vanishes for some even spin
structure $\Delta$. Since the parity of the number of independent
holomorphic spinors is the same as the parity of $\Delta$,
this means that there are at least $2$ independent holomorphic
spinors of spin structure $\Delta$. By the Riemann-Roch theorem,
the number of zeroes of a holomorphic spinor is always $(h-1)$.
In genus $h=2$, a holomorphic spinor has then exactly one zero,
and the ratio of two linearly independent holomorphic spinors
would be a meromorphic function with exactly one zero and one pole.
Such a function provides a one-to-one correspondence between the
given Riemann surface and the sphere, contradicting
our initial assumption that $h=2$. Similarly, when $h=3$,
a holomorphic spinor has $2$ zeroes, and the ratio of two linearly
independent holomorphic spinors is a meromorphic function with
two zeroes and two poles. Such a function provides a two-to-one
correspondence with the sphere, and thus the Riemann
surface must be hyperelliptic.
Conversely, if $s^2=\prod_{i=1}^8(x-u_i)$ is a hyperelliptic surface of
genus $3$, then $s^{- {1\over 2}}(dx)^{ {1\over 2}}$ and $xs^{- {1\over 2}}(dx)^{ {1\over 2}}$
define two holomorphic spinors associated with an even spin structure.
Thus $\Psi_{18}(\Omega^{(3)})$ vanishes at such surfaces (in fact,
to second order), and the proof of the claim is complete.
\medskip
Since the form $\Psi_{18}(\Omega^{(3)})$ vanishes to second order, we can follow \cite{bkmp} and obtain a holomorphic square root $\Psi_9(\Omega^{(3)})$ by setting
\begin{equation}
\Psi_9(\Omega^{(3)})^2
=
\Psi_{18}(\Omega^{(3)})
\end{equation}
\medskip
In genus $h=2$ and $h=3$, the moduli space ${\cal M}_h$ and the Siegel upper half space have the same dimension, which is 3 and 6
respectively. An integral over ${\cal M}_h$ can be identified with an integral over a fundamental domain of the modular group
$Sp(2h,{\bf Z})$ in the Siegel upper half space. On this space,
we can introduce the following holomorphic $(3h-3,0)$ forms
\footnote{The ordering of the forms $d\Omega_{IJ}^{(h)}$ in these measures is a matter of convention. We shall ignore the resulting
$\pm$ signs and sometimes denote the resulting volume form just by $d^{h(h+1)/2}\Omega^{(h)}$.}
\begin{eqnarray}
\label{volumeform}
&&{1
\over
\Psi_{10}(\Omega^{(2)})}
\,
\prod_{1\leq I\leq J\leq 2}d\Omega_{IJ}^{(2)},
\ \ {\rm for\ genus}\ h=2
\nonumber\\
&&{1
\over
\Psi_{9}(\Omega^{(3)})}
\ \
\prod_{1\leq I\leq J\leq 3}d\Omega_{IJ}^{(3)},
\ \ {\rm for\ genus}\ h=3.
\end{eqnarray}
Both measures are holomorphic on the Siegel upper half space.
This is obvious when $h=2$. When $h=3$, this is due to \cite{bkmp},
who showed that the form $\prod_{I\leq J}d\Omega_{IJ}^{(3)}$
also vanishes along the variety of hyperelliptic surfaces,
so that the zeroes in the denominator $\Psi_{9}(\Omega^{(3)})$
are cancelled by the measure factor.
\medskip
It follows from Igusa's classification theorem for genus $2$ modular
forms that the bosonic string measure is actually
given in genus $h=2$ by \cite{bkmp,moore}
\begin{equation}
d\mu_B(\Omega^{(2)})=
\ {c_2 \over \Psi_{10}(\Omega^{(2)})}
\,
\prod_{1\leq I\leq J\leq 2}d\Omega_{IJ}^{(2)},
\end{equation}
where $c_2$ is an overall constant. This constant was in fact evaluated
in \cite{IV} \S 7.1, and was found to be $c_2 = \pi ^{-12}$.
There is no such classification theorem in genus $3$ or higher,
but cogent arguments have been proposed
for the similar relation in genus $3$ to hold
\cite{bkmp}
\begin{equation}
d\mu_B(\Omega^{(3)})
=
\ {c_3 \over \Psi_{9}(\Omega^{(3)})}
\ \
\prod_{1\leq I\leq J\leq 3}d\Omega_{IJ}^{(3)}
\end{equation}
with $c_3$ another overall constant.
As part of our program for
determining the genus $3$ superstring measure,
we shall present further evidence for this relation below.
\subsection{Degeneration of $\Psi_{9}^{-1}(\Omega^{(3)})\prod_{I\leq J}d\Omega_{IJ}^{(3)}$}
The superstring chiral measure will be identified by its density
with respect to the basic measure $\Psi_9^{-1}(\Omega^{(3)})\prod_{I\leq J}d\Omega_{IJ}^{(3)}$.
In order to reformulate the degeneration constraints
(\ref{degeneration}) for the superstring measure in terms
of degeneration constraints for its density, we need the precise
degeneration limit of the measure $\Psi_9^{-1}(\Omega^{(3)})\prod_{I\leq J}d\Omega_{IJ}^{(3)}$.
This is given in the following theorem:
\begin{theorem}
In the degeneration limit given by \S 2.1, $t\to 0$, we have
\begin{eqnarray}
{1 \over \Psi_{9}(\Omega^{(3)})}
\ \
\prod_{1\leq I\leq J\leq 3}d\Omega_{IJ}^{(3)}
= { 1 \over (2 \pi )^6 }
{\prod_{I\leq J}d\Omega_{IJ}^{(2)}
\over
\Psi _{10} (\Omega^{(2)})}
\wedge
{d\Omega^{(1)} \over \eta (\Omega^{(1)})^{24}}
\wedge {dt \over t^2} \wedge dp_2
+{\cal O}({1\over t}).
\end{eqnarray}
\end{theorem}
\medskip
\noindent
{\it Proof.} We consider the parametrization of surfaces $\Sigma_t$
degenerating into two surfaces $\Sigma^{(1)}$ and $\Sigma^{(2)}$
described in \S 2.1. As indicated there, we choose canonical homology bases $(A_{I_i}^{(i)},B_{I_i}^{(i)})$, so that the union of these
cycles constitutes a canonical homology basis for $\Sigma$.
Let $(\omega_{I_1}^t,\omega_{I_2}^t)$ be the basis of
holomorphic Abelian differentials on $\Sigma_t$ dual to
the $(A_{I_1},A_{I_2})$ cycles. Then these holomorphic differentials have the following asymptotic behavior as $t\to 0$
\cite{fay}
\begin{eqnarray}
\omega ^t _{I_1} (z)
& = & \left \{ \matrix{
\omega _{I_1} (z) + { t \over 4} \omega _{I_1} (p_1) \omega ^{(1)} _{p_1} (z)
+ {\cal O} (t^2)
& {\rm when} & z \in \Sigma^{(1)} \cr
{ t \over 4} \omega _{I_1} (p_1) \omega ^{(2)} _{p_2} (z)
+ {\cal O} (t^2)
& {\rm when} & z \in \Sigma^{(2)} \cr } \right .
\nonumber \\ && \\
\omega ^t _{I_2} (z)
& = & \left \{ \matrix{
{ t \over 4} \omega _{I_2} (p_2) \omega ^{(1)} _{p_1} (z)
+ {\cal O} (t^2)
& {\rm when} & z \in \Sigma^{(1)} \cr
\omega _{I_2} (z) + { t \over 4} \omega _{I_2} (p_2) \omega ^{(2)} _{p_2} (z)
+ {\cal O} (t^2)
& {\rm when} & z \in \Sigma^{(2)} \cr } \right .
\nonumber
\end{eqnarray}
Here, $\omega ^{(i)} _{p_a}$ refers to the meromorphic differential
on the surface $\Sigma^{(i)}$, $i=1,2$, with a double pole at $p_a$,
while $\omega _{I_i}$ refers to a basis of
holomorphic differentials on the surface $\Sigma^{(i)}$.
\medskip
The components of the period matrix behave as follows
\cite{fay},
\begin{eqnarray}
\Omega ^t _{I_1 J_1}
=
\Omega ^{(1)} _{I_1 J_1} +
{i \pi \over 2} t \omega _{I_1} (p_1) \omega _{J_1} (p_1)
& \hskip .5in &
\Omega ^t _{I_1 J_2}
=
{i \pi \over 2} t \omega _{I_1} (p_1) \omega _{J_2} (p_2)
\nonumber \\
\Omega ^t _{I_2 J_2}
=
\Omega ^{(2)} _{I_2 J_2} +
{i \pi \over 2} t \omega _{I_2} (p_2) \omega _{J_2} (p_2)
& \hskip .5in &
\Omega ^t _{I_2 J_1}
=
{i \pi \over 2} t \omega _{I_2} (p_2) \omega _{J_1} (p_1)
\end{eqnarray}
where $\Omega ^{(i)}$ refers to the period matrix on the surface $\Sigma^{(i)}$.
\bigskip
Henceforth, we consider the case where $h_1=1$ and $h_2=2$.
It is convenient to set $\Omega^{(1)}=\tau$,
$\Omega^{(2)}=\Omega$, and
use the following notations,
\begin{eqnarray}
\Omega ^{(3)} = \left ( \matrix{
\Omega _{11} & \Omega _{12} & \tau_1 \cr
\Omega _{12} & \Omega _{22} & \tau_2 \cr
\tau _1 & \tau _2 & \tau_3 \cr} \right )
\hskip 1in
\left \{ \matrix{
\tau _1 & = & {i\pi\over 2} t ~ \omega _1 (p_2) \omega _0 (p_1) \cr
\tau _2 & = & {i\pi\over 2} t ~ \omega _2 (p_2) \omega _0 (p_1) \cr} \right .
\end{eqnarray}
Here $\omega _I(p_2)$ denote the genus 2
holomorphic differentials and $\omega _0$ denotes the genus
1 holomorphic differential which is just the constant $1$
in the usual parametrization of the torus of modulus $\tau$
as ${\bf C}/{\bf Z}+\tau{\bf Z}$, since the homology
basis $(A_1^{(1)},B_1^{(1)})$ has been fixed.
\medskip
The $\vartheta$-constants at genus 3 for even spin structures
behave differently in the separating limit depending on
whether the spin structures on the genus 2 and genus 1
components are both even or both odd. We have the
following limits,
\begin{eqnarray}
\label{thetaconst}
\vartheta \left [ \matrix{\delta \cr \mu \cr} \right ] (0, \Omega ^{(3)})
& = &
\vartheta [ \delta ] (0, \Omega ) ~ \vartheta [\mu ] (0, \tau) + {\cal O} (t)
\nonumber \\
\vartheta \left [ \matrix{\nu \cr \nu_0 \cr} \right ] (0, \Omega ^{(3)})
& = &
{ t \over 4} \omega _0 (p_1) \vartheta ' _1 (0, \tau) h_\nu (p_2)^2 + {\cal O} (t^2)
\end{eqnarray}
Here, $\delta$ (resp. $\nu$) denotes an even (resp. odd) genus 2
spin structure, while $\mu$ denotes an even genus 1 spin structure
and $\nu_0$ denotes the unique genus 1 odd spin structure.
Furthermore, we use the familiar notation,
\begin{eqnarray}
h_\nu (z)^2 \equiv \omega _I (z) \partial ^I \vartheta [\nu ] (0, \Omega)
\end{eqnarray}
This square is well-defined on any surface, while its square
root $h_\nu$ is single-valued only on a surface with
spin structure $\nu$.
\medskip
\noindent
{\sl a) The limit of $\Psi _{18}$}
\medskip
We are now in a position to study the limit of the modular
form $\Psi _{18}$ and its square root $\Psi _9$.
In genus $3$, there are 36 even spin structures, of which
30 separate into two even spin structures in genus
$1$ and $2$, and 6 separate into two odd spin structures
in genus $1$ and $2$. In the first group of 30, the spin
structures obtained after degeneration run over all 10
genus $2$ even spin structures and over all 3 genus 1 even spin structures.
Similarly, in the second group of 6, the spin structures
obtained after degeneration run over all 6 genus 2 odd spin structures.
Thus we obtain
\begin{eqnarray}
\Psi _{18} (\Omega ^{(3)})
=
\prod _{\delta, \mu} \left ( \vartheta [\delta ](0, \Omega ) \vartheta [\mu ](0,\tau) \right )
~
\prod _\nu \left ( {t \over 4} h_\nu (p_2)^2 \omega _0 (p_1) \vartheta _1 ' (0,\tau) \right )
+ {\cal O} (t^7)
\end{eqnarray}
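The spin-structure bookkeeping used in this product can be reproduced by brute-force enumeration. The following sketch (the encoding of half-integer characteristics as bit vectors is ours) counts even and odd spin structures in genus $1$, $2$, $3$, and verifies the splitting $36 = 10\cdot 3 + 6\cdot 1$ in the separating limit:

```python
from itertools import product

def parity_counts(h):
    """Count even/odd half-integer characteristics [delta'; delta''] in genus h.

    Entries are encoded as bits b representing the half-characteristic b/2;
    the parity of a characteristic is (-1)^{4 delta' . delta''}.
    """
    even = sum(
        1
        for bits in product([0, 1], repeat=2 * h)
        if sum(a * b for a, b in zip(bits[:h], bits[h:])) % 2 == 0
    )
    return even, 2 ** (2 * h) - even

# genus 1: 3 even, 1 odd; genus 2: 10 even, 6 odd; genus 3: 36 even, 28 odd
assert parity_counts(1) == (3, 1)
assert parity_counts(2) == (10, 6)
assert parity_counts(3) == (36, 28)

# In the separating degeneration, a genus 3 characteristic is the juxtaposition
# of a genus 2 and a genus 1 characteristic, and parities add: even x even and
# odd x odd both produce even genus 3 spin structures.
assert 10 * 3 + 6 * 1 == 36  # 30 even-even pairs plus 6 odd-odd pairs
```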
In view of the well-known genus $1$ identities,
\begin{equation}
\label{genus1identity}
\vartheta '_1 (0,\tau) = -2 \pi \eta (\tau)^3,
\qquad
\prod _\mu \vartheta [\mu ](0, \tau) = 2 \eta (\tau)^3,
\end{equation}
and the definition of $\Psi_{10}(\Omega)$,
this can be rewritten as
\begin{eqnarray}
\Psi_{18}(\Omega^{(3)})
& = &
\Psi _{10} (\Omega) ^{3/2} \left ( 2 \eta (\tau)^3 \right )^{10}
\left ( {t \over 4} \right )^6 \omega _0 (p_1)^6 \left ( - 2 \pi \eta (\tau)^3 \right )^6
\prod _\nu h_\nu (p_2)^2 + {\cal O} (t^7)
\nonumber \\
& = & 2^4 \pi ^6 t^6 \omega _0 (p_1) ^6 \Psi _{10}(\Omega) ^{3/2}
\eta (\tau)^{48} \prod _\nu h_\nu (p_2)^2 + {\cal O} (t^7)
\end{eqnarray}
Taking the square root, we find
\begin{eqnarray}
\label{Psi}
\Psi _9(\Omega^{(3)}) = 4 \pi^3 t^3 \omega _0(p_1)^3 \Psi _{10} (\Omega )^{3/4}
\eta (\tau)^{24} \prod _\nu h_\nu (p_2) + {\cal O} (t^4)
\end{eqnarray}
Notice that, while each individual $h_\nu$ is single-valued only on a
surface equipped with the spin structure $\nu$,
the product over all $\nu$ is single-valued on any surface.
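The genus-one identities (\ref{genus1identity}) that entered this computation can be spot-checked numerically from the series definitions, with the convention $\vartheta_1 = \vartheta\left[\matrix{1/2 \cr 1/2 \cr}\right]$, which reproduces the sign in (\ref{genus1identity}). A sketch, evaluated at the arbitrary point $\tau=i$ (truncation orders ours):

```python
import math

T = 1.0  # tau = i*T, purely imaginary, so all series below are real
N = 40   # truncation order; the terms decay like exp(-pi*T*n^2)

def theta(a, b, t=T, n_max=N):
    # theta[a;b](0, i t) = sum_n exp(-pi t (n+a)^2) cos(2 pi (n+a) b)
    return sum(math.exp(-math.pi * t * (n + a) ** 2) * math.cos(2 * math.pi * (n + a) * b)
               for n in range(-n_max, n_max + 1))

def theta1_prime(t=T, n_max=N):
    # d/dz theta[1/2;1/2](z, i t) at z = 0
    return -2 * math.pi * sum((-1) ** n * (2 * n + 1) * math.exp(-math.pi * t * (n + 0.5) ** 2)
                              for n in range(n_max))

def eta(t=T, n_max=N):
    # Dedekind eta(i t) via the q-product, q = exp(-2 pi t)
    val = math.exp(-math.pi * t / 12)
    for n in range(1, n_max):
        val *= 1 - math.exp(-2 * math.pi * t * n)
    return val

# product over the three even genus-1 spin structures mu
even_product = theta(0, 0) * theta(0, 0.5) * theta(0.5, 0)
assert abs(even_product - 2 * eta() ** 3) < 1e-12       # prod_mu theta[mu] = 2 eta^3
assert abs(theta1_prime() + 2 * math.pi * eta() ** 3) < 1e-12  # theta_1' = -2 pi eta^3
```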
\medskip
\noindent
{\sl b) The limit of the volume factor $d^6 \Omega ^{(3)}_{IJ}$}
\medskip
We turn next to the limit of the measure $d^6 \Omega ^{(3)}_{IJ}$.
In the above notation, we have
\begin{eqnarray}
\label{dOmega}
d^6 \Omega ^{(3)} _{IJ}
=
d^3 \Omega \wedge d\tau \wedge d\tau_1 \wedge d \tau _2
\end{eqnarray}
We now evaluate $d \tau_1 \wedge d \tau_2$, using the definition
of its ingredients,
\begin{eqnarray}
d \tau_1 \wedge d \tau_2
=
- {\pi ^2 \over 4} ~ t dt \wedge dp_2 ~ \omega _0 (p_1)^2
\bigg ( \omega _1 (p_2) \partial \omega _2 (p_2)
- \omega _2 (p_2) \partial \omega _1 (p_2) \bigg )
\end{eqnarray}
The combination in parentheses is a holomorphic 3-form in $p_2$.
To evaluate it, we turn to the hyperelliptic representation
of Riemann surfaces of genus $2$. Let the surface $\Sigma^{(2)}$ be given by
\begin{equation}
s^2=\prod_{i=1}^6(x-u_i)
\end{equation}
Then $z^{J-1}dz/s(z)$, $J=1,2$, is a basis of holomorphic differential forms. Let $\sigma_{IJ}$ be the change-of-basis matrix from this basis to the basis $\omega_{I_2}^{(2)}(z)$
(which we abbreviate to $\omega_I(z)$ for the rest of the proof of Theorem 1),
\begin{eqnarray}
\label{sigma}
2 \pi i\, \omega _I (z) = \sum _J \sigma _{IJ} { z^{J-1} dz \over s(z)}
\end{eqnarray}
Hence, we have
\begin{eqnarray}
\omega _1 (p_2) \partial \omega _2 (p_2)
- \omega _2 (p_2) \partial \omega _1 (p_2)
=
- { 1 \over 4 \pi^2} ({\rm det} \sigma ) { (dp_2)^3 \over s(p_2)^2}
\end{eqnarray}
Thus the holomorphic 3-form manifestly has
6 simple zeros precisely at the branch points, exactly as
$\prod _\nu h_\nu (z)$. Thus, the $p_2$-dependence
of these two forms is the same.
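The Wronskian evaluation above uses only the linearity of the basis change (\ref{sigma}): the terms involving $s'(z)$ cancel identically, for any function $s(z)$. This cancellation can be verified symbolically (a sketch with sympy; symbol names ours):

```python
import sympy as sp

z = sp.symbols('z')
s11, s12, s21, s22 = sp.symbols('sigma11 sigma12 sigma21 sigma22')
s = sp.Function('s')(z)  # kept unspecified: the identity holds for any s(z)

# scalar parts of 2*pi*i*omega_I = (sigma_I1 + sigma_I2 * z) dz / s(z)
w1 = (s11 + s12 * z) / (2 * sp.pi * sp.I * s)
w2 = (s21 + s22 * z) / (2 * sp.pi * sp.I * s)

wronskian = w1 * sp.diff(w2, z) - w2 * sp.diff(w1, z)
expected = -(s11 * s22 - s12 * s21) / (4 * sp.pi ** 2 * s ** 2)
assert sp.simplify(wronskian - expected) == 0  # the s'(z) terms cancel
```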
\medskip
\noindent
{\sl c) Determining the constant of proportionality}
\medskip
We need to determine
the constant of proportionality, which is moduli dependent;
its determination requires several precise coefficients of proportionality
between the $\vartheta$-function and the hyperelliptic representations
of holomorphic spinors \cite{IV}.
In the hyperelliptic representation, each of the 6 odd spin
structures $\nu_i$ corresponds to a branch point $u_i$,
and the one-form $h_{\nu_i}^2(z)$
is proportional to the one-form $(x-u_i)dz/s(z)$. Set
\begin{equation}
\label{proportionality}
h_{\nu_i}^2(z)={\cal N}_{\nu_i}(x-u_i){dz\over s(z)}
\end{equation}
where ${\cal N}_{\nu_i}$ is a moduli dependent constant.
Then we have
\begin{eqnarray}
{ (dp_2)^3 \over s(p_2)^2}
=
\left ( \prod _i { 1 \over {\cal N} _i ^{1/2}} \right ) \prod _\nu h_\nu (p_2)
\end{eqnarray}
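This relation follows from (\ref{proportionality}) together with $s(z)^2=\prod_i(z-u_i)$: squaring it, one needs $\prod_\nu h_\nu(z)^2 = \left(\prod_i{\cal N}_i\right)(dz)^6/s(z)^4$. A one-line symbolic check (hypothetical symbols for the branch points and normalization constants):

```python
import sympy as sp

z = sp.symbols('z')
u = sp.symbols('u1:7')                 # the six branch points
N = sp.symbols('N1:7', positive=True)  # the constants N_{nu_i}
s2 = sp.prod([z - ui for ui in u])     # s(z)^2

# scalar part of prod_nu h_nu(z)^2 = prod_i N_i (z - u_i) / s(z)^6
prod_h_squared = sp.prod([Ni * (z - ui) for Ni, ui in zip(N, u)]) / s2 ** 3
# ... which equals (prod_i N_i) / s(z)^4, i.e. prod_nu h_nu = (prod_i N_i^{1/2}) (dz)^3 / s^2
assert sp.simplify(prod_h_squared - sp.prod(N) / s2 ** 2) == 0
```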
Combining all, we obtain
\begin{eqnarray}
\label{dtau1}
d \tau_1 \wedge d \tau_2
=
{1 \over 16} ~ t dt \wedge dp_2 ~ \omega _0 (p_1)^2
{{\rm det} \sigma \over \prod _i {\cal N} _i ^{1/2}} \prod _\nu h_\nu (p_2)
\end{eqnarray}
Next, we have the following two identities
\begin{eqnarray}
\label{2identities}
({\rm det} \sigma ) ^4 \vartheta [\delta ]^8
&= &
\prod _{i<j} (a_i-a_j)^2 (b_i-b_j)^2
\nonumber\\
\pi ^{24} ({\rm det} \sigma ) ^{12} \vartheta [\delta ]^8 \Psi _{10} ^2
&= &\left ( \prod _i {\cal N} _i ^4 \right )
\prod _{i<j} (a_i-a_j)^2 (b_i-b_j)^2
\end{eqnarray}
Here $\delta$ is an even spin structure. In the hyperelliptic
representation, it corresponds to a partition of the 6 branch points into two disjoint sets $\{a_1,a_2,a_3\}$ and $\{b_1,b_2,b_3\}$ of three branch points each. The first identity is a classic Thomae formula
\cite{mumford}, vol II, \S 8.
To establish the second identity, we make use of the following
bilinear $\vartheta$-constants, introduced in \cite{IV},
equation (2.38)
\begin{equation}
{\cal M}_{\nu_i\nu_j}
=
\partial_1\vartheta[\nu_i](0,\Omega)\,
\partial_2\vartheta[\nu_j](0,\Omega)
-
\partial_2\vartheta[\nu_i](0,\Omega)
\,
\partial_1\vartheta[\nu_j](0,\Omega)
\end{equation}
Solving for $\partial_I\vartheta[\nu_i](0,\Omega)$ from
(\ref{proportionality}) and using the formula
(\ref{sigma}) for $\omega_I(z)$, we find
\begin{equation}
({\rm det}\,\sigma)\,{\cal M}_{\nu_i\nu_j}
=
4\pi^2\,{\cal N}_{\nu_i}{\cal N}_{\nu_j}(u_i-u_j)
\end{equation}
Taking the products gives
\begin{equation}
\label{M}
({\rm det}\,\sigma)^4{\cal M}_{12}^2
{\cal M}_{23}^2
{\cal M}_{31}^2
{\cal M}_{45}^2
{\cal M}_{56}^2
{\cal M}_{64}^2
=
\prod_{i=1}^6{\cal N}_{\nu_i}^4
\prod_{i<j}(a_i-a_j)^2(b_i-b_j)^2
\end{equation}
However, the ${\cal M}_{\nu_i\nu_j}^2$ have been determined completely explicitly in terms of $\vartheta$-constants in \cite{IV},
equation (4.9)
\begin{equation}
{\cal M}_{\nu_i\nu_j}^2=\pi^4\vartheta[\delta]^2
\prod_{k\not=i,j}\vartheta[\nu_i+\nu_j+\nu_k]^2
\end{equation}
so that
\begin{equation}
{\cal M}_{12}^2{\cal M}_{23}^2{\cal M}_{31}^2
=
{\cal M}_{45}^2{\cal M}_{56}^2{\cal M}_{64}^2
=
\pi^{12}\vartheta[\delta]^4
\Psi_{10}(\Omega).
\end{equation}
Substituting this into (\ref{M}) gives the second identity
in (\ref{2identities}), and (\ref{2identities}) is now established. Taking the ratio of the two identities in
(\ref{2identities}), we find
\begin{equation}
\pi^3\,\Psi_{10}(\Omega)^{1/4} ~ {\rm det}\,\sigma
= \prod_{i=1}^6{\cal N}_{\nu_i}^{ {1\over 2}}
\end{equation}
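The algebra of this last step can be traced symbolically: dividing the second identity in (\ref{2identities}) by the first eliminates $\vartheta[\delta]^8$ and the branch-point products, and an eighth root (positive branch) yields the relation above. A sketch, with all quantities treated as positive symbols:

```python
import sympy as sp

dets, theta8, Psi10, B = sp.symbols('dets theta8 Psi10 B', positive=True)
# B stands for prod_{i<j} (a_i - a_j)^2 (b_i - b_j)^2; theta8 for theta[delta]^8;
# Nprod for prod_i N_i^{1/2}, so that prod_i N_i^4 = Nprod^8
Nprod = sp.symbols('Nprod', positive=True)

first_lhs = dets ** 4 * theta8                             # = B
second_lhs = sp.pi ** 24 * dets ** 12 * theta8 * Psi10 ** 2  # = Nprod^8 * B

# the ratio eliminates theta8 (and the common factor B on the right)
assert sp.simplify(second_lhs / first_lhs - sp.pi ** 24 * dets ** 8 * Psi10 ** 2) == 0
# the eighth power of the claimed relation reproduces that ratio
assert sp.simplify((sp.pi ** 3 * dets * Psi10 ** sp.Rational(1, 4)) ** 8
                   - sp.pi ** 24 * dets ** 8 * Psi10 ** 2) == 0
```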
Comparing with
(\ref{dtau1}),
we obtain in this manner the following asymptotics for
$d \tau_1 \wedge d \tau_2$
\begin{eqnarray}
\label{dtau2}
d \tau_1 \wedge d \tau_2
=
{1 \over 16 \pi ^3 } ~ t dt \wedge dp_2 ~ \omega _0 (p_1)^2
{ 1 \over \Psi _{10} ^{1/4}} \prod _\nu h_\nu (p_2)
+
{\cal O}(t^2).
\end{eqnarray}
The theorem is now an immediate consequence of
(\ref{Psi}), (\ref{dOmega}), and (\ref{dtau2}).
Q.E.D.
\subsection{Degeneration limit for the 3-loop bosonic string}
We recall that the genus $3$ bosonic string measure must satisfy the degeneration constraint (\ref{bosonic}).
Since the genus $2$ bosonic string measure is given by
$c_2\Psi_{10}^{-1}d^3\Omega$, Theorem 1 provides further evidence
that the genus $3$ bosonic string measure is given by
$c_3\Psi_9^{-1}(\Omega^{(3)})d^6\Omega^{(3)}$.
In fact, Theorem 1 also dictates what the coefficient of proportionality between the genera $2$ and $3$ must be
\begin{equation}
c_3={c_2 \over (2\pi)^6}= { 1 \over 2^6 \pi ^{18}}
\end{equation}
As another check, we consider
the separating degeneration limit of the tachyon amplitude,
which is given by the following integral,
where $E(z,w)$ is the prime form,
\begin{eqnarray}
\int \left | {dt \over t^2} \right |^2 \prod _{i<j} \left |
E(z_i,z_j) \right | ^{2 k_i \cdot k_j}
\end{eqnarray}
The behavior of the prime form when $z_i \in \Sigma^{(2)}$ and
$z_j \in \Sigma^{(1)}$ is given by
\begin{eqnarray}
E(z_i,z_j) \to t^{- {1\over 2}} E(z_i,p_2) E(p_1,z_j)
\end{eqnarray}
If the sum of the momenta on $\Sigma^{(2)}$ is $k$,
\begin{eqnarray}
k= \sum _{i, ~z_i\in \Sigma^{(2)}} k_i =
- \sum _{j, ~z_j\in \Sigma^{(1)}} k_j
\end{eqnarray}
then we have the following $t$-dependence,
\begin{eqnarray}
\int \left | {dt \over t^2} \right |^2 |t|^{k^2}
\sim {1 \over k^2 -2}
\end{eqnarray}
which is the expected tachyon pole, with the correct value of the mass squared.
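The small-$t$ behavior of this integral is elementary: writing $t=re^{i\theta}$, so that $|dt\,d\bar t|\to r\,dr\,d\theta$, the angular integral contributes $2\pi$ and the radial integral exhibits the pole. A sympy sketch, with $e=k^2-2$ kept positive so that the integral converges ($k$ being the total momentum defined above):

```python
import sympy as sp

r, e = sp.symbols('r e', positive=True)  # e = k^2 - 2, positive in the convergent region

# |dt/t^2|^2 |t|^{k^2} -> r^{k^2 - 4} * (r dr dtheta); the radial part is:
radial = sp.integrate(r ** (e - 1), (r, 0, 1))
assert sp.simplify(radial - 1 / e) == 0  # i.e. 1/(k^2 - 2): the tachyon pole
```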
\newpage
\section{The genus 3 superstring measure as a degeneration problem}
\setcounter{equation}{0}
Using the formula for the genus $2$ superstring measure
found in \cite{I} and the degeneration formulas of Theorem 1,
we can formulate now more concretely the constraints on the
genus $3$ superstring measure, in the degeneration limit where
the worldsheet $\Sigma$ separates into a torus $\Sigma^{(1)}$
and a surface $\Sigma^{(2)}$ of genus $2$. Let $\Delta$ be an
even genus $3$ spin structure. If, in this degeneration,
$\Delta$ factorizes into two odd spin structures, the leading contribution
of order $t^{-2}$ to $d\mu[\Delta](\Omega^{(3)})$
vanishes, and we need not consider this case.
Henceforth, we assume that $\Delta$ factorizes into
two even spin structures, and denote by $\delta_1$ and
by $\delta_2\equiv\delta$ the even spin structures respectively
on the torus and on the genus $2$ surface $\Sigma^{(2)}$.
\medskip
Let the genus $3$ superstring measure be expressed under the form
(\ref{Ansatz}),
for some density $\Xi_6[\Delta](\Omega^{(3)})$ yet
to be determined.
Recall that
in genus $h=2$, the superstring measure $d\mu[\delta](\Omega^{(2)})$ was shown to be given by \cite{I,IV}
\begin{equation}
d\mu[\delta](\Omega^{(2)})
=
{\vartheta[\delta] ^4 (0,\Omega^{(2)})\,\Xi_6[\delta](\Omega^{(2)})
\over
16\pi^6\Psi_{10}(\Omega^{(2)})}\prod_{1\leq I\leq J\leq 2}d\Omega_{IJ}^{(2)}
\end{equation}
The main expression $\Xi_6[\delta](\Omega^{(2)})$ is given in
\cite{I}, equation (7.1). We shall discuss it further in the next section. The degeneration constraint (\ref{superdeg}), Theorem 1,
and the degeneration formulas (\ref{thetaconst}) for
$\vartheta$-constants imply then that $\Xi_6[\Delta](\Omega^{(3)})$ must satisfy the following limit
\begin{equation}
\label{condition1}
\lim_{t\to 0}
\Xi_6[\Delta](\Omega^{(3)})
=
\eta(\Omega^{(1)})^{12}
\
\Xi_6[\delta](\Omega^{(2)}).
\end{equation}
This is the condition ({\it iii}) formulated in the Introduction.
\medskip
We discuss next the issue of modular invariance for
$d\mu[\Delta](\Omega^{(3)})$.
The full integrand in the amplitude (\ref{fullintegral})
must be invariant under $Sp(6,{\bf Z})$. Under the modular
transformations (\ref{modulartransformation}), we have
\begin{eqnarray}
\label{modular1}
{\rm det}\, {\rm Im} \,\tilde\Omega^{(3)}&=& |{\rm det}\,(C\Omega^{(3)}+D)|^{-2}\
{\rm det}\, {\rm Im}\,\Omega^{(3)}
\nonumber\\
\prod_{I\leq J}
d\tilde\Omega_{IJ}^{(3)}
&=&
{\rm det}\,(C\Omega^{(3)}+D)^{-4}\
\prod_{I\leq J}d\Omega_{IJ}^{(3)}
\end{eqnarray}
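The second transformation law can be checked directly for a convenient element of $Sp(6,{\bf Z})$, the inversion $A=D=0$, $B=-C=-1$, for which $\tilde\Omega^{(3)}=-(\Omega^{(3)})^{-1}$ and $C\Omega^{(3)}+D=\Omega^{(3)}$: the Jacobian of the map on the six independent entries should equal $({\rm det}\,\Omega^{(3)})^{-4}$. A sketch, evaluated at an arbitrary sample point to keep the symbolic determinant cheap:

```python
import sympy as sp

a = sp.symbols('a11 a12 a13 a22 a23 a33')
O = sp.Matrix([[a[0], a[1], a[2]],
               [a[1], a[3], a[4]],
               [a[2], a[4], a[5]]])
Ot = -O.inv()  # the modular inversion Omega -> -Omega^{-1}

# Jacobian of the six independent entries Omega~_{IJ}, I <= J
entries = sp.Matrix([Ot[0, 0], Ot[0, 1], Ot[0, 2], Ot[1, 1], Ot[1, 2], Ot[2, 2]])
J = entries.jacobian(sp.Matrix(a))

# evaluate at a generic (hypothetical) sample point with nonzero determinant
vals = dict(zip(a, (2, 1, 0, 3, 1, 5)))
assert sp.simplify(J.subs(vals).det() - O.subs(vals).det() ** (-4)) == 0
```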
At first sight, in genus $3$, we have difficulties due to the fact that
the expression $\Psi_9(\Omega^{(3)})$
is defined only through its square, $\Psi_{18}(\Omega^{(3)})$.
However, the ambiguity in taking square roots here should not be
relevant in string theory: for the superstring, $\Psi_9$ and its
conjugate appear in each chiral sector. This is also the case for the
heterotic string, since we have seen that $\Psi_9$ appears
in the chiral measure for the bosonic string in the critical dimension,
and this is unaffected by compactification.
Thus the sign ambiguity in $\Psi_9$ can be ignored.
In analogy with the genus 2 case, we shall impose then the
modular transformation law ({\it ii}) described in the Introduction
on the unknown term $\Xi_6[\Delta](\Omega^{(3)})$
\footnote{A slightly less restrictive requirement is to allow
in ({\it ii}) an additional phase $\epsilon(M)$ depending only
on the modular transformation $M$, but not on the spin structure
$\Delta$. Such additional phases do not affect significantly
our subsequent construction of candidates for $\Xi_6[\Delta](\Omega^{(3)})$.}. This condition implies that the
superstring chiral measure $d\mu[\Delta](\Omega^{(3)})$ transforms covariantly under modular transformations without any
phase factor
\begin{equation}
d\mu[\tilde\Delta](\tilde\Omega^{(3)})
=
{\rm det}\,(C\Omega^{(3)}+D)^{-5}
\
d\mu[\Delta](\Omega^{(3)})
\end{equation}
so that a manifestly modular invariant GSO projection is given by
\begin{equation}
\sum_{\Delta}d\mu[\Delta](\Omega^{(3)}).
\end{equation}
This completes our discussion of the three conditions
({\it i}-{\it iii}) formulated in the Introduction
for the modular covariant form $\Xi_6[\Delta](\Omega^{(3)})$.
\newpage
\section{Ans\"atze for the superstring chiral measure}
\setcounter{equation}{0}
The goal of this section is to construct modular covariant
forms in genus $3$ satisfying the constraints ({\it ii})
and ({\it iii}). The starting point is the expression
$\Xi_6[\delta](\Omega^{(2)})$ in genus $2$. Our strategy is to
find and analyze analogous expressions in genus $3$. Although there are several natural analogues, it will turn out that the
degeneration condition ({\it iii}) is quite rigid,
and singles out a very small set of candidates.
\subsection{The form $\Xi_6[\delta](\Omega^{(2)})$ in genus $2$}
We begin by recalling the form $\Xi_6[\delta](\Omega^{(2)})$ in genus $2$.
It was derived directly from the gauge-fixed genus $2$ superstring measure,
and its original expression was heavily dependent on the fact that the
worldsheet had genus $2$ (see \cite{I}, eq. (7.1)). More recently, two
different expressions were found for $\Xi_6[\delta](\Omega^{(2)})$
which do admit generalizations to higher genus \cite{dp04}.
To describe them, recall that a triplet $\{\delta_1,\delta_2,\delta_3\}$
is said to be {\sl asyzygous} if $e(\delta_1,\delta_2,\delta_3)=-1$
(respectively {\sl syzygous} when +1),
using the usual definitions of the signatures on pairs and triples of spin
structures,
\begin{eqnarray}
\label{triple}
\<\delta_i|\delta_j\> & \equiv & \exp (4\pi i(\delta_i'\delta_j''-\delta_i''\delta_j'))
\nonumber \\
e(\delta_1, \delta_2, \delta_3) & \equiv &
\<\delta_1|\delta_2\>\,\<\delta_2|\delta_3\>\,\<\delta_3|\delta_1\>
\end{eqnarray}
More generally,
we define as in \cite{dp04} an $N$-tuple of spin structures
to be {\it totally asyzygous},
if any triplet of distinct spin structures in the $N$-tuple
is asyzygous
\begin{eqnarray}
&&
\{\delta_1,\cdots,\delta_N\} \ {\rm totally\ asyzygous}
\\ && \hskip .5in
\Leftrightarrow
\
\{\delta_i,\delta_j,\delta_k\}
\ {\rm asyzygous},
\ \ {\rm for \ all} \ i,j,k \ {\rm pairwise\ distinct}.
\nonumber
\end{eqnarray}
The notion of a totally asyzygous $N$-tuple is modular invariant, since the cyclic product in (\ref{triple})
of relative signatures for a triple
of spin structures is itself modular invariant.
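These signatures are easily implemented. With characteristics encoded as bit vectors $b=(2\delta',2\delta'')$, one has $\<\delta_i|\delta_j\>=(-1)^{b_i'\cdot b_j''-b_i''\cdot b_j'}$, a symmetric $\pm1$ pairing, so that $e$ is invariant under all permutations of the triple. A sketch (encoding ours):

```python
from itertools import product, permutations

H = 3  # genus; bit vectors of length 2H encode twice the half-characteristic

def pairing(d1, d2):
    # <d1|d2> = exp(4 pi i (d1'.d2'' - d1''.d2')) = (-1)^{b1'.b2'' - b1''.b2'}
    x = (sum(p * q for p, q in zip(d1[:H], d2[H:]))
         - sum(p * q for p, q in zip(d1[H:], d2[:H])))
    return (-1) ** x

def e_triple(d1, d2, d3):
    # cyclic product defining the (a)syzygy of a triple
    return pairing(d1, d2) * pairing(d2, d3) * pairing(d3, d1)

chars = list(product([0, 1], repeat=2 * H))
d1, d2, d3 = chars[1], chars[7], chars[42]  # arbitrary sample triple
assert all(pairing(x, y) == pairing(y, x) for x in chars[:8] for y in chars[:8])
assert e_triple(d1, d2, d3) in (-1, 1)
# the triple signature is invariant under all permutations of its arguments
assert len({e_triple(*p) for p in permutations((d1, d2, d3))}) == 1
```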
\medskip
Returning now to $\Xi_6[\delta](\Omega^{(2)})$, the first alternate expression
involves $4$-th powers of $\vartheta$-constants (as does the original expression in \cite{I}, eq. (7.1)) and is given by
\begin{equation}
\label{Xi1}
\Xi_6[\delta](\Omega^{(2)})
=
-{1\over 2}\sum_{ { \{\delta,\delta_1,\delta_2,\delta_3\} \atop {\rm tot.asyz.}} }
\
\prod_{i=1}^3\<\delta|\delta_i\>
\
\vartheta[\delta_i](0,\Omega^{(2)})^4
\end{equation}
The notation indicates that, for given spin structure $\delta$,
the summation runs over all triples $\{\delta_1,\delta_2,\delta_3\}$
such that $\{\delta,\delta_1,\delta_2,\delta_3\}$ forms
a totally asyzygous quartet. The second alternate expression
for $\Xi_6[\delta](\Omega^{(2)})$ involves only squares of $\vartheta$-constants,
but it requires summation over certain sextets
of even spin structures. To identify which sextets, we define a
sextet $\{\delta_1,\cdots,\delta_6\}$ of spin structures in genus
$2$ to be {\it admissible} if it can be decomposed into three pairs
\begin{equation}
\{\delta_1,\cdots,\delta_6\}
=
\{\delta_{i_1},\delta_{i_2}\}
\cup
\{\delta_{i_3},\delta_{i_4}\}
\cup
\{\delta_{i_5},\delta_{i_6}\},
\end{equation}
with the union of any two pairs of the decomposition forming
a totally asyzygous quartet. For a given spin structure $\delta$,
we define the sextet $\{\delta_i\}$ to be {\it $\delta$-admissible}
if it is admissible and it does not contain $\delta$.
With this definition, the second alternative expression
for $\Xi_6[\delta](\Omega^{(2)})$ is given by
\begin{equation}
\label{Xi2}
\Xi_6[\delta](\Omega^{(2)})
=
{1\over 2}\sum_{\{\delta_i\}
\,
\delta-adm.}
\
\epsilon(\delta;\{\delta_i\})
\prod_{i=1}^6\vartheta[\delta_i](0,\Omega^{(2)})^2
\end{equation}
Here the signs $\epsilon(\delta;\{\delta_i\})$ are related to one another by modular transformations
\begin{equation}
\label{phase2}
\epsilon(M\delta;\{M\delta_i\})
\
\prod_{i=1}^6\epsilon^2(\delta_i,M)
=
\epsilon^4(\delta,M)
\
\epsilon
(\delta;\{\delta_i\}),
\qquad
M\in Sp(4,{\bf Z}),
\end{equation}
where $\epsilon^4(\delta,M)$ is the same factor occurring in the
transformation law for $\vartheta^4[\delta]$. An explicit expression for the
signs $\epsilon$ was given in \cite{dp04}.
\subsection{Ans\"atze in genus 3}
The preceding formulas for $\Xi_6[\delta](\Omega)$ in genus 2 suggest several natural extensions to genus~$3$. We discuss them below. The main issue is whether they can satisfy the
desired conditions ({\it i}), ({\it ii}), and ({\it iii})
listed in the Introduction, which are required for any viable
Ansatz for the genus $3$ superstring chiral measure.
\subsubsection{Ansatz in terms of asyzygous
quartets of spin structures}
The first alternate expression (\ref{Xi1}) for $\Xi_6[\delta](\Omega^{(2)})$
clearly makes sense for arbitrary genus, and in particular for genus $3$.
Thus we are dealing here with an Ansatz for $\Xi_6[\Delta](\Omega^{(3)})$ involving summations over spin structures $\{\Delta_1,\Delta_2,\Delta_3\}$
which together with the given spin structure $\Delta$,
form a totally asyzygous quartet. The modular covariant form which it defines
has been studied in \cite{dp04}, where it was denoted by
$\Xi_6^\#[\Delta](\Omega^{(3)})$. However, its degeneration limits, as
determined in \cite{dp04}, Theorem 5, do not satisfy the degeneration constraint (\ref{condition1}) for the genus $3$ superstring measure.
Thus this Ansatz in terms of asyzygous quartets of spin structures
must be dropped from contention.
\subsubsection{Ans\"atze in terms of admissible sextets of spin structures}
We turn then to several Ans\"atze which can be viewed as generalizations to genus 3 of the expression (\ref{Xi2})
for $\Xi_6[\delta](\Omega)$ in terms of $\delta$-admissible sextets. First, in complete analogy with the genus $2$ case,
we define a sextet $\{\Delta_1,\cdots,\Delta_6\}$ to
be {\it admissible} if it can be decomposed as
\begin{equation}
\label{admissible}
\{\Delta_{i_1},\Delta_{i_2}\}
\cup
\{\Delta_{j_1},\Delta_{j_2}\}
\cup
\{\Delta_{k_1},\Delta_{k_2}\}
\end{equation}
with any two pairs constituting a totally asyzygous quartet.
Given spin structure $\Delta$, a sextet is said to be $\Delta$-{\it admissible} if it is admissible, and it does not
contain $\Delta$.
\medskip
Despite the similarity in the definitions, there is in practice a
fundamental difference between admissible sextets of spin
structures in genus 2 and in genus 3: if
\begin{eqnarray}
s=
\{\delta_{i_1},\delta_{i_2}\}
\cup
\{\delta_{j_1},\delta_{j_2}\}
\cup
\{\delta_{k_1},\delta_{k_2}\}
\end{eqnarray}
is an admissible sextet in genus 2, then the triplets
$\{ \delta _{i_\alpha}, \delta _{j_\beta}, \delta _{k_\gamma} \}$, with $\alpha,
\beta, \gamma =1,2$, are automatically syzygous.
Furthermore, if the sextet is $\delta$-admissible,
then the following triplet
signatures are automatically determined,
\begin{eqnarray}
\label{genus2triplets}
e(\delta , \delta _{i _1}, \delta _{i_2}) = e(\delta , \delta _{j _1}, \delta _{j_2}) =
e(\delta , \delta _{k_1}, \delta _{k_2})=+1
\end{eqnarray}
This may easily be inferred by inspection of Table 4 in \cite{dp04}.
\medskip
This is no longer true for genus 3: in an admissible
sextet $\{\Delta_1,\cdots,\Delta_6\}$, the triplets
$\{ \Delta _{i_\alpha}, \Delta _{j_\beta}, \Delta _{k_\gamma} \}$
need not all be syzygous (or all asyzygous).
Thus the admissible sextets in genus 3 fall into 2 categories:
\medskip
(1) All triplets $\{ \Delta _{i_\alpha}, \Delta _{j_\beta}, \Delta _{k_\gamma} \}$
are asyzygous, so that the whole sextet
$\{\Delta_1,\cdots,\Delta_6\}$ is {\it totally asyzygous};
\medskip
(2) At least one triplet
$\{ \Delta _{i_\alpha}, \Delta _{j_\beta}, \Delta _{k_\gamma} \}$
is syzygous.
In this case, the relations (\ref{genus2triplets}) also
do not follow
from $\Delta$-admissibility. In particular, one
has a classification depending on the following signs
\begin{eqnarray}
\rho _1 & = & e(\Delta, \Delta _{j_1}, \Delta _{k_1})
\nonumber \\
\rho _2 & = & e(\Delta, \Delta _{k_1}, \Delta _{i_1})
\nonumber \\
\rho _3 & = & e(\Delta, \Delta _{i_1}, \Delta _{j_1})
\end{eqnarray}
The four resulting cases, namely $[+++]$, $[++-]$, $[+--]$ and $[---]$,
are non-empty, and the modular group acts within each case,
though not necessarily transitively.
For convenience, we refer to all these cases as cases
of {\it partially asyzygous} sextets.
\newpage
$\bullet$ {\it Ans\"atze in terms of totally asyzygous sextets}
\medskip
We shall examine two Ans\"atze, in terms of totally asyzygous sextets with sign assignments.
\begin{eqnarray}
\label{AB}
&({\rm A})&\ \ \Xi_6[\Delta](\Omega^{(3)})
\sim
\sum_{\{\Delta_i\} {\rm tot. \, asyz.}
\atop
\Delta\notin\{\Delta_i\}}
\epsilon(\Delta;\{\Delta_i\})\prod_{i=1}^6\vartheta[\Delta_i](0,\Omega^{(3)})^2
\nonumber\\
&({\rm B})&\ \ \Xi_6[\Delta](\Omega^{(3)})
\sim
\bigg (
\sum_{\{\Delta_i\},\{\Delta_i'\} {\rm tot.\, asyz.}
\atop
\Delta\notin\{\Delta_i\},\{\Delta_i'\}}
\epsilon(\Delta;\{\Delta_i\},\{\Delta_i'\})\prod_{i=1}^6\vartheta[\Delta_i](0,\Omega^{(3)})^2
\vartheta[\Delta_i'](0,\Omega^{(3)})^2
\bigg )^{1/2}
\nonumber\\
\end{eqnarray}
A key issue in these Ans\"atze is whether sign assignments exist
which are consistent with modular transformations.
The first Ansatz (A) is simpler, and it would
satisfy the degeneration constraint (\ref{condition1}) if a consistent sign
assignment existed. However, no consistent assignment exists,
which is why the second Ansatz (B) is needed.
This second Ansatz (B) turns out to be the only viable candidate
for $\Xi_6[\Delta](\Omega^{(3)})$ among all the ones examined in
the present paper. Its full treatment requires the rest of
the paper. We postpone it then to the next section \S 6,
and complete now the discussion of the remaining Ans\"atze,
which involve partially asyzygous sextets.
\bigskip
$\bullet$ {\it Ansatz in terms of partially asyzygous sextets}
\medskip
In these remaining cases, the natural Ans\"atze would be
\begin{eqnarray}
\Xi_6[\Delta](\Omega^{(3)})
\sim
\sum_{[\rho_1,\rho_2,\rho_3]}
\epsilon(\Delta;\{\Delta_i\})\prod_{i=1}^6\vartheta[\Delta_i](0,\Omega^{(3)})^2
\end{eqnarray}
where the summation would be over all $\Delta$-admissible sextets
$\{\Delta_1,\cdots,\Delta_6\}$ with some fixed
sign assignment $[\rho_1,\rho_2,\rho_3]$,
with not all $\rho_i$ equal to $-1$.
The first task is to examine whether consistent phase assignments $\epsilon(\Delta;\{\Delta_i\})$ exist. One does this orbit by
orbit under the modular group which leaves $\Delta$ invariant,
and uses the usual modular sign factors in the transformations
of $\vartheta ^2$. The results are as follows,
where we have numbered the genus 3 even spin structures as
in Appendix \S C of \cite{dp04},
\begin{enumerate}
\item For the cases $[---]$, $[+--]$ and $[++-]$,
all orbits produce inconsistent sign assignments and are ruled out;
\item For the case $[+++]$, one orbit with 1680 sextets (generated by
sextet $\{ 2,4,5,6,33,35\}$) and one orbit with 3360 sextets (generated by
sextet $\{ 5,7,12,13,22,30\}$) both generate {\sl consistent sign assignments};
\item For the case $[+++]$, there is one remaining orbit (generated by
sextet $\{ 2,4,5,9,27,32\}$) which produces an inconsistent sign assignment.
\end{enumerate}
A simple example showing the non-existence of consistent phases for one of these orbits of $\Delta$-admissible, partially asyzygous sextets is given in section \S 6.2.3 below.
\medskip
The actual sums in both cases of item 2 above are non-vanishing.
In the limit where the surface degenerates to a genus 2 times
genus 1 surface, both sums converge to the same limit as
the form $\Xi _6 ^\# [\delta ](\Omega ^{(3)})$
of \cite{dp04}, which is inconsistent with the requirement
({\it iii}) in the Introduction.
Thus, even though the sign assignments are consistent,
the limits are not and the cases are all ruled out.
\medskip
Although this analysis rules out a construction of $\Xi_6[\Delta](\Omega^{(3)})$ in terms of partially asyzygous $\Delta$-admissible sextets, it is in principle still possible that an Ansatz for
$\Xi_6[\Delta](\Omega^{(3)})^2$ can be obtained in terms of
pairs of partially asyzygous $\Delta$-admissible sextets,
just as we outlined in the preceding case (B) of totally
asyzygous $\Delta$-admissible sextets.
However, there does not appear to be any clear way of
recapturing $\Xi_6[\delta](\Omega^{(2)})^2$ from the degeneration limits of sums over pairs of partially asyzygous sextets.
\subsubsection{Ansatz in terms of dozens of spin structures}
Since the previous Ans\"atze have not produced viable candidates
for $\Xi_6[\Delta](\Omega^{(3)})$ itself as a polynomial in
$\vartheta^2$, we may ask whether polynomials in $\vartheta$ could work,
\begin{equation}
\Xi_6[\Delta](\Omega^{(3)})
=
\sum_{\{\Delta_i\}} \epsilon(\Delta;\{\Delta_i\}_{1\leq i\leq 12})
\prod_{i=1}^{12}\vartheta[\Delta_i](0,\Omega^{(3)}),
\end{equation}
where the summation runs over a suitable set of dozens $\{\Delta_1,\cdots,\Delta_{12}\}$ of spin structures.
Here the expression for $\Xi_6[\delta](\Omega)$ in genus 2
provides little guidance for choosing this set.
There are conceivably many possibilities. But,
given the good degeneration limits of totally asyzygous sextets,
it is natural to consider products of pairs of totally asyzygous
sextets,
\begin{eqnarray}
\Xi _6 ' [\Delta ](\Omega ^{(3)})
=
\sum _ {\{ s_1,s_2 \} \in {{\cal Q} _{pq}}} \epsilon (\Delta, s_1,s_2)
\prod _{i=1} ^{12} \vartheta [\Delta _i] (0,\Omega ^{(3)})
\end{eqnarray}
where the sum is over pairs of totally asyzygous sextets,
\begin{eqnarray}
s_1 & = &
\{ \Delta _1 , \Delta _2 , \Delta _3 , \Delta _4 , \Delta _5 , \Delta _6 \}
\nonumber \\
s_2 & = &
\{ \Delta _7 , \Delta _8 , \Delta _9 , \Delta _{10} , \Delta _{11} , \Delta _{12} \}
\end{eqnarray}
and the ${\cal Q}_{pq}$ denote the different orbits of $\Delta$-admissible
pairs of totally asyzygous sextets under the modular subgroup
leaving $\Delta$ invariant. The orbits ${\cal Q}_{pq}$ are described in detail in \S 6.3.2 below.
\medskip
Clearly, this construction can make sense only for orbits ${\cal Q}_{pq}$
for which the phases $\epsilon (\Delta, s_1,s_2)^2 $ can be consistently
defined. But this problem was already solved (by computer)
in the treatment of the Ansatz (B) in terms of pairs of totally
asyzygous sextets (see \S 6.3.2 and the subsequent discussion). It was found that consistent phases $\epsilon(\Delta,s_1,s_2)^2$ exist
for the orbits ${\cal Q}_{01}$, ${\cal Q}_{02}$, ${\cal Q}_{13}$, ${\cal Q}_{20}$, ${\cal Q}_{21}$, ${\cal Q}_{22}$, and ${\cal Q}_{23}$,
but not for the orbits
${\cal Q}_{11}$, ${\cal Q}_{12}$, and ${\cal Q}_{3}$. A computer calculation
shows, however, that in none of these orbits can the sign $\epsilon (\Delta, s_1,s_2)$
itself be consistently defined.
Thus this particular Ansatz for $\Xi_6[\Delta](\Omega^{(3)})$ is also ruled out.
\newpage
\section{The Ans\"atze in terms of totally asyzygous sextets}
\setcounter{equation}{0}
To determine the degeneration behavior of the genus $3$
candidates (A) and (B), we need to determine the degeneration
behavior of genus $3$ totally asyzygous sextets of even spin structures.
\subsection{Degenerations of totally asyzygous sextets}
The basic fact is the following:
\bigskip
\noindent
{\bf Lemma 1}. {\it Let $\{\Delta_i\}_{1\leq i\leq 6}$ be a sextet of genus
$3$ even spin structures. Assume that it is totally asyzygous and that
each $\Delta_i$ degenerates into even spin structures in genera 2 and 1.
Let $\delta_1,\cdots,\delta_6$ be the 6 genus $2$ even spin structures
which arise in this manner. Then the sextet $\{\delta_1,\cdots,\delta_6\}$
is an admissible sextet of genus $2$ even spin structures in the
sense defined above, that is, it can be divided into three pairs
such that the union of any two pairs is a totally asyzygous quartet.
Furthermore, we have}
\begin{equation}
\prod_{i=1}^6
\vartheta[\Delta_i](0,\Omega^{(3)})^2
\to
2^4\eta(\Omega^{(1)})^{12}
\prod_{i=1}^6
\vartheta[\delta_i](0,\Omega^{(2)})^2
\end{equation}
\bigskip
\noindent
{\it Proof.} We recall that there exist no totally asyzygous quintets at genus 2
(and thus no genus 2 totally asyzygous sextets, septets, etc.).
This can be seen by direct inspection of the tables of asyzygies
in genus $2$ provided in \cite{dp04}.
Let $\mu_1,\cdots,\mu_6$ be the genus $1$ spin structures arising
from the degeneration of $\Delta_1,\cdots,\Delta_6$.
By assumption, they are even. We examine in turn all possible arrangements for
$\mu_1,\cdots,\mu_6$ :
\begin{itemize}
\item Assume that $\mu_1 , \cdots , \mu_6$ take at most 2 distinct values
amongst the 3 possible even spin structures at genus 1. Then, it
follows that $e(\mu_i,\mu_j,\mu_k)=+1$ for any triplet of $\mu$'s arising in the sextet. For the genus 3 sextet to be totally
asyzygous, the genus 2 sextet $\delta _1, \cdots , \delta _6$
must be totally asyzygous, but this is impossible.
\item Assume that five of the six $\mu_1 , \cdots , \mu_6$ (say
$\mu_1 , \cdots , \mu_5$ for definiteness) take at most 2 distinct
values amongst the 3 possible even spin structures at genus 1.
Then $e(\mu_i,\mu_j,\mu_k)=+1$ for $1\leq i,j,k \leq 5$.
For the genus 3 sextet to be totally asyzygous,
the genus 2 quintet $\delta _1, \cdots , \delta _5$
must be totally asyzygous, but this is impossible.
\item The only remaining possibility is that amongst the six
$\mu_1 , \cdots , \mu_6$, each of the 3 distinct genus 1 even
spin structures (which we denote $\mu_2 , \mu_3 , \mu_4$
by slight abuse of notation) occurs precisely twice.
\end{itemize}
Thus, up to permutations of the $\mu$'s, we have
\begin{eqnarray}
\label{genus3split}
\Delta _1 = \left ( \matrix{\delta _1 \cr \mu _2 \cr } \right )
\hskip 1in
\Delta _3 = \left ( \matrix{\delta _3 \cr \mu _3 \cr } \right )
\hskip 1in
\Delta _5 = \left ( \matrix{\delta _5 \cr \mu _4 \cr } \right )
\nonumber \\
\Delta _2 = \left ( \matrix{\delta _2 \cr \mu _2 \cr } \right )
\hskip 1in
\Delta _4 = \left ( \matrix{\delta _4 \cr \mu _3 \cr } \right )
\hskip 1in
\Delta _6 = \left ( \matrix{\delta _6 \cr \mu _4 \cr } \right )
\end{eqnarray}
It is clear that the quartets
$\{\delta _1 ~ \delta _2 ~ \delta _3 ~ \delta _4\}$,
$\{\delta _1 ~ \delta _2 ~ \delta _5 ~ \delta _6\}$,
$\{\delta _3 ~ \delta _4 ~ \delta _5 ~ \delta _6\}$,
are totally asyzygous, and that they are the only totally asyzygous quartets within $\{\delta_1,\cdots,\delta_6\}$.
This proves the first part of Lemma 1. The second part
follows immediately from the degeneration formulas for
$\vartheta$-constants and from the identity (\ref{genus1identity}). Q.E.D.
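The two combinatorial inputs of this proof (the genus 1 syzygy pattern and the absence of totally asyzygous quintets in genus 2) are easily checked by brute force. The following sketch is our own illustration, not the computer code of \cite{dp04}; it encodes an even spin structure as a pair of vectors in $\{0,1\}^g$ with even pairing, and uses the standard criterion that a triple of even characteristics is asyzygous precisely when the componentwise mod 2 sums of the two half-characteristics pair oddly.

```python
from itertools import combinations, product

def even_chars(g):
    """All even spin structures at genus g, encoded as pairs (x, y) of
    vectors in {0,1}^g (twice the half-integer characteristics) with x.y even."""
    return [(x, y) for x in product((0, 1), repeat=g)
                   for y in product((0, 1), repeat=g)
                   if sum(a * b for a, b in zip(x, y)) % 2 == 0]

def asyzygous(a, b, c):
    """A triple of even characteristics is asyzygous iff
    (a'+b'+c').(a''+b''+c'') is odd, with the sums taken mod 2."""
    x = [(a[0][i] + b[0][i] + c[0][i]) % 2 for i in range(len(a[0]))]
    y = [(a[1][i] + b[1][i] + c[1][i]) % 2 for i in range(len(a[1]))]
    return sum(u * v for u, v in zip(x, y)) % 2 == 1

def totally_asyzygous(chars):
    return all(asyzygous(*t) for t in combinations(chars, 3))

# Genus 1: three even spin structures; any triple with a repetition is
# syzygous (e = +1), while the triple of all three distinct ones is asyzygous.
mus = even_chars(1)
assert len(mus) == 3
assert asyzygous(*mus)
assert all(not asyzygous(m1, m1, m2) for m1 in mus for m2 in mus)

# Genus 2: totally asyzygous quartets exist, but quintets do not.
evens2 = even_chars(2)
assert len(evens2) == 10
assert any(totally_asyzygous(q) for q in combinations(evens2, 4))
assert not any(totally_asyzygous(q) for q in combinations(evens2, 5))
```

The quartet search exits at the first totally asyzygous quartet found, so the whole check runs in a fraction of a second.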
\subsection{Orbits of sextets}
Since the genus $2$ expression $\Xi_6[\delta](\Omega^{(2)})$ is built from genus $2$ admissible sextets, Lemma 1 shows that totally asyzygous sextets have the potential to produce a form $\Xi_6[\Delta](\Omega^{(3)})$ tending
to $\Xi_6[\delta](\Omega^{(2)})$ in the degeneration limit.
Fix an even spin structure $\Delta$.
In analogy with the genus $2$ case, we
define a $\Delta$-admissible sextet of even spin structures to be a totally asyzygous sextet $\{\Delta_i\}$ not containing $\Delta$. We can then restrict the sextets
entering the candidate for $\Xi_6[\Delta](\Omega^{(3)})$
to the $\Delta$-admissible ones.
This justifies the form given in
(\ref{AB}) for the Ans\"atze (A) and (B).
\medskip
We need to consider the degenerations of $\Delta$-admissible sextets
$\{\Delta_i\}$. We can assume that
\begin{equation}
\label{factorization}
\Delta=\pmatrix{\delta\cr \mu}
\end{equation}
with both lower genus spin structures $\mu$ and $\delta$
even, since otherwise $\Delta$ will not contribute to the leading asymptotics. Let $\{\delta_i\}$ be the sextet of genus $2$ spin structures obtained by factoring $\{\Delta_i\}$.
We can assume that they are all even, since otherwise $\{\Delta_i\}$ will again not contribute to the leading asymptotics. Now
Lemma 1 guarantees that the sextet $\{\delta_i\}$ is admissible
in the genus $2$ sense. However, the condition $\Delta\notin\{\Delta_i\}$ does not guarantee that $\delta\notin \{\delta_i\}$, i.e., the $\Delta$-admissibility of
the genus $3$ sextet $\{\Delta_i\}$ does not guarantee the $\delta$-admissibility of the genus $2$ sextet $\{\delta_i\}$.
Thus we have to analyze the contributions of genus $2$ admissible
sextets which are not $\delta$-admissible. We also have to
determine the exact multiplicities with which $\delta$-admissible and non-$\delta$-admissible sextets occur
in the degeneration of an $Sp(6,{\bf Z})$ orbit of $\Delta$-admissible sextets in genus $3$. The first issue is addressed by Lemma 2 below. The second issue will be addressed by
a computer listing of all possibilities. The results will be described in subsequent sections.
\subsubsection{Orbits of admissible sextets in genus $2$}
The list of admissible sextets in genus $2$ is provided in \cite{dp04}, Table 4.
By a simple inspection of that list and the actions of modular transformations
in the same table, we find that
\medskip
$\bullet$ There are no totally asyzygous quintets,
and a fortiori, no totally asyzygous sextets in genus $2$;
\medskip

$\bullet$ In genus $2$, there are 15 admissible sextets.
The group $Sp(4,{\bf Z})$ acts transitively on the set
of admissible sextets;
\medskip

$\bullet$ Given a genus $2$ even spin structure $\delta$, exactly $6$ of the 15 admissible sextets do not contain $\delta$, while the remaining $9$ do.
We denote these sets of sextets by $s[\delta]$ and $s^c[\delta]$
respectively
\begin{eqnarray}
\label{S}
s[\delta]&=&\bigg\{\{\delta_i\} \ {\rm admissible\ sextet}\ ;\
\delta\notin \{\delta_i\}\bigg\}
\nonumber\\
s^c[\delta]&=&\bigg\{\{\delta_i\} \ {\rm admissible\ sextet}\ ;
\ \delta\in \{\delta_i\}\bigg\};
\end{eqnarray}
$\bullet$ Let $Sp[\delta](4,{\bf Z})$ be the subgroup of
$Sp(4,{\bf Z})$ fixing $\delta$. Then $Sp[\delta](4,{\bf Z})$
acts transitively on both $s[\delta]$ and $s^c[\delta]$.
In particular, if the phases $\epsilon(\delta;\{\delta_i\})$
satisfy the transformation (\ref{phase}), then all the phases
in each orbit $s[\delta]$ or $s^c[\delta]$ are uniquely
determined by the phase of any single element of
$s[\delta]$ or $s^c[\delta]$ respectively.
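These counting statements can be confirmed by direct enumeration. The sketch below is an illustration of ours (not the original tables of \cite{dp04}); it uses the $\{0,1\}$-vector encoding of even characteristics, and tests the admissibility condition literally: a sextet is admissible if some splitting into three pairs makes the union of any two pairs a totally asyzygous quartet.

```python
from itertools import combinations, product

def even_chars(g):
    # even spin structures at genus g as pairs (x, y), x.y even mod 2
    return [(x, y) for x in product((0, 1), repeat=g)
                   for y in product((0, 1), repeat=g)
                   if sum(a * b for a, b in zip(x, y)) % 2 == 0]

def asyzygous(a, b, c):
    # asyzygous iff the mod-2 sums of the half-characteristics pair oddly
    x = [(a[0][i] + b[0][i] + c[0][i]) % 2 for i in range(len(a[0]))]
    y = [(a[1][i] + b[1][i] + c[1][i]) % 2 for i in range(len(a[1]))]
    return sum(u * v for u, v in zip(x, y)) % 2 == 1

def totally_asyzygous(chars):
    return all(asyzygous(*t) for t in combinations(chars, 3))

def pairings(elts):
    """All 15 ways to split a list of 6 elements into 3 unordered pairs."""
    if not elts:
        yield ()
        return
    first, rest = elts[0], elts[1:]
    for i in range(len(rest)):
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield ((first, rest[i]),) + tail

def admissible(sextet):
    """Sextet divides into 3 pairs, union of any 2 pairs totally asyzygous."""
    return any(all(totally_asyzygous(p + q) for p, q in combinations(pr, 2))
               for pr in pairings(list(sextet)))

evens = even_chars(2)                              # 10 even genus-2 structures
adm = [s for s in combinations(evens, 6) if admissible(s)]
assert len(adm) == 15                              # 15 admissible sextets
for delta in evens:                                # 6 avoid delta, 9 contain it
    assert sum(delta not in s for s in adm) == 6
    assert sum(delta in s for s in adm) == 9
```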
\bigskip
\noindent
{\bf Lemma 2.} {\it Let $\delta$ be a fixed genus $2$ even spin structure.
Assume that the phases $\epsilon(\delta;\{\delta_i\})$ satisfy the condition (\ref{phase2}) for all $M\in Sp[\delta](4,{\bf Z})$. Then we have
\begin{eqnarray}
\label{SandSc}
\sum_{\{\delta_i\}\in s[\delta]}
\epsilon(\delta;\{\delta_i\})
\prod_{i=1}^6\vartheta[\delta_i]^2
&=&
\pm\,2\,\Xi_6[\delta](\Omega^{(2)})
\\
\sum_{\{\delta_i\}\in s^c[\delta]}
\epsilon(\delta;\{\delta_i\})
\prod_{i=1}^6\vartheta[\delta_i]^2
&=&
0.
\end{eqnarray}
The $\pm$ sign in the first identity is a consequence of the fact that the
phases $\epsilon(\delta;\{\delta_i\})$ in each orbit $s[\delta]$ or $s^c[\delta]$
are determined only up to a global sign.}
\bigskip
\noindent
{\it Proof.} The first identity in (\ref{SandSc}) is just a reformulation of (\ref{Xi2}), and was proved in \cite{dp04}.
To establish the second identity, we go to the hyperelliptic representation.
\medskip
Let $s^2=\prod_{i=1}^6(x-p_i)$ be a hyperelliptic representation
for the surface $\Sigma^{(2)}$
\footnote{The branch points $p_i$ here should not be confused with the punctures $p_1$ and $p_2$ in the degeneration construction of \S 2. The notation $p_i$ for the branch points
is in accord with \cite{dp04}, which is used heavily in the proof of Lemma 2.}. As before, we identify the spin structure $\delta$
with a partition of the 6 branch points into two sets of 3 branch points each, say
$\delta \sim \{ a_1,a_2,a_3 \} \cup \{ b_1,b_2,b_3\}$.
The Thomae formula (for genus 2) takes the following form,
\begin{eqnarray}
\vartheta [\delta ] ^2 = \epsilon C
x_{a_1 a_2} x_{a_2 a_3} x_{a_3 a_1}
x_{b_1 b_2} x_{b_2 b_3} x_{b_3 b_1}
\hskip .5in
x_{p_i p_j} = \sqrt{p_i-p_j}
\end{eqnarray}
Here, $\epsilon^4=1$, and $C$ is $\delta$-independent.
Actually, we need the explicit correspondence only for the
sextets themselves. Given the normalization of a single
sextet, the correspondences for all others may be derived
using the action of modular transformations on both sides.
We fix the normalization of one sextet, say $(125690)$,
as in the first line of (\ref{deft}) below, and determine the
hyperelliptic expressions for the others by modular transformations,
\begin{eqnarray}
\label{deft}
t_1 \equiv + (125690) & = & + (p_1-p_6) (p_2-p_4) (p_3-p_5) ~ C^6 V(p_i)
\nonumber \\
t_2 \equiv + (137890) & = & - (p_1-p_3) (p_2-p_5) (p_4-p_6) ~ C^6 V(p_i)
\nonumber \\
t_3 \equiv + (145678) & = & - (p_1-p_4) (p_2-p_3) (p_5-p_6) ~ C^6 V(p_i)
\nonumber \\
t_4 \equiv + (124580) & = & - (p_1-p_4) (p_2-p_6) (p_3-p_5) ~ C^6 V(p_i)
\nonumber \\
t_5 \equiv + (134670) & = & + (p_1-p_5) (p_2-p_3) (p_4-p_6) ~ C^6 V(p_i)
\nonumber \\
t_6 \equiv + (123689) & = & - (p_1-p_6) (p_2-p_5) (p_3-p_4) ~ C^6 V(p_i)
\nonumber \\
t_7 \equiv - (134589) & = & + (p_1-p_4) (p_2-p_5) (p_3-p_6) ~ C^6 V(p_i)
\nonumber \\
t_8 \equiv - (124679) & = & - (p_1-p_6) (p_2-p_3) (p_4-p_5) ~ C^6 V(p_i)
\nonumber \\
t_9 \equiv - (123570) & = & + (p_1-p_2) (p_4-p_6) (p_3-p_5) ~ C^6 V(p_i)
\nonumber \\
t_{10} \equiv + (235678) & = & + (p_1-p_2)(p_3-p_4)(p_5-p_6) ~ C^6 V(p_i)
\nonumber \\
t_{11} \equiv + (247890) & = & - (p_1-p_3)(p_2-p_6)(p_4-p_5) ~ C^6 V(p_i)
\nonumber \\
t_{12} \equiv - (234579) & = & + (p_1-p_2)(p_3-p_6)(p_4-p_5) ~ C^6 V(p_i)
\nonumber \\
t_{13} \equiv + (234680) & = & - (p_1-p_5)(p_2-p_6)(p_3-p_4) ~ C^6 V(p_i)
\nonumber \\
t_{14} \equiv + (345690) & = & + (p_1-p_5)(p_2-p_4)(p_3-p_6) ~ C^6 V(p_i)
\nonumber \\
t_{15} \equiv + (567890) & = & - (p_1-p_3)(p_2-p_4)(p_5-p_6) ~ C^6 V(p_i)
\end{eqnarray}
The omnipresent factor $V$ is the Vandermonde polynomial
\begin{eqnarray}
V (p_i) \equiv \prod _{1 \leq i < j \leq 6} x_{ij} ^2
= \prod _{1 \leq i < j \leq 6} (p_i -p_j)
\end{eqnarray}
Under a permutation of the branch points, $V(p_i)$ is multiplied by
the signature of this permutation.
The following modular transformations were used to
establish these signs,
\begin{eqnarray}
\Sigma (t_1) = + t_2 \hskip .5in &
M_3 (t_4) = + t_8 & \hskip .5in
S(t_{14}) = + t_{11}
\nonumber \\
T(t_1) = + t_3 \hskip .5in &
S(t_8) = + t_7 & \hskip .5in
M_1 (t_8) = - t_{12}
\nonumber \\
S(t_3) = + t_6 \hskip .5in &
T(t_7) = + t_9 & \hskip .5in
M_3(t_{12}) = + t_{13}
\nonumber \\
T(t_6) = + t_5 \hskip .5in &
M_1 (t_3) = - t_{10} & \hskip .5in
S(t_{13}) = + t_{15}
\nonumber \\
\Sigma (t_5) = + t_4 \hskip .5in &
T(t_{10}) = + t_{14} &
\end{eqnarray}
Taking into account the behavior of $V$ under permutations, modular
invariance determines the relative signs in the sums over $s[\delta]$ and $s^c[\delta]$. Working this out for one of the spin structures, say $\delta = \delta_1$
gives the following explicit formulas,
\begin{eqnarray}
\sum_{\{\delta_i\}\in s[\delta_1]}
\epsilon(\delta_1;\{\delta_i\}) \prod_{i=1}^6\vartheta[\delta_i]^2
& = & 2t_{12} + 2t_{13} + 2 t_{15} = - 2t_{10} - 2t_{11} - 2t_{14}
\nonumber \\
& = & - 2 \Xi _6 [\delta _1]
\nonumber \\
\sum_{\{\delta_i\}\in s^c[\delta_1 ]}
\epsilon(\delta_1;\{\delta_i\}) \prod_{i=1}^6\vartheta[\delta_i]^2
& = & \sum _{i=1} ^ 9 t_i =0
\end{eqnarray}
The modular covariance properties then yield these results for all
spin structures $\delta$, and thus complete the proof of the lemma.
Q.E.D.
\subsubsection{Orbits of admissible sextets in genus $3$ (totally asyzygous sextets)}
We list here a number of results on the modular transformations
of asyzygous multiplets $\{\Delta_i\}$ in genus $3$, all of which
have been proven by computer calculations.
\medskip
$\bullet$ The sets of all totally asyzygous quartets, quintets and sextets transform
transitively under the full modular group acting on characteristics;
\medskip
$\bullet$ There are 5040 totally asyzygous quartets, 2016 totally asyzygous
quintets, $336$ totally asyzygous sextets, and no totally asyzygous septets;
\medskip
$\bullet$ The set of all totally asyzygous sextets that do not contain a given
spin structure $\Delta$ transforms transitively
under the modular subgroup $Sp[\Delta](6,{\bf Z})$
leaving $\Delta$ invariant. In analogy with the genus $2$ case,
we denote by $S[\Delta]$ the set of totally asyzygous sextets not
containing the spin structure $\Delta$. For any $\Delta$,
$S[\Delta]$ consists of $280$ elements;
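The multiplet counts quoted above can be reproduced by growing totally asyzygous multiplets one characteristic at a time. The sketch below is an illustrative re-derivation of ours (not the original computer listing), with the standard asyzygy sign convention expressed through the symplectic pairing of characteristics.

```python
from itertools import combinations, product

def even_chars(g):
    # even spin structures as pairs (x, y) of {0,1}-vectors with x.y even
    return [(x, y) for x in product((0, 1), repeat=g)
                   for y in product((0, 1), repeat=g)
                   if sum(a * b for a, b in zip(x, y)) % 2 == 0]

E = even_chars(3)
assert len(E) == 36          # 36 even spin structures at genus 3

def pairing(a, b):
    # symplectic pairing <a,b> = a'.b'' + b'.a'' mod 2
    return (sum(u * v for u, v in zip(a[0], b[1]))
          + sum(u * v for u, v in zip(b[0], a[1]))) % 2

P = [[pairing(a, b) for b in E] for a in E]

def asyz(i, j, k):
    # for even characteristics: asyzygous iff <i,j>+<i,k>+<j,k> is odd
    return (P[i][j] + P[i][k] + P[j][k]) % 2 == 1

# grow totally asyzygous multiplets one characteristic at a time
mult = {3: [t for t in combinations(range(36), 3) if asyz(*t)]}
for size in (4, 5, 6, 7):
    mult[size] = [s + (m,)
                  for s in mult[size - 1]
                  for m in range(s[-1] + 1, 36)
                  if all(asyz(i, j, m) for i, j in combinations(s, 2))]

assert len(mult[4]) == 5040   # totally asyzygous quartets
assert len(mult[5]) == 2016   # quintets
assert len(mult[6]) == 336    # sextets
assert len(mult[7]) == 0      # no septets
# for any fixed Delta, 280 of the 336 sextets avoid it
assert all(sum(d not in s for s in mult[6]) == 280 for d in range(36))
```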
\medskip
$\bullet$ Upon factorization, each sextet $\{\Delta_i\}$ of genus 3 spin structures produces a sextet $\{\delta_i\}$ of genus 2
spin structures. Consider the set of the 336 sextets $\{\delta_i\}$ of genus
2 spin structures which are obtained from factorization from
the set of all $336$ asyzygous sextets in genus 3.
Then the set of such $\{\delta_i\}$
can be divided into 246 sextets which contain at least one odd spin structure, together with 6 copies
of all 15 genus 2 admissible sextets;
\medskip
$\bullet$
Similarly, let $\Delta$ factorize into a genus $1$ and a genus $2$
spin structure $\delta$ as in (\ref{factorization}),
and consider the set of all genus $2$ sextets $\{\delta_i\}$
arising from factorization of the 280 $\Delta$-admissible
genus 3 sextets in $S[\Delta]$. Then the set of such $\{\delta_i\}$
can be divided into 208 sextets which contain at least one odd
spin structure, together with 6 copies of $s[\delta]$ and
4 copies of $s^c[\delta]$.
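The two factorization statements can be checked by factoring each totally asyzygous genus 3 sextet into its genus 2 and genus 1 parts and tallying multiplicities. In the sketch below (ours, not the original listing), the separating degeneration is modeled by splitting off the third component of each characteristic, and $\Delta$ is taken to be the zero characteristic, which factors into even $\delta$ and even $\mu$.

```python
from collections import Counter
from itertools import combinations, product

def is_even(x, y):
    return sum(u * v for u, v in zip(x, y)) % 2 == 0

E = [(x, y) for x in product((0, 1), repeat=3)
            for y in product((0, 1), repeat=3) if is_even(x, y)]

def asyz3(a, b, c):
    x = [(a[0][i] + b[0][i] + c[0][i]) % 2 for i in range(3)]
    y = [(a[1][i] + b[1][i] + c[1][i]) % 2 for i in range(3)]
    return sum(u * v for u, v in zip(x, y)) % 2 == 1

# grow triples -> quartets -> quintets -> the 336 totally asyzygous sextets
sets = [t for t in combinations(range(36), 3) if asyz3(*(E[i] for i in t))]
for _ in range(3):
    sets = [s + (m,) for s in sets for m in range(s[-1] + 1, 36)
            if all(asyz3(E[i], E[j], E[m]) for i, j in combinations(s, 2))]
sextets = sets
assert len(sextets) == 336

def factor(c):
    """Separating degeneration: split off the third handle of (x, y)."""
    (x, y) = c
    return ((x[:2], y[:2]), ((x[2],), (y[2],)))

# 336 sextets -> 246 with an odd genus-2 factor + 6 copies of 15 admissibles
n_odd, count = 0, Counter()
for s in sextets:
    ds = tuple(sorted(factor(E[i])[0] for i in s))
    if any(not is_even(*d) for d in ds):
        n_odd += 1
    else:
        count[ds] += 1
assert n_odd == 246
assert len(count) == 15 and set(count.values()) == {6}

# Delta-admissible sextets: 280 -> 208 odd + 6 copies s[delta] + 4 copies s^c
Delta, delta0 = ((0, 0, 0), (0, 0, 0)), ((0, 0), (0, 0))
iD = E.index(Delta)
S = [s for s in sextets if iD not in s]
assert len(S) == 280
n_odd2, avoid, contain = 0, Counter(), Counter()
for s in S:
    ds = tuple(sorted(factor(E[i])[0] for i in s))
    if any(not is_even(*d) for d in ds):
        n_odd2 += 1
    elif delta0 in ds:
        contain[ds] += 1
    else:
        avoid[ds] += 1
assert n_odd2 == 208
assert len(avoid) == 6 and set(avoid.values()) == {6}     # 6 copies of s[delta]
assert len(contain) == 9 and set(contain.values()) == {4} # 4 copies of s^c[delta]
```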
\medskip
We can now consider the first Ansatz in (\ref{AB}) for $\Xi_6[\Delta](\Omega^{(3)})$, where the summation is over the
set $S[\Delta]$ of $\Delta$-admissible sextets. For
$\Xi_6[\Delta](\Omega^{(3)})$ to transform as in ({\it ii}), we impose the analogous condition to (\ref{phase2}) in genus $3$
\begin{equation}
\label{phase3}
\epsilon(M\Delta;\{M\Delta_i\})
\
\prod_{i=1}^6\epsilon^2(\Delta_i,M)
=
\epsilon^4(\Delta,M)
\
\epsilon
(\Delta;\{\Delta_i\}),
\qquad
M\in Sp(6,{\bf Z}).
\end{equation}
Restricted to $M\in Sp[\Delta](6,{\bf Z})$, this implies that all the phases $\epsilon(M\Delta;\{M\Delta_i\})$ in the first Ansatz uniquely determine one another. Assuming the
existence of such a consistent assignment of phases, the expression in the first Ansatz is then uniquely determined up
to a global $\pm$ sign. The $Sp[\Delta](6,{\bf Z})$ consistency of phases implies the $Sp[\delta](4,{\bf Z})$ consistency of phases. Thus Lemmas 1 and 2 apply. Together with the numerology for the degeneration of the orbit $S[\Delta]$ found above,
we obtain
\begin{equation}
\lim_{t\to 0}
\sum_{\{\Delta_i\}\in S[\Delta]}
\epsilon(\Delta;\{\Delta_i\})
\prod_{i=1}^6\vartheta[\Delta_i]^2(0,\Omega^{(3)})
=
6\cdot
2^4\,
\eta(\Omega^{(1)})^{12}\,
\Xi_6[\delta](\Omega^{(2)})
\end{equation}
Here, we assume that all 6 copies of $s[\delta]$ obtained in factoring
$S[\Delta]$ lead to contributions of the same sign. However, there is a
more severe obstruction to the Ansatz (A):
\medskip
$\bullet$ There does not exist a phase assignment
$\epsilon(\Delta;\{\Delta_i\})$ satisfying the condition
(\ref{phase3}) when the sextets are totally asyzygous.
This is in marked contrast with the genus $2$
case, where the phases $\epsilon(\delta;\{\delta_i\})$ satisfying (\ref{phase2}) do exist. A counterexample in genus $3$ is obtained by
considering the following $\Delta_1$-admissible sextet,\footnote{Throughout,
we shall use the nomenclature for genus 3 spin structures and
modular transformations given in Appendix C of \cite{dp04}.}
\begin{eqnarray}
s_1 = (\Delta _2, \Delta _8, \Delta _{14}, \Delta _{16}, \Delta _{25}, \Delta _{30})
\end{eqnarray}
and the action of the composite modular transformation $A_1B_4$.
From Table 6 of \cite{dp04}, it is clear that $A_1B_4$ leaves
$\Delta_1, \Delta _3, \Delta _4, \Delta _6$ invariant
and maps $\Delta _2 \leftrightarrow \Delta _5$.
Thus, the $\Delta$-admissible sextet $s_1$, as a whole, is invariant
under $A_1B_4$. The sign factor is also easily computed, using
\begin{eqnarray}
\epsilon ^2 (\Delta _i, A_1 B_4)
= \epsilon ^2 (B_4 \Delta _i, A_1) \times \epsilon ^2 (\Delta _i, B_4)
= e^{4 \pi i (\Delta _i)_1 ' (\Delta _i)_2'}
\end{eqnarray}
and we find
\begin{eqnarray}
\epsilon ^2 (\Delta _i, A_1 B_4) = +1 \hskip .5in i=2,8,14,16,25 \hskip .5in
\epsilon ^2 (\Delta _{30}, A_1 B_4) = -1
\end{eqnarray}
But then the sextet contribution changes sign under a transformation that
leaves the sextet invariant, which means that no consistent sign can be defined.
\subsubsection{Orbits of admissible sextets in genus 3 (partially asyzygous sextets)}
A consistent phase assignment is also lacking in this case. A counterexample
in genus 3 is obtained by considering the following $\Delta_1$-admissible sextet,
\begin{eqnarray}
s_2 = (\Delta _2, \Delta _6, \Delta _8, \Delta_{18}, \Delta_{29}, \Delta _{36})
\end{eqnarray}
The modular transformation $A_6B_6A_6B_6$ leaves each of the spin
structures in $s_2$, and thus the entire sextet, invariant. The signs
accompanying the transformation are easily computed, using
\begin{eqnarray}
\epsilon ^2 (\Delta _i, A_6B_6A_6B_6) = +1 & \qquad & i=2,8,18
\nonumber \\
\epsilon ^2 (\Delta _i, A_6B_6A_6B_6) = -1 & \qquad & i=6,29,36 \hskip .5in
\end{eqnarray}
But then the sextet contribution changes sign under a transformation that
leaves the sextet invariant, which means that no consistent sign can be defined.
\subsection{Orbits of pairs of sextets}
In the preceding section, we have seen that sums over $\Delta$-admissible
sextets are not consistent with the
modular transformation law (\ref{phase3}). Thus we cannot construct
$\Xi_6[\Delta](\Omega^{(3)})$ directly by the Ansatz (A). In this section, we shall show that certain sums over {\it pairs of sextets} do admit consistent phase assignments, and that carefully chosen sums do lead to viable candidates for $\Xi_6[\Delta](\Omega^{(3)})^2$.
\subsubsection{Orbits of pairs of admissible sextets in genus 2}
Fix an external genus $2$ even spin structure $\delta$.
Our first task is to identify the orbits of pairs of
$\delta$-admissible sextets under $Sp[\delta](4,{\bf Z})$.
Clearly, for each integer $p$, the subset of pairs
$\{\delta_i\}$, $\{\delta_i'\}$ with $p$ common spin structures
is invariant under $Sp[\delta](4,{\bf Z})$. For $\delta$-admissible pairs of sextets, there is a finer partition
which does give precisely all the orbits under $Sp[\delta](4,{\bf Z})$:
\begin{eqnarray}
{Q}_p^{0,0}[\delta]&=&
\big\{ (\{\delta_i\},\{\delta_i'\})\in s[\delta]\times s[\delta]; \ \#(\{\delta_i\}\cap\{\delta_i'\})=p\big\}
\nonumber\\
{Q}_p^{0,1}[\delta]&=&
\big\{ (\{\delta_i\},\{\delta_i'\})\in s[\delta]\times s^c[\delta]; \ \#(\{\delta_i\}\cap\{\delta_i'\})=p\big\}
\nonumber\\
{Q}_p^{1,0}[\delta]&=&
\big\{ (\{\delta_i\},\{\delta_i'\})\in s^c[\delta]\times s[\delta]; \ \#(\{\delta_i\}\cap\{\delta_i'\})=p\big\}
\nonumber\\
{Q}_p^{1,1}[\delta]&=&
\big\{ (\{\delta_i\},\{\delta_i'\})\in s^c[\delta]\times s^c[\delta]; \ \#(\{\delta_i\}\cap\{\delta_i'\})=p\big\}
\end{eqnarray}
By inspecting the table of admissible sextets in genus $2$, we find that only the values $p=3,4$ and $6$ produce non-empty
sets $Q_p^{a,b}[\delta]$.
The sizes of the orbits $Q_p^{a,b}[\delta]$ are
given by
$\#\,Q_3^{0,0}[\delta]=12$,
$\#\,Q_4^{0,0}[\delta]=18$,
$\#\,Q_6^{0,0}[\delta]=6$,
$\#\, Q_3^{0,1}[\delta]=\#\,Q_3^{1,0}[\delta]=36$,
$\#\, Q_4^{0,1}[\delta]=\#\,Q_4^{1,0}[\delta]=18$,
$\#\,Q_3^{1,1}[\delta]=36$,
$\#\,Q_4^{1,1}[\delta]=36$,
$\#\,Q_6^{1,1}[\delta]=9$,
which does add up to $15^2=225$. In this counting, the pairs
of sextets have been viewed as ordered pairs. For later purposes,
it is preferable to count unordered pairs, in which case the
sizes of the orbits $Q_p^{a,b}[\delta]$ become
\begin{equation}
\matrix{Q_3^{0,0}=6\qquad &Q_3^{0,1}=36\qquad&Q_3^{1,1}=18\cr
Q_4^{0,0}=9\qquad&Q_4^{0,1}=18\qquad&Q_4^{1,1}=18\cr
Q_6^{0,0}=6\qquad&Q_6^{0,1}=0\qquad&Q_6^{1,1}=9\cr}
\end{equation}
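Since the sets $Q_p^{a,b}[\delta]$ are defined purely combinatorially, by membership in $s[\delta]$ or $s^c[\delta]$ and by the size of the intersection, their sizes can be verified without any modular machinery. The following sketch (ours) counts the ordered pairs directly.

```python
from collections import Counter
from itertools import combinations, product

def even_chars(g):
    return [(x, y) for x in product((0, 1), repeat=g)
                   for y in product((0, 1), repeat=g)
                   if sum(a * b for a, b in zip(x, y)) % 2 == 0]

def asyzygous(a, b, c):
    x = [(a[0][i] + b[0][i] + c[0][i]) % 2 for i in range(len(a[0]))]
    y = [(a[1][i] + b[1][i] + c[1][i]) % 2 for i in range(len(a[1]))]
    return sum(u * v for u, v in zip(x, y)) % 2 == 1

def totally_asyzygous(chars):
    return all(asyzygous(*t) for t in combinations(chars, 3))

def pairings(elts):
    if not elts:
        yield ()
        return
    first, rest = elts[0], elts[1:]
    for i in range(len(rest)):
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield ((first, rest[i]),) + tail

def admissible(sextet):
    return any(all(totally_asyzygous(p + q) for p, q in combinations(pr, 2))
               for pr in pairings(list(sextet)))

evens = even_chars(2)
adm = [frozenset(s) for s in combinations(evens, 6) if admissible(s)]
delta = evens[0]                 # the counting is independent of delta
s_d = [s for s in adm if delta not in s]
sc_d = [s for s in adm if delta in s]
assert (len(s_d), len(sc_d)) == (6, 9)

# ordered pairs, keyed by (# common structures, a, b)
ordered = Counter()
for a, A in ((0, s_d), (1, sc_d)):
    for b, B in ((0, s_d), (1, sc_d)):
        for s1 in A:
            for s2 in B:
                ordered[(len(s1 & s2), a, b)] += 1
expected = {(3, 0, 0): 12, (4, 0, 0): 18, (6, 0, 0): 6,
            (3, 0, 1): 36, (3, 1, 0): 36,
            (4, 0, 1): 18, (4, 1, 0): 18,
            (3, 1, 1): 36, (4, 1, 1): 36, (6, 1, 1): 9}
assert dict(ordered) == expected   # only p = 3, 4, 6 occur
assert sum(ordered.values()) == 225
```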
\medskip
To each orbit $Q_p^{a,b}[\delta]$, we can associate
the following polynomial in $\vartheta$-constants
\begin{eqnarray}
F_p^{a,b}[\delta]=
\sum_{(\{\delta_i\},\{\delta_i'\})\in Q_p^{a,b}[\delta]}
\epsilon_p^{a,b}(\delta;\{\delta_i\},\{\delta_i'\})
\prod_{i=1}^6\vartheta[\delta_i]^2
\prod_{i=1}^6\vartheta[\delta_i']^2
\end{eqnarray}
where the phases $\epsilon(\delta;\{\delta_i\},\{\delta_i'\})$ are required to satisfy
\begin{equation}
\label{phase2pairs}
\epsilon(M\delta;\{M\delta_i\},\{M\delta_i'\})
\prod_{i=1}^6\epsilon^2(\delta_i,M)
\prod_{i=1}^6\epsilon^2(\delta_i',M)
=
\epsilon(\delta;\{\delta_i\},\{\delta_i'\})
\end{equation}
Since $Q_p^{a,b}[\delta]$ are orbits of $Sp[\delta](4,{\bf Z})$,
the phases $\epsilon(\delta;\{\delta_i\},\{\delta_i'\})$ completely determine each other within $Q_p^{a,b}[\delta]$. We also find,
by computer inspection, that a consistent assignment of phases
$\epsilon(\delta;\{\delta_i\},\{\delta_i'\})$ exists for each $Q_p^{a,b}[\delta]$. Thus the expressions $F_p^{a,b}[\delta]$
exist, and are uniquely determined by a single normalizing
sign. We shall define this normalizing sign below.
\medskip
Remarkably, the expressions $F_p^{a,b}[\delta]$ can be expressed
very simply in terms of $\Xi_6[\delta](\Omega^{(2)})$ and two
other polynomials in $\vartheta$-constants, defined by
\begin{eqnarray}
F [\delta_1] \equiv \sum _{i \in s [\delta_1 ]} t_i ^2
& \hskip .6in &
s[\delta_1] = \{ 10, 11, 12, 13, 14, 15 \}
\nonumber \\
F^c [\delta_1] \equiv \sum _{i \in s^c [\delta_1]} t_i ^2
& \hskip .6in &
s^c [\delta _1] = \{ 1,2,3,4,5,6,7,8,9 \}
\end{eqnarray}
Then we have
\bigskip
\noindent
{\bf Lemma 3.} {\it Let the normalizing signs for
$F_p^{a,b}[\delta]$ be defined by the equation
(\ref{normalizingsign}). Then the expressions $F_p^{a,b}[\delta]$
are given by}
\begin{eqnarray}
\label{Fexp}
F^{0,0} _3 [\delta_1] = \Xi_6 [\delta _1]^2 - F [\delta _1]/2
\qquad
& F^{0,1} _3 [\delta_1] = - F^c [\delta _1] &
\qquad
F^{1,1} _3 [\delta_1] = F^c [\delta_1] /2
\nonumber \\
F^{0,0} _4 [\delta_1] = - \Xi _6 [\delta _1] ^2
\hskip .9in
& F^{0,1} _4 [\delta_1] = F^c [\delta _1] &
\qquad
F^{1,1} _4 [\delta_1] = - F^c [\delta_1]
\nonumber \\
F^{0,0} _6 [\delta_1] = F [\delta _1]
\hskip 1.15in & & \qquad
F^{1,1} _6 [\delta_1] = + F^c [\delta _1] \qquad
\end{eqnarray}
\medskip
\noindent
{\it Proof.}
The following relations were established earlier,
\begin{eqnarray}
\label{rel1}
\Xi_6 [\delta_1] = t_{10} + t_{11} + t_{14}
= - t_{12} - t_{13} - t_{15}
\end{eqnarray}
\begin{eqnarray}
\label{rel2}
0 = t_1 + t_2 + t_3 + t_4 + t_5 + t_6 + t_7 + t_8 + t_9
\end{eqnarray}
The relation (\ref{rel1}) was established as a step in the proof
of the alternative form (\ref{Xi2}) of $\Xi_6[\delta](\Omega^{(2)})$ in \cite{dp04}.
The relation (\ref{rel2}) is a reformulation of the second identity in Lemma 2.
Additional ``rearrangement" formulas are as follows,
\begin{eqnarray}
\label{rel3}
t_1 & = & t_2 + t_3 + t_5 + t_7 ~=~ t_{14} + t_{15}
\nonumber \\
t_2 & = & t_1 + t_3 + t_4 + t_8 ~=~ t_{11} + t_{15}
\nonumber \\
t_3 & = & t_1 + t_2 + t_6 +t_9 ~=~ t_{10} + t_{15}
\nonumber \\
t_4 & = & t_2 + t_5 + t_6 +t_8 ~=~ t_{11} + t_{13}
\nonumber \\
t_5 & = & t_1 + t_4 +t_6 +t_7 ~=~ t_{13} + t_{14}
\nonumber \\
t_6 & = & t_3 + t_4 +t_5 +t_9 ~=~ t_{10} + t_{13}
\nonumber \\
t_7 & = & t_1 +t_5 + t_8 +t _9 ~=~ t_{12} + t_{14}
\nonumber \\
t_8 & = & t_2 + t_4 + t_7 + t_9 ~=~ t_{12} + t_{11}
\nonumber \\
t_9 & = & t_3 + t_6 +t_7 +t_8 ~=~ t_{12} + t_{10}
\end{eqnarray}
They follow directly from the hyperelliptic representation; the equivalences between the first and second forms hold by virtue of the relations (\ref{rel1}) and (\ref{rel2}).
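Up to the common factor $C^6 V(p_i)$, the $t_i$ are explicit polynomials in the branch points, so the relations (\ref{rel1}), (\ref{rel2}), and (\ref{rel3}) can be tested in exact integer arithmetic at sample points. A small verification sketch of ours:

```python
from random import Random

# t_i = sign * (p_a-p_b)(p_c-p_d)(p_e-p_f): the hyperelliptic expressions
# of (deft) with the common factor C^6 V(p_i) dropped.
T = {1:  (+1, (1, 6), (2, 4), (3, 5)),  2:  (-1, (1, 3), (2, 5), (4, 6)),
     3:  (-1, (1, 4), (2, 3), (5, 6)),  4:  (-1, (1, 4), (2, 6), (3, 5)),
     5:  (+1, (1, 5), (2, 3), (4, 6)),  6:  (-1, (1, 6), (2, 5), (3, 4)),
     7:  (+1, (1, 4), (2, 5), (3, 6)),  8:  (-1, (1, 6), (2, 3), (4, 5)),
     9:  (+1, (1, 2), (4, 6), (3, 5)),  10: (+1, (1, 2), (3, 4), (5, 6)),
     11: (-1, (1, 3), (2, 6), (4, 5)),  12: (+1, (1, 2), (3, 6), (4, 5)),
     13: (-1, (1, 5), (2, 6), (3, 4)),  14: (+1, (1, 5), (2, 4), (3, 6)),
     15: (-1, (1, 3), (2, 4), (5, 6))}

def t_vals(p):
    """Evaluate all t_i at branch points p = (p_1, ..., p_6)."""
    def d(i, j):
        return p[i - 1] - p[j - 1]
    return {k: s * d(*u) * d(*v) * d(*w) for k, (s, u, v, w) in T.items()}

# four-term and two-term rearrangement formulas (rel3)
four = {1: (2, 3, 5, 7), 2: (1, 3, 4, 8), 3: (1, 2, 6, 9), 4: (2, 5, 6, 8),
        5: (1, 4, 6, 7), 6: (3, 4, 5, 9), 7: (1, 5, 8, 9), 8: (2, 4, 7, 9),
        9: (3, 6, 7, 8)}
two = {1: (14, 15), 2: (11, 15), 3: (10, 15), 4: (11, 13), 5: (13, 14),
       6: (10, 13), 7: (12, 14), 8: (11, 12), 9: (10, 12)}

rng = Random(0)
for _ in range(5):
    p = [rng.randrange(-50, 50) for _ in range(6)]
    t = t_vals(p)
    assert sum(t[i] for i in range(10, 16)) == 0    # rel1
    assert sum(t[i] for i in range(1, 10)) == 0     # rel2
    for i in range(1, 10):                          # rel3, both forms
        assert t[i] == sum(t[j] for j in four[i])
        assert t[i] == sum(t[j] for j in two[i])
```

Since the arithmetic is exact, each identity is tested exactly at the chosen sample points.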
\medskip
We define now the normalizing signs for $F_p^{a,b}[\delta]$ promised earlier.
Writing $\epsilon_p^{a,b}(\delta_1;t_i,t_j)=
\epsilon_p^{a,b}[\delta_1](i,j)$ for simplicity, they are given by
\begin{eqnarray}
\label{normalizingsign}
\epsilon _3 ^{0,0} [\delta_1](10,11) = +1 & \hskip .3in &
\epsilon _3 ^{0,1} [\delta_1](1,10) = +1 \hskip .5in
\epsilon _3 ^{1,1} [\delta_1](1,2) = +1
\nonumber \\
\epsilon _4 ^{0,0} [\delta_1](10,13) = +1 & \hskip .3in &
\epsilon _4 ^{0,1} [\delta_1](1,14) = +1 \hskip .5in
\epsilon _4 ^{1,1} [\delta_1](1,4) = +1
\nonumber \\
\epsilon _6 ^{0,0} [\delta_1](10,10) = +1 & \hskip .3in & \hskip 1.81in
\epsilon _6 ^{1,1} [\delta_1](1,1) = +1
\nonumber \\ &&
\end{eqnarray}
The resulting polynomials are then as follows,
\begin{eqnarray}
F^{0,0} _3 [\delta_1](t)
& = &
+ t_{10} t_{11} + t_{10} t_{14} + t_{11} t_{14}
+ t_{12} t_{13} + t_{12} t_{15} + t_{13} t_{15}
\nonumber \\
F^{0,0} _4 [\delta_1](t)
& = &
+ (t_{10} + t_{11} + t_{14} )(t_{12} + t_{13} + t_{15} )
\nonumber \\
F^{0,0} _6 [\delta_1](t)
& = &
+ t_{10} ^2 + t_{11}^2 + t_{12}^2 + t_{13}^2 + t_{14}^2 + t_{15}^2
\nonumber \\
F^{0,1} _3 [\delta_1](t)
& = &
+ t_1 (t_{10} + t_{11} + t_{12} + t_{13})
+ t_2 (t_{10} + t_{12} + t_{13} + t_{14})
\nonumber \\ &&
+ t_3 (t_{11} + t_{12} + t_{13} + t_{14})
+ t_4 (t_{10} + t_{12} + t_{14} + t_{15})
\nonumber \\ &&
+ t_5 (t_{10} + t_{11} + t_{12} + t_{15})
+ t_6 (t_{11} + t_{12} + t_{14} + t_{15})
\nonumber \\ &&
+ t_7 ( t_{10} + t_{11} + t_{13} + t_{15})
+ t_8 ( t_{10} + t_{13} + t_{14} + t_{15})
\nonumber \\ &&
+ t_9 (t_{11} + t_{13} + t_{14} + t_{15})
\nonumber \\
F^{0,1} _4 [\delta_1](t)
& = &
+t_1 (t_{14} + t_{15}) + t_2 (t_{11} + t_{15}) + t_3 (t_{10} + t_{15})
\nonumber \\ &&
+ t_4 (t_{11} + t_{13}) + t_5 (t_{13} + t_{14}) + t_6 (t_{10} + t_{13})
\nonumber \\ &&
+ t_7 (t_{12} + t_{14}) + t_8 (t_{11} + t_{12}) + t_9 (t_{10} + t_{12})
\nonumber \\
F^{1,1} _3 [\delta_1](t)
& = &
+t_1 t_2 + t_1 t_3 + t_1 t_5 + t_1 t_7 + t_2 t_3 + t_2 t_4 + t_2 t_8
+ t_3 t_6 + t_3 t_9
\nonumber \\ &&
+ t_4 t_5 + t_4 t_6 + t_4 t_8 + t_5 t_6 + t_5 t_7
+ t_6 t_9 + t_7 t_8 + t_7 t_9 + t_8 t_9
\nonumber \\
F^{1,1} _4 [\delta_1](t)
& = &
+ t_1 t_4 + t_1 t_6 + t_1 t_8 + t_1 t_9 + t_2 t_5 + t_2 t_6 + t_2 t_7 + t_2 t_9
+ t_3 t_4
\nonumber \\ &&
+ t_3 t_5 + t_3 t_7 + t_3 t_8 + t_4 t_7 + t_4 t_9 + t_5 t_8 + t_5 t_9
+ t_6 t_7 + t_6 t_8
\nonumber \\
F^{1,1} _6 [\delta_1](t)
& = &
+ t_1^2 + t_2 ^2 + t_3 ^2 + t_4 ^2 + t_5 ^2 + t_6 ^2 + t_7 ^2 + t_8 ^2 + t_9^2
\end{eqnarray}
The above expressions for $F$ can be recast in the following,
more systematic way,
\begin{eqnarray}
F^{0,0} _p [\delta_1](t)
& = &
\sum _{{\#(i\cap j)=p, \atop i\leq j; i,j \in s[\delta_1] }} t_i t_j
\hskip 1in p=3,4,6
\nonumber \\
F^{0,1} _p [\delta_1](t)
& = &
\sum _{{\#(i\cap j)=p, \atop i \in s[\delta_1], j \in s^c[\delta_1] }} t_i t_j
\hskip 1in p=3,4
\nonumber \\
F^{1,1} _p [\delta_1](t)
& = &
\sum _{{\#(i\cap j)=p, \atop i\leq j, i,j \in s^c[\delta_1]}} t_i t_j
\hskip 1in p=3,4, 6
\end{eqnarray}
Using the relations (\ref{rel1}), (\ref{rel2}), and (\ref{rel3}), we can reduce
these expressions to linear combinations of the three standard forms
$\Xi_6[\delta_1]^2$, $F[\delta_1]$, and $F^c [\delta _1]$ given in Lemma 3. Q.E.D.
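Since every identity in Lemma 3 is, via (\ref{deft}), a polynomial identity in the branch points, it can be tested in exact integer arithmetic. The sketch below (ours) transcribes the explicit lists of monomials given above and checks them against the three standard forms at integer sample points.

```python
from random import Random

# t_i up to the common factor C^6 V(p_i), as in (deft)
T = {1:  (+1, (1, 6), (2, 4), (3, 5)),  2:  (-1, (1, 3), (2, 5), (4, 6)),
     3:  (-1, (1, 4), (2, 3), (5, 6)),  4:  (-1, (1, 4), (2, 6), (3, 5)),
     5:  (+1, (1, 5), (2, 3), (4, 6)),  6:  (-1, (1, 6), (2, 5), (3, 4)),
     7:  (+1, (1, 4), (2, 5), (3, 6)),  8:  (-1, (1, 6), (2, 3), (4, 5)),
     9:  (+1, (1, 2), (4, 6), (3, 5)),  10: (+1, (1, 2), (3, 4), (5, 6)),
     11: (-1, (1, 3), (2, 6), (4, 5)),  12: (+1, (1, 2), (3, 6), (4, 5)),
     13: (-1, (1, 5), (2, 6), (3, 4)),  14: (+1, (1, 5), (2, 4), (3, 6)),
     15: (-1, (1, 3), (2, 4), (5, 6))}

def t_vals(p):
    def d(i, j):
        return p[i - 1] - p[j - 1]
    return {k: s * d(*u) * d(*v) * d(*w) for k, (s, u, v, w) in T.items()}

def lemma3_check(p):
    t = t_vals(p)
    Xi = t[10] + t[11] + t[14]                   # Xi_6[delta_1]
    F = sum(t[i] ** 2 for i in range(10, 16))    # F[delta_1]
    Fc = sum(t[i] ** 2 for i in range(1, 10))    # F^c[delta_1]
    F00_3 = (t[10]*t[11] + t[10]*t[14] + t[11]*t[14]
           + t[12]*t[13] + t[12]*t[15] + t[13]*t[15])
    F00_4 = (t[10] + t[11] + t[14]) * (t[12] + t[13] + t[15])
    F00_6 = F
    comp3 = {1: (10, 11, 12, 13), 2: (10, 12, 13, 14), 3: (11, 12, 13, 14),
             4: (10, 12, 14, 15), 5: (10, 11, 12, 15), 6: (11, 12, 14, 15),
             7: (10, 11, 13, 15), 8: (10, 13, 14, 15), 9: (11, 13, 14, 15)}
    comp4 = {1: (14, 15), 2: (11, 15), 3: (10, 15), 4: (11, 13), 5: (13, 14),
             6: (10, 13), 7: (12, 14), 8: (11, 12), 9: (10, 12)}
    F01_3 = sum(t[i] * sum(t[j] for j in comp3[i]) for i in range(1, 10))
    F01_4 = sum(t[i] * sum(t[j] for j in comp4[i]) for i in range(1, 10))
    pairs3 = [(1, 2), (1, 3), (1, 5), (1, 7), (2, 3), (2, 4), (2, 8), (3, 6),
              (3, 9), (4, 5), (4, 6), (4, 8), (5, 6), (5, 7), (6, 9), (7, 8),
              (7, 9), (8, 9)]
    pairs4 = [(1, 4), (1, 6), (1, 8), (1, 9), (2, 5), (2, 6), (2, 7), (2, 9),
              (3, 4), (3, 5), (3, 7), (3, 8), (4, 7), (4, 9), (5, 8), (5, 9),
              (6, 7), (6, 8)]
    F11_3 = sum(t[i] * t[j] for i, j in pairs3)
    F11_4 = sum(t[i] * t[j] for i, j in pairs4)
    F11_6 = Fc
    assert 2 * F00_3 == 2 * Xi ** 2 - F          # F^{0,0}_3 = Xi^2 - F/2
    assert F00_4 == -Xi ** 2
    assert F00_6 == F
    assert F01_3 == -Fc
    assert F01_4 == Fc
    assert 2 * F11_3 == Fc                       # F^{1,1}_3 = F^c/2
    assert F11_4 == -Fc
    assert F11_6 == Fc
    return True

rng = Random(1)
assert all(lemma3_check([rng.randrange(-50, 50) for _ in range(6)])
           for _ in range(5))
```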
\subsubsection{Orbits of pairs of admissible sextets in genus 3}
We consider next the same issue of orbits and consistency of phase assignments for pairs of admissible sextets in genus $3$.
The following can be found by computer listings:
\medskip
$\bullet$
The set of all pairs of asyzygous sextets may be decomposed
into 7 mutually exclusive sets, according to whether the two sextets
in the pair have 0, 1, 2, 3, 4, 5, or 6 spin structures in common.
Each of these sets of pairs transforms transitively under the group of all
modular transformations $Sp(6,{\bf Z})$.
The number of pairs in each category is listed in the second column of
the table below.
\medskip
Also, we shall need the number of pairs of sextets
such that neither sextet in the pair contains a given spin structure $\Delta_1$.
The numbers of such pairs in each category are listed in the third
column of the table below. Under the modular subgroup that preserves
$\Delta_1$, the sets with 0, 1, and 2 spin structures in common are
{\it not} transitive. The fourth column of the table below lists the sizes
of the orbits of the modular subgroup $Sp[\Delta_1](6,{\bf Z})$.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|c||c|c|c|c|}
\hline
\# $\cap$ & \# pairs & \# pairs $ \not\supset \Delta_1$ & Orbits &
Reference pair \\
\hline \hline
0 & 15120 & 10080 & 5400$^{(1)}$
& $\{ 2,15,17,19,22,32\}, ~ \{ 9,10,13,16,27,28 \} $\\ \hline
& & & 5400$^{(2)}$
& $\{ 4,5,10,16,26,36\}, ~ \{ 7,11,14,20,27,33 \}$ \\ \hline \hline
1 & 30240 & 21000 & 840
& $\{ 3,4,12,17,25,29\}, ~ \{10,20,22,25,28,35\}$ \\ \hline
& & & 10080$^{(1)}$
& $\{ 5,15,22,27,29,31\}, ~ \{ 4,11,18,19,24,31\}$ \\ \hline
& & & 10080$^{(2)}$
& $\{ 2,5,17,19,28,30\}, ~ \{ 2,8,10,15,25,32\}$ \\ \hline \hline
2 & 7560 & 5460 & 1260
& $\{ 8,12,15,20,28,33 \}, ~ \{ 4,12,13,17,24,33 \}$ \\ \hline
& & & 1680
& $ \{ 4,5,10,16,26,36\}, ~ \{ 3,4,7,12,22,36 \}$ \\ \hline
& & & 2520
& $ \{ 5,14,16,20,25,35 \}, ~ \{ 6,16,24,25,30,36 \}$\\ \hline \hline
3 & 3360 & 2800 & 2800
& $ \{ 1,4,10,17,27,33 \}, ~ \{ 4,10,11,17,25,26 \} $ \\ \hline \hline
4 & 0 & 0 & --
& -- \\ \hline \hline
5 & 0 & 0 & --
& -- \\ \hline \hline
6 & 336 & 280 & 280
& any pair \\ \hline
\end{tabular}
\end{center}
\caption{Numbers of pairs of asyzygous sextets and modular orbits excluding $\Delta_1$}
\label{table:1}
\end{table}%
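Parts of the counting in the table above can be cross-checked directly, assuming there are 336 asyzygous sextets in genus 3, each consisting of 6 of the 36 even spin structures (a hypothetical reconstruction of the counting; the orbit decompositions themselves come from the computer listings):

```python
from math import comb

# Assumed combinatorial data (not taken from the paper's listings):
n_sextets, n_spin, per_sextet = 336, 36, 6

# The distinct unordered pairs split into the categories with
# 0, 1, 2, 3 spin structures in common:
assert 15120 + 30240 + 7560 + 3360 == comb(n_sextets, 2)

# Sextets avoiding a fixed spin structure Delta_1, by symmetry:
assert n_sextets * (n_spin - per_sextet) // n_spin == 280

# For 1 and 2 structures in common, the orbit sizes under
# Sp[Delta_1](6,Z) refine the Delta_1-avoiding pair counts:
assert 840 + 10080 + 10080 == 21000
assert 1260 + 1680 + 2520 == 5460
```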
$\bullet$
Consider the transformation law for sign assignments
$\epsilon(\Delta;\{\Delta_i\},\{\Delta_i'\})$ for pairs of sextets in
genus $3$ given by the analogue of (\ref{phase2pairs}),
\begin{equation}
\label{phase3pairs}
\epsilon(M\Delta;\{M\Delta_i\},\{M\Delta_i'\})
\prod_{i=1}^6\epsilon^2(\Delta_i,M)
\prod_{i=1}^6\epsilon^2(\Delta_i',M)
=
\epsilon(\Delta;\{\Delta_i\},\{\Delta_i'\}),
\end{equation}
where $M$ is any element of $Sp(6,{\bf Z})$.
With computer calculations, using all the generators
$S$, $M_{A_i}$, and $M_{B_i}$, $i=1, \cdots , 6$
of the full $Sp(6,{\bf Z})$, the following may be shown.
\begin{enumerate}
\item A unique (up to a global sign) and consistent sign assignment
exists for all the orbits in the sets with 0, 2 and 6 spin structures in common,
as well as for the orbit $10080^{(2)}$ in the set with 1 spin structure in common;
\item No consistent sign assignment exists for any of the other orbits.
\end{enumerate}
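The consistency question settled by these computer calculations can be phrased as a $\pm 1$ constraint-propagation problem: each generator relates the sign of a pair of sextets to the sign of its image, and an orbit admits a consistent assignment exactly when no closed chain of such relations forces a sign to equal its own negative. A minimal sketch of such a check follows; the encoding of pairs and moves is hypothetical, and the actual computation used all the generators of $Sp(6,{\bf Z})$:

```python
def consistent_signs(pairs, moves):
    """moves: triples (p, q, s) demanding sign[q] == s * sign[p], s = +-1.
    Returns a sign assignment (unique up to a global sign per orbit) if one
    exists, and None if some orbit is inconsistent."""
    adj = {}
    for p, q, s in moves:
        adj.setdefault(p, []).append((q, s))
        adj.setdefault(q, []).append((p, s))
    sign = {}
    for root in pairs:
        if root in sign:
            continue
        sign[root] = 1            # fix the global sign on this orbit
        stack = [root]
        while stack:
            p = stack.pop()
            for q, s in adj.get(p, []):
                if q not in sign:
                    sign[q] = s * sign[p]
                    stack.append(q)
                elif sign[q] != s * sign[p]:
                    return None   # a closed chain forces sign = -sign
    return sign

# Toy data: a consistent orbit and a frustrated one.
assert consistent_signs([0, 1, 2], [(0, 1, 1), (1, 2, -1), (0, 2, -1)]) is not None
assert consistent_signs([0, 1, 2], [(0, 1, 1), (1, 2, 1), (0, 2, -1)]) is None
```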
\subsection{Branching rules for $Sp[\Delta](6,{\bf Z})$ orbits
into $Sp[\delta](4,{\bf Z})$ orbits}
In this section, we list the multiplicities of all the $Sp[\delta](4,{\bf Z})$
orbits which arise upon factorization of the orbits of $Sp[\Delta](6,{\bf Z})$.
Recall that, in genus $3$, the invariant set of pairs of $\Delta$-admissible
sextets with $p$ common spin structures can be decomposed further into
irreducible orbits. Let these orbits be denoted by ${\cal Q}_{pq}$, with $p$ indicating that the pairs of $\Delta$-admissible sextets
have $p$ common spin structures, and $q$ indicating
which orbit is being considered for given $p$.
In the table below, $N_{pq}$ denotes the multiplicity of the genus 2 orbit in the decomposition of the orbit ${\cal Q}_{pq}$. Also, $\# (Q)$ denotes the cardinality of the genus 2 even spin structure orbit.
\begin{table}[htb]
\begin{center}
\begin{tabular}{|c|c||c|c|c|c|c|c|} \hline
genus 2 orbit & $\#(Q)$ & $N_{01}$ & $N_{02}$ & $N_{21}$ & $N_{22}$ & $N_{23}$ & $N_{6}$
\\ \hline \hline
$Q^{0,0} _3$ & 6
& 12
& 0
& 0
& 0
& 0
& 0
\\ \hline
$Q^{0,1} _3$ & 36
& 4
& 4
& 0
& 0
& 0
& 0
\\ \hline
$Q^{1,1} _3$ & 18
& 0
& 4
& 0
& 0
& 0
& 0
\\ \hline
$Q^{0,0} _4$ & 9
& 0
& 0
& 0
& 4
& 8
& 0
\\ \hline
$Q^{0,1} _4$ & 18
& 0
& 0
& 8
& 0
& 0
& 0
\\ \hline
$Q^{1,1} _4$ & 18
& 0
& 0
& 2
& 2
& 2
& 0
\\ \hline
$Q^{0,0} _6$ &6
& 0
& 6
& 6
& 3
& 0
& 6
\\ \hline
$Q^{0,1} _6$ & 0
& 0
& 0
& 0
& 0
& 0
& 0
\\ \hline
$Q^{1,1} _6$ & 9
& 2
& 0
& 2
& 0
& 2
& 4
\\ \hline \hline
Total number of pairs & & 234 & 252 & 234 & 90 & 126 & 72
\\ \hline \hline
\end{tabular}
\end{center}
\caption{Branching rules for genus 3 orbits into genus 2 orbits}
\label{table:18}
\end{table}
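The internal consistency of this table can be verified directly: for each genus 3 orbit, the total number of pairs in the last row should equal the sum of the orbit cardinalities $\#(Q)$ weighted by the multiplicities $N_{pq}$. A short check, with the row data transcribed from the table:

```python
# Rows: genus 2 orbit -> (#(Q), N01, N02, N21, N22, N23, N6), from the table.
rows = {
    "Q3_00": (6, 12, 0, 0, 0, 0, 0),
    "Q3_01": (36, 4, 4, 0, 0, 0, 0),
    "Q3_11": (18, 0, 4, 0, 0, 0, 0),
    "Q4_00": (9, 0, 0, 0, 4, 8, 0),
    "Q4_01": (18, 0, 0, 8, 0, 0, 0),
    "Q4_11": (18, 0, 0, 2, 2, 2, 0),
    "Q6_00": (6, 0, 6, 6, 3, 0, 6),
    "Q6_01": (0, 0, 0, 0, 0, 0, 0),
    "Q6_11": (9, 2, 0, 2, 0, 2, 4),
}
# Weighted column sums reproduce the "Total number of pairs" row.
totals = [sum(r[0] * r[i] for r in rows.values()) for i in range(1, 7)]
assert totals == [234, 252, 234, 90, 126, 72]
```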
\bigskip
The computer analysis also shows that, in the above table,
all copies of any given orbit $Q_p^{a,b}[\delta]$ always occur with the sign $+$.
Thus there is no cancellation between the various copies
of any orbit $Q_p^{a,b}[\delta]$. (Of course, the global
sign in front of each $F_p^{a,b}[\delta]$ is a matter of
convention, depending on the choice of global sign for
the definition of $F_p^{a,b}[\delta]$.)
\medskip
To each $Sp[\Delta](6,{\bf Z})$ orbit ${\cal Q}_{pq}$, we can associate then a polynomial $P_{pq}$ in genus $2$ $\vartheta$-constants, defined
as the linear combination of the polynomials $F_p^{a,b}[\delta]$,
with coefficients given by the multiplicities with which
the $Sp[\delta](4,{\bf Z})$ orbit $Q_p^{a,b}[\delta]$ appears.
Thus $P_{01}$ and $P_{02}$ stand for the
two polynomials corresponding to the two genus 3 orbits of pairs
with 0 common spin structures; $P_{21}, P_{22}, P_{23}$ stand for the 3 orbits
of pairs with 2 common spin structures; and $P_6$ stands for the single orbit of pairs with 6 common spin structures. The overall sign of
each polynomial is arbitrary. The relative signs are of course fixed
by the stabilizer group of the genus 3 spin structure $\Delta$.
We have (we omit reference to $\Delta$ in $F$),
\begin{eqnarray}
P_{01} & = & - 12 F^{0,0} _3 + 4 F^{0,1}_3 + 2 F^{1,1} _6
\hskip .7in =
-12 \Xi_6 ^2 + 6 F - 2 F^c
\nonumber \\
P_{02} & = & - 4 F^{0,1} _3 + 4 F^{1,1} _3 + 6 F^{0,0} _6
\hskip .78in =
6 F + 6 F^c
\nonumber \\
P_{21} & = & + 8 F^{0,1} _4 - 2 F^{1,1} _4 + 6 F^{0,0} _6 + 2 F^{1,1} _6
\hskip .2in =
6 F + 12 F^c
\nonumber \\
P_{22} & = & - 4 F ^{0,0} _4 - 2 F^{1,1} _4 + 3 F^{0,0} _6
\hskip .77in
=4 \Xi _6 ^2 + 3 F + 2 F^c
\nonumber \\
P_{23} & = & - 8 F^{0,0} _4 - 2 F^{1,1} _4 + 2 F^{1,1} _6
\hskip .76in
= 8 \Xi _6 ^2 + 4 F^c
\nonumber \\
P_6 & = & + 6 F^{0,0} _6 + 4 F^{1,1}_6
\hskip 1.34in
= 6 F + 4 F^c
\end{eqnarray}
where we have used (\ref{Fexp}) to express all of these in terms of
the quantities $\Xi_6 ^2$, $F$ and $F^c$.
The previous discussion results in the following lemma:
\bigskip
\noindent
{\bf Lemma 4.} {\it Let the sign assignments $\epsilon(\Delta;\{\Delta_i\},\{\Delta_i'\})$ satisfy the transformation (\ref{phase3pairs}) for each orbit ${\cal Q}_{pq}$. Then we have}
\begin{equation}
\lim_{t\to 0}
\sum_{(\{\Delta_i\},\{\Delta_i'\})\in {\cal Q}_{pq}}
\epsilon(\Delta;\{\Delta_i\},\{\Delta_i'\})
\prod_{i=1}^6
\vartheta[\Delta_i](0,\Omega^{(3)})^2
\vartheta[\Delta_i'](0,\Omega^{(3)})^2
=
2^8\eta(\Omega^{(1)})^{24}
P_{pq}(\Omega^{(2)})
\end{equation}
\subsection{Candidates for $\Xi_6[\Delta](\Omega^{(3)})^2$}
Each orbit ${\cal Q}_{pq}$ contributes a consistent term to the candidate for the genus $3$ superstring measure, transforming covariantly under $Sp(6,{\bf Z})$ transformations.
Thus we can take an arbitrary linear combination of these
orbits and obtain a modular covariant expression
\begin{equation}
\label{combination}
\sum_{p,q}
N_{pq}
\sum_{(\{\Delta_i\},\{\Delta_i'\})\in {\cal Q}_{pq}}
\epsilon(\Delta;\{\Delta_i\},\{\Delta_i'\})
\prod_{i=1}^6
\vartheta[\Delta_i](0,\Omega^{(3)})^2
\vartheta[\Delta_i'](0,\Omega^{(3)})^2
\end{equation}
Candidates for $\Xi_6[\Delta](\Omega^{(3)})^2$ must tend to
$2^8\eta(\Omega^{(1)})^{24}
\,
\Xi_6[\delta](\Omega^{(2)})^2$.
In view of Lemma 4, the limit as $t\to 0$ of the linear combination (\ref{combination}) will be a multiple of $\eta(\Omega^{(1)})^{24}\,\Xi_6[\delta](\Omega^{(2)})^2$ if the multiplicities $N_{pq}$
satisfy
\begin{eqnarray}
\label{N1}
2N_{01}+2N_{02}+2N_{21}+N_{22}+2N_6
&=& 0\nonumber\\
-N_{01}+3N_{02}+6N_{21}+N_{22}+2N_{23}
+2N_6&=& 0
\end{eqnarray}
in which case the limit is given by
\begin{equation}
\label{N2}
2^8
\
\eta(\Omega^{(1)})^{24}
\
(-12\,N_{01}+4\,N_{22}+8\,N_{23})
\
\Xi_6[\delta](\Omega^{(2)})^2
\end{equation}
It is convenient to summarize our findings in the following theorem:
\bigskip
\noindent
{\bf Theorem 2.} {\it Let the genus $3$ expression
$\Xi_6[\Delta](\Omega^{(3)})^2$ be defined by (\ref{Xisquare}),
where $Q_{pq}$ are the orbits of pairs of $\Delta$-admissible sextets from Table \ref{table:1}.
Assume that the multiplicities $N_{pq}$ satisfy the condition
(\ref{N1}), and set $N=-12\,N_{01}+4\,N_{22}+8\,N_{23}$.
Let $\Delta$ factorize into an even spin structure $\delta$ at genus 2.
Then the expression $\Xi_6[\Delta](\Omega^{(3)})^2$ satisfies
the three conditions}
({\it i'}) $\Xi_6[\Delta](\Omega^{(3)})^2$ {\it is holomorphic on the Siegel upper half space};
\smallskip
({\it ii'}) $\Xi_6[\tilde\Delta](\tilde\Omega^{(3)})^2
=
{\rm det}\,(C\Omega^{(3)}+D)^{12}\,\Xi_6[\Delta](\Omega^{(3)})^2$;
\smallskip
({\it iii'})
$\lim _{t\to 0}
\Xi_6[\Delta](\Omega^{(3)})^2
=\eta(\Omega^{(1)})^{24}
\Xi_6[\delta](\Omega^{(2)})^2$.
\bigskip
For example, an integer combination for which the multiple of $2^8
\
\eta(\Omega^{(1)})^{24}
\Xi_6[\delta](\Omega^{(2)})^2$
is the square of an integer is $N_{01}=-2$, $N_{02}=4$, $N_{21}=-2$, $N_{23}=-1$ (with $N_{22}=N_6=0$), in which case we get
\begin{eqnarray}
\label{ans1}
&&
\lim _{t\to 0}
\sum_{(\{\Delta_i\},\{\Delta_i'\})\in {\cal Q}_{pq}}
\epsilon(\Delta;\{\Delta_i\},\{\Delta_i'\})
\prod_{i=1}^6
\vartheta[\Delta_i](0,\Omega^{(3)})^2
\vartheta[\Delta_i'](0,\Omega^{(3)})^2
\nonumber\\
&&
\qquad\qquad\qquad
=
16\cdot 2^8\eta(\Omega^{(1)})^{24}
\
\Xi_6[\delta](\Omega^{(2)})^2.
\end{eqnarray}
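The passage from Lemma 4 to (\ref{N1})--(\ref{N2}) and to the example above is a small linear-algebra computation in the basis $\Xi_6^2$, $F$, $F^c$, and can be checked directly; the $P_{pq}$ coefficients below are read off from the displayed reductions, and $N_{22}=N_6=0$ is taken in the example:

```python
# Coefficients of (Xi6^2, F, F^c) in each P_pq, from the displayed equations.
P = {
    "01": (-12, 6, -2),
    "02": (0, 6, 6),
    "21": (0, 6, 12),
    "22": (4, 3, 2),
    "23": (8, 0, 4),
    "6":  (0, 6, 4),
}

def combo(N):
    """Coefficients of (Xi6^2, F, F^c) in sum_pq N_pq * P_pq."""
    return tuple(sum(N.get(k, 0) * c[i] for k, c in P.items()) for i in range(3))

# Example multiplicities: the F and F^c parts cancel (conditions (N1))
# and the Xi6^2 coefficient is N = 16, as in the displayed limit.
xi2, f, fc = combo({"01": -2, "02": 4, "21": -2, "23": -1})
assert (f, fc) == (0, 0) and xi2 == 16
```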
\subsection{Vanishing of the genus 3 cosmological constant}
We address a final issue of physical and mathematical significance,
namely the behavior of the genus 3 {\sl cosmological constant}, defined by
\begin{eqnarray}
\Upsilon_8 \equiv
\sum _\Delta \Xi _6 [\Delta] (\Omega ^{(3)}) \vartheta [\Delta] (0, \Omega ^{(3)})^4
\end{eqnarray}
By its very construction, $\Xi _6 [\Delta]$ transforms under the
modular group $Sp (6, {\bf Z})$ as $\vartheta [\Delta](0,\Omega ^{(3)})^{12}$,
and therefore the quantity $\Upsilon_8$ is a genus 3 modular form of weight 8.
An infinite family of modular forms of weight $4k$ may be generated as follows,
\begin{eqnarray}
\Psi _{4k} (\Omega ^{(3)}) \equiv \sum _{\Delta}
\vartheta [\Delta ](0, \Omega ^{(3)})^{8k}
\end{eqnarray}
for $k$ any positive integer.
In \cite{dp04}, it was argued that $\Psi _8 = \Psi _4 ^2 /8$, based on
asymptotic identifications and numerical calculations.
We shall assume that this is the only independent holomorphic modular form
of weight 8, as we are not aware of any proof that this statement is true.
Given this assumption, as well as the asymptotic behavior established
in this paper for $\Xi _6 [\Delta] (\Omega ^{(3)}) $, as the surface undergoes
a separating degeneration, it is clear that the modular form $\Upsilon_8$
must vanish in this limit. But $\Psi _8$ is non-zero in the same limit.
As a result, $\Upsilon _8 =0$ throughout moduli space, and the
cosmological constant vanishes to three loop order.
\bigskip
\noindent
{\large \bf Acknowledgments}
\medskip
We are happy to thank Edward Witten for stimulating discussions.
Part of this work was carried out while one of us (E.D.) was at
the Aspen Center for Physics. All calculations were carried out using Maple 9.
\newpage
\section{INTRODUCTION}
The necessity of a nonperturbative input for effective explicit
calculations of hadronic processes, even in the high energy domain, is
an important reason for the development of nonperturbative methods in
QCD \cite{pQCD}. The nonperturbative effects are naturally related
to the nontrivial structure of the QCD vacuum. In the last
decades, great progress has been made in the study of the QCD ground
state, and a number of important results have been obtained that
connect the properties of the vacuum with hadron
characteristics by treating the QCD vacuum in the framework of the
instanton liquid model \cite{ILM,DP,DK,REV}. The idea that the
nontrivial vacuum structure can be relevant as well in high energy
hadronic processes was explicitly formulated in the context of the
so-called soft Pomeron problem in Refs. \cite{NL,LN}, and
further developed using the eikonal approximation and the method
of Wilson path-ordered exponentials in Refs. \cite{NACH,KRP}. Recently,
we investigated the instanton induced effects in high energy regime for the
electromagnetic (EM) quark form factor in the {\it weak-field approximation}, Ref. \cite{DCH1}.
In the present work, we report our results on evaluation of the
{\it total instanton contribution} to this quantity.
\section{EVOLUTION EQUATION FOR EM QUARK FORM FACTOR}
The electromagnetic Dirac quark form factors are determined via
the elastic scattering amplitude of a quark in an external EM
field: \begin{equation} \mathcal M_\mu =F_q\[(p_1-p_2)^2\] \bar u(p_1) \gamma_\mu
v(p_2) \ , \label{eq:ampl} \end{equation} where $u(p_1), \ v(p_2)$ are the
spinors of outgoing and incoming quarks.
It is assumed that both the momentum transfer $- t = Q^2 = (p_1 -p_2)^2$ and the total
center-of-mass energy $s = (p_1 +p_2)^2$ are large compared to the quark mass, that is:
$ (p_1p_2) \gg p_{1,2}^2 =m^2 $, or $ \cosh \chi \gg
1 $ .
Using the classification of the diagrams with respect to the momenta
carried by their internal lines, the resummation of all
logarithmic terms coming from the soft gluon subprocesses allows
us to express the ``soft'' part of the form factor, $F_q$, in terms of the vacuum average of the gauge
invariant path-ordered Wilson exponential (within the eikonal approximation) \cite{ALLLOGA,KRON,MMP,Pol}
\begin{equation} W (C_\chi, \alpha_s) = \frac{1}{N_c} \hbox{Tr} \Big< \mathcal P \hbox{e}^{i g \int_{C_\chi} \! d x_{\mu} \hat A_{\mu} (x)
} \Big\>_{0} \ . \label{1a} \end{equation} In Eq. (\ref{1a}), the
integration path corresponding to the considered process goes
along the closed contour $C_\chi$: the angle (cusp) with infinite
sides. We parameterize the integration path $C_\chi=\{z_\mu(t);
t=[-\infty,\infty]\}$ as follows \begin{equation} z_{\mu}(t)=\left\{
\begin{array}
[c]{c}%
v_{1}t \ ,\qquad-\infty<t<0 \ , \\
v_{2}t \ ,\qquad0<t<\infty \ .
\end{array} \right. \label{path}\end{equation} The Wilson integral (\ref{1a}) is
multiplicatively renormalizable \cite{KRC1,WREN}, therefore,
we can define the cusp anomalous dimension $\Gamma_{cusp}$:
$ \Gamma_{cusp} (\chi; \alpha_s) = - \mu \frac{d}{d \mu} \ln W(\chi; \mu^2/\lambda^2, \alpha_s)$ ,
which determines the high-energy asymptotics of the form factor \cite{KRON}.
Here $\bar \mu^2$ is the UV cutoff, $\mu^2$ is the normalization point, and $\lambda^2$ is the IR cutoff.
In one-loop order of perturbative expansion, the cusp anomalous dimension is given by \cite{KRON}: $
{\Gamma_{cusp}^{(1)}} ( \alpha_s) = \frac{\alpha_s}{\pi} C_F $ . In what
follows, we explicitly calculate the nonperturbative part $W_{np}$
within the ILM. The latter plays the role of an initial condition for the perturbative evolution.
\section{INSTANTON INDUCED CORRECTIONS}
Let us consider the instanton contribution to the evolution equation for the form factor.
The instanton field has the
general form \begin{equation} \hat A_\mu (x; \rho) = \frac{1}{g} {\hbox{\tt R}}^{ab} \sigma^a {\eta^\pm}^b_{\mu\nu} (x-z_0)_\nu \varphi
(x-z_0; \rho) , \label{if1} \end{equation} where $\varphi (x)$ is the gauge
dependent profile function, ${\hbox{\tt R}}^{ab}$ is the color
orientation matrix, $\sigma^a$'s are the Pauli matrices,
${\eta^\pm}^a_{\mu\nu}=\varepsilon_{4a\mu\nu}\mp(1/2)\varepsilon_{abc}\varepsilon_{bc\mu\nu}$
are 't Hooft symbols, and $(\pm)$ corresponds to the instanton or
anti-instanton solution. The averaging of the Wilson operator over
the nonperturbative vacuum is performed by the integration over
the coordinate of the instanton center $z_0$, the color
orientation and the instanton size $\rho$. The measure for the
averaging over the instanton ensemble reads $dI = d{\hbox{\tt R}}
\ d^4 z_0 \ dn_\rho $, where $ d{\hbox{\tt R}}$ refers to the
averaging over color orientation, and $dn_\rho$ depends on the
choice of the instanton size distribution. After averaging over
all possible embeddings of $SU_c(2)$ into $SU_c(3)$ \cite{SVZ80},
we write the Wilson integral (\ref{1a}) over the contour
(\ref{path}) in the single instanton approximation in the form:
$$ w_I(\chi) = \frac{1}{3} \hbox{Tr} \Big\< \frac{4}{9}\ \({\hbox{\tt I}} \times
{\hbox{\tt I}}\) \hbox{cos}{\alpha_1
} \hbox{cos}{\alpha_2}\ + \frac{1}{8} \(\lambda^A \times \lambda^A\) \cdot $$ \begin{equation}
\[\frac{1}{3}\hbox{cos}{\alpha_1} \hbox{cos}{\alpha_2} - \hat n_1^a\hat n_2^a \ \hbox{sin}{\alpha_1} \hbox{sin}{\alpha_2}
\] \Big\>_0 ,
\end{equation}
where $(i=1,2)$ \begin{equation} \hat
n^a_i = \frac{(-1)^i}{s(v_i,z_{0})} {\eta^\pm}^a_{\mu\nu} v_i^\mu
z_0^\nu \ , \label{iin} \end{equation}
\begin{equation} \alpha_i = s_i \cdot \int_0^\infty \! d\lambda \varphi\[(
v_i\lambda +(-1)^iz_0)^2; \rho_c \] \ , \label{it2}
\end{equation}
and $ s^2_i = z_0^2 -(v_i z_0)^2 $ , $(v_1v_2) = \cosh \chi$ in Minkowski
geometry. After evaluating the traces, the resulting gauge
invariant contribution to the Wilson loop of the single instanton
reads \cite{DCH04}
$$
w_I(\chi) = \frac{2}{3} \int dn_\rho \[w_c^I(\chi) + w_s^I(\chi) - w_c^I(0)-w_s^I(0)\]
$$
\begin{equation} w_c^I(\chi) = \int \! d^4 z_0 \ \hbox{cos} \ \alpha_1 \hbox{cos} \ \alpha_2 \ \end{equation}
\begin{equation} w_s^I(\chi) = - \int \! d^4 z_0 \ (\hat n^a_1\hat n^a_2) \ \hbox{sin} \ \alpha_1 \hbox{sin} \ \alpha_2 \
\end{equation}
where the normalized color correlation
factor is \begin{equation} \hat n^a_1\hat n^a_2 = -\frac{ \eta _{\mu\nu}^{a
}v_{1}^{\mu}z_{0}^\nu \eta _{\rho \sigma}^{a}v_{2}^{\rho
}z_{0}^\sigma} {s_1 s_2} \ . \end{equation}
In the realistic instanton vacuum model, there are two essential
effects: stabilization of the instanton density with respect to
unbounded expansion of instantons in size and screening of
instantons by surrounding background fields. To take into account
these features, we first approximate the narrow instanton size
distribution by the $\delta$-function: $ dn_\rho = n_c \delta(\rho-\rho_c)
d\rho $ , where the model parameters are \cite{REV}:
\begin{equation} { n_c} \approx 1~\hbox{fm}^{-4}, \ \ { \rho_c} \approx 1/3~\hbox{fm}
\label{param} \ , \end{equation} and assume that the integration over the
instanton size is already performed. The screening effect modifies
the instanton shape at large distances leading to the constrained
instantons \cite{DEMM99}. To take into account this screening and
to have also a simpler analytical form for $w_I(\chi)$, we use the
{\it Gaussian Ansatz} for the instanton profile function \begin{equation} \varphi
_{G}(x^{2})=\frac{1}{\rho_c^2}\hbox{e}^{-x^{2}/\rho_c ^{2}} \ .
\label{Inst_Profile_G} \end{equation} The parameters in this expression are
fixed by the requirement of reproducing the vacuum average $\Big\<
g^2A_\mu^a(0)A_\mu^a(0)\Big\> = 12\pi^2\rho_c^2n_c \ . $
Performing tedious calculations (for technical details, see Ref.
\cite{DCH04}), we find that the total expression
for the quark form factor at large-$Q^2$ with the one-loop
perturbative contribution and the nonperturbative contribution
found in the instanton model reads: \begin{equation} \frac{F_q \[Q^2\]}{F_q \[Q_0^2\]} =
\hbox{e}^{- \frac{2C_F}{\beta_0} \ln Q^2 \ln \ln Q^2 - \ln Q^2
\(B_{inst}^{LOG} - \frac{2C_F}{\beta_0} \)}
\label{eq:final} \end{equation} where the {\it all-order} instanton induced correction calculated
in the Gaussian approximation reads
\begin{equation}
B_{inst}^{LOG} = - 1.0053 \ \frac{\pi^2 n_c{
\rho_c}^4}{12} \ . \label{bi}
\end{equation}
Thus, the instanton induced effects (for the Gaussian simulation of instanton profile
function) yield the logarithmic, {\it i.e.,} sub-leading,
correction to the high energy behavior of quark EM form factor,
with the numerical coefficient smaller than that of the corresponding
perturbative term, $ B_{inst}^{LOG} \ll B_{pert}^{LOG} = \frac{2C_F}{\beta_0} $ .
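With the model parameters (\ref{param}), the hierarchy between the two coefficients can be made numerically explicit. The following is a rough estimate; the choice $n_f=3$ light flavors in $\beta_0$ is our assumption here, not taken from the text:

```python
import math

# Instanton liquid parameters, Eq. (param): n_c ~ 1 fm^-4, rho_c ~ 1/3 fm.
n_c, rho_c = 1.0, 1.0 / 3.0                              # in units of fm
B_inst = -1.0053 * math.pi**2 * n_c * rho_c**4 / 12.0    # Eq. (bi)

# One-loop perturbative coefficient, assuming n_f = 3 light flavors:
C_F, n_f = 4.0 / 3.0, 3
beta_0 = 11.0 - 2.0 * n_f / 3.0
B_pert = 2.0 * C_F / beta_0

# |B_inst| ~ 0.01 is a small, sub-leading correction to B_pert ~ 0.30:
assert abs(B_inst) < 0.05 * B_pert
```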
Comparing the results, Eqs. (\ref{eq:final}) and (\ref{bi}), with the weak-field calculations
of Ref. \cite{DCH1}, we conclude that the latter delivers a reasonable approximation, at least in the
case of the Gaussian profile function.
\section*{ACKNOWLEDGEMENTS}
The authors thank the Organizers of Diffraction'04 for hospitality
and financial support. The work is partially supported by RFBR
(Grant nos. 04-02-16445, 03-02-17291, 02-02-16194), Russian
Federation President's Grant no. 1450-2003-2, and INTAS (Grant no.
00-00-366).
\section{Introduction}
Originally the notion of relative hyperbolicity was proposed by
Gromov \cite{Gro} in order to generalize various examples of
algebraic and geometric nature such as Kleinian groups,
fundamental groups of hyperbolic manifolds of pinched negative
curvature, small cancellation quotients of free products, etc. It
has been extensively studied in the last several years from
different points of view. The main aim of this paper is to
generalize the small cancellation theory over hyperbolic groups
developed by Olshanskii \cite{Ols2} to relatively hyperbolic
settings. Our approach is based on the author's papers \cite{RHG,ESBG,RDF}, where the necessary background is provided. In the present paper we apply small cancellation theory over relatively hyperbolic groups to prove embedding theorems for countable groups. Further applications of our methods can be found in \cite{ABJ,SQ,BO,Min,Nor,Fac}.
In the paper \cite{HNN}, Higman, B.H. Neumann, and H. Neumann
proved that any countable group $G$ can be embedded into a
countable group $B$ such that every two elements of the same order
are conjugate in $B$. We notice that the group $B$ in \cite{HNN}
is constructed as a union of infinite number of subsequent
HNN--extensions and thus $B$ is never finitely generated. On the
other hand, any countable group can be embedded into a
$2$--generated group \cite{HNN}. Our first theorem is a natural
generalization of both of these results. For a group $G$, we
denote by $\pi (G)$ the set of all finite orders of elements of
$G$.
\begin{thm}\label{Conj}
Any countable group $G$ can be embedded into a $2$--generated
group $C$ such that any two elements of the same order are
conjugate in $C$ and $\pi (G)=\pi (C)$.
\end{thm}
\begin{cor}
Any countable torsion--free group can be embedded into a
(torsion--free) $2$--generated group with exactly $2$ conjugacy
classes.
\end{cor}
Since the number of finitely generated subgroups in any
$2$--generated group is at most countable and the number of all
torsion--free finitely generated groups is uncountable, we have
\begin{cor}\label{2klassa}
There exists an uncountable set of pairwise non--isomorphic
torsion--free $2$--generated groups with exactly $2$ conjugacy
classes.
\end{cor}
We note that the question of the existence of at least one
finitely generated group with exactly $2$ conjugacy classes other
than $\mathbb Z/2\mathbb Z$ was open until now. It can be found,
for example, in \cite[Problem 9.10]{Kou} or in \cite[Problem
FP20]{Prob}. (A positive solution was announced by Ivanov in 1989 \cite{Iva,IO0}, but a complete proof has never been published.) Corollary \ref{2klassa} provides the first examples of such groups. Starting with the group $G=\mathbb Z/p^{n-2}\mathbb Z \times H$ for $n\ge 3$, where $p$ is a prime number and $H$ is a torsion--free group, we can generalize the previous result.
\begin{cor}\label{n}
For any $n\in \mathbb N$, $n\ge 2$, there is an uncountable set of
pairwise non--isomorphic finitely generated groups with exactly
$n$ conjugacy classes.
\end{cor}
For large enough prime numbers $n$, the first examples of finitely
generated infinite periodic groups with exactly $n$ conjugacy
classes were constructed by Ivanov (see \cite{Ols-book}) as limits
of hyperbolic groups (although hyperbolicity was not used
explicitly). Here we say that $G$ is a {\it limit of hyperbolic
groups} if there exists a finitely generated free group $F$ and a
series of normal subgroups $N_1\lhd N_2\lhd \ldots $ of $F$ such
that $G\cong F/N$ for $N=\bigcup\limits_{i=1}^\infty N_i$ and each
of the groups $F/N_i$, $i=1, 2, \ldots $ is hyperbolic. In
contrast it is impossible to construct a finitely generated group
other than $\mathbb Z/2\mathbb Z$ with exactly $2$ conjugacy
classes in this way.
Indeed suppose that a finitely generated group $G$ has exactly $2$
conjugacy classes. If $G$ is not torsion--free, then $G$ is a
group of exponent $p$ for some prime $p$ as the orders of all
nontrivial elements of $G$ are equal. If $p=2$, $G$ is abelian and
hence is isomorphic to $\mathbb Z/2\mathbb Z$. In case $p>2$,
there exist non--trivial elements $g,t\in G $ such that
\begin{equation} \label{BS}
t^{-1}gt=g^2.
\end{equation}
The equality $g^{2^p-1}=t^{-p}gt^pg^{-1}=gg^{-1}=1$ implies
$2^p-1\equiv 0 \pmod p$. However, by Fermat's little theorem,
we have $2^p-2\equiv 0 \pmod p$, which contradicts the previous
equality. Assume now that $G$ is torsion--free. If $G$ is a limit
of hyperbolic groups $F/N_i$, $i=1,2,\ldots $, then for some $i$
large enough, there are elements $g,t\in F/N_i$ of infinite order
that satisfy (\ref{BS}). This leads to a contradiction again since
the equality of type (\ref{BS}) is impossible in a hyperbolic
group if the order of $g$ is infinite \cite{GH,Gro}.
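The arithmetic step in this argument, namely that $2^p\equiv 1$ and $2^p\equiv 2 \pmod p$ cannot hold simultaneously for a prime $p>2$, is immediate to confirm:

```python
# In a group of exponent p with t^{-1} g t = g^2, conjugating p times gives
# g = t^{-p} g t^p = g^{2^p}, so p would have to divide 2^p - 1.  Fermat's
# little theorem gives 2^p == 2 (mod p) instead, and 2 != 1 (mod p) for p > 2.
for p in [3, 5, 7, 11, 13, 101]:
    assert pow(2, p, p) == 2      # Fermat's little theorem
    assert pow(2, p, p) != 1      # hence 2^p - 1 is not divisible by p
```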
Another theorem from \cite{HNN} states that any countable group
$G$ can be embedded into a countable divisible group $D$. We
recall that a group $D$ is said to be {\it divisible} if for every
element $d\in D$ and every positive integer $n$, the equation
$x^n=d$ has a solution in $D$. A natural example of a divisible
group is $\mathbb Q$. The question of
the existence of a finitely generated divisible group was open
for a long time. The first examples of this type were
constructed by Guba \cite{Guba} (see also \cite{Ols-book}).
Later Mikhajlovskii and Olshanskii \cite{MO} constructed a more
general example of a finitely generated {\it verbally complete
group}, that is a group $W$ such that for every nontrivial freely reduced word $w(x_i)$ in
the alphabet $x_1^{\pm 1} ,x_2^{\pm 1},\ldots $ and every $v\in
W$, the equation $w(x_i)=v$ has a solution in $W$. That is, there
are elements $v_1, v_2, \ldots \in W$ such that $w(v_i)=v$ in $W$,
where $w(v_i)$ is the word obtained from $w(x_i)$ by substituting
$v_i$ for $x_i$, $i=1,2 \ldots $. Comparing these results one may
ask whether any countable group can be embedded into a finitely
generated divisible (or verbally complete) group. The next theorem
provides the affirmative answer.
\begin{thm} \label{WAC}
Any countable group $H$ can be embedded into a $2$--generated
verbally complete group $W$. Moreover, if $H$ is torsion--free,
then $W$ can be chosen to be torsion--free.
\end{thm}
Note that the condition $\pi (H)=\pi (W)$ cannot be ensured in
Theorem \ref{WAC}. Indeed, it is easy to show that if a divisible
group $W$ contains a nontrivial element of finite order, then $\pi
(W)=\mathbb N$. As above, we obtain
\begin{cor} \label{WAC1}
There exists an uncountable set of pairwise non--isomorphic
$2$--generated verbally complete groups.
\end{cor}
{\bf Acknowledgment.} The author is grateful to A. Minasyan and the anonymous referee for careful reading of the manuscript and useful remarks.
\section{Outline of the method}
In this section we give the proofs of Theorem \ref{WAC} and
Theorem \ref{Conj} modulo technical results which are obtained in
Sections 4-6. We assume the reader to be familiar with the notion
of a relatively hyperbolic group and refer to the next section for
precise definitions.
Let $G$ be a group that is hyperbolic relative to a collection of
subgroups $\Hl $. We divide the set of all elements of $G$ into
two subsets as follows. An element $g\in G$ is said to be {\it
parabolic} if $g$ is conjugate to an element of $H_\lambda $ for
some $\lambda \in \Lambda $. Otherwise $g$ is said to be {\it
hyperbolic}. Recall also that a group is {\it elementary } if it
contains a cyclic subgroup of finite index. The following result
concerning maximal elementary subgroups is proved in \cite[Theorem 4.3, Corollary 1.7]{ESBG}.
\begin{thm}\label{E(g)}
Let $G$ be a group hyperbolic relative to a collection of
subgroups $\Hl $, $g$ a hyperbolic element of infinite order of
$G$. Then the following conditions hold.
\begin{enumerate}
\item The element $g$ is contained in a unique maximal elementary
subgroup $E_G(g)$ of $G$, where $$E_G(g)=\{ f\in G\; :\;
f^{-1}g^nf=g^{\pm n}\; {\rm for \; some\; } n\in \mathbb N\} .$$
\item The group $G$ is hyperbolic relative to the collection
$\Hl\cup \{ E_G(g)\} $.
\end{enumerate}
\end{thm}
Given a subgroup $H\le G$, we denote by $H^0$ the set of all
hyperbolic elements of infinite order in $H$. Recall also that two
elements $f,g\in G^0$ are said to be {\it commensurable} (in $G$) if
$f^k$ is conjugate to $g^l$ in $G$ for some non--zero $k,l$.
\begin{defn} \label{suit} A subgroup $H\le G$ is called
{\it suitable}, if there exist two non--commensurable elements
$f_1, f_2\in H^0$ such that $E_G(f_1)\cap E_G(f_2)=1$.
\end{defn}
The next lemma is proved in Section 6.
\begin{lem}\label{non-com}
Let $G$ be a group hyperbolic relative to a collection of
subgroups $\Hl $, $H$ a suitable subgroup of $G$. Then there exist
infinitely many pairwise non--commensurable (in $G$) elements $h_1,
h_2, \ldots \in H^0$ such that for all $i=1,2, \ldots $,
$E_G(h_i)=\langle h_i\rangle $. In particular, $E_G(h_i)\cap
E_G(h_j)=\{ 1\} $ whenever $i\ne j$.
\end{lem}
Our main tool is the following theorem proved in Section 6. The
proof is based on certain small cancellation techniques developed
in Sections 4 and 5.
\begin{thm}\label{glue}
Let $G$ be a group hyperbolic relative to a collection of
subgroups $\Hl $, $H$ a suitable subgroup of $G$, and $t_1, \ldots
, t_m$ arbitrary elements of $G$. Then there exists an epimorphism
$\eta \colon G\to \overline{G}$ such that:
\begin{enumerate}
\item The group $\overline{G}$ is hyperbolic relative to $\{ \eta (H_\lambda ) \} _{\lambda \in \Lambda } $.
\item For any $i=1, \ldots , m$, we have $\eta (t_i)\in \eta (H)$.
\item The restriction of $\eta $ to $\bigcup\limits_{\lambda\in \Lambda }H_\lambda $ is injective.
\item $\eta (H)$ is a suitable subgroup of $\overline{G}$.
\item An element of $\overline{G} $ has finite order only if it is the image of an element of finite order in $G$. In particular, if all hyperbolic elements of $G$ have infinite order, then all hyperbolic elements of $\overline G$ have infinite order.
\end{enumerate}
\end{thm}
The next theorem is proved in \cite[Corollary 1.4]{HNN}. For
finitely generated groups this result was also proved by Dahmani
in \cite{Dah}. It is worth noting that we use the theorem for
infinitely generated groups in this paper.
\begin{thm}\label{HNN0}
Suppose that a group $G$ is hyperbolic relative to a collection of
subgroups $\Hl \cup \{ K\} $ and for some $\nu \in \Lambda $,
there exists a monomorphism $\iota \colon K\to H_{\nu }$. Then the
HNN--extension
\begin{equation}\label{HNN-pres}
\widetilde G=\langle G,t\; |\; t^{-1}kt=\iota (k),\; k\in K\rangle
\end{equation}
is hyperbolic relative to $\Hl $.
\end{thm}
Theorems \ref{WAC} and \ref{Conj} can be obtained in a uniform way
from the following result.
\begin{thm}\label{0}
Suppose that $R$ is a countable group such that for any elementary
group $E$ satisfying the condition $\pi (E)\subseteq \pi (R)$,
there exists a subgroup of $R$ isomorphic to $E$. Then there is an
embedding of $R$ into a 2--generated group $S=S(R)$ such that any
element of $S$ is conjugate to an element of $R$ in $S$. In
particular, $\pi (S)=\pi (R)$.
\end{thm}
\begin{proof}
The desired group $S$ is constructed as an inductive limit of
relatively hyperbolic groups as follows. Let us set $$G(0)=R\ast
F(x,y),$$ where $F(x,y)$ is the free group of rank $2$ generated
by $x$ and $y$. We enumerate all elements of $$R=\{ 1=r_0, r_1,
r_2, \ldots \} $$ and $$G(0)=\{ 1=g_0, g_1, g_2, \ldots \} .$$
Suppose that for some $i\ge 0$, the group $G(i)$ has already been
constructed together with an epimorphism $\xi _i\colon G(0)\to
G(i)$. We use the same notation for elements $x,y, r_0, r_1,
\ldots , g_0, g_1, \ldots $ and their images under $\xi _i $ in
$G(i)$. Assume that $G(i)$ satisfies the following conditions. (It
is straightforward to check these conditions for $G(0)$ and the
identity map $\xi _0\colon G(0)\to G(0)$.)
\begin{enumerate}
\item[(i)] The restriction of $\xi _i$ to the subgroup $R$ is
injective. In what follows we identify $R$ with its image in
$G(i)$.
\item[(ii)] $G(i)$ is hyperbolic relative to $R$.
\item[(iii)] The elements $x$ and $y$ generate a suitable subgroup
of $G(i)$.
\item[(iv)] All hyperbolic elements of $G(i)$ have infinite order.
In particular, $\pi (G(i))=\pi (R)$.
\item[(v)] The elements $g_0, \ldots , g_i$ are parabolic in
$G(i)$.
\item[(vi)] In the group $G(i)$, the elements $r_0, \ldots , r_i$
are contained in the subgroup generated by $x$ and $y$.
\end{enumerate}
The group $G(i+1)$ is obtained from $G(i)$ in two steps.
{\bf Step 1.} Let us take the element $g_{i+1}$ and construct a
group $G(i+1/2)$ as follows. If $g_{i+1}$ is a parabolic element
of $G(i)$, we set $G(i+1/2)=G(i)$. If $g_{i+1}$ is hyperbolic, the
order of $g_{i+1}$ is infinite by (iv). Furthermore, since $\pi
(E_{G(i)}(g_{i+1}))\subseteq \pi (G(i))=\pi (R)$, there is a
monomorphism $\iota\colon E_{G(i)}(g_{i+1})\to R$. Then we take
the HNN--extension
$$G(i+1/2 )=\langle G(i), t\; | \; t^{-1}et=\iota (e), \, e\in
E_{G(i)}(g_{i+1})\rangle .$$
In both cases $G(i+1/2)$ is hyperbolic relative to $R$. Indeed
this is obvious in the first case and follows from the second
assertion of Theorem \ref{E(g)} and Theorem \ref{HNN0} in the
second one. Note also that all hyperbolic elements of $G(i+1/2)$
have infinite order. (In the second case this immediately follows
from the description of periodic elements in HNN--extensions
\cite[Ch. IV, Theorem 2.4]{LS}.)
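For the reader's convenience we recall the statement of Britton's Lemma \cite[Ch. 5, Sec.2]{LS}, which is used repeatedly in Step 2. Let
$$G^\ast =\langle G, t\; |\; t^{-1}at=\phi (a),\; a\in A\rangle $$
be an HNN--extension of a group $G$ with associated subgroups $A$ and $B=\phi (A)$. If a word
$$g_0t^{\varepsilon _1}g_1t^{\varepsilon _2}\cdots t^{\varepsilon _n}g_n, \quad n\ge 1,$$
where $g_0, \ldots , g_n\in G$ and $\varepsilon _1, \ldots , \varepsilon _n=\pm 1$, contains no subword of the form $t^{-1}at$ with $a\in A$ or $tbt^{-1}$ with $b\in B$, then it represents a nontrivial element of $G^\ast $.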
{\bf Step 2.} First we wish to show that the subgroup generated by
$x$ and $y$ is suitable in $G(i+1/2)$. This is obvious in case
$g_{i+1}$ is parabolic in $G(i)$, so we consider the second case
only. Since $\langle x, y\rangle $ is suitable in $G(i)$ by (iii),
Lemma \ref{non-com} yields the existence of infinitely many
pairwise non--commensurable (in $G(i)$) hyperbolic elements
$h_j\in\langle x, y\rangle $ of infinite order, $j=1,2,\ldots $,
such that $E_{G(i)}(h_j)=\langle h_j\rangle $. At most one of
these elements is commensurable with $g_{i+1}$ in $G(i)$.
Therefore, there exist two hyperbolic
elements of infinite order, say $h_1,h_2\in \langle x, y\rangle $,
that are non--commensurable in $G(i)$ and
such that $h_j$ is not commensurable with $g_{i+1}$ in $G(i)$ for
$j=1,2$. In particular, $h_j$, $j=1,2$, is not conjugate to an
element of $E_{G(i)}(g_{i+1})$ as $\langle g_{i+1}\rangle $ has
finite index in $E_{G(i)}(g_{i+1})$. According to Britton's Lemma
on HNN--extensions \cite[Ch. 5, Sec.2]{LS}, this implies that
$h_1$ and $h_2$ are hyperbolic and non--commensurable in
$G(i+1/2)$. Furthermore, if for some $j=1,2$, $n\in \mathbb N$,
and $u\in G(i+1/2)$, we have $u^{-1}h_j^nu=h_j^{\pm n}$, then
$u\in G(i)$ by Britton's Lemma. Thus the explicit description of
maximal elementary subgroups from the first assertion of Theorem
\ref{E(g)} yields the equality $E_{G(i+1/2)}(h_j)=E_{G(i)}(h_j)$
for $j=1,2$. Finally since $E_{G(i)}(h_j)=\langle h_j\rangle $ and
$h_1$, $h_2$ are non--commensurable, we have
$$E_{G(i+1/2)}(h_1)\cap E_{G(i+1/2)}(h_2)=E_{G(i)}(h_1)\cap
E_{G(i)}(h_2)=\langle h_1\rangle \cap \langle h_2\rangle=\{ 1\}
.$$ By Definition \ref{suit} this means that the subgroup
generated by $x$ and $y$ is suitable in $G(i+1/2)$.
We now apply Theorem \ref{glue} to the group $G=G(i+1/2)$, the
subgroup $H=\langle x,y\rangle \le G(i+1/2)$, and the set of
elements $\{ t, r_{i+1}\} $. Let $G(i+1)= \overline{G}$, where
$\overline {G}$ is the quotient group provided by Theorem
\ref{glue}. Since $t$ becomes an element of $\langle x,y\rangle $
in $G(i+1)$, there is a naturally defined epimorphism $\xi
_{i+1}\colon G(0)\to G(i+1)$. Using Theorem \ref{glue} it is
straightforward to check properties (i)--(vi) for $G(i+1)$. This
completes the inductive step.
Let $N_i$ denote the kernel of $\xi _i $. Observe that $N_1, N_2,
\ldots $ form an increasing normal series and set $S=G(0)/N$,
where $N=\bigcup_{i=1}^\infty N_i$. By (i) the subgroup $R$ is
embedded into $S$. Further it is easy to see that $S$ is
$2$--generated. Indeed, $G(0)$ is generated by $x,y, r_1, r_2,
\ldots $. Condition (vi) yields $ r_i\in \langle x,y \rangle $ in
$S$ for any $i\in \mathbb N$. Thus $S$ is generated by $x$ and
$y$. Finally let $s$ be an element of $S$. We take an arbitrary
preimage $g\in G(0)$ of $s$. Then the image of the element $g$
becomes parabolic at a certain step according to (v). Thus $s$ is
conjugate to an element of $R$ in $S$. The theorem is proved.
\end{proof}
It remains to derive Theorems \ref{WAC} and \ref{Conj}.
\begin{proof}[Proof of Theorem \ref{Conj}.]
Let $\mathcal E$ denote the free product of all elementary groups
$E$ (taken up to isomorphism) such that $\pi (E)\subseteq \pi
(G)$. We set $G^\ast =G\ast \mathcal E$. By a theorem from
\cite{HNN}, we can embed $G^\ast $ into an (infinitely generated)
group $R$ such that all elements of the same order are conjugate
in $R$ and
\begin{equation}\label{pi}
\pi (R)=\pi (G^\ast )=\pi (G).
\end{equation}
We now apply Theorem \ref{0} and embed the group $R$ into a
2-generated group $C=S(R)$ such that any element of $C$ is
conjugate to an element of $R$. As all elements of the same order
are conjugate in $R$, this is so in $C$. The equality $\pi
(C)=\pi (G)$ follows from (\ref{pi}) as $\pi (C)=\pi (R)$ by
Theorem \ref{0}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{WAC}.]
First note that any countable group $G$ can be embedded into an
infinitely generated countable verbally complete group $R$ in the
following way. (The idea comes from the proof of the
Higman--Neumann--Neumann theorem on embeddings into divisible
groups.) We denote by $F=F(a_1, a_2, \ldots )$ the free group with
basis $a_1, a_2, \ldots $. Let us enumerate the set of all pairs
$$\{ p_1, p_2, \ldots \} =\{ (v, g)\; :\; v\in
F\setminus \{ 1\} ,\, g\in G\setminus\{ 1\} \} .$$ Starting with
the group $G$ we first set $G^\ast =G$ if $G$ is torsion--free,
and $G^\ast =G\ast E_1\ast E_2 \ast \ldots $, where $\{ E_1, E_2,
\ldots \} $ is the set of all elementary groups (up to
isomorphism), otherwise. Further we construct a sequence of groups
$G=U_0\le U_1\le \ldots $ as follows. Suppose that for some $i\ge
0$, the group $U_i$ has already been constructed and take
$p_{i+1}=(v,g)$. There are two possibilities to consider.
1) The element $g$ has infinite order. Then we define $U_{i+1}$ to
be the free product of $U_i$ and $F$ with the amalgamated
subgroups $\langle g\rangle $ and $\langle v \rangle $.
2) The order of $g$ is $n< \infty $. It is well--known
\cite[Theorem 5.2, Ch. 4]{LS} that the order of the element $v$ in
the group $H=\langle a_1, a_2, \ldots \; |\; v^n=1\rangle $ equals
$n$. Thus the free product of $U_i$ and $H$ with amalgamated
subgroups $\langle g\rangle $ and $\langle v \rangle $ is
well--defined. We set $U_{i+1}=U_i\ast _{\langle g\rangle =\langle
v \rangle } H$.
Now let $U(G)=\bigcup\limits_{i=0}^\infty U_i $. Obviously $G$
embeds in $U(G)$, $U(G)$ is countable and torsion--free whenever
$G$ is torsion--free, and any equation of type $w(x_i)=g$, where
$w(x_i)$ is a word in the alphabet $x_1^{\pm 1}, x_2^{\pm 1},
\ldots $ and $g\in G$, has a solution in $U(G)$. Finally we
consider the sequence of groups $R_1\le R_2 \le \ldots $, where
$R_1=U(G)$ and $R_{i+1}=U(R_i)$, $i=1,2,\ldots $. Clearly the
group $R=\bigcup\limits_{i=1}^\infty R_i $ is countable, verbally
complete, torsion--free whenever $G$ is torsion--free, and
contains a copy of every elementary group $E$ such that $\pi
(E)\subseteq \pi (G)$. Let $W=S(R)$ be the group provided by
Theorem \ref{0}.
Consider an equation $w(x_i)=v$ for some $v\in W$. By Theorem
\ref{0}, there is an element $t\in W$ such that $t^{-1}vt\in R$.
Since $R$ is verbally complete, there is a solution $x_1=r_1$,
$x_2=r_2$, $\ldots $ to the equation $w(x_i)=t^{-1}vt $ in $R$.
Clearly $x_1=tr_1t^{-1}$, $x_2=tr_2t^{-1}$, $\ldots $ is a
solution to the equation $w(x_i)=v$.
\end{proof}
\section{Preliminaries}
{\bf Some conventions and notation.} We write $W\equiv V$ to express the letter--for--letter equality of
words $W$ and $V$ in some alphabet. If a word $W$ decomposes as $W\equiv V_1UV_2$, we call $V_1$ (respectively, $V_2$) a {\it prefix} (respectively, {\it suffix}) of $W$.
For elements $g$, $t$ of a group $G$, $g^t$
denotes the element $t^{-1}gt$. Recall that a subset $X$ of a
group $G$ is said to be {\it symmetric} if for any $x\in X$, we
have $x^{-1}\in X$. In this paper all generating sets of groups
under consideration are supposed to be symmetric.
\vspace{3mm}
\noindent {\bf Word metrics and Cayley graphs.} Let $G$ be a group
generated by a (symmetric) set $\mathcal A$. Recall that the {\it
Cayley graph} $\Ga $ of a group $G$ with respect to the set of
generators $\mathcal A$ is an oriented labelled 1--complex with
the vertex set $V(\Ga )=G$ and the edge set $E(\Ga )=G\times
\mathcal A$. An edge $e=(g,a)$ goes from the vertex $g$ to the
vertex $ga$ and has label $\phi (e)\equiv a$. As usual, we denote
the origin and the terminus of the edge $e$, i.e., the vertices
$g$ and $ga$, by $e_-$ and $e_+$ respectively. Given a
combinatorial path $p=e_1e_2\ldots e_k$ in the Cayley graph $\Ga
$, where $e_1, e_2, \ldots , e_k\in E(\Ga )$, we denote by $\phi
(p)$ its label. By definition, $\phi (p)\equiv \phi
(e_1)\phi(e_2)\ldots \phi (e_k).$ We also denote by $p_-=(e_1)_-$
and $p_+=(e_k)_+$ the origin and the terminus of $p$ respectively.
The length $l(p)$ of $p$ is the number of edges of $p$.
Associated to $\mathcal A$ is the so--called {\it word metric} on
$G$. More precisely, the length $|g|_\mathcal A$ of an element
$g\in G$ is defined to be the length of a shortest word in
$\mathcal A$ representing $g$ in $G$. By abuse of notation, we also write
$|W|_\mathcal A$ to denote the length of the element of $G$ represented by a word $W$ in the alphabet $\mathcal A$. This is
to be distinguished from the length of the word $W$ itself, which is denoted by $\| W\| $.
The word metric on $G$
is defined by $dist_\mathcal A(f,g)=|f^{-1}g|_\mathcal A$. We also denote by
$dist _\mathcal A$ the natural extension of the word metric to the
Cayley graph $\Ga $.
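For example, let $G=F(a,b)$ be the free group with basis $\{ a,b\} $ and $\mathcal A=\{ a^{\pm 1}, b^{\pm 1}\} $. Then the Cayley graph $\Gamma (G, \mathcal A)$ is the $4$--valent tree, and $|g|_\mathcal A$ is the length of the reduced word representing $g$; for instance, for the word $W\equiv aa^{-1}b$ we have $\| W\| =3$, while $|W|_\mathcal A=1$.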
\vspace{3mm}
\noindent {\bf Van Kampen Diagrams.} Recall that a {\it van Kampen
diagram} $\Delta $ over a presentation
\begin{equation}
G=\langle \mathcal A\; | \; \mathcal O\rangle \label{ZP}
\end{equation}
is a finite oriented connected 2--complex endowed with a labelling
function $\phi : E(\Delta )\to \mathcal A$, where $E(\Delta ) $
denotes the set of oriented edges of $\Delta $, such that $\phi
(e^{-1})\equiv (\phi (e))^{-1}$. Labels and lengths of paths are
defined as in the case of Cayley graphs. Given a cell $\Pi $ of
$\Delta $, we denote by $\partial \Pi$ the boundary of $\Pi $;
similarly, $\partial \Delta $ denotes the boundary of $\Delta $.
The labels of $\partial \Pi $ and $\partial \Delta $ are defined
up to a cyclic permutation. An additional requirement is that for
any cell $\Pi $ of $\Delta $, the boundary label $\phi (\partial
\Pi)$ is equal to a cyclic permutation of a word $P^{\pm 1}$,
where $P\in \mathcal O$. Sometimes it is convenient to use the
notion of the so--called {\it $0$--refinement} in order to assume
diagrams to be homeomorphic to a disc. We do not explain here this
notion and refer the interested reader to \cite[Ch. 4]{Ols-book}.
The van Kampen Lemma states that a word $W$ over the alphabet
$\mathcal A$ represents the identity in the group given by
(\ref{ZP}) if and only if there exists a simply--connected planar
diagram $\Delta $ over (\ref{ZP}) such that $\phi (\partial \Delta
)\equiv W$ \cite[Ch. 5, Theorem 1.1]{LS}.
For every van Kampen diagram $\Delta $ over (\ref{ZP}) and any
fixed vertex $o$ of $\Delta $, there is a (unique) combinatorial
map $\gamma \colon Sk^{(1)} (\Delta )\to \Ga $ that preserves
labels and orientation of edges and maps $o$ to the vertex $1$ of
$\Ga $.
\vspace{3mm}
\noindent {\bf Hyperbolic spaces.} Here we briefly discuss some
properties of hyperbolic spaces used in this paper. For more
details we refer to \cite{BH,GH,Gro}.
One says that a metric space $M$ is {\it $\delta $--hyperbolic}
for some $\delta \ge 0$ (or simply {\it hyperbolic}) if for any
geodesic triangle $T$ in $M$, any side of $T$ belongs to the union
of the closed $\delta $--neighborhoods of the other two sides.
Recall that a path $p$ in a metric space is called {\it $(\lambda
, c)$--quasi--geodesic} for some $\lambda > 0$, $c\ge 0$, if
$$dist(q_-, q_+)\ge \lambda l(q)-c$$ for any subpath $q$ of $p$.
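For example, every geodesic is $(1,0)$--quasi--geodesic. On the other hand, a path that traverses a geodesic segment of length $l$ and then returns along the same segment is not $(\lambda , c)$--quasi--geodesic whenever $l>c/(2\lambda )$: taking $q$ to be the whole path, we obtain $dist (q_-,q_+)=0<\lambda l(q)-c$.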
We need the following result about quasi--geodesics in hyperbolic
spaces (see, for example, \cite[Ch. III. H, Theorem 1.7]{BH}).
\begin{lem}\label{qg}
For any $\delta \ge 0$, $\lambda > 0$, $c\ge 0$, there exists a
constant $\kappa =\kappa (\delta , \lambda , c)$ with the
following property. If $M$ is a $\delta $--hyperbolic space and
$p, q$ are $(\lambda , c)$--quasi--geodesic paths in $M$ with same
endpoints, then $p$ and $q$ belong to the closed $\kappa
$--neighborhoods of each other.
\end{lem}
The next result can be easily derived from the definition of a
hyperbolic space by drawing the diagonal.
\begin{lem}\label{GeodQ}
Let $M$ be a $\delta $--hyperbolic metric space, $Q$ a geodesic
quadrangle in $M$. Then each side of $Q$ belongs to the closed
$2\delta $--neighborhood of the other three sides.
\end{lem}
From Lemma \ref{qg} and Lemma \ref{GeodQ}, we immediately obtain
\begin{cor}\label{qgq}
For any $\delta \ge 0$, $\lambda > 0$, $c\ge 0$, there exists a
constant $K=K(\delta , \lambda , c)$ with the following property.
Let $Q$ be a quadrangle in a $\delta$--hyperbolic space whose
sides are $(\lambda , c)$--quasi--geodesic. Then each side of $Q$
belongs to the closed $K$--neighborhood of the
union of the other three sides.
\end{cor}
\noindent Indeed it suffices to set $K=2(\kappa +\delta )$, where $\kappa $ is provided by Lemma \ref{qg}.
The proof of the next lemma is also straightforward (see \cite[Lemma 1.7]{Ols2}).
\begin{lem}\label{subseg}
Let $Q$ be a geodesic quadrangle with sides $a,b,c,d$ in a $\delta $-hyperbolic space. Suppose that $l(a)\ge 4\max \{ l(b),\, l(d)\} $. Then
there exist subsegments $p$ , $q$ of the sides $a$ and $b$, respectively, such that $\min \{ l(p), \, l(q)\} \ge \frac{7}{20} l(a) -8\delta $ and ${\rm dist\,} (p_{\pm }, q_\pm )\le 8\delta $.
\end{lem}
Passing from geodesic to quasi-geodesic quadrangles one can easily obtain the following. The proof is straightforward and consists of replacing the quasi-geodesic quadrangle with a geodesic one having the same vertices, applying Lemma \ref{subseg}, and then Lemma \ref{qg}. We leave the details to the reader.
\begin{cor}\label{subsegcor}
Let $Q$ be a $(\lambda , c)$--quasi-geodesic quadrangle with sides $a,b,c,d$ in a $\delta $-hyperbolic space, $\kappa =\kappa (\lambda , c)$ the constant provided by Lemma \ref{qg}. Suppose that $l(a)\ge (4\max \{ l(b),\, l(d)\} +c)/\lambda $. Then there exist subsegments $p$, $q$ of the sides $a$ and $b$, respectively, such that $\min \{ l(p), \, l(q)\} \ge \frac{7}{20} (\lambda l(a) -c) -8\delta - 2\kappa $ and ${\rm dist\,} (p_{\pm }, q_\pm )\le 8\delta + 2\kappa $.
\end{cor}
The following lemma is also well known (see, for example, \cite[Ch.
III.H, Theorem 1.13]{BH}). Recall that a path in a metric space is
said to be {\it $k$--local geodesic} if any of its subpaths of length at
most $k$ is geodesic.
\begin{lem}\label{k-loc}
Let $r$ be a $k$--local geodesic in a $\delta $--hyperbolic metric
space for some $k>8\delta $. Then $r$ is $(1/3, 2\delta
)$--quasi--geodesic.
\end{lem}
\vspace{3mm}
\noindent {\bf Relatively hyperbolic groups.} There are many
equivalent definitions of relatively hyperbolic groups (see
\cite{Bow,F,RHG} and references therein). In this paper we use the
isoperimetric characterization suggested in \cite{RHG}.
More precisely, let $G$ be a group, $\Hl $ a collection of
subgroups of $G$, $X$ a subset of $G$. We say that $X$ is a {\it
relative generating set of $G$ with respect to $\Hl $} if $G$ is
generated by $X$ together with the union of all $H_\lambda $. (In
what follows we assume $X$ to be symmetric.) In this situation the group
$G$ can be regarded as a quotient group of the free product
\begin{equation}
F=\left( \ast _{\lambda\in \Lambda } H_\lambda \right) \ast F(X),
\label{F}
\end{equation}
where $F(X)$ is the free group with the basis $X$. Let $N$ denote
the kernel of the natural homomorphism $F\to G$. If $N$ is a
normal closure of a subset $\mathcal Q\subseteq N$ in the group
$F$, we say that $G$ has {\it relative presentation}
\begin{equation}\label{G}
\langle X,\; H_\lambda, \lambda\in \Lambda \; |\; \mathcal Q
\rangle .
\end{equation}
If $\sharp\, X<\infty $ and $\sharp\, \mathcal Q<\infty $, the
relative presentation (\ref{G}) is said to be {\it finite} and the
group $G$ is said to be {\it finitely presented relative to the
collection of subgroups $\Hl $.}
Set
\begin{equation}\label{H}
\mathcal H=\bigsqcup\limits_{\lambda\in \Lambda} (H_\lambda
\setminus \{ 1\} ) .
\end{equation}
Given a word $W$ in the alphabet $X\cup \mathcal H$ such that $W$
represents $1$ in $G$, there exists an expression
\begin{equation}
W=_F\prod\limits_{i=1}^k f_i^{-1}Q_i^{\pm 1}f_i \label{prod}
\end{equation}
with the equality in the group $F$, where $Q_i\in \mathcal Q$ and
$f_i\in F $ for $i=1, \ldots , k$. The smallest possible number
$k$ in a representation of the form (\ref{prod}) is called the
{\it relative area} of $W$ and is denoted by $Area^{rel}(W)$.
\begin{defn}
A group $G$ is {\it hyperbolic relative to a collection of
subgroups} $\Hl $ if $G$ is finitely presented relative to $\Hl $
and there is a constant $L>0$ such that for any word $W$ in $X\cup
\mathcal H$ representing the identity in $G$, we have $Area^{rel}
(W)\le L\| W\| $.
\end{defn}
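A basic example is provided by free products: if $G=H_1\ast H_2$, then $G$ is hyperbolic relative to $\{ H_1, H_2\} $. Indeed, in this case one can take $X=\emptyset $ and $\mathcal Q=\emptyset $, since $G$ coincides with the free product $F$ defined by (\ref{F}), so every word in $\mathcal H$ representing $1$ in $G$ has relative area $0$.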
In particular, $G$ is an ordinary {\it hyperbolic group} if $G$ is
hyperbolic relative to the trivial subgroup. An equivalent
definition says that $G$ is hyperbolic if it is generated by a
finite set $X$ and the Cayley graph $\Gamma (G, X)$ is hyperbolic.
In the relative case these approaches are not equivalent, but we
still have the following \cite[Theorem 1.7]{RHG}.
\begin{lem}\label{CG}
Suppose that $G$ is a group hyperbolic relative to a collection of
subgroups $\Hl $. Let $X$ be a finite relative generating set of
$G$ with respect to $\Hl $. Then the Cayley graph $\G $ of $G$
with respect to the generating set $X\cup \mathcal H$ is a
hyperbolic metric space.
\end{lem}
Observe also that the relative area of a word $W$ representing $1$
in $G$ can be defined geometrically via van Kampen diagrams. Let
$G$ be a group given by the relative presentation (\ref{G}) with
respect to a collection of subgroups $\Hl $. We denote by
$\mathcal S$ the set of all words in the alphabet $\mathcal H$
representing the identity in the group $F$ defined by (\ref{F}).
Then $G$ has the ordinary (non--relative) presentation
\begin{equation}\label{Gfull}
G=\langle X\cup\mathcal H\; |\;\mathcal S\cup \mathcal Q \rangle .
\end{equation}
A cell in a van Kampen diagram $\Delta $ over (\ref{Gfull}) is
called a {\it $\mathcal Q$--cell} if its boundary is labelled by a
word from $\mathcal Q$. We denote by $N_\mathcal Q(\Delta )$ the
number of $\mathcal Q$--cells of $\Delta $. Obviously given a word
$W$ in $X\cup\mathcal H$ that represents $1$ in $G$, we have
$$
Area^{rel}(W)=\min\limits_{\phi (\partial \Delta ) \equiv W} \{
N_\mathcal Q (\Delta )\} ,
$$
where the minimum is taken over all van Kampen diagrams with
boundary label $W$.
\vspace{3mm}
\noindent {\bf $H_\lambda $--components.} Finally we are going to
recall an auxiliary terminology introduced in \cite{RHG}, which
plays an important role in our paper. Let $G$ be a group, $\Hl $ a
collection of subgroups of $G$, $X$ a finite generating set of $G$
with respect to $\Hl $, $q$ a path in the Cayley graph $\G $. A
subpath $p$ of $q$ is called an {\it $H_\lambda $--component} for
some $\lambda \in \Lambda $ (or simply a {\it component}) of $q$,
if the label of $p$ is a word in the alphabet $H_\lambda\setminus
\{ 1\} $ and $p$ is not contained in a bigger subpath of $q$ with
this property. Two $H_\lambda $--components $p_1, p_2$ of a path
$q$ in $\G $ are called {\it connected} if there exists a path $c$
in $\G $ that connects some vertex of $p_1$ to some vertex of
$p_2$ and ${\phi (c)}$ is a word consisting of letters from $
H_\lambda\setminus\{ 1\} $. In algebraic terms this means that all
vertices of $p_1$ and $p_2$ belong to the same coset $gH_\lambda $
for a certain $g\in G$. Note that we can always assume $c$ to have
length at most $1$, as every nontrivial element of $H_\lambda $ is
included in the set of generators. An $H_\lambda $--component $p$
of a path $q$ is called {\it isolated } if no distinct $H_\lambda
$--component of $q$ is connected to $p$.
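For example, let $G=H_1\ast H_2$ and let $q$ be a path in $\G $ labelled $h_1h_2h_1^\prime $, where $h_1, h_1^\prime \in H_1\setminus \{ 1\} $ and $h_2\in H_2\setminus \{ 1\} $. Then $q$ has two $H_1$--components, namely the edges labelled $h_1$ and $h_1^\prime $, and one $H_2$--component. The two $H_1$--components are not connected, hence each of them is isolated, since $H_1\ne h_1h_2H_1$ in the free product.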
The lemma below is a simplification of Lemma 2.27 from \cite{RHG}.
\begin{lem}\label{Omega}
Suppose that $G$ is a group that is hyperbolic relative to a
collection of subgroups $\Hl $, $X$ a finite generating set of $G$
with respect to $\Hl $. Then there exist a constant $K>0$ and a
finite subset $\Omega \subseteq G$ such that the following
condition holds. Let $q$ be a cycle in $\G $, $p_1, \ldots , p_k$
a set of isolated $H_\lambda $--components of $q$ for some
$\lambda\in \Lambda $, $g_1, \ldots , g_k$ the elements of $G$
represented by the labels of $p_1, \ldots , p_k$ respectively.
Then for any $i=1, \ldots , k$, $g_i$ belongs to the subgroup
$\langle \Omega \rangle \le G$ and the lengths of $g_i$ with
respect to $\Omega $ satisfy the inequality $$ \sum\limits_{i=1}^k
|g_i|_{\Omega }\le Kl(q).$$
\end{lem}
\section{Small cancellation conditions}
The main aim of this and the following four sections is to
generalize the small cancellation theory over hyperbolic groups
developed by Olshanskii in \cite{Ols2} to relatively hyperbolic
groups. The fact that the Cayley graph $\Gamma (G,X)$ of a
hyperbolic group $G$ generated by a finite set $X$ is a hyperbolic
metric space plays the key role in \cite{Ols2}. Lemma \ref{CG}
allows us to extend this theory to the case of relatively hyperbolic
groups. However, this extension is not straightforward, as the
Cayley graph $\G $ defined in the previous section is not
necessarily locally finite.
Roughly speaking, one can divide results about small cancellation conditions from \cite{Ols2}
into three classes. The first class consists of results about diagrams over presentations
satisfying small cancellation conditions, which do not use local finiteness of Cayley graphs at all. They can be stated and proved in our settings without any essential changes. The main result of this part is Lemma \ref{Gr0} stated below.
Proofs of results from the second class do not use local finiteness of the Cayley graph either, but they do employ certain facts concerning geometric and algebraic properties of ordinary hyperbolic groups. These results can also be reproved with minor changes modulo the paper
\cite{RHG}, where the corresponding facts about relatively hyperbolic groups are proved. For the convenience of the reader we provide self-contained proofs in Sections 5 and 6.
Finally, results of the third type explain how to choose words of some specific form satisfying small cancellation conditions. Unlike in ordinary small cancellation theory over a free group, verifying small cancellation conditions over hyperbolic groups is much harder, and local finiteness of Cayley graphs is essentially used in \cite{Ols2}. Our approach here is different and is explained in Section \ref{wwsc}.
Given a set of words $\mathcal R$ in an alphabet $\mathcal A$, we
say that $\mathcal R$ is {\it symmetrized}, if for any $R\in
\mathcal R$, $\mathcal R$ contains all cyclic shifts of $R^{\pm
1}$. Further let $G$ be a group generated by a set $\mathcal A$.
We say that a word $R$ is {\it $(\lambda , c)$--quasi--geodesic in
$G$}, if any path in the Cayley graph $\Gamma (G, \mathcal A)$
labelled $R$ is $(\lambda , c)$--quasi--geodesic.
\begin{defn}\label{piece}
Let $G$ be a group generated by a set $\mathcal A$, $\mathcal R$ a
symmetrized set of words in $\mathcal A$. For $\e
>0$, a subword $U$ of a word $R\in \mathcal R$ is called an {\it
$\e $--piece} if there exists a word $R^\prime \in \mathcal R$
such that:
\begin{enumerate}
\item[(1)] $R\equiv UV$, $R^\prime \equiv U^\prime V^\prime $, for
some $V, U^\prime , V^\prime $; \item[(2)] $U^\prime = YUZ$ in $G$
for some words $Y,Z$ in $\mathcal A$ such that $\max \{ \| Y\| ,
\,\| Z\| \} \le \e $; \item[(3)] $YRY^{-1}\ne R^\prime $ in the
group $G$.
\end{enumerate}
Similarly, a subword $U$ of $R\in \mathcal R$ is called {\it an
$\e ^\prime $--piece} if:
\begin{enumerate}
\item[($1^\prime $)] $R\equiv UVU^\prime V^\prime $ for some $V,
U^\prime , V^\prime $; \item[($2^\prime $)] $U^\prime =YU^{\pm
1}Z$ in the group $G$ for some $Y,Z$ satisfying $\max\{ \| Y\| ,
\| Z\| \}\le \e $.
\end{enumerate}
\end{defn}
\begin{defn}\label{DefSC}
We say that the set $\mathcal R$ satisfies the {\it $C(\e , \mu ,
\lambda , c, \rho )$--condition} for some $\e \ge 0$, $\mu >0$,
$\lambda >0$, $c\ge 0$, $\rho >0$, if
\begin{enumerate}
\item[(1)] $\| R\| \ge \rho $ for any $R\in \mathcal R$;
\item[(2)] any word $R\in \mathcal R$ is $(\lambda ,
c)$--quasi--geodesic; \item[(3)] for any $\e $--piece of any word
$R\in \mathcal R$, the inequality $\max \{ \| U\| ,\, \| U^\prime
\| \} < \mu \| R\| $ holds (using the notation of Definition
\ref{piece}).
\end{enumerate}
Further the set $\mathcal R$ satisfies the {\it $C_1(\e , \mu ,
\lambda , c, \rho )$--condition} if in addition the condition
$(3)$ holds for any $\e ^\prime $--piece of any word $R\in
\mathcal R$.
\end{defn}
\begin{figure}
\unitlength 1mm
\linethickness{0.4pt}
\ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi
\begin{picture}(77.43,43.07)(-28,1)
\put(48.26,24.48){\oval(58.34,35.18)[]}
\put(45.96,21.3){\oval(31.82,10.78)[]}
\put(35,10.41){\vector(-1,-3){.07}}\qbezier(35.95,15.87)(35.53,9.41)(33.01,6.94)
\put(55.6,11.46){\vector(-1,3){.07}}\qbezier(58.02,6.94)(54.45,11.56)(55.08,15.77)
\put(45.3,15.92){\vector(-1,0){.07}}
\put(45.3,26.7){\vector(1,0){.07}}
\put(45.3,28.6){\makebox(0,0)[cc]{$u$}}
\put(46.14,42.08){\vector(-1,0){.07}}
\put(46.14,39){\makebox(0,0)[cc]{$w$}}
\put(46.14,6.9){\vector(1,0){.07}}
\put(38.68,21.55){\makebox(0,0)[cc]{$\Pi $}}
\put(46.04,17.97){\makebox(0,0)[cc]{$q_1$}}
\put(45.3,4.31){\makebox(0,0)[cc]{$q_2$}}
\put(59,11.46){\makebox(0,0)[cc]{$s_1$}}
\put(32.5,11.46){\makebox(0,0)[cc]{$s_2$}}
\multiput(35.11,6.83)(.033715186,.034111836){265}{\line(0,1){.034111836}}
\multiput(37.1,6.94)(.033700047,.034101238){262}{\line(0,1){.034101238}}
\multiput(39,6.83)(.03356519,.03444849){119}{\line(0,1){.03444849}}
\multiput(40.99,6.94)(.03349725,.03349725){91}{\line(1,0){.03349725}}
\multiput(48.04,13.98)(.0332855,.0332855){60}{\line(0,1){.0332855}}
\multiput(42.99,6.94)(.0334447,.0334447){88}{\line(1,0){.0334447}}
\multiput(48.04,11.98)(.03444849,.03356519){119}{\line(1,0){.03444849}}
\multiput(52.98,6.94)(.03349725,.03465232){91}{\line(0,1){.03465232}}
\multiput(54.97,6.94)(.0331933,.0331933){57}{\line(0,1){.0331933}}
\multiput(48.04,15.87)(-.0331933,-.0350374){57}{\line(0,-1){.0350374}}
\multiput(54.03,15.87)(-.033652475,-.034067938){253}{\line(0,-1){.034067938}}
\multiput(54.97,14.93)(-.03370682,-.03415033){237}{\line(0,-1){.03415033}}
\multiput(54.97,12.93)(-.03363586,-.0342365){175}{\line(0,-1){.0342365}}
\multiput(55.5,11.56)(-.03372999,-.03451441){134}{\line(0,-1){.03451441}}
\multiput(42.04,15.87)(-.03415033,-.03370682){237}{\line(-1,0){.03415033}}
\multiput(40.05,15.87)(-.03370005,-.03450243){131}{\line(0,-1){.03450243}}
\multiput(38.05,15.87)(-.0333689,-.0333689){63}{\line(-1,0){.0333689}}
\put(46,11.77){\makebox(0,0)[cc]{$\Gamma $}}
\put(65.06,34.16){\makebox(0,0)[cc]{$\Xi $}}
\end{picture}
\caption{Contiguity subdiagram of a cell to the boundary of the
diagram.}\label{ConSubd}
\end{figure}
Suppose that $G$ is a group defined by (\ref{ZP}). Given a set of
words $\mathcal R$, we consider the quotient group of $G$
represented by
\begin{equation}\label{quot}
G_1= \langle \mathcal A\; |\;
\mathcal O\cup \mathcal R\rangle .
\end{equation}
A cell in a van Kampen diagram over (\ref{quot}) is called an {\it
$\mathcal R$--cell} if its boundary label is a word from $\mathcal
R$.
Let $\Delta $ be a van Kampen diagram over (\ref{quot}), $\Pi $ an
$\mathcal R$--cell of $\Delta $. Suppose that there is a simple
closed path
\begin{equation}\label{pathp}
p=s_1q_1s_2q_2
\end{equation}
in $\Delta $, where $q_1$ (respectively $q_2$) is a subpath of
$\partial \Pi $ (respectively $\partial \Delta $) and
\begin{equation}\label{sides}
\max \{ \| s_1\| ,\, \| s_2\| \} \le \e
\end{equation}
for some constant $\e $. By $\Gamma $ we denote the subdiagram of
$\Delta $ bounded by $p$ (see Fig. \ref{ConSubd}). If $\Gamma $
contains no $\mathcal R$--cells, we say that $\Gamma $ is an {\it
$\e $--contiguity subdiagram} of $\Pi $ to $\partial \Delta $, and
$q_1$ is the {\it contiguity arc} of $\Pi $
to $\partial \Delta $. The ratio $l(q_1)/l(\partial \Pi )$ is called the {\it contiguity degree}
of $\Pi $ to $\partial \Delta $
and is denoted by $(\Pi , \Gamma , \partial \Delta )$.
Recall that a van Kampen diagram $\Delta $ over (\ref{quot}) is said to be
{\it reduced} if $\Delta $ has minimal number of $\mathcal
R$--cells among all diagrams over (\ref{quot}) having the same
boundary label. A path $p$ in $\Delta $ is called {\it $(\lambda , c)$-quasi-geodesic}, if its label is $(\lambda , c)$-quasi-geodesic, i.e., some (equivalently, any) path in $\Gamma (G, \mathcal A)$ with the same label is $(\lambda , c)$-quasi-geodesic.
The following is an analogue of the well known Greendlinger Lemma.
\begin{lem}\label{Gr0}
Let $G$ be a group with a presentation $G=\langle \mathcal A\, |\, \mathcal O\rangle $. Suppose that the Cayley graph $\Gamma (G, \mathcal A)$ of $G$ is hyperbolic. Then for any $\lambda\in (0,1]$, there exists
$\mu >0$ such that for every $c\ge 0$, there are $\e \ge 0$ and $\rho >0$ with the following property.
Let $\mathcal R$ be a symmetrized set of words in $\mathcal A$
satisfying the $C(\e, \mu , \lambda , c , \rho )$--condition,
$\Delta $ a reduced van Kampen diagram over the presentation
(\ref{quot}) whose boundary can be represented as $\partial \Delta
=q^1\ldots q^r$ for some $1\le r\le 4$, where $q^1, \ldots , q^r$
are $(\lambda , c)$--quasi--geodesic. Assume that $\Delta $ has at
least one $\mathcal R$--cell. Then there exist an $\mathcal
R$--cell $\Pi $ in $\Delta $ and $\e$--contiguity subdiagrams
$\Gamma _1, \ldots , \Gamma _r$ of $\Pi $ to $q^1, \ldots , q^r$,
respectively, such that
$$ \sum\limits_{i=1}^r(\Pi , \Gamma _i, q^i)> 1-23\mu .$$
\end{lem}
This lemma is proved in \cite{Ols2} (see Lemma 6.6 there) under the assumption
that the group $G$ is hyperbolic, i.e., $\Gamma (G, \mathcal A)$ is hyperbolic and $\mathcal A$ is finite.
However the finiteness of $\mathcal A$ is not used in the proof, and this assumption can be removed from the statement of the lemma.
More precisely, to prove Lemma \ref{Gr0} one needs to repeat Section 5 and Section 6 (from Lemma 6.1 to Lemma 6.6) of \cite{Ols2} verbatim. We note that all results of Section 5 and Lemma 6.1 from Section 6 of \cite{Ols2} are stated and proved for arbitrary (not necessarily finite) presentations of arbitrary (not necessarily hyperbolic) groups. The only place where hyperbolicity of $G$ is used in Sections 5 and 6 of \cite{Ols2} is the proof of Lemma 6.2. The assumption that $G$ is hyperbolic is used there to conclude that the Cayley graph $\Gamma (G, \mathcal A)$ is hyperbolic, which allows one to apply Lemmas 1.8 and 1.9 from \cite{Ols2}. However, Lemmas 1.8 and 1.9 are stated for arbitrary (not necessarily locally finite) hyperbolic metric spaces. Thus, assuming hyperbolicity of $\Gamma (G, \mathcal A)$, the proof of Lemma 6.2 and the proofs of the subsequent Lemmas 6.3--6.6 in \cite{Ols2} work without any changes.
Lemma \ref{Gr0} in the form stated above plays an important role in \cite{Fac}, where it is used to analyze centralizers of elements in the quotient group. However in this paper we only need the following particular case corresponding to $r=1$.
\begin{cor}\label{Gr}
Suppose that the Cayley graph $\Gamma (G, \mathcal A)$ of a group $G=\langle \mathcal A\, |\, \mathcal O\rangle $ is hyperbolic. Then for any $\lambda\in (0,1]$, there exists
$\mu >0$ such that for every $c\ge 0$, there are $\e \ge 0$ and $\rho >0$ with the following property.
Let $\mathcal R$ be a symmetrized set of words in $\mathcal A$
satisfying the $C(\e, \mu , \lambda , c , \rho )$--condition,
$\Delta $ a reduced van Kampen diagram over the presentation
(\ref{quot}) whose boundary is $(\lambda , c)$--quasi--geodesic. Assume that $\Delta $ has at
least one $\mathcal R$--cell. Then there exists an $\mathcal
R$--cell $\Pi $ in $\Delta $ and $\e$--contiguity subdiagram
$\Gamma $ such that $(\Pi , \Gamma , \partial \Delta )> 1-23\mu .$
\end{cor}
\section{Hyperbolicity of the quotient}
Our next goal is to show that adding relations satisfying
sufficiently strong small cancellation conditions preserves
relative hyperbolicity. Throughout this section let $G$ be a group hyperbolic
relative to a collection of subgroups $\Hl $, $X$ a finite
relative generating set of $G$ with respect to $\Hl $. We set
$\mathcal A=X\cup\mathcal H$ and $\mathcal O=\mathcal S\cup
\mathcal Q$, where $\mathcal S$ and $\mathcal Q$ are defined as in
(\ref{Gfull}). In the proof of the lemma below we follow the idea
from \cite[Lemma 6.7]{Ols2} with minor changes.
\begin{lem}\label{G1}
For any $\lambda\in (0,1]$, $c\ge 0$, $N>0$, there exist $\mu
>0$, $\e \ge 0$, and $\rho >0$ such that for any finite
symmetrized set of words $\mathcal R$ satisfying the $C(\e , \mu ,
\lambda , c, \rho )$--condition, the following hold.
\begin{enumerate}
\item The group $G_1$ defined by (\ref{quot}) is hyperbolic
relative to the collection of images of subgroups $H_\lambda,
\lambda \in \Lambda $, under the natural homomorphism $G\to G_1$.
In particular, the Cayley graph of $G_1$ with respect to the
generating set $\mathcal A$ is hyperbolic.
\item The restriction of the natural homomorphism $\gamma \colon
G\to G_1$ to the subset of elements of length at most $N$ with
respect to the generating set $\mathcal A$ is injective.
\end{enumerate}
\end{lem}
\begin{proof}
Let us choose the constants $\mu $, $\e$, $\rho $ according to Corollary \ref{Gr}.
For a word $W$ in the alphabet $X\cup \mathcal H$ that represents
$1$ in $G_1$, we denote by $Area_1^{rel} (W)$ its relative area,
that is, the minimal number $k$ in a representation of the form
$$ W=_F\prod\limits_{i=1}^k f_iR_i^{\pm 1}f_i^{-1}$$ with the equality in the group
$F$ defined by (\ref{F}), where $R_i\in \mathcal R\cup \mathcal
Q$. Similarly, by $Area^{rel}(W) $ we denote the relative area of
a word $W$ representing $1$ in $G$, i.e., the minimal number $k$
in the above decomposition, where $R_i\in \mathcal R$.
As $G$ is hyperbolic relative to $\Hl $, there exists a constant
$L>0$ such that for any word $W$ in $X\cup \mathcal H$
representing $1$ in $G$, we have $Area^{rel}(W)\le L\| W\| $. To
prove the lemma it suffices to show that there is a constant
$\alpha >0$ such that for any word $W$ representing $1$ in $G_1$,
$Area_1^{rel}(W)\le \alpha \| W\| $. We are going to prove this
inequality for
$$\alpha =\frac{3L}{\lambda -47\mu }.$$
We proceed by induction on the length of $W$. Let $p$ be a path
labelled $W$ in $\G $. Suppose first that $p$ is not $(1/2
,0)$--quasi--geodesic. Passing to a cyclic shift of $W$ if
necessary, we may assume that $p=p_0p_1$, where $p_0$ is a subpath
of $p$ such that $\dxh ((p_0)_-, (p_0)_+)< l(p_0)/2 $. Let $q$ be
a geodesic path in $\G $ that goes from $(p_0)_-$ to $(p_0)_+$.
Thus $l(q)< l(p_0)/2$. We denote by $U_0$, $U_1$, and $V$ the
labels of $p_0$, $p_1$, and $q$ respectively. Then
$$W=_F (U_0V^{-1})(VU_1) ,$$
where $U_0V^{-1}$ represents $1$ in $G$ and $VU_1$ represents $1$
in $G_1$. Obviously we have $$\| VU_1\| \le l(p)-l(p_0)+l(q)< \|
W\|-\frac{l(p_0)}{2} $$ and $$\| U_0V^{-1}\| \le l(p_0)+l(q) <
\frac{3l(p_0)}{2}.$$ Using the inductive hypothesis, we obtain
$$
\begin{array}{rl}
Area_1^{rel}(W) & \le Area_1^{rel}(VU_1)+Area^{rel}(U_0V^{-1})\\
& \\ & < \alpha \left( \| W\|-\frac12 l(p_0)\right) +\frac32 L
l(p_0)< \alpha \| W\|
\end{array}
$$
as $\alpha > 3L$.
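To spell out the last step: since $\alpha >3L$ and $l(p_0)\ge 1$, we have
$$
\alpha \left( \| W\| -\frac12 l(p_0)\right) +\frac32 L\, l(p_0)
= \alpha \| W\| -\frac12 \left( \alpha -3L\right) l(p_0)< \alpha \| W\| .
$$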
Now suppose that $p$ is a $(1/2, 0)$--quasi--geodesic path in $\G
$. Since the $C(\e , \mu , \lambda , c, \rho )$--condition implies
the $C(\e , \mu , 1/2 , c, \rho )$--condition whenever $\lambda
>1/2$, it suffices to prove the lemma for $\lambda \le 1/2$.
So we may assume that $p$ is $(\lambda , c)$--quasi--geodesic as
well. Let $\Delta $ be a reduced diagram over (\ref{quot}) such
that $\partial \Delta $ is labelled $W$. Assume that $\Delta $ has
at least one $\mathcal R$--cell. (Otherwise the lemma is obviously
true.) By Corollary \ref{Gr} there exists a
$\mathcal R$--cell $\Pi $ and a contiguity subdiagram $\Gamma $ of
$\Pi $ to $\partial \Delta $ with $(\Pi , \Gamma , \partial \Delta
)>1-23\mu $. Let $\partial \Gamma =s_1q_1s_2q_2 $, where $q_1$
(respectively $q_2$) is a subpath of $\partial \Pi $ (respectively
$\partial \Delta $) and $\max \{ \| s_1\| ,\, \| s_2\| \} \le \e $
(see Fig. \ref{ConSubd}). Let also $\partial \Pi =q_1u$ and
$\partial \Delta =wq_2$.
Note that the perimeter of the subdiagram $\Xi $ of $\Delta $ bounded
by the path $s_2^{-1}us_1^{-1}w$ is smaller than the perimeter of
$\Delta $ if $\rho $ is large enough and $\mu $ is close to zero.
Indeed, as $\Gamma $ contains no $\mathcal R$--cells, we can regard
$s_1q_1s_2q_2$ as a cycle in the Cayley graph $\G $. Thus,
\begin{equation}\label{cont1}
\begin{array}{rl}
l(q_2)& \ge \dxh ((q_2)_-, (q_2)_+)\ge \dxh ((q_1)_-, (q_1)_+)-2\e
\ge \lambda l(q_1)-c-2\e \\ & \\ & \ge \lambda (1-23\mu )
l(\partial \Pi )-c-2\e .
\end{array}
\end{equation}
On the other hand,
\begin{equation}\label{cont2}
l(s_2^{-1}us_1^{-1})\le 2\e +23\mu l(\partial \Pi ).
\end{equation}
Since $l(\partial \Pi )\ge \rho $, if $\rho $ is big enough and
$\mu $ is close to zero, the right side of (\ref{cont1}) is
greater than the right side of (\ref{cont2}) and hence $l(\partial
\Xi )<l(\partial \Delta )$.
Therefore, by induction, the total number $n_1$ of $\mathcal R$--
and $\mathcal Q$--cells in $\Xi $ satisfies
\begin{equation}\label{n1}
\begin{array}{rl}
n_1& \le \alpha l(\partial \Xi )\le \alpha \big( \| W\| -l(q_2) +
l (s_2^{-1}us_1^{-1})\big)
\\ & \\ & \le \alpha \big( \| W\| -l(\partial \Pi )(\lambda
-46\mu ) +c +4\e \big) .
\end{array}
\end{equation}
Furthermore, as $q_2$ is $(1/2, 0)$--quasi--geodesic in $\G $, we
have
$$l(q_2)\le 2\dxh ((q_2)_-, (q_2)_+)\le 2(\dxh ((q_1)_-,
(q_1)_+)+2\e) \le 2l( \partial \Pi ) +4\e .$$ Therefore the
perimeter of $\Gamma $ satisfies
$$
l(\partial \Gamma )\le 2\e +l(q_1)+ l(q_2)\le 3l(\partial \Pi )
+6\e .
$$
Hence we may assume that the number $n_2$ of $\mathcal Q$--cells
of $\Gamma $ satisfies
\begin{equation}\label{n2}
n_2\le Ll(\partial \Gamma )\le 3L(l(\partial \Pi ) +2\e )\le
\alpha (l(\partial \Pi )(\lambda -47\mu ) +2\e ).
\end{equation}
Finally, combining (\ref{n1}) and (\ref{n2}), we obtain
$$
Area_1^{rel}(W)\le n_1+n_2\le \alpha ( \| W\| -\mu l(\partial \Pi
) +c+6\e )\le \alpha \| W\|
$$
whenever $l(\partial \Pi ) $ is big enough. This completes the
proof of relative hyperbolicity of $G_1$. The hyperbolicity of the
Cayley graph follows from Lemma \ref{CG}.
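For completeness, the combination of (\ref{n1}) and (\ref{n2}) above is just the computation
$$
n_1+n_2\le \alpha \big( \| W\| -(\lambda -46\mu ) l(\partial \Pi ) +c+4\e \big) +\alpha \big( (\lambda -47\mu ) l(\partial \Pi ) +2\e \big) =\alpha \big( \| W\| -\mu l(\partial \Pi ) +c+6\e \big) ,
$$
using $(\lambda -46\mu )-(\lambda -47\mu )=\mu $; since $l(\partial \Pi )\ge \rho $, the required bound $Area_1^{rel}(W)\le \alpha \| W\| $ holds, for instance, as soon as $\rho \ge (c+6\e )/\mu $.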
Note that if $\mu <1/50$, the inequality (\ref{cont1}) implies
$$
\| W\| \ge \frac12\lambda \rho -c-2\e
$$
for every non--trivial word $W$ which is geodesic in $G$ and
represents $1$ in $G_1$. Therefore, the second statement of the
lemma holds if we assume $\rho \ge 2(N+c+2\e )/\lambda $.
\end{proof}
\section{Torsion in the quotient}
Our next goal is to describe periodic elements in the quotient
group (\ref{quot}) of a relatively hyperbolic group $G$. To this
end we need an auxiliary result. Recall that for an element
$g$ of a group $G$ generated by a set $\mathcal A$, the {\it
translation number} of $g$ with respect to $\mathcal A$ is defined
to be $$\tau _\mathcal A (g)=\lim\limits_{n\to \infty
}\frac{|g^n|_\mathcal A }{n}.$$ This limit always exists and is
equal to $\inf\limits_n (|g^n|_\mathcal A /n) $ \cite{GS}. The lemma
below can be found in \cite[Theorem 4.43]{RHG}.
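To illustrate the definition: in the free group $F(a,b)$ with $\mathcal A =\{ a, b\} $, the word $(ab)^n$ is reduced of length $2n$, whence
$$
\tau _{\mathcal A}(ab)=\lim\limits_{n\to \infty }\frac{|(ab)^n|_{\mathcal A}}{n}=\lim\limits_{n\to \infty }\frac{2n}{n}=2 .
$$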
\begin{lem}\label{cyc}
There exists $d>0$ such that for any hyperbolic element of
infinite order $g\in G$ we have $\tau _{X\cup \mathcal H} (g)\ge
d$.
\end{lem}
For our goals, an even stronger result is necessary.
\begin{lem}\label{Un}
There exist $1\ge \alpha >0$, $a\ge 0$ with the following
property. Suppose that $g$ is a hyperbolic element of $G$ of
infinite order such that $g$ has the smallest length among all
elements of the conjugacy class $g^G$. Denote by $U$ a shortest
word in the alphabet $X\cup \mathcal H$ representing $g$. Then for
any $n\in \mathbb N$, any path in $\G $ labelled $U^n$ is
$(\alpha, a)$--quasi--geodesic.
\end{lem}
\begin{proof}
Recall that $\G $ is hyperbolic by Lemma \ref{CG}. First suppose
that $|g|_{X\cup \mathcal H}=\| U\| > 8\delta $, where $\delta $
is the hyperbolicity constant of $\G $. Since $g$ is a shortest
element in $g^G$ and $U$ is a shortest word representing $g$, the
path $p$ labelled $U^n$ is $k$--local geodesic in $\G $,
$k>8\delta $, for any $n$. Therefore, by Lemma \ref{k-loc}, $p$
is $(1/3, 2\delta )$--quasi--geodesic.
Further if $|g|_{X\cup \mathcal H}=\| U\| \le 8\delta $, then for
any $n\in \mathbb N$, we have
$$ |g^n|_{X\cup \mathcal H}\ge n \inf\limits_i \left( \frac{1}{i} |g^i|_{X\cup \mathcal
H}\right) \ge n d\ge \frac{d}{8\delta }n|g|_{X\cup \mathcal H},$$
where $d$ is the constant provided by Lemma \ref{cyc}. Hence the
path $p$ labelled $U^n$ is $\left(\frac{d}{8\delta } , 8\delta
\right)$--quasi--geodesic. It remains to set $\alpha = \min\{
\frac13 ,\, \frac{d}{8\delta }\} $ and $a= 8\delta $.
\end{proof}
\begin{lem}\label{torsion}
For any $\lambda \in (0, 1]$, $c\ge 0$ there are $\mu >0$, $\e
\ge 0$, and $\rho >0$ such that the following condition holds. Suppose that $\mathcal R$ is a symmetrized set of words in $\mathcal A$
satisfying the $C_1 (\e, \mu , \lambda , c , \rho )$--condition.
Then every element of finite order in the group $G_1$ given by
(\ref{quot}) is the image of an element of finite order of $G$.
\end{lem}
\begin{proof}
Let us fix $\lambda , c>0$.
Observe that the $C_1 (\e, \mu,\lambda , c, \rho )$--condition becomes stronger as $\lambda $ increases and $c$ decreases. Hence it suffices to prove the lemma assuming that $\lambda < \alpha $ and $c>a$ for $\alpha $ and $a$ provided by Lemma \ref{Un}. Let us choose constants $1>\mu >0$, $\e >0$, $\rho >0$ such that
\begin{enumerate}
\item[($\ast$)] the conclusion of Corollary \ref{Gr} holds.
\end{enumerate}
Again we note that the $C_1 (\e, \mu,\lambda , c, \rho )$--condition becomes stronger as $\mu $ decreases and $\e, \rho $ increase. Hence we may decrease $\mu$ and increase $\rho $ and $\e$ if necessary without violating ($\ast $). In particular, without loss of generality, we may assume that
\begin{equation}\label{eps}
\e > 2\kappa + 8\delta ,
\end{equation}
where $\kappa =\kappa (\lambda, c)$ is the constant provided by Lemma \ref{qg} and $\delta $ is the hyperbolicity constant of $\G $. We fix $\e $ from now on.
Suppose that $g$ is an element of order $n>0$ in $G_1$ such that its preimage has infinite order in $G$. Assume also that $g$ has the smallest length among all elements from the conjugacy class $g^{G_1}$. Denote by $U$ a shortest word in the alphabet $X\cup \mathcal H$ representing $g$ in $G_1$. Then there exists a diagram $\Delta $ over (\ref{quot}) with boundary label $U^n$. By Lemma \ref{Un}, the label of $\partial \Delta $ is $(\lambda , c)$--quasi-geodesic (in $G$) and $\Delta $ contains at least one $\mathcal R$-cell. (For otherwise $\Delta $ is a diagram over (\ref{ZP}) and $U^n=1$ in $G$.) Hence by Corollary \ref{Gr} there exists an $\mathcal R$--cell $\Pi $ in $\Delta $ and $\e $--contiguity subdiagram
$\Gamma $ of $\Pi $ to $\partial \Delta $ such that $(\Pi , \Gamma , \partial \Delta )> 1-23\mu .$
Let $\partial \Gamma =s_1q_1s_2q_2$ as on Fig. \ref{ConSubd}. Since $\Gamma $ has no $\mathcal R$-cells, we may think of $s_1q_1s_2q_2$ as a quadrangle in $\G $, where $q_i$ is $(\lambda , c)$-quasi-geodesic, and $l(s_i)\le \e $ for $i=1,2$. Therefore we have
\begin{equation}\label{lq2}
\begin{array}{rl}
l(q_2)&\ge \dxh ((q_2)_-, (q_2)_+) \ge \dxh ((q_1)_-, (q_1)_+)-l(s_1)-l(s_2) \\ &\\ &\ge \lambda l(q_1) -c -2\e \ge \lambda (1-23\mu) l(\partial \Pi ) -c-2\e .
\end{array}
\end{equation}
In particular, choosing $\rho $ large enough and $\mu $ small enough we can make $l(q_2)$ as large as we want. (Recall that $l(\partial \Pi)\ge \rho$ by the $C_1 (\e, \mu,\lambda , c, \rho )$--condition.)
Passing from $U$ to a cyclic shift of $U^{\pm 1}$ if necessary, we may assume that $\phi (q_2)$ is a prefix of $U^n$. We now have three cases to consider. Our goal is to show that none of them is possible whenever $\rho $ is large enough and $\mu $ is small enough.
{\bf Case 1.} Suppose that $l(q_2)\ge 4\| U\| /3$. This allows us to find two long disjoint but equal subwords of $\phi (q_2)$. More precisely, we can decompose $\phi (q_2)$ as $\phi (q_2)\equiv WV_1WV_2$, where $\lambda ^2 l(q_2)/5\le \| W\| \le \lambda ^2l(q_2)/4$ and $\| V_1\| \ge l(q_2)/3$. Let $q_2=w_1v_1w_2v_2$ be the corresponding decomposition of the path $q_2$. Corollary \ref{qgq} applied to the quadrangle $s_1q_1s_2q_2$ implies that there is a point $o\in q_1$ such that $\dxh (o, (w_1)_+)\le K +\e$, where $K$ depends on $\lambda$, $c$, and $\delta $ only. Let $q_1^{-1}=r_1t$, where $(r_1)_+=o$. Thus we have
\begin{equation}\label{rw1}
\dxh ((r_1)_\pm , (w_1)_\pm )\le K+\e .
\end{equation}
Similarly one can find a subpath $r_2^\xi $, $\xi =\pm 1$, of $q_1^{-1}$ such that
\begin{equation}\label{rw2}
\dxh ((r_2^\xi)_\pm , (w_2)_\pm )\le K+\e .
\end{equation}
We note that $r_1$ and $r_2$ are disjoint. Indeed, otherwise $r_1$ passes through $(r_2)_-$ or $(r_2)_+$. For definiteness, assume that $(r_2)_-\in r_1$. Then we have
$$
\begin{array}{rl}
l(r_1) & \le \frac1\lambda (\dxh ((r_1)_-, (r_1)_+) +c)\le \frac1\lambda (l(w_1) +2\e + K+c)\\ &\\ & = \frac1\lambda (\|W\| +2\e + K+c)\le
\lambda l(q_2)/4 + (2\e +K+c)/\lambda .
\end{array}
$$
On the other hand,
$$
\begin{array}{rl}
l(r_1)& \ge \dxh ((r_1)_-, (r_2)_-) \ge \lambda l(w_1v_1) -c-2\e -K\ge \\ &\\ & \lambda \| V_1\| -c-2\e -K \ge \lambda l(q_2)/3 -c-2\e -K.
\end{array}
$$
These inequalities contradict each other if $l(q_2)$ is large enough. As explained before, the latter condition can always be ensured by choosing sufficiently small $\mu $ and sufficiently large $\rho $ (see (\ref{lq2})).
Thus we have a decomposition $q_1^{-1} =r_1t_1r_2^{\xi }t_2$, $\xi =\pm 1$. Let $\phi (q_1)^{-1} \equiv R_1T_1R_2T_2$ be the corresponding decomposition of the label. By (\ref{rw1}) and (\ref{rw2}), we have $R_1=Y_1WZ_1$ and $R_2= Y_2W^{\pm 1} Z_2$ in $G$, where $\| Y_i\|, \| Z_i\| \le K+\e $ for $i=1,2$. Hence $R_1=YR_2^{\pm 1 } Z$ in $G$, where $\| Y\| , \| Z\| \le 2(K +\e)$. Without loss of generality, we may assume that the words $Y$ and $Z$ are geodesic in $G$. Let $a_1ya_2z$ be the corresponding rectangle in $\G$, where $\phi (a_1)\equiv R_1^{-1}$, $\phi (y)\equiv Y$, $\phi (a_2)\equiv R_2^{\pm 1}$, $\phi (z)\equiv Z$. Recall that $R_1, R_2$ are subwords of the $(\lambda , c)$--quasi-geodesic word $\phi (\partial \Pi )$. Hence $a_1, a_2$ are $(\lambda , c)$--quasi-geodesic. Moreover, we have
$$
\begin{array}{rl}
l (a_1)& =\| R_1\| \ge \dxh ((r_1)_-, (r_1)_+) \\ & \\ &\ge \dxh ((w_1)_-, (w_1)_+) -l(s_1) -\dxh ((r_1)_+, (w_1)_+)\\ & \\ & \ge \lambda l(w_1) -c-2\e -K= \lambda \| W\| -c - 2\e - K\\ & \\ & \ge \lambda ^3 l(q_2)/5 -c-2\e -K\ge \lambda ^3 (1-23\mu )l(\partial \Pi)/5 -c-2\e-K.
\end{array}
$$
In particular, if $\mu $ is small enough and $\rho $ is large enough, we have $l(a_1)>(4\max\{ l(y), l(z)\} +c)/\lambda $. This allows us to apply Corollary \ref{subsegcor}. Thus we obtain subsegments $a^\prime , b^\prime $ of the sides $a_1$ and $a_2$, respectively, such that the distances between the corresponding ends of these subsegments are at most $8\delta +2\kappa $. Let $A=\phi (a^\prime )$, $B=\phi (b^\prime )$. Then $A=CB^{\pm 1}D$ in $G$, where $\| C\| , \| D\| \le 8\delta +2\kappa <\e$ by (\ref{eps}) and
$$
\begin{array}{rl}
\min \{ \| A\|, \| B\| \} & \ge \frac{7}{20} (\lambda l(a_1) -c) -8\delta - 2\kappa \\ & \\ & \ge \frac{7}{20} (\lambda (\lambda ^3 (1-23\mu )l(\partial \Pi)/5 -c-2\e-K) -c) -8\delta - 2\kappa \ge \mu l(\partial \Pi )
\end{array}
$$
if $\mu $ is small enough and $\rho $ is large enough. Since $A$ and $B$ are disjoint subwords of $\phi (\partial \Pi )$, we obtain a contradiction with the $C_1 (\e, \mu,\lambda , c, \rho )$--condition.
{\bf Case 2.} Suppose that $\| U\| \le l(q_2)\le 4\| U\| /3$, i.e., $\phi (q_2)\equiv UV$ for some (possibly empty) word $V$, $\|V\|\le \| U\| /3$.
Note that $\phi (q_2)=\phi(s_2^{-1}us_1^{-1})$
in $G_1$ (see Fig. \ref{ConSubd}), hence $g=\phi(s_2^{-1}us_1^{-1})V^{-1}$ in $G_1$. Since $U$ is the shortest word representing $g$ in $G_1$, we obtain
$$
\| U\| \le 2\e +23\mu l(\partial \Pi) +\| V\| \le 2\e +23\mu l(\partial \Pi) + \| U\|/3.
$$
Consequently,
\begin{equation}\label{lule}
\| U \| \le 3(2\e +23\mu l(\partial \Pi))/2.
\end{equation}
On the other hand, using (\ref{lq2}) we obtain
$$
\| U\| \ge 3l(q_2)/4 \ge 3(\lambda (1-23\mu) l(\partial \Pi ) -c-2\e )/4.
$$
The latter inequality contradicts (\ref{lule}) whenever $\mu $ is small enough and $\rho $ is large enough.
{\bf Case 3.} Suppose that $ l(q_2)< \| U\| $, i.e., $\phi (q_2) $ is a subword of $U$. Again since $U$ is the shortest word representing $g$ in $G_1$, we have $\| \phi (q_2)\| \le \| Q\| $ for every word $Q$ such that $Q=\phi (q_2)$ in $G_1$. In particular, for $Q\equiv \phi (s_2^{-1}us_1^{-1})$, we obtain
\begin{equation}\label{Qge}
\| Q\| \ge \| \phi (q_2)\| \ge \lambda (1-23\mu) l(\partial \Pi ) -c-2\e
\end{equation}
by (\ref{lq2}). On the other hand, we obviously have $\| Q\| \le 2\e +23\mu\, l(\partial \Pi )$, which contradicts (\ref{Qge}) if $\mu $ is small enough and $\rho $ is large enough.
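To make the contradiction explicit: combining the two estimates on $\| Q\| $ gives
$$
\lambda (1-23\mu ) l(\partial \Pi ) -c-2\e \le 2\e +23\mu\, l(\partial \Pi ),
$$
that is, $\big( \lambda -23\mu (1+\lambda )\big) l(\partial \Pi )\le c+4\e $. For instance, if $\mu <\lambda /46$, the coefficient on the left is positive, and the inequality fails once $l(\partial \Pi )\ge \rho >(c+4\e )/\big( \lambda -23\mu (1+\lambda )\big) $.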
\end{proof}
\section{Words with small cancellations}\label{wwsc}
Throughout the rest of the paper, $G$ denotes a group hyperbolic
relative to a collection of subgroups $\Hl $, $X$ denotes a finite
relative generating set of $G$ with respect to $\Hl $. As above we
set $\mathcal A=X\cup \mathcal H.$ Our main goal here is to show
that a certain set of words over the alphabet $\mathcal A$
satisfies the small cancellation conditions described above.
More precisely, suppose that $W$ is a word satisfying the following
conditions.
\begin{enumerate}
\item[(W1)] $W\equiv xa_1b_1\ldots a_nb_n$ for some $n\ge 1$, where:
\item[(W2)] $x\in X\cup \{ 1\} $;
\item[(W3)] $a_1, \ldots , a_n$
(respectively $b_1, \ldots , b_n$) are elements of $H_\alpha $
(respectively $H_\beta $), where $H_\alpha \cap H_\beta =\{ 1\} $ ;
\item[(W4)] the elements $a_1, \ldots , a_n, b_1, \ldots , b_n$ do not belong to the set \begin{equation}\label{finiteset} \mathcal F = \mathcal F (\e ) =\{ g\in \langle \Omega \rangle\; : \; |g|_{\Omega} \le K(32\e +70) \} , \end{equation} where $\e $ is some non--negative integer and the set $\Omega $ and the constant $K$ are provided by Lemma \ref{Omega}.
\end{enumerate}
\begin{figure}
\unitlength=1mm \linethickness{0.4pt}
\begin{picture}(140,23)(-40,4)
\qbezier(65.23,7.95)(36.15,35.89)(6.01,7.95)
\put(5.83,7.95){\circle*{1}} \put(15.94,15.68){\circle*{1}}
\put(55.92,15.23){\circle*{1}} \put(50.28,18.65){\circle*{1}}
\put(22.03,18.95){\circle*{1}} \put(65.29,7.8){\circle*{1}}
\put(37.01,21.9){\vector(1,0){.07}}
\put(13,9){\makebox(0,0)[cc]{$p_1$}}
\put(60,9){\makebox(0,0)[cc]{$p_3$}}
\put(16.5,19.62){\makebox(0,0)[cc]{$s$}}
\put(54.5,19.3){\makebox(0,0)[cc]{$t$}}
\put(35,11.4){\makebox(0,0)[cc]{$e$}}
\put(35.64,25){\makebox(0,0)[cc]{$p_2$}}
\thicklines \linethickness{1pt}
\qbezier(15.76,15.61)(18.28,16.95)(22,18.88)
\qbezier(50.09,18.58)(53.74,16.65)(55.89,15.31)
\qbezier(22,18.88)(33.52,9.51)(50.09,18.58)
\put(54.07,16.6){\vector(2,-1){.07}}
\put(20,17.7){\vector(2,1){.07}}
\put(36.7,14.1){\vector(1,0){.07}}
\end{picture}
\caption{}\label{ahm}
\end{figure}
Let $\mathcal {SW}$ denote the set of all subwords of cyclic
shifts of $W^{\pm 1}$. As in \cite{RHG}, we say that a path $p$ in
$\G $ is a {\it path without backtracking} if all components of
$p$ are isolated.
\begin{lem}\label{132}
Suppose $p$ is a path in $\G $ such that $\phi (p)\in \mathcal
{SW}$. Then
1) $p$ is a path without backtracking.
2) $p$ is $(1/3, 2)$--quasi--geodesic.
\end{lem}
\begin{proof}
1) Suppose that $p=p_1sp_2tp_3$, where $s$ and $t$ are two
connected components. Passing to another pair of connected
components of $p$ if necessary, we may assume that $p_2$ is a path
without backtracking. For definiteness, we also assume that $s$
and $t$ are $H_\alpha $--components. Let $e$ denote a path of
length at most $1$ in $\G $ connecting $s_+$ to $t_-$ and labelled
by an element of $H_\alpha $ (see Fig. \ref{ahm}). It follows from
our choice of $W$ and the condition $H_\alpha \cap H_\beta =\{ 1\}
$ that $l(p_2)\ge 2$. Thus $p_2$ contains $m\ge l(p_2)/2\ge 1$
$H_\beta $--components, say $r_1, \ldots , r_m$, and all these
components are isolated components of the cycle $d=ep_2^{-1}$. Let
$g_1, \ldots , g_m $ be elements of $G$ represented by the labels
of $r_1, \ldots , r_m$. By Lemma \ref{Omega}, $g_i\in \langle
\Omega \rangle $, $i=1, \ldots , m$. According to (W4), we have
$$\sum\limits _{i=1}^m |g_i|_{\Omega } \ge 70Km\ge 35Kl(p_2)>
K(l(p_2)+1)= Kl(d).$$ This contradicts Lemma \ref{Omega}.
2) Since the set $\mathcal {SW}$ is closed under taking subwords,
it suffices to show that $\dxh (p_-, p_+)\ge l(p)/3-2$. In case
$l(p)\le 6$ this is obvious, so we assume $l(p)> 6$. Suppose that
$\dxh (p_-, p_+)< l(p)/3-2$. Let $c$ denote a geodesic in $\G $
such that $c_-=p_-$ and $c_+=p_+$. Since $p$ is a path without
backtracking, any $H_\alpha $--component of $p$ is connected to at
most one $H_\alpha $--component of $c$. Obviously the path $p$
contains at least $l(p)/2-1$ $H_\alpha $--components. Therefore,
at least $$k=l(p)/2-1-l(c)>l(p)/2-1-(l(p)/3-2)> l(p)/6>1$$ of them
are isolated $H_\alpha $--components of the cycle $pc^{-1}$. Let
$f_1, \ldots , f_k$ be the elements of $G$ represented by these
components. Then as above we have $f_i\in \langle \Omega \rangle
$, $i=1, \ldots , k$. By (W4), we obtain
$$\sum\limits _{i=1}^k |f_i|_{\Omega } \ge 70Kk >
2Kl(p)>Kl(pc^{-1}).$$ This leads to a contradiction again.
\end{proof}
\begin{lem}\label{quad}
Suppose that $upv^{-1}q^{-1}$ is an arbitrary quadrangle in $\G
$ satisfying the following conditions:
(a) $\phi (p)\equiv \phi (q^{-1})\in \mathcal {SW} $;
(b) $\max\{ l(u), \, l(v)\} \le \e $;
(c) $l(p)=l(q)\ge 6\e +22$.
\noindent Then the paths $p$ and $q$ have a common edge.
\end{lem}
The proof of Lemma \ref{quad} is based on the following two results. (We
keep the notation of Lemma \ref{quad} there.)
\begin{lem}\label{me1} Let us divide $p$ into three parts $p=p_1p_0p_2$ such that
\begin{equation}\label{p1p2}
l(p_1)=l(p_2)=3\e +6.
\end{equation}
Suppose that $s$ is a component of $p_0$. Then $s$ cannot be
connected to a component of the paths $u$ or $v$.
\end{lem}
\begin{proof}
Suppose that a component $s$ of $p_0$ is connected to a component
$t$ of $u$. Then $\dxh (s_+, t_+)\le 1$. Recall that the segment
$[p_-, s_+]$ of $p$ is $(1/3, 2)$--quasi--geodesic by Lemma
\ref{132}. Hence,
$$
\begin{array}{rl}
l(p_1) & < l([p_-, s_+])\le 3(\dxh (p_-, s_+) +2) \\ & \\
& \le 3(\dxh (p_-, t_+) +\dxh (t_+, s_+)+2) \\ &\\
& \le 3(l(u)-1+1+2)\le 3\e +6.
\end{array}
$$
However, this contradicts (\ref{p1p2}). Similarly, $s$ cannot be
connected to a component of $v$.
\end{proof}
\begin{lem}\label{me2}
Let $s_1, \ldots , s_k$ be consecutive components of $p_0$. Then
$q$ can be decomposed as $q=q_1t_1\ldots q_kt_kq_{k+1}$, where
1) $t_i$ is a component of $q$ connected to $s_i$, $i=1, \ldots ,
k$;
2) $q_i$ contains no components for $i=2,\ldots , k$, i.e. either
$\phi (q_i)\equiv x$ or $q_i$ is trivial.
\end{lem}
\begin{proof}
To prove the first assertion of the lemma we proceed by induction.
First let us show that $s_1$ is not isolated in
$d=upv^{-1}q^{-1}$.
Indeed assume $s_1$ is isolated in $d$. Suppose for definiteness
that $s_1$ is an $H_\alpha $--component. We consider the maximal
subpath $s$ of $p_0$ such that $s$ contains $s_1$ and all
$H_\alpha $--components of $s$ are isolated in $d$. By maximality
of $s$, either $s_-=(p_0)_-$, or $s_-=r_+$ for a certain $H_\alpha
$--component $r$ of $p_0$ such that $r$ is not isolated in $d$.
(According to Lemma \ref{me1} this means that $r$ is connected to
an $H_\alpha $--component of $q$.) In the first case we denote by
$f_1$ the path $up_1$. In the second case, let $f_1$ be a path of
length $\le 1$ that connects an $H_\alpha $--component of $q$ to
$r$. In both cases we have $l(f_1)\le 4\e +6$. It follows from the
choice of $s$ that no $H_\alpha $--component of $s$ is connected
to an $H_\alpha $--component of $f_1$. Similarly we construct a
path $f_2$ such that $(f_2)_-\in q$, $(f_2)_+=s_+$, $l(f_2)\le 4\e
+6 $, and no $H_\alpha $--component of $s$ is connected to an
$H_\alpha $--component of $f_2$.
Clearly all $H_\alpha $--components of $s$ are isolated in the
cycle $c=f_1sf_2^{-1}[(f_2)_-, (f_1)_-]$, where $[(f_2)_-,
(f_1)_-]$ is a segment of $q^{\pm 1}$. We have
\begin{equation}\label{me21}
\dxh ((f_1)_-, (f_2)_-) \le l(f_1)+l(s)+l(f_2)\le 8\e + 12 +l(s).
\end{equation}
Consequently,
\begin{equation}\label{me22}
l([(f_1)_-, (f_2)_-])\le 3(\dxh ((f_1)_-, (f_2)_-)+2)\le 24\e
+42+3l(s).
\end{equation}
Finally,
\begin{equation}\label{me23}
l(c)\le l(f_1)+l(s)+l(f_2)+l([(f_1)_-, (f_2)_-]) \le 32\e +54
+4l(s).
\end{equation}
Let $g_1, \ldots , g_m$ denote the elements represented by
$H_\alpha $--components of $s$. Note that $l(s)\le 2m+2$.
Applying Lemma \ref{Omega}, we obtain $g_i\in \langle \Omega
\rangle $, $i=1, \ldots , m$, and
\begin{equation}\label{me231}
\sum\limits_{i=1}^m |g_i|_{\Omega } \le Kl(c) \le K(32\e +54
+4l(s))\le K(32\e +62 +8m).
\end{equation}
Therefore, at least one of the elements $g_1, \ldots , g_m$ has
length at most
\begin{equation}\label{me232}
\frac{1}{m} K(32\e +62 +8m)\le K(32\e +70),
\end{equation}
which contradicts (W4). Thus $s_1$ is not isolated in $d$. By Lemma
\ref{me1} this means that $s_1$ is connected to an $H_\alpha
$--component $t_1$ of $q$.
Now assume that we have already found components $t_1, \ldots ,
t_i$ of $q$, $1\le i<k$, that are connected to $s_1, \ldots , s_i$
respectively. The inductive step is similar to the above
considerations. For definiteness, we assume that $s_i$ is an
$H_\alpha $--component. Then $s_{i+1}$ is an $H_\beta $--component
by the choice of $W$. We denote by $f_1$ a path of length $\le 1$
labelled by an element of $H_{\alpha }$ that connects $(t_i)_+$ to
$(s_i)_+$ (Fig. \ref{qfig}). If $s_{i+1}$ is isolated in the cycle
$c = f_1[(s_i)_+, p_+]v^{-1}[q_+, (t_i)_+]$, we denote by $s$ the
maximal initial subpath of the segment $[(s_i)_+, (p_0)_+]$ of
$p_0$ such that $s$ contains $s_{i+1}$ and all $H_\alpha
$--components of $s$ are isolated in $c$. As above, we can find a
path $f_2$ such that $(f_2)_-\in q$, $(f_2)_+=s_+$, $l(f_2)\le 4\e
+6 $, and no $H_\alpha $--component of $s$ is connected to an
$H_\alpha $--component of $f_2$. The inequalities
(\ref{me21})--(\ref{me232}) remain valid and we arrive at a
contradiction in the same way. Thus $s_{i+1}$ is not isolated in
$c$, i.e., it is connected to a component $t_{i+1}$ of the segment
$[(t_i)_+, q_+]$ of $q$. This completes the inductive step.
\begin{figure}
\unitlength 1mm
\linethickness{0.4pt}
\begin{picture}(124.75,36)(-13,50)
\qbezier[1000](5.13,81.67)(60.55,73.89)(124.1,81.67)
\qbezier[1000](5.13,54.09)(60.55,61.87)(124.1,54.09)
\put(7.2,68.38){\vector(0,1){.07}}\qbezier(5.13,54)(9.13,68.94)(5.13,81.63)
\put(121.33,68.38){\vector(0,1){.07}}\qbezier(124.13,54.13)(118.38,68.88)(124.13,81.63)
\thicklines
\put(42,78.4){\vector(1,0){.07}}\qbezier(33.25,78.75)(42.94,78.25)(48.88,78)
\put(33.25,78.63){\circle*{1}} \put(60,78){\circle*{1}}
\put(77,78){\circle*{1}} \put(90.13,78.75){\circle*{1}}
\put(90.13,57.25){\circle*{1}} \put(30.13,56.75){\circle*{1}}
\put(19.75,55.88){\circle*{1}} \put(48.88,77.88){\circle*{1}}
\put(26,56.38){\vector(1,0){.07}}\qbezier(19.63,55.88)(27.13,56.5)(30.13,56.8)
\put(68.5,77.94){\vector(1,0){.07}}\multiput(59.88,77.88)(4.3125,.0313){4}{\line(1,0){4.3125}}
\put(42.1,67.5){\vector(1,1){.07}}\qbezier(30,56.75)(44.38,67.69)(48.75,77.88)
\thinlines
\put(88.1,68.38){\vector(0,1){.07}}\qbezier(90,57.25)(86,68.69)(90,78.88)
\put(4,68.13){\makebox(0,0)[cc]{$u$}}
\put(124.5,69){\makebox(0,0)[cc]{$v$}}
\put(11.88,52.38){\makebox(0,0)[cc]{$q$}}
\put(12.75,83.13){\makebox(0,0)[cc]{$p$}}
\put(25.88,53){\makebox(0,0)[cc]{$t_i$}}
\put(42,81){\makebox(0,0)[cc]{$s_i$}}
\put(49.38,80.2){\makebox(0,0)[cc]{$s_-$}}
\put(90,81.3){\makebox(0,0)[cc]{$s_+$}}
\put(67.75,80.8){\makebox(0,0)[cc]{$s_{i+1}$}}
\put(44.4,66){\makebox(0,0)[cc]{$f_1$}}
\put(85.25,67.63){\makebox(0,0)[cc]{$f_2$}}
\put(65.13,67.13){\makebox(0,0)[cc]{$c$}}
\end{picture}
\unitlength 1mm
\linethickness{0.4pt}
\ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi
\caption{}\label{qfig}
\end{figure}
Let us prove the second assertion of the lemma. Suppose that $s_i,
s_{i+1}$ are two subsequent components of $p_0$ such that
$q_{i+1}$ contains at least one component (say, an $H_\alpha
$--component). As above for definiteness we assume that $s_i$
(respectively $s_{i+1}$) is an $H_\alpha $--component
(respectively $H_\beta $--component). Let $e_1$, $e_2$ be the
paths of lengths $\le 1$ labelled by elements of $H_\alpha $ and
$H_\beta $ respectively such that $(e_1)_+=(s_i)_+$,
$(e_1)_-=(q_{i+1})_-$, $(e_2)_+=(s_{i+1})_-$,
$(e_2)_-=(q_{i+1})_+$ (see Fig. \ref{3}). As $q$ is a path without
backtracking, each $H_\alpha $--component of $q_{i+1}$ is isolated
in the cycle $e= q_{i+1} e_2[(s_{i+1})_-, (s_{i})_+]e_1^{-1}$,
where $[(s_{i+1})_-, (s_{i})_+]$ is a segment of $p^{-1}$. Notice
that $l([(s_{i+1})_-, (s_{i})_+])\le 1$ as $s_i$ and $s_{i+1}$ are
subsequent components of $p_0$. We denote by $f_1, \ldots , f_m$
the elements represented by $H_\alpha $--components of $q_{i+1}$.
By Lemma \ref{Omega}, we have $f_j\in \langle \Omega \rangle $,
$j=1, \ldots , m$, and
$$
\sum\limits_{j=1}^m |f_j|_{\Omega } \le Kl(e) \le
K(3+l(q_{i+1}))\le K(3+2m+2)\le 7mK.
$$
This contradicts (W4) again.
\end{proof}
\begin{figure}
\unitlength 1mm
\begin{picture}(124.75,36)(-15,50)
\qbezier[1000](5.13,81.67)(60.55,73.89)(124.1,81.67)
\qbezier[1000](5.13,54.09)(60.55,61.87)(124.1,54.09)
\put(7.13,68.38){\vector(0,1){.07}}\qbezier(5.13,54)(9.13,68.94)(5.13,81.63)
\put(121.33,68.38){\vector(0,1){.07}}\qbezier(124.13,54.13)(118.38,68.88)(124.13,81.63)
\thicklines
\put(42,78.4){\vector(1,0){.07}}\qbezier(33.25,78.75)(42.94,78.25)(48.88,78)
\put(33.25,78.63){\circle*{1}} \put(60,78){\circle*{1}}
\put(77,78){\circle*{1}} \put(80.25,57.63){\circle*{1}}
\put(90,57.13){\circle*{1}} \put(30.13,56.75){\circle*{1}}
\put(19.75,55.88){\circle*{1}} \put(48.88,77.88){\circle*{1}}
\put(26,56.38){\vector(1,0){.07}}\qbezier(19.63,55.88)(27.13,56.5)(30.13,56.8)
\put(68.5,77.94){\vector(1,0){.07}}\multiput(59.88,77.88)(4.3125,.0313){4}{\line(1,0){4.3125}}
\put(4,68.13){\makebox(0,0)[cc]{$u$}}
\put(124.5,69){\makebox(0,0)[cc]{$v$}}
\put(11.88,52.38){\makebox(0,0)[cc]{$q$}}
\put(12.75,83.13){\makebox(0,0)[cc]{$p$}}
\put(25.88,53){\makebox(0,0)[cc]{$t_i$}}
\put(42,81){\makebox(0,0)[cc]{$s_i$}}
\put(67.75,80.63){\makebox(0,0)[cc]{$s_{i+1}$}}
\put(85.88,57.38){\vector(1,0){.07}}\qbezier(80.13,57.63)(86.75,57.31)(89.88,57.25)
\put(58.38,57.88){\vector(1,0){.07}}\put(58,57.88){\line(1,0){.375}}
\put(42.38,67.13){\vector(1,1){.07}}\qbezier(30.13,56.63)(45.19,67)(49,77.88)
\put(65.13,67.38){\vector(-1,1){.07}}\qbezier(80.13,57.63)(60.25,66.94)(59.88,78)
\put(84.5,53.88){\makebox(0,0)[cc]{$t_{i+1}$}}
\put(57.13,54.5){\makebox(0,0)[cc]{$q_{i+1}$}}
\put(45,65.5){\makebox(0,0)[cc]{$e_1$}}
\put(63.75,65.5){\makebox(0,0)[cc]{$e_2$}}
\put(54.5,68.88){\makebox(0,0)[cc]{$e$}}
\end{picture}
\caption{}\label{3}
\end{figure}
\begin{proof}[Proof of Lemma \ref{quad}]
We keep the notation introduced in Lemma 5.3 and Lemma 5.4. Let
also $$p_0=p_1s_1\ldots p_ks_kp_{k+1}$$ for some (possibly trivial)
subpaths $p_1, \ldots , p_{k+1}$ of $p_0$.
According to (\ref{p1p2}) and condition c) of Lemma \ref{quad}, we
have $$ l(p_0)\ge 6\e +22 -l(p_1)-l(p_2)\ge 10.$$ Since $s_1,
\ldots , s_k$ are subsequent components and $\phi (p_0)$ is a
subword of a cyclic shift of $W^{\pm 1}$, at most one of the paths
$p_2, \ldots , p_{k}$ is non--trivial. Therefore, there are at
least $5$ subsequent components $s_{i}, \ldots , s_{i+4}$, such
that $p_{i+1}, \ldots , p_{i+4}$ are trivial. Without loss of
generality we may assume $i=1$. Similarly, by Lemma \ref{me2}, we
can find at least three subsequent components among $t_1, \ldots ,
t_5$, say $t_1, t_2, t_3$, such that $(t_1)_+=(t_2)_-$ and
$(t_2)_+=(t_3)_-$. Let $w$ be an element represented by the label
of any path that goes from $(t_1)_+=(t_2)_-$ to $(s_1)_+=(s_2)_-$.
For definiteness we assume that $t_1$ and $s_1$ are $H_\alpha
$--components. Since $t_1$ and $s_1$ are connected, we have $w\in
H_\alpha $. On the other hand, the $H_\beta $--components $t_2$
and $s_2$ are also connected. Hence $w\in H_\beta $. Thus $w\in
H_\alpha \cap H_\beta =\{ 1\} $, i.e. the vertices $(t_2)_-$ and
$(s_2)_-$ coincide. Similarly the vertices $(t_2)_+$ and $(s_2)_+$
coincide. In particular, $t_2$ and $s_2$ are edges labelled by the
same element of $H_\beta $, i.e., $t_2$ and $s_2$ coincide.
\end{proof}
Now we are ready to prove the main result of this section.
\begin{thm}\label{w}
Suppose that $W$ is a word in $\mathcal A$ satisfying the
conditions (W1)--(W4) and, in addition, $a_i\ne a_j^{\pm 1}$,
$b_i\ne b_j^{\pm 1}$ whenever $i\ne j$ and $a_i\ne a_i^{-1}$,
$b_i\ne b_i^{-1}$, $i,j\in \{ 1, \ldots , n\} $. Then the set
$\mathcal W$ of all cyclic shifts of $W^{\pm 1} $ satisfies the
$C_1(\e , \frac{3\e +11}{n} ,\frac13, 2, 2n+1)$ small cancellation
condition.
\end{thm}
\begin{proof}
The first two conditions from Definition \ref{DefSC} follow from
the choice of $W$ and Lemma \ref{132}. Suppose that $U$ is an $\e
$--piece of a word $R\in \mathcal {W}$. Assume that $\max \{ \|
U\| , \| U^\prime \| \} \ge \mu \| R\| $ for $\mu = \frac{3\e
+11}{n} $, that is, $$\max \{ \| U\| , \| U^\prime \| \} \ge
\frac{3\e +11}{n} (2n+1) > 6\e +22.$$ (Here and below we use the
notation of Definitions \ref{DefSC} and \ref{piece}.) Without loss
of generality we may assume that $\| U\| \ge 6\e +22$. By the
definition of an $\e $--piece, there is a quadrangle
$upv^{-1}q^{-1}$ in $\G $ satisfying conditions (a)--(c) of Lemma
\ref{quad} and such that labels of $p$ and $q$ are $U$ and
$U^\prime $ respectively.
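The numerical bound $\frac{3\e +11}{n}(2n+1)>6\e +22$ used above is an elementary check: since $n\ge 1$,

```latex
$$
\frac{3\e +11}{n}\, (2n+1)=(3\e +11)\Bigl( 2+\frac{1}{n}\Bigr)
=6\e +22+\frac{3\e +11}{n}>6\e +22.
$$
```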
Let $e$ be the common edge of $p$ and $q$. Then we have $$R\equiv
U_1\phi (e)U_2V$$ and $$R^\prime \equiv U_1^\prime\phi
(e)U_2^\prime V^\prime ,$$ where $U_1\phi (e)U_2\equiv U$ and
$U^\prime \equiv U_1^\prime\phi (e)U_2^\prime $. Since $\phi (e)$
appears in $W^{\pm 1}$ only once, $R$, $R^\prime $ are cyclic
shifts of the same word $W^{\pm 1}$ and
$$U_2VU_1\equiv U_2^\prime V^\prime U_1^\prime .$$ Note also that
$$Y=U_1^\prime U_1^{-1} $$ in $G$ as $Y^{-1}U_1^\prime U_1^{-1} $ is
a label of a cycle in $\G $. Therefore, the following equalities
hold in the group $G$:
$$
YRY^{-1}= U_1^\prime U_1^{-1} U_1\phi (e)U_2V U_1(U_1^\prime
)^{-1} = U_1^\prime \phi (e) U_2^\prime V^\prime U_1 ^\prime
(U_1^\prime )^{-1} =R^\prime
$$
that contradicts the third condition from Definition \ref{DefSC}.
Similarly, if $U$ is an $\e ^\prime $--piece, then $R\equiv
UVU^\prime V^\prime $ for some $U, U^\prime ,V, V^\prime $, where
both subwords $U$ and $U^\prime $ contain a certain letter from
$X\cup\mathcal H$. In this case we arrive at a contradiction again
as any letter $a\in X\cup\mathcal H$ appears in $R$ only once, and
if $a$ appears in $R$, then $a^{-1}$ does not.
\end{proof}
Finally we note that the condition $x\in X\cup \{ 1\} $ in Theorem
\ref{w} is imposed for technical reasons and is not
restrictive, since we can always add any element $x\in G$ to the
set $X$ without violating relative hyperbolicity.
\section{Suitable subgroups and quotients}
Throughout this section, we keep the assumption that $G$ is
hyperbolic relative to a collection of subgroups $\Hl $. The proof
of Lemma 2.3 is based on the following auxiliary result.
\begin{lem}\label{f1f2}
Suppose that for some
$\lambda , \mu \in \Lambda $, $\lambda \ne \mu $, $H_\lambda $ and $H_\mu $ contain elements of infinite order and $H_\lambda \cap H_\mu=\{ 1\} $. Then there are $f\in H_\lambda $, $g\in H_\mu $ such that $fg$ is a hyperbolic element of infinite order and $E_G(fg)=\langle fg \rangle $.
\end{lem}
\begin{proof}
We set $\e = 2(\kappa + \delta )$, where $\kappa =\kappa (\delta
,1/3 , 2 )$ is the constant provided by Lemma \ref{qg} and $\delta
$ is the hyperbolicity constant of $\G $. It is convenient to assume that $\kappa , \delta \in \mathbb N$.
Let $\mathcal F=\mathcal
F (\e )$ be the set defined by (\ref{finiteset}). Since $\Omega $
is finite, $\mathcal F$ is finite, and hence there are elements $f\in
H_\lambda \setminus \mathcal F$ and $g\in H_\mu \setminus \mathcal
F$ of infinite order. In particular,
\begin{equation}\label{invol}
f^2\ne 1,\;\;\; g^2\ne 1.
\end{equation}
Note that the word $W=(fg)^m$ satisfies conditions (W1)--(W4).
(Here $f$ and $g$ are regarded as letters of $\mathcal H$.) By
Lemma \ref{132}, for any $m\in \mathbb N$, the word $(fg)^m$ is
$(1/3, 2)$--quasi--geodesic in $\G $. In particular, $fg$ is a
hyperbolic element. Indeed, otherwise the length $|(fg)^m|_{X\cup
\mathcal H}$ would be bounded uniformly in $m$.
Now suppose that $a\in E_G(fg)$. Then by Theorem \ref{E(g)},
$a(fg)^ma^{-1}=(fg)^{\pm m }$ for some $m\in \mathbb N$. Passing
to a multiple of $m$ if necessary, we may assume that
\begin{equation} \label{m}
m\ge 3|a|_{X\cup \mathcal H} +12(\kappa +\delta) +20.
\end{equation}
Let $a_1b_1a_2b_2$ be a quadrangle in $\G $ such that $a_1$, $a_2$
are geodesic, the labels $\phi (a_1)=\phi (a_2^{-1})$ represent
$a$ in $G$, and $\phi (b_1)=\phi (b_2^{-1})\equiv (fg)^{\pm m}$.
Let also $b_1=b_1^\prime pb_1^{\prime\prime } $ (Fig.
\ref{E(fg)}), where
\begin{equation}\label{b1}
l(b_1^\prime )=l(b_1^{\prime\prime }) =3(l(a_1)+2(\kappa +\delta )
+3 ).
\end{equation}
As $b_1$ is $(1/3, 2)$--quasi--geodesic, we have
\begin{equation}\label{ss1}
\dxh (p_\pm , (b_1)_\pm ) \ge \frac13 l(b_1^\prime ) -2 \ge l(a_1)
+2(\kappa +\delta )+1.
\end{equation}
Further by Corollary \ref{qgq}, there is a point $s\in a_1\cup
b_2\cup a_2 $ such that $\dxh (s, p_-)\le 2(\kappa +\delta )$. If
$s\in a_1$, then we have $$\dxh (p_-, b_-)\le \dxh (p_-, s)+\dxh
(s, b_-)\le 2(\kappa +\delta )+l(a_1)$$ that contradicts
(\ref{ss1}). For the same reason $s$ cannot belong to $a_2$. Thus
$s\in b_2$. Without loss of generality, we may assume that $s$ is
a vertex of $\G$. Similarly there exists a vertex $t\in b_2$ such
that $\dxh (t, p_+)\le 2(\kappa +\delta )$. Let $u,v$ be geodesics
in $\G $ connecting $s$ to $p_-$ and $t$ to $p_+$ respectively and
let $q$ denote the segment $[s,t]$ of $b_2^{-1}$.
According to (\ref{m}) and (\ref{b1}), we have
$$
l(p)\ge 2m - 6(l(a_1)+2(\kappa +\delta )+3)\ge 12 (\kappa +\delta
)+22.
$$
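The second inequality here is just (\ref{m}): since $a_1$ is geodesic, $l(a_1)=|a|_{X\cup \mathcal H}$, so $2m\ge 6l(a_1)+24(\kappa +\delta )+40$ and

```latex
$$
2m-6l(a_1)-12(\kappa +\delta )-18
\ge \bigl( 6l(a_1)+24(\kappa +\delta )+40\bigr)
-6l(a_1)-12(\kappa +\delta )-18
=12(\kappa +\delta )+22.
$$
```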
Hence we may apply Lemma \ref{quad} for the quadrangle
$upv^{-1}q^{-1}$ and $\e =2(\kappa +\delta )$. Thus there exists a
common edge $e$ of $p$ and $q$. In particular, this and (\ref{invol}) imply that
$a(fg)^ma^{-1}=(fg)^{m }$ (not $(fg)^{-m}$).
There are two possibilities for labels of the segments $[(b_1)_-,
e_-]$ and $[(b_2)_+, e_-]$ of $b_1$ and $b_2^{-1}$ respectively.
Namely both these labels are either of the form $(fg)^n$ (possibly
for different $n$) or of the form $(fg)^kf$. In both cases
$a=(fg)^l$ for a certain $l$ as labels of $a_1$ and $[(b_2)_+,
e_-][(b_1)_-, e_-]^{-1}$ represent the same element of $G$. Thus
$a\in \langle fg \rangle $ and $E_G(fg)=\langle fg\rangle $.
\end{proof}
\begin{figure}
\unitlength 1mm
\linethickness{0.4pt}
\ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi
\begin{picture}(116.48,25)(-18,35)
\put(55.95,44.9){\vector(1,0){.07}}\put(45.96,44.9){\line(1,0){19.976}}
\qbezier(65.94,44.9)(100.67,49.06)(115.97,57.81)
\qbezier(67.04,44.89)(96.1,43.18)(115.95,35.82)
\qbezier(45.93,44.74)(21.63,44.37)(2.97,35.97)
\qbezier(45.93,44.74)(18.88,47.05)(3.12,57.97)
\put(45.78,44.89){\circle*{1}} \put(66.9,44.89){\circle*{1}}
\put(97.96,50.54){\circle*{1}} \put(97.96,41.03){\circle*{1}}
\put(115.8,35.82){\circle*{1}} \put(115.95,57.68){\circle*{1}}
\put(3.12,35.97){\circle*{1}} \put(3.57,57.68){\circle*{1}}
\put(18.14,50.39){\circle*{1}} \put(17.84,41.03){\circle*{1}}
\put(18.73,46.53){\vector(0,1){.07}}\qbezier(17.84,41.03)(19.47,47.35)(18.14,50.39)
\put(97.5,46.38){\vector(0,1){.07}}\qbezier(97.96,41.03)(96.85,46.97)(97.81,50.54)
\put(4.91,46.38){\vector(0,1){.07}}\qbezier(3.12,35.82)(6.61,46.01)(3.27,57.68)
\put(10.7,53.51){\vector(2,-1){.07}}\multiput(10.41,53.66)(.05946,-.02973){5}{\line(1,0){.05946}}
\put(105.1,52.94){\vector(3,1){.07}}
\put(12,55.89){\makebox(0,0)[cc]{$b_1^\prime $}}
\put(105.5,55.5){\makebox(0,0)[cc]{$b_1^{\prime\prime }$}}
\put(96.3,53){\makebox(0,0)[cc]{$p_+$}}
\put(21,53){\makebox(0,0)[cc]{$p_-$}}
\put(55,47.42){\makebox(0,0)[cc]{$e$}}
\put(22,38.2){\makebox(0,0)[cc]{$s=q_-$}}
\put(95,38.5){\makebox(0,0)[cc]{$t=q_+$}}
\put(2.23,45.64){\makebox(0,0)[cc]{$a_1$}}
\put(113.8,45.04){\vector(0,-1){.07}}\qbezier(115.8,57.83)(111.64,43.18)(115.8,35.97)
\put(117,46){\makebox(0,0)[cc]{$a_2$}}
\put(100,45.78){\makebox(0,0)[cc]{$v$}}
\put(16.2,46.08){\makebox(0,0)[cc]{$u$}}
\end{picture}
\caption{}\label{E(fg)}
\end{figure}
\begin{proof}[Proof of Lemma \ref{non-com}]
Let $f_1, f_2\in H^0$ be non--commensurable elements of $H$ such
that $E_G(f_1)\cap E_G(f_2)=\{ 1\} $. By Theorem \ref{E(g)}, $G$
is hyperbolic relative to the collection $\Hl\cup E_G(f_1)\cup
E_G(f_2)$.
We construct a sequence of desired elements $h_1, h_2, \ldots $ by
induction. By Lemma \ref{f1f2}, there are $f\in E_G(f_1)$, $g\in
E_G(f_2)$ such that the element $h_1=fg$ is hyperbolic (with
respect to the collection $\Hl\cup E_G(f_1)\cup E_G(f_2)$) and
$E_G(h_1)=\langle h_1\rangle $. Theorem \ref{E(g)} implies that
$G$ is hyperbolic relative to the collection $\Hl\cup E_G(f_1)\cup
E_G(f_2)\cup E_G(h_1)$. Further we construct a hyperbolic (with
respect to $\Hl\cup E_G(f_1)\cup E_G(f_2)\cup E_G(h_1)$) element
$h_2$ as a product of an element of $E_G(f_1)$ and an element of
$E_G(f_2)$ as above. As $h_2$ is hyperbolic, it is not
commensurable with $h_1$. Applying Theorem \ref{E(g)} again we
join $E_G(h_2)$ to the collection of subgroups with respect to
which $G$ is hyperbolic and so on. Continuing this procedure, we
get what we need.
\end{proof}
To prove Theorem \ref{glue}, we need the following two
observations. The first one is a particular case of Theorem 2.40
from \cite{RHG}.
\begin{lem}\label{exhyp}
Suppose that a group $G$ is hyperbolic relative to a collection of
subgroups $\Hl \cup \{ S_1, \ldots , S_m\} $, where $S_1, \ldots ,
S_m $ are finitely generated and hyperbolic in the ordinary
(non--relative) sense. Then $G$ is hyperbolic relative to $\Hl $.
\end{lem}
The next lemma is a particular case of Theorem 1.4 from \cite{RHG}.
\begin{lem}\label{malnorm}
Suppose that a group $G$ is hyperbolic relative to a collection of
subgroups $\Hl $. Then
\begin{enumerate}
\item[(a)] For any $g\in G$ and any $\lambda , \mu \in \Lambda $,
$\lambda \ne \mu $, the intersection $H_\lambda ^g\cap H_\mu $ is
finite.
\item[(b)] For any $\lambda \in \Lambda $ and any $g\notin
H_\lambda $, the intersection $H^g_\lambda \cap H_\lambda $ is
finite.
\end{enumerate}
\end{lem}
\begin{proof}[Proof of Theorem \ref{glue}]
Obviously it suffices to deal with the case $m=1$. The general
case will follow if we apply the theorem for $m=1$ several times.
Let $t\in G$ be an arbitrary element. Passing to a new relative
generating set $X^\prime =X\cup \{ t\} $ if necessary, we may
assume that $t\in X$. By Lemma \ref{non-com} there are
non--commensurable elements $h_1,h_2\in H^0$ such that $E_G(h_1)$
and $E_G(h_2)$ are cyclic. According to Theorem \ref{E(g)}, $G$ is
hyperbolic relative to $\Hl \cup E_G(h_1)\cup E_G(h_2)$.
Let $\mu ,\e , \rho $ be
constants that satisfy the conclusions of Lemma \ref{G1} and Lemma
\ref{torsion} for $\lambda =1/3 $, $c=2$, $N=1$. By Theorem
\ref{w}, there are $n$ and $m_1, \ldots , m_n$ such that
the set $\mathcal R$ of all cyclic shifts and their inverses of the word
$$R\equiv th_1^{m_1}h_2^{m_1}\ldots h_1^{m_n}h_2^{m_n}$$
in the alphabet $\mathcal A= X\cup\mathcal H\cup
(E_G(h_1)\setminus \{ 1\} )\cup (E_G(h_2)\setminus \{ 1\} )$ satisfies the $C(1/3, 2, \e , \mu , \rho )$--condition (here $h_i^{m_j}$ is regarded as a letter in $E_G(h_i)\setminus \{
1\}$, $i=1,2$, $j=1, \ldots , n$). Indeed it suffices to choose large enough $n$ and $m_1, \ldots , m_n$ satisfying $m_i\ne \pm m_j$ whenever $i\ne j$. Let $G_1$ be the quotient of $G$ obtained by
imposing the relation $R=1$ and $\eta $ the corresponding natural
homomorphism.
By Lemma \ref{G1}, $G_1$ is hyperbolic relative to the images of
$H_\lambda , \lambda \in \Lambda $ and $E_G(h_1)$, $E_G(h_2)$. As
any elementary group is hyperbolic, $G_1$ is also hyperbolic
relative to $\{ \eta (H_\lambda )\} _{\lambda \in \Lambda }$
according to Lemma \ref{exhyp}. The inclusion $\eta (t)\in \eta
(H)$ follows immediately from the equality $R=1$ in $G_1$. The
third assertion of the theorem follows from Lemma \ref{G1} b) as
any element from the union $\bigcup\limits_{\lambda \in \Lambda }
H_\lambda $ has length $1$.
Similarly as $\eta $ is injective on $E_G(h_1)\cup E_G(h_2)$,
$\eta (h_1)$ and $\eta (h_2) $ are elements of infinite order.
Note also that $\eta (h_1)$ and $\eta (h_2) $ are not
commensurable in $G_1$. Indeed, otherwise the intersection $\big(
\eta (E_G(h_1))\big) ^g\cap \eta (E_G(h_2))$ would be infinite for some
$g\in G$, contradicting the first assertion of Lemma \ref{malnorm}.
Assume now that $g\in E_{G_1}(\eta (h_1))$, where $E_{G_1}(\eta
(h_1))$ is the maximal elementary subgroup of $G_1$ containing
$\eta (h_1)$. By the first assertion of Theorem \ref{E(g)}, $\big(
\eta (h_1^m)\big) ^g= \eta (h_1^{\pm m})$ for a certain $m\ne 0$.
Therefore, $\big( \eta (E_G(h_1)) \big) ^g\cap \eta (E_G(h_1))$
contains $\eta (h_1^m)$ and in particular this intersection is
infinite. By the second assertion of Lemma \ref{malnorm}, this
means that $g\in \eta (E_G(h_1))$. Thus we proved that $\eta
(E_G(h_1))=E_{G_1}(\eta (h_1))$. The same is true for $h_2$.
Finally, using injectivity of $\eta $ on $E_G(h_1)\cup E_G(h_2)$
again, we obtain $$E_{G_1}(\eta (h_1))\cap E_{G_1}(\eta
(h_2))=\eta (E_G(h_1))\cap \eta (E_G(h_2))=\eta \big( E_G(h_1)\cap
E_G(h_2)\big) = \{ 1\} .$$ This means that the image of $H$ is a
suitable subgroup of $G_1$. To complete the proof it remains to
note that the last assertion of the theorem follows from Lemma
\ref{torsion}.
\end{proof}
\section{Introduction}
Most young (\about 1~Myr) solar mass stars are surrounded by circumstellar
disks \citep{Strom89}. These disks have masses \citep{Beckwith90} and sizes
\citep{McCaughrean96,Dutrey96} comparable to that expected for the primitive
solar nebula, and are thus the suspected sites for the formation of planetary
systems. Indeed, the increasing number of planets discovered around
main-sequence stars \citep{Marcy00} strongly supports the notion
that planet formation in circumstellar disks is a common occurrence. The
physics of how planets form within circumstellar disks is unclear though
and remains the subject of ongoing investigation (cf. Pollack et~al.~1996 and
Boss~2001). Since the direct detection of planets around young stars is
challenging for current instrumentation, observational constraints on the
planet formation process are frequently inferred from the evolutionary time
scales of circumstellar disks.
The evolution of circumstellar disks has been studied primarily in the
infrared by identifying stars that exhibit emission in excess of that expected
from the stellar photosphere. Near-infrared $JHKL$ band photometry of nearby
star-forming regions \citep{Strom89,Haisch01}, which probes the inner
\about 0.1~AU of the disk around solar-type stars, suggests that the percentage
of stars with 2-4\micron\ excesses is $\simgreat$\ts 80\% for \about 1~Myr stars
and diminishes to \about 50\% by an age of \about 3~Myr. The inner disk does
persist to relatively old ages in at least some stars, as evidenced by the
high fraction of stars (60\%) with $L$-band excesses in the 5-9~Myr
$\eta$~Cha cluster \citep{Lyo03}, the $K$-band excess in the 7-17~Myr star
PDS~66 \citep{Mamajek02,Mamajek04}, and the disk accretion signatures in at
least two members of the \about 10~Myr old TW~Hydra association
\citep{Muzerolle00}. By ages of $\simless$\ts 10-15~Myr,
the inner disk as traced by $K$-band photometric excesses has
diminished to nearly zero (Mamajek, Meyer, \& Liebert~2002).
Similarly, 10\micron\ observations
show that \about 20\% of the stars in the TW Hydra association
\citep{Jay99,Wein04} exhibit evidence for dust between 0.1~AU and 1~AU,
with the excess fraction falling to approximately 0 by 30~Myr in at least the
Tucana-Horologium association \citep{Mamajek04}. Far-infrared observations
obtained with IRAS and ISO suggest disk evolution from 1-10~AU on similar
time scales \citep{Meyer00}, but are inconclusive on whether subsequent
evolution is continuous \citep{Spangler01} or discrete
\citep{Habing99,Habing01,Decin03}.
Millimeter-wavelength continuum observations provide a complementary picture
of disk evolution by probing colder disk material that is not
detectable in the near- and mid-infrared. Continuum surveys of stars
in the Taurus (Beckwith et~al.~1990, Osterloh \& Beckwith~1995; see also
Motte \& Andr\'e~2001), $\rho$~Oph (Andr\'e \& Montmerle 1994; see also
Motte, Andr\'e, \& Neri~1998, N\"urnberger et~al.~1998, Johnstone et~al.~2000),
and Chamaeleon I\&II \citep{Henning93} molecular clouds suggest that the
median disk mass (including gas and dust components) around low-mass stars
is \about 0.005\ts M$_\odot$\ for stellar ages of \about 1~Myr. Moreover,
\citet{Beckwith90} found no evidence of temporal evolution in the mass of
cold, small ($\simless$\ts 1~mm) dust particles between ages of 0.1 and 10~Myr.
\citet{Zuckerman93} conducted a continuum survey of stars in the Pleiades and
Ursa Major and found that the dust masses decrease by at least two orders of
magnitude by an age of \about 300~Myr. Similar surveys of Lindroos binaries
(A- and B- stars with ages of \about 10-200~Myr; see Jewitt~1994, Gahm
et~al.~1994, Wyatt, Dent, \& Greaves~2004) and a limited sample of stars in
the $\beta$~Pic Moving Group and Local Association \citep{Liu04} suggest that
few massive disks ($\simgreat$\ts 0.01\ts M$_\odot$) survive beyond an age of
\about 10~Myr.
Despite the substantial progress made by recent observations, our
empirical understanding of the evolution of cold circumstellar dust remains
incomplete. Analysis of the stellar ages in the Taurus molecular cloud
(from which the Beckwith et~al.~1990 sample was drawn) suggests that the
apparent age spread can be attributed predominantly to observational
uncertainties \citep{Hartmann01}, and, therefore, studies of Taurus alone are
insufficient to establish the time scales for disk evolution. The youngest
nearby ($<$ 200~pc) open clusters (e.g. Pleiades, IC~2602, $\alpha$~Per)
are frequently used to characterize the disk properties at ages of
50-120~Myr. Few stars have been studied, however, at intermediate ages
(3-30~Myr), when planets of all masses are thought to be in their final
assembly stages. Lindroos binaries provide a few examples of these intermediate-age
stars, but since these are systems where by definition the primary is an
intermediate-mass star, they may not share the evolutionary history of
``typical'' solar-type stars.
In recent years, an increasing number of intermediate-aged, solar-type
stars have been identified in nearby stellar associations and moving groups
(see, e.g., Mamajek, Lawson, \& Feigelson~2000; Neuh\"auser et~al.~2000;
Mamajek, Meyer, \& Liebert~2002; Wichmann, Schmitt, \& Hubrig~2003; Song,
Zuckerman, \& Bessell~2003). Some of these stars are being observed by the
{\it Spitzer} Legacy program ``Formation and Evolution of Planetary Systems''
(or FEPS; Meyer et~al.~2004) as part of a photometric and spectroscopic survey
of \about 325 solar-type stars. The mid- and far-infrared {\it Spitzer}
observations will be sensitive to relatively warm dust, and to survey for
potentially cold dust around these stars, we conducted a
millimeter-wavelength continuum survey of 125 stars in the FEPS sample.
Figure~\ref{fig:tdust} shows the sensitivity to dust mass of our millimeter
continuum survey relative to that of {\it IRAS} and the upcoming FEPS
{\it Spitzer} survey. As shown in this Figure, our observations are
more sensitive to dust mass than {\it IRAS} 60 and 100\micron\ data for dust
colder than \about 25~K, and more sensitive than {\it Spitzer} 24\micron,
70\micron,
and 160\micron\ observations for dust colder than \about 30~K, 15~K, and 10~K
respectively. (Note, however, that the sensitivity limits for {\it IRAS}
100\micron\ and {\it Spitzer} 70 and 160\micron\ observations depend on the
local background level from cirrus.)
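The temperature dependence of these sensitivity curves presumably reflects the standard optically thin dust-mass estimate $M_d=S_\nu d^2/\kappa _\nu B_\nu (T_d)$. A minimal numerical sketch follows; the Hildebrand-style opacity $\kappa _\nu =0.1\, (\nu /10^{12}\,{\rm Hz})$ cm$^2$\ts g$^{-1}$ (gas $+$ dust) is an illustrative assumption, not necessarily the normalization used for Figure~\ref{fig:tdust}:

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann (SI)
M_SUN = 1.989e30                          # solar mass [kg]
PC = 3.086e16                             # parsec [m]

def planck_bnu(nu, T):
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def dust_mass_msun(flux_jy, dist_pc, T, nu=101e9):
    """Optically thin mass M = S_nu d^2 / (kappa_nu B_nu(T)) in solar
    masses, with an assumed Hildebrand-style opacity
    kappa_nu = 0.1 (nu / 1e12 Hz) cm^2 g^-1 (gas + dust)."""
    kappa = 0.1 * (nu / 1e12) * 0.1       # cm^2 g^-1 -> m^2 kg^-1
    s_si = flux_jy * 1e-26                # Jy -> W m^-2 Hz^-1
    d_m = dist_pc * PC
    return s_si * d_m**2 / (kappa * planck_bnu(nu, T)) / M_SUN
```

Because $B_\nu (T)$ falls steeply with $T$ at fixed flux, the same millimeter detection limit corresponds to a much larger mass of cold dust than of warm dust, which is why the millimeter survey wins below \about 25-30~K.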
Accordingly, we have initiated a millimeter-wavelength continuum
survey of a substantial fraction of the FEPS sample. The primary advantage of
these observations over previous studies is that they sample a large number of
stars (125) spanning a small range of stellar masses (0.5-2\ts M$_\odot$) to
specifically address the time scale for the evolution of cold circumstellar
dust around solar-type stars.
In \S\ref{sample}, we summarize the sample selection and properties of
the FEPS stars observed for this study. The telescopes and instruments
used for these observations are described in \S\ref{obs}, along with the
data reduction procedures and the measured continuum fluxes.
\S\ref{detections} summarizes the properties of the sources with detected
millimeter continuum emission, and an analysis of the entire survey is
presented in \S\ref{analysis}. The implications for disk evolution are
discussed in \S\ref{discussion}, and \S\ref{summary} summarizes our
conclusions.
\section{Stellar Sample}
\label{sample}
The sources observed for this study were taken from the target list for the
FEPS {\it Spitzer} Legacy program \citep{Meyer04}, which consists of \about
325 stars with stellar masses between \about 0.5\ts M$_\odot$\ and 2\ts M$_\odot$\ and ages
spanning from \about 3~Myr to 3~Gyr. The FEPS source list includes field
stars, stellar clusters (Pleiades, Alpha~Per, Hyades, and IC~2602), and
members of the Scorpius-Centaurus OB association. Ages were assigned based on
a number of considerations, including:
i) pre-main-sequence evolutionary tracks for stars with ages $\simless$\ts 10~Myr,
ii) the observed lithium equivalent width,
iii) x-ray activity,
iv) the strength of the \ion{Ca}{2} H\&K emission for a volume limited sample
of solar-type stars ($\simless$\ts 50~pc), and
v) the association of stars with clusters or star-forming regions of known age.
Distances were derived based upon Hipparcos parallaxes for nearby stars,
kinematic distances for stars associated with young moving groups and
associations (Mamajek et~al.~2004, in preparation), and adopting nominal
cluster distances for older clusters. Priority was placed on observing the
youngest ($\simless$\ts 300 Myr) and closest ($\simless$\ts 100~pc) sources within
the FEPS program.
Table~\ref{tbl:sources} lists the 138 sources observed for this study along
with the adopted distances. We emphasize that the adopted distances and
especially the assigned ages are preliminary, and a detailed analysis of the
properties of the FEPS sample is forthcoming (Hillenbrand et~al.~2004, in
preparation). While all of these sources were initially in the FEPS target
list, 13 sources were subsequently dropped due to time limitations or because
ancillary ground-based observations cast doubt on the preliminary assigned age.
These 13 stars are listed separately in Table~\ref{tbl:sources} for
completeness and are otherwise not analyzed in this paper. None of the dropped
sources were detected in the millimeter continuum with a signal to noise ratio
$\ge 3$. The 125 observed sources that are still
included in the FEPS program are grouped into age bins that span a factor
of 3 as indicated in Table~\ref{tbl:sources}. The sources observed here
include 14 stars between the ages of 3 and 10~Myr, 11 with ages of
10-30~Myr, 39 with ages of 30-100~Myr, 39 with ages of 100-300~Myr,
8 with ages of 300-1000~Myr, and 16 with ages of 1-3~Gyr. Submillimeter
continuum photometry has been previously presented for five FEPS sources
observed here: \citet{Williams04} detected 450\micron\ and 850\micron\
emission toward HD~107146, and \citet{Liu04} report upper limits to the
850\micron\ flux toward HD~35850, HD~199143, HD~129333, and HD~77407.
In addition, 13 sources, as indicated in Table~\ref{tbl:sources}, have a
$\ge3\sigma$ excess above the photospheric flux in one or more of the $IRAS$
bands as determined by fitting a Kurucz model atmosphere to compiled optical
and near-infrared photometry.
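The excess criterion itself is simple once a photospheric prediction is in hand; the sketch below assumes the Kurucz-model fluxes described above are available per band (the function interface is hypothetical):

```python
def has_ir_excess(f_obs, f_phot, sigma, nsigma=3.0):
    """Return True if the observed flux exceeds the predicted
    photospheric flux by at least nsigma times the measurement
    uncertainty in any band (the >= 3-sigma criterion in the text).

    f_obs, f_phot, sigma: per-band observed flux, Kurucz-model
    photospheric flux, and flux uncertainty (same units)."""
    return any((fo - fp) / s >= nsigma
               for fo, fp, s in zip(f_obs, f_phot, sigma))
```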
\section{Observations and Data Reduction}
\label{obs}
\subsection{OVRO}
Millimeter continuum observations of 57 stars were obtained in the summer
of 2002 and the spring and summer of 2003 using the Owens Valley Radio
Observatory (OVRO) millimeter-wave interferometer. The array contains six 10.4
meter antennas, but during the summer months, only 3-5 antennas were typically
operational at any one time. Data were obtained in either the OVRO ``compact''
or ``low'' configurations that provided a FWHM angular resolution of 5-10$''$.
Continuum data were recorded simultaneously in four 1~GHz-wide continuum
channels, and starting in the summer of 2003, low spectral resolution
(32.5~MHz) data were also recorded using the COBRA spectrometer. At the time
of the observations, 7 of the 8 COBRA modules were functional, providing
7~GHz out of a possible maximum 8~GHz of bandwidth. When available, the COBRA
spectral channels were averaged in 500~MHz intervals and used in place of the
1~GHz continuum data during the data analysis. The majority of the
observations were obtained at a mean frequency
of 101~GHz as a compromise between achieving low system temperatures and
atmospheric phase stability (favoring low frequencies) and dust emissivity
(favoring high frequencies). Seven sources were observed at a mean frequency
of 112~GHz as part of a survey for $^{12}$CO(J=1--0) molecular line emission;
only the continuum data are presented here. A phase and
amplitude calibrator was observed every 15 minutes, and the data were flux
calibrated by observing Neptune or Uranus.
The OVRO data were reduced using MIR, an IDL-based data reduction package for
millimeter-wave interferometry developed by N. Scoville and J. Carpenter.
Observations were repeated as necessary to achieve a typical RMS
noise level of 0.5-1.0\ts mJy\ts beam$^{-1}$. To assess the calibration uncertainties, the
dispersion of the measured fluxes was computed for each calibrator
observed on 5 or more days within a 1 week time period (to reduce the
likelihood of intrinsic source variability). Out of 161 calibrator sequences
meeting these criteria, the median dispersion is 5\%, and 96\% of the
sequences have a dispersion of less than 10\%. We adopt 5\% as the ``typical''
1$\sigma$ calibration uncertainty. Source fluxes were computed through least
squares fitting of the visibility data assuming that the emission originates
from a point source at the phase center. The reduced chi-square from the
fit was typically \about 1.8, where nominally it should be \about 1 if the
assigned uncertainties to the data points are correct and the point source
model is an adequate representation of the continuum emission. We assumed that
the large value of the reduced chi-square indicates that the uncertainties
in the visibility data have been underestimated, and accordingly we increased
the uncertainties in the derived fluxes by the square root of the reduced
chi-square value. Images were produced for each source using MIRIAD and
visually inspected to search for continuum detections that may be offset from
the phase center. None of the sources in this initial survey were detected at
a signal to noise ratio $\ge$ 3.
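The error-inflation step can be sketched as follows. This is a simplified stand-in for the actual MIR visibility fit (real interferometric visibilities are complex-valued and baseline-dependent); it fits the one-parameter point-source model, a constant real visibility, by weighted least squares:

```python
import numpy as np

def fit_point_source(vis_re, sigma):
    """Weighted least-squares flux for a point source at the phase
    center (model: Re(V) = S on every baseline). If the reduced
    chi-square exceeds 1, the flux uncertainty is inflated by
    sqrt(chi2_red), treating the visibility errors as underestimated."""
    w = 1.0 / sigma**2
    flux = np.sum(w * vis_re) / np.sum(w)
    flux_err = np.sqrt(1.0 / np.sum(w))
    chi2_red = np.sum(w * (vis_re - flux)**2) / (len(vis_re) - 1)
    if chi2_red > 1.0:
        flux_err *= np.sqrt(chi2_red)
    return flux, flux_err, chi2_red
```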
During the course of this program, \citet{Williams04} detected 450\micron\ and
850\micron\ continuum emission from HD~107146. This source was a non-detection
in our original survey, but the submillimeter observations motivated us to
obtain more sensitive, higher resolution 101~GHz data. These additional
observations were obtained between 2003~April and 2004~May. We used 3C273 as
the phase and gain calibrator, which is located at an angular separation of
14.7$^\circ$ from HD~107146. Beginning in 2003~August, the radio source
J1215+169, located 1.0$^\circ$ from HD~107146, was monitored after
every 2 integrations on HD~107146 as a test of the accuracy of the phase
calibration derived from 3C273. For most of these observations we were not
able to observe a planet to directly flux calibrate the data, and instead we
used 3C273 as a secondary flux calibrator. Since the millimeter flux from
3C273 is frequency dependent and time variable (see, e.g., Steppe et~al.~1993),
we determined the flux history of 3C273 using observations obtained by other
programs at OVRO. Using 201 OVRO ``tracks'' in which both 3C273 and a planet
(either Neptune or Uranus) were observed in the time frame of the HD~107146
observations, we found that the flux from 3C273 varied in time between
7 and 10~Jy at observed frequencies of 86 to 115~GHz. We determined that the
flux from 3C273 varied as $S_\nu \propto \nu^{-0.9}$ by fitting a power law
to a portion of the data where the 3C273 flux was \about 10~Jy and appeared to
have low variability. While the spectral index can vary between \about 0.3
and 1.5 during 3C273 flare events, our derived spectral index as well as the
absolute flux level suggests 3C273 was largely in a quiescent phase for the
time period of our observations \citep{Robson93}. Therefore, we assumed that
the spectral index was constant. After scaling the
observed fluxes to 101~GHz, a median boxcar curve, with a width corresponding
to 15 observed data points, was fitted through the resulting time series data
to estimate the flux for 3C273 for any given time. The residuals from the
fitted curve have a RMS of 6\%, which we adopt as the calibration
uncertainty in the measured fluxes for 3C273 and consequently HD~107146.
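The two steps of this bootstrap, scaling each 3C273 measurement to 101~GHz with the derived spectral index and then median-boxcar smoothing the time series, can be sketched as below; the width-15 window follows the text, while the edge handling is an illustrative choice:

```python
import numpy as np

def scale_to_101ghz(flux_jy, nu_ghz, alpha=0.9):
    """Scale a 3C273 flux to 101 GHz assuming S_nu ~ nu^-alpha,
    with alpha = 0.9 as derived from the planet-calibrated tracks."""
    return flux_jy * (101.0 / nu_ghz) ** (-alpha)

def running_median(fluxes, width=15):
    """Median boxcar: estimate the calibrator flux at each epoch from
    the median of up to `width` neighboring scaled measurements."""
    y = np.asarray(fluxes, dtype=float)
    half = width // 2
    out = np.empty_like(y)
    for i in range(y.size):
        out[i] = np.median(y[max(0, i - half):min(y.size, i + half + 1)])
    return out
```

The median (rather than a mean) keeps isolated flaring epochs or bad tracks from biasing the adopted calibrator flux.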
The HD~107146 data were reduced in MIR, and images were produced and cleaned in
MIRIAD using a Briggs robust weighting parameter \citep{Briggs95} of 0. The
resulting image had a FWHM synthesized beam of $4.5''\times4.0''$ and a RMS
noise of 0.15~\ts mJy\ts beam$^{-1}$. Analysis of the HD~107146 data is presented in
\S\ref{hd107146}.
\subsection{SEST}
Continuum observations at $\lambda$1.2~mm were obtained for 89 stars using the
37-element SIMBA bolometer camera on the 15~m Swedish-ESO Submillimetre
Telescope (SEST) in 2001 November, 2002 June, and 2002 November. The
instantaneous field of view of SIMBA is \about $4'\times4'$ with 44\arcsec\
separation between the bolometer channels. The FWHM beam size of the
observations is 24\arcsec. Fully sampled maps were generated by dithering the
array to produce images with 8$''$ sampling. The telescope pointing was
checked before and after each map, and the pointing offsets were repeatable
to within 2$''$ in both azimuth and elevation in most cases. However,
for the observations of both RX~J1852.3$-$3700 and MML~1 in June~2002, the
pointing offsets derived for the start and end of the maps differed by
\about 40$''$ in elevation. A continuum source was detected in the
RX~J1852.3$-$3700 map (see \S\ref{photometry} and \S\ref{rxj1852}) that was
offset 40$''$ from the pointing center. We assumed that the 40$''$ angular
offset was due to the pointing error and associated this continuum source with
RX~J1852.3$-$3700. The atmospheric opacity at $\lambda$1.2~mm as
measured from sky dips ranged from 0.12 to 0.20 in 2002~June and 0.25 to 0.35
in 2001~November and 2002~November. Flux calibration was derived from maps of
Uranus and Neptune, which were obtained each night. The estimated 1$\sigma$
calibration uncertainty is 10\% based on repeated observations of the planets.
Source fluxes were measured using a beam-weighted aperture \citep{Naylor98} of
16$''$ in radius with a background annulus that extended from 30$''$ to
120$''$. If no source was apparent in the images, the aperture was centered on
the stellar position. The typical RMS noise measured in the background annulus
varied from \about 8\ts mJy\ts beam$^{-1}$\ for the 2002 June observations to 17\ts mJy\ts beam$^{-1}$\
for the 2001~November and 2002~November observations.
\subsection{CSO}
Submillimeter continuum observations at $\lambda350$\micron\ were obtained
for 6 stars using the SHARC bolometer camera \citep{Hunter96} on the 10.4~m
telescope of the Caltech Submillimeter Observatory (CSO). The observations
were conducted during the second half of the night on 2001 October 1-3. SHARC
contains a 24 pixel monolithic silicon bolometer array, of which 19 pixels were
operational at the time of the observations. Each pixel subtends
$5''\times10''$ on the sky. The FWHM beam size of the CSO telescope at
$\lambda$350\micron\ is \about 9$''$. Data were obtained in
point-source mode by chopping 1\arcmin\ in azimuth, with on-source integration
times ranging from 12 to 36 minutes. Zenith opacities at 225~GHz as measured
by the CSO tau meter ranged between 0.04 and 0.09. The observations were flux
calibrated with regular observations of Saturn, with an estimated 1$\sigma$
calibration uncertainty of 20\% based on the repeated observations. Pointing
was checked every 1-2 hours; these checks revealed a pointing drift as a
function of elevation during the first two nights, before the pointing model
was updated on the last night. During reduction, the
data were combined using a shift-and-add procedure to account for the drift.
The RMS noise in the calibrated, coadded data ranged from 37 to 57\ts mJy\ts beam$^{-1}$.
\subsection{Photometry}
\label{photometry}
The measured millimeter and submillimeter fluxes for all observed sources are
summarized in Table~\ref{tbl:sources}. With the exception of HD~107146,
which was resolved (see \S\ref{hd107146}), the fluxes are for point sources
at the stellar position. These flux measurements therefore do not account for
any extended continuum emission. For cases where large pointing
uncertainties were present and the stellar position in the image is uncertain
by more than one beam width, the table lists the flux as the 3$\sigma$ upper
limit, which was computed as 3 times the RMS noise in the image. The flux
uncertainties listed in Table~\ref{tbl:sources} represent the internal
uncertainties only and do not include the calibration uncertainties.
Figure~\ref{fig:snr} presents a histogram of the ratio of observed flux to the
internal flux uncertainty. The dashed curve shows the expected frequency
distribution of this ratio for gaussian noise; i.e. a normal distribution
centered on zero with unit dispersion. Evidently gaussian noise is an adequate
representation of the observed flux distribution for most sources. Four
sources were detected with a signal to noise ratio $\ge 3$:
HD~107146 at $\lambda$3.0~mm with the OVRO interferometer, and PDS 66,
RX~J1842.9$-$3532, and RX~J1852.3$-$3700 at $\lambda$1.2~mm with the
SEST bolometer. As discussed in \S\ref{hd107146}, the emission toward
HD~107146 is resolved
with OVRO. For the 3 sources detected with SEST, the observed FWHM of the
continuum emission, measured by fitting a gaussian to the emission radial
profile using the IMEXAMINE task in IRAF, varies between 24$''$ and 28$''$.
Given that the FWHM beam size of the SEST telescope is 24$''$, the observed
emission is consistent with a point source at this angular resolution.
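The gaussian-noise comparison in Figure~\ref{fig:snr} can be made quantitative with the unit-normal CDF; a minimal stdlib sketch (the thresholds are illustrative):

```python
import math

def normal_cdf(x):
    """CDF of the unit normal, written in terms of the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def expected_fraction(lo, hi):
    """Fraction of S/N values expected in [lo, hi) if the fluxes are pure noise."""
    return normal_cdf(hi) - normal_cdf(lo)

# For pure gaussian noise, fewer than ~0.2% of sources should reach S/N >= 3,
# so detections at that threshold are well in excess of chance.
p_ge_3 = 1.0 - normal_cdf(3.0)
```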
\section{Detected Millimeter Continuum Sources}
\label{detections}
Figures~\ref{fig:pds66}--\ref{fig:rx1852} show the observed spectral energy
distribution (SED) for the 3 stars detected with SEST. The SED for HD~107146
is presented and analyzed by \citet{Williams04} and is not shown here.
In addition to the millimeter-wavelength continuum photometry, the SEDs
include photometry from Tycho-2 \citep{Hog00}, {\it 2MASS}, {\it IRAS},
and $V(RI)_C$ for two sources \citep{Neuhauser00}. Also shown in these
figures is the best-fit Kurucz model atmosphere to the optical and
near-infrared photometry. The best-fit model was obtained by minimizing the
chi-square between the model magnitudes and observed $(BV)_TV(RI)_C$ and $JH$
photometry, where the free parameters in the fit are the overall normalization
constant, the effective temperature, and visual extinction. Model magnitudes
were computed by convolving the Kurucz model with the appropriate filter
transmission as described in Cohen et~al.~(2003a,b) and references therein.
Each of these sources exhibits excess emission in the {\it IRAS} bands,
and both PDS~66 and RX~J1842.9$-$3532 may have excess emission at wavelengths
as short as 2.2\micron. An analysis of the SEDs is not presented
here since the FEPS project will soon have {\it Spitzer} spectrophotometry
from 3.6 to 160\micron\ for these sources that will permit significantly more
detailed circumstellar disk models to be constructed than is possible with the
limited data available now. The following discussion briefly summarizes the
properties of the four sources detected in the millimeter continuum.
\subsection{PDS 66}
\label{pds66}
PDS~66 (also known as Tycho~9246~971~1, Hen~3-892, IRAS~13185-6922, and MML~34)
was first identified by \citet{Gregorio92} as a classical T Tauri star (CTTS)
by virtue of having a far-infrared excess detected by {\it IRAS}, strong
H$\alpha$ emission, and deep lithium absorption at $\lambda$6707\AA.
\citet{Mamajek02} identified the star as a likely proper motion member of the
Lower Centaurus-Crux subgroup of the Scorpius-Centaurus OB association
and derived the stellar parameters, which we adopt here. The
star has a spectral type of K1~IVe, and is distinguished as the only known
CTTS in a sample of 110 stars in the Lower Centaurus-Crux and
Upper Centaurus-Lupus subgroups. Depending on the pre-main-sequence
evolutionary tracks used to estimate the stellar properties, the stellar age
and mass are 7-17~Myr and 1.1-1.2\ts M$_\odot$, respectively. The mean age of the
Lower Centaurus-Crux subgroup is 17-23~Myr. We adopt the secular parallax
distance of 86 pc as derived by \citet{Mamajek02}. The observed
millimeter continuum flux corresponds to a dust mass (see \S\ref{discussion})
of \about $5\times10^{-5}$\ts M$_\odot$, which is comparable to the circumstellar disk
masses found around young stars in the Taurus \citep{Beckwith90,Osterloh95}
and $\rho$~Oph \citep{Andre94,Motte98} molecular clouds. Thus PDS~66 appears
to be a rare example of a relatively old ($\simgreat$\ts 10~Myr) CTTS, and the
millimeter continuum emission likely originates from a circumstellar accretion
disk.
\subsection{RX J1842.9$-$3532}
\label{rxj1842}
As described in \citet{Neuhauser00}, the x-ray source RX~J1842.9$-$3532 was
detected in the {\it ROSAT} All-sky survey and subsequently associated with
the star Hen~3-1722 and IRAS 18396$-$3535. The star has a K2 spectral type,
exhibits strong H$\alpha$ in emission (see also Henize~1976, Beers et~al.~1996),
has a deep lithium absorption feature, and contains excess {\it IRAS}
far-infrared emission. Based on these characteristics the star has been
classified as a CTTS. RX~J1842.9$-$3532 is located at an angular separation of
\about 3.6$^\circ$ from the RCrA molecular cloud. The observed radial
velocities and UCAC2 proper motions for the star agree to within 1~km~s$^{-1}$
of the values predicted for RCrA members at their positions
(using the space motion vector for the RCrA group from Mamajek \&
Feigelson~2001; E.~Mamajek, private communication). The secular parallax
distance is statistically consistent with the best available distance
of 130~pc to the RCrA association \citep{Casey98}. For this distance,
\citet{Neuhauser00} estimate a stellar age of \about 10~Myr and a stellar mass
of 1.2\ts M$_\odot$\ based on pre-main-sequence evolutionary tracks. The observed
millimeter continuum flux implies a dust mass (see \S\ref{discussion}) of
\about $5\times10^{-5}$\ts M$_\odot$. Given that the star has characteristics similar
to that of a CTTS, the dust emission likely originates from an accretion disk.
\subsection{RX J1852.3$-$3700}
\label{rxj1852}
As described in \citet{Neuhauser00}, the x-ray source RX~J1852.3$-$3700 was
detected in the {\it ROSAT} All-sky survey and associated with a stellar
counterpart that has strong H$\alpha$ emission and far-infrared emission as
detected by {\it IRAS} (IRAS 18489$-$3703). The star has a K7 spectral type and
has been classified as a CTTS. It is located at an angular separation of
\about 1$^\circ$ from the RCrA molecular cloud, and similar to
RX~J1842.9$-$3532, the observed radial velocities and UCAC2 proper motions
agree to within 1~km~s$^{-1}$ to the predicted values for RCrA members
(E. Mamajek, private communication). For the adopted distance of 130~pc to
RCrA \citep{Casey98}, the estimated stellar age and mass from
pre-main-sequence evolutionary tracks are $\sim$10~Myr and 1.1\ts M$_\odot$\
respectively \citep{Neuhauser00}. The observed millimeter continuum flux
implies a dust mass (see \S\ref{discussion}) of \about $5\times10^{-5}$\ts M$_\odot$.
Since the star has characteristics similar to that of a CTTS, the dust
emission likely originates from a circumstellar accretion disk.
\subsection{HD 107146}
\label{hd107146}
HD~107146 has a G2~V spectral type with a Hipparcos distance estimate of
29~pc. The star has a number of age indicators (lithium equivalent width,
x-ray luminosity, space motions similar to Pleiades moving group) which in
aggregate suggest an age of \about 80-200~Myr (see discussion in
Williams et~al.~2004). \citet{Silverstone00} and \citet{Metchev04} noted that
the source contains an {\it IRAS} far infrared excess at 60\micron\ and
100\micron. \citet{Williams04} further detected continuum emission at
450\micron\ and 850\micron\ that firmly established that the source is
surrounded by circumstellar dust, most likely originating from a
debris disk. Their analysis of the SED suggested that the disk contains an
inner hole with a radius $>$ 31~AU.
Figure~\ref{fig:hd107146} presents the OVRO $\lambda$3~mm map of HD~107146.
The left panel shows the contour map of the continuum emission
obtained by combining all available data. The middle panel displays the
continuum image for a subset of the data when J1215+169 was monitored
as a test of the phase/gain solution derived from 3C273, and the right panel
shows the corresponding image of J1215+169. The observed peak flux in the
HD~107146 image is $0.69\pm0.15$\ts mJy\ts beam$^{-1}$\ in a $4.5''\times4.0''$~beam. The
integrated flux in an 8$''$ radius aperture, measured in a naturally-weighted
map ($6.7''\times6.0''$ beam size) to optimize the signal to noise, is
$1.42\pm0.23$~mJy. (The uncertainties in the peak and integrated fluxes do not
include the 6\% calibration uncertainty; see \S\ref{obs}.) \citet{Williams04}
fitted the excess infrared and submillimeter emission with a graybody to
derive a dust temperature of 51$\pm$4~K, and estimated the frequency
dependence of the submillimeter emission to be
$S_\nu \propto \nu^{2.69\pm0.15}$ in the Rayleigh-Jeans limit. Based on the
observed 850\micron\ flux of $20\pm4$~mJy, the expected 3~mm flux for these
model parameters is $0.78\pm0.21$~mJy. The difference between the observed
and expected 3~mm flux, including calibration uncertainties, is
$0.64\pm0.32$~mJy, or a 2.0$\sigma$ difference. Thus the observed OVRO 3~mm
flux is marginally consistent with that expected from the graybody
extrapolation of the submillimeter emission.
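The graybody extrapolation and the significance of the flux difference can be checked with a short calculation. The band centers below are approximated from the quoted wavelengths; the paper's exact frequencies differ slightly, so this sketch brackets, rather than reproduces, the quoted 0.78~mJy prediction and 2.0$\sigma$ difference.

```python
import math

C_UM_GHZ = 2.99792458e5              # speed of light in micron * GHz
nu_850 = C_UM_GHZ / 850.0            # ~353 GHz
nu_3mm = C_UM_GHZ / 3000.0           # ~100 GHz

s_850, s_850_err = 20.0, 4.0         # mJy (Williams et al. 2004)
alpha, alpha_err = 2.69, 0.15        # S_nu proportional to nu^alpha

# Predicted 3 mm flux from the 850 um measurement.
s_3mm_pred = s_850 * (nu_3mm / nu_850) ** alpha          # ~0.7 mJy

# Propagate the 850 um flux and spectral-index uncertainties.
frac_err = ((s_850_err / s_850) ** 2
            + (math.log(nu_3mm / nu_850) * alpha_err) ** 2) ** 0.5
pred_err = s_3mm_pred * frac_err

s_obs, s_obs_err = 1.42, 0.23        # OVRO integrated flux, mJy
sigma = (s_obs - s_3mm_pred) / (s_obs_err ** 2 + pred_err ** 2) ** 0.5
```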
The centroid position of the continuum emission toward HD~107146, measured
by fitting an elliptical gaussian to the visibility data, is offset by
($-0.46''\pm0.45,-0.51''\pm0.43$) from the stellar position. (The uncertainty
in the stellar position, including uncertainties in the Hipparcos astrometry
and Tycho-2 proper motions, is \about 0.013$''$ for both the
right ascension and declination.) The astrometric uncertainty due to
phase-correction uncertainties is $<0.35''$ as determined from the measured
offset of J1215+169 from the phase center (see Fig.~\ref{fig:hd107146}). Thus
within the astrometric uncertainties, the centroid of the millimeter continuum
emission is centered on the stellar position. \citet{Williams04} indicated
that the peak 450\micron\ emission may be offset by 4.4$''$ from the stellar
position. This offset is inconsistent with the OVRO observations assuming
that the 450\micron\ and 3~mm emission arise from the same location in the
disk.
Figure~\ref{fig:hd107146} suggests that the continuum emission around HD~107146
may be resolved with the OVRO interferometer in that the lowest contours are
extended at a position angle (east of north) of approximately $-55^\circ$,
which is not observed in J1215+169. To quantify the possible extension in the
continuum emission, Figure~\ref{fig:amp} shows the average observed visibility
amplitude for the HD~107146 data as a function of $uv$ distance from the phase
center. The phase center was adopted as the reference point since within the
astrometric uncertainties the continuum emission is centered on the stellar
position as discussed above. The visibility data were averaged in 5 $uv$ bins,
where the bin sizes vary as a function of $uv$ distance to maintain a constant
number of visibility points per bin. The amplitude uncertainties
represent the standard deviation of the mean in the visibility data, and the
horizontal lines through each data point represent the full width of the $uv$
bin. The dashed curves show the expected amplitudes as a function of $uv$
distance in the absence of noise for model circular gaussians that have
integrated intensities of 1.42~mJy and FWHM ranging from $\theta=0''$ (i.e. a
point source) to 10$''$. The shaded region indicates the $\pm1\sigma$
amplitude uncertainty of the integrated flux measured in the image domain.
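The dashed model curves follow from the analytic Fourier transform of a circular gaussian source. A sketch, using the 1.42~mJy integrated flux from the text (the $uv$-distance values are arbitrary):

```python
import math

ARCSEC = math.pi / (180.0 * 3600.0)  # radians per arcsecond

def gaussian_vis_amp(q_klambda, fwhm_arcsec, total_flux_mjy=1.42):
    """Visibility amplitude of a circular gaussian of given FWHM and total flux,
    evaluated at uv distance q in kilo-wavelengths:
    V(q) = S * exp(-(pi * theta * q)^2 / (4 ln 2))."""
    q = q_klambda * 1e3                  # baseline length in wavelengths
    theta = fwhm_arcsec * ARCSEC         # FWHM in radians
    return total_flux_mjy * math.exp(-(math.pi * theta * q) ** 2
                                     / (4.0 * math.log(2.0)))
```

A point source ($\theta = 0''$) keeps the full flux at all baselines, while larger FWHM values fall off faster with $uv$ distance, which is the behavior the binned amplitudes are compared against.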
As Figure~\ref{fig:amp} shows, the last two radial bins are below the
expected amplitudes for a point source model. The vector averaged amplitude
of the visibility data in these two bins combined is $0.33\pm0.17$~mJy, such
that the deviation from the point-source amplitude is $1.1\pm0.29$~mJy,
or a 3.8$\sigma$ deviation. We thus conclude that the emission toward
HD~107146 has indeed been resolved. To estimate the source size, an elliptical
gaussian was fitted to the unbinned visibility data points. The position
angle of the best-fit gaussian is $-34^\circ\pm23^\circ$, which agrees well
with the direction of extended emission at 450\micron\ noted by
\citet{Williams04}. The fitted FWHM gaussian size is
$(6.5''\pm1.4'') \times (4.2''\pm1.3'')$, which corresponds to a spatial size
of \about $185~{\rm AU}\times120~{\rm AU}$. Assuming that the disk is
intrinsically circularly symmetric, the ratio of the minor to major axis
implies an inclination angle of $50^\circ\pm18^\circ$ (where $0^\circ$ is
for a face-on disk). The derived FWHM size is smaller than the
$10.5''\times7.4''$ size inferred from the 450\micron\ emission, which is
unexpected since both the 450\micron\ and 3mm emission are likely
optically thin and in the Rayleigh-Jeans limit \citep{Williams04}. To have a
smaller $\lambda$3~mm size, one needs a grain population in the disk interior
that radiates more effectively at longer wavelengths. In principle this could
signify the presence of larger grains which would be in radiative equilibrium
at colder temperatures. In fact, the observed $\lambda$3~mm flux is slightly
larger than the extrapolated submillimeter flux, but the moderate signal to
noise ratios of both the $\lambda$450\micron\ and $\lambda$3~mm images
are such that higher signal to noise detections are needed before drawing firm
conclusions on any size differences as a function of wavelength.
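The conversion from the fitted angular FWHM to the quoted linear size and inclination is a one-line calculation (the text rounds the linear sizes to \about 185~AU $\times$ 120~AU):

```python
import math

D_PC = 29.0   # Hipparcos distance to HD 107146, pc

def au_size(fwhm_arcsec, d_pc=D_PC):
    """Angular size to projected linear size: 1'' at 1 pc subtends 1 AU."""
    return fwhm_arcsec * d_pc

def inclination_deg(minor_arcsec, major_arcsec):
    """Inclination of an intrinsically circular disk from its projected
    axis ratio (0 deg = face-on)."""
    return math.degrees(math.acos(minor_arcsec / major_arcsec))

major_au = au_size(6.5)            # ~188 AU
minor_au = au_size(4.2)            # ~122 AU
incl = inclination_deg(4.2, 6.5)   # ~50 deg
```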
Finally, we examine the consistency of the debris disk size with the
characteristic dust temperature of $51\pm4$~K inferred from the SED analysis.
For $\beta$=0.69 (see \S\ref{dustmass}) and assuming a single grain size that
efficiently absorbs the stellar radiation, the radial dependence of the dust
temperature around a solar luminosity star will be
$T_{\rm dust} \approx 412{\rm K}~r_{\rm AU}^{-0.43} L_*^{0.21} a_{\mu\rm m}^{-0.15}$,
where $r_{\rm AU}$ is the orbital radius measured in AU, $L_*$ is the
luminosity in solar luminosities, and $a_{\mu\rm m}$ is the grain size in
microns \citep{Backman93}. The resolved disk implies a range of dust
temperatures must be present. At the half-width radius of 90~AU, the dust
temperature will be 60~K for 1\micron-sized grains, with warmer dust interior
to this radius. These are warmer temperatures than implied by the
SED modeling, but can be reconciled by invoking larger grains a few microns
in size that radiate more efficiently and result in cooler temperatures. In a
future FEPS publication, a self-consistent model of the HD~107146 debris disk
will be presented that incorporates the submillimeter photometry, disk size,
and {\it Spitzer} data.
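The temperature scaling quoted above can be evaluated directly; it also shows how modestly larger grains reconcile the resolved size with the 51~K SED temperature (the 4~\micron\ grain size below is illustrative, chosen only to land near 51~K):

```python
def dust_temp_k(r_au, l_star=1.0, a_um=1.0):
    """Equilibrium dust temperature for grains that absorb stellar radiation
    efficiently, using the Backman & Paresce (1993) scaling quoted in the text:
    T_dust ~ 412 K * r^-0.43 * L^0.21 * a^-0.15."""
    return 412.0 * r_au ** -0.43 * l_star ** 0.21 * a_um ** -0.15

t_1um = dust_temp_k(90.0)              # ~60 K at the 90 AU half-width radius
t_4um = dust_temp_k(90.0, a_um=4.0)    # ~48 K: a few-micron grains run cooler
```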
\section{Analysis}
\label{analysis}
\subsection{Dust Masses}
\label{dustmass}
The observed millimeter continuum fluxes can be used to estimate, or place
limits on, the circumstellar dust masses. Assuming that the emission is
isothermal and optically thin, the dust mass was computed using the following
formula
\begin{equation}
M_{\rm dust} = {S_\nu\:D^2 \over \kappa_\nu\:B_\nu(T_{\rm dust})},
\label{eq:mass}
\end{equation}
where $\kappa_\nu = \kappa_o({\nu\over\nu_o})^\beta$ is the mass absorption
coefficient, $\beta$ parameterizes the frequency dependence of $\kappa_\nu$,
$S_\nu$ is the observed flux, $D$ is the distance to the source, $T_{\rm dust}$
is the dust temperature, and
$B_\nu(T_{\rm dust})$ is the Planck function \citep{Hildebrand83}. We assumed
$\beta$=1.0 and $\kappa_o$ = 2\ts cm$^2$\ts g$^{-1}$\ at $\lambda$1.3~mm \citep{Beckwith90}, and
adopted a dust temperature of 40~K, which is a compromise between the expected
cold (\about 20-30~K) dust in optically
thick accretion disks \citep{Beckwith90,Andre94}, and the warmer dust
(45-100~K) inferred for optically thin debris disks around solar type stars
\citep{Zuckerman04}. These mass estimates reflect the mass contained in small
dust grains, and do not account for mass contained in large (millimeter-sized)
particles that do not contribute significantly to the emission at millimeter
wavelengths. For these adopted parameters, the inferred dust masses for the
detected sources are $3.2\times10^{-7}$\ts M$_\odot$, $5.0\times10^{-5}$\ts M$_\odot$,
$5.1\times10^{-5}$\ts M$_\odot$, and $5.0\times10^{-5}$\ts M$_\odot$\ for HD~107146, PDS~66,
RX~J1842.9$-$3532, and RX~J1852.3$-$3700 respectively.
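The dust-mass formula above can be evaluated with round-number cgs inputs. This sketch returns a few~$\times10^{-7}$\ts M$_\odot$\ for the HD~107146 flux, in order-of-magnitude agreement with the tabulated $3.2\times10^{-7}$\ts M$_\odot$; the residual difference presumably reflects the exact flux and frequency conventions used for the table, which are assumptions here.

```python
import math

# cgs constants
H = 6.62607015e-27     # Planck constant, erg s
K_B = 1.380649e-16     # Boltzmann constant, erg / K
C = 2.99792458e10      # speed of light, cm / s
PC = 3.0857e18         # parsec, cm
M_SUN = 1.989e33       # solar mass, g
JY = 1.0e-23           # jansky, erg s^-1 cm^-2 Hz^-1

def planck_bnu(nu_hz, t_k):
    """Planck function B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return 2.0 * H * nu_hz ** 3 / C ** 2 / math.expm1(H * nu_hz / (K_B * t_k))

def dust_mass_msun(s_jy, d_pc, nu_ghz, t_dust=40.0,
                   beta=1.0, kappa0=2.0, nu0_ghz=230.6):
    """M_dust = S_nu D^2 / (kappa_nu B_nu(T_dust)), with
    kappa_nu = kappa0 (nu/nu0)^beta and kappa0 = 2 cm^2 g^-1 at 1.3 mm."""
    kappa = kappa0 * (nu_ghz / nu0_ghz) ** beta            # cm^2 / g
    return (s_jy * JY * (d_pc * PC) ** 2
            / (kappa * planck_bnu(nu_ghz * 1e9, t_dust)) / M_SUN)

# HD 107146: 1.42 mJy at ~101 GHz, 29 pc.
m_hd107146 = dust_mass_msun(1.42e-3, 29.0, 101.0)
```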
A number of sources of systematic error are present in computing the dust
masses, a few of which we mention here. Since the dust mass is proportional
to $T_{\rm dust}^{-1}$ in the Rayleigh-Jeans limit, the dust masses will be
uncertain by a factor of \about 2 due to the dust temperature alone. Further,
the value for $\beta$ can vary among sources over the range of \about 0 to
1.5, with values $\simless$\ts 1 most common \citep{Beckwith91,Weintraub89}.
Adopting a value of $\beta=0.5$ for example would decrease the masses derived
from the OVRO $\lambda$3mm observations by a factor of 1.5. A larger source of
uncertainty in computing the dust masses is the value for the dust opacity,
$\kappa_\nu$ (see review by Beckwith, Henning, \& Nakagawa~2000). For example,
\citet{Pollack94} computed $\kappa_\nu$ for a variety of grain sizes and
compositions, and found values that ranged from 0.14 to 0.87\ts cm$^2$\ts g$^{-1}$\ at
$\lambda$1.3~mm for 0.1\micron-3~mm radius grains (see also Stognienko,
Henning, \& Ossenkopf~1995). The commonly used value for debris disks of
1.7\ts cm$^2$\ts g$^{-1}$\ at $\lambda800$\micron, adopted by \citet{Zuckerman93} to place a
lower limit on the amount of dust, corresponds to 1.0\ts cm$^2$\ts g$^{-1}$\ at $\lambda1.3$~mm
assuming $\beta=1$. Given that $\kappa_o$ is poorly constrained, the relative
values of disk masses should be significantly more reliable than the absolute
values. We recognize, however, that systematic changes in $\kappa_\nu$ with
age are likely present, and therefore any trends of dust mass with stellar age
can be interpreted as variations in dust mass and/or grain properties.
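The quoted factor of 1.5 for $\beta=0.5$ follows directly from $M \propto \kappa_\nu^{-1} \propto (\nu/\nu_0)^{-\beta}$ with $\kappa_\nu$ normalized at 1.3~mm:

```python
# M is proportional to 1 / kappa_nu, i.e. to (nu/nu0)^-beta,
# with kappa normalized at 230.6 GHz (1.3 mm).
NU_OBS, NU_0 = 101.0, 230.6               # 3 mm observing frequency, GHz
ratio = (NU_OBS / NU_0) ** (1.0 - 0.5)    # M(beta=0.5) / M(beta=1.0)
decrease = 1.0 / ratio                    # ~1.5, the factor quoted in the text
```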
The average amount of dust as a function of stellar age in the observed sample
can be examined by computing the mean dust mass and standard deviation of the
mean in each age bin. If a star was observed at multiple wavelengths, the
observation that provided the best sensitivity to dust mass for the adopted
value of $\beta$ and $T_{\rm dust}$ was used in computing the mean values.
In principle, the mean dust mass for an ensemble of stars may yield a
significant detection if many of the stars possess small quantities of dust
which produces a small overall bias in the measured fluxes. In computing the
averages, the stars were weighted uniformly since otherwise the derived values
would be heavily weighted toward the few nearest stars that had the best
sensitivity to dust mass, and therefore would not reflect the mass limits
placed on the typical star in the sample. In each age bin, the mean dust mass
was less than three times the standard deviation of the mean.
Figure~\ref{fig:mass} shows the 3$\sigma$ upper limits to the mean dust masses
for each age bin, which range from $5.9\times10^{-7}$\ts M$_\odot$\ for the
100-300~Myr stars to $2.5\times10^{-5}$~\ts M$_\odot$\ for the 3-10~Myr age bin. The
difference in the mass sensitivity limits as a function of age primarily
reflects the fact that the younger stars are typically found at larger
distances than the older stars in this sample.
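The per-bin averaging described above is a uniformly weighted mean with its standard deviation of the mean (SDOM). A sketch with made-up per-star masses (negative values arise naturally when noise-dominated fluxes are converted to masses); quoting $3\times$SDOM as the upper limit when the mean is below $3\sigma$ is an assumed convention, since the paper does not spell out its exact recipe:

```python
import math

def mean_dust_mass(masses_msun):
    """Uniformly weighted mean and standard deviation of the mean (SDOM)."""
    n = len(masses_msun)
    mean = sum(masses_msun) / n
    sdom = math.sqrt(sum((m - mean) ** 2 for m in masses_msun) / (n * (n - 1)))
    return mean, sdom

# Illustrative age bin of per-star dust masses in M_sun (values are made up).
bin_masses = [2e-6, -1e-6, 5e-7, 3e-6, -4e-7, 1e-6]
mean, sdom = mean_dust_mass(bin_masses)
detected = mean >= 3.0 * sdom
upper_limit = 3.0 * sdom   # quoted when the mean is not significant
```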
\subsection{Comparison to Published Surveys}
To compare the results for the FEPS targets with other observations, we
compiled published millimeter and submillimeter continuum surveys of stars
with stellar masses between 0.5 and 2.0\ts M$_\odot$, which is the same mass range
encompassed by the FEPS sample. These studies include the young stars in
Taurus (40 stars total within the FEPS stellar mass range, with 16 millimeter
continuum detections; Beckwith et~al.~1990, Osterloh \& Beckwith~1995, Duvert
et~al.~2000), IC~348 (14 stars, with no detections; Carpenter~2002),
the $\beta$ Pic moving group and Local Association (8 stars total, with 2
detections; Liu et~al.~2004), Lindroos binary systems (27 stars total, with 1
detection; Jewitt~1994; Gahm et~al.~1994; Wyatt, Dent, \& Greaves~2003), the
Pleiades (12 stars with no detections; Zuckerman \& Becklin~1993), the Ursa
Major moving group (12 stars with no detections; Zuckerman \& Becklin~1993),
and stars with radial-velocity planets (8 stars total, with no detections;
Greaves et~al.~2004). The \citet{Wyatt03} observations of Lindroos binaries
partially overlap the sample observed by \citet{Jewitt94} and \citet{Gahm94},
and are significantly more sensitive. In presenting the results, we analyze
the \citet{Wyatt03} data separately and only consider stars observed by
\citet{Jewitt94} and \citet{Gahm94} that were not observed at higher
sensitivity. For the Taurus sample, stellar masses were estimated by first
placing the stars in a Hertzsprung-Russell diagram based on compiled photometry
and spectral types, and then using the \citet{DM98} pre-main-sequence tracks
to infer stellar masses (L. Hillenbrand, private communication). Similarly,
for IC~348, we adopted the membership list, effective temperatures, and
bolometric luminosities from \citet{Luhman03}, and also estimated stellar
masses using the \citet{DM98} pre-main-sequence evolutionary tracks. In the
remaining samples, we used the observed spectral type and estimated ages of
the stellar group to estimate the stellar masses from evolutionary tracks.
Several continuum surveys for circumstellar dust around young stars
were not included in this analysis since there were few stars within the
desired stellar mass range or the continuum surveys were biased toward stars
with known infrared excesses. Continuum emission has been detected
toward TW~Hydra \citep{Weintraub89} and reported for other members of the
association \citep{Zuckerman01}, but an unbiased survey of the association
members has yet to be published. The extensive millimeter continuum surveys
of $\rho$~Oph \citep{Andre94,Motte98,N98,Johnstone00}, NGC~2024
\citep{Eisner03}, and Serpens \citep{Testi98} were omitted since these are
heavily obscured regions and few stars have the necessary photometric and
spectroscopic data to place the stars in the HR diagram and infer stellar
masses. The continuum surveys of Lupus \citep{N97}, Chamaeleon~I
\citep{Henning93} and MBM~12 \citep{Itoh03,Hogerheijde03} were not
included since few stars in these samples have stellar masses within the
0.5-2.0\ts M$_\odot$\ range. Continuum surveys of {\it IRAS}-detected debris disks
\citep{Sylvester96,Holland98,Coulson98,Sylvester01,Holmes03,Sheret04} were
excluded since they represent a biased sample of stars known to
contain debris dust.
The dust masses were re-computed from the observed fluxes in the surveys
described above using the assumptions for the dust properties adopted here.
For those surveys that report the observed fluxes (as opposed to upper
limits), the fluxes were averaged in a similar manner as was done for the
FEPS targets to provide unbiased estimates of the mean dust mass. In each
case, the mean was detected at a significance of less than 3$\sigma$. Some
surveys provided only upper limits to the observed fluxes for stars with
non-detections, and therefore the computed average disk mass is strictly an
upper limit. Figure~\ref{fig:mass} shows the upper limits derived from
published observations along with the upper limits from the FEPS targets.
Since the mean dust mass is not detected at the 3$\sigma$ limit in any of
the samples, these average values cannot be compared to
test for evolution in the mean mass. Nonetheless, there is some suggestion of
evolution in that the Taurus sample contains a number of sources with dust
masses in excess of 10$^{-4}$\ts M$_\odot$, but such massive disks are rare around
stars with ages older than \about 10~Myr. These trends have been noted
previously \citep{Zuckerman93,Jewitt94,Gahm94,Wyatt03,Liu04}.
\subsection{Temporal Evolution}
The evolution of dust masses was examined quantitatively using
ASURV Rev 1.2 \citep{Lavalley92}, which implements the survival analysis
methods presented in \citet{Feigelson85}. Using the Gehan, logrank, and
Peto-Prentice tests, we computed the probability that the distribution of
dust masses in any two stellar samples shown in Figure~\ref{fig:mass} could
have been drawn from the same parent population. Specifically, we used these
tests to determine which stellar samples have different dust mass distributions
than stars in Taurus, which may indicate the evolutionary time scale for
cold dust in accretion disks. The probability that the Taurus sample and the
3-10~Myr FEPS stars share the same parent population is between 0.098 and
0.132 for the different tests. The corresponding probability, again in
comparison to the Taurus sample, is between
a) 0.047 and 0.067 for the 10-30~Myr FEPS sample,
b) 0.003 and 0.005 for a combined 10-30~Myr sample that includes the FEPS
sources, the $\beta$~Pic moving group, and Lindroos binaries that are within
this age range, and
c) $4.9\times10^{-6}$ and $1.3\times10^{-5}$ for the 30-100~Myr FEPS stars.
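The two-sample comparisons can be illustrated with a stdlib-only Gehan-style pairwise score and a permutation p-value. This is a sketch, not the ASURV implementation, which uses asymptotic variance estimates and distinct Gehan, logrank, and Peto-Prentice weightings; masses here are (value, is_upper_limit) pairs, with upper limits treated as left-censored.

```python
import random

def gehan_h(a, b):
    """Pairwise ordering score for (mass, is_upper_limit) pairs: +1 if a is
    surely larger than b, -1 if surely smaller, 0 when censoring is ambiguous."""
    xa, la = a
    xb, lb = b
    if la and lb:                       # two upper limits can never be ordered
        return 0
    if la:                              # a <= xa, b detected at xb
        return -1 if xa < xb else 0
    if lb:                              # b <= xb, a detected at xa
        return 1 if xa > xb else 0
    return (xa > xb) - (xa < xb)

def gehan_statistic(s1, s2):
    return sum(gehan_h(a, b) for a in s1 for b in s2)

def same_population_pvalue(s1, s2, n_perm=2000, seed=7):
    """Permutation p-value that s1 and s2 share one parent population."""
    rng = random.Random(seed)
    observed = abs(gehan_statistic(s1, s2))
    pooled = list(s1) + list(s2)
    n1, hits = len(s1), 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(gehan_statistic(pooled[:n1], pooled[n1:])) >= observed:
            hits += 1
    return hits / n_perm
```

With one sample of detections well above another sample's upper limits, the p-value is small; comparing a sample against itself returns 1.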
Before drawing conclusions from the comparisons between the different stellar
samples, we investigate the robustness of the results to various observational
uncertainties. The survival analysis routines do not incorporate
uncertainties on a formal basis, and, therefore, we invoke an ad~hoc procedure
to address how uncertainties in the stellar ages and dust masses affect
the derived probabilities. We first consider the uncertainties in the
stellar ages. The stellar ages for PDS~66, RX~J1842.9$-$3532 and
RX~J1852.3$-$3700 are critical since these 3 stars have dust masses
comparable to that found in Taurus, and therefore the inclusion of these
stars in a particular age bin will increase the probability that the mass
distribution is similar to that found in Taurus. PDS~66 was placed in the
10-30~Myr age bin, but the derived age is between 7 and 17~Myr depending
on which pre-main-sequence evolutionary tracks are used \citep{Mamajek02},
and therefore the star may reasonably be placed in the 3-10~Myr age bin.
Given that RX~J1842.9$-$3532 and RX~J1852.3$-$3700 have estimated ages of
\about 10~Myr \citep{Neuhauser00}, it may be appropriate to assign these stars
to the 10-30~Myr bin instead of 3-10~Myr. If we assign all three stars to the
3-10~Myr bin, then the probability that the 3-10~Myr FEPS sample has the same
dust mass distribution as Taurus increases from \about 0.11 to 0.19. If
instead we assign all three stars to the 10-30~Myr bin, then the probability
that the combined 10-30~Myr sample has the same mass distribution as Taurus
increases from \about 0.003 to 0.017.
We assessed how uncertainties in the stellar distances and measured
fluxes (and consequently the derived dust masses) influence the survival
analysis results by using a Monte Carlo simulation. We randomly assigned
distances to each star using a gaussian random number generator that has a
mean value centered on the nominal distance with a dispersion corresponding
to the distance uncertainty. Similarly we varied the observed fluxes
using both the observed flux measurement and calibration uncertainties. The
dust masses were determined for these adjusted parameters, and the
probabilities that the various samples
are drawn from the same population were recomputed. We adopted a 10\% global
uncertainty for the distance to Taurus \citep{Kenyon94}. For stars with
Hipparcos parallaxes, we computed the distance uncertainty based on the
parallax uncertainty, and for the remaining stars, we arbitrarily adopted a
distance uncertainty of 20\%. In comparing the Taurus and 3-10~Myr FEPS
sample, we found that the dispersion in the probabilities based on the
measurement uncertainties is \about 0.04, and for Taurus and the combined
10-30~Myr stellar samples, the dispersion is \about 0.002. Thus the largest
source of uncertainty in comparing the stellar samples is assigning the stars
to the appropriate age bin.
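The Monte Carlo perturbation of distances and fluxes amounts to redrawing each star's mass through $M \propto S\,D^2$. A minimal sketch; the HD~107146 numbers are used purely for illustration (HD~107146 itself has a Hipparcos parallax, so pairing it with the 20\% default distance error is an assumption of the example):

```python
import random
import statistics

def mc_mass_draws(d_pc, d_err, s_mjy, s_err, cal_frac, n=1000, seed=42):
    """Draw relative dust masses (M proportional to S * D^2), perturbing the
    distance and flux with gaussian errors; calibration error is added to the
    flux measurement error in quadrature."""
    rng = random.Random(seed)
    flux_sigma = (s_err ** 2 + (cal_frac * s_mjy) ** 2) ** 0.5
    m0 = s_mjy * d_pc ** 2
    return [rng.gauss(s_mjy, flux_sigma) * rng.gauss(d_pc, d_err) ** 2 / m0
            for _ in range(n)]

draws = mc_mass_draws(29.0, 29.0 * 0.2, 1.42, 0.23, 0.06)
spread = statistics.stdev(draws)   # fractional mass scatter per realization
```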
In summary, the above results show that the dust mass distribution for disks
around 30-100~Myr stars is different from that in Taurus at high significance
(\about 4.5$\sigma$). The mass distribution for the combined sample of
10-30~Myr stars is different from that of Taurus at the \about $2.5-3.4\sigma$ level,
where the range reflects whether or not RX~J1842.9$-$3532 and
RX~J1852.3$-$3700 are included in this age bin. Thus there is weak, but
significant, evidence that the dust disks around 10-30~Myr solar-type stars
are different than that found in Taurus. The differences in the dust mass
distribution between Taurus and the 3-10~Myr FEPS sample are significant at the
1.0-2.3$\sigma$ level, where the range reflects whether or not the 3 massive
disks detected in this survey are included in this age bin. The current
observational data are insufficient to determine if the circumstellar disk
masses around solar-type stars, as traced by millimeter and submillimeter
continuum observations, have evolved on time scales as short as 3-10~Myr.
\section{Discussion}
\label{discussion}
The differences in the dust mass distribution between the Taurus population
and stars older than 10~Myr can be attributed to the relatively few massive
disks found at older ages. The lack of old massive disks is unlikely to be an
artifact of errors in the assumed dust temperatures. Any systematic variations
in the dust temperature as a function of stellar age are such that the
optically thick young disks are expected to be colder on average than the dust
around older disks (cf. Beckwith et~al.~1990 and Zuckerman \& Song~2004). Since
the dust masses vary as $T_{dust}^{-1}$ in the Rayleigh-Jeans limit, the
analysis presented here probably underestimates the differences in the dust
mass distributions between the two samples. The apparent variations in dust
mass distributions can likely be attributed then to a decrease in the amount of
mass contained in small dust grains, and/or changes in the dust opacity due
to variations in the grain composition and the growth of grains into larger
particles. These scenarios have interesting implications for the evolution of
dust in disks, but the observations presented here cannot establish the
dominant effect.
We caution however that the evidence for evolution in the dust properties is
relative to stars in the Taurus molecular cloud. Taurus is considered a
representative region of ``isolated'', low mass star formation, while many of
the 3-30~Myr stars surveyed here are members of the Scorpius-Centaurus
OB association. Environmental differences, as well as stellar age, then, may
contribute to the differences in the mass distributions between Taurus and the
10-30~Myr sample. \citet{Carp02} and \citet{Eisner03} in fact found that the
dust mass distributions for stars in Taurus and the young clusters NGC~2024
and IC~348 are different at the \about 2-3$\sigma$ level in that the cluster
regions contain few massive disks. These studies included a broader
range of stellar masses than considered here, and the sample sizes of
0.5-2\ts M$_\odot$\ stars in these clusters are insufficient to identify any
differences in the dust mass distributions over this narrower mass range.
The evolution of disk properties for ages $\simless$\ts 30~Myr inferred from
millimeter continuum observations is qualitatively consistent with that found
from near- and mid-infrared observations. Nearly all low-mass stars with ages
of \about 1~Myr have near-infrared excesses characteristic of circumstellar
disks \citep{Strom89,Haisch01}. Only 1\% of the solar-type stars in Lower
Centaurus-Crux and Upper Centaurus-Lupus possess such inner disks, suggesting
that the inner disk dissipates on time scales less than 15~Myr
\citep{Mamajek02}. Observations at 10\micron\ (Mamajek et~al.~2004 and
references therein) and 60\micron\ \citep{Meyer00} also show that a lower
fraction of 10-30~Myr stars exhibit an excess at these wavelengths relative to
1~Myr stars. Thus observations at wavelengths from 2\micron\ to 3~mm all
point to a decrease in the number of stars with disks between \about 1~Myr and
\about 10-30~Myr. However, the nature of the decline in the disk frequency at
ages of 3-10~Myr remains ambiguous. \citet{Haisch01} suggest that the
inner disk frequency diminishes to near zero by an age of 6~Myr, while
\citet{Lyo03} suggest that \about 60\% of the stars in the 5-9~Myr
$\eta$~Cha cluster contain an inner disk. Available millimeter observations
are insufficient to distinguish differences in the dust mass distributions
around 3-10~Myr stars relative to Taurus.
The temporal evolution in the mass of warm dust around solar-type stars has
been investigated by \citet{Spangler01} and \citet{Habing01} using {\it ISO}
60\micron, 90\micron, 100\micron, and 170\micron\ observations. Both surveys
find a decrease in the dust luminosity with stellar age. \citet{Habing01},
however, suggested a dramatic decrease in the number of debris disks
for stars older than \about 400~Myr, while \citet{Spangler01} suggested that
the mean fractional dust luminosity, $f_d$, declines steadily with stellar age
as $f_d \propto t^{-1.76}$. Figure~\ref{fig:mass} shows the temporal evolution
of dust mass derived from the {\it ISO} observations using the approximate
relation between $f_d$ and dust mass from \citet{Silverstone00}. The dust
mass upper limits from the millimeter-wavelength observations are generally
above the \citet{Spangler01} relation and therefore do not resolve the
discrepancy between the \citet{Habing01} and \citet{Spangler01} results.
The lack of millimeter-wavelength continuum detections does suggest that
there are not massive reservoirs of cold dust ($\simless$\ts 20~K; see
Fig.~\ref{fig:tdust}) that have been missed by {\it IRAS} and {\it ISO}
observations.
\section{Summary}
\label{summary}
We present submillimeter (CSO 350\micron) and millimeter (SEST 1.2~mm, OVRO
3~mm) photometry
for 125 stars that will be observed by the FEPS {\it Spitzer} Legacy Program.
These stars have stellar masses between 0.5 and 2\ts M$_\odot$\ and stellar ages
between \about 3~Myr and 1~Gyr, and are used to investigate the evolution of
cold circumstellar dust around solar-type stars. Four sources in this survey
were detected in the millimeter continuum: RX~J1842.9$-$3532,
RX~J1852.3$-$3700, and PDS~66 with SEST, and HD~107146 with OVRO.
RX~J1842.9$-$3532 and RX~J1852.3$-$3700 are located in projection near the
RCrA molecular cloud with estimated ages of \about 10~Myr \citep{Neuhauser00}.
PDS~66 is a kinematic member of the \about 20~Myr old Lower Centaurus-Crux
subgroup of the Scorpius-Centaurus OB association and is probably a rare
example of an old classical T Tauri star surrounded by an accretion disk
\citep{Mamajek02}. HD~107146 is a young (80-200~Myr) debris disk system that
was recently detected in the submillimeter continuum \citep{Williams04}.
The SEST detections of RX~J1842.9$-$3532, RX~J1852.3$-$3700, and PDS~66 are
unresolved at a FWHM resolution of 24$''$, and the observed fluxes imply dust
masses of \about 5$\times10^{-5}$\ts M$_\odot$\ around each star, assuming a dust
temperature of 40~K, a mass absorption coefficient of 2~cm$^2$g$^{-1}$ at
$\lambda$1.3~mm, and $\beta=1$ \citep{Beckwith90}. Since these three
stars have observational characteristics similar to that of classical T Tauri
stars, the continuum emission likely originates from a circumstellar
accretion disk. The OVRO observations of HD~107146 resolve the continuum
emission with a gaussian-fit FWHM size of
$(6.5''\pm1.4'') \times (4.2''\pm1.3'')$, or 185~AU$\times$120~AU, that is
centered on the stellar position within the astrometric uncertainties.
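For reference, the quoted dust masses follow from the standard optically thin relation $M_d = F_\nu d^2 / [\kappa_\nu B_\nu(T_d)]$. The sketch below evaluates it with the stated opacity ($\kappa = 2$~cm$^2$g$^{-1}$ at $\lambda$1.3~mm, $\beta=1$) and $T_d = 40$~K; the 50~mJy flux and 130~pc distance are hypothetical inputs chosen only to land near the quoted $\sim 5\times10^{-5}$ M$_\odot$\ scale.

```python
import math

# Physical constants (SI)
h = 6.62607015e-34   # Planck constant [J s]
k = 1.380649e-23     # Boltzmann constant [J/K]
c = 2.99792458e8     # speed of light [m/s]
pc = 3.0857e16       # parsec [m]
M_sun = 1.989e30     # solar mass [kg]

def planck(nu, T):
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * h * nu**3 / c**2 / math.expm1(h * nu / (k * T))

def dust_mass(flux_jy, d_pc, lam_mm=1.3, T=40.0, kappa0=0.2):
    """Optically thin dust mass M = F_nu d^2 / (kappa_nu B_nu(T)) in M_sun.
    kappa0 = 0.2 m^2/kg (= 2 cm^2/g) at 1.3 mm, with a beta = 1
    frequency scaling of the opacity."""
    nu = c / (lam_mm * 1e-3)
    kappa = kappa0 * (1.3 / lam_mm) ** 1.0   # beta = 1
    F = flux_jy * 1e-26                      # Jy -> W m^-2 Hz^-1
    d = d_pc * pc
    return F * d**2 / (kappa * planck(nu, T)) / M_sun

# Assumed 1.2 mm flux of 50 mJy at 130 pc (hypothetical values)
m = dust_mass(0.050, 130.0, lam_mm=1.2)
```

With these assumed inputs the relation returns a few $\times10^{-5}$ M$_\odot$, consistent with the mass scale of the SEST detections.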
To investigate the evolution of cold circumstellar dust around 0.5-2.0\ts M$_\odot$\
stars, our results are compared with published continuum observations of
stars in Taurus \citep{Beckwith90,Osterloh95,Duvert00}, the $\beta$~Pic moving
group and Local Association \citep{Liu04}, and Lindroos binaries
\citep{Jewitt94,Gahm94,Wyatt03}. The stars were grouped into age bins that
span a factor of 3, and are compared with the dust masses inferred around
solar-type stars in Taurus to investigate the evolutionary time scales for
dust mass in circumstellar disks. Using ``survival
analysis'' techniques \citep{Feigelson85}, the mass distribution of disks
in the Taurus molecular cloud is distinguished from that in the 10-30~Myr
stellar sample at a significance level of 2.5-3.4$\sigma$, where the
range in significance reflects the uncertainty in determining the
age of stars that have continuum detections. The difference in the dust
mass distributions between Taurus and the 3-10~Myr stars is significant
at the 1.0-2.3$\sigma$ confidence level. These results suggest that
significant evolution has occurred in the circumstellar dust properties
around solar-type stars by ages of 10-30~Myr, either by a decrease
in the mass of small dust grains with time and/or changes in the dust opacity.
These time scales are consistent with that inferred for the inner disk as
traced by 2-60\micron\ infrared emission. Additional observations are needed
to establish if evolution occurs on even shorter time scales.
\acknowledgements
We gratefully acknowledge the FEPS team, especially Erik Mamajek, John
Stauffer, and Lynne Hillenbrand, for their efforts in defining the FEPS sample.
We would also like to thank Michael Meyer for detailed comments on the paper,
and Darren Dowell for his assistance with the CSO observations and data
reduction. JMC acknowledges support from the Long Term Space Astrophysics Grant
NAG5-8217, the {\it Spitzer} Legacy Science Program through an award issued by
JPL/CIT under NASA contract 1407, and the Owens Valley Radio Observatory,
which is supported by the National Science Foundation through grant
AST-9981546. S.~Wolf is supported through the Emmy Noether grant WO 857/2-1 of
the German Research Foundation, the NASA grant NAG5-11645, and the
{\it Spitzer} Legacy Science Program NASA contract 1407 to JPL/CIT.
The Caltech Submillimeter Observatory is supported by NSF grant AST 02-29008.
This publication makes use of data products from the Two Micron All Sky
Survey, which is a joint project of the University of Massachusetts and the
Infrared Processing and Analysis Center, funded by the National Aeronautics
and Space Administration and the National Science Foundation. This research
has made use of the SIMBAD database, operated at CDS, Strasbourg, France.
\clearpage
\section{Introduction}
Observations show that a significant fraction, perhaps a majority, of
AGN of different types are obscured by a screen of cold dusty
matter, thought to be a molecular torus-like structure with a scale
between 1 and 100 parsec
\citep{Antonucci85,Antonucci93,Maiolino95,Maiolino99,Sazonov04,Jaffe04}.
Moreover, many lines of observational evidence suggest that the
unobscured AGN have similar tori as well, but we are viewing these
AGN at an angle allowing a direct view of their central engines. The
orientation-dependent obscuration is thus said to unify the different
AGN classes \citep{Antonucci85,Antonucci93}.
Despite its observational importance, the theory of molecular tori is
still in an exploratory state, which is indicative of the difficulty
of the problem. \cite{Krolik88} \citep[see also][]{Krolik86} showed
that the large geometrical thickness of the torus and yet its
apparently low temperature are consistent only if the torus is made of
many molecular clouds moving with very high random velocities (Mach
number $\simgt 30$). Given that stellar feedback processes appear
too inefficient to explain these large random speeds, the authors were
``forced to seek a much more speculative solution: viscous heating of
the cloud system due to partially elastic cloud-cloud collisions.''
However, until now there is no definitive answer (via numerical
simulations) on whether clouds colliding at large Mach numbers would
behave even partially elastically, or would rather share and damp the
random velocity component and collapse to a disk configuration.
\cite{Wada02} took issue with the \cite{Krolik88} assertion of the
weakness of the stellar processes. Considering ``large'', e.g. 100
parsec tori, and via numerical simulations of supernova explosions
resulting from star formation in a self-gravitating, massive disk,
they have shown that large random cold gas velocities result from the
interaction of the gas with the supernova shells. However, in these
simulations about $99$\% of the gaseous disk is consumed in
the star formation episode rather than being accreted by the black
hole. Unless most of the stars formed in the disk are later somehow
accreted onto the SMBH, this mechanism can only work for tori on
scales much larger than the SMBH gravitational sphere of influence,
$R_h$, i.e. where the stellar mass is much greater than $M_{\rm
BH}$. It is also not clear whether this mechanism would work for
smaller accretion/star formation rates because then supernova
explosions become too rare. Finally, time variability of obscuring
column depths
argues for much smaller radial sizes of the ``torus'', e.g. in the
range $0.1-10$ parsec \citep{Risaliti02}.
Here we would like to emphasize that accretion disks in AGN are very
unlikely to be planar. There are several mechanisms that are capable
of producing strong warps. We believe that such warps have to be an
integral part of the AGN obscuration puzzle. In particular, we shall
describe and calculate a disk warping mechanism driven by an
axi-symmetric gravitational potential.
The sequence of events that takes place in an AGN feeding cycle in our
model is as follows. First of all, a merger with another galaxy or a
satellite, or another source of cold gas, fills the inner part of the
galaxy with plenty of gas. This gas will in general have a significant
angular momentum oriented practically {\em randomly}. The gas will
start settling in a disk that is too massive and too cold to be stable
against self-gravity
\citep[e.g.,][]{Paczynski78,Shlosman89,Collin99,Goodman03}. Stars are formed
inside the accretion disk, producing a flat stellar system. This
long-lived axi-symmetric (or perhaps also warped) structure will
torque any orbit which is not exactly co-planar with it, resulting in
a precession around the symmetry axis. Now, as time goes on, the
orientation of the angular momentum vector of the incoming gas
changes, and the newly built disk is exposed to the torque from the
stellar disk remnant. Different rings of the disk precess at different
rates, thus the disk becomes warped.
We emphasize that the existence of these stellar disk remnants is
hardly in question, given the severe self-gravity problems of the
standard accretion disks at large radii. The recent observational
evidence supports this conclusion too. The best known example of such
flat stellar systems is in our Galactic Center \citep{Genzel03} where
two young stellar rings are orbiting the SMBH only a tenth of a parsec
away. The orbital planes of these rings are oriented at very large
angles to the Galactic plane. Such flat stellar systems are also being
found elsewhere \citep[e.g.,][]{KJ04}.
In this paper, we concentrate on the linear gravitational warping
effect to investigate its main features. Taking the simplest case of a
warp produced by a massive ring, we calculate the gravitational
warping torque. Starting from a thin test accretion disk, we calculate
its time dependent shape. Under quite realistic assumptions, the disk
becomes strongly warped in some $10^2-10^3$ orbital times, i.e. in
$10^5-10^7$ years. Since the warping is gravitational in nature,
gaseous and clumpy molecular disks alike are subject to such
deformations. We speculate that in the non-linear stage of the effect,
a clumpy warped disk will form a torus-like structure.
\section{Torques in a linear regime}
The basic physics of the effect is very simple and has been explained
by e.g., \cite{Binney92} in his consideration of the galactic warps
(most galactic disks are somewhat warped). Consider a nearly circular
orbit of radius $R$ for a test particle in an axi-symmetric potential
(e.g. a central point mass plus a flat disk). Let the axis $z$ be
perpendicular to the plane of the disk. The particle makes radial and
vertical oscillations with slightly different frequencies, and the
difference is the precession frequency, $\omega_p$. Due to the
symmetry, the $z$-projection of the angular momentum of the particle
is exactly conserved \citep[see, e.g., \S 3.2 in][]{BT87}. Thus the
angle between the angular momentum of the particle and the $z$-axis,
$\beta$, is a constant. The line of the nodes (the line over which the
two planes intersect), however, precesses.
Now consider a ``test-particle'' disk initially in the same plane as
the circular orbit. The disk is a collection of rings, i.e. circular
orbits. Since precession rates $\omega_p$ are different for different
$R$, rings turn around the $z$-axis on unequal angles. The initially
flat disk will be warped with time.
\subsection{Gravitational torques between two rings}\label{sec:rings}
Let us calculate the gravitational torque between two rings with radii
$R_1$ and $R$, inclined at angle $\beta$ with respect to each
other. We work in two rigid coordinate systems, $x,y,z$ and
$x',y',z'$, whose centers are at the SMBH. For convenience, we
place the first ring in the $z=0$ plane, whereas the second ring is in
the $z'=0$ plane. Angle $\beta$ is obviously the angle between the axes $z$
and $z'$. Further, the axes $x$ and $x'$ are chosen to coincide with
the line of the nodes.
The total torque exerted by the second ring on the first one is
\begin{equation}
\vec{\tau}_{21} = G \sigma_1 R_1 \sigma R \int_0^{2\pi} d\phi_1 \;
\int_0^{2\pi} d\phi \frac{\; [\vec{r}_1 \times
\vec{r}]}{|\vec{r} - \vec{r}_1|^{3}}
\end{equation}
where $\sigma_1=M_1/2\pi R_1$ and $\sigma = M/2\pi R$, with $M_1$ and
$M$ being the masses of the two rings, respectively. The integration
goes over angles $\phi_1$ and $\phi$ that are azimuthal angles in the
respective frames of the rings. From this expression it is
immediately clear that co-planar rings do not exert any torque on each
other, $[\vec{r}_1\times \vec{r}]=0$. In addition, if $\beta =
\pi/2$, the integral vanishes as well because for each $\vec{r}_1$ the
opposite sides of the second ring (e.g. $\phi$ and $\phi+ \pi$) make
equal but opposing contributions.
It is also possible to show that due to symmetry the torque's
$z$-projection vanishes. Thus we only have the $\tau_{21,x} =
-\tau_{12,x}$ component, meaning that the angular momentum vector of
the ring will rotate without changing its magnitude. Also, if $M_1 \gg
M$, then one can neglect warping of the first ring.
The absolute distance between two ring elements with the respective
angles $\phi_1$ and $\phi$ is
\begin{equation}
\left[\vec{r}_1 - \vec{r}\right]^2 = R_1^2 + R^2 -2 R_1 R
\cos\lambda\;,
\end{equation}
where
\begin{equation}
\cos\lambda \; = \; \cos\beta \sin\phi \sin\phi_1 +
\cos\phi_1\cos\phi\;.
\end{equation}
Without going into tedious detail, we write the torque expression
separating out the leading radial dependence and the integral over
angles, which we label $I(\delta, \beta)$:
\begin{equation}
\tau_{21,x} = \frac{G M_1 M R_1 R}{ (R_1^2 + R^2)^{3/2}} \; I(\delta,
\beta)\;,
\end{equation}
with
\begin{equation}
I(\delta, \beta) \; \equiv \sin\beta \int_0^{2\pi} \frac{d\phi_1}{2\pi}
\int_0^{2\pi} \frac{d\phi}{2\pi}\; \frac{\sin\phi_1\sin\phi}{\left[1
- \delta\cos\lambda\right]^{3/2}}
\label{i}
\end{equation}
and
\begin{equation}
\delta \equiv \frac{2 R_1 R}{R_1^2 + R^2}\;.
\end{equation}
The total angular momentum of the second ring is $L= M \Omega_K R^2$.
Recall that $L_z = L \cos\beta =$~const, whereas the component of
$\vec{L}$ in the plane of the first ring ($z=0$) precesses. We chose
it to be initially in the $y$-direction, so that $L_y(t=0) = L
\sin\beta$ (cf. equation \ref{defl} below). The precession frequency
for the second ring is then
\begin{equation}
\omega_p = \frac{\tau_{12,x}}{L_y} = \frac{\tau_{12,x}}{M \Omega_K R^2
\sin\beta}\;.
\label{omegaex}
\end{equation}
In general, the integral in equation \ref{i} is calculated
numerically, but for $\delta\ll 1$ one can expand
$[1-\delta\cos\lambda]^{-3/2} \approx 1 + (3/2) \delta\cos\lambda$ and
obtain $I(\delta, \beta) \approx (3/16) \delta \sin 2\beta$. When this
approximation holds, that is when $R \gg R_1$ or $R\ll R_1$, the
precession frequency for the second ring is
\begin{equation}
\frac{\omega_p}{\Omega_K} \approx -
\frac{3 M_1}{4 M_{\rm BH}} \;\cos \beta\; \frac{R^3
R_1^2} {\left[R^2 + R_1^2\right]^{5/2}}\;.
\label{omegapa}
\end{equation}
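As a numerical sanity check (ours, not part of the original derivation), the double integral in equation \ref{i} can be evaluated directly on a periodic grid. For small $\delta$ the ratio $I/(\delta \sin 2\beta)$ approaches the leading-order coefficient $3/16$, i.e. $I \approx (3/8)\,\delta\sin\beta\cos\beta$, which is the value consistent with the $3/4$ prefactor in the precession frequency above.

```python
import numpy as np

def I_integral(delta, beta, n=400):
    """Numerically evaluate
    I(delta, beta) = sin(beta) * < sin(phi1) sin(phi)
                                   / (1 - delta cos(lambda))^(3/2) >,
    where cos(lambda) = cos(beta) sin(phi) sin(phi1) + cos(phi1) cos(phi)
    and <.> averages over both angles.  A periodic Riemann sum is
    spectrally accurate for this smooth periodic integrand."""
    phi = np.arange(n) * 2.0 * np.pi / n
    p1, p = np.meshgrid(phi, phi, indexing="ij")
    cos_lam = np.cos(beta) * np.sin(p) * np.sin(p1) + np.cos(p1) * np.cos(p)
    integrand = np.sin(p1) * np.sin(p) * (1.0 - delta * cos_lam) ** -1.5
    return np.sin(beta) * integrand.mean()

# Small-delta limit: I -> (3/16) delta sin(2 beta)
delta, beta = 0.01, 0.7
ratio = I_integral(delta, beta) / (delta * np.sin(2.0 * beta))
```

The grid size and test values of $\delta$, $\beta$ are arbitrary choices; any small $\delta$ gives the same limiting ratio.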
A few estimates can now be made. First of all, $\omega_p$ reaches a
maximum at radius $R_m$ at which $R_m/R_1 = \sqrt{3/7}$. The maximum
growth rate of the warp is thus
\begin{equation}
\hbox{max}\;|\omega_p| \simeq 0.085 \Omega_K \; \cos
\beta\frac{M_1}{M_{\rm BH}}\;.
\label{maxomk}
\end{equation}
$M_1/M_{\rm BH}$ may be expected to be of the order of a percent or so since
this is when the accretion disks become self-gravitating
\citep[e.g.,][]{NC04}, and the resulting stellar mass would probably
be of that order too. In this case one notices that to produce a
sufficiently large warp one has to wait for at least $\sim 10 \times
M_{\rm BH}/M_1 \sim 1000 \Omega_K^{-1}$ or $\sim 150$ orbital times at
the radius of the maximum warp $R_m$. While this is a fairly long
time, it is still much shorter than the corresponding disk viscous
time (cf., e.g., Fig. 2 in \cite{NC04}).
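The estimates in the preceding paragraph can be reproduced with a short numerical scan over $R$. The sketch below uses units in which $G M_{\rm BH} = 1$ and $R_1 = 1$, and takes $\cos\beta = 1$; both conventions are our own choices for illustration.

```python
import numpy as np

M1_over_MBH = 0.01   # ring-to-black-hole mass ratio (illustrative)
beta = 0.0           # cos(beta) = 1, the maximal case
R1 = 1.0             # ring radius; units with G*M_BH = 1

R = np.linspace(0.05, 5.0, 20000)
Omega_K = R ** -1.5                       # Keplerian frequency
# |omega_p| from the small-delta precession formula:
# omega_p / Omega_K = (3 M1 / 4 M_BH) cos(beta) R^3 R1^2 / (R^2+R1^2)^(5/2)
omega_p = (0.75 * M1_over_MBH * np.cos(beta) * Omega_K
           * R**3 * R1**2 / (R**2 + R1**2) ** 2.5)

i = np.argmax(omega_p)
R_m = R[i]                                # -> sqrt(3/7) * R1 ~ 0.65
coef = omega_p[i] / (Omega_K[i] * M1_over_MBH)   # -> ~0.086 ("0.085")
# Time for omega_p * t ~ 1 at the fastest radius, in units of
# 1/Omega_K(R_m): ~1/(0.086 * M1/M_BH) ~ 10^3, i.e. of order the
# quoted ~150 orbital times after dividing by 2 pi.
t_warp = 1.0 / (coef * M1_over_MBH)
```

The scan confirms both the location of the fastest precession, $R_m/R_1 = \sqrt{3/7}$, and the $\simeq 0.085$ coefficient of equation \ref{maxomk}.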
The asymptotic dependence of the precession frequency on radius $R$ is
\begin{eqnarray}
\label{ass1}
\omega_p \rightarrow \; - \frac{3 M_1}{4 M_{\rm BH}} \cos
\beta\;\frac{R^3}{R_1^3}
\; \Omega_K
\quad \hbox{for}\; R \ll R_1 \;, \quad \hbox{and}
\\
\omega_p
\rightarrow \; - \frac{3 M_1}{4 M_{\rm BH}} \cos
\beta \; \frac{R_1^2}{R^2} \;
\Omega_K \quad
\hbox{for}\; R \gg R_1 \;.
\label{ass}
\end{eqnarray}
A thin massive stellar ring would thus only warp a range of radii,
leaving the portions of the disk well interior and well exterior
to it unaffected.
\subsection{Test disk warping}\label{sec:testdisk}
We now want to calculate the shape of a light non self-gravitating
disk warped by a massive ring $R_1$. The disk is treated as
a collection of rings with different radii $R$ and negligible
mass. We assume that the mass of the whole disk, $M$, is negligible
in comparison with the mass of the ring, $M \ll M_1$, and that
therefore the massive ring's orientation (angular momentum) does not
change with time.
In application to real disks, it should be remembered that any
attempt to warp a disk will be resisted by disk viscous forces
\citep[e.g.,][]{Bardeen75} that transfer the angular momentum through
the disk. However, we are interested in the outermost regions of AGN
accretion disks where they ``connect'' to the galaxy. On these scales,
from a tenth of a parsec to 100 parsec (the range depends on $M_{\rm
BH}$), the usual $\alpha$-disk viscosity \citep{SS73} becomes
ineffective. The viscous time scale becomes too long and the disks are
believed to be prone to self-gravity
\citep[e.g.,][]{Paczynski78,Shlosman89,Collin99,Goodman03}. Therefore
we are justified in neglecting the restoring viscous forces. A much
more important effect will be the restoring force from the
self-gravity of the disk that is being warped, but we defer the study
of this and other non-linear effects to a future paper.
It is convenient to work with angle $\beta$, already introduced, and
one additional angle, $\gamma$. Recall that angle $\beta$ is the local
angle of the ring's tilt to the $z$-axis and it remains constant in
the test particle regime. Angle $\gamma$ is needed to introduce the
projections of the unit tilt vector normal to the ring, $\vec{l}(R, t)$, on
the $x$ and $y$ axes \citep{Pringle96}:
\begin{equation}
\vec{l} = \; \left(\cos\gamma \sin\beta, \sin\gamma \sin\beta,
\cos\beta\right)\;.
\label{defl}
\end{equation}
Angle $\gamma$ therefore describes the precession of each ring around
the $z$-axis. With the chosen coordinate system, at time $t=0$,
$\gamma = \pi/2$ (and we get, in accord with conventions of \S
\ref{sec:rings}, $L_y(0) = L \sin\beta$, $L_x(0)=0$). As each of the
disk rings precesses,
\begin{equation}
\gamma = \pi/2 + \omega_p t\;.
\end{equation}
It is useful to write the expression for the unit tilt vector in the
$(x', y', z')$ coordinate system rigidly bound to the initial disk
plane:
\begin{eqnarray}
l'_x = \sin\beta \cos\gamma \;,\label{deflpx}\\
l'_y= -\cos\beta\sin\beta(1-\sin\gamma) \;, \\
l'_z = \cos^2\beta +\sin^2\beta\sin\gamma\;.
\label{deflpz}
\end{eqnarray}
Note that if $\beta=0$, $l'_x = l'_y =0$, i.e. the rings are never
tilted with respect to the $z=z'$ axis, as it should be. Also, when
$\gamma = \pi/2$, $\vec{l'}$ indeed coincides with the $z'$-axis,
i.e. the disk is flat.
Using equations \ref{deflpx}-\ref{deflpz}, we can now calculate the
shape of the warped disk in that system given the function
$\gamma(R,t)$. To accomplish this, one first introduces the azimuthal
angle $\phi$ on the surface of the ring. The coordinates of the points
on the ring, $\vec{r}$, are then given by equations 2.2 and 2.3 in
\cite{Pringle96}. The corresponding coordinates in the primed system
of reference are easily obtained by $x' = (\vec{r} \vec{e_x}')$, etc.,
where $\vec{e_x}'$ and so on are the unit coordinate vectors of the
primed system. The result is:
\begin{eqnarray}
\frac{x'}{R} = \cos\beta \cos\gamma\sin\phi + \sin\gamma\cos\phi\;,\\
\frac{y'}{R} = -\cos\beta\cos\gamma\cos\phi +
\sin\phi(\cos^2\beta\sin\gamma + \sin^2\beta)\;,\\ \frac{z'}{R} =
-\sin\beta\left[\cos\gamma\cos\phi +
(1-\sin\gamma)\cos\beta\sin\phi\right]\;.
\end{eqnarray}
\subsection{An example}\label{sec:example}
To illustrate the results, we calculate the precession rates
$\omega_p(R)$ for the following case, $R_1 = 2$, $M_1 = 0.01 M_{\rm
BH}$, $\beta = \pi/4$. The resulting shape of the disk in the original
un-warped disk plane is plotted in Figure \ref{fig:fig1}. The time is
in units of $\Omega_K^{-1}$ at $R=1$, i.e. $t=1$ corresponds to time
$t=1400 \; r_{pc}^{3/2} M_8^{-1/2}$ years, where $r_{pc}$ is the
distance in units of parsec and $M_8 = M_{\rm BH}/10^8 \msun$. Note
that the warp is strongest around radius $R\sim R_1=2$, as
expected. The inner disk is hardly tilted, which is not surprising
given that $\omega_p(R) \rightarrow 0$ for small $R$ (equation
\ref{ass1}). The same is true for the outer radii, where the small tilt of
the original plane can be seen on the edges of the disk (we do not
calculate the tilt beyond $R=5.2$ and hence the original disk plane
is still seen on the edges of the Figure).
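Putting the pieces together, snapshots like those of Figure \ref{fig:fig1} can be reproduced by evaluating $\gamma(R,t) = \pi/2 + \omega_p(R)\,t$ with the small-$\delta$ precession frequency and mapping each ring into the primed frame; the grid sizes and radial range in the sketch below are arbitrary choices.

```python
import numpy as np

M1, R1, beta = 0.01, 2.0, np.pi / 4   # ring mass (in M_BH), radius, tilt
# (same parameters as the example above; units with G*M_BH = 1)

def omega_p(R):
    """Small-delta precession frequency: omega_p / Omega_K =
    -(3 M1 / 4) cos(beta) R^3 R1^2 / (R^2 + R1^2)^(5/2),
    with Omega_K = R^(-3/2)."""
    return -0.75 * M1 * np.cos(beta) * R**1.5 * R1**2 / (R**2 + R1**2) ** 2.5

def warped_disk(t, radii=np.linspace(0.2, 5.2, 60), nphi=120):
    """Return x', y', z' of the disk surface at time t, measured in
    units of 1/Omega_K(R=1), i.e. ~1400 r_pc^(3/2) M_8^(-1/2) yr."""
    phi = np.linspace(0.0, 2.0 * np.pi, nphi, endpoint=False)
    R, P = np.meshgrid(radii, phi, indexing="ij")
    g = np.pi / 2 + omega_p(R) * t            # gamma(R, t)
    cb, sb = np.cos(beta), np.sin(beta)
    xp = R * (cb * np.cos(g) * np.sin(P) + np.sin(g) * np.cos(P))
    yp = R * (-cb * np.cos(g) * np.cos(P)
              + np.sin(P) * (cb**2 * np.sin(g) + sb**2))
    zp = -R * sb * (np.cos(g) * np.cos(P)
                    + (1.0 - np.sin(g)) * cb * np.sin(P))
    return xp, yp, zp

x0, y0, z0 = warped_disk(0.0)      # flat: z' = 0 everywhere
x1, y1, z1 = warped_disk(400.0)    # warped, most strongly near R ~ R1
```

At $t=0$ the surface is flat ($z'=0$ for all rings, since $\gamma=\pi/2$), while at later times the rings near $R_1$ bulge out of the original plane, as in the Figure.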
\begin{figure*}
\centerline{\psfig{file=warp.ps,width=1\textwidth,angle=0}}
\caption{Snapshots of the shape of a massless accretion disk warped by
a stellar ring of radius $R=2$ inclined at angle $\beta=\pi/4$ with
respect to the disk. The snapshots are for four different times as
indicated on the top of each panel. The time unit is $1/\Omega_K(R=1)
= 1400 \; r_{pc}^{3/2} M_8^{-1/2}$ years. At times larger than those
used in the Figure, the disk becomes so strongly warped that, viewed from
its initially non-warped plane, the surface $z'(x',y')$ becomes
a multiple-valued function for some $(x',y')$. In reality non-linear
effects will limit the growth of the warp.}
\label{fig:fig1}
\end{figure*}
\begin{figure}
\centerline{\psfig{file=obscure.epsi,width=.5\textwidth,angle=0}}
\caption{The accretion disk surface as seen from the SMBH
location for four different times: $t=100, 400, 800, 3200$ for the
black, green, red and yellow dots, respectively. Note that at the
largest time most of the available solid angle is obscured by the
strongly warped disk.}
\label{fig:fig2}
\end{figure}
\section{Discussion}
\subsection{Obscuration of the central engine in AGN}
We believe that the gravitational disk warping due to non-spherical
mass distribution within the SMBH sphere of influence is a common
occurrence in real AGN. It is hard to see why AGN disks should always
form in one plane and even why they should be planar when they are
born \citep{Phinney89}. The time scales for development of strong
warps are short or comparable to characteristic AGN lifetimes (which
are thought to be in the range of $10^7-10^8$ years). We thus expect
that the outer edges of the disk will obscure a significant fraction
of the sky as seen from the central source and hence be directly
relevant to the unification schemes of AGN \citep{Antonucci93}.
Figure \ref{fig:fig2} shows the warped disk surface from Figure
\ref{fig:fig1} at different times as seen from the origin of
coordinates. The vertical axis shows $\cos\theta \equiv z'/\sqrt{
(x')^2 + (y')^2 + (z')^2}$ and the horizontal one shows $\phi$ defined
previously. The shape of the disk at different times is shown with
different colors. At time $t=0$ the disk is flat and its projection is
simply $\cos\theta=0$ for all $\phi$. At later times the rings of the
disk near $R = R_1 = 2$ precess and bulge out of the initial disk
plane. With time the fraction of the sky obscured from the central
engine becomes greater than a half. In reality non-linear effects and
interaction of the disk with AGN winds and radiation, etc., will
become important in shaping the disk for significant warps.
\subsection{The non-linear evolution}
In general, the non-linear stage of the evolution of the system of
disks or rings of stars and molecular gas is far from a thin disk as
long as collisions are of minor importance. We have already explored
the non-linear evolution of collisionless systems with N-body codes
and the results will be reported elsewhere. When two massive disks
warp each other, mixing occurs when $\gamma-\pi/2$ becomes large, and
the resulting configuration resembles that of a torus.
Now, inelastic dissipative collisions between gas clumps will
eventually become important. Relaxation of a collisionless N-body
system leads to non-circular orbits, and hence different disk rings
will start to overlap. Collisions should then tend to destroy the
random motions and should establish a common disk ``plane''. However,
such a disk will normally be warped itself. Therefore inelastic
dissipative collisions do not necessarily turn off the obscuration in
our model. Further, if molecular gas clumps are constantly supplied
from the outside and come with fluctuating angular momentum, the disk
may never arrive in a flat thin configuration \citep[see also
][]{Phinney89}.
\cite{Nenkova02} and \cite{Risaliti02} convincingly argued that the
AGN obscuring medium cannot be uniform in density. This does not
contradict our model at all since AGN accretion disks on large
(e.g. 0.1 pc and beyond) scales are self-gravitating unless the
accretion rates are tiny \citep[e.g., ][]{Shlosman89}.
The source of warping potential does not have to be a
thin stellar ring or a disk: it may be any non-spherical distribution
of stars in the SMBH vicinity that retains a non-zero quadrupole
moment; it can also be a second (smaller) super-massive black hole
during a merger of two galaxies.
Both the collision-dominated \citep{Krolik88} and the stellar-feedback
inflated tori \citep{Wada02} share a common starting point: the torus
is the result of some internal disk physics. If our interpretation
applies, AGN tori owe their existence to the way in which the cold
gas arrives in the central part of the galaxy. Compared with the
model of \cite{Krolik88}, large random speeds for the cold gas are not
required in our model. Although the gas may be quite far above
the original plane of the disk, the disk is locally coherent and
thin. High speed elastic collisions between molecular clouds are thus
not needed to explain obscuration of AGN in our model.
There are also two other mechanisms potentially able to produce warped
disks at parsec and beyond scales around AGN. As mentioned in the
X-ray binaries context, accretion disks develop twists and warps due
to instabilities driven by X-ray heated wind off the disk surface
\citep{SM94}. The same can be achieved by the radiation pressure force
from the central source \citep{Pringle96}. However, it seems that the
majority of disks in X-ray binaries are either not warped or warped
not strongly enough \citep{Ogilvie01} to provide the large obscuration
needed for the AGN unification schemes. In contrast, the warping
mechanism discussed here is not applicable to X-ray binary systems,
and hence it may be natural that AGN disks are more strongly warped than
X-ray binary disks on appropriately scaled distances from the center.
\subsection{The stellar disks in \sgra}
The two young stellar disks discovered recently in \sgra\ present a
challenge to the usual star formation modes because the gas densities
required to avoid tidal shearing are many orders of magnitude larger
than the highest densities observed anywhere in the galactic molecular
clouds. On the other hand, star formation inside a massive accretion
disk is a long expected outcome of the self-gravitational instability
of such disks
\citep[e.g.][]{Paczynski78,Shlosman89,Collin99,Goodman03}. The
young stars in the \sgra\ star cluster are therefore a first
example of star formation in this extreme environment. It is very
likely that the star formation efficiency and the initial mass
function (IMF) are quite different in the immediate AGN vicinity and
elsewhere in a galaxy. The ``astro-archeology'' of \sgra\ can be used
to study these issues.
The gravitational warping effect discussed in this paper constrains
the time-averaged mass of the outer ring, $M_{\rm outer}$, since the
moment of its creation, assumed to be $t = 2 \times 10^6$ years. The
inner stellar ring is rather well defined \citep{Genzel03}, and we
thus estimate $\omega_p t$ for the inner ring to be smaller than
$\pi/4$. Taking the radius of the inner stellar ring to be $R =
3''\simeq 4\times 10^4 R_S$ for the GC black hole, and for the outer,
$R_1 = 5''$, and using equation (\ref{omegapa}) with $\cos\beta=1/4$,
one obtains $M_{\rm outer}< 10^5 \msun$. Preliminary numerical N-body
simulations show this limit may be even smaller.
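To make the scalings explicit, the constraint can be sketched as
follows. We assume here, for illustration only, the standard
quadrupole-order precession rate (the exact coefficient is set by
equation \ref{omegapa}), with $M_{\rm BH}$ the black hole mass:
$$\omega_p \sim \frac{3}{4}\,\frac{M_{\rm outer}}{M_{\rm BH}}
\left(\frac{R}{R_1}\right)^{3}\Omega(R)\cos\beta\,,\qquad
\Omega(R)=\left(\frac{GM_{\rm BH}}{R^{3}}\right)^{1/2},$$
so that the requirement $\omega_p t < \pi/4$ rearranges to the mass bound
$$M_{\rm outer}\lesssim \frac{\pi}{3\,\Omega(R)\,t\cos\beta}
\left(\frac{R_1}{R}\right)^{3} M_{\rm BH}\,.$$
With the numbers quoted above ($\cos\beta=1/4$, $R_1/R=5/3$,
$t=2\times 10^6$ years) this assumed form yields a limit consistent
in order of magnitude with $M_{\rm outer}< 10^5 \msun$.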
\subsection{Other implications for AGN}
There are clearly other observational implications of gravitationally
warped disks. For example, narrow \fe lines, observed in many Seyfert
galaxies, can be explained with X-ray reflection off such warped cold
disks. In addition, warped disks will yield different coherent paths
for maser amplification than flat disks do, which may be a part of the
explanation for the complexity of the observed AGN maser emission.
\section{Conclusions}
We argued that clumpy self-gravitating accretion disks in AGN are
generically strongly warped. We believe such warping should be an
integral part of the explanation for the AGN unification schemes. Our
model provides arguably the easiest way to obscure the central engine
without the need to lift cold gas clouds off the disk plane high up
via elastic collisions, supernova explosions or winds.
\section{Acknowledgments}
The author thanks Jorge Cuadra and Walter Dehnen for their help with
numerical simulations that motivated this semi-analytical paper, and
for very useful discussions. In addition, the author benefited from
discussions with Andrew King and Friedrich Meyer.
\bibliographystyle{mn2e}
\section*{Introduction}
\noindent
For some homogeneous spaces the method of horospheres provides an
effective way to decompose representations into irreducible ones.
For Riemannian symmetric spaces $Y=G/K$ the
horospheres are
orbits of maximal unipotent subgroups of
$G$. They are parameterized by points of the horospherical
homogeneous space $\Xi_\mathbb{R}= G/MN$, where $N$ is a fixed maximal
unipotent subgroup and $M=Z_K(A)$ as usual.
The horospherical transform maps sufficiently regular
functions on $Y$ to their averages along the
horospheres. The crucial point is that the abelian group
$A$ acts on $\Xi_\mathbb{R}$ and that this action
commutes with the action of $G$. The decomposition
of the natural representation of $G$ on $L^2(\Xi_\mathbb{R})$ into irreducible ones reduces to
the decomposition relative to $A$. In this way we obtain
all unitary spherical representations on $Y$ (with constant
multiplicity), except the complementary series.
The computation of the Plancherel measure on $Y$ is
equivalent to the inversion of the horospherical transform.
The method of horospheres works for several other types of homogeneous
spaces, including complex semisimple Lie groups (considered as
symmetric spaces) but it has very serious restrictions: discrete
series representations lie in the kernel of the horospherical
transform, as well as all representations induced from
parabolic subgroups that are not minimal. In short, the
kernel is the orthocomplement of the most continuous
part of the spectrum. The simplest example where the
horospherical transform cannot be inverted is the group
$\mathrm{SL}(2,\mathbb{R})$. In \cite{g00,g02,g04} a modification
of the method of horospheres was suggested: the complex horospherical transform
(the horospherical Cauchy-Radon transform). For a homogeneous
space $X$ we consider the complexification $X_\mathbb{C}$ and instead
of real horospheres on $X$ we consider complex horospheres on
$X_\mathbb{C}$ without real points (they do not intersect $X$). The
integration along a real horosphere is equivalent to the
integration of a $\delta$-function on $X$ with support on
this horosphere. In the complex version we replace this
$\delta$-function by a Cauchy type kernel with singularities
on the complex horosphere without real points. In \cite{g00,g02} it
is shown that such a complex horospherical transform has no kernel for
$\mathrm{SL}(2,\mathbb{R})$ and that it reproduces the Plancherel formula; in \cite{g04}
the same is shown for all compact symmetric spaces.
\par The objective of this paper is to show that the complex horospherical
transform has no kernel on the holomorphic discrete series.
Holomorphic discrete series exist for affine symmetric spaces
$X=G/H$ of Hermitian type; $G$ is here a group of Hermitian type
\cite{hc55,oo91}. The corresponding part of $L^2(X)$ can be realized as
boundary values of the Hardy space $\mathcal{H}^2(D_+)$ in a Stein tube
$D_+\subset X_\mathbb{C}$ with edge $X$
\cite{hoo91}. Our aim is
to define a
complex horospherical transform which has no kernel on holomorphic
$H$-spherical representations.
\par The first step is a construction of the space
that is going to be the image of the complex
horospherical transform. For that, we consider those complex horospheres in the
Stein symmetric space $X_\mathbb{C}=G_\mathbb{C}/H_\mathbb{C}$ that
are parameterized by points of the complex horospherical space
$\Xi=G_\mathbb{C}/M_\mathbb{C} N_\mathbb{C}$. In $\Xi$ we then consider an orbit $\Xi_+$ of
$G\times \mathcal{T}_+$ where $\mathcal{T}_+$ is an abelian semigroup in the complex
torus $T_\mathbb{C}=AT$ with the compact torus
$T$ as the edge. The space $\mathcal{O}(\Xi_+)$ of holomorphic functions
on $\Xi_+$
is the Fr\'echet model of the holomorphic discrete series. More exactly,
if we decompose this representation with respect
to the compact torus $T$ we obtain $G$-modules which are lowest weight modules
(if they are irreducible); we obtain all such modules with
multiplicity one.
Using the abelian semigroup $\mathcal{T}_+$ we can define a
Hardy type space $\mathcal{H}^2(\Xi_+)$ with spectrum
``almost all'' of the holomorphic discrete series.
\par The next step is a geometrical background for the construction of
the horospherical transform. Firstly, we prove that the
horospheres $E(\xi)$ parameterized by points $\xi \in \Xi_+$ do
not intersect $X$. We construct a simple Cauchy type kernel which
has no singularities on $X$ and the edge of its singularities
coincides with $E(\xi)$. Using this kernel we define the
horospherical Cauchy transform from $L^1(X)$ to $\mathcal{O}(\Xi_+)$, which
can be extended to $L^2(X)$. The horospherical
transform decomposed under
$T$ yields the holomorphic spherical Fourier transform.
\par The last step is the inversion of the horospherical Cauchy
transform. We give the Radon type inversion formula using results
from \cite{k} for the holomorphic discrete series. Let
us remark that for $X=\mathrm{SL}(2,\mathbb{R})$ the inversion formula was
obtained in \cite{g00,g02} with tools from integral geometry on
quadrics. This method extends automatically to all symmetric
spaces of Hermitian type of
rank 1, i.e., the hyperboloids of signature $(2,n)$. Let us also point
out the complete parallel between the formulas of this paper and the
formulas in \cite{g04} for compact symmetric spaces. It confirms
the view that finite-dimensional spherical representations are
similar to representations of the holomorphic discrete series.
\section{Symmetric spaces of Hermitian type}\label{s-one}
\noindent
The objective of this section is to set up a standard
choice of terminology that will be used throughout the text.
\medskip Let us fix
some conventions upfront. For a real Lie algebra $\mathfrak{g}$
let us denote by $\mathfrak{g}_\mathbb{C}=\mathfrak{g}\otimes_\mathbb{R} \mathbb{C}$ its
complexification. Likewise, if not stated otherwise,
for a connected Lie group $G$ we write $G_\mathbb{C}$
for its universal complexification.
If $\varphi: G\to H$ is a homomorphism of connected Lie groups, then we
will also denote by $\varphi$
\begin{itemize}
\item the derived homomorphism $d\varphi({\bf 1}): {\rm Lie}(G)
\to {\rm Lie}(H)$,
\item the extension of $\varphi$ to a holomorphic homomorphism
$G_\mathbb{C}\to H_\mathbb{C}$.
\end{itemize}
\smallskip Let $G$ be a connected semisimple Lie group with Lie algebra
$\mathfrak{g}$. We assume that $G\subset G_\mathbb{C}$ and that
$G_\mathbb{C}$ is simply connected.
Let $\tau :G\to G$ be a non-trivial involution and write
$H$, resp. $H_\mathbb{C}$, for the $\tau$-fixed points in $G$, resp. $G_\mathbb{C}$.
The object of concern is the affine symmetric space
$X=G/H$. We observe that $X$ is contained in
its complexification $X_\mathbb{C} =G_\mathbb{C}/H_\mathbb{C}$ as a totally real
submanifold. Write $x_0=H_\mathbb{C}$ for the base point in $X_\mathbb{C}$.
\par Let $\mathfrak{h}$ be the Lie algebra of $H$ and note that
$\mathfrak{g}=\mathfrak{h}+\mathfrak{q}$ with $\tau|_{\mathfrak{q}}= -\mathrm{id}_{\mathfrak{q}}$.
The symmetric pair $(\mathfrak{g},\mathfrak{h})$ is called
irreducible if $\mathfrak{g}$ does not contain any
$\tau$-invariant ideals except the trivial ones,
$\{0\}$ and $\mathfrak{g}$. In that case, either $\mathfrak{g}$ is
simple or $\mathfrak{g}=\mathfrak{g}_1\times \mathfrak{g}_1$ with $\mathfrak{g}_1$ simple
and $\tau (x,x')=(x',x)$. We say that $X$ is
irreducible, if $(\mathfrak{g},\mathfrak{h})$ is irreducible. From now on
we will assume, that $X$ is irreducible.
\par Fix a Cartan involution
$\theta :G\to G$ commuting with $\tau$. Denote by
$K<G$ the subgroup of $\theta$-fixed points and
write $Y=G/K$ for the associated
Riemannian symmetric space.
Write $\mathfrak{k}$ for the Lie algebra of $K$.
Then $\mathfrak{g}=\mathfrak{k}+\mathfrak{s}$ with $\theta|_{\mathfrak{s}}=-\mathrm{id}_{\mathfrak{s}}$.
Notice that the universal complexification
$K_\mathbb{C}$ of $K$ naturally identifies with the $\theta$-fixed points
in $G_\mathbb{C}$.
\par We will assume that $G$ is a Lie group
of Hermitian type, i.e., $Y$ is a Riemannian symmetric space
of Hermitian type. The assumption can be phrased
algebraically: $\mathfrak{z}(\mathfrak{k})\neq \{0\}$ with $\mathfrak{z}(\mathfrak{k})$ the center of
$\mathfrak{k}$.
\par We assume that $\tau$ induces an anti-holomorphic involution
on $Y$ and then call $X$ an {\it affine symmetric space
of Hermitian type}.
\begin{remark} (a) Our assumptions on $G$ and $\tau$ can be
phrased algebraically, namely:
$$\mathfrak{z}(\mathfrak{k})\cap\mathfrak{q}\neq\{0\}\ .\leqno{\rm (A)}$$
Let us mention that
another way to formulate (A) is to say that $\mathfrak{q}$ admits
an $H$-invariant regular elliptic cone, i.e.
$X$ is compactly causal \cite{ho}.
\par \noindent (b) Symmetric spaces of Hermitian type resemble
compact symmetric spaces on an analytical level. Combined they form
the class of symmetric spaces which admit lowest weight modules
in their $L^2$-spectrum (holomorphic discrete series).
\end{remark}
\medskip Since $X$ is irreducible, it follows that
$\mathfrak{z}(\mathfrak{k})\cap \mathfrak{q}=i\mathbb{R} Z_0$ is one dimensional.
It is possible to normalize $Z_0$ in such a way that
the spectrum of $\mathrm{ad} (Z_0)$ is
$\{-1, 0,1\}$. The zero-eigenspace is $\mathfrak{k}_\mathbb{C}$.
We denote the $+1$-eigenspace in $\mathfrak{s}_\mathbb{C}$ by $\mathfrak{s}^+$, and the $-1$-eigenspace
by $\mathfrak{s}^-$.
\medskip Let $\mathfrak{t}$ be a maximal abelian subspace
in $\mathfrak{q}$ containing $iZ_0$. Then $\mathfrak{t}$ is contained in $\mathfrak{k}\cap \mathfrak{q}$.
Set $\mathfrak{a}=i\mathfrak{t}$ and note that $\mathfrak{a}_\mathbb{C}=\mathfrak{t}_\mathbb{C}$.
\par Let $\Delta$ be the set of roots of $\mathfrak{t}_\mathbb{C}$ in $\mathfrak{g}_\mathbb{C}$,
$$\Delta_n=\{\alpha\in\Delta\mid \mathfrak{g}_\mathbb{C}^\alpha \subseteq \mathfrak{s}_\mathbb{C} \}
=\left\{\alpha\in \Delta\mid \alpha (Z_0)\in \{-1, 1\}\right\}$$
and
$$\Delta_k=\{\alpha\in\Delta\mid \mathfrak{g}_\mathbb{C}^\alpha\subseteq \mathfrak{k}_\mathbb{C}\}
=\{\alpha\in\Delta\mid \alpha (Z_0)=0\}\, .$$
Then $\Delta=\Delta_k\dot{\cup}\Delta_n$. The elements of $\Delta_n$ are
called \textit{non-compact roots}, and the elements in
$\Delta_k$ are called \textit{compact roots}. We choose an ordering
in $i\mathfrak{t}^*$
such that $\alpha (Z_0)>0$ implies that $\alpha\in \Delta_n^+\subseteq \Delta^+$.
Let ${\mathcal W}$ be the Weyl group of $\Delta$ and ${\mathcal W}_k$ the
subgroup generated by the reflections coming from the compact roots.
As $s(Z_0)=Z_0$ for
all $s\in {\mathcal W}_k$, it follows that $\Delta^+_n$ is ${\mathcal W}_k$-invariant.
\subsection{Polyhedrons, cones and the minimal tubes}\label{ss=11}
Set $A=\exp(\mathfrak{a})$, $A_\mathbb{C}=\exp(\mathfrak{a}_\mathbb{C})$,
$T=\exp (\mathfrak{t})$ and $T_\mathbb{C}=\exp (\mathfrak{t}_\mathbb{C})$.
We note that
$$A_\mathbb{C}=T_\mathbb{C}= TA\simeq T\times A\, .$$
\par For $\alpha\in \Delta$ let $\check\alpha\in \mathfrak{a}$ be its coroot, i.e.
$\check\alpha\in [\mathfrak{g}_\mathbb{C}^\alpha, \mathfrak{g}_\mathbb{C}^{-\alpha}]\cap \mathfrak{a}$
and $\alpha(\check\alpha)=2$.
Then
\begin{equation}\label{eq=om}
\Omega=\sum_{\alpha\in \Delta_n^+} \mathbb{R}_{>0} \cdot \check\alpha
\end{equation}
defines a ${\mathcal W}_k$-invariant open convex cone in $\mathfrak{a}=i\mathfrak{t}$ which contains $Z_0$. Often one refers
to $\Omega$ as the {\it minimal cone} (it is denoted $c_{\rm min}$
in \cite{ho}). Let us
remark that one can characterize $\Omega$ as
the smallest $\mathcal{W}_k$-invariant open convex cone in $\mathfrak{a}$ which contains
a long non-compact coroot, i.e.
\begin{equation}\label{eq=br} \Omega= {\rm co}
\left({\mathcal W}_k (\mathbb{R}_{>0}\cdot\check\alpha)\right)\qquad (\alpha
\ \hbox{long in}\ \Delta_n^+)\, .
\end{equation}
Here ${\rm co}(\cdot)$ denotes the convex hull of
$(\cdot)$.
\par
We set $A_+=\exp(\Omega)$ and note that
$A_+\subset A$ is an open semigroup.
Moreover
$$\mathcal{T}_+=T\exp (\Omega)=TA_+\subset T_\mathbb{C}\, $$
defines a semigroup and a complex polyhedron with edge $T$.
We also use the notation $A_-=\exp(-\Omega)$ and
$\mathcal{T}_-= TA_-$.
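\par For orientation, here is a sketch of the lowest rank case (the
identifications below depend on the chosen normalizations): for
$\mathfrak{g}=\mathfrak{su}(1,1)$ and $\mathfrak{h}=\mathfrak{so}(1,1)$ the space $\mathfrak{t}$ is one
dimensional, $\Delta_n^+=\{\alpha\}$ consists of a single (long)
non-compact root, and ${\mathcal W}_k$ is trivial, so (\ref{eq=om})
and (\ref{eq=br}) both reduce to the half-line
$\Omega=\mathbb{R}_{>0}\cdot\check\alpha$, which contains
$Z_0={\textstyle{1\over 2}}\check\alpha$. In suitable coordinates
$T_\mathbb{C}\simeq \mathbb{C}^*$, $T\simeq \mathbb{S}^1$, and the semigroup
$\mathcal{T}_+=T\exp(\Omega)$ becomes $\{z\in \mathbb{C}^*\mid |z|>1\}$, with the
unit circle as edge.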
\par Define $G$-invariant subsets of
$X_\mathbb{C}$ by
$$D_{\pm}=G A_\pm\cdot x_0\subset X_\mathbb{C}\, .$$
According to \cite{n99}, $D_+$ and $D_-$ are Stein domains in $X_\mathbb{C}$
with $X=G\cdot x_0$ as Shilov boundary. Subsequently we will refer
to $D_+$ and $D_-$ as the {\it minimal tubes in $X_\mathbb{C}$ with edge $X$}.
\subsection{Minimal $\overline\theta\tau$-stable parabolics}
Denote by $g\mapsto \overline g$ the complex conjugation
in $G_\mathbb{C}$ with respect to the real form $G$.
Let
$$\mathfrak{n}^+_\mathbb{C}=\bigoplus_{\alpha\in\Delta^+_k}\mathfrak{k}_\mathbb{C}^\alpha\quad\mathrm{and}
\quad \mathfrak{n}^-_\mathbb{C}=\bigoplus_{\alpha\in\Delta^+_k}\mathfrak{k}_\mathbb{C}^{-\alpha}\, .$$
Set
$$\mathfrak{n}_\mathbb{C}=\mathfrak{n}_\mathbb{C}^+\oplus \bigoplus_{\alpha\in\Delta^+_n}\mathfrak{g}_\mathbb{C}^\alpha=\mathfrak{n}_\mathbb{C}^+\oplus \mathfrak{s}^+\, ,$$
$$\mathfrak{m}_\mathbb{C}=\{U\in \mathfrak{h}_\mathbb{C}\mid (\forall V\in\mathfrak{t}) \ [U,V]=0\}\, ,$$
and
$$\mathfrak{p}_\mathbb{C}= \mathfrak{m}_\mathbb{C}\oplus \mathfrak{t}_\mathbb{C} \oplus \mathfrak{n}_\mathbb{C} \, .$$
Notice, that $\mathfrak{m}_\mathbb{C}$ is contained in $\mathfrak{k}_\mathbb{C}$, as $Z_0\in \mathfrak{t}_\mathbb{C}$.
The Lie algebra $\mathfrak{p}_\mathbb{C}$ is a \textit{minimal} $\overline \theta\tau$\textit{-stable
parabolic subalgebra of} $\mathfrak{g}_\mathbb{C}$.
Define subgroups of $G_\mathbb{C}$ by $M_\mathbb{C}=Z_{H_\mathbb{C}}(\mathfrak{t}_\mathbb{C})\subset K_\mathbb{C}$,
and $N_\mathbb{C}=\exp (\mathfrak{n}_\mathbb{C})$.
Note that $T_\mathbb{C}=A_\mathbb{C}$. Then the prescription
$$P_\mathbb{C} =M_\mathbb{C} A_\mathbb{C} N_\mathbb{C}= M_\mathbb{C} T_\mathbb{C} N_\mathbb{C}$$
defines a
\textit{minimal} $\overline\theta\tau$\textit{-stable parabolic subgroup of} $G_\mathbb{C}$
whose Lie algebra is $\mathfrak{p}_\mathbb{C}$. Write $\Gamma=M_\mathbb{C} \cap A_\mathbb{C} =M\cap T$
and observe that $\Gamma$ is a finite $2$-group.
The isomorphism
$$(M_\mathbb{C} \times _\Gamma A_\mathbb{C})\times N_\mathbb{C} \to P_\mathbb{C}, \ \
([m,a],n)\mapsto man$$
yields the structural decomposition of $P_\mathbb{C}$.
\smallskip We denote by $\mathfrak{t}\subseteq \mathfrak{c}$ a
$\tau$-stable Cartan subalgebra of $\mathfrak{g}$
contained in $\mathfrak{k}$.
Then $\mathfrak{c} = \mathfrak{t}\oplus \mathfrak{c}_h$, where $\mathfrak{c}_h=\mathfrak{c} \cap \mathfrak{h}$.
Denote by $\Sigma$ the set of roots of
$\mathfrak{c}_\mathbb{C}$ in $\mathfrak{g}_\mathbb{C}$. Similarly, we write
$\Sigma_n$ for the set of non-compact roots and $\Sigma_k$ for the
set of compact roots. We choose a positive system $\Sigma^+$ such that
$\Sigma^+|_{\mathfrak{t}} \backslash \{0\}=\Delta^+$.
\par Define tori in $G$ by $C=\exp \mathfrak{c}$ and $C_h=\exp\mathfrak{c}_h$.
We note that $C=TC_h\simeq T\times_\Gamma C_h$.
\section{Complex Horospheres I: Definition and basic properties}\label
{section=hor-I}
The objective of this section is to discuss (generic) horospheres
on the complex symmetric space $X_\mathbb{C}=G_\mathbb{C}/H_\mathbb{C}$. We will
show that the space of horospheres is $G_\mathbb{C}$-isomorphic
to the homogeneous space $\Xi=G_\mathbb{C} / M_\mathbb{C} N_\mathbb{C}$.
Further we will introduce a $G$-invariant subdomain $\Xi_+\subset \Xi$
which will be a central object for the rest of this paper.
\par Set
$$\Xi= G_\mathbb{C}/ M_\mathbb{C} N_\mathbb{C}\ $$
and write $\xi_0=M_\mathbb{C} N_\mathbb{C}$ for the base point of $\Xi$.
Usually we express elements
$\xi\in \Xi$ as $\xi=g\cdot \xi_0$
for $g\in G_\mathbb{C}$.
\par Consider the double fibration
\begin{equation}\label{eq=df}
\xymatrix { & G_\mathbb{C}/ M_\mathbb{C} \ar[dl]_{\pi_1} \ar[dr]^{\pi_2} &\\
\Xi & & X_\mathbb{C}\,.}\end{equation}
By a {\it horosphere} in $X_\mathbb{C}$
we understand a subset of the
form
\begin{equation}\label{eq=E} E(\xi)=\pi_2(\pi_1^{-1}(\xi)) \qquad (\xi\in \Xi)\, .
\end{equation}
For $\xi=g\cdot \xi_0$ we record that
$$E(\xi)=gM_\mathbb{C} N_\mathbb{C}\cdot x_0=gN_\mathbb{C} \cdot x_0\subset X_\mathbb{C} $$
(use $M_\mathbb{C} \subset H_\mathbb{C}$).
\par Similarly, for $z\in X_\mathbb{C}$ we set
\begin{equation}\label{eq=S} S(z)=\pi_1(\pi_2^{-1}(z))\ .\end{equation}
If $z=g\cdot x_0$ for $g\in G_\mathbb{C}$, then notice
$S(z)=gH_\mathbb{C}\cdot \xi_0$.
Moreover, for $z\in X_\mathbb{C}$ and $\xi\in \Xi$ one has
the incidence relations
\begin{equation}\label{eq=indi}
z\in E(\xi)\iff \pi_1^{-1}(\xi)\cap \pi_2^{-1}(z)\neq \emptyset
\iff \xi\in S(z)\, .\end{equation}
\par The space of horospheres on $X_\mathbb{C}$ shall be denoted
by ${\rm Hor}(X_\mathbb{C})$, i.e.
$${\rm Hor}(X_\mathbb{C})=\{ E(\xi)\mid \xi\in \Xi\}\, .$$
Our first objective is to show that $\Xi$
parameterizes ${\rm Hor}(X_\mathbb{C})$:
\begin{proposition}\label{prop=ident} The map
$$E: \Xi \to {\rm Hor}(X_\mathbb{C}), \ \ \xi\mapsto E(\xi)$$
is a $G_\mathbb{C}$-equivariant bijection.
\end{proposition}
\begin{proof} Surjectivity and $G_\mathbb{C}$-equivariance are clear
by definition. It remains to establish
injectivity. For that write $G_\mathbb{C}^{E(\xi_0)}$
for the stabilizer of $E(\xi_0)$ in $G_\mathbb{C}$. By $G_\mathbb{C}$-equivariance it
is enough to show that $G_\mathbb{C}^{E(\xi_0)} \subseteq M_\mathbb{C} N_\mathbb{C} $.
Assume that
$g\cdot E(\xi_0)=E(\xi_0)$. Then $gN_\mathbb{C}\subseteq N_\mathbb{C} H_\mathbb{C}$. In particular,
$g=nh\in N_\mathbb{C} H_\mathbb{C}$. As $G_\mathbb{C}^{E(\xi_0)}$ is a group, and $n\in G_\mathbb{C}^{E(\xi_0)}$, it
follows that $h\in G_\mathbb{C}^{E(\xi_0)}$. By Lemma \ref{l-12one}
below it follows that
$h\in M_\mathbb{C}$. Hence $g=h(h^{-1}nh)\in M_\mathbb{C} N_\mathbb{C}$, as $M_\mathbb{C}$ normalizes $N_\mathbb{C}$.
\end{proof}
\begin{lemma}\label{l-12one} Assume that $h\in H_\mathbb{C}$ is such that
$h \cdot E(\xi_0)=E(\xi_0)$. Then $h\in M_\mathbb{C}$.
\end{lemma}
\begin{proof} Identify the tangent space $T_{x_0}(G_\mathbb{C} /H_\mathbb{C} )$ with
$\mathfrak{g}_\mathbb{C} /\mathfrak{h}_\mathbb{C}$.
Then, as $(hN_\mathbb{C} h^{-1})\cdot x_0=N_\mathbb{C} \cdot x_0$, it follows that
$$\mathop{\mathrm{Ad}} (h)(\mathfrak{n}_\mathbb{C}\oplus \mathfrak{h}_\mathbb{C})=\mathfrak{n}_\mathbb{C}\oplus \mathfrak{h}_\mathbb{C}\, .$$
Thus, if $U\in \mathfrak{n}_\mathbb{C}$, there exists $Z\in \mathfrak{n}_\mathbb{C}$ and $L\in \mathfrak{h}_\mathbb{C}$ such that
$\mathop{\mathrm{Ad}} (h)U = Z+L$. Applying $({\bf 1}-\tau)$ to this equality, we get
$\mathop{\mathrm{Ad}} (h)(U-\tau (U))=Z-\tau (Z)$. As $\mathfrak{q}_\mathbb{C}=({\bf 1}-\tau)(\mathfrak{n}_\mathbb{C})\oplus \mathfrak{t}_\mathbb{C}$, and
this sum is orthogonal with respect to the Killing form, it follows that $\mathop{\mathrm{Ad}} (h)\mathfrak{t}_\mathbb{C}
=\mathfrak{t}_\mathbb{C}$. In particular, $h\in N_{H_\mathbb{C}}(\mathfrak{t}_\mathbb{C})$.
\par We recall the Riemannian dual $X^r=G^r/K^r$ of $X=G/H$ which corresponds
to the Lie algebras $\mathfrak{g}^r=\mathfrak{k}^r + \mathfrak{s}^r$ with $\mathfrak{k}^r=(\mathfrak{h}\cap \mathfrak{k})
+ i (\mathfrak{h}\cap \mathfrak{s})$ and $\mathfrak{s}^r=i(\mathfrak{q}\cap \mathfrak{k}) + (\mathfrak{q}\cap\mathfrak{s})$.
Notice that $\mathfrak{a}$ is maximal abelian in $\mathfrak{s}^r$.
\par To continue with the proof, we observe that $N_{H_\mathbb{C}}(\mathfrak{t}_\mathbb{C})=N_{K^r}(\mathfrak{a})M_\mathbb{C}$.
Thus we may assume that $h\in N_{K^r}(\mathfrak{a})$. Write $\sigma_r$ for the complex conjugation
in $G_\mathbb{C}$ with respect to the real form $G^r$. Then
taking $\sigma_r$-fixed points in $hN_\mathbb{C}\subseteq N_\mathbb{C} H_\mathbb{C}$ yields
$h N^r\subseteq N^r K^r$ with $N^r=G^r\cap N_\mathbb{C}$. Thus the situation is
reduced to the Riemannian case, where the assertion follows from
\cite{hel}, p.~78.
\end{proof}
It is crucial to observe
that there is a $T_\mathbb{C}$-action on $\Xi$ which commutes with the
left $G_\mathbb{C}$-action:
\begin{proposition} \label{prop=comm}Let $\xi=g\cdot\xi_0\in \Xi$, $g\in G_\mathbb{C}$.
For $t\in T_\mathbb{C}$ the prescription
\begin{equation} \label{eq=h-action}\xi\cdot t=gt\cdot\xi_0
\end{equation}
defines an element of $\Xi$. In particular,
\begin{equation} \label{eq=h-action2}
T_\mathbb{C}\times \Xi\to \Xi, \ \ (t,\xi)\mapsto \xi\cdot t
\end{equation}
defines an action of $T_\mathbb{C}$ on $\Xi$, which commutes with the
natural action of $G$ on $\Xi$.
\end{proposition}
\begin{proof} As $T_\mathbb{C}$ normalizes $M_\mathbb{C} N_\mathbb{C}$, it follows
that (\ref{eq=h-action}) is well defined.
Finally, (\ref{eq=h-action}) implies that (\ref{eq=h-action2})
defines a left action of $T_\mathbb{C}$.
\end{proof}
It is obvious
that the map
\begin{equation}\label{eq=haction3}
(G_\mathbb{C} \times T_\mathbb{C})\times \Xi\to \Xi, \ \ \left((g,t), \xi\right)
\mapsto g\cdot\xi\cdot t
\end{equation}
is a holomorphic action of the complex group $G_\mathbb{C}\times T_\mathbb{C}$
on the homogeneous space $\Xi$.
\par The remainder of this section will be devoted to
the definition and basic discussion of an important
$G\times T$-invariant subset
$\Xi_+$ of $\Xi$.
\par We recall from Subsection \ref{ss=11} the
complex polyhedron $\mathcal{T}_+ =TA_+$ and define
$$\Xi_+= G\mathcal{T}_+\cdot \xi_0=GA_+\cdot\xi_0\, .$$
We record that $\Xi_+$ is a $(G\times T)$-invariant
subset of $\Xi$.
\par The set $G P_\mathbb{C} $ is open in $G_\mathbb{C}$ and $G\cap P_\mathbb{C} = MT$. Hence
$G/MT$ can be viewed as an open, complex submanifold of the flag
manifold $F=G_\mathbb{C} /P_\mathbb{C}$. We write $F_+=G P_\mathbb{C}/P_\mathbb{C}$ for the image of $G/MT$ in $F$
and call $F_+$ the {\it flag domain}. Although obvious we emphasize
that $F_+$ is $G$-homogeneous.
\par Notice that $G/MT$ is the base space of the holomorphic fiber bundle
$G/M\times_T\mathcal{T}_+\to G/MT$ with fiber $\mathcal{T}_+/\Gamma$.
There is a natural action of $G\times T$
on $G/M\times_T\mathcal{T}_+$ given by
$$(G\times T)\times \left(G/M\times_T\mathcal{T}_+\right)\to
G/M\times_T\mathcal{T}_+, \ \ \left((g,t), [xM,a]\right)
\mapsto [gxM, at]\, . $$
The next lemma gives us basic structural information on
$\Xi_+$.
\begin{lemma}\label{lemma=iso}
The set $\Xi_+$ is open in $\Xi=G_\mathbb{C}/ M_\mathbb{C} N_\mathbb{C}$.
Moreover, the mapping
$$\Phi: G/M\times_T \mathcal{T}_+\to \Xi, \ \ [gM,t]\mapsto gt\cdot \xi_0$$
is a $G\times T$-equivariant biholomorphism onto $\Xi_+$.
\end{lemma}
\begin{proof} Clearly, $\Phi$ is a well-defined
$G\times T$-equivariant map with $\mathop{\rm im} \Phi=\Xi_+$.
By the definition of the complex structure of
$G/MT$ the holomorphicity of the map is clear, too. Let us
show that $\Phi$ is injective.
For that assume that $g_1t_1\cdot \xi_0 = g_2t_2\cdot \xi_0$, $g_j\in G$,
$t_j\in \mathcal{T}_+$. By $G$-equivariance we may assume that
$g_2={\bf 1}$. Then $g_1\in G\cap P_\mathbb{C} =MT$ and
w.l.o.g.\ we may assume that $g_1\in M$.
Consequently, as $T_\mathbb{C} \cap M_\mathbb{C} N_\mathbb{C}=\Gamma$, we obtain $t_1\in t_2\Gamma$,
i.e. $[M,t_1]=[M,t_2]$. Hence $\Phi$ is injective.
\par A standard computation yields that $\Phi$ is an immersion,
and a simple dimension count shows that $\dim G/MT + \dim \mathcal{T}_+
=\dim \Xi$. In particular, $\Phi$ is a local biholomorphism
and $\mathop{\rm im} \Phi=\Xi_+$ is open, concluding the proof
of the lemma.
\end{proof}
\subsection{Fiberings} To conclude this section we mention three natural
fibrations in relation to $\Xi_+$ and $F_+$.
\par Write $S^+= \exp (\mathfrak{s}^+)$ and recall that
the map
$$Y=G/K\to G_\mathbb{C} /K_\mathbb{C} S^+, \ \ gK\mapsto gK_\mathbb{C} S^+$$
is a $G$-equivariant open embedding. Henceforth
$Y$ will be understood as an open subset of
the flag manifold $G_\mathbb{C}/ K_\mathbb{C} S^+$.
\begin{lemma} The following assertions hold:
\begin{enumerate}
\item The natural map
$$\Xi_+\to F_+, \ \ z M_\mathbb{C} N_\mathbb{C} \mapsto z P_\mathbb{C}$$
is a holomorphic fibration with fiber
${\mathcal T}_+/\Gamma$.
\item The natural map
$$F_+\to Y, \ \ gMT\mapsto gK$$
is a holomorphic fibration with fiber the flag variety $K/MT$.
\item The natural map
$$\Xi_+\to Y, \ \ gt\cdot\xi_0\mapsto gK$$
is a holomorphic fibration with fiber
$K/M\times_T \mathcal{T}_+$.
\end{enumerate}
\end{lemma}
\begin{proof} (i) follows from
$G\cap P_\mathbb{C} = MT$ and (ii) is obvious. Finally, (iii) is a consequence
of (i) and (ii).
\end{proof}
\section{The $G\times T$-Fr\'echet module $\mathcal{O}(\Xi_+)$}
The natural action of $G\times T$ on $\Xi_+$
gives rise to a representation of $G\times T$ on the Fr\'echet
space $\mathcal{O}(\Xi_+)$ of holomorphic functions on $\Xi_+$.
We will decompose $\mathcal{O}(\Xi_+)$ with respect to this
action. By the compactness of $T$,
it is clear that $\mathcal{O}(\Xi_+)$ decomposes discretely
under $T$. It turns out that each $T$-isotypical component
is the section module of a holomorphic line bundle over the flag
domain $F_+$ and that all such section modules arise in this
manner.
\par In the second part of this section we turn our attention to
$G\times T$-invariant Hilbert spaces of holomorphic functions
on $\Xi_+$. By definition these are unitary $G\times T$-modules
$\mathcal{H}$ with continuous $G\times T$-equivariant embeddings
into $\mathcal{O}(\Xi_+)$. There are many interesting
examples such as weighted Bergman and weighted Hardy spaces.
We will discuss the Hardy space $\cH^2(\Xi_+)$ on $\Xi_+$ with constant
weight and show that $\cH^2(\Xi_+)$ constitutes a natural model for
the $H$-spherical holomorphic discrete series of $G$.
\subsection{The decomposition of $\mathcal{O}(\Xi_+)$}
In Section \ref{section=hor-I} we exhibited a natural action of
$G\times T$ on $\Xi_+$, namely
\begin{equation}\label{h4}
(G\times T)\times \Xi_+\to \Xi_+, \ \ \left((g,t),\xi\right)
\mapsto g\cdot\xi\cdot t\ .\end{equation}
We recall that $\mathcal{O}(\Xi_+)$ becomes a Fr\'echet space when endowed with
the topology of compact convergence.
\begin{remark} Finite dimensional representation theory of $G_\mathbb{C}$
shows that $\Xi$ (and hence $\Xi_+$)
is holomorphically separable. In particular
$\mathcal{O}(\Xi_+)\neq \{0\}$. \end{remark}
Denote by
${\rm GL} (\mathcal{O}(\Xi_+))$ the group of bounded invertible operators
on $\mathcal{O}(\Xi_+)$.
\par The action
(\ref{h4}) induces a continuous representation
of $G\times T$ on $\mathcal{O}(\Xi_+)$:
$$L\otimes R: G\times T \to {\rm GL} (\mathcal{O}(\Xi_+)), \ \
\left((L\otimes R)(g,t) f\right)(\xi)=f(g^{-1}\cdot \xi\cdot t^{-1})\, ,$$
$(g,t)\in G\times T$, $f\in \mathcal{O}(\Xi_+)$, and $\xi\in \Xi_+$.
We first decompose $\mathcal{O}(\Xi_+)$ under the action of the compact
torus $T$. Denote by
$\widehat {T/\Gamma}$ the character group
of $T/\Gamma$, i.e. $\widehat{T/\Gamma}={\rm Hom}_{\rm cont}(T/\Gamma, \mathbb{S}^1)$.
In the sequel we identify $\widehat{T/\Gamma}$ with the lattice
$$ \Lambda=\{\lambda \in \mathfrak{a}^*\mid
\forall U\in (\exp|_\mathfrak{t})^{-1}(\Gamma)\,\, \lambda (U)\in 2\pi i \mathbb{Z}\}\, .$$
Explicitly, to $\lambda\in \Lambda$ one associates the character
$\chi_\lambda(t\Gamma)=e^{\lambda(\log t)}$.
Often we will write $t^\lambda$ for $\chi_\lambda (t\Gamma)$.
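\par A minimal illustration of this identification (a sketch, with
$\Gamma$ assumed trivial): for $T\simeq \mathbb{S}^1$, so that $\mathfrak{t}=i\mathbb{R}$
and $\mathfrak{a}=\mathbb{R}$, one has $(\exp|_{\mathfrak{t}})^{-1}({\bf 1})=2\pi i\mathbb{Z}$,
and $\lambda\in \mathfrak{a}^*$, $\lambda(x)=nx$, lies in $\Lambda$ precisely
when $n\in \mathbb{Z}$; the associated character is
$\chi_\lambda(e^{i\theta})=e^{in\theta}$. Thus
$\Lambda\simeq \mathbb{Z}\simeq \widehat{\mathbb{S}^1}$, as expected.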
\par The assumption that $G_\mathbb{C}$ is simply connected allows
an uncomplicated description of the lattice $\Lambda$.
\begin{lemma} $\Lambda=\left\{ \lambda\in \mathfrak{a}^*\mid (\forall
\alpha\in \Delta)\ {\langle \lambda, \alpha\rangle \over \langle \alpha,\alpha\rangle}
\in \mathbb{Z}\right\}$.
\end{lemma}
\begin{proof} ``$\subseteq$'': Let $\lambda\in \Lambda$.
We first show that
${\langle \lambda, \alpha\rangle \over \langle \alpha,\alpha\rangle}
\in \mathbb{Z}$ for all $\alpha\in\Delta_k$. For that observe that
the compact symmetric space $K/ H\cap K$ embeds into $G/H$ via
the natural map
$$K/H\cap K\to G/H, \ k(H\cap K)\mapsto kH\, .$$
Thus \cite{he84}, Ch. V, Th. 4.1, yields that
${\langle \lambda, \alpha\rangle \over \langle \alpha,\alpha\rangle}
\in \mathbb{Z}$ for all $\alpha\in\Delta_k$. To complete the
proof of ``$\subseteq$'' we still have to verify
${\langle \lambda, \alpha\rangle \over \langle \alpha,\alpha\rangle}
\in \mathbb{Z}$ for all $\alpha\in\Delta_n$. Fix $\alpha\in\Delta_n$.
Standard structure theory implies that there is an
embedding of symmetric Lie algebras
$(\mathfrak{su}(1,1), \mathfrak{so}(1,1))\to (\mathfrak{g},\mathfrak{h})$ such that
$\left[\begin{matrix} i & 0\\ 0 & -i\end{matrix}\right]\in\mathfrak{su}
(1,1)$ is mapped to $i\check\alpha\in \mathfrak{t}$.
As $G_\mathbb{C}$ is simply connected, we thus obtain an immersive map
${\rm SU}(1,1)/{\rm SO}(1,1)\to G/H$. In particular,
${\langle \lambda,\alpha\rangle \over \langle \alpha, \alpha\rangle}\in \mathbb{Z}$ must hold.
\par ``$\supseteq$'': Suppose that
${\langle \lambda, \alpha\rangle \over \langle \alpha,\alpha\rangle}
\in \mathbb{Z}$ holds for all $\alpha$. Recall the extension
$\mathfrak{t}\subseteq \mathfrak{c}$ of $\mathfrak{t}$ to a compact Cartan
subalgebra of $\mathfrak{g}$. In the sequel we consider $\lambda$ as
an element of $\mathfrak{c}^*$ which is trivial on $\mathfrak{c}\cap \mathfrak{h}$.
On p. 537 in \cite{he84}, it is
shown that $\lambda$ is analytically integral for $C=\exp \mathfrak{c}$ (again this needs that
$G_\mathbb{C}$ is simply connected). In particular $\lambda$ defines an element
$\chi_\lambda\in \hat T$. It remains to show that
$\chi_\lambda|_\Gamma={\bf 1}$. As $M=Z_{H\cap K}(\mathfrak{a})$ and $\Gamma=M\cap T$,
this reduces to an assertion on the compact symmetric space
$K/H\cap K$, where it follows from
\cite{he84}, Ch. V, Th. 4.1.
\end{proof}
For each $\lambda\in \Lambda$ define the $\lambda$-isotypical component of $\mathcal{O}(\Xi_+)$ by
\begin{equation}
\mathcal{O}(\Xi_+)_\lambda =
\{f\in \mathcal{O}(\Xi_+)\mid ( \forall t\in T)\ R(t)f=t^{\lambda} f\}\ .
\end{equation}
As $(R, \mathcal{O}(\Xi_+))$ is a continuous representation of the
compact torus $T$ on a Fr\'echet space, the Peter-Weyl theorem
yields
\begin{equation}
\mathcal{O}(\Xi_+) =\bigoplus_{\lambda \in \Lambda}\mathcal{O}(\Xi_+)_\lambda\, .
\end{equation}
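For later use we record the isotypic projection underlying this
decomposition: with $dt$ the normalized Haar measure on $T$, a standard
Peter-Weyl argument shows that the projection $\mathcal{O}(\Xi_+)\to \mathcal{O}(\Xi_+)_\lambda$,
$f\mapsto f_\lambda$, is given by
$$f_\lambda(\xi)=\int_T t^{-\lambda}\,\left(R(t)f\right)(\xi)\, dt
=\int_T t^{-\lambda}\, f(\xi\cdot t^{-1})\, dt \qquad (\xi\in \Xi_+)\, ,$$
the integral converging in the Fr\'echet topology of $\mathcal{O}(\Xi_+)$.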
Each $\mathcal{O}(\Xi_+)_\lambda$ is a $G$-module for the representation $L$.
In order to describe them explicitly we recall
some facts on holomorphic line bundles.
For $\lambda\in \Lambda$ we write $\mathbb{C}_\lambda$ for $\mathbb{C}$ when considered
as an $MT$-module with trivial $M$-action and $T$ acting by $\chi_\lambda$.
Recall that $G/MT$ inherits a complex manifold structure through
its identification with the flag domain $F_+$.
In particular, to each $\lambda\in \Lambda$
one associates the holomorphic line bundle
\begin{equation}\label{eq=lineb}
\mathcal {L}_{\lambda}=G\times_{MT}\mathbb{C}_{-\lambda}\, .
\end{equation}
Write $\mathcal{O}(\mathcal {L}_{\lambda})$ for its $G$-module of holomorphic sections, i.e.
$\mathcal{O}(\mathcal {L}_{\lambda})$ consists of smooth functions
$f: G\to \mathbb{C}$ such that
\begin{itemize}\item $f(gmt)=t^{-\lambda} f(g)$ for $g\in G$, $t\in T$
and $m\in M$.
\item $G/MT\to\mathcal {L}_{\lambda}, \ \ gMT\mapsto [gMT, f(g)]$ is
holomorphic.
\end{itemize}
\par The restriction of $\mathcal {L}_\lambda$ to
the flag variety $K/MT$ yields the holomorphic line
bundle
$$\mathcal{K}_\lambda=K\times_{MT} \mathbb{C}_{-\lambda}$$
over $K/MT$.
Write $\Lambda_0$ for the
$\Delta_k^-$-dominant elements of
$\Lambda$, i.e.
\begin{equation}
\Lambda_0=\left\{\lambda\in \Lambda\mid (\forall \alpha\in \Delta_k^+)
\ \langle \lambda, \alpha\rangle \leq 0\right\}\, .
\end{equation}
According to Bott \cite{b}, $V_\lambda=\mathcal{O}(\mathcal{K}_\lambda)$
is finite-dimensional, and non-trivial if and only if $\lambda\in\Lambda_0$.
By
$\mathcal {L}_\lambda=G\times_{MT}\mathbb{C}_{-\lambda}\simeq
G\times_K (K\times_{MT} \mathbb{C}_{-\lambda})
$
we retrieve the standard isomorphism
$$\mathcal{O}(\mathcal {L}_\lambda)\simeq \mathcal{O}(G\times_K V_\lambda)\, .$$
In particular,
\begin{equation}\label{eq=standard}
\mathcal{O}(\mathcal {L}_\lambda)\neq\{0\} \quad\iff\quad \lambda\in \Lambda_0\, .
\end{equation}
\par We remind the reader that the
$T$-weight spectrum of $\mathcal{O}(\mathcal {L}_{\lambda})$ is
contained in $\lambda+\mathbb{Z}_{\geq 0} [\Delta^+]$.
In particular, $\mathcal{O}(\mathcal {L}_{\lambda})$, if irreducible, is a lowest
weight module for $G$
with respect to the positive system $\Delta^+$ and lowest weight $\lambda$.
\par Finally we establish the connection between $\mathcal{O}(\Xi_+)_\lambda$
and $\mathcal{O}(\mathcal {L}_\lambda)$. For that let us denote by $\Xi_0$
the pre-image of $F_+$ in $\Xi$, i.e.
$$\Xi_0=GT_\mathbb{C}\cdot\xi_0\, .$$
Notice that $\Xi_+\subset \Xi_0$.
Holomorphicity and $T$-equivariance yield $\mathcal{O}(\Xi_+)_\lambda
=\mathcal{O}(\Xi_0)_\lambda$; the same holds for $\mathcal{O}(\mathcal {L}_\lambda)$.
Thus holomorphic extension and restriction give a natural
$G$-isomorphism $\mathcal{O}(\Xi_+)_\lambda\simeq \mathcal{O}(\mathcal {L}_\lambda)$.
\par We summarize our discussion.
\begin{proposition} The $G\times T$-Fr\'echet module $\mathcal{O}(\Xi_+)$
decomposes as
$$\mathcal{O}(\Xi_+)=\bigoplus_{\lambda\in \Lambda_0} \mathcal{O}(\Xi_+)_\lambda\ .$$
Moreover, holomorphic extension and restriction canonically
identifies $\mathcal{O}(\Xi_+)_\lambda$
with the section module $\mathcal{O}(\mathcal {L}_\lambda)$.
\end{proposition}
We conclude this subsection with some comments
on unitarization of the section modules $\mathcal{O}(\mathcal {L}_\lambda)$.
\begin{remark} Let $\lambda\in \Lambda_0$ and let us denote by $\mathcal{O}(\mathcal {L}_\lambda)^{K-{\rm fin}}$ the
$(\mathfrak{g},K)$-module of $K$-finite sections of $\mathcal{O}(\mathcal {L}_\lambda)$.
Let us assume that $\mathcal{O}(\mathcal {L}_\lambda)^{K-{\rm fin}}$ is
irreducible. Then $\mathcal{O}(\mathcal {L}_\lambda)^{K-{\rm fin}}$
identifies with the generalized Verma module
$N(\lambda)=\mathcal{U}(\mathfrak{g}_\mathbb{C})\otimes_{\mathcal{U}(\mathfrak{k}_\mathbb{C} +\mathfrak{s}^-)} V_\lambda$
and the Shapovalov form on $N(\lambda)$ gives rise
to the (unique up to scalar) contravariant Hermitian form on
$\mathcal{O}(\mathcal {L}_\lambda)^{K-{\rm fin}}$. We say that
$\mathcal{O}(\mathcal {L}_\lambda)^{K-{\rm fin}}$ is {\it unitarizable} if the
Shapovalov form is positive definite. Another way to formulate it
is that there exists a unitary lowest weight representation
$(\pi_\lambda, {\mathcal H}_\lambda)$ such that the $(\mathfrak{g}, K)$-module
of $K$-finite vectors
${\mathcal H}_\lambda^{K-{\rm fin}}$ is $(\mathfrak{g},K)$-isomorphic to
$\mathcal{O}(\mathcal {L}_\lambda)^{K-{\rm fin}}$. In this situation
$\mathcal{O}(\mathcal {L}_\lambda)$ is then naturally $G$-isomorphic to
the hyperfunction vectors ${\mathcal H}_\lambda^{-\omega}$
of $\pi_\lambda$.
\par We want to emphasize that not all $\lambda\in \Lambda_0$
correspond to unitarizable modules $\mathcal{O}(\mathcal {L}_\lambda)^{K-{\rm fin}}$
(a necessary condition is $\lambda|_\Omega\geq 0$ and we
refer to \cite{EHW} for more precise information).
However, we want to stress that
$\mathcal{O}(\mathcal {L}_\lambda)^{K-{\rm fin}}$ is automatically unitarizable
if $\lambda|_\Omega$ is sufficiently positive
(for example if condition (\ref{eq=HC}) below is
satisfied).
\end{remark}
\subsection{The Hardy space on $\Xi_+$}
The objective of this section is to introduce
the Hardy space on $\Xi_+$ and to prove some of its basic
properties.
\par We begin with some measure theoretic preliminaries.
The groups $G_\mathbb{C}$ and $M_\mathbb{C} N_\mathbb{C}$ are unimodular, and hence $\Xi=G_\mathbb{C} /M_\mathbb{C} N_\mathbb{C}$
carries a $G_\mathbb{C}$-invariant measure $\mu$.
\par Recall that $M$ is a compact subgroup of $G$ and denote by
$dm$ a normalized Haar measure on $M$. Further we let $dg$ and $d(gM)$
denote left $G$-invariant measures on $G$, resp. $G/M$, normalized
subject to the condition
$$\int_G f(g)\ dg=\int_{G/M}\int_M f(gm)\ dm\ d(gM)$$
for all $f\in L^1(G)$.
\par Notice that
the stabilizer in $G$ of any point $\xi\in \mathcal{T}_+\cdot\xi_0\subset \Xi_+$ is the
compact subgroup $M$. In particular one has
\begin{equation}\label{eq-integral}
\int_G f(g\cdot \xi)\, dg
=\int_{G/M} f(g\cdot \xi)\, d(gM)
\end{equation}
for all $\xi\in \mathcal{T}_+\cdot\xi_0$ and integrable functions $f$ on $\Xi_+$.
\par Write $\|\cdot \|_2$ for the
$L^2$-norm on $L^2(G)$. Let us
remark that the representation $(R, \mathcal{O}(\Xi_+))$ of $T$ naturally
extends to a representation of the semigroup
$\mathcal{T}_-\cup T$, also denoted by $R$.
Furthermore, if $f\in \mathcal{O}(\Xi_+)$ and $t\in \mathcal{T}_-$, then we can
define the restriction of $R(t)f$ to $G$,
$R(t)f|_G : G\to \mathbb{C}$, by $R(t)f|_G (g) =
f(gt^{-1}\cdot \xi_0)$. The Hardy norm of $f\in \mathcal{O}(\Xi_+)$ is
defined by
\begin{equation}\label{def-Hardynorm}
\|f\|^2=\sup_{t\in \mathcal{T}_+}\int_G | f(gt\cdot\xi_0)|^2\, dg
=\sup_{t\in \mathcal{T}_-}\| R(t)f|_G\|^2_2\, .
\end{equation}
Let
\begin{equation}\label{de-Hardy}
\mathcal{H}^2 (\Xi_+ )= \{f\in \mathcal{O} (\Xi_+)\mid \| f\|<\infty \}\, .
\end{equation}
Obviously
\begin{equation}\label{eq=contract}
\|R(t)f\| \le \|f\| \qquad \hbox{for all $t\in \mathcal{T}_-$} \end{equation}
and hence $\mathcal{T}_-$ acts on $\cH^2(\Xi_+)$ by contractions.
Note that $R(t)f|_G$ is right $M$-invariant and, by the definition
of the Hardy space norm, $R(t)f|_G\in L^2(G/M)\subseteq L^2(G)$.
\begin{lemma} The space $\cH^2(\Xi_+)$ is a Hilbert space. Furthermore, the
following holds:
\begin{enumerate}
\item
For $\xi\in \Xi_+$ the point evaluation map
$\mathrm{ev}_\xi :\cH^2(\Xi_+)\ni f\mapsto f(\xi )\in \mathbb{C}$
is continuous.
\item The boundary value map $\beta :\cH^2(\Xi_+) \to L^2(G/M)\subseteq L^2(G)$
$$\beta (f)=\lim_{\mathcal{T}_-\ni t\to e}R(t)f|_G$$
is an isometry into $L^2(G/M)$.
\end{enumerate}
\end{lemma}
\begin{proof} The proof follows a standard procedure
and we will only sketch it. We refer to
\cite{hoo91}, in particular the proof of Theorem 2.2, for a
detailed discussion of the underlying methods.
\par Let $\xi \in \Xi_+$. Then there exist relatively compact
open sets $U_G\subseteq G$ and $U_T\subseteq \mathcal{T}_+$ such that
$\xi \in U_GU_T\cdot \xi_0$. Thus, there is a constant $c>0$ such that
the Bergman-type estimate
$$\int_{U_GU_T\cdot\xi_0}|f(\xi )|^2\, d\mu(\xi)\le c\cdot \| f\|^2\, $$
holds for all $f\in \cH^2(\Xi_+)$.
This implies that $\cH^2(\Xi_+) $ is complete, and that
point evaluations are continuous.
\par Write $\mathbb{C}_+=\{z\in \mathbb{C}\mid \mathrm{Im}(z)>0\}$
for the upper half plane and fix $Z\in i\Omega$. We
notice that the map $\mathcal{T}_-\ni t\mapsto R(t)f|_G\in L^2(G)$
is well defined and holomorphic.
Hence
$$L_f : \mathbb{C}_+\to L^2(G/M); \ L_f(z)= R(\exp (z Z))f|_G\in L^2(G/M)$$
defines a holomorphic function on $\mathbb{C}_+$.
By Lemma 2.3 in \cite{hoo91} it follows that
$\lim_{z\to 0}L_f(z)$ exists and that $\|L_f(siZ)\|_2$ is monotonically
increasing as $s\searrow 0$ along each line segment
$\exp (siZ)$, or, because of the right invariance
of $dg$, on each $t\exp (siZ)$, $t\in T$. As in \cite{hoo91}, one shows that this limit
is independent of $Z$. Thus, we get a boundary value map
$\beta : \cH^2(\Xi_+) \to L^2(G/M)$, defined by
$$\beta (f)=\lim_{t\to e}R(t)f|_G\, .$$
By the definition of the Hardy space norm, we obviously have
$$\|\beta (f)\|_2\le \|f\|\, .$$
But, as the norm $\| R(\exp (sZ))f|_G\|_2$ is monotonically increasing for
$s\searrow 0$, it follows that
$$\|R(\exp (sZ))f|_G\|_2\le \|\beta (f)\|_2$$
for all $s\in \mathbb{R}^+$. Thus
$$\|R(t)f|_G\|_2\le \|\beta (f)\|_2\qquad (t\in \mathcal{T}_-)$$
and hence $\|f\|\le \|\beta (f)\|_2$.
It follows that $\beta : \cH^2(\Xi_+) \to L^2(G)$ is an isometry, and
hence $\cH^2(\Xi_+) $ is a Hilbert space.
\end{proof}
Clearly $L\otimes R$ defines a unitary representation of
$G\times T$ on $\cH^2(\Xi_+)$. We are going to decompose
$\cH^2(\Xi_+)$ with respect to this action. As before
we begin with the decomposition under $T$.
For $\lambda\in \Lambda$ the
$\lambda$-isotypical component of $\cH^2(\Xi_+)$
is given by $\cH^2(\Xi_+)_\lambda=\cH^2(\Xi_+)\cap \mathcal{O}(\Xi_+)_\lambda$.
The Peter-Weyl theorem yields the orthogonal
decomposition
\begin{equation}
\cH^2(\Xi_+) =\bigoplus_{\lambda \in \Lambda_0}\cH^2(\Xi_+)_\lambda
\end{equation}
of $\cH^2(\Xi_+)$ into $G$-modules.
We now turn our attention to the unitary $G$-modules
$\cH^2(\Xi_+)_\lambda$ inside of $\mathcal{O}(\Xi_+)_\lambda$.
\par Suppose that $\cH^2(\Xi_+)_\lambda\neq \{0\}$. Then
$\mathcal{O}(\mathcal {L}_{\lambda})\neq \{0\}$ and the restriction mapping
$$\cH^2(\Xi_+)_\lambda\to \mathcal{O}(\mathcal {L}_{\lambda})$$
gives a $G$-equivariant embedding.
Moreover $\beta(\cH^2(\Xi_+)_\lambda)\subset L^2(G)$.
Thus $\cH^2(\Xi_+)_\lambda$ is a module of the holomorphic discrete series of $G$.
In terms of $\lambda$ this means that
$\lambda$ satisfies the Harish-Chandra condition \cite{hc55}
\begin{equation}\label{eq=HC}
\langle \lambda -\rho(\mathfrak{c}),\alpha\rangle >0 \qquad (\forall \alpha\in \Sigma_n^+)\, ,
\end{equation}
where $\rho(\mathfrak{c})={1\over 2}
\sum_{\alpha\in \Sigma^+}\alpha$.
Write $\Lambda_{{\rm sd}}$ for the set of all $\lambda\in \Lambda_0$ which satisfy
(\ref{eq=HC}).
\par Conversely, let $\lambda\in \Lambda_{\rm sd}$ and write ${\mathcal H}_\lambda$ for a
corresponding unitary lowest weight module with lowest weight $\lambda$.
Denote by $v_\lambda\in {\mathcal H}_\lambda$ a normalized lowest weight vector and
write $d(\lambda)$ for the formal dimension
(see \cite{hc55} or (\ref{***}) below).
It is then straightforward
to check that
$${\mathcal H}_\lambda\to \cH^2(\Xi_+), \ \ v\mapsto \left(gt\cdot\xi_0\mapsto \sqrt{d(\lambda)}\cdot
t^{-\lambda}\langle \pi_\lambda(g^{-1})v, v_\lambda\rangle\right) $$
defines a $G$-equivariant isometric embedding. Hence $\cH^2(\Xi_+)_\lambda
\simeq {\mathcal H}_\lambda\neq \{0\}$.
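Indeed, by the Schur orthogonality relations for the discrete series one has,
for every $v\in {\mathcal H}_\lambda$,
$$\int_G |\langle \pi_\lambda(g^{-1})v, v_\lambda\rangle|^2\, dg
={\|v\|^2\,\|v_\lambda\|^2\over d(\lambda)}={\|v\|^2\over d(\lambda)}\, ,$$
so that
$$\int_G \left|\sqrt{d(\lambda)}\, t^{-\lambda}
\langle \pi_\lambda(g^{-1})v, v_\lambda\rangle\right|^2 dg
=|t^{-\lambda}|^2\,\|v\|^2 \qquad (t\in \mathcal{T}_+)\, .$$
As $\lambda|_\Omega\geq 0$, one has $|t^{-\lambda}|\leq 1$ on $\mathcal{T}_+$
with supremum $1$ in the limit $t\to e$, whence the Hardy norm of the image
of $v$ equals $\|v\|$.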
Summarizing our discussion we obtain the Plancherel decomposition for $\cH^2(\Xi_+)$.
\begin{proposition}\label{th=Plancherel} As a $G$-module
the Hardy space decomposes
as
$$\cH^2(\Xi_+) \simeq \bigoplus_{\lambda\in \Lambda_{\rm sd}} {\mathcal H}_\lambda\, .$$
\end{proposition}
\begin{remark}\label{rem=disc} (a) The set $\Lambda_{\rm sd}$ parametrizes
the
$H$-spherical unitary lowest weight representations (up to equivalence)
whose matrix coefficients are square integrable on $G$, i.e.
$\Lambda_{\rm sd}$ is the spectrum of the $H$-spherical holomorphic
discrete series of $G$.
\par \noindent(b) Later we will mainly deal
with the spectrum $\Lambda_2$ of the holomorphic
discrete series on $X$. One has
$$\Lambda_2\subseteq \Lambda_{\rm sd}$$
with equality precisely for the equal rank cases \cite{oo88,oo91}.
\end{remark}
\section{Complex Horospheres II: Horospheres with no real points}
\label{section=hor-II}
\noindent
We continue our discussion of complex horospheres from Section \ref{section=hor-I}.
We will introduce the notion of a horosphere without real points and investigate
$\Xi_+$ with respect to this property. In addition we will
prove some dual statements for the minimal tubes $D_{\pm}$.
\begin{definition} We say that the complex horosphere $E(\xi)\subset X_\mathbb{C}$
has
\textit{no real points} if $E(\xi)\cap X =\emptyset$. We denote by
$\Xi_{nr}\subset \Xi$ the subset of those $\xi$ which correspond to
horospheres with no real points.
\end{definition}
\begin{lemma} The set $\Xi_{nr}$ is a $G$-invariant
subset of $\Xi$.
\end{lemma}
\begin{proof} Let $\xi \in \Xi_{nr}$ and $g\in G$. Assume that
$ x\in E(g\cdot \xi)\cap X$. Then $g^{-1}x\in E(\xi) \cap X$, contradicting the
assumption that $E(\xi)$ has no real points.
\end{proof}
Recall the open $G$-invariant subset $\Xi_+=GA_+\cdot\xi_0\subset \Xi$.
In the sequel it will be useful to consider, along with $\Xi_+$, its
pre-image $\widetilde \Xi_+$ in $G_\mathbb{C}$, i.e.
$$\widetilde \Xi_+=GA_+ M_\mathbb{C} N_\mathbb{C}\ .$$
It is clear that $\widetilde\Xi_+$ is a left $G$-invariant and right $M_\mathbb{C} N_\mathbb{C}$-invariant
open subset of $G_\mathbb{C}$.
\par Next we turn our attention to the Zariski open subset
$N_\mathbb{C} A_\mathbb{C} H_\mathbb{C} $ of $G_\mathbb{C}$. Our objective
is to study $\widetilde \Xi_+$ in relation to $N_\mathbb{C} A_\mathbb{C} H_\mathbb{C} $.
\begin{remark} Notice that $\widetilde\Xi_+^{-1}\subset N_\mathbb{C} A_\mathbb{C} H_\mathbb{C} $
is equivalent to $G\subset N_\mathbb{C} A_\mathbb{C} H_\mathbb{C} $. However the latter is
true only for ${\rm rank}\, X=1$, i.e. $\dim \mathfrak{t} =1$. In general,
$G\cap N_\mathbb{C} A_\mathbb{C} H_\mathbb{C} $ is an open and dense subset of $G$
(cf. Theorem \ref{th=monoton} below).
\end{remark}
There is a right $H_\mathbb{C}$ and left $N_\mathbb{C}$-invariant
holomorphic middle-projection
$$a_H: N_\mathbb{C} A_\mathbb{C} H_\mathbb{C} \to A_\mathbb{C}/ \Gamma, \ \ x\mapsto a_H(x)\, .$$
In particular, for each $\lambda\in \Lambda$ we obtain natural
$(N_\mathbb{C}, H_\mathbb{C})$-invariant holomorphic maps
$$N_\mathbb{C} A_\mathbb{C} H_\mathbb{C} \to \mathbb{C}, \ \ x\mapsto a_H(x)^\lambda\ .$$
The holomorphic function $N_\mathbb{C} A_\mathbb{C}\cdot x_0
\to A_\mathbb{C}/\Gamma$ induced by $a_H$ shall also be denoted by
$a_H$.
\par The function $a_H$ enables us to give a
useful geometric description of horospheres.
\begin{lemma}\label{lemma=Hc} Let $\xi=g\cdot\xi_0\in \Xi$ for
$g\in G_\mathbb{C}$. Then
\begin{eqnarray*} E(\xi) &=&
\{ z\in X_\mathbb{C} \mid g^{-1}z\in N_\mathbb{C} A_\mathbb{C}\cdot x_0, \ a_H(g^{-1}z)=\Gamma\}\\
&=&\{ z\in X_\mathbb{C} \mid g^{-1}z\in N_\mathbb{C} A_\mathbb{C}\cdot x_0, \ a_H(g^{-1}z)^\lambda=1
\quad\text{for all $\lambda\in \Lambda$}\}
\end{eqnarray*}
\end{lemma}
\begin{proof} ``$\subseteq$'': If $z\in E(\xi)$, then $z=gn\cdot x_0$
for some $n\in N_\mathbb{C}$. Thus $g^{-1}z=n\cdot x_0\in N_\mathbb{C} A_\mathbb{C} \cdot x_0$
and $a_H(g^{-1}z)=a_H(n\cdot x_0)=\Gamma$.
\par ``$\supseteq$'': Conversely, let $z\in X_\mathbb{C}$ be such that
$g^{-1}z\in N_\mathbb{C} A_\mathbb{C} \cdot x_0$ and $a_H(g^{-1}z)=\Gamma$.
From the first condition it follows that $g^{-1} z =na\cdot x_0$
for some $n\in N_\mathbb{C}$ and $a\in A_\mathbb{C}$; the second condition implies
$a\in \Gamma$. Thus $z\in E(\xi)$, as was to be shown.
\end{proof}
We define a subset of $\Lambda_0$ by
\begin{eqnarray}\label{lattice}
\Lambda_{\geq 0}&=&\{\lambda\in \Lambda_0\mid \lambda|_\Omega\geq 0\}\\
&=&\left\{\lambda\in \Lambda\mid \lambda|_\Omega\geq 0, \
(\forall \alpha\in \Delta_k^+)\quad \langle \lambda, \alpha\rangle \leq 0\right\}\, .
\end{eqnarray}
The following theorem is the main geometric result of the paper.
\begin{theorem}\label{th=monoton}The following assertions hold:
\begin{enumerate}
\item $G\cap N_\mathbb{C} A_\mathbb{C} H_\mathbb{C} $ is open and dense
in $G$.
\item Let $\lambda\in\Lambda_{\geq 0}$. Then the function $a_H^{\lambda}|_{G\cap N_\mathbb{C} A_\mathbb{C} H_\mathbb{C} }$
extends to a continuous function on $G$ and
$$|a_H(g)^\lambda|\leq 1\qquad(g\in G) .$$
\end{enumerate}
\end{theorem}
\begin{proof} The proof rests on the structural
decomposition
\begin{equation}\label{decomp} G=KA_qH \end{equation}
where $A_q=\exp(\mathfrak{a}_q)$ with $\mathfrak{a}_q\subseteq \mathfrak{s}\cap \mathfrak{q}$
a maximal abelian subspace. There is a natural way to construct
a flat $\mathfrak{a}_q$ out of the weight space decomposition $\mathfrak{g}_\mathbb{C}=\mathfrak{a}_\mathbb{C} +\mathfrak{m}_\mathbb{C} + \bigoplus_{\alpha\in \Delta}\mathfrak{g}_\mathbb{C}^\alpha$,
which we briefly review.
Let $\gamma_1,\ldots ,\gamma_r\in \Delta^+_n$ be a maximal set of long strongly orthogonal roots.
Then one can find
$Z_j\in \mathfrak{g}_\mathbb{C}^{\gamma_j}$, $j=1,\ldots ,r$, such
that
\begin{equation}\label{def-ap}
\mathfrak{a}_q=\bigoplus_{j=1}^r\mathbb{R} (Z_j-\tau (Z_j))
\end{equation}
is a maximal abelian subspace of $\mathfrak{s}\cap \mathfrak{q}$;
further
\begin{equation} \label{inc} A_q\subset S^+ \overline{A_-} H_\mathbb{C} \end{equation}
(see \cite{ho}, pp.~210--211 for all of this).
\par(i) As $S^+\subseteq N_\mathbb{C}$ , we obtain from (\ref{decomp}) and (\ref{inc}) that
$$G\subset N_\mathbb{C} K \overline{A_-} H_\mathbb{C}\ . $$
Hence it is sufficient to show that
\begin{equation} \label{dense} K \overline{A_-} \cap N_\mathbb{C}^+ A_\mathbb{C} (H_\mathbb{C} \cap K_\mathbb{C})
\quad\text{
is open and dense in $K \overline{A_-}$}\, . \end{equation}
\par To continue, we first have to recall some facts related to
the Iwasawa decomposition of $K_\mathbb{C}$.
Write $\widetilde N_\mathbb{C}^+$ for a maximal $C$-stable
unipotent subgroup of $K_\mathbb{C}$ containing $N_\mathbb{C}^+$ and set $\widetilde A=\exp (i\mathfrak{c})$.
Then $K_\mathbb{C}=\widetilde N_\mathbb{C}^+ \widetilde A K$ is an Iwasawa decomposition
of $K_\mathbb{C}$. We recall that $\Omega$ and hence
$\overline{A_-}$ is $\mathcal{W}_k$-invariant. Thus
Kostant's non-linear convexity theorem (cf. \cite{he84}, Ch. IV, Th. 10.5)
implies that $K \overline{A_-} \subset \widetilde N_\mathbb{C}^+ \overline{A_-} K$.
As $\widetilde N_\mathbb{C}^+\subset N_\mathbb{C}^+ M_\mathbb{C}$ and
$A\subseteq \widetilde A\subseteq AM_\mathbb{C}$, we thus get
$K \overline{A_-} \subset N_\mathbb{C}^+ \overline{A_-} M_\mathbb{C} K$.
In particular, in order to establish (\ref{dense}) it is enough to
verify that $K\cap N_\mathbb{C}^+ A_\mathbb{C} (H_\mathbb{C} \cap K_\mathbb{C}) $ is dense
in $K$. But this is known (for example it follows from
Lemme 2.1 in \cite{Cl88}).
\par(ii) In the proof of (i) we have seen that $G\subset N_\mathbb{C} M_\mathbb{C} \overline{A_-} K H_\mathbb{C} $.
Thus we only have to show that $a_H^\lambda$ can be defined as a holomorphic
function on $K_\mathbb{C}$ with $|a_H(ak)^\lambda|\leq 1 $ for all $k\in K$ and $a\in \overline{A_-}$.
For that let $(\tau_\lambda, V_\lambda)$
denote the holomorphic $(H_\mathbb{C}\cap K_\mathbb{C})$-spherical representation
of $K_\mathbb{C}$ with lowest weight $\lambda$. Write $(\cdot, \cdot)$ for
a $K$-invariant inner product on $V_\lambda$. Let
$v_\lambda$ be a normalized
lowest weight vector and $v_H$ be the spherical vector with
$(v_H, v_\lambda)=1$.
Then for all $x\in N_\mathbb{C}^+ A_\mathbb{C} (H_\mathbb{C}\cap K_\mathbb{C}) \subset K_\mathbb{C}$ we have
$$(\tau_\lambda(x) v_H, v_\lambda)=a_H(x)^{\lambda}\ .$$
As the left hand side has a holomorphic extension to $K_\mathbb{C}$, the same holds
for $a_H^{\lambda}$. Finally, for $a\in \overline{A_-}$ and $k\in K$ we have
$$a_H(ak)^{\lambda}=a^{\lambda} a_H(k)^{\lambda}\ .$$
Observe that $a^{\lambda}\leq 1 $ as $\lambda\in \Lambda_{\geq 0}$
and that $|a_H(k)^{\lambda}|\leq 1$ for all $k\in K$ by Lemma 2.3
in
\cite{Cl88}.
This completes the proof of (ii).
\end{proof}
Theorem \ref{th=monoton} has several interesting and important corollaries.
\begin{corollary}\label{cor=1} Let $\lambda\in \Lambda_{\geq 0}$ be such that
$\lambda|_\Omega>0$. Then $a_H^{\lambda}|_{\widetilde\Xi_+^{-1}\cap N_\mathbb{C} A_\mathbb{C} H_\mathbb{C} }$
extends to a holomorphic function on $\widetilde\Xi_+^{-1}$ with
$$|a_H(x)^{\lambda}|< 1 \qquad (x\in \widetilde \Xi_+^{-1})\, .$$
\end{corollary}
\begin{corollary}\label{cor=Hc} $\Xi_+\subseteq
\Xi_{nr}$, i.e. $E(\xi)\cap X=\emptyset$ for all $\xi\in \Xi_+$.
\end{corollary}
\begin{proof} Suppose that there exists $\xi\in \Xi_+$ such that $E(\xi)\cap X\neq\emptyset$.
In other words, $\widetilde\Xi_+\cap H_\mathbb{C} \neq \emptyset$, or equivalently
$\widetilde\Xi_+^{-1}\cap H_\mathbb{C} \neq \emptyset$; a contradiction to
the previous corollary.
\end{proof}
\begin{remark} (Monotonicity/Convexity) Theorem \ref{th=monoton} (ii) has a natural
interpretation in terms of convexity/monotonicity. Write $\mathrm{pr}_{\mathfrak{a}}=\Im \log a_H$
and note that $\mathrm{pr}_\mathfrak{a}: N_\mathbb{C} A_\mathbb{C} H_\mathbb{C} \to \mathfrak{a}$ is a well
defined continuous map. Theorem \ref{th=monoton} (ii) is then equivalent to
the inclusion
\begin{equation} \label{eq=incl}
\mathrm{pr}_\mathfrak{a}(G\cap N_\mathbb{C} A_\mathbb{C} H_\mathbb{C} )\subseteq\bigoplus_{\alpha\in \Delta_n^-\cup \Delta_k^+}
\mathbb{R}_{\geq 0}\cdot \check\alpha\, .
\end{equation}
\end{remark}
\subsection{Dual statements for the minimal tubes}
Recall from Subsection \ref{ss=11} the minimal tubes
$D_\pm=GA_\pm\cdot x_0$ in $X_\mathbb{C}$ with edge $X$.
\par It follows from Neeb's non-linear convexity theorem \cite{n94}
that
\begin{equation}\label{conv} GA_- \subseteq N_\mathbb{C} M_\mathbb{C} A_- G\, .\end{equation}
This fact combined with Theorem \ref{th=monoton} yields
\begin{equation}\label{eq=ci}
GA_-H_\mathbb{C} \cap N_\mathbb{C} A_\mathbb{C} H_\mathbb{C} \subseteq N_\mathbb{C} TA_-
\exp\left(\bigoplus_{\alpha\in \Delta_k^+}
\mathbb{R}_{\geq 0} \cdot \check\alpha\right) H_\mathbb{C} \ .\end{equation}
We have shown:
\begin{corollary}\label{cor=D} Let $\lambda\in\Lambda_{\geq 0}$ be such that
$\lambda|_\Omega>0$. Then, $a_H^\lambda|_{D_-\cap N_\mathbb{C} A_\mathbb{C} \cdot x_0}$
extends to a holomorphic function on $D_-$ such that
$$|a_H(x)^\lambda|<1 \qquad (x\in D_-)\, .$$
\end{corollary}
We recall the definition of the orbits $S(z)\subset \Xi$
for $z\in X_\mathbb{C}$ (cf.\ equation (\ref{eq=S})). The convexity inclusion
(\ref{eq=ci}) delivers the dual statement
to Corollary \ref{cor=Hc}:
\begin{corollary}\label{cor=S} $S(z)\cap G/M=\emptyset$ for all
$z\in D_-$.
\end{corollary}
\begin{proof} Let $z=ga\cdot x_0$ for $g\in G$ and $a\in A_-$.
Suppose that $S(z)\cap G/M\neq\emptyset$. As $S(z)=gaH_\mathbb{C} \cdot \xi_0$, this
is equivalent to $aH_\mathbb{C} N_\mathbb{C} \cap G\neq \emptyset$. In other words
$Ga\cap N_\mathbb{C} H_\mathbb{C} \neq \emptyset$; a contradiction to (\ref{eq=ci}).
\end{proof}
\begin{remark} Note that (\ref{conv}) is equivalent
to $A_+G \subseteq G A_+M_\mathbb{C} N_\mathbb{C}$. This inclusion
exhibits interesting additional structure of $\Xi_+$;
it implies
\begin{equation}\label{newxi}
\Xi_+= GA_+G\cdot \xi_0\ .\end{equation}
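Indeed, since $M_\mathbb{C} N_\mathbb{C}$ stabilizes $\xi_0$, the inclusion
$A_+G \subseteq G A_+M_\mathbb{C} N_\mathbb{C}$ gives
$$GA_+G\cdot\xi_0\subseteq G\left(GA_+M_\mathbb{C} N_\mathbb{C}\right)\cdot\xi_0
=GA_+\cdot\xi_0=\Xi_+\subseteq GA_+G\cdot\xi_0\, .$$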
\end{remark}
\begin{remark}\label{rem=oc}(Generalization to other cones)
Let $\tilde \Omega$
be a $\mathcal{W}_k$-invariant convex open sharp cone in $\mathfrak{a}$
containing $\Omega$. A particularly interesting example
is the maximal cone (denoted by $c_{\rm max}$ in
\cite{ho}). In this context we would like to mention
that the results in this section remain true with $\Omega$
replaced by $\tilde \Omega$, the obvious adjustment
of $\Lambda_{\geq 0}$ being understood.
\end{remark}
\section{The horospherical Cauchy transform}
\noindent
Our geometric results from Section \ref{section=hor-II} enable us to define a
natural horospherical Cauchy kernel on $\Xi_+$. The kernel
gives rise to the horospherical Cauchy transform
$L^1(X)\to \mathcal{O}(\Xi_+)$. The main result is a geometric inversion
formula for the horospherical Cauchy transform for functions in the
holomorphic discrete series on $X$.
\subsection{The horospherical Cauchy kernel}
In this subsection we define the horospherical Cauchy kernel
and the corresponding horospherical Cauchy transform.
We will introduce the
holomorphic spherical Fourier transform and relate it
to the horospherical Cauchy transform.
\par To begin with
we have to recall some features of the root system $\Delta$.
Let us denote by
$$\Pi=\{ \alpha_1, \ldots, \alpha_m\}$$
a basis of $\Delta$ corresponding to the positive system $\Delta_n^+\cup \Delta_k^-$. As ${\rm Spec}\, \mathrm{ad}(Z_0)=\{ -1, 0,1\}$, it follows
that exactly one member of $\Pi$ is non-compact, say
$\alpha_m$. Define weights $\omega_1, \ldots, \omega_m\in \mathfrak{a}^*$
by
$${\langle \omega_i,\alpha_j\rangle\over \langle \alpha_j, \alpha_j\rangle} =
\delta_{ij}\qquad (1\leq i, j\leq m)\, .$$
Set
$$\Lambda_{>0}=\mathbb{Z}_{\geq 0}\cdot \omega_1 +\ldots+ \mathbb{Z}_{\geq 0}\cdot
\omega_{m-1} +\mathbb{Z}_{>0} \cdot \omega_m\ .$$
Recall the definition of $\Lambda_{\geq 0}$ from (\ref{lattice}).
\begin{lemma} \label{lemma=pos}The following assertions hold:
\begin{enumerate}
\item $\omega_i|_\Omega>0$ for all $1\leq i\leq m$. In particular,
$\lambda|_\Omega>0$ for all $\lambda\in \Lambda_{>0}$.
\item $\Lambda_{\geq 0}=\mathbb{Z}_{\geq 0}\cdot \omega_1 +\ldots +\mathbb{Z}_{\geq 0}
\cdot \omega_m$. In particular, $\Lambda_{>0}\subset
\Lambda_{\geq 0}$.
\end{enumerate}
\end{lemma}
\begin{proof} (i) Fix $x\in \Omega$. Then $x=\sum_{\alpha\in \Delta_n^+} k_\alpha \check\alpha$
with $k_\alpha>0$. Now each $\alpha\in \Delta_n^+$ can be uniquely expressed as
$\alpha=\alpha_m +\gamma$ with $\gamma\in \mathbb{Z}_{\geq 0}[\Delta_k^-]$. Moreover if $\alpha=\beta$
is the highest root, then $\gamma\in \mathbb{Z}_{>0}[\alpha_1, \ldots, \alpha_{m-1}]$.
As $k_\beta>0$, the assertion follows.
\par (ii) Set $\Lambda_{\geq 0}'=\mathbb{Z}_{\geq 0}\cdot \omega_1 +\ldots +\mathbb{Z}_{\geq 0}
\cdot \omega_m$. We first show that $\Lambda_{\geq 0}'\subseteq \Lambda_{\geq 0}$.
For that let $\lambda\in \Lambda_{\geq 0}'$, say $\lambda=\sum_{i=1}^m k_i \omega_i$
with $k_i\in \mathbb{Z}_{\geq 0}$. As $\alpha_1, \ldots, \alpha_{m-1}$ constitute
a basis of $\Delta_k^-$, it follows that ${\langle \lambda, \alpha\rangle\over
\langle \alpha, \alpha\rangle}\in \mathbb{Z}_{\leq 0}$ for all $\alpha\in \Delta_k^+$.
Furthermore, $\lambda|_{\Omega}\geq 0$ by (i). Hence $\Lambda_{\geq 0}'\subseteq \Lambda_{\geq 0}$.
\par Finally we establish $\Lambda_{\geq 0}\subseteq \Lambda_{\geq 0}'$. For that
fix $\lambda\in \Lambda_{\geq 0}$. Then $\lambda=\sum_{i=1}^m k_i \omega_i$ with
some real numbers $k_i$. We have to show that $k_i\in \mathbb{Z}_{\geq 0}$.
Now $\lambda\in \Lambda_{\geq 0}$ means in particular that
${\langle \lambda, \alpha\rangle\over
\langle \alpha, \alpha\rangle}\in \mathbb{Z}_{\leq 0}$ for all $\alpha\in \Delta_k^+$.
Hence ${\langle \lambda, \alpha_i\rangle\over
\langle \alpha_i, \alpha_i\rangle}\in \mathbb{Z}_{\geq 0}$ for all $1\leq i\leq m-1$.
It remains to show that ${\langle \lambda, \alpha_m\rangle\over
\langle \alpha_m, \alpha_m\rangle}\in \mathbb{Z}_{\geq 0}$. Integrality is
clear. Also since $\mathbb{R}_{\geq 0}\cdot \check{\alpha_m}$
constitutes a boundary ray of the cone $\Omega$, non-negativity
follows.
\end{proof}
Define the {\it horospherical
Cauchy kernel} on $\Xi_+$ as the function
$$\mathcal{K}(\xi)={1\over a_H(\xi^{-1})^{-\omega_m} -1}\cdot
\prod_{j=1}^{m-1} {1\over
1 -a_H(\xi^{-1})^{\omega_j}} \qquad
(\xi\in \Xi_+)\ .$$
In view of Corollary \ref{cor=1} and Lemma \ref{lemma=pos}(i),
the function $\mathcal{K}$ is holomorphic, left $H$-invariant
and bounded on subsets of the form $GU\cdot\xi_0$ for $U\subset A_+$
compact.
This allows us to define for a function $f\in L^1(X)$ its
{\it horospherical Cauchy transform} by
$$\widehat{f} (\xi) =\int_{X} f(x) \cdot \mathcal{K}(x^{-1}\xi)\, dx
\qquad (\xi\in \Xi_+)\, .$$
We notice that the horospherical Cauchy transform is a
$G$-equivariant continuous map
$$L^1(X)\to \mathcal{O}(\Xi_+), \ \ f\mapsto \widehat{f}\, .$$
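The $G$-equivariance is immediate from the invariance of $dx$: writing
$(L(g)f)(x)=f(g^{-1}x)$ also for the left translation on $L^1(X)$, one computes
$$\widehat{L(g)f}(\xi)=\int_X f(g^{-1}x)\,\mathcal{K}(x^{-1}\xi)\, dx
=\int_X f(y)\,\mathcal{K}(y^{-1}g^{-1}\xi)\, dy
=\left(L(g)\widehat{f}\,\right)(\xi)$$
for $g\in G$ and $\xi\in \Xi_+$.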
\begin{remark} (a) The horospherical Cauchy kernel $\mathcal{K}$ is
tied to the geometry of the minimal cone $\Omega$:
there is no larger $\mathcal{W}_k$-invariant open convex cone
$\tilde\Omega$ such that $\mathcal{K}$ would be holomorphic
on $G\exp(\tilde\Omega)\cdot \xi_0$ (this follows from
Lemma \ref{lemma=pos} and (\ref{eq=br})). In this context we
wish to point out
the difference from the results of Section \ref{section=hor-II}
which are valid for a wider class of convex cones (cf. Remark
\ref{rem=oc}).
\par \noindent(b) For each $\lambda\in \Lambda_0$ and $\xi\in \Xi_+$
consider the complex hypersurface
$$L(\lambda, \xi)=\{z\in X_\mathbb{C}\mid a_H(\xi^{-1} z)^\lambda -1=0\}$$
in $X_\mathbb{C}$. Their intersection is $E(\xi)$
and they do not intersect $X$.
The singular set of the
horospherical Cauchy kernel is the union of the $m$ hypersurfaces
$L(\omega_i, \xi)$ and the
edge of this set is just $E(\xi)$. This means that if $f$
is the boundary value of a holomorphic
function on $D_+$ then $\widehat f$ is a residue on $E(\xi)$.
\end{remark}
\par The horospherical Cauchy transform can be decomposed into its constituents
associated to the elements $\lambda\in \Lambda_{>0}$. More precisely,
for $\lambda\in \Lambda_{>0}$ and $f\in L^1(X)$ let us define
$\widehat{f}_\lambda\in \mathcal{O}(\Xi_+)$ by
$$\widehat{f}_\lambda(\xi)=\int_X f(x)\cdot
a_H(\xi^{-1}x)^\lambda \, dx\, .$$
We will call the map $\lambda\mapsto \widehat{f}_\lambda\in \mathcal{O}(\Xi_+)$ the
{\it spherical holomorphic Fourier transform of $f$}.
\begin{lemma}\label{convergence} The following assertions
hold:
\begin{enumerate}
\item Let $U\subset A_+$ be a compact subset. The series
$$\sum_{\lambda\in \Lambda_{>0}} a_H(\xi^{-1})^\lambda \qquad (\xi\in\Xi_+)$$
converges uniformly on $GU\cdot \xi_0 \subset \Xi_+$.
\item For all $\xi\in \Xi_+$ one has
$$\sum_{\lambda\in \Lambda_{>0}} a_H(\xi^{-1})^\lambda =\mathcal{K}(\xi)
\, .$$
\end{enumerate}
\end{lemma}
\begin{proof} Uniform convergence on $GU\cdot\xi_0$
is immediate from Corollary \ref{cor=1} and
Lemma \ref{lemma=pos}. Summing up the geometric series one obtains
\begin{eqnarray*} \sum_{\lambda\in \Lambda_{>0}} a_H(\xi^{-1})^\lambda &=&
\sum_{k_1=\ldots=k_{m-1}=0}^\infty\sum_{k_m=1}^\infty
a_H(\xi^{-1})^{k_1 \omega_1+\ldots+k_m \omega_m} \\
&=& \left({1\over
1-a_H(\xi^{-1})^{\omega_m}}-1\right)\cdot
\prod_{j=1}^{m-1}{1\over
1-a_H(\xi^{-1})^{\omega_j}}\\
&=& {1\over
a_H(\xi^{-1})^{-\omega_m}-1}\cdot
\prod_{j=1}^{m-1}{1\over
1-a_H(\xi^{-1})^{\omega_j}}\\
&=&\mathcal{K}(\xi)\, .
\end{eqnarray*}
\end{proof}
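The geometric-series computation above is easy to sanity-check numerically. In the sketch below the quantities $a_H(\xi^{-1})^{\omega_j}$ are modeled by arbitrary complex numbers $q_j$ of modulus less than one (which is the bound Lemma \ref{lemma=pos} provides on $\Xi_+$); the particular values are illustrative only.

```python
import itertools

# Illustrative stand-ins for a_H(xi^{-1})^{omega_j}: any complex numbers
# of modulus < 1 suffice for checking the summation identity.
q = [0.5 + 0.2j, -0.3 + 0.4j, 0.1 - 0.55j]
m, N = len(q), 40  # truncate each index k_j at N

# Left-hand side: sum over lambda = k_1 w_1 + ... + k_m w_m with k_m >= 1.
lhs = sum(
    q[0] ** k[0] * q[1] ** k[1] * q[2] ** k[2]
    for k in itertools.product(range(N), repeat=m)
    if k[-1] >= 1
)

# Right-hand side: the closed form of the horospherical Cauchy kernel.
rhs = 1.0 / (q[-1] ** (-1) - 1.0)
for qj in q[:-1]:
    rhs *= 1.0 / (1.0 - qj)

print(abs(lhs - rhs) < 1e-8)  # True up to truncation error
```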
We conclude from Lemma \ref{convergence} that
the horospherical Cauchy transform of a function $f\in L^1(X)$ can be
decomposed as
$$\widehat{f}=\sum_{\lambda\in \Lambda_{>0}} \widehat{f}_\lambda $$
with the right hand side converging uniformly on compacta.
\begin{remark} We wish to point out that the
horospherical Cauchy kernel is a product of
geometric rather than functional-analytic reasoning.
We emphasize that in general not all parameters
$\lambda\in \Lambda_{>0}$ in the decomposition of the horospherical
Cauchy kernel correspond to unitarizable lowest weight modules
(see Remark \ref{rem=la} below for a more detailed
discussion).
\end{remark}
\subsection{Holomorphic Fourier transform on
lowest weight representations}
The objective of this subsection is to give a more
detailed discussion of the holomorphic Fourier transform for
functions $f\in L^2(X)$ which are contained in a lowest weight module.
\par To begin with we collect some material
on spherical unitary lowest weight representations. A reasonable source
might be the overview article \cite{ko}.
\par Let $(\pi_\lambda, {\mathcal H}_\lambda)$ be a non-trivial
$H$-spherical unitary lowest weight
representation of $G$. As before we denote by $v_\lambda$ a normalized
lowest weight vector. Write $v_H$ for the unique $H$-fixed distribution
vector which satisfies $\langle v_\lambda, v_H\rangle =1$.
We record the fundamental
identity
\begin{equation}\label{eq=mc}
a_H(x)^\lambda=\langle \pi_\lambda(x)v_H, v_\lambda\rangle\qquad (x\in X)\, ,
\end{equation}
which allows us to link our geometric discussion in Section
\ref{section=hor-II} with representation theory.
\begin{remark} It follows from Corollary \ref{cor=D} that
$a_H^\lambda$ admits a holomorphic extension to the minimal
tube $D_-$. Traditionally this fact was explained via
(\ref{eq=mc}) in the context of holomorphic extension of
unitary lowest weight modules (see
\cite{n99}). We wish to point
out that Corollary \ref{cor=D} asserts more,
namely that $a_H^\lambda|_{D_-}$ is bounded by $1$.
In addition Corollary \ref{cor=D} is more geometric, i.e.
not restricted to unitary parameters $\lambda$.
\end{remark}
Pairing the $G$-module of smooth vectors $\mathcal{H}_\lambda^\infty $
with $v_H$ yields the $G$-equivariant embedding
\begin{equation}\label{eq=Xembed}
\iota: {\mathcal H}_\lambda^\infty \to C^\infty (X), \ \ v\mapsto
\left(x\mapsto
\langle \pi_\lambda(x^{-1})v, v_H\rangle\right)\, .\end{equation}
We say that $\pi_\lambda$ is {\it $X$-square integrable} if
there exists a constant $d_s(\lambda)>0$, the
{\it spherical formal dimension} (cf. \cite{k}),
such that $\sqrt{d_s(\lambda)}\cdot \iota$
extends to an isometric map $\mathcal{H}_\lambda\to L^2(X)$.
\par $X$-square
integrable parameters $\lambda$ are characterized by the condition
\cite{oo91}
\begin{equation}\label{par2}
\langle\lambda-\rho, \alpha\rangle>0 \qquad \text {for all $\alpha\in \Delta_n^+$}\ .
\end{equation}
Here $\rho={1\over 2}\sum_{\alpha\in \Delta^+}
m_\alpha \alpha$ with $m_\alpha=\dim_\mathbb{C} \mathfrak{g}_\mathbb{C}^\alpha$.
\par Likewise we say $\pi_\lambda$ is {\it $X$-integrable} if
$\iota(\mathcal{H}_\lambda^{\rm K-fin})\subset L^1(X)$. Integrability
is described by the inequality
\begin{equation}\label{par1}
\langle\lambda-2\rho, \alpha\rangle>0
\qquad \text {for all $\alpha\in \Delta_n^+$}\ .
\end{equation}
\par The set of parameters
$\lambda\in \Lambda_{>0}$ which satisfy condition (\ref{par1}), resp.
(\ref{par2}), shall be denoted by $\Lambda_1$, resp. $\Lambda_2$.
Note that $\Lambda_1\subset \Lambda_2$.
\begin{remark}\label{rem=la} We will
discuss the lattice $\Lambda_{>0}$ with regard to $\Lambda_1$ and
$\Lambda_2$. One recognizes a strong dependence
on the multiplicities $m_\alpha$ which we will exemplify for
three basic cases below. Recall that elements $\lambda\in
\Lambda_{>0}$ are described by $\lambda=\sum_{i=1}^m \lambda_i
\omega_i$ with $\lambda_i\in \mathbb{Z}_{\geq 0}$ and $\lambda_m>0$.
In addition let us keep in mind that conditions
(\ref{par2}) and (\ref{par1}) are equivalent to
$\langle\lambda-\rho, \alpha_m\rangle>0$, resp.
$\langle\lambda-2\rho, \alpha_m\rangle>0$.
\par The equal rank case: In this situation one has $\mathfrak{t}=\mathfrak{c}$ and
$m_\alpha=1$ for all $\alpha$. Thus
$\rho={1\over 2}\sum_{i=1}^m \omega_i$ and therefore
$\lambda-\rho=\sum_{i=1}^m (\lambda_i-{1\over 2})\omega_i$.
In particular $\langle \lambda-\rho,\alpha_m\rangle =
\langle \alpha_m, \alpha_m\rangle (\lambda_m-{1\over 2})$ and thus
$\Lambda_{>0}\subset \Lambda_2$ as $\lambda_m\geq 1$ for elements
$\lambda\in \Lambda_{>0}$.
\par The group case: In this situation one has
$m_\alpha =2$ for all $\alpha$ and so
$\rho=\sum_{i=1}^m \omega_i$. Accordingly we obtain
$\langle \lambda-\rho, \alpha_m\rangle =
\langle \alpha_m, \alpha_m\rangle (\lambda_m-1)$.
It follows that $\Lambda_{>0}$ parameterizes
the holomorphic discrete series and their limits; in particular
$\Lambda_2 \subset \Lambda_{>0}$.
\par The rank one case: Here one has $\Lambda_{>0}=\mathbb{Z}_{>0}\cdot \omega$
and $\rho={{m_\alpha}\over 2}\alpha$. Thus
$\Lambda_2=(\mathbb{Z}_{>0}+ \left[{m_\alpha\over 2}\right])\omega$ and
$\Lambda_2\subset \Lambda_{>0}$ with equality precisely for
$m_\alpha=1$, i.e. $\mathfrak{g}=\mathfrak{sl}(2,\mathbb{R})$.
\end{remark}
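The rank-one description of $\Lambda_2$ is elementary arithmetic and can be checked mechanically (the cutoff $50$ below is an illustrative bound, not part of the statement):

```python
# Check: for lambda in Z_{>0} (coefficient of omega), the condition
# <lambda - rho, alpha> > 0 with rho = (m_alpha/2) alpha reads
# lambda > m_alpha/2, i.e. lambda in Z_{>0} + floor(m_alpha/2).
for m_alpha in range(1, 11):
    cond = {lam for lam in range(1, 51) if lam > m_alpha / 2}
    shifted = set(range(m_alpha // 2 + 1, 51))
    assert cond == shifted
print("ok")
```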
\par For $\lambda\in \Lambda_2$ we set $L^2(X)_\lambda=\iota(\mathcal{H}_\lambda)$.
\begin{lemma}\label{lemma=Schur} Let $\lambda, \mu\in \Lambda_2$. Fix $v\in \mathcal{H}_\lambda$
and define $f(x)=\langle \pi_\lambda (x^{-1})v, v_H\rangle\in L^2(X)_\lambda$.
Then for all $\xi\in \Xi_+$, the function
$$X\to \mathbb{C},\ \ x\mapsto f(x)a_H(\xi^{-1}x)^\mu$$
is integrable and
\begin{equation}
\label{eq=Int} \int_X f(x)a_H(\xi^{-1}x)^\mu \, dx=
{\delta_{\lambda\mu}\over
d_s(\lambda)}
\langle v, \pi_\lambda(\overline \xi)v_\lambda\rangle\ .
\end{equation}
Here
$$\pi_\lambda(\overline \xi)v_\lambda=a^{-\lambda}
\pi_\lambda(g)v_\lambda\in \mathcal{H}_\lambda \qquad\hbox{for
$\xi=ga\cdot \xi_0$, $g\in G$ and $a\in A_+$}\ .$$
\end{lemma}
\begin{proof} Fix $\xi\in \Xi_+$. Holomorphic extension of
(\ref{eq=mc}) yields
$$a_H(\xi^{-1} g)^\mu =\langle \pi_\lambda(g)v_H, \pi_\lambda(\overline \xi)
v_\lambda\rangle$$
for all $g\in G$ and $\xi\in \Xi_+$. It follows that
$x\mapsto a_H(\xi^{-1}x)^\mu$ is square integrable on $X$. Thus
$x\mapsto f(x)a_H(\xi^{-1}x)^\mu$ is integrable.
Finally we apply Schur-orthogonality
(cf.\ \cite{k}, Prop. 3.2) and obtain
\begin{eqnarray*}
\int_{X}f(x) a_H(\xi^{-1}x)^\mu \, dx
& =& \int_{X}\langle \pi_\lambda (x^{-1})v, v_H^\lambda\rangle
\langle \pi_\mu (x)v_H^\mu, \pi_\mu(\overline \xi)v_\mu\rangle
\, dx\\
& =& \int_{X}\langle \pi_\lambda (x^{-1})v, v_H^\lambda\rangle
\overline{\langle \pi_\mu (x^{-1})\pi_\mu(\overline {\xi} )v_\mu, v_H^\mu\rangle}
\, dx\\
& =& {\delta_{\mu\lambda}\over d_s(\lambda)}
\langle v, \pi_\lambda(\overline \xi)v_\lambda\rangle\, .
\end{eqnarray*}
\end{proof}
For $\lambda\in \Lambda_1$ let us write
$L^1(X)_\lambda$ for the closure of
$\iota(\mathcal{H}_\lambda^{\rm K-fin})$ in $L^1(X)$.
The next lemma
can be understood as an $L^1$-version of Schur-orthogonality
for the Cauchy-transform.
\begin{lemma}\label{lem1} Let $\lambda\in \Lambda_1$. Then
$$\widehat{f} =\widehat{f}_\lambda\qquad \hbox {\rm for all $f\in L^1(X)_\lambda$}\, .$$
\end{lemma}
\begin{proof} Fix $f\in L^1(X)_\lambda$.
We have to show that $\widehat{f}_\mu=0$ for all
$\mu\in \Lambda_{>0}\backslash \{\lambda\}$. For
$\mu\in \Lambda_2$ this is a consequence of
Lemma \ref{lemma=Schur}.
Therefore, we may assume that
$\mu\in \Lambda_{>0}\backslash \Lambda_2$. This means
that condition (\ref{par2}) is violated, which
we express as
\begin{equation} \label{parin}
\langle \mu-\rho,\alpha_m\rangle \leq 0\, .\end{equation}
\par We now show that
$$\widehat{f}_\mu(\xi)=\int_X f(x) a_H(\xi^{-1}x)^\mu\ dx =0 \qquad
\text{for all $\xi\in \Xi_+$}\, .$$
The function above admits boundary values on $G/M\subset \partial \Xi_+$,
and it will be sufficient to prove that
$$\widehat{f}_\mu(gM)=\int_X f(x) a_H(g^{-1}x)^\mu\ dx =0 \qquad
\text{for all $g\in G$}\, .$$
We compute
\begin{eqnarray*}
\widehat{f}_\mu(gM)&=&\int_X f(x) a_H(g^{-1}x)^\mu\ dx\\
&=&\int_X f(gx) a_H(x)^\mu\ dx\\
&=&\int_T \int_X f(tgx) a_H(tx)^\mu\ dx\, dt\\
&=&\int_X \left(\int_T t^\mu f(tgx)\, dt\right)
a_H(x)^\mu\ dx\, .
\end{eqnarray*}
To arrive at a contradiction, suppose that
$\int_T t^\mu f(tgx)\, dt\neq 0$. This
can only happen if $\mu$ belongs to the $T$-weight spectrum of $\pi_\lambda$.
Now the $T$-weights of $\pi_\lambda$ are contained
in $\lambda+\mathbb{Z}_{\geq 0}[\Delta^+]$. Thus $\mu=\lambda+\gamma$ for some
$\gamma\in \Delta^+$.
But then
$$\langle \mu-\rho, \alpha_m\rangle =\langle \lambda-\rho, \alpha_m\rangle +
\langle \gamma, \alpha_m\rangle\, .$$
Observe that both summands on the right hand side
are positive, which yields the desired contradiction to (\ref{parin}).
\end{proof}
\begin{remark}\label{rem0} (Analytic continuation) Let $\lambda\in \Lambda_1$ and $f\in L^1(X)_\lambda
\cap L^2(X)_\lambda$. Write $f(x)=\langle \pi_\lambda(x^{-1})v, v_H\rangle $
for some $v\in \mathcal{H}_\lambda$.
Then Lemma \ref{lemma=Schur} and Lemma \ref{lem1} imply
\begin{equation}\label{eq=44}
\widehat{f}(\xi) ={1\over d_s(\lambda)}
\langle v, \pi_\lambda(\overline \xi)v_\lambda\rangle\, .
\end{equation}
Clearly, the right hand side makes sense for all
$v\in \mathcal{H}_\lambda$ and all $X$-square integrable parameters
$\lambda\in \Lambda_2$. We now explain how passing
to parameters $\lambda\in \Lambda_2$ in (\ref{eq=44})
has a natural explanation in terms of
analytic continuation.
For that let $\tilde G$ denote the universal cover of $G$. Write
$\tilde \Lambda_1$,
$\tilde \Lambda_2$ for the sets of $\tilde G$-integral parameters
which satisfy (\ref{par1}), resp. (\ref{par2}).
Clearly $\Lambda_{1,2}\subset \tilde\Lambda_{1,2}$. The effect
of passing to the universal cover is that the parameter
spaces involved become continuous in the central variable,
i.e. there exist constants $0<c_2<c_1$ such that
$$\tilde \Lambda_1|_{\mathbb{R} Z_0}=]c_1,\infty[\cdot (\alpha_m|_{\mathbb{R} Z_0})
\quad\hbox{and}\quad\tilde \Lambda_2|_{\mathbb{R} Z_0}=
]c_2,\infty[\cdot (\alpha_m|_{\mathbb{R} Z_0})\ .$$
By the concrete formula for $d_s(\lambda)$ from \cite{k}, Th. 4.15,
we know that $\lambda\mapsto d_s(\lambda)$ is a meromorphic function
on $\mathfrak{a}_\mathbb{C}^*$ which is positive on $\tilde \Lambda_2$.
Now familiar techniques show
that the assignment $\tilde\Lambda_2\ni\lambda\mapsto{1\over d_s(\lambda)}
\langle v, \pi_\lambda(\overline \xi)v_\lambda\rangle\in \mathbb{C} $ becomes
analytic in the central variable (the Shapovalov
form is polynomial in $\lambda$ and in
\cite{k99} it is explained how to make consistent analytic
choices for $v$ and $v_\lambda$
in dependence of the central coordinate of $\lambda$).
\end{remark}
Motivated by Remark \ref{rem0}
we define the horospherical Cauchy transform
for functions $f\in L^2(X)_\lambda$, $\lambda\in \Lambda_2$ by
$$\widehat{f}=\widehat{f}_\lambda\, .$$
\subsection{Hyperfunctions and generalized matrix coefficients}
In order to discuss the horospherical Cauchy transform and its inverse
in a more comprehensive way
we need some results on the analytic continuation of
generalized matrix coefficients of lowest weight representations. Proofs
of the facts cited below can be found in \cite{KNO}.
\par Let $(\pi, \mathcal{H})$ be a unitary
lowest weight representation of $G$. Write
$\mathcal{H}^\omega$ and $\mathcal{H}^{-\omega}$ for the associated
$G$-modules of analytic, resp. hyperfunction vectors.
The nature of the $T$-spectrum of $\pi$ shows that
$\pi|_{T}$ extends holomorphically to $\mathcal{T}_-$. Moreover
the self-adjoint operators
$\pi(a)$, $a\in A_-$, so obtained are of trace class and strongly mollifying, i.e.
\begin{equation}\label{eq=moll}
\pi(a)\mathcal{H}^{-\omega}\subset \mathcal{H}^\omega \qquad (a\in A_-)\, .\end{equation}
\par Assume that $\pi$ is $H$-spherical and denote
by $v_H$ the (up to scalar) unique $H$-fixed distribution vector.
Let $v\in \mathcal{H}^{-\omega}$ be a hyperfunction vector.
We wish to interpret
the generalized matrix coefficient $f(x)=\langle\pi(x^{-1})v, v_H\rangle$
as a generalized function on $X=G/H$.
It follows essentially from (\ref{eq=moll})
that the prescription
\begin{equation} \label{eq=he}
\tilde f(ga\cdot x_0)=\langle \pi(g^{-1})\pi(a^{-1})v, v_H\rangle \qquad
\hbox{for $g\in G$ and $a\in A_+$}\end{equation}
defines a holomorphic function on $D_+=GA_+\cdot x_0$.
The minimal tube $D_+$ has $X$ as edge
and this allows us to interpret $f$ as the boundary
value of $\tilde f$. Henceforth we will identify
$f$ with the holomorphic function $\tilde f$.
\par Suppose that $\mathcal{H} \subset L^2(X)$, i.e. $\pi=\pi_\lambda$
with $\lambda\in \Lambda_2$ and $\mathcal{H}=L^2(X)_\lambda$.
We now show how the horospherical Cauchy transform restricted
to $L^2(X)_\lambda$ can be extended to $L^2(X)_\lambda^{-\omega}\subset
\mathcal{O}(D_+)$. In other words for $\xi\in \Xi_+$
we wish to give
meaning to
\begin{eqnarray*}
\widehat{f}(\xi)&=&\int_X f(x) a_H(\xi^{-1}x)^\lambda\ dx\\
&=& \int_X \langle \pi(x^{-1})v, v_H\rangle a_H(\xi^{-1}x)^\lambda\ dx
\end{eqnarray*}
as a holomorphic function on $\Xi_+$.
Express $\xi$ as $\xi=ga\cdot \xi_0$ with
$g\in G$ and $a\in A_+$. By the usual holomorphic change
of variables one obtains that
\begin{eqnarray*}
\widehat{f}(\xi)&=&\int_X f(x) a_H(a^{-1}g^{-1}x)^\lambda \ dx \\
&=& \int_X \langle \pi(x^{-1})\pi(g^{-1})\pi(a^{-1})v, v_H\rangle a_H(x)^\lambda
\ dx\ .
\end{eqnarray*}
Now the last expression is well defined by (\ref{eq=moll}).
Of course one has
\begin{equation}\label{eq=lala}
\widehat{f}(\xi)={1\over d_s(\lambda)} \langle \pi(a^{-1}) \pi(g^{-1})
v, v_\lambda\rangle \end{equation}
by the same argument as in Lemma \ref{lemma=Schur}.
Thus we have shown that the horospherical Cauchy transform on $L^2(X)_\lambda$
extends to a $G$-equivariant continuous map
$$L^2(X)_\lambda^{-\omega}\to \mathcal{O}(\Xi_+)_\lambda\,. $$
\par We conclude this section with a conjecture
related to the holomorphic intertwining of $\mathcal{O}(D_+)$ and
$\mathcal{O}(\Xi_+)$. It can be seen as a holomorphic analogue of
Helgason's conjecture (actually a theorem by \cite{K-}).
\par In order to state the conjecture some new terminology is
needed.
Let us call a holomorphic function
$f$ on $\Xi_+$ {\it bounded away from the boundary} if
its restriction to $g\mathcal{T}_+a$ is bounded for all
choices of $g\in G$ and $a\in A_+$. We denote by
$\mathcal{O}_{\rm b.a.b.}(\Xi_+)$ the space of all holomorphic functions
on $\Xi_+$ which are bounded away from the boundary. Note that
$\mathcal{O}_{\rm b.a.b.}(\Xi_+)$ is a closed $G$-subspace of the
Fr\'echet space $\mathcal{O}(\Xi_+)$.
\begin{conjecture}\label{con=1} Let ${\Bbb D}(X)$ be the algebra
of $G$-invariant differential operators on $X$. Naturally
we can view ${\Bbb D}(X)$ as holomorphic differential
operators on $X_\mathbb{C}$. Write $\mathcal{O}(D_+)_\lambda$ for the
common holomorphic ${\Bbb D}(X)$-eigenfunctions on $D_+$ with infinitesimal
character $\lambda-\rho$. Let $\lambda\in \Lambda_2$.
We conjecture
\begin{equation}
\mathcal{O}_{\rm b.a.b.}(D_+)_\lambda=L^2(X)_\lambda^{-\omega}\ . \end{equation}
Notice that the inclusion ``$\supset$'' is clear by
(\ref{eq=moll}).
\par We have already remarked that $\mathcal{O}(\Xi_+)_\lambda\simeq
\mathcal{H}_\lambda^{-\omega}$ \cite{KNO}. Hence our conjectured equality
means that the horospherical Cauchy transform induces
an intertwining isomorphism $\mathcal{O}_{\rm b.a.b.}(D_+)_\lambda\to \mathcal{O}(\Xi_+)_\lambda$.
\par It is also an interesting problem to formulate (and prove)
the conjecture for other parameters.
\end{conjecture}
\begin{remark} We illustrate Conjecture \ref{con=1} for the one-sheeted
hyperboloid $X= {\rm Sl}(2,\mathbb{R})/ {\rm SO} (1,1)$.
Fix $\lambda\in 2\mathbb{Z}_{>0}=\Lambda_2$ and denote by
$\mathcal{H}_\lambda$, resp. $\mathcal{H}_{-\lambda}$, the lowest (resp. highest)
weight module of $G={\rm Sl}(2,\mathbb{R})$ with lowest (resp. highest) weight
$\lambda$, resp. $-\lambda$. Denote by $V_{\lambda-2}$ the
finite dimensional $G$-module of highest weight $\lambda-2$.
Write $C^\infty(X)_{\lambda}$ for the
${\Bbb D}(X)$-eigenspace with eigenvalue $\lambda-1$.
Then
$$C^\infty(X)_\lambda \simeq
\mathcal{H}_\lambda^\infty \oplus \mathcal{H}_{-\lambda}^\infty \oplus V_{\lambda-2}\, .$$
Now, the functions of $\mathcal{H}_{\pm \lambda}^\infty $ extend holomorphically
to $D_\pm$ but not beyond, while the functions
of $V_{\lambda-2}$ extend holomorphically to all of $X_\mathbb{C}$.
One deduces that $\mathcal{O}(D_+)_\lambda=\mathcal{H}_\lambda^{-\omega}\oplus
V_{\lambda-2}$. Finally, the holomorphic
functions in $V_{\lambda-2}$ grow exponentially at infinity and hence
are not bounded away from the boundary.
Thus $\mathcal{O}_{\rm b.a.b.}(D_+)_\lambda =L^2(X)_\lambda^{-\omega}$ as conjectured.
\end{remark}
\subsection{Inversion of the horospherical Cauchy transform} To begin with
we first have to explain certain facts on
incidence geometry between the Shilov boundary $X$ of $D_+$ and the
boundary piece $G/M$ of $\Xi_+$.
\par We keep in mind that we realized
$G/M$ in the boundary of $\Xi_+$ by
$G/M\simeq G\cdot \xi_0\subset \partial \Xi_+$.
\par Recall the orbits $S(z)\subset \Xi$ from
(\ref{eq=S}). For a point $x\in X$ we
define the real form of $S(x)$ by
$$S_\mathbb{R}(x)=S(x)\cap G/M\ .$$
In view of the incidence relation (\ref{eq=indi}),
one has
$$S_\mathbb{R}(x)=\{ \xi\in G/M\mid \xi\in S(x)\}=
\{ \xi\in G/M\mid x\in E(\xi)\}\, .$$
\begin{lemma}\label{lemma=fiber} Let $x=g\cdot x_0\in X$, $g\in G$. Then
$$S_\mathbb{R}(x)=gH\cdot \xi_0\simeq H/M\ .$$
\end{lemma}
\begin{proof} First notice that for $x=g\cdot x_0$ with $g\in G$ one has
$S_\mathbb{R}(x)= g\cdot S_\mathbb{R}(x_0)$. Hence it suffices to show that
$$S_\mathbb{R} (x_0)=H\cdot \xi_0\simeq H/M\ .$$
Let $\xi\in S_\mathbb{R}(x_0)$ and write $\xi=y\cdot \xi_0$ for some
$y\in G$. We have to show that $y\in H$ and that
$y$ is uniquely determined modulo $M$. First observe that
$\xi\in S(x_0)$ means $yN_\mathbb{C} \subset H_\mathbb{C} N_\mathbb{C}$
and so
$y\in H_\mathbb{C} N_\mathbb{C} \cap G$.
Now $H_\mathbb{C} N_\mathbb{C} \cap G=H$ implies $y\in H$. Finally, uniqueness modulo $M$
is immediate from Lemma \ref{l-12one}.
\end{proof}
It is possible to view the boundary orbits $S_\mathbb{R}(x)$ as
certain limits.
For $z=ga\cdot x_0\in D_+$ we define
$$S_\mathbb{R}(z)=gaH\cdot \xi_0\simeq H/M\, .$$
We note that $S_\mathbb{R}(z)\subset \Xi_+$ by
(\ref{newxi}). Furthermore there is the
obvious limit relation
$$\lim_{a\to {\bf 1}\atop a\in A_+} S_\mathbb{R}(ga\cdot x_0)=S_\mathbb{R}(g\cdot x_0)\, .$$
\par Write $d_z(\xi)$ for the measure on $S_\mathbb{R}(z)$ which is
induced from a Haar measure $d(hM)$ on $H/M$ via the identification
$S_\mathbb{R}(z)\simeq H/M$. Define the space of {\it fiber integrable}
holomorphic functions on $\Xi_+$ by
$$\mathcal{O}_{\rm f.i.}(\Xi_+)=\{ \phi\in\mathcal{O}(\Xi_+)\mid D_+ \ni z\mapsto
\int_{S_\mathbb{R}(z)} |\phi(\xi)|\, d_z(\xi) \quad \hbox{is locally bounded}\}\ .$$
For a function $\phi\in \mathcal{O}_{\rm f.i.}(\Xi_+)$ we define its
{\it inverse horospherical transform} $\phi^\vee\in \mathcal{O}(D_+)$
by
$$\phi^\vee(z)=\int_{S_\mathbb{R}(z)} \phi(\xi) \, d_z(\xi) \qquad
(z\in D_+)\, .$$
We note that
$$\mathcal{O}_{\rm f.i.}(\Xi_+)\to \mathcal{O}(D_+), \ \ \phi\mapsto \phi^\vee$$
is a $G$-equivariant continuous map.
\par Finally, we define a subset $\Lambda_c\subset \Lambda_2$ of large
parameters by
$$\Lambda_c=\{ \lambda\in \Lambda_2\mid (\forall \alpha\in \Delta_n^+)
\ (\lambda -\rho)(\check\alpha)> 2 - m_\alpha\}\, . $$
The inversion formula for the horospherical Cauchy transform is based on the following key result.
\begin{lemma} Let $\lambda\in \Lambda_c$. Let
$f\in L^2(X)_\lambda^{-\omega}\subset\mathcal{O}(D_+)$.
Then $\widehat{f}\in \mathcal{O}_{\rm f.i}(\Xi_+)$
and
\begin{equation}\label{eq=inversion}
f(z)=d(\lambda)\cdot \int_{S_\mathbb{R}(z)} \widehat{f}(\xi) \ d_z(\xi)
\qquad (z\in D_+)\, .\end{equation}
In other words, $f=d(\lambda)\cdot (\widehat{f})^\vee$.
\end{lemma}
\begin{proof} Let $f(ga\cdot x_0)=
\langle \pi_\lambda(a^{-1})\pi_\lambda(g^{-1})v, v_H\rangle$
for some $v\in {\mathcal H}_\lambda^{-\omega}$.
Then by (\ref{eq=lala})
$$\widehat{f}(\xi) ={1\over d_s(\lambda)}
\langle v, \pi_\lambda (\overline{\xi})v_\lambda\rangle\, .$$
As $\lambda\in \Lambda_c$, \cite{k}, Th. 2.16 and Th. 3.6, imply that
$$\int_{H/M}\pi_\lambda(h)v_\lambda \, d(hM)= {d_s(\lambda)\over d(\lambda)}\cdot v_H$$
with the left hand side understood as convergent
$\mathcal{H}_\lambda^{-\omega}$-valued integral.
Thus with $z=ga\cdot x_0$ one obtains that
\begin{eqnarray*} \int_{S_\mathbb{R}(z)}\widehat{f}(\xi)\, d_z(\xi) & =& \int_{H/M}
{1\over d_s(\lambda)}
\langle \pi_\lambda(a^{-1})\pi_\lambda (g^{-1})v, \pi_\lambda(h)v_\lambda\rangle
\, d(hM)\\
&=& {1\over d(\lambda)}\cdot f(z)\, ,
\end{eqnarray*}
completing the proof of the lemma.
\end{proof}
\begin{remark} If $\lambda\in \Lambda_2
\backslash \Lambda_c$
and $0\neq f\in L^2(X)_\lambda^{-\omega}$, then
the integral $\int_{S_\mathbb{R}(z)} \widehat{f}(\xi)\, d_z(\xi)$ does not converge.
However, using the results from \cite{k} it can be shown
that the identity (\ref{eq=inversion}) can be
analytically continued (cf. Remark \ref{rem0})
to all $\lambda\in \Lambda_2$.
Henceforth we understand (\ref{eq=inversion})
as an identity valid for all $\lambda\in \Lambda_2$.
\end{remark}
The formal dimension $d(\lambda)$ is a polynomial in $\lambda$,
explicitly given by
\cite{hc55}
\begin{equation}\label{***}d(\lambda)=
c \cdot
\prod_{\alpha\in \Sigma^+} \langle \lambda-\rho(\mathfrak{c}), \alpha\rangle
\end{equation}
with $c\in \mathbb{R}$ a constant depending on the normalization
of measures.
\par The right action of $T$ on $\Xi_+$ induces an identification
of ${\mathcal U} (\mathfrak{t}_\mathbb{C})$ with
$G$-invariant differential operators on $\Xi_+$.
As usual we identify ${\mathcal U}(\mathfrak{t}_\mathbb{C})$ with polynomial functions on
$\mathfrak{t}_\mathbb{C}$. In this
way $d(\lambda)$ corresponds to a $G$-invariant
differential operator $\mathcal {L}$ on $\Xi_+$ which acts
along the fibers of $\Xi_+\to F_+$ and has constant
coefficients in logarithmic coordinates.
In particular,
$$\mathcal {L} \phi = d(\lambda) \cdot \phi \qquad (\phi\in \mathcal{O}(\Xi_+)_\lambda)\ .$$
Combining this fact with equation (\ref{eq=inversion}) we obtain the
main result of this paper.
\begin{theorem} Let
$f\in \sum_{\lambda\in \Lambda_2} L^2(X)_\lambda^{-\omega}\subset \mathcal{O}(D_+)$.
Then
$$ f = (\mathcal {L} \widehat{f})^\vee\ .$$
\end{theorem}
\section{The example of the hyperboloid of one sheet}
\noindent
This section is devoted to the discussion of the
case $G={\rm Sl}(2,\mathbb{R})$ and $H={\rm SO}(1,1)$. Notice that
$G/H\simeq {\rm SO}_e(2,1)/{\rm SO}_e(1,1)$. For what follows it
is harmless to assume that $G={\rm SO}_e(2,1)$ and
$H={\rm SO}_e(1,1)$,
although the universal complexification of
$G={\rm SO}_e(2,1)$ is not simply connected.
\par The map
$$G/H\to \mathbb{R}^3, \ \ gH\mapsto g\cdot\left[\begin{matrix}1\\ 0\\ 0\end{matrix}\right]$$
identifies $X=G/H$ with the one sheeted hyperboloid
$$X=\{ x=(x_1, x_2, x_3)^T\in \mathbb{R}^3\mid x_1^2 + x_2^2 -x_3^2 =1\}\ .$$
The base point $x_0$ becomes $(1,0,0)^T$.
Let us define a complex bilinear pairing on $\mathbb{C}^3$ by
$$\langle z, w\rangle = z_1w_1+ z_2w_2 -z_3w_3 \qquad \text{for}\quad
z=\left[\begin{matrix}z_1\\ z_2\\ z_3\end{matrix}\right], w=\left[\begin{matrix}w_1\\ w_2\\ w_3\end{matrix}\right]\in \mathbb{C}^3\, .$$
If we set $\Delta(z)=\langle z, z\rangle $ for $z\in \mathbb{C}^3$, then
$X=\{ x\in \mathbb{R}^3\mid \Delta(x)=1\}$.
Further one has $G_\mathbb{C}={\rm SO}(2,1; \mathbb{C})\simeq {\rm SO}(3,\mathbb{C})$
and $H_\mathbb{C}={\rm SO}(1,1;\mathbb{C})\simeq {\rm SO}(2,\mathbb{C})$. Clearly
$$X_\mathbb{C}=G_\mathbb{C}/ H_\mathbb{C} =\{ z\in \mathbb{C}^3\mid \Delta(z)=1\}\ .$$
\par Our choice of $T$ will be
$$T=K=\left\{ \left[\begin{matrix} \cos \theta & \sin \theta & 0\\ -\sin\theta & \cos \theta & 0\\
0 & 0 & 1\end{matrix}\right]\mid \theta\in \mathbb{R}\right\}\ .$$
In particular $\mathfrak{a}=\mathbb{R} U_0$ where
$$U_0=\left[\begin{matrix} 0 & i & 0\\ -i & 0 & 0\\
0 & 0 & 0 \end{matrix}\right]$$
and $\Delta=\Delta_n=\{ \alpha, -\alpha\}$ with $\alpha(U_0)=1$. If we
demand $\alpha$ to be the positive root, then
$$N_\mathbb{C} =\left\{ \left[ \begin{matrix} 1-{z^2\over 2} & i{z^2\over 2} & iz\\
i{z^2\over 2} & 1+{z^2\over 2} & z\\
iz & z & 1\end{matrix}\right]\mid z\in \mathbb{C}\right\}\ .$$
The homogeneous space $G_\mathbb{C}/M_\mathbb{C} N_\mathbb{C} $ naturally
identifies with the isotropic vectors $\Xi=\{\zeta\in
\mathbb{C}^3\backslash\{0\} \mid \Delta(\zeta)=0\}$
via the $G_\mathbb{C}$-equivariant map
$$G_\mathbb{C}/ M_\mathbb{C} N_\mathbb{C}\to \Xi,\ \ gM_\mathbb{C} N_\mathbb{C}\mapsto g\cdot\zeta_0 \qquad \text{where}
\quad \zeta_0=\left[\begin{matrix} 1 \\ -i\\ 0\end{matrix}\right].$$
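As a numerical sanity check (with an arbitrary illustrative parameter), one can verify that the displayed matrices preserve the form $\Delta$ and stabilize $\zeta_0$, so that the map $gM_\mathbb{C} N_\mathbb{C}\mapsto g\cdot \zeta_0$ is indeed constant on $N_\mathbb{C}$-cosets:

```python
import numpy as np

# Check (illustrative parameter z): the displayed matrix n(z) preserves the
# bilinear form Delta, i.e. lies in SO(2,1;C), and fixes zeta_0 = (1,-i,0)^T.
J = np.diag([1.0, 1.0, -1.0]).astype(complex)   # Gram matrix of <.,.>
zeta0 = np.array([1.0, -1.0j, 0.0])

def n(z):
    return np.array([
        [1 - z ** 2 / 2, 1j * z ** 2 / 2, 1j * z],
        [1j * z ** 2 / 2, 1 + z ** 2 / 2, z],
        [1j * z, z, 1.0],
    ])

M = n(0.37 - 1.2j)                               # arbitrary complex parameter
print(np.allclose(M.T @ J @ M, J))               # True: n(z) preserves the form
print(np.allclose(M @ zeta0, zeta0))             # True: n(z) fixes zeta_0
```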
The correspondence between elements
of $\zeta\in \Xi$ and horospheres on $X_\mathbb{C}$ is
explicitly given by
$$\zeta \leftrightarrow E(\zeta)=\{ z\in X_\mathbb{C}\mid \langle z, \zeta\rangle =1\}\ .$$
Elements $\zeta\in \Xi$ can be expressed as $\zeta=\xi+i \eta$ with
$\xi, \eta\in \mathbb{R}^3\backslash\{0\}$ subject to
$$\Delta(\xi)=\Delta(\eta) \qquad \text{and}\qquad \langle \xi, \eta\rangle =0\, .$$
A simple computation yields
$$\Xi_+=\{ \zeta=\xi+i\eta\in \Xi\mid \Delta(\xi)=\Delta(\eta)>1\}\, $$
and
$$D_+=\{ z=x+iy\in X_\mathbb{C} \mid \Delta(x)> 1\}\, .$$
Next we compute the kernel function.
\begin{lemma}\label{lem=ah} For all $z\in X_\mathbb{C}$ and $\zeta\in \Xi$
one has
$$a_H(\zeta^{-1} z)^{-\alpha}= \langle z,\zeta\rangle\ . $$
\end{lemma}
\begin{proof} We first show that
\begin{equation}\label{eq=33}
a_H(g)^{-\alpha}= \langle g\cdot x_0, \zeta_0\rangle\qquad (g\in G_\mathbb{C})\ .
\end{equation}
Observe that both sides are holomorphic functions on $G_\mathbb{C}$
which are left $N_\mathbb{C}$ and right $H_\mathbb{C}$-invariant.
Thus it is enough to test with
elements $a\in A_\mathbb{C}$. Then $a_H(a)^{-\alpha}=a^{-\alpha}$.
On the other hand
for
$a= \left[\begin{matrix} \cos \theta & \sin \theta & 0\\ -\sin\theta & \cos \theta & 0\\
0 & 0 & 1\end{matrix}\right]$ with $\theta\in \mathbb{C}$ we specifically obtain
$$\langle a\cdot x_0, \zeta_0\rangle =\langle \left[\begin{matrix} \cos\theta\\ -\sin\theta\\ 0\end{matrix}
\right], \left[\begin{matrix} 1 \\ -i\\ 0\end{matrix}
\right]\rangle= \cos\theta +i \sin\theta =a^{-\alpha}\ .$$
This proves (\ref{eq=33}).
\par It is now easy to prove the asserted statement
of the lemma. For that write $\zeta=g\cdot \zeta_0$ and $z=y\cdot x_0$
for $g,y\in G_\mathbb{C}$.
Then (\ref{eq=33}) implies
$$a_H(\zeta^{-1}z)^{-\alpha}=a_H(g^{-1}y)^{-\alpha}=
\langle g^{-1}y\cdot x_0, \zeta_0\rangle =\langle y\cdot x_0, g\cdot\zeta_0\rangle=
\langle z, \zeta\rangle\,.$$
\end{proof}
We observe that $\Lambda_{>0}=\Lambda_2=\mathbb{Z}_{>0}\cdot\alpha$. Hence
Lemma \ref{lem=ah}
implies that the horospherical Cauchy kernel is
$$\mathcal{K}(\zeta)={1\over a_H(\zeta^{-1})^{-\alpha}-1}= {1\over \langle \zeta,
x_0\rangle -1}\qquad
(\zeta\in \Xi_+)\ .$$
The horospherical Cauchy transform for $f\in L^1(X)$ is given by
$$\widehat{f}(\zeta)=\int_X {f(x)\over \langle \zeta, x\rangle -1}\ dx \qquad
(\zeta\in \Xi_+)$$
with $dx$ the invariant measure on the hyperboloid $X$.
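A brute-force numerical version of this transform is easy to set up. The sketch below uses the (assumed) parametrization $x(s,t)=(\cosh s\cos t,\,\cosh s\sin t,\,\sinh s)$ of $X$, for which the invariant measure is $\cosh s\,ds\,dt$, together with an illustrative integrable test function $f$ and the point $\zeta=(2,2i,0)\in \Xi_+$; it checks the expected $K$-equivariance $\widehat{f\circ k^{-1}}=\widehat{f}\circ k^{-1}$ for a rotation $k\in K$.

```python
import numpy as np

# Riemann-sum evaluation of f-hat(zeta) = int_X f(x)/(<zeta,x>-1) dx in the
# parametrization x(s,t) = (cosh s cos t, cosh s sin t, sinh s),
# whose invariant measure is cosh s ds dt. All concrete choices are illustrative.
s = np.linspace(-6, 6, 601)
t = np.linspace(0, 2 * np.pi, 600, endpoint=False)
ds, dt = s[1] - s[0], t[1] - t[0]
S, T = np.meshgrid(s, t, indexing="ij")
x = np.stack([np.cosh(S) * np.cos(T), np.cosh(S) * np.sin(T), np.sinh(S)])
w = np.cosh(S)                                    # invariant-measure density

def pair(z, y):                                   # <z,y> = z1 y1 + z2 y2 - z3 y3
    return z[0] * y[0] + z[1] * y[1] - z[2] * y[2]

def cauchy(f_vals, zeta):
    return np.sum(w * f_vals / (pair(zeta, x) - 1.0)) * ds * dt

f_vals = x[0] * np.exp(-x[2] ** 2)                # illustrative integrable f
zeta = np.array([2.0, 2.0j, 0.0])                 # xi=(2,0,0), eta=(0,2,0): in Xi_+

# Equivariance under a rotation k in K: (f o k^{-1})-hat = f-hat o k^{-1}.
phi = 0.8
kinv = np.array([[np.cos(phi), -np.sin(phi), 0.0],
                 [np.sin(phi), np.cos(phi), 0.0],
                 [0.0, 0.0, 1.0]])
fk_vals = (kinv[0, 0] * x[0] + kinv[0, 1] * x[1]) * np.exp(-x[2] ** 2)  # f(k^{-1}x)
print(np.isclose(cauchy(fk_vals, zeta), cauchy(f_vals, kinv @ zeta)))   # True
```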
Finally, we will discuss inversion. Let the inner product
on $\mathfrak{a}$ be normalized such that $\langle \alpha, \alpha\rangle =1$
and identify $\mathbb{R}$ with $\mathfrak{a}^*$ by means of the bijection
$\mathbb{R}\ni \lambda\mapsto \lambda\alpha\in \mathfrak{a}^*$. Then $\Lambda_{>0}=\mathbb{Z}_{>0}$
and $d(\lambda)=\lambda-{1\over 2}$. An easy calculation gives
$$\mathcal {L}=\sum_{j=1}^3 \zeta_j {\partial\over \partial \zeta_j} \ -\ {1\over 2}
\ .$$
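The mechanism behind $\mathcal{L}\phi=d(\lambda)\phi$ is that the Euler operator $\sum_j \zeta_j\partial_{\zeta_j}$ multiplies a function homogeneous of degree $\mu$ by the scalar $\mu$. A symbolic sketch (the homogeneous function below is an illustrative choice, and the sign conventions relating the homogeneity degree to $\lambda$ are not tracked):

```python
import sympy as sp

# The Euler operator sum zeta_j d/d zeta_j acts on a function homogeneous
# of degree mu by the scalar mu, so L = Euler - 1/2 acts by mu - 1/2.
# The function h below is an illustrative homogeneous example.
z1, z2, z3, mu = sp.symbols("zeta1 zeta2 zeta3 mu")
h = (z1 + 2 * z2 + 3 * z3) ** mu

def L(f):
    euler = z1 * sp.diff(f, z1) + z2 * sp.diff(f, z2) + z3 * sp.diff(f, z3)
    return euler - sp.Rational(1, 2) * f

print(sp.simplify(L(h) / h))  # mu - 1/2
```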
For $f\in \sum_{\lambda>0} L^2 (X)_\lambda^{-\omega}\subset \mathcal{O}(D_+)$ the inversion formula
reads
$$f(z)=\int_{-\infty}^\infty (\mathcal {L} f)\left(\begin{matrix}z_1 -i {z_2\over r
}\cosh t -i {z_1z_3\over r}\sinh t\\
z_2 +i{z_1\over r}\cosh t
-i{z_2z_3\over r }\sinh t\\
z_3 -i r\sinh t\end{matrix} \right) \ dt, $$
where $r=\sqrt{z_1^2+z_2^2}$.
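The parametrized curve appearing in the integrand can be checked to lie where it should: for any $z$ with $\Delta(z)=1$, the vector $\zeta(t)$ below satisfies $\Delta(\zeta(t))=0$, i.e. $\zeta(t)\in \Xi$, and $\langle z,\zeta(t)\rangle =1$, i.e. $z\in E(\zeta(t))$. The specific point $z$ is an illustrative choice.

```python
import numpy as np

def pair(u, v):                       # <u,v> = u1 v1 + u2 v2 - u3 v3
    return u[0] * v[0] + u[1] * v[1] - u[2] * v[2]

# Illustrative z in X_C with Delta(z) = 1.
z3 = 0.3 + 0.2j
rad = np.sqrt(1 + z3 ** 2)            # enforce z1^2 + z2^2 = 1 + z3^2
z = np.array([rad * np.cos(0.7), rad * np.sin(0.7), z3])
r = np.sqrt(z[0] ** 2 + z[1] ** 2)

for t in np.linspace(-2.0, 2.0, 9):
    c, s = np.cosh(t), np.sinh(t)
    zeta = np.array([
        z[0] - 1j * z[1] * c / r - 1j * z[0] * z[2] * s / r,
        z[1] + 1j * z[0] * c / r - 1j * z[1] * z[2] * s / r,
        z[2] - 1j * r * s,
    ])
    assert np.isclose(pair(zeta, zeta), 0.0)  # zeta(t) lies on the cone Xi
    assert np.isclose(pair(z, zeta), 1.0)     # z in E(zeta(t))
print("ok")
```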
\section{Introduction}
Luminous infrared galaxies (LIRGs) are the dominant population of
extragalactic objects in the local ($z<0.3$) universe at bolometric
luminosities $L > 10^{11}$ L$_\odot$. Ultraluminous infrared
galaxies (ULIRGs) present $L_{\rm FIR} > 10^{12}$ L$_\odot$, and are
the most luminous local objects (see Sanders \& Mirabel 1996 for a
review). The most notable feature of LIRGs and ULIRGs is perhaps the
large concentration of molecular gas that they have in their
centers, at densities orders of magnitude larger than found in
Galactic giant molecular clouds (e.g., Downes et al. 1993; Downes
\& Solomon 1998; Bryant \& Scoville 1999). This large reservoir of
molecular material is believed to be the result of a merging process between
two or more galaxies, in which much of the gas in the former spiral
disks
---particularly that located at distances less than 5 kpc from each
of the pre-merger nuclei--- has fallen together, therein triggering
a huge starburst phenomenon (e.g., Sanders et al. 1988; Melnick \&
Mirabel 1990). LIRGs and ULIRGs thus have large CO luminosities and
a high value for the ratio $L_{\rm FIR}/L_{\rm CO}$, both being
about one order of magnitude greater than for normal spirals. The
latter substantiates, based on star formation models, a greater star
formation rate per unit mass of gas.
In a recent letter \cite{tor2004}, by computing the
$\gamma$-ray flux produced by the interaction between an enhanced
cosmic ray population and the molecular material, ultimately leading
to neutral pion decay, it was shown that LIRGs and ULIRGs are
plausible sources for GLAST and the next generation of Cherenkov
telescopes. This result was deepened by a detailed analysis of the
$\gamma$-ray emission from Arp 220, the most extensively studied
ULIRG, that included the emission of secondaries \cite{torres2004}. The
enhanced population of relativistic particles is the result of the
large number of supernovae and young stellar objects present in the
central environment. The star formation rate in LIRGs is 100--1000
times that of the Milky Way (e.g., Gao \& Solomon 2004b; Pasquali et al. 2003) and
scales with the amount of dense molecular
gas (traced in turn by the HCN line). It is natural to expect that
the central regions of LIRGs have cosmic ray enhancements comparable
to the ratio between their star formation rates (SFRs) and that of
the Milky Way. Torres et al. (2004) showed that if this is correct,
then many of the LIRGs --particularly those located in the 100 Mpc
sphere-- are going to appear as individual GLAST sources.
Although detailed predictions for a particular ULIRG (Arp 220,
Torres 2004) suggest that these galaxies were below EGRET
sensitivity, it remained open whether they would show up in
a stacking search. This search, similar to what was made by Cillis
et al. (2004) for radiogalaxies, and by Reimer et al. (2003) for
clusters of galaxies, is presented here. We also provide here upper
limits from existing EGRET data for the fluxes of LIRGs in different
energy bands, which are useful both, for future theoretical
modelling and for consistency check with new sets of data.
\section{Stacking technique}
The general stacking method we have applied follows that outlined by
Cillis et al. (2004) when studying radiogalaxies. In order to
perform the stacking technique and look for a possible collective
detection of the $\gamma$-ray emission from LIRGs, we have extracted
rectangular sky maps with the selected target objects located at the
center.
We have used EGRET data from April 1991 through September 1995 ---as
covered by the Third EGRET Catalog (Hartman et al. 1999), in
celestial and galactic coordinates. The extracted maps for each
particular target were chosen to be $60^{\circ} \times 60^{\circ}$
in size, in order to have large off-source fields of view and to be
consistent with the EGRET point spread function (PSF).
Transforming the maps to an equatorial position before co-adding
them causes a substantial image distortion, except for those
originally near the all-sky map equator. To minimize this distortion
we have extracted maps from the all-sky map, celestial or Galactic, which
had the target object closer to its equator. This was done for both the count
and the exposure maps, and for all targets. We have transformed the
coordinates of each map into pseudo-coordinates, with the target
object at the center. After doing this, the maps were co-added,
producing the stacking.
It was also necessary to extract a diffuse background map for each
target object. For this purpose, we have used the diffuse model
that is standard in EGRET analysis \cite{sdh1997}. In order to take
into account the existence of known EGRET sources, idealized sources
with the appropriate fluxes distributed following EGRET's PSF were
added to the diffuse map. This was done only for the sources that
were detected significantly during the time interval of the all-sky
maps: 3EG sources that were significantly detected only during
shorter sub-intervals were not considered for the background model.
It was necessary to normalize each one of the extracted diffuse maps ($D_i$)
for the different exposures ($\epsilon_i$) of the target objects.
The extracted diffuse map for each target object was also transformed
into pseudo-coordinates.
Finally the diffuse maps for the co-added data were obtained as:
$\frac{1}{\epsilon_{tot}}{\sum_{i} c_{i}}$ where $c_{i}$ are the counts
diffuse maps ($c_i=\epsilon_{i} D_{i}$) and
$\epsilon_{tot}=\sum_{i}\epsilon_{i}$.
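The exposure weighting just described can be sketched numerically; the following numpy fragment (with placeholder $3\times3$ maps and exposures, not EGRET data) illustrates the co-addition $\frac{1}{\epsilon_{tot}}\sum_i \epsilon_i D_i$:

```python
import numpy as np

def coadd_diffuse(diffuse_maps, exposures):
    """Exposure-weighted co-add: D_tot = (1/eps_tot) * sum_i eps_i * D_i.

    diffuse_maps -- list of 2-D intensity maps D_i (same shape, in
                    pseudo-coordinates with the target at the centre)
    exposures    -- list of scalar exposures eps_i, one per target
    """
    counts = [eps * D for eps, D in zip(exposures, diffuse_maps)]  # c_i = eps_i D_i
    eps_tot = sum(exposures)
    return sum(counts) / eps_tot

# Illustrative placeholder maps (not real EGRET data):
maps = [np.full((3, 3), 1.0), np.full((3, 3), 2.0)]
weighted = coadd_diffuse(maps, exposures=[3.0, 1.0])
# exposure-weighted mean of 1.0 and 2.0 with weights 3:1, i.e. 1.25 everywhere
```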
Stacked maps for the different groups of objects analyzed below
were created and results are described in the next section. To find
the significance of the detection in a particular ``class'' we have
used the EGRET likelihood ratio \cite{jrm1996}, a formalism that
produces a ``test statistic'' (``TS'') $TS=-2(\ln L_0 - \ln L_1)$,
where ${L_0}$ and ${L_1}$ are the likelihood values without and with a
possible source, respectively. $\sqrt{TS}$ is roughly equivalent to the number of
standard deviations ($\sigma$).
\section{Classes of LIRGS and results}
We have stacked galaxies from the HCN survey (Gao \& Solomon 2004a;
Gao \& Solomon 2004b) after sorting them using different criteria.
The HCN survey is a systematic observation of 53 IR-bright
galaxies, including 20 LIRGs with $L_{\rm FIR}>10^{11}$L$_\odot$, 7
with $L_{\rm FIR}>10^{12}$L$_\odot$, and more than a dozen of the
nearest normal spiral galaxies. Essentially, all galaxies with
strong CO and IR emission were chosen for survey observations. It
also includes a literature compilation of data for another dozen
IR-bright objects. This is the largest and most sensitive HCN
survey of galaxies, and thus of dense interstellar mass residing
there, to date.
We have excluded from our consideration those galaxies that are
close to the Galactic Plane $|b|<20^{\circ}$ (outer Galaxy), and
$|b|<30^{\circ}$ ($|l|<50^{\circ}$), because the background around
those objects would overwhelm any possible signal.
Table 1 shows all the galaxies in the HCN survey referred to above,
ordered by $L_{HCN}/L_{CO}$, indicating their coordinates, redshift,
the ratio between the line luminosities $L_{HCN}$ and $L_{CO}$, and
cosmic ray enhancement. In that Table, the column ``Galactic Plane''
shows whether the galaxy is within $|b|<20^{\circ}$ (outer Galaxy),
and $|b|<30^{\circ}$ ($|l|<50^{\circ}$) and thus whether that galaxy
was excluded from our tests. Following Torres et al. (2004), to
which we refer for details, we have computed the minimum average
value of cosmic ray enhancement, dubbed $k$, for which the
$\gamma$-ray flux above 100 MeV would be above 2.4 $\times 10^{-9}$
photons cm$^{-2}$ s$^{-1}$. The latter is approximately the GLAST
satellite sensitivity after 1 yr of all-sky survey. $k$-values of at
least a few hundred are deemed probable, based on enhancements
derived in individual supernova remnants. Luminosity distances used
were those provided in the HCN survey, assuming a Hubble parameter
of $H_0$=75 km s$^{-1}$ Mpc$^{-1}$; although, since redshifts are
very small, changes in the cosmological model do not introduce
significant changes in distances. We selected the galaxies to
perform the stacking technique taking into account the brightest,
the nearest, those having the smaller cosmic ray enhancement needed
to produce fluxes above GLAST sensitivity, and finally the ratios
between the $L_{HCN}$ and $L_{CO}$, and between SFR and $k$. In the
case of the latter, those galaxies having $(SFR/SFR_{MW})/k> 1 $ are
believed to be particularly good candidates for detection.
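The selection above amounts to ranking the survey table on different keys; the schematic Python sketch below (galaxy names and numbers are placeholders, not values from Table 1) also shows the low-redshift distance estimate $d\simeq cz/H_0$ with $H_0$=75 km\,s$^{-1}$\,Mpc$^{-1}$:

```python
# Each tuple: (name, redshift z, L_HCN/L_CO, minimum CR enhancement k,
#              star formation rate in Milky Way units).  Placeholder values:
galaxies = [
    ("GalA", 0.018, 0.17, 300.0, 450.0),
    ("GalB", 0.042, 0.09, 900.0, 600.0),
    ("GalC", 0.007, 0.25, 150.0, 200.0),
]

by_ratio = sorted(galaxies, key=lambda g: g[2], reverse=True)  # L_HCN/L_CO
by_z = sorted(galaxies, key=lambda g: g[1])                    # nearest first

# (SFR/SFR_MW)/k > 1 flags particularly good candidates for detection:
good = [g[0] for g in galaxies if g[4] / g[3] > 1.0]

# Low-redshift Hubble-law distances, d = c z / H0:
H0 = 75.0          # km/s/Mpc, as adopted in the HCN survey
c_km_s = 2.998e5   # speed of light in km/s
dist_Mpc = {g[0]: c_km_s * g[1] / H0 for g in galaxies}
```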
For each class or subclass we have generated stacked maps containing
$N$ galaxies, with $N$=2, 4, 6, etc. For each stacked map so
generated, we have then determined the detection significance using
the standard likelihood method. The results of this research are
summarized in Table 2. For each sorting criterion we have specified
the total number of the objects considered, the maximum detection
significance, and the number of objects yielding that maximum. We
have found no significant result above $1.8\sigma$, for any class
investigated and for any number of objects included. This number was
obtained for the case in which all ULIRGs with $L>10^{12} L_{\odot}$
were considered ordered by redshift.
In Figure 1 we show an example of the stacked maps created (maps of
counts, exposure and background). The left column corresponds to the
case of the highest TS obtained (the case described in the last
paragraph). The middle and right column show stacked maps for LIRGs
ordered by $L_{HCN}/L_{CO}$ for 54 stacked galaxies (where the TS
obtained was equal to zero), and for the first four LIRGs ordered by
$L_{HCN}/L_{CO}$ (ARP 193, MRK 273, MRK 231, UGC 05101), where the
maximum TS for the class was obtained ($\sqrt{TS}=1.2$).
\begin{figure}
\figurenum{1}
\epsscale{0.30}
\plotone{f1.eps} \epsscale{0.30}
\plotone{f2.eps} \epsscale{0.30}
\plotone{f3.eps} \epsscale{0.30}
\plotone{f4.eps} \epsscale{0.30}
\plotone{f5.eps} \epsscale{0.30}
\plotone{f6.eps} \epsscale{0.30}
\plotone{f7.eps} \epsscale{0.30}
\plotone{f8.eps} \epsscale{0.30}
\plotone{f9.eps}
\caption{First column: Example of
stacked maps for 6 ULIRGs with $L_{IR} \ge 10^{12}L_{\odot}$ ordered
by redshift (ARP220, MRK 273, UGC 05101, MRK 231, 05189-2524,
10566+2448). Middle column: Example of stacked maps for 54 LIRGs
ordered by $L_{HCN}/L_{CO}$ (see Table 1). Right column: Example of
stacked maps for 4 LIRGs ordered by $L_{HCN}/L_{CO}$ (ARP193,
MRK273, MRK231, UGC 05101). In all columns, (a) counts, (b) exposure
(c) background. The location of interest is at pseudo-Dec $0^{o}$
and pseudo-RA $180^{o}$.}
\end{figure}
Upper limits for the fluxes for all LIRGs in the HCN survey that
are located away from the Galactic plane are given in Table 3, in
units of $10^{-8}$ cm$^{-2}$ s$^{-1}$. Table 3 presents five of the
sixteen energy bins where we have conducted this
analysis.\footnote{These are 30-50, 50-70, 70-100, 100-150, 150-300,
300-500, 500-1000, 1000-2000, 2000-4000, 4000-10000, 30-100,
100-300, 300-1000, $>100$, $>300$, $>1000$ MeV.} None of the LIRGs
we investigated have been individually detected, which is, in fact,
consistent with the level of flux expected from LIRGs, i.e., fluxes
above GLAST sensitivity but below the EGRET one (see, e.g., Torres
2004). The only galaxy for which a flux (not an upper limit) was
determined is Arp 55. The flux for this galaxy, in the range 500
MeV--1 GeV, was $(1.9\pm 0.6) \times 10^{-8}$ photons cm$^{-2}$
s$^{-1}$, with $TS=26.3$. This might be thought of as suggestive of
a detection, although Arp 55 was not significantly detected in
adjacent energy intervals, and does not otherwise seem special.
Its redshift is larger than that of the prototypical, and more
active, ULIRG Arp 220, and its $L_{\rm HCN}/L_{\rm CO}$ ratio is
smaller than Arp 220's. Lastly, one expects
statistical fluctuations when investigating a sample of more than 50
galaxies in 16 different energy bins.
\section{Discussion}
We have presented a stacking search for $\gamma$-ray emission from
LIRGs and ULIRGs using data from the EGRET experiment. Our results
show that these galaxies were neither individually nor collectively
detected, under a variety of sub-sampling and ranking
orderings. Apart from the obvious arrangement by distance (redshifts)
we have essentially explored all possible ordering parameters to
investigate the preferable cases for detectability at high energy
$\gamma$-rays. They included the ratio between line luminosities of
HCN and CO; i.e., the ratio between the dense mass most plausibly
subject to higher enhancements of cosmic rays and the more diffusely
distributed molecular mass traced by CO. They also included the
value of cosmic ray enhancement
---computed under simplifying assumptions--- that would make the galaxy
detectable by the GLAST-LAT. Also, we considered the ordering using
the ratio between the SFR in Milky Way units and the cosmic-ray
enhancement, in the understanding that a realistic value for the
latter should exceed the SFR. We have also imposed upper limits in
different energy bands that can be used as a constraint for future
multifrequency modelling. Even though we were not presently able to
detect LIRGs and ULIRGs in high energy $\gamma$-rays, a suggestive
excess has been found among the most active star forming members of
the sample, especially when ordered by redshift. Summarizing, LIRGs
and ULIRGs, while not individually detected by EGRET (a result
consistent with theoretical expectations), remain plausible
candidates for detection by GLAST and Cherenkov telescopes.
\acknowledgments
ANC would like to thank R.C. Hartman and D. L. Bertsch for useful
discussions. The work of DFT was performed under the auspices of the
U.S. D.O.E. (NNSA), by the University of California Lawrence
Livermore National Laboratory under contract No. W-7405-Eng-48. OR
acknowledges support by DLR QV0002.
\clearpage
\include{tab1}
\include{tab2}
\include{tab3}
\clearpage
\begin{document}
\begin{center}
{\bf Giant Radio Sources in View of the Dynamical Evolution of FRII-type
Population. I. The Observational Data, and Basic Physical Parameters of
Sources Derived from the Analytical Model}
\end{center}
\begin{center}
by
{J. Machalski, K.T. Chy\.{z}y \& M. Jamrozy}
\end{center}
\begin{center}
Astronomical Observatory, Jagellonian University, ul. Orla 171,\\ 30--244 Cracow, Poland \\
e--mail: machalsk@oa.uj.edu.pl jamrozy@oa.uj.edu.pl
\end{center}
\begin{abstract}
The time evolution of {\sl giant} lobe-dominated radio galaxies
(with projected linear size $D>1$ Mpc if
$H_{0}$=50 km\,s$^{-1}$Mpc$^{-1}$ and $q_{0}$=0.5) is analysed on the basis
of dynamical evolution of the entire FRII-type population. Two
basic physical parameters, namely the jet power $Q_{0}$ and the central density of
the galaxy nucleus $\rho_{0}$, are derived for a sample of {\sl giants} with
synchrotron ages reliably determined, and compared with the
relevant parameters in a comparison sample of normal-size sources consisting of
3C, B2, and other sources. Having the apparent radio luminosity $P$ and linear
size $D$ of each sample source, $Q_{0}$ and $\rho_{0}$ are obtained by fitting the
dynamical model of Kaiser et al. (1997).
We find that:
(i) there is no unique factor governing the source size; {\sl giants} are old sources
with moderate jet power ($Q_{0}$) that evolved in a relatively low-density
environment ($\rho_{0}$). The size is dependent, in order of decreasing partial
correlation coefficients, on age; then on $Q_{0}$; next on $\rho_{0}$. (ii) A
self-similar expansion of the sources' cocoon seems to be feasible if the power
supplied by the jets is a few orders of magnitude above the minimum-energy value.
In other cases the expansion can only initially be self-similar; a departure from
self-similarity for large and old sources is justified by observational data of
{\sl giant} sources. (iii) An apparent increase
of the lowest internal pressure value observed within the largest sources' cocoon
with redshift is obscured by the intrinsic dependence of their size on age and the
age on redshift, which prevents us from drawing definite conclusions about a
cosmological evolution of intergalactic medium (IGM) pressure.
{\bf Key words:} {galaxies: active -- galaxies: evolution -- galaxies: kinematics
and dynamics}
\end{abstract}
\section{Introduction}
Extragalactic radio sources, powered by twin jets resulting from nuclear
energy processes in the Active Galactic Nucleus (AGN), exhibit a very large
range of their linear size. The sizes of these powerful sources range from
less than $10^{2}$ pc (GPS: gigahertz-peaked spectrum), to $10^{2}-10^{4}$
pc (CSS: compact steep spectrum), $10^{4}-10^{6}$ pc (normal-size sources), up
to greater than $10^{6}$ pc $\equiv 1$ Mpc (`giant' radio sources). One of the
key problems of the evolution of extragalactic sources is whether and how
different
size sources are related. Is there a single evolutionary scheme governing the
size evolution of radio sources, or do small and large sources evolve in a
different way?
For many years `giant'\footnote{hereafter we use {\sl giant} or {\sl giants}
instead of giant radio source(s)} radio sources have been of special interest for
several reasons. Their very large angular sizes give an excellent opportunity for
the study of source physics. They are also very useful to study the density and
evolution of the intergalactic and intracluster environment (cf. Subrahmanyan
and Saripalli 1993;
Mack et al. 1998), as well as to verify the unification scheme for powerful
radio sources (Barthel 1989; Urry and Padovani 1995). Finally, they can be used
to constrain dynamical models of the source lifetime evolution (e.g. Kaiser and
Alexander 1999). The general questions are: do the largest radio sources reach
their extremal {\sl giant} sizes due to (i) exceptional physical conditions
in the intergalactic medium, (ii) extraordinary intrinsic properties of the AGN,
or simply (iii) because they are extremely old?
To answer these questions, in a number of papers attempts were made
to recognize properties other than size which may differentiate {\sl
giants} from normal-size sources. The {\sl giant}-source morphologies, energy
density in the lobes and interaction with the intergalactic medium were studied
by Subrahmanyan et al. (1996), who suggested that {\sl giant} radio galaxies may
be located in the lowest density regions, and `may have attained their large size as
a result of restarting of their central engines in multiple phases of activity
along roughly similar directions'. Also Mack et al. (1998), after a study of 5
nearby {\sl giant} radio galaxies, argued that those sources are so huge because
of their low-density environment and not because of their old ages. A similar
conclusion was drawn by Cotter (1998) after his study of a sample of
high-redshift 7C giants. He found those sources not to be old, and to have
jet kinetic powers similar to those of normal-size 3C sources. But the densities of their
surrounding medium were found to be much lower than those around most 3C sources
(Rawlings and Saunders 1991). Ishwara-Chandra and Saikia (1999) compiled a sample
of more than 50 known {\sl giant} sources (many of them being of FRI-type
morphology)
and compared some of their properties with those of a complete sample of 3CR
sources `to investigate the evolution of giant sources, and test their
consistency with the unified scheme for radio galaxies and quasars'. They
concluded that the location of {\sl giants} on the power--linear size ($P-D$)
diagram may suggest that the largest sources have evolved from the smaller
sources. Finally, in the recent extensive study of 26 {\sl giant} galaxies by
Schoenmakers et al. (2000), the authors argued that those galaxies `are both
old sources, in terms of their spectral age, and are situated in a relatively
low-density environment, but also that neither of these two properties are
extreme. Therefore, their large size probably results from a combination of
these properties'.
From the above results, it is clear that the phenomenon of the {\sl giant} radio
sources is still open to further research. Therefore, in this paper we analyse
whether observed properties of {\sl giant} sources can be explained by a model
of the dynamical evolution of classical double radio sources in cosmic time, and
what factor (if there is one) is primarily responsible for the {\sl giant} size.
Obviously such an analysis, based on the giant radio sources only, i.e. the
sources with a strongly
limited range of linear sizes, would be neither reliable nor even possible. Therefore,
to solve the above problems we analyse the sample of radio sources comprising both
the `giant'-size and `normal'-size sources, most of the latter with sizes from
$\sim$100 kpc to 1 Mpc.
Most published analytical models of the dynamical evolution of extended powerful
radio sources are based on the hydrodynamical self-similar expansion of the
sources' cocoon caused by the interaction of light and supersonic jets with
the ambient medium. Carvalho and O'Dea (2002) classify those models into three
mutually exclusive types encoded as I, II, and III, and give an excellent
description of their properties. However, we realize that only a few of them deal
with the source's energetics and the radio luminosity evolution with time, which
is crucial for e.g. the analysis of evolutionary tracks of sources on the
luminosity--size ($P$--$D$) plane, i.e. the most sensitive characteristics of the
dynamical models.
Two type III models, published by Kaiser, Dennett-Thorpe and Alexander (1997)
[hereafter referred to as KDA] and Blundell, Rawlings and Willott (1999) [hereafter
BRW], are more sophisticated than models of type I and II. We
apply the KDA model for its simplicity in comparison with the BRW one, and in
spite of some objections about an application of the self-similar models to large
and old sources in which an internal pressure cannot be always above the external
medium pressure (e.g. Hardcastle and Worrall 2000).
The present analysis is confined to classical double radio sources with FRII-type
(Fanaroff and Riley 1974) morphology. Synchrotron ages in a sample comprising
both giant and normal-size sources are used to verify the dynamical time evolution
of such sources predicted by the above analytical model.
Basic physical parameters, i.e. the jet power $Q_{0}$, the central density of the
galaxy nucleus $\rho_{0}$, the energy
density and pressure in the lobes/cocoon ($u_{c}$ and $p_{c}$), and the total
energy of the source $E_{\rm tot}$ are derived with the KDA model for
each member of
the sample to fit its estimated age, redshift, monochromatic radio luminosity,
projected size, and axial ratio. Besides, the fitted values of some of these
parameters are directly compared with the relevant values calculated straight
from the data, e.g. the equipartition magnetic field strength, $B_{\rm eq}$,
equipartition energy density in the source, $u_{\rm eq}$, and its total energy
$U_{\rm eq}$. These `observational' values and their errors are homogeneously
calculated using the method outlined by Miley (1980). Next, the physical
parameters derived for real radio sources are used to specify the conditions or
circumstances under which the FRII-type radio sources can reach the largest
observed linear sizes, to verify the applicability of the self-similar analytical
models for the largest sources, and to search for evidence of a cosmological
evolution of the ambient pressure in the intergalactic medium (IGM).
The observational data used and the physical parameters of the sample sources
found directly from the data are given in Section~2. The application of the
dynamical models is described in Section~3, while in Section~4 results of the
modelling are presented in the form of statistically significant correlations
between physical parameters of the sample sources derived from the fitting
procedure and their apparent observational parameters. The results obtained
are discussed in Section~5 and the conclusions are given in Section~6.
\section{Observational Data}
\subsection[section]{Selection Criteria}
Similarly to the approach of Ishwara-Chandra and Saikia, we have compiled a subsample
of 18 {\sl giant} sources and a comparison subsample of 54 normal-size sources.
The selection criteria were as follows: (i) the sources have the FRII-type
morphology, (ii) the existing radio maps enable a suitable
determination of their lateral extent, i.e. transverse to the source's axis, and
(iii) their spectral age or the expansion velocity, determined with the same model
of the energy losses, are available from the literature. The spectral ageing data
calculated with the JP model (Jaffe and Perola 1973; cf. Sect.~2.4) for the
{\sl giants} are taken from the papers of Saripalli et al. (1994), Parma et al.
(1996), Mack et al. (1998), Schoenmakers et al. (1998, 2000), Ishwara-Chandra
and Saikia (1999), Lara et al. (2000), and Machalski and Jamrozy (2000). For the
purpose of this paper, i.e. for an observational verification of the dynamical time
evolution of classical radio sources, especially the growth of their linear size
with time predicted by the analytical models,
the comparison subsample has been chosen to comprise high-luminosity (high- and
low-redshift), as well as low-luminosity normal-size sources. The high-redshift
(with $z\geq$0.5) and low-redshift ($z<$0.5) sets consist of 3C sources taken
from the papers of Alexander and Leahy (1987), Leahy et al. (1989), Liu et al.
(1992), and Guerra et al. (2000). All of them have
$P_{178}\geq 10^{25}$\,W\,Hz$^{-1}$sr$^{-1}$
(other selection criteria are summarized in Liu et al.). The low-luminosity
set comprises FRII-type sources with $P_{1.4}< 10^{24.4}$
W\,Hz$^{-1}$sr$^{-1}$ (corresponding to $P_{178}<10^{25}$\,W\,Hz$^{-1}$sr$^{-1}$
assuming a mean spectral index of 0.7 between 178 and 1400 MHz). A limited
number of such sources with spectral ages determined have been available from
the papers of Klein et al. (1995) and Parma et al. (1999).
\subsection[section]{Observational Parameters}
For all individual sample sources the following observational parameters are
determined: the redshift $z$, the 1.4 GHz luminosity in W\,Hz$^{-1}$sr$^{-1}$,
the projected linear size $D$ in kpc, the cocoon's axial ratio $AR=D/b$ and
volume $V_{\rm o}$ in kpc$^{3}$. The volume of the source, $V_{\rm o}$, is
calculated assuming a cylindrical geometry with the length $D$, and the base
diameter $b$ taken as the average of the full deconvolved widths of the two lobes.
The latter are measured between 3$\sigma$ contours on a radio contour map
half-way between the core and the hot spots or distinct extremities of the
source. All these data for the {\sl giant}-size and normal-size sources in our
sample are given in columns 3--7 of Table~1. The columns
8--9 of Table~1 give the reference papers to the radio map used to determine both
$D$ and $AR$ for each sample source and its spectral age, respectively.
\subsection[section]{Physical Parameters Derived Directly from the Data}
As mentioned in Section~1, the equipartition magnetic field $B_{\rm eq}$,
the energy density $u_{\rm eq}$, and the total emitted energy $U_{\rm eq}$ with
their errors are calculated with the method outlined by Miley (1980). However,
Miley's assumption of a pure power-law radio spectrum has been abandoned; the
cocoon's radio spectrum has been determined by a least-squares fit of
the simple analytic functions: $y=a+bx+c\exp(\pm x)$ or $y=a+bx+cx^{2}$ (where
$x=\log\nu$[GHz], $y=\log S(\nu)$) to the available flux densities $S(\nu)$
weighted by their given error. The total luminosity of the cocoon has been
then integrated between 10 MHz and 100 GHz using the above fitted spectrum
with $H_0=50$ km\,s$^{-1}$Mpc$^{-1}$ and $q_0$=0.5\footnote{An application of
the most recent cosmological constants will change numerical values of dimensions,
power, ambient density, etc., but not relations between observational and
physical parameters of the sources. The applied constants provide an easier
comparison of the derived physical parameters of sources with those found in a
large number of previously published papers}.
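A minimal sketch of this fitting-and-integration step, for the quadratic form $y=a+bx+cx^{2}$ and with hypothetical flux densities (placeholders, not measurements of any sample source):

```python
import numpy as np

# Hypothetical flux densities S(nu) in Jy with errors (illustrative only):
nu_GHz = np.array([0.178, 0.408, 1.4, 4.85, 10.7])
S_Jy = np.array([12.0, 6.5, 2.8, 1.1, 0.6])
sigma = np.array([1.2, 0.6, 0.2, 0.1, 0.08])

x, y = np.log10(nu_GHz), np.log10(S_Jy)
# Weighted least-squares fit of y = a + b x + c x^2, weights ~ 1/error
# (np.polyfit returns coefficients highest degree first):
c, b, a = np.polyfit(x, y, deg=2, w=1.0 / sigma)

def S_fit(nu):
    """Fitted spectrum evaluated at frequency nu in GHz, in Jy."""
    lx = np.log10(nu)
    return 10.0 ** (a + b * lx + c * lx ** 2)

# Integrate the fitted spectrum from 10 MHz to 100 GHz (trapezoid rule
# on a logarithmic grid); units here are Jy * GHz:
nu = np.logspace(np.log10(0.01), np.log10(100.0), 2000)
Sv = S_fit(nu)
flux_integral = np.sum(0.5 * (Sv[1:] + Sv[:-1]) * np.diff(nu))
# Multiplying by 4*pi*d_L^2 (with appropriate unit conversions) then
# gives the total luminosity entering the U_eq estimate.
```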
The values of $u_{\rm eq}$, $B_{\rm eq}$, and
$U_{\rm eq}$=$u_{\rm eq}V_{\rm o}$ with their estimated errors, calculated
for each of the sample sources with the assumption of a filling
factor of unity and equipartition of energy between electrons and protons, are
given in columns 3--5 of Table~2, respectively. Table~2 contains the adopted age
and physical parameters of the sample sources derived both from the observational
data (columns 3, 4, and 5) and from the analytical KDA model (columns 6, 7, 8,
and 9).
\subsection[section]{Spectral Age}
Determination of the age of the radio sources is crucial to constrain any
dynamical model of their time evolution. An apparent age of sources can be
estimated from the ratio of their total emitted energy, $U_{\rm eq}$, determined under the
`minimum energy' condition, to the observed power
\[t_{\rm max}\approx U_{\rm eq}/(dU/dt)\,\,\,\,\,\,{\rm where}\,\,\,\,\,\,
dU/dt\equiv \int_{\nu_{1}}^{\nu_{2}} S(\nu)d\nu,\]
\noindent
and $S(\nu)$ is the observed flux density at different frequencies.
The resultant age of a source, being rather an upper limit, is usually greater
than the synchrotron age of relativistic particles commonly determined from the
standard spectral-ageing analysis (e.g. Alexander and Leahy 1987). However, the
time-dependence of various energy losses suffered by the particles causes
different parts of the lobes or cocoon to have different ages. Besides, the
radiation losses (and thus the synchrotron age) depend on the history of
particle injection, the distribution of the pitch angle, etc. described by the
different synchrotron models: Kardashev--Pacholczyk (KP); Jaffe and Perola (JP);
or continuous injection (CI), cf. Carilli et al.(1991) for a detailed
description. Therefore, these models of different energy losses give
different spectral age estimates, and a comparison of ages of the sample sources
ought to be made within the same synchrotron model. Moreover, the synchrotron ages
($t_{\rm syn}$) usually differ from the dynamical age ($t_{\rm dyn}$) estimated
from the ram-pressure arguments (cf. Begelman and Cioffi 1989; Lara et al. 2000;
Schoenmakers et al. 2000).
An attempt to minimize the discrepancies between spectral and
dynamical ages has been undertaken by Kaiser (2000). His 3-dimensional model of
the synchrotron emissivity of the cocoon traces the individual evolution of
parts of the cocoon and provides, according to the author, a more accurate
estimate for the age of a source. Its application to the lobes of Cygnus~A gave
a very good fit to their observed surface brightness distribution. However,
since an application of the above model is confined to sources whose
lobes are reasonably resolved in the direction perpendicular to the jet
axis -- we cannot use it for our statistical approach to the dynamical evolution
of {\sl giant} radio sources and have to rely on results of the standard
ageing analysis.
Based on a commonly accepted assumption about the proportionality of the spectral
and dynamical ages, hereafter we assume $t_{\rm dyn}=2t_{\rm syn}$ (e.g. Lara et al.
2000). This age (marked by $t$[Myr]) is given in column 2 of Table~2.
\section{Application of the KDA Model}
\subsection[section]{Source Dynamics}
The overall dynamics of an FRII-type source (precisely: its cocoon) described
in the KDA model is based on the earlier self-similar model of Kaiser and Alexander
(1997) [hereafter referred to as KA]. It is assumed that the radio structure is
formed by two jets emanating from the AGN into a
surrounding medium in two opposite directions, then terminating in strong
shocks, and finally inflating the cocoon. A density distribution of the
unperturbed external gas is approximated by a power-law relation
$\rho_{\rm d}=\rho_{0}(d/a_{0})^{-\beta}$, where $d$ is the radial distance from
the core of a source, $\rho_{0}$ is the density at the core radius $a_{0}$, and
$\beta$ is the exponent in this distribution [the simplified King (1972) model].
Half of the cocoon is approximated by a cylinder of length $L_{\rm j}=D_{\rm s}/2$
and axial ratio $R_{\rm T}=AR/2$, where $D_{\rm s}$ is its total unprojected
linear size. The cocoon expands along the jet axis driven by the hot
spot pressure $p_{\rm h}$ and in the perpendicular direction by the cocoon
pressure $p_{\rm c}$. In the model the rate at which energy is transported
along each jet ($Q_{0}$) is constant during the source lifetime.
The model predicts self-similar expansion of the cocoon and gives analytical
formulae for the time evolution of various geometrical and physical parameters,
e.g. the length of the jet [cf. equations (4) and (5) in KA]:
\begin{equation}
L_{\rm j}=D_{\rm s}/2=c_{1}\left(\frac{Q_{0}}{\rho_{0}a_{0}^{\beta}}\right)^{1/(5-\beta)}
t^{3/(5-\beta)};
\end{equation}
\noindent
and the cocoon pressure (cf. equation 34 in KA):
\begin{equation}
p_{\rm c}=\frac{18c_{1}^{2(5-\beta)/3}}{(\Gamma_{\rm x}+1)(5-\beta)^{2}\cal{P_{\rm hc}}}
(\rho_{0}a_{0}^{\beta}Q_{0}^{2})^{1/3}L_{\rm j}^{-(4+\beta)/3},
\end{equation}
\noindent
where $c_{1}$ is a dimensionless constant [equation (25) in KA], $\Gamma_{\rm x}$
is the adiabatic index of the unshocked medium surrounding the cocoon,
and ${\cal P}_{\rm hc}\equiv p_{\rm h}/p_{\rm c}$ is the pressure ratio.
However, the pressure ratio ${\cal P}_{\rm hc}=4R_{\rm T}^{2}$, implied in the
original KDA paper, has
later been found to seriously overestimate the value of ${\cal P}_{\rm hc}$
obtained in hydrodynamical simulations by Kaiser and Alexander (1999). Therefore,
in our modelling procedure we use the empirical formula taken from Kaiser (2000):
\begin{equation}
{\cal P}_{\rm hc}=(2.14-0.52\beta)R_{\rm T}^{2.04-0.25\beta}.
\end{equation}
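The self-similar scaling of equation (1) can be illustrated with a short Python fragment; the parameter values below are round numbers of typical order for FRII modelling, not fitted results from Table 2, and $c_{1}$ is set to unity purely for illustration:

```python
def jet_length(t, Q0, rho0, a0, beta, c1=1.0):
    """Equation (1): L_j = c1 * (Q0/(rho0*a0**beta))**(1/(5-beta)) * t**(3/(5-beta)).

    SI units throughout; c1 is the dimensionless constant of KA
    equation (25), set to 1 here for illustration only.
    """
    return (c1 * (Q0 / (rho0 * a0 ** beta)) ** (1.0 / (5.0 - beta))
            * t ** (3.0 / (5.0 - beta)))

# Illustrative round-number parameters (not fitted values):
Q0, rho0, beta = 1e39, 7e-22, 1.5   # jet power [W], core density [kg/m^3], King exponent
a0 = 2.0 * 3.086e19                 # core radius: 2 kpc in metres
t1 = 10.0 * 3.156e13                # 10 Myr in seconds

# Self-similar growth: quadrupling the age stretches the jet by 4**(3/(5-beta)):
ratio = jet_length(4 * t1, Q0, rho0, a0, beta) / jet_length(t1, Q0, rho0, a0, beta)
```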
\subsection[section]{Source Energetics and Radio Power}
In our application of the KDA model we neglect thermal particles, hence the
overall source dynamics is governed by the pressure in the cocoon in the form
$p_{\rm c}=(\Gamma_{\rm c}-1)(u_{\rm e}+u_{\rm B})$, where $\Gamma_{\rm c}$ is
the adiabatic index of the cocoon, $u_{\rm e}$ and $u_{\rm B}$ are the energy
densities of relativistic particles and the magnetic field, respectively. Both
energy densities are a function of the source lifetime $t$. In particular,
\begin{equation}
u_{\rm B}(t)\propto B^{2}(t)={\rm const}\,t^{-a},
\end{equation}
\noindent
where $a=(\Gamma_{\rm B}/\Gamma_{\rm c})(4+\beta)/(5-\beta)$.
Since the time evolution of the pressure $p_{\rm c}$ is known from the
self-similar solution, one can calculate the energy density in the cocoon
at any specific age $t$:
\begin{equation}
u_{\rm c}(t)\equiv u_{\rm e}(t)+u_{\rm B}(t)=p_{\rm c}(t)/(\Gamma_{\rm c}-1),
\end{equation}
\noindent
and the total source energy: $E_{\rm tot}(t)=u_{\rm c}(t)V_{\rm c}(t)$, where
$V_{\rm c}$ is the cocoon volume attained at the age $t$:
\begin{equation}
V_{\rm c}(t)=2\frac{\pi}{4R_{\rm T}^{2}}[L_{\rm j}(t)]^{3}\propto t^{9/(5-\beta)}.
\end{equation}
\noindent
Following KDA and Kaiser (2000) we can write:
\begin{equation}
E_{\rm tot}=u_{\rm c}V_{\rm c}=
\frac{2(5-\beta)}{9[\Gamma_{\rm c}+(\Gamma_{\rm c}-1)({\cal P}_{\rm hc}/4)]-4
-\beta}Q_{0}t.
\end{equation}
\noindent
Thus, the ratio of the energy delivered by the twin jets to that stored in the cocoon is:
\[\frac{2Q_{0}t}{E_{\rm tot}}=\frac{9\Gamma_{\rm c}-4-\beta}{5-\beta} +
\frac{9(\Gamma_{\rm c}-1)}{4(5-\beta)}{\cal P}_{\rm hc},\]
\noindent
i.e. for given $\Gamma_{\rm c}$ and $\beta$ values,
this ratio is a function of the pressure ratio ${\cal P}_{\rm hc}$ only.
For $\Gamma_{\rm c}=5/3$ and $\beta=3/2$ we have
\begin{equation}
2Q_{0}t/E_{\rm tot}=2.7+0.43 {\cal P}_{\rm hc}.
\end{equation}
The radio power of the cocoon $P_{\nu}$ is calculated in the KDA model by
splitting up the source into small volume elements and allowing them to evolve
separately. The effects of adiabatic expansion, synchrotron losses, and inverse
Compton scattering on the cosmic microwave background radiation are traced for
these volume elements independently. The total radio emission at a fixed
frequency $\nu$ is then obtained by summing up the contribution from all such
elements, resulting in an integral over time [equation (16) in KDA]. It depends
on the source's age $t$ and redshift $z$, the jet power $Q_{0}$, the cocoon
axial ratio $R_{\rm T}$, the exponent in the expected power-law distribution of
relativistic particles $p$, and on the ratio $r$ of the magnetic field energy
to the energy of relativistic electrons and non-radiating particles (given in
Sect.~3.3).
The integral is not analytically solvable and, following KDA, we
calculate it numerically.
\subsection{Fitting Procedure}
On the basis of the above model we aim to predict the specific physical
parameters for all {\sl giants} and normal-size sources in the sample at their
estimated (dynamical) age, i.e. $Q_{0}$, $\rho_{0}$, $u_{\rm c}$, $p_{\rm c}$,
and $E_{\rm tot}$. This differs from the approach of KDA, who, on the basis of
the available observational data, evaluated some general trends and made crude
estimates of the possible ranges of values attained by the model parameters.
In order to derive the above parameters for our sources, all other free
parameters of the model ($r$, $p$, $\Gamma_{\rm x}$, $\Gamma_{\rm c}$,
$\Gamma_{\rm B}$, $a_{0}$, $\beta$), and the inclination angle of the jet axis
to the observer's line-of-sight $\theta$ have to be approximated.
Following the KDA, we adopt their `Case~3' where both the cocoon and magnetic
field are `cold' ($\Gamma_{\rm c}=\Gamma_{\rm B}=5/3)$ and the adiabatic index
of the jet material and external gas is also $5/3$. For the initial ratio of
the energy densities of the magnetic field and the particles we use
$r\equiv u_{\rm B}/u_{\rm e}=(1+p)/4$, with the exponent of the energy
distribution $p=2.14$.
The core radius $a_{0}$ is one of the most difficult model parameters to set.
Even careful 2-D modelling of the distribution of radio emission for well-known
sources with quite regular structures can lead to values of $a_{0}$
discrepant with those predicted by X-ray observations, the only presently
available method to determine the source environment (cf. an extensive discussion
of this problem in Kaiser 2000). In our statistical approach we assume
$a_{0}=10$ kpc for all sources, a conservative value between 2 kpc used by
KDA and 50 kpc found by Wellman et al. (1997). In Section~5.1 we discuss
the consequences of other possible values of this parameter.
We also use a constant value of $\beta$ for all sample sources taking
$\beta=1.5$ for further calculations. This is compatible with other estimates of
this parameter (e.g. Daly 1995), although much flatter than the $\beta=1.9$ adopted
in the original KDA paper following Canizares et al. (1987), who found that value
to be typical for a galaxy at about 100 kpc from its centre. A flatter
density profile should be more adequate for distances of a few hundred kpc.
For the remaining free parameter, the orientation of the jet axis with respect
to the observer's line of sight, we assume $\theta=90^{\circ}$ for all {\sl giants}
and $\theta=70^{\circ}$ for the other sources. The latter value is justified by the
dominance of FRII-type radio galaxies in our sample: in the framework of the
unified scheme for extragalactic radio sources, Barthel (1989) determined an
average orientation angle $\langle\theta_{\rm RG}\rangle\simeq 69^{\circ}$ for
radio galaxies. The apparent linear size $D$ of a radio source then
yields the model cocoon size:
\begin{equation}
L_{\rm j}=\frac{D}{2\sin\theta}.
\end{equation}
Having fixed all these free parameters of the model, we find the jet power
$Q_{0}$ and the initial density of the external medium $\rho_{0}$ for each
individual sample source by iteratively solving a system of two equations:
(i) equation (1) equated to equation (9) for the jet length $L_{\rm j}$, and
(ii) the integral for
the luminosity of the cocoon $P_{\nu}$ [equation (16) in KDA; cf. Section~3.1.2]
-- requiring the solution to match the observed values of $D$ and
$P_{1.4}$, respectively. This fitting procedure proved to always give
stable and unique solutions. Then, from equations (2) and (5) we calculate
other model parameters: the cocoon's pressure $p_{\rm c}$ and its energy
density $u_{\rm c}$, and from equation (7) the total cocoon energy
$E_{\rm tot}$. The resultant values of $Q_{0}$, $\rho_{0}$ and $p_{\rm c}$
for the sample sources are given in columns 6, 7, and 8 of Table~2.
In Table~3 we summarize meanings of the observational and model parameters
characterizing the source's cocoon and used in the present analysis. All
dimensions are given in the SI units except the cocoons' length and volume which
are given in kpc and kpc$^{3}$, respectively. Two quantities present in the text
do not appear in Table~3: the source (cocoon) unprojected linear size $D_{\rm s}$
and unprojected volume $V_{\rm c}$, which follow from the apparent (observed) size
$D$, the axial ratio $AR$, and the assumed inclination $\theta$ of the source's axis.
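The fitting loop described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the actual code used: equation (1) is inverted analytically for $\rho_{0}$, the non-analytic KDA luminosity integral is represented by a user-supplied placeholder `luminosity(Q0, rho0, t)`, and the constant `c1` and all units are assumed mutually consistent.

```python
import math

def rho0_from_size(Q0, L_j, t, a0=10.0, beta=1.5, c1=1.0):
    """Invert equation (1) for the core density rho0, given the deprojected
    jet length L_j = D / (2 sin(theta)) [equation (9)] and the age t."""
    x = L_j / (c1 * t**(3.0 / (5.0 - beta)))   # = (Q0/(rho0*a0^beta))^(1/(5-beta))
    return Q0 / (a0**beta * x**(5.0 - beta))

def fit_Q0(P_obs, L_j, t, luminosity, lo=1e36, hi=1e41, tol=1e-6):
    """Bisect (in log space) on the jet power Q0 until the model luminosity
    matches the observed P_obs.  `luminosity(Q0, rho0, t)` stands in for the
    numerically evaluated KDA integral [equation (16) in KDA] and is assumed
    monotonically increasing in Q0 over [lo, hi]."""
    while hi / lo > 1.0 + tol:
        mid = math.sqrt(lo * hi)                 # geometric midpoint
        rho0 = rho0_from_size(mid, L_j, t)       # keep equation (1) satisfied
        if luminosity(mid, rho0, t) < P_obs:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)
```

Because $\rho_{0}$ is re-derived from the size constraint at every trial $Q_{0}$, the returned pair $(Q_{0},\rho_{0})$ matches both the observed size and the observed luminosity, mirroring the stable, unique solutions reported in the text.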
\begin{table*}[htb]
\footnotesize
\caption{Data of the sample sources}
\begin{tabular*}{165mm}{@{}lllcrccrl}
\hline
IAU & Other &$z$& lg$P_{1.4}$ & $D\pm\Delta D$ &AR$\pm\Delta $AR &
lg$V_{\rm o}\pm\Delta$lg$V_{\rm o}$ & Ref. & Spect.\\
name & name & & [WHz$^{-1}$sr$^{-1}$] & [kpc] & & [kpc$^{3}$] & map & anal.\\
\hline
GIANTS\\
0109+492 & 3C35 & 0.0670 & 24.53 & 1166$\pm$31 & 3.2$\pm$0.7 & 8.08$\pm$0.17 & 28& 24\\
0136+396 & B2 & 0.2107 & 25.21 & 1555$\pm$35 & 6.0$\pm$1.2 & 7.91$\pm$0.16 & 4 & 6\\
0313+683 & WNB & 0.0901 & 24.41 & 2005$\pm$34 & 4.2$\pm$0.5 & 8.55$\pm$0.10 & 28& 23\\
0319$-$454 & PKS & 0.0633 & 24.70 & 2680$\pm$30 & 4.0$\pm$0.8 & 8.98$\pm$0.16 & 22& 22\\
0437$-$244 & MRC & 0.84 & 26.15 & 1055$\pm$17 & 7.8$\pm$1.5 & 7.18$\pm$0.15 & 5 & 5\\
0813+758 & WNB & 0.2324 & 25.10 & 2340$\pm$80 & 5.0$\pm$0.5 & 8.60$\pm$0.08 & 25&24\\
0821+695 & 8C & 0.538 & 25.28 & 2990$\pm$22 & 5.9$\pm$1.0 & 8.77$\pm$0.14 & 7 & 7\\
1003+351 & 3C236 & 0.0988 & 24.76 & 5650$\pm$75 & 9.4$\pm$1.7 & 9.20$\pm$0.14 & 15& 16\\
1025$-$229 & MRC & 0.309 & 25.28 & 1064$\pm$17 & 5.2$\pm$0.7 & 7.54$\pm$0.11 & 5 & 5\\
1209+745 & 4C74.17 & 0.107 & 24.42 & 1090$\pm$13 & 2.4$\pm$0.5 & 8.25$\pm$0.16 & 2 & 24\\
1232+216 & 3C274.1 & 0.422 & 26.32 & 1024$\pm$15 & 7.4$\pm$1.6 & 7.19$\pm$0.17 & 8 & 1\\
1312+698 & DA340 & 0.106 & 24.76 & 1085$\pm$12 & 4.4$\pm$0.9 & 7.71$\pm$0.16 & 28& 24\\
1343+379 & & 0.2267 & 24.42 & 3140$\pm$60 & 7.2$\pm$1.1 & 8.67$\pm$0.14 & 28,14& 13\\
1349+647 & 3C292 & 0.71 & 27.02 & 1073$\pm$16 & 6.2$\pm$1.4 & 7.40$\pm$0.18 & 28& 1\\
1358+305 & B2 & 0.206 & 24.86 & 2670$\pm$60 & 3.6$\pm$0.8 & 9.06$\pm$0.17 & 19& 19\\
1543+845 & WNB & 0.201 & 24.76 & 1950$\pm$25 & 7.6$\pm$1.4 & 8.00$\pm$0.15 & 28& 24\\
1550+202 & 3C326 & 0.0895 & 25.02 & 2510$\pm$55 & 7.0$\pm$1.1 & 8.40$\pm$0.13 & 28& 16\\
2043+749 & 4C74.26 & 0.104 & 24.86 & 1550$\pm$20 & 4.6$\pm$0.8 & 8.14$\pm$0.14 & 28& 24\\
& & &\\
NORMAL\\
0154+286 & 3C55 & 0.720 & 26.79 & 554$\pm$12 & 6.4$\pm$1.5 & 6.54$\pm$0.18 & 9 & 9\\
0229+341 & 3C68.1 & 1.238 & 27.26 & 414$\pm$10 & 4.4$\pm$1.0 & 6.49$\pm$0.18 & 9 & 9\\
0231+313 & 3C68.2 & 1.575 & 27.30 & 190$\pm$4 & 2.8$\pm$0.6 & 5.86$\pm$0.14 & 9 & 9\\
0404+428 & 3C103 & 0.330 & 26.32 & 564$\pm$12 & 6.7$\pm$1.0 & 6.53$\pm$0.12 & 8 & 1\\
0610+260 & 3C154 & 0.5804 & 26.84 & 376$\pm$10 & 2.9$\pm$0.8 & 6.72$\pm$0.21 & 9 & 9\\
0640+233 & 3C165 & 0.296 & 25.94 & 480$\pm$8 & 3.4$\pm$0.8 & 6.90$\pm$0.18 & 8 & 1\\
0642+214 & 3C166 & 0.246 & 26.66 & 187$\pm$15 & 3.1$\pm$0.6 & 5.76$\pm$0.15 & 8 & 1,29\\
0710+118 & 3C175 & 0.768 & 26.85 & 392$\pm$8 & 3.4$\pm$0.9 & 6.64$\pm$0.20 & 9 & 9\\
0806+426 & 3C194 & 1.184 & 27.13 & 122$\pm$3 & 3.1$\pm$0.5 & 5.18$\pm$0.17 & 30& 30\\
0828+324 & B2 & 0.0507 & 24.36 & 396$\pm$14 & 3.2$\pm$0.4 & 6.71$\pm$0.10 & 12& 6,20\\
0908+376 & B2 & 0.1047 & 24.39 & 100$\pm$8 & 2.2$\pm$0.2 & 5.24$\pm$0.08 & 18& 20\\
0958+290 & 3C234 & 0.1848 & 25.84 & 460$\pm$8 & 4.6$\pm$1.0 & 6.59$\pm$0.17 & 8 & 1\\
1008+467 & 3C239 & 1.786 & 27.51 & 94$\pm$3 & 2.6$\pm$0.7 & 5.01$\pm$0.21 & 10& 10\\
1012+488 & GB/GB2 & 0.385 & 26.17 & 694$\pm$13 & 2.2$\pm$0.3 & 7.76$\pm$0.11 & 12& 29\\
1030+585 & 3C244.1 & 0.428 & 26.47 & 352$\pm$7 & 5.4$\pm$1.3 & 6.10$\pm$0.19 & 8 & 1\\
1056+432 & 3C247 & 0.749 & 26.85 & 105$\pm$4 & 3.1$\pm$0.7 & 5.00$\pm$0.18 & 10& 10\\
1100+772 & 3C249.1 & 0.311 & 25.94 & 247$\pm$24 & 2.8$\pm$1.0 & 6.21$\pm$0.26 & 9 & 9\\
1111+408 & 3C254 & 0.734 & 26.85 & 107$\pm$4 & 2.5$\pm$0.3 & 5.22$\pm$0.10 & 10& 10\\
1113+295 & B2 & 0.0489 & 24.21 & 97$\pm$5 & 2.2$\pm$0.3 & 5.20$\pm$0.11 & 18& 20\\
1140+223 & 3C263.1 & 0.824 & 27.31 & 45$\pm$5 & 2.2$\pm$0.4 & 4.20$\pm$0.14 & 10& 10\\
1141+354 & GB/GB2 & 1.781 & 26.68 & 97$\pm$3 & 3.0$\pm$0.4 & 4.93$\pm$0.11 & 11& 29\\
1142+318 & 3C265 & 0.8108 & 27.19 & 644$\pm$16 & 5.4$\pm$1.7 & 6.89$\pm$0.24 & 9 & 1,9\\
1143+500 & 3C266 & 1.275 & 27.15 & 37$\pm$2 & 4.0$\pm$0.4 & 3.42$\pm$0.08 & 10& 10
\end{tabular*}
\end{table*}
\begin{table*}[t]
\footnotesize
\begin{tabular*}{165mm}{@{}lllcrccrl}
\hline
IAU & Other &$z$& lg$P_{1.4}$ & $D\pm\Delta D$ &AR$\pm\Delta $AR &
lg$V_{\rm o}\pm\Delta$lg$V_{\rm o}$ & Ref. & Spect.\\
name & name & & [WHz$^{-1}$sr$^{-1}$] & [kpc] & & [kpc$^{3}$] & map & anal.\\
\hline
1147+130 & 3C267 & 1.144 & 27.17 & 327$\pm$8 & 4.4$\pm$0.7 & 6.18$\pm$0.13 & 9 & 9\\
1157+732 & 3C268.1 & 0.974 & 27.41 & 390$\pm$7 & 4.1$\pm$0.6 & 6.47$\pm$0.12 & 9 & 9\\
1206+439 & 3C268.4 & 1.400 & 27.37 & 87$\pm$4 & 2.8$\pm$0.3 & 4.85$\pm$0.09 & 10& 10\\
1216+507 & GB/GB2 & 0.1995 & 24.93 & 826$\pm$8 & 4.4$\pm$0.6 & 7.39$\pm$0.11 & 12& 29\\
1218+339 & 3C270.1 & 1.519 & 27.48 & 104$\pm$12 & 2.6$\pm$0.5 & 5.15$\pm$0.15 & 10& 10\\
1221+423 & 3C272 & 0.944 & 26.73 & 490$\pm$13 & 3.6$\pm$1.1 & 6.88$\pm$0.23 & 28& 29\\
1241+166 & 3C275.1 & 0.557 & 26.56 & 130$\pm$15 & 2.0$\pm$0.4 & 5.66$\pm$0.16 & 10& 10\\
1254+476 & 3C280 & 0.996 & 27.35 & 110$\pm$13 & 2.4$\pm$0.3 & 5.29$\pm$0.10 & 10& 10\\
1308+277 & 3C284 & 0.2394 & 25.63 & 836$\pm$6 & 6.9$\pm$1.3 & 7.01$\pm$0.15 & 8 & 1\\
1319+428 & 3C285 & 0.0794 & 24.68 & 271$\pm$4 & 2.8$\pm$0.6 & 6.33$\pm$0.17 & 8 & 1\\
1343+500 & 3C289 & 0.967 & 27.02 & 86$\pm$2 & 2.3$\pm$0.2 & 5.00$\pm$0.07 & 10& 10\\
1347+285 & B2 & 0.0724 & 23.60 & 86$\pm$4 & 2.4$\pm$0.3 & 4.94$\pm$0.10 & 18& 20\\
1404+344 & 3C294 & 1.779 & 27.41 & 132$\pm$17 & 3.8$\pm$1.0 & 5.12$\pm$0.20 & 10& 10\\
1420+198 & 3C300 & 0.270 & 26.00 & 516$\pm$6 & 3.0$\pm$1.2 & 7.11$\pm$0.29 & 8 & 1\\
1441+262 & B2 & 0.0621 & 23.51 & 333$\pm$8 & 4.0$\pm$0.9 & 6.28$\pm$0.18 & 21& 20\\
1522+546 & 3C319 & 0.192 & 25.56 & 390$\pm$15 & 3.2$\pm$0.7 & 6.69$\pm$0.17 & 8 & 1\\
1533+557 & 3C322 & 1.681 & 27.49 & 279$\pm$7 & 3.6$\pm$0.7 & 6.15$\pm$0.15 & 9 & 9\\
1547+215 & 3C324 & 1.207 & 27.28 & 88$\pm$3 & 3.6$\pm$0.9 & 4.60$\pm$0.30 & 30& 30\\
1549+628 & 3C325 & 0.860 & 27.09 & 132$\pm$4 & 4.4$\pm$0.9 & 4.97$\pm$0.28 & 30& 30\\
1609+660 & 3C330 & 0.549 & 26.93 & 458$\pm$8 & 6.4$\pm$1.2 & 6.29$\pm$0.15 & 9 & 9\\
1609+312 & B2 & 0.0944 & 23.65 & 56$\pm$4 & 2.4$\pm$0.3 & 4.41$\pm$0.10 & 3 & 20\\
1615+324 & 3C332 & 0.1515 & 25.32 & 306$\pm$7 & 4.8$\pm$1.3 & 6.02$\pm$0.21 & 3 & 20\\
1618+177 & 3C334 & 0.555 & 26.45 & 430$\pm$15 & 2.8$\pm$0.4 & 6.93$\pm$0.12 & 9 & 9\\
1627+444 & 3C337 & 0.635 & 26.70 & 337$\pm$4 & 5.0$\pm$0.9 & 6.08$\pm$0.19 & 30& 30\\
1658+302 & B2 & 0.0351 & 23.39 & 120$\pm$10 & 2.2$\pm$0.2 & 5.47$\pm$0.07 & 21& 20\\
1723+510 & 3C356 & 1.079 & 26.96 & 643$\pm$13 & 7.9$\pm$1.0 & 6.55$\pm$0.10 & 9 & 9\\
1726+318 & 3C357 & 0.1664 & 25.43 & 395$\pm$10 & 3.0$\pm$0.6 & 6.76$\pm$0.16 &21,3&20\\
1957+405 & CygA & 0.0564 & 27.25 & 185$\pm$3 & 3.8$\pm$0.5 & 5.57$\pm$0.11 & 9 & 9\\
2019+098 & 3C411 & 0.467 & 27.05 & 201$\pm$5 & 2.6$\pm$0.6 & 6.00$\pm$0.18 & 26& 26\\
2104+763 & 3C427.1 & 0.572 & 26.75 & 173$\pm$5 & 2.9$\pm$0.7 & 5.71$\pm$0.19 & 9 & 9\\
2145+151 & 3C437 & 1.48 & 27.43 & 317$\pm$9 & 5.9$\pm$1.0 & 5.86$\pm$0.19 & 30& 30\\
\hline
\end{tabular*}
\vspace{2mm}
\begin{tabular*}{135mm}{@{}rlrl}
{\bf References}\\
(1)& Alexander and Leahy 1987& (16)& Mack et al. 1998\\
(2)& van Breugel and Willis 1981& (17)& Myers and Spangler 1985\\
(3)& Fanti et al. 1986& (18)& Parma et al. 1986\\
(4)& Hine 1979& (19)& Parma et al. 1996\\
(5)& Ishwara-Chandra and Saikia 1999& (20)& Parma et al. 1999\\
(6)& Klein et al. 1995& (21)& de Ruiter et al. 1986\\
(7)& Lara et al. 2000& (22)& Saripalli et al. 1994\\
(8)& Leahy and Williams 1984& (23)& Schoenmakers et al. 1998\\
(9)& Leahy et al. 1989& (24)& Schoenmakers et al. 2000\\
(10)& Liu et al. 1992& (25)& Schoenmakers et al. 2001\\
(11)& Machalski and Condon 1983& (26)& Spangler and Pogge 1984\\
(12)& Machalski and Condon 1985& (27)& FIRST (Becker et al. 1996)\\
(13)& Machalski and Jamrozy 2000& (28)& NVSS (Condon et al. 1998)\\
(14)& Machalski et al. 2001& (29)& this paper\\
(15)& Mack et al. 1997& (30)& Guerra et al. 2000\\
\end{tabular*}
\end{table*}
\section{Results of the Modelling}
\subsection{Jet Power $Q_{0}$ and Core Density $\rho_{0}$}
In the KDA model the jet power and
ambient density are independent, in accordance with physical intuition.
A distribution of these parameters derived for different sets of the sample
sources on the log($Q_{0}$)--log($\rho_{0}$) plane is shown in Figure~1a.
One can see that the {\sl giants} are not fully separated from the
other sources; however, (i) among the sources with
a comparable jet power $Q_{0}$, the {\sl giant} sources have an average central
density $\rho_{0}$ smaller than that of the normal-size
sources, and (ii) {\sl giants} have jets at least ten times more powerful than
those of the much smaller low-luminosity sources of comparable $\rho_{0}$.
Moreover, for a number of sources in the sample the derived values of their
fundamental parameters $Q_{0}$ and $\rho_{0}$ are very close, while their ages
are significantly different. Thus in view of the model assumptions, they may be
considered as `the same' source observed at different epochs of its
lifetime. Such bunches of three to five sources (hereafter called `clans') are
indicated in Figure~1a with the large circles. These clans have proved crucial
in the comparison of the observational data with the model predictions, and in the
analysis of the {\sl giant}-source phenomenon. A more detailed analysis of these
clans and their evolution will be given in Paper II of this series.
\begin{figure}
\special{psfile=mach1_f1.ps hoffset=-20 voffset=-500}
\vspace{170mm}
\caption{Plots of the jet power $Q_{0}$ {\bf a)} against central density of the
core $\rho_{0}$; {\bf b)} against source age. The {\sl giants} are indicated by
crosses, high-redshift sources --
diamonds, low-redshift sources -- open circles, and low-luminosity sources --
stars. The dotted lines mark
a constant linear size predicted from equation (10). The numbers above some symbols
indicate the observed size of the marked source.}
\label{f1}
\end{figure}
The KDA and BWR models predict that the luminosities of mature radio sources decrease
with their age. Therefore, more distant sources fall below the flux-density
limit of a sample sooner than nearer sources, and in any sample the
high-redshift sources will be younger and more luminous than the low-redshift
ones. A significant anticorrelation between $Q_{0}$ and age $t$, expected as a
consequence of the above effect (called `youth-redshift degeneracy' in BWR), is
shown in Figure~1b.
\subsection{Cocoon's Energy Density $u_{\rm c}$ and Total Emitted
Energy $E_{\rm tot}$}
In the KDA model, the energetics of the radio source is governed by the jet power
$Q_{0}$, adiabatic index $\Gamma_{\rm c}$, and the pressure ratio ${\cal P}_{\rm hc}$.
Since
$\Gamma_{\rm c}$ is assumed constant for all the sources and $Q_{0}$ is constant
for a given source, the energy density of the cocoon $u_{\rm c}$ is determined by
the pressure $p_{\rm c}$ attained by the cocoon at age $t$. As the pressure
decreases with time [cf. Equations (5), (3) and (2)] while the volume increases
with time [Equation (6)], their product, i.e. the model total emitted energy
$E_{\rm tot}$, is an increasing function of time, and its value is a fraction of
the energy delivered by the jet since the source's birth, $Q_{0}t$
[cf. Equation (7)]. A distribution of
$u_{\rm c}$ and $E_{\rm tot}$ parameters on the log($u_{\rm c}$)--log($E_{\rm tot}$)
plane is shown in Figure~2. The time axes, calculated with Equations (2), (5) and
(7) for a constant jet power, are indicated by the dotted lines.
In order to investigate the time evolution of {\sl giant} radio sources
on the basis of dynamical evolution of the entire FRII-type population,
in the next Subsection we examine several correlations between the basic
physical parameters of the sample sources derived from the data and
the above two models. As a result, we find that all statistical
tendencies are similar in both models. Therefore, below we present
only the correlations between parameters derived with the preferred
KDA model.
\begin{figure}
\special{psfile=mach1_f2.ps hoffset=-20 voffset=-230}
\vspace{75mm}
\caption{Plot of the energy density $u_{\rm c}$ against total emitted energy
$E_{\rm tot}$. The symbols indicating sources are the same as in Figure~1. Dotted
lines mark the time axes for different values of $Q_{0}$.}
\label{f2}
\end{figure}
\subsection{Correlations between Observed and Model Parameters}
Below we analyse the relations between principal observational and model
parameters of the sample sources important for their time evolution. Most of
these parameters are interdependent (for example, the linear size of a source
likely depends on both its age and the ambient-medium density), hence each
parameter of the sources in our sample correlates to some degree with the others.
Therefore, in order to determine which correlation is the strongest, we calculate
the Pearson partial correlation coefficients between selected parameters. For the
reason that most correlations between different parameters seem to be a power law,
all correlations are calculated between logarithms of the given parameters
(for the sake of simplicity, the `log' signs are omitted in all
Tables showing the partial correlations). Hereafter
$r_{XY}$ denotes the correlation coefficient between parameters
$X$ and $Y$, $r_{XY/W}$ is the partial correlation coefficient between these
parameters in the presence of a third one, $W$, which can correlate with both
$X$ and $Y$, and $P_{XY/W}$ is the probability that the test pair $X$ and $Y$ is
uncorrelated when $W$ is held constant. Similarly, $r_{XY/VW}$, $r_{XY/UVW}$,
$P_{XY/VW}$, and $P_{XY/UVW}$ are the correlation coefficients for the correlations
involving four or five parameters, and the related probabilities, respectively.
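The first-order coefficient $r_{XY/W}$ follows the standard partial-correlation formula; the higher-order coefficients $r_{XY/VW}$ and $r_{XY/UVW}$ are obtained by applying it recursively. A minimal sketch:

```python
import math

def partial_corr(r_xy, r_xw, r_yw):
    """First-order Pearson partial correlation r_{XY/W}: the correlation
    between X and Y with the third parameter W held constant."""
    return (r_xy - r_xw * r_yw) / math.sqrt((1.0 - r_xw**2) * (1.0 - r_yw**2))
```

When $W$ is uncorrelated with both $X$ and $Y$ the partial coefficient reduces to $r_{XY}$; conversely, a correlation driven entirely by a shared dependence on $W$ vanishes once $W$ is held constant.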
A strong correlation between linear size and spectral age of 3C radio sources was
already noted by Alexander and Leahy (1987) and confirmed by Liu et al. (1992).
This correlation in our sample is shown in Figure~3. The {\sl giant} sources
show no tendency towards expansion velocities higher than those of the
normal-size sources; the same conclusion
was reached by Schoenmakers et al. (2000). However, two other
aspects are worth emphasizing: (i) there are four high-redshift {\sl giants}
which are much younger than the low-redshift {\sl giants}. Two of them
are quasars; it seems that they might have grown so large under some exceptional
conditions. (ii) The $D-t$ relation for the low-luminosity sources (mostly B2)
follows the same slope as that for the other sources, but the
low-luminosity sources are definitely much smaller, indicating a dependence
of size and expansion velocity on source luminosity.
\begin{figure}[t]
\special{psfile=mach1_f3.ps hoffset=-20 voffset=-230}
\vspace{75mm}
\caption{Plot of the linear size $D$ against source age $t$. The
symbols indicating sources are the same as in Figure~1. The solid lines indicate
the implied expansion velocities in the speed of light $c$ units. The dashed line
shows the model predicted $D(t)\propto L_{\rm j}(t)$ relation resulting from
Equation~(1) for $\beta$=1.5.}
\label{f3}
\end{figure}
The partial correlation coefficients between the size $D$ and $t$, $Q_{0}$, and
$\rho_{0}$ together with the related probabilities of their chance correlation
are given in Table~4.
\begin{table*}[htb]
\footnotesize
\caption{Age and physical parameters}
\begin{tabular*}{165mm}{@{}lrcccllrr}
\hline
Source & t & lg$u_{\rm eq}$ & $B_{\rm eq}$ & lg$U_{\rm eq}$
&lg$Q_{0}$ & lg$\rho_{0}$ & lg$p_{\rm c}$ &\underline{$2Q_{0}t$}\\
&[Myr]&[Jm$^{-3}$] & [nT] & [J] & [W] & [kgm$^{-3}$] & [Nm$^{-2}$] & $U_{\rm eq}$\\
\hline
GIANTS\\
0109+492 & 96$\pm$18 & $-13.74\pm 0.12$ & 0.14$\pm$0.02 & 52.81$\pm$0.28 & 37.69 & $-$24.03 & $-$13.86 &4.6$\pm$2.0\\
0136+396 & 89$\pm$17 & $-13.24\pm 0.17$ & 0.25$\pm$0.05 & 53.14$\pm$0.31 & 38.29 & $-$23.30 & $-$13.36 &8.0$\pm$4.2\\
0313+683 & 140$\pm$24 & $-14.11\pm 0.14$ & 0.09$\pm$0.02 & 52.91$\pm$0.23 & 37.82 & $-$23.87 & $-$14.11 &7.2$\pm$2.6\\
0319$-$454 & 180$\pm$40 & $-13.83\pm 0.13$ & 0.13$\pm$0.02 & 53.62$\pm$0.28 & 38.07 & $-$23.79 & $-$14.15 &3.2$\pm$1.3\\
0437$-$244 & 19$\pm$6 & $-12.37\pm 0.15$ & 0.68$\pm$0.12 & 53.28$\pm$0.29 & 39.20 & $-$23.37 & $-$12.46 &10.0$\pm$3.5\\
0813+758 & 84$\pm$4 & $-13.60\pm 0.15$ & 0.17$\pm$0.03 & 53.47$\pm$0.22 & 38.47 & $-$23.90 & $-$13.79 &5.3$\pm$2.5\\
0821+695 & 84$\pm$10 & $-13.37\pm 0.17$ & 0.21$\pm$0.04 & 53.87$\pm$0.30 & 38.84 & $-$23.68 & $-$13.64 &5.0$\pm$2.8\\
1003+351 & 127$\pm$18 & $-14.25\pm 0.14$ & 0.08$\pm$0.01 & 53.42$\pm$0.27 & 38.62 & $-$23.83 & $-$14.34 &12.8$\pm$6.1\\
1025$-$229 & 64$\pm$12 & $-12.82\pm 0.22$ & 0.40$\pm$0.10 & 53.19$\pm$0.31 & 38.23 & $-$23.25 & $-$13.10 &4.4$\pm$2.3\\
1209+745 & 110$\pm$20 & $-13.79\pm 0.13$ & 0.13$\pm$0.02 & 52.93$\pm$0.28 & 37.60 & $-$24.23 & $-$13.99 &5.3$\pm$1.5\\
1232+216 & 22$\pm$3 & $-12.29\pm 0.32$ & 0.74$\pm$0.27 & 53.37$\pm$0.43 & 39.20 & $-$23.20 & $-$12.38 &9.4$\pm$8.0\\
1312+698 & 55$\pm$5 & $-13.44\pm 0.14$ & 0.19$\pm$0.03 & 52.74$\pm$0.29 & 37.95 & $-$23.97 & $-$13.56 &5.7$\pm$3.2\\
1343+379 & 94$\pm$16 & $-14.20\pm 0.33$ & 0.08$\pm$0.03 & 53.07$\pm$0.41 & 38.24 & $-$24.16 & $-$14.25 &8.8$\pm$6.8\\
1349+647 & 16$\pm$4 & $-11.84\pm 0.17$ & 1.25$\pm$0.25 & 54.03$\pm$0.33 & 39.79 & $-$23.30 & $-$12.07 &5.8$\pm$3.0\\
1358+305 & 125$\pm$25 & $-13.95\pm 0.14$ & 0.12$\pm$0.02 & 53.58$\pm$0.30 & 38.34 & $-$24.12 & $-$14.10 &4.6$\pm$2.2\\
1543+845 & 130$\pm$21 & $-13.46\pm 0.23$ & 0.19$\pm$0.05 & 53.01$\pm$0.35 & 38.03 & $-$23.00 & $-$13.61 &8.6$\pm$5.6\\
1550+202 & 134$\pm$27 & $-13.63\pm 0.27$ & 0.16$\pm$0.05 & 53.24$\pm$0.36 & 38.26 & $-$23.21 & $-$13.73 &8.9$\pm$5.7\\
2043+749 & 64$\pm$11 & $-13.57\pm 0.10$ & 0.17$\pm$0.02 & 53.04$\pm$0.23 & 38.14 & $-$24.06 & $-$13.74 &5.1$\pm$1.9\\
& & &\\
NORMAL\\
0154+286 & 13$\pm$2 & $-11.55\pm 0.16$ & 1.74$\pm$0.32 & 53.46$\pm$0.32 & 39.43 & $-$22.98 & $-$11.72 &7.7$\pm$5.0\\
0229+341 & 11$\pm$2 & $-11.31\pm 0.19$ & 2.31$\pm$0.43 & 53.65$\pm$0.35 & 39.69 & $-$22.91 & $-$11.33 &7.8$\pm$5.4\\
0231+313 & 4.6$\pm$0.3& $-10.71\pm 0.16$ & 4.58$\pm$0.85 & 53.62$\pm$0.29 & 39.64 & $-$23.55 & $-$11.03 &3.2$\pm$1.9\\
0404+428 & 18$\pm$3 & $-11.91\pm 0.17$ & 1.15$\pm$0.23 & 53.08$\pm$0.30 & 39.02 & $-$22.96 & $-$12.01 &10.2$\pm$5.5\\
0610+260 & 45$\pm$4 & $-11.72\pm 0.18$ & 1.43$\pm$0.30 & 53.47$\pm$0.36 & 39.00 & $-$22.20 & $-$11.54 &10.2$\pm$7.6\\
0640+233 & 60$\pm$10 & $-12.34\pm 0.16$ & 0.70$\pm$0.13 & 53.03$\pm$0.32 & 38.33 & $-$22.66 & $-$12.30 &7.9$\pm$4.5\\
0642+214 & 14$\pm$5 & $-11.71\pm 0.22$ & 1.44$\pm$0.37 & 52.58$\pm$0.34 & 38.87 & $-$22.71 & $-$11.23 &17.9$\pm$7.9\\
0710+118 & 35$\pm$5 & $-11.54\pm 0.16$ & 1.76$\pm$0.32 & 53.57$\pm$0.34 & 39.09 & $-$22.31 & $-$11.52 &7.8$\pm$5.0\\
0806+426 & 7.0$\pm$1.2& $-10.46\pm 0.12$ & 6.13$\pm$0.82 & 53.22$\pm$0.33 & 39.27 & $-$22.56 & $-$10.57 &5.0$\pm$2.3\\
0828+324 & 59$\pm$9 & $-13.27\pm 0.18$ & 0.24$\pm$0.05 & 51.90$\pm$0.27 & 37.20 & $-$23.60 & $-$13.24 &7.8$\pm$3.6\\
0908+376 & 28$\pm$5 & $-12.33\pm 0.16$ & 0.71$\pm$0.13 & 51.38$\pm$0.23 & 36.83 & $-$23.36 & $-$12.39 &5.5$\pm$2.0\\
0958+290 & 22$\pm$4 & $-12.11\pm 0.15$ & 0.92$\pm$0.16 & 52.94$\pm$0.30 & 38.53 & $-$23.31 & $-$12.32 &5.7$\pm$2.9\\
1008+467 & 2.8$\pm$0.3& $-10.28\pm 0.16$ & 7.48$\pm$1.39 & 53.20$\pm$0.35 & 39.66 & $-$23.21 & $-$10.36 &5.5$\pm$4.0\\
1012+488 & 38$\pm$6 & $-12.83\pm 0.13$ & 0.40$\pm$0.06 & 53.40$\pm$0.23 & 38.81 & $-$23.93 & $-$12.80 &6.7$\pm$2.6\\
1030+585 & 14$\pm$2 & $-11.56\pm 0.16$ & 1.72$\pm$0.32 & 53.01$\pm$0.33 & 38.98 & $-$22.84 & $-$11.62 &8.9$\pm$5.5\\
1056+432 & 3.2$\pm$0.3& $-10.72\pm 0.14$ & 4.55$\pm$0.71 & 52.75$\pm$0.31 & 39.18 & $-$23.44 & $-$10.81 &5.7$\pm$3.5\\
1100+772 & 32$\pm$6 & $-11.92\pm 0.16$ & 1.14$\pm$0.21 & 52.76$\pm$0.38 & 38.24 & $-$22.87 & $-$11.95 &6.5$\pm$4.6\\
1111+408 & 3.1$\pm$0.2& $-10.75\pm 0.15$ & 4.40$\pm$0.76 & 52.94$\pm$0.24 & 39.20 & $-$23.79 & $-$10.97 &3.8$\pm$1.9\\
1113+295 & 19$\pm$3 & $-12.42\pm 0.16$ & 0.64$\pm$0.12 & 51.25$\pm$0.26 & 36.84 & $-$23.84 & $-$12.51 &5.0$\pm$2.2\\
1140+223 & 1.7$\pm$0.6& $-9.87 \pm 0.11$ & 12.0$\pm$1.50 & 52.80$\pm$0.24 & 39.38 & $-$23.27 & $-$10.02 &4.4$\pm$1.0\\
1141+354 & 3.4$\pm$0.8& $-10.69\pm 0.16$ & 4.68$\pm$0.87 & 52.71$\pm$0.26 & 39.03 & $-$23.44 & $-$10.85 &4.8$\pm$2.4\\
1142+318 & 22$\pm$5 & $-11.66\pm 0.16$ & 1.54$\pm$0.29 & 53.69$\pm$0.37 & 39.64 & $-$22.52 & $-$11.57 &13.0$\pm$8.1\\
1143+500 & 1.2$\pm$0.5& $-9.67\pm 0.16$ & 15.1$\pm$2.80 & 52.22$\pm$0.23 & 39.32 & $-$22.65 & $-$9.58 &10.0$\pm$4.2\\
\end{tabular*}
\end{table*}
\begin{table*}[t]
\footnotesize
\begin{tabular*}{165mm}{@{}lrcccllrr}
\hline
Source & t & lg$u_{\rm eq}$ & $B_{\rm eq}$ & lg$U_{\rm eq}$
&lg$Q_{0}$ & lg$\rho_{0}$ & lg$p_{\rm c}$ &\underline{$2Q_{0}t$}\\
&[Myr]&[Jm$^{-3}$] & [nT] & [J] & [W] & [kgm$^{-3}$] & [Nm$^{-2}$] & $U_{\rm eq}$\\
\hline
1147+130 & 12$\pm$4 & $-11.10\pm 0.14$ & 2.93$\pm$0.48 & 53.55$\pm$0.26 & 39.51 & $-$22.60 & $-$11.16 &7.7$\pm$2.4\\
1157+732 & 13$\pm$3 & $-11.17\pm 0.16$ & 2.70$\pm$0.49 & 53.77$\pm$0.27 & 39.73 & $-$22.73 & $-$11.20 &7.8$\pm$3.2\\
1206+439 & 3.0$\pm$0.3& $-10.34\pm 0.12$ & 7.00$\pm$1.00 & 52.98$\pm$0.20 & 39.52 & $-$23.04 & $-$10.32 &7.1$\pm$2.6\\
1216+507 & 50$\pm$15 & $-13.11\pm 0.18$ & 0.29$\pm$0.06 & 52.75$\pm$0.28 & 37.98 & $-$23.73 & $-$13.29 &5.8$\pm$1.9\\
1218+339 & 3.9$\pm$1.1& $-10.35\pm 0.17$ & 6.90$\pm$1.35 & 53.26$\pm$0.30 & 39.60 & $-$23.00 & $-$10.41 &5.7$\pm$2.4\\
1221+423 & 18$\pm$8 & $-11.94\pm 0.16$ & 1.12$\pm$0.21 & 53.41$\pm$0.36 & 39.25 & $-$23.26 & $-$11.90 &8.1$\pm$3.1\\
1241+166 & 4.2$\pm$0.4& $-11.40\pm 0.16$ & 2.07$\pm$0.38 & 52.73$\pm$0.30 & 39.01 & $-$24.21 & $-$11.44 &5.5$\pm$3.3\\
1254+476 & 2.8$\pm$0.3& $-10.68\pm 0.16$ & 4.77$\pm$0.88 & 53.08$\pm$0.25 & 39.59 & $-$23.64 & $-$10.69 &6.2$\pm$2.9\\
1308+277 & 60$\pm$9 & $-12.60\pm 0.18$ & 0.52$\pm$0.11 & 52.88$\pm$0.31 & 38.37 & $-$22.59 & $-$12.63 &12.6$\pm$7.2\\
1319+428 & 110$\pm$20 & $-12.77\pm 0.16$ & 0.43$\pm$0.08 & 52.03$\pm$0.30 & 37.15 & $-$22.79 & $-$12.90 &9.1$\pm$4.9\\
1343+500 & 4.0$\pm$0.5& $-10.71\pm 0.16$ & 4.61$\pm$0.85 & 52.76$\pm$0.22 & 39.17 & $-$23.28 & $-$10.67 &7.0$\pm$2.7\\
1347+285 & 21$\pm$4 & $-12.31\pm 0.13$ & 0.73$\pm$0.11 & 51.13$\pm$0.22 & 36.35 & $-$23.85 & $-$12.72 &2.4$\pm$0.8\\
1404+344 & 2.8$\pm$0.2& $-10.59\pm 0.16$ & 5.23$\pm$0.97 & 53.00$\pm$0.34 & 39.72 & $-$23.15 & $-$10.50 &9.8$\pm$6.7\\
1420+198 & 43$\pm$6 & $-12.27\pm 0.14$ & 0.76$\pm$0.12 & 53.31$\pm$0.39 & 38.49 & $-$23.22 & $-$12.47 &4.4$\pm$3.3\\
1441+262 & 77$\pm$13 & $-13.30\pm 0.19$ & 0.23$\pm$0.05 & 51.46$\pm$0.35 & 36.49 & $-$23.40 & $-$13.46 &5.6$\pm$3.5\\
1522+546 & 43$\pm$7 & $-12.33\pm 0.15$ & 0.71$\pm$0.12 & 52.82$\pm$0.30 & 38.07 & $-$23.13 & $-$12.49 &5.1$\pm$2.7\\
1533+557 & 7.3$\pm$0.6& $-11.00\pm 0.16$ & 3.28$\pm$0.60 & 53.62$\pm$0.30 & 39.82 & $-$23.01 & $-$10.99 &7.9$\pm$4.7\\
1547+215 & 3.4$\pm$1.0& $-10.08\pm 0.18$ & 9.44$\pm$1.89 & 53.02$\pm$0.40 & 39.45 & $-$22.62 & $-$10.19 &5.8$\pm$4.1\\
1549+628 & 5.5$\pm$1.5& $-10.55\pm 0.15$ & 5.50$\pm$0.90 & 52.91$\pm$0.47 & 39.34 & $-$22.47 & $-$10.51 &9.3$\pm$4.8\\
1609+660 & 20$\pm$3.5& $-11.39\pm 0.18$ & 2.10$\pm$0.43 & 53.37$\pm$0.31 & 39.35 & $-$22.19 & $-$11.36 &12.9$\pm$7.1\\
1609+312 & 12$\pm$3 & $-12.08\pm 0.14$ & 0.95$\pm$0.15 & 50.80$\pm$0.23 & 36.42 & $-$24.00 & $-$12.39 &3.3$\pm$1.0\\
1615+324 & 47$\pm$9 & $-12.24\pm 0.15$ & 0.79$\pm$0.14 & 52.25$\pm$0.34 & 37.78 & $-$22.40 & $-$12.18 &10.8$\pm$6.3\\
1618+177 & 32$\pm$4 & $-11.91\pm 0.15$ & 1.15$\pm$0.20 & 53.49$\pm$0.26 & 38.83 & $-$23.10 & $-$12.07 &4.7$\pm$2.2\\
1627+444 & 24$\pm$6.5& $-11.35\pm 0.13$ & 2.20$\pm$0.32 & 53.23$\pm$0.37 & 39.03 & $-$22.12 & $-$11.33 &9.6$\pm$4.0\\
1658+302 & 48$\pm$9 & $-13.05\pm 0.17$ & 0.31$\pm$0.06 & 50.89$\pm$0.20 & 36.10 & $-$23.67 & $-$13.12 &5.2$\pm$1.5\\
1723+510 & 20$\pm$3 & $-11.54\pm 0.18$ & 1.76$\pm$0.36 & 53.48$\pm$0.27 & 39.55 & $-$22.29 & $-$11.53 &16.0$\pm$7.7\\
1726+318 & 27$\pm$5 & $-12.48\pm 0.14$ & 0.60$\pm$0.10 & 52.75$\pm$0.29 & 38.14 & $-$23.76 & $-$12.67 &4.5$\pm$2.1\\
1957+405 & 13$\pm$3 & $-10.39\pm 0.15$ & 6.60$\pm$1.18 & 53.64$\pm$0.25 & 39.39 & $-$22.03 & $-$10.61 &4.7$\pm$1.6\\
2019+098 & 9.2$\pm$1.2& $-11.14\pm 0.10$ & 2.80$\pm$0.32 & 53.33$\pm$0.27 & 39.28 & $-$23.20 & $-$11.22 &5.6$\pm$2.7\\
2104+763 & 7.7$\pm$0.6& $-10.87\pm 0.14$ & 3.80$\pm$0.63 & 53.31$\pm$0.31 & 39.06 & $-$23.28 & $-$11.25 &2.9$\pm$1.9\\
2145+151 & 6.4$\pm$1.1& $-10.75\pm 0.13$ & 4.38$\pm$0.64 & 53.61$\pm$0.38 & 39.87 & $-$22.70 & $-$10.90 &7.4$\pm$3.9\\
\hline
\end{tabular*}
\end{table*}
\begin{table}[htb]
\caption{Observational and model parameters characterizing the source's
cocoon}
\begin{tabbing}
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\=xxxxxxxxxxxx\= \kill
Parameter \>Symbol \>Dimension\\
\end{tabbing}
{\sc Observational parameters from radio maps and spectra}
\begin{tabbing}
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\=xxxxxxxxxxxx\= \kill
Apparent (projected) linear size \> $D$ \>[kpc]\\
axial ratio \> $AR$ \>[dimensionless]\\
observed volume \> $V_{0}$ \>[kpc$^{3}$]\\
1.4-GHz luminosity \> $P_{\rm 1.4}$\>[W\,Hz$^{-1}$sr$^{-1}$]\\
source redshift \> $z$ \>[dimensionless]\\
\end{tabbing}
{\sc Physical parameters derived directly from the above data}
\begin{tabbing}
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\=xxxxxxxxxxxx\= \kill
equipartition magnetic field strength \> $B_{\rm eq}$ \> [nT]\\
equipartition energy density \> $u_{\rm eq}$ \> [J\,m$^{-3}$]\\
equipartition emitted energy \> $U_{\rm eq}\equiv u_{\rm eq}V_{0}$ \> [J]\\
\end{tabbing}
{\sc Physical parameters assumed in the model}
\begin{tabbing}
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\=xxxxxxxxxxxx\= \kill
Central core radius \> $a_{0}$ \> [kpc]\\
exponent of density profile \> $\beta$ \> [dimensionless]\\
exponent of particle energy distribution \> $p$ \> [dimensionless]\\
ratio of jet-head to cocoon pressure \> ${\cal P}_{\rm hc}\equiv p_{\rm h}/p_{\rm c}$\>[dimensionless]\\
adiabatic indices of ambient medium,\\
cocoon,
and magnetic field, respectively \>$\Gamma_{\rm x}$, $\Gamma_{\rm c}$, $\Gamma_{\rm B}$\>[dimensionless]\\
source axis inclination \>$\theta$ \>[$^{\circ}$]\\
\end{tabbing}
{\sc Physical parameters fitted with the model for a given age}
\begin{tabbing}
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\=xxxxxxxxxxxx\= \kill
Jet (constant) power \>$Q_{0}$ \>[W]\\
central core density \>$\rho_{0}$ \>[kg\,m$^{-3}$]\\
cocoon pressure \>$p_{\rm c}$\>[N\,m$^{-2}$]\\
energy density \>$u_{\rm c}$\>[J\,m$^{-3}$]\\
total emitted energy \>$E_{\rm tot}\equiv u_{\rm c}V_{\rm c}$\>[J]
\end{tabbing}
\end{table}
\begin{table}[h]
\caption{The correlations between (log) $D$ and $t$ or $Q_{0}$ or $\rho_{0}$
when other parameters are kept constant}
\begin{tabular}{lclrll}
\hline
Correlation & $r_{XY}$ & $r_{XY/U}$ & $P_{XY/U}$\\
& & $r_{XY/V}$ & $P_{XY/V}$\\
& & $r_{XY/W}$ & $P_{XY/W}$ & $r_{XY/UVW}$ & $P_{XY/UVW}$\\
\hline
$D-t$/$Q_{0}$ &+0.804 & +0.941 & $\ll$0.001\\
$D-t$/$\rho_{0}$ & & +0.805 & $\ll$0.001\\
$D-t$/1+$z$ & & +0.839 & $\ll$0.001\\
$D-t$/$Q_{0}$,$\rho_{0}$,1+$z$ & & & & +0.952 & $\ll$0.001\\
& &\\
$D-Q_{0}$/$\rho_{0}$ &$-$0.065 & +0.015 & 0.898\\
$D-Q_{0}$/$t$ & & +0.810 & $\ll$0.001\\
$D-Q_{0}$/1+$z$ & & +0.450 & 0.001\\
$D-Q_{0}$/$\rho_{0}$,$t$,1+$z$ & & & & +0.866 & $\ll$0.001\\
& &\\
$D-\rho_{0}$/$Q_{0}$ &$-$0.211 & $-$0.256 & 0.031\\
$D-\rho_{0}$/$t$ & & $-$0.080 & 0.505\\
$D-\rho_{0}$/1+$z$ & & $-$0.125 & 0.298\\
$D-\rho_{0}$/$Q_{0}$,$t$,1+$z$ & & & & $-$0.716 & $\ll$0.001\\
\hline
\end{tabular}
\end{table}
\noindent
In view of the dynamical model applied, and as a result of the above statistical
correlations, we see that the linear size of a source strongly depends on
both its age and the jet power, with the correlation with age being the strongest.
However, the size also anti-correlates with the central density of the core. That
anticorrelation seems to be weaker than the correlations with $Q_{0}$ and $t$,
and becomes well pronounced only when
all three remaining parameters ($Q_{0}$, $t$ and $z$) are kept constant.
Fitting a surface to the values of $D$ over the $t$--$Q_{0}$ plane, we find
\begin{equation}
D(t,Q_{0})\propto t^{1.06\pm 0.03}Q_{0}^{0.31\pm 0.02}.
\end{equation}
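The surface fit above amounts to ordinary least squares on a plane in log space. A minimal sketch of the procedure (all numerical values below are synthetic and illustrative, not the paper's data) might look like:

```python
import numpy as np

# Synthetic sample generated from an assumed power law
# D ~ t^1.06 * Q0^0.31 with log-normal scatter; all numbers here are
# illustrative, not the paper's data.
rng = np.random.default_rng(0)
t = 10.0 ** rng.uniform(0.5, 2.5, 60)      # ages [Myr]
q0 = 10.0 ** rng.uniform(37.0, 40.0, 60)   # jet powers [W]
d = 1.0e-11 * t**1.06 * q0**0.31 * 10.0 ** rng.normal(0.0, 0.05, 60)

# Fit log D = a*log t + b*log Q0 + c, i.e. a plane in log space.
A = np.column_stack([np.log10(t), np.log10(q0), np.ones_like(t)])
(a, b, c), *_ = np.linalg.lstsq(A, np.log10(d), rcond=None)
print(f"D ~ t^{a:.2f} * Q0^{b:.2f}")
```

The fitted coefficients $a$ and $b$ play the role of the exponents quoted in Equation (10), with their uncertainties obtainable from the least-squares covariance.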
\noindent
The above relation well illustrates the influence of the jet power on the source
(cocoon) expansion velocity, i.e. its length at a given age. However, the source
expansion velocity ought to depend also on the external environment conditions.
Indeed, the significant partial correlation coefficient for the $D-\rho_{0}$
correlation in Table~4 confirms this effect.
An age exponent greater than unity in Equation~(10) may suggest a
statistical acceleration of the expansion velocities with age, which contradicts
the deceleration implied by Equation~(1) for $\beta$=1.5 (shown with the
dashed line in Figure~3). However, the observed luminosities of the sample sources
demand different jet powers (see below). A linear regression on the age axis of
their apparent $D$ values, transformed to a constant reference value of $Q_{0}$
with Equation~(10), gives $D(t)\propto t^{1.05\pm 0.05}$. This is
a somewhat surprising result, and there are several possible explanations for it: (i)
$\beta>2$, (ii) a non-constant $Q_{0}$ during the lifetime of sources, (iii) a
non-representative sample of sources, or (iv) a real acceleration of
the expansion speed, at least for sources evolving in specific environmental
conditions. We will return to this possibility in Paper~II.
The partial correlation coefficients calculated for the correlation between
the luminosity $P_{\rm 1.4}$ and $t$, $Q_{0}$ and 1+$z$, as well as the related
probabilities of their chance correlations are given in Table~5.
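The partial correlation coefficients quoted in Tables 4 to 7 can be computed either from the standard recursion over pairwise Pearson coefficients or, equivalently, by correlating regression residuals. A small helper (an illustrative sketch, not the authors' code) could be:

```python
import numpy as np

def partial_corr(x, y, controls):
    """Correlation of x and y with the listed control variables held
    fixed: correlate the residuals of x and y after linear regression
    (with intercept) on the controls.  For a single control U this
    equals the textbook formula
    r_XY/U = (r_XY - r_XU*r_YU) / sqrt((1-r_XU^2)*(1-r_YU^2))."""
    Z = np.column_stack(list(controls) + [np.ones(len(x))])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])
```

For Table 4, for instance, $X=\log D$, $Y=\log t$ and the controls run over $\log Q_{0}$, $\log\rho_{0}$ and $\log(1+z)$ in the relevant combinations.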
\begin{figure}[t]
\special{psfile=mach1_f4.ps hoffset=-20 voffset=-230}
\vspace{75mm}
\caption{Plot of the model jet power $Q_{0}$ against the observed 1.4 GHz luminosity
of the sample sources.}
\label{f4}
\end{figure}
\begin{table}[h]
\caption{The correlations between (log) $P_{\rm 1.4}$ and $t$ or $Q_{0}$ or 1+$z$
when other parameters are kept constant}
\begin{tabular}{lclrlr}
\hline
Correlation & $r_{XY}$ & $r_{XY/U}$ & $P_{XY/U}$\\
& & $r_{XY/V}$ & $P_{XY/V}$\\
& & & & $r_{XY/UV}$ & $P_{XY/UV}$\\
\hline
$P_{\rm 1.4}-t$/$Q_{0}$ &$-$0.718 &$-$0.647 & 0.001\\
$P_{\rm 1.4}-t$/1+$z$ & &$-$0.351 & 0.002\\
$P_{\rm 1.4}-t$/$Q_{0}$,1+$z$ & & & &$-$0.623 & 0.005\\
& &\\
$P_{\rm 1.4}-Q_{0}$/$t$ &+0.940 & +0.926 & $\ll$0.001\\
$P_{\rm 1.4}-Q_{0}$/1+$z$ & & +0.854 & $\ll$0.001\\
$P_{\rm 1.4}-Q_{0}$/$t$,1+$z$ & & & & +0.900 & $\ll$0.001\\
& &\\
$P_{\rm 1.4}-$(1+$z$)/$Q_{0}$ &+0.771& +0.270 & 0.023\\
$P_{\rm 1.4}-$(1+$z$)/$t$ & & +0.515 & $<$0.001\\
$P_{\rm 1.4}-$(1+$z$)/$Q_{0}$,$t$ & & & & $-$0.153 & 0.21\\
\hline
\end{tabular}
\end{table}
\noindent
Table~5 shows that the strongest correlation is between the source apparent
luminosity and its jet power, although the anticorrelation between the luminosity
and age is also significant. The model values of $Q_{0}$ vs. the observed 1.4-GHz
luminosities of the sample sources are plotted in Figure~4.
\begin{figure}[t]
\special{psfile=mach1_f5.ps hoffset=-70 voffset=-250}
\vspace{75mm}
\caption{Plot of the 1.4 GHz luminosity $P_{\rm 1.4}$ transformed to a constant
jet power of 10$^{38}$\,W against the source age $t$. The symbols indicating sources
are the same as in Figure~1. The dashed curve shows the $P-t$ relation for
$Q_{0}$=10$^{38}$\,W resulting
from the iterative solution of the relevant KDA equations; cf. Section~3.3.}
\label{f5}
\end{figure}
The {\sl giant} sources seemingly have either higher jet powers or lower radio
luminosities than the normal-size sample sources. According to the model applied,
the source luminosity (at a constant $Q_{0}$) decreases with time, thus Figure~4
confirms that the {\sl giants}, statistically older than normal-size sources in the
sample (cf. Table~2), are less luminous than younger comparison sources with a
similar $Q_{0}$.
Fitting a surface to the values of $P_{\rm 1.4}$ over the $t$--$Q_{0}$ plane, we find
\[P_{\rm 1.4}(t,Q_{0})\propto t^{-0.61\pm 0.08}Q_{0}^{0.99\pm 0.05}.\]
\noindent
An anticorrelation between the apparent luminosity and age of matured sources has been
predicted by the KDA and BRW analytical models. The data for the sample sources
allow us to verify those predictions. This anticorrelation in our sample is shown in
Figure~5 where the 1.4-GHz luminosities are transformed to the constant jet power
of 10$^{38}$\,W according to the surface fit given above. There is an evident lack
of powerful old sources concordant with predictions of the above two models.
However, the
luminosity of {\sl giant} sources seems to decrease faster than that
predicted by the KDA model, which is very likely connected with a departure from the
self-similarity of the cocoon expansion in that model. Therefore our data on
{\sl giant} sources rather support the evolutionary predictions of the BRW model.
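The transformation of the 1.4-GHz luminosities to a constant jet power, used for Figure 5, follows directly from the surface fit quoted above. A hedged sketch (only the exponent comes from the text; the function name and sample values are illustrative):

```python
# Scale observed 1.4-GHz luminosities to a common reference jet power,
# as done for Figure 5.  Only the exponent (0.99) comes from the surface
# fit in the text; the reference power and inputs are illustrative.
Q_REF = 1.0e38   # reference jet power [W]
B_Q = 0.99       # fitted Q0 exponent of P_1.4

def p14_at_reference(p14_obs, q0):
    """Luminosity the source would have at jet power Q_REF, age fixed."""
    return p14_obs * (Q_REF / q0) ** B_Q
```

Since the fitted exponent is close to unity, a source twice as powerful as the reference is scaled down in luminosity by a factor of almost exactly two.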
To our knowledge, axial ratios of {\sl giant} sources were first
analysed and compared with those of smaller FRII-type sources by Subrahmanyan et
al. (1996), who found no difference between the axial ratios of eight {\sl
giants} and eight 3C sources with a median size of about 400 kpc. The authors
did not specify which 3C sources were considered, but since all the {\sl giant} and
normal sources were of comparable powers and at comparable redshifts, we
assume they might be of similar ages, so the dependence of $AR$ on time could
not be visible.
In the BRW model the axial ratio of an individual source
steadily increases throughout its lifetime. Moreover, that model implies a
dependence of the $AR$ on the jet power $Q_{0}$. The latter dependence was
probably reflected by an apparent correlation between $AR$ and the 178-MHz
luminosity
of 3C sources noted by Leahy and Williams (1984). Taking into account the
unavoidable anticorrelation between $Q_{0}$ and age in any sample of sources (cf.
Section~4.1), in Table~6 we
have calculated the partial correlation coefficients and the related
probabilities of chance correlations between $AR$ and $t$, $AR$ and $Q_{0}$, and
$AR$ and $\rho_{0}$ when relevant combinations of the parameters $t$, $Q_{0}$,
$\rho_{0}$, and 1+$z$ are kept constant.
\begin{table}[htb]
\caption{The correlations between (log) $AR$ and $t$ or $Q_{0}$ or $\rho_{0}$
when the other parameters are kept constant}
\begin{tabular}{@{}lclrlr}
\hline
Correlation & $r_{XY}$ & $r_{XY/U}$ & $P_{XY/U}$\\
& & $r_{XY/V}$ & $P_{XY/V}$\\
& & $r_{XY/W}$ & $P_{XY/W}$ & $r_{XY/UVW}$ & $P_{XY/UVW}$\\
\hline
$AR-t$/$Q_{0}$ &+0.328 & +0.615 & $\ll$0.001\\
$AR-t$/$\rho_{0}$ & & +0.445 & $<$0.001\\
$AR-t$/1+$z$ & & +0.488 & $<$0.001\\
$AR-t$/$Q_{0}$,$\rho_{0}$,1+$z$ & & & & +0.513 & $<$0.001\\
& &\\
$AR-Q_{0}$/$t$ &+0.235 & +0.570 & $\ll$0.001\\
$AR-Q_{0}$/$\rho_{0}$ & & +0.121 & 0.314\\
$AR-Q_{0}$/1+$z$ & & +0.445 & $<$0.001\\
$AR-Q_{0}$/$t$,$\rho_{0}$,1+$z$ & & & & +0.421 & $<$0.001\\
& &\\
$AR-\rho_{0}$/$t$ &+0.225 & +0.380 & 0.001\\
$AR-\rho_{0}$/$Q_{0}$ & & +0.182 & 0.129\\
$AR-\rho_{0}$/1+$z$ & & +0.301 & 0.011\\
$AR-\rho_{0}$/$t$,$Q_{0}$,1+$z$ & & & & +0.209 & 0.085\\
\hline
\end{tabular}
\end{table}
Table~6 shows statistically significant correlations between the axial ratio
and the source's age, as well as the axial ratio and the jet power.
Fitting a surface to the values of $AR$ over the $Q_{0}$--$t$ plane (where
$Q_{0}$ is in watts and $t$ in Myr), we found
\begin{equation}
AR(t,Q_{0} )\propto t^{0.23\pm 0.03}Q_{0}^{0.12\pm 0.02}.
\end{equation}
\noindent
Indeed, our statistical data strongly support the BRW model's implication of a
dependence of $AR$ on $Q_{0}$. A consequence of this effect for the expansion
speed of the cocoon is pointed out in the next subsection.
Using the above relation, we transform the apparent $AR$ values from
Table~1 to a reference jet power of $10^{39}$\,W. The relation between the
transformed axial ratio and age of the sample sources with the regression line
on the time axis is shown in Figure~6.
\begin{figure}[t]
\special{psfile=mach1_f6.ps hoffset=-20 voffset=-230}
\vspace{75mm}
\caption{Plot of the source axial ratio transformed to a constant jet power of
10$^{39}$\,W against the age $t$. The dashed line indicates the linear regression
on the age axis.}
\label{f6}
\end{figure}
This statistical correlation between the cocoon's axial ratio and age
implies a time evolution of the ratio of the pressure in the head
of the jet to the cocoon pressure. Indeed, substitution of Equation (11) into
Equation (3) (for $\beta$=1.5 with its assumed uncertainty of $\pm$0.4) gives:
\begin{equation}
{\cal P}_{\rm hc}(t,Q_{0})\propto t^{0.38\pm 0.06}Q_{0}^{0.21\pm 0.04},
\end{equation}
\noindent
which violates the model assumption of self-similar expansion of the cocoon. The
consequence of this will be analysed in Paper II.
\begin{figure}
\special{psfile=mach1_f7.ps hoffset=-20 voffset=-460}
\vspace{155mm}
\caption{{\bf a)} Cocoon pressure against size of the sample sources; {\bf b)}
the same pressure transformed to the reference size of 1 Mpc and age of 100 Myr
against redshift. The symbols indicating sources within different sets are the
same as in Figures 1 and 2. The dashed line in {\bf a} and {\bf b}
indicates the presumed IGM pressure evolution $p_{\rm IGM}\propto (1+z)^{5}$.}
\label{f7}
\end{figure}
Our statistical analysis confirms the significant anticorrelation between the
cocoon pressure and the size of sources expected from Equation~(2), but reveals
also a significant correlation between this pressure and redshift. Besides, the
size strongly correlates with age and anticorrelates with redshift, so we have
calculated the partial correlation between all these parameters. The Pearson
partial correlation coefficients between $p_{\rm c}$, $D$, $t$, and 1+$z$ are
given in Table~7.
\begin{table}
\caption{The correlation between (log) $p_{\rm c}$ and $D$ or $t$ or 1+$z$ where
the other parameters are kept constant}
\begin{tabular}{@{}lclrlr}
\hline
Correlation & $r_{XY}$ & $r_{XY/U}$ & $P_{XY/U}$\\
& & $r_{XY/V}$ & $P_{XY/V}$\\
& & & & $r_{XY/UV}$ & $P_{XY/UV}$\\
\hline
$p_{\rm c}-D/t$ &$-$0.792 & $-$0.234 & 0.051\\
$p_{\rm c}-D$/1+$z$ & & $-$0.806 & $\ll$0.001\\
$p_{\rm c}-D/t$,1+$z$ & & & & $-$0.533 & $<$0.001\\
&\\
$p_{\rm c}-t/D$ &$-$0.917 & $-$0.772 & $\ll$0.001\\
$p_{\rm c}-t$/1+$z$ & & $-$0.826 & $\ll$0.001\\
$p_{\rm c}-t/D$,1+$z$ & & & & $-$0.467 & 0.001\\
&\\
$p_{\rm c}-$(1+$z)/D$ & +0.785 & +0.733 & $\ll$0.001\\
$p_{\rm c}-$(1+$z)/t$ & & +0.145 & 0.23\\
$p_{\rm c}-$(1+$z)/D,t$ & & & & +0.324 & 0.005\\
\hline
\end{tabular}
\end{table}
Table~7 shows that: (i) The correlation coefficients for the direct ($r_{XY}$) and
partial ($r_{XY/V}$, $r_{XY/UV}$) correlations between the cocoon pressure and its
size, and the cocoon pressure and the
source age are very similar. This is obvious in view of the very high size--age
correlation shown in Table~4. (ii) Both direct correlations are weakened, though
still remain very significant, if the age and redshift (in the $p_{\rm c}-D$
correlation) and the size and redshift (in the $p_{\rm c}-t$ correlation) are kept
constant. Formally, the strongest partial correlation is found between the cocoon
pressure and size (in fact, the cocoon's volume). (iii) The strong direct correlation
between the cocoon pressure and redshift is seriously weakened if the source size
and age are kept constant. This correlation, shown in Figure~7a, is very important
for studies of physical conditions in the intergalactic medium (IGM).
The dashed line indicates the expected electron
pressure in an adiabatically expanding Universe in the form $p_{\rm IGM}=
p^{0}_{\rm IGM}(1+z)^{5}$ with $p^{0}_{\rm IGM}=2\cdot10^{-15}$ N\,m$^{-2}$ (cf.
Subrahmanyan and Saripalli 1993).
Fitting a
surface to the values of $p_{\rm c}$ over the $D-t$ plane we find
\[p_{\rm c}(D,t)\propto D^{-0.30\pm 0.19}t^{-1.78\pm 0.17}.\]
\noindent
Therefore, one can transform the cocoon pressure values to a reference size and
age. The plot of cocoon pressures transformed to $D$=1 Mpc and $t$=100 Myr versus
1+$z$ is shown in Figure~7b. This figure illustrates how $p_{\rm c}$ of the
high-redshift sources would decrease if they had evolved to the above size and
age, and emphasizes that the strong direct $p_{\rm c}-$(1+$z$) correlation in
Figure~7a is, in fact, due to the much stronger correlations $p_{\rm c}-D$ and
$p_{\rm c}-t$ (cf. Table~7). However, this is a purely statistical result, and the
question of the conditions under which distant radio sources would reach a very
large size and old age remains open. The aspect of a
cosmological evolution of the IGM is discussed in Section~5.4.
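The scaling to the reference size and age behind Figure 7b is again a direct application of the surface fit. A minimal sketch (the exponents come from the fit $p_{\rm c}\propto D^{-0.30}t^{-1.78}$ quoted above; the input values are illustrative):

```python
# Scale cocoon pressures to the reference size D = 1 Mpc and age
# t = 100 Myr used for Figure 7b.  The exponents come from the surface
# fit p_c ~ D^-0.30 * t^-1.78 in the text; input values are illustrative.
D_REF_KPC = 1000.0   # 1 Mpc
T_REF_MYR = 100.0

def pc_at_reference(pc_obs, d_kpc, t_myr):
    """Pressure the source would have after evolving to (D_REF, T_REF)
    along the fitted power-law surface."""
    return (pc_obs * (D_REF_KPC / d_kpc) ** (-0.30)
                   * (T_REF_MYR / t_myr) ** (-1.78))
```

By construction, a smaller and younger source is scaled to a lower pressure, which is the visual effect seen when comparing Figures 7a and 7b.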
\section{Discussion of the Results}
\subsection{Influence of Fixed Model Parameters on the Model Predictions}
The basic physical parameters of the radio sources derived in this paper with the
aid of the KDA model, i.e. the jet
power $Q_{0}$, central density $\rho_{0}$, and cocoon pressure $p_{\rm c}$, are
in principle dependent on the assumed central core radius, the exponent in the
external gas distribution, the adiabatic indices of electrons and magnetic field,
as well as on the orientation of the jet axis towards the observer. In application
of the KDA model we have assumed the same values of these parameters for all sample
sources. This assumption can only be valid in our statistical analysis of the
evolutionary trends in the whole FRII-type population but not for individual
sample sources. In particular, we have adopted $a_{0}=10$ kpc and $\beta=1.5$.
Taking a smaller core radius, e.g. $a_{0}=2$ kpc, and keeping $\beta=1.5$ results
in an increase of the model core density $\rho_{0}$ by roughly one order of
magnitude, while the other model parameters remain unchanged. Conversely, a
shallower density gradient, e.g. $\beta=1.1$, lowers $\rho_{0}$ by a factor of
approximately 1.5 to 6. In this case, however, the jet power is increased by a
few percent, and accordingly the pressure and energy density in the cocoon are
changed. Solving the equations in Section~3 for a few sets of the model free
parameters ($a_{0}$, $\beta$, $\theta$) we find that the presented and discussed
correlations between the observational and model parameters are not changed; all
the statistical trends are preserved although the values of the model parameters
(especially $\rho_{0}$) are changed quantitatively.
A relativistic equation of state for the magnetic field does not
change our results significantly, unless the KDA `Case~1' ($\Gamma_{\rm c}=\Gamma_
{\rm B}=4/3$) is adopted. However, this case, i.e. when both the cocoon and the
magnetic field energy have a relativistic equation of state, is unlikely for
our sample sources. Therefore, we argue that the results discussed below are not
significantly biased by selection of a particular set of the model parameters.
\subsection{Cause of Extremal Linear Size}
In view of the KDA and BRW analytical models of dynamical evolution of FRII-type
radio sources, many such sources can evolve into a stage characterized by a
linear size exceeding 1 Mpc. Access to this stage depends on a number of the
model parameters: jet power $Q_{0}$, its Lorentz factor $\gamma_{\rm jet}$, the
adiabatic indices of the cocoon material and magnetic field, $\Gamma_{\rm c}$
and $\Gamma_{\rm B}$, respectively, as well as the core radius $a_{0}$, the external
gas density $\rho_{0}$, and the exponent of its distribution $\beta$. For a given
set of these parameters, the model allows us to determine whether an evolving
source will reach the size of 1 Mpc, and if so, at what age. From this
point of view, {\sl giant} sources should be the oldest ones.
In this paper (Section 4.3) we have confronted the KDA model predictions with
the observational data on {\sl giant}-size and normal-size FRII sources. Our
statistical analysis strongly suggests that there is no single evolutionary
scheme governing the size development. An old age or a low external density
alone is insufficient to assure an extremely large linear extent of a source;
both are necessary, together with a suitable power delivered from the AGN by
highly relativistic jets. The Pearson partial correlation analysis indicates
that the dependence of linear size on each of these three parameters is
statistically significant. Ordering these correlations by decreasing partial
correlation coefficients, we find that the size depends most strongly on age,
then on $Q_{0}$, and next on $\rho_{0}$.
About 83 \% of {\sl giants}
in our sample possess a projected linear size over 1 Mpc owing to statistically
old age, low or moderate density of the external medium, and high enough power
of their jets. The remaining 17 \% (3 sources) are high-luminosity
sources at redshifts $z>0.4\sim 0.5$ and ages from 15 to 25 Myr, which are
typical for normal-size sources. Two of them are quasars. The jet power of these
sources is high enough to compensate for a higher ram pressure in a denser
surrounding environment and higher energy losses during the cocoon expansion.
The jet power of {\sl giants} is not extreme, so several FRII-type sources
with a comparable $Q_{0}$ can potentially achieve a very large size after a
suitably long time.
According to our results (cf. Figure~1a), these potential {\sl giants} should
have $Q_{0}>10^{37.5}$ W and be situated in an environment with $\rho_{0}<10^{-23}$
kg\,m$^{-3}$.
The above scenario is not caused by a selection effect. We show that there are
low-luminosity sources which lie in parts of the $\log Q_{0}-\log\rho_{0}$
plane (Figure~1a) completely avoided
by {\sl giant} sources. They differ from {\sl giants}, and even from normal-size
powerful sources, in having a low jet power $Q_{0}<10^{37.5}$ W and total energy
$E_{\rm tot}<10^{52.5}$ J. Thus we suggest that they never reach the giant size.
It is worth noting that some of them already have an age comparable to that of
typical {\sl giants}, in accordance with the model expectations.
\subsection{Jet Power and Energy Budget}
The ratio of the total energy supplied by the twin jets during the lifetime of a
source, $2Q_{0}t$, to the energy stored in its cocoon, derived from the data
under the assumption of energy equipartition, $U_{\rm eq}=u_{\rm eq}V_{0}$,
allows another test of the dynamical model predictions. The ratio
$2Q_{0}t/U_{\rm eq}$ for the sample sources (given in column 9 of Table~2) is
plotted against the cocoon axial ratio $AR$ in Figure~8. The
uncertainties in both values are marked by error bars on some data points.
The solid curve indicates the model prediction from Equation (8), while the dashed
curve shows the best fit to the data. The observed trend fully corresponds to
the model prediction. However, the derived values of $2Q_{0}t/U_{\rm eq}$, i.e.
the reciprocal of the efficiency with which the kinetic energy of the jets
is converted into radiation, are much higher than $\sim$2, the value usually
assumed in a number of papers.
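Evaluating the ratio $2Q_{0}t/U_{\rm eq}$ itself is straightforward once $Q_{0}$, $t$ and $U_{\rm eq}$ are in hand; the only subtlety is the Myr-to-seconds conversion. A minimal sketch (the sample values are illustrative, not taken from Table 2):

```python
# Evaluate the energy ratio 2*Q0*t/U_eq plotted in Figure 8.  The unit
# conversion (Myr to seconds) is made explicit; the inputs used in the
# usage note below are illustrative, not taken from Table 2.
MYR_IN_S = 1.0e6 * 3.156e7   # seconds per Myr

def energy_ratio(q0_w, t_myr, u_eq_j):
    """Total energy delivered by the twin jets over the source lifetime,
    2*Q0*t, divided by the equipartition energy U_eq in the cocoon."""
    return 2.0 * q0_w * (t_myr * MYR_IN_S) / u_eq_j
```

For instance, $Q_{0}=10^{38}$ W, $t=30$ Myr and $U_{\rm eq}=10^{53}$ J give a ratio of about 1.9, at the low end of the range of 2 to more than 10 found for the sample.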
\begin{figure}
\special{psfile=mach1_f8.ps hscale=80 vscale=80 hoffset=-20 voffset=-230}
\vspace{75mm}
\caption[]{Ratio of the total energy supplied by twin jets and energy stored
in the cocoon against its axial ratio $AR$. The large uncertainties in both
parameters are marked by error bars for a few sources only for clarity. Here the
giant sources are marked with filled circles. The solid curve indicates the model
prediction from Equation (8); the dashed curve shows the best
fit to the weighted data.}
\label{f8}
\end{figure}
In a number of studies of {\sl giant} radio sources (e.g. Parma et al. 1996;
Schoenmakers et al. 1998) the authors followed the approach of Rawlings and Saunders
(1991) and assumed a fraction of the total jet energy wasted for adiabatic expansion
of the cocoon to be about 0.5 and used it to estimate the jet power $Q_{0}$ for
sources with known age (almost always from spectral ageing analysis). In the
KDA model the energy stored in the source (cocoon) is
\[E_{\rm tot}\approx\int\{Q_{0}\,dt-(p_{\rm c}\,d[V_{\rm c}(t)]+
p_{\rm h}\,d[V_{\rm h}(t)])\} \]
\noindent
where $p_{\rm c}\,dV_{\rm c}+p_{\rm h}\,dV_{\rm h}$ is the work done to expand
the cocoon, and $p_{\rm h}$ and $V_{\rm h}$ are the hotspot pressure and volume,
respectively. If $V_{\rm h}$ is neglected, the expansion
work will be $\approx 0.5\,Q_{0}t$; if not, it is dependent on the pressure ratio
${\cal P}_{\rm hc}$ as shown in Section~3.2.
The values of $2Q_{0}t/U_{\rm eq}$ in our sample (Figure~8; Table~2) vary from
about 2 to more than 10 due to the high jet power required for the adiabatic expansion
of the cocoon. The $Q_{0}$ values derived here from the KDA model can be
compared with the relevant values estimated by Wan, Daly and Guerra (2000) [WDG]
for 22 3C sources also included in our sample (21 with $z$$>$0.5 + Cyg\,A).
Recalculating their values for $H_{0}$=50 km\,s$^{-1}$Mpc$^{-1}$, we found the WDG
estimates to be approximately 2.5 times lower than the KDA values. A very similar
ratio (1.7$\div$5.5, depending on the value of ${\cal P}_{\rm hc}$) characterizes
the jet powers of the 9 3C sources in common with our sample that were determined by
Rawlings and Saunders (1991). The explanation of this ratio is straightforward.
The values of $Q_{0}$ estimated in those papers were based on the ram pressure
considerations in the overpressured source model A of Scheuer (1974) and its
further modifications (e.g. Begelman and Cioffi 1989; Loken et al. 1992; Nath 1995).
They are all self-similar models of the Carvalho and O'Dea type I which describe
the source dynamics only. If the source energetics and especially the energy losses
are properly taken into account (the type III models; e.g. KDA, BRW), the
significantly higher values of $Q_{0}$ are implied.
The data in our sample show a
dependence of the energy ratio $2Q_{0}t/U_{\rm eq}$ on the cocoon axial ratio
$AR$, and imply an increase with time of the fraction of jet energy spent on the
adiabatic expansion of the cocoon volume. However, the data also suggest that for
a constant $AR$ (i.e. a given geometry of the cocoon), {\sl giants} tend to have
a smaller ratio $2Q_{0}t/U_{\rm eq}$ than normal-size sources, which means that
less jet energy is converted into adiabatic expansion of the cocoon. This
may indicate a lower pressure of the external medium surrounding the {\sl giant}
sources than that around smaller ones.
\subsection{External Pressure of the Surrounding Medium and its Evolution}
A non-relativistic uniform intergalactic medium (IGM) in thermal equilibrium
filling an adiabatically expanding Universe should have an electron pressure
evolving with redshift $p_{\rm IGM}=p^{0}_{\rm IGM}(1+z)^{5}$. The advancing
hotspots of FRII-type radio sources are probably confined by ram pressure of the
IGM. {\sl Giant} sources, with their lobes extended far outside typical galaxy
halo, have the lowest values of $p_{\rm c}$ and may be useful for determining the
upper limit of $p_{\rm IGM}$. Using a small sample of {\sl giant} sources,
Subrahmanyan and Saripalli (1993) limited its local value to
$p^{0}_{\rm IGM}\approx(0.5\div 2)\cdot10^{-15}$ N\,m$^{-2}$. A
further study was undertaken by Cotter (1998) who, using a larger sample of 7C
giants with sources out to redshift of $\sim$0.9, confirmed a strong dependence
of the lowest $p_{\rm c}$ on redshift in agreement with a $(1+z)^{5}$ relation.
This observational result has been critically discussed by Schoenmakers et al.
(2000), who have considered possible selection effects in Cotter's analysis
(including the Malmquist bias),
and concluded that there was no evidence in their own sample for a cosmological
evolution of $p_{\rm IGM}$. However, they also state that this hypothesis
cannot be rejected until some low-pressure high-redshift sources are found.
In all the above analyses the age of the sources was not considered. The very
significant correlation between $p_{\rm c}$ and $D$ or $t$, shown in Section~4.3,
strongly suggests that the intrinsic dependences of size on age, as well as of
age on redshift, and not the Malmquist bias, are mainly responsible for the
apparent correlation between $p_{\rm c}$ and 1+$z$. Nevertheless, we agree with
Schoenmakers et al.'s conclusion that until {\sl giant} sources with internal
lobe pressures $p_{\rm c}<2\cdot10^{-15}$ N\,m$^{-2}$ at redshifts of at least
$0.6\div 0.8$ are discovered, the IGM pressure evolution in the form
$p_{\rm IGM}\propto (1+z)^{5}$ cannot be rejected.
Most {\sl giants} in our sample, except four high-redshift ones, reveal the
lowest pressures in their cocoons. Sharing Subrahmanyan and Saripalli's arguments,
we can expect those cocoon pressures to be
indicative of an upper limit on the present-day external pressure of the IGM,
$p_{\rm IGM}^{0}$. Taking into account the lowest values of $p_{\rm c}$ (cf.
Figure~7a), we found $p_{\rm IGM}^{0}<2\cdot10^{-15}$ N\,m$^{-2}$, in
accordance with their value.
It is worth emphasizing that the above results are obtained from the
analytical model assuming energy equipartition in the initial ratio of the energy
densities of the magnetic field and the particles (cf. Section~3.1). This may
not be the case in every part of the source (cocoon). Hardcastle and Worrall (2000)
estimated gas pressures in the X-ray-emitting medium around normal-size 3CRR
FRII radio galaxies and quasars, and found that, with few exceptions, the
minimum pressures in their lobes, determined under equipartition conditions,
were well below the environment pressures measured from old ROSAT observations.
Therefore, they have argued that there must be an additional contribution to the
internal pressure in lobes of those sources likely including pressure from protons,
magnetic fields exceeding their minimum-energy values, or non-uniform filling
factors.
Nevertheless, the diffuse lobes of {\sl giants}, extending farther from a host
galaxy than the typical radius of high-density X-ray-emitting gas,
may be in equilibrium with an ambient medium the emissivity of which is not
directly detectable.
\section{Conclusions}
In this paper we confront the analytical KDA model predictions with the
observational data for `giant' and normal-size FRII-type radio sources.
From our analysis we conclude as follows:
(1) {\sl Giant} sources do not form a separate class of radio sources, and
do not reach their extremal sizes exclusively due to some exceptional physical
conditions of the external medium. The size depends, in order of decreasing
partial correlation coefficients, on age, then on the jet power $Q_{0}$, and
next on the central core density $\rho_{0}$.
(2) {\sl Giants} possess the lowest equipartition magnetic field strengths and
energy densities in their cocoons, making their detection in synchrotron
emission difficult. However, their accumulated total energy is the highest
among all sources and exceeds $3\cdot10^{52}$ J.
(3) Our data confirm the conclusion drawn by Blundell et al. (1999) that
throughout the lifetime of an individual source its axial ratio can steadily
increase, thus its expansion cannot be self-similar all the time. A
self-similar expansion seems to be feasible if the power supplied by the jets
is a few orders of magnitude above the minimum-energy value. In other cases
the expansion can only initially be self-similar; a departure from
self-similarity for large and old sources is justified by observations of
{\sl giant} sources.
(4) The apparent increase with redshift of the lowest cocoon pressures (observed
in the largest sources) is mainly caused by the intrinsic dependence of their
age on redshift, and dominates over any bias from possible selection effects.
However, a
cosmological evolution of the IGM cannot be rejected until {\sl giant} sources
with internal pressures in their lobes less than $2\cdot10^{-15}$ N\,m$^{-2}$
are discovered at high redshifts.
\section{Acknowledgements}
We thank Dr. C. R. Kaiser for explanations of the integration procedures used in
the KDA paper.
\section{Introduction}
Density functional theory (DFT) has nowadays become the standard tool for the
description of ground-state properties of systems as different as atoms,
molecules, clusters, or bulk materials. Part of the success stems from the fact
that in DFT electron correlation is in principle covered exactly at the
level of an effective one-particle Hamiltonian. Difficulties arise, however,
when the orbital energies obtained from the solution of the Kohn-Sham (KS)
eigenvalue problem are interpreted as approximate quasiparticle energies,
i.e., the energies associated with addition or removal of an electron. A well
known example is the severe underestimation of the band gap in insulating
solids or semiconductor crystals \cite{bec92I}. The same problem is also
present in molecules. Although it has been shown that Koopmans' theorem also
holds for the highest occupied molecular orbital (HOMO) in DFT
\cite{alm85}, ionization potentials come out too small in practical
calculations. This fact has been traced back to the wrong asymptotic behavior
of common approximate exchange-correlation (XC) functionals like the local
density approximation (LDA)
\cite{lee94}. However, the KS gap can be shown to represent a first-order
approximation to optical excitation energies \cite{goe96}, and thus the KS gap
will be different from the quasiparticle gap even if the exact XC functional
is used.
As an alternative to DFT, many body perturbation theory in the approximation
of Hedin \cite{hed65} has been extremely successful in the prediction of
quasiparticle spectra. In this so-called GW method the description of the
electron-electron interaction is approached in a different way than in DFT. While
the exchange energy is calculated exactly, as in Hartree-Fock theory, correlation is accounted for by an
energy-dependent dielectric function which screens the Coulomb
free and provides asymptotically correct
potentials. Consequently, the band gap problem of the DFT is absent in the GW
and also ionization potentials and electron affinities of molecules are
computed in good accord with experiment \cite{roh98,roh00,oht01,ish04}.
In conjunction with the solution of the Bethe-Salpeter (BS) equation
\cite{noz64,sha66,str84}, the GW approximation can also be used to calculate
charge-neutral quasiparticle excitations, i.e., optical spectra. In this
context, the
time-dependent generalization of DFT \cite{gro90}, which in principle
provides the same information as GW+BS has in practice been found to give
inferior results for the absorption spectrum of solids \cite{oni02}. Also for
molecules, time-dependent DFT fails in the description of charge-transfer excitations
\cite{dre03}, which was attributed to the locality of common XC-functionals
\cite{wan04,taw04,gri04}. Again, GW+BS which contains the correct long range
electron-hole interaction should be able to remedy the problem.
Another example where DFT orbital energies are widely used but lead to
problematic results is given by transport calculations in molecular
devices. Here the calculated currents typically differ by
orders of magnitude from the experimentally found values
\cite{ree97,div00,pec04}. Since the current-voltage characteristics of such systems
depend critically on the HOMO-LUMO gap, a major improvement is expected
if GW quasiparticle energies were used instead of DFT orbital energies.
It is clear from the foregoing discussion that, although the GW approximation
was developed in the context of solid state theory, its application to
molecular systems is becoming more and more important. In fact,
implementations for systems with translational symmetry using plane wave basis
sets can directly be applied to finite systems when very large super-cells are
used. Nonetheless, the use of localized atomic-like basis sets is clearly more
adapted to the problem, and such implementations have been used quite
successfully in recent years \cite{roh95,roh96,roh98,roh00}. But even with
this improvement, the numerical complexity of the GW equations limits a first
principles evaluation to rather small system sizes of tens of
atoms. So applications like
transport calculations for molecules, where a sizable amount of the atoms
comprising the leads need to be included, or molecular dynamics in the excited
state are currently not feasible. It
would therefore be desirable to have an approximate GW scheme which
nevertheless captures the essential physics of the underlying theory. The
purpose of this paper is to propose such a scheme.
In the past years a number of different approximated GW methods have been
introduced and successfully applied
\cite{hed65,hyb86,bec92,pal95,ste84,bec88,gu94,del97,del00,fur02}. The main
difference to this earlier work is that we focus on molecular applications
with a real-space implementation. Furthermore, we try to avoid empirical
parameters in order to achieve higher transferability.
Just as GW calculations usually employ DFT energies and wavefunctions as
zeroth-order approximations to the quasiparticle quantities, our approach is based on
the {\em Density Functional based Tight-Binding} (DFTB) method
\cite{por95,els98}, which itself is an approximation to DFT. The DFTB scheme
has been shown to provide reliable results for a variety of system classes
ranging from molecules to solids at a highly reduced numerical cost compared
to DFT calculations. In section \ref{meth} the DFTB method is briefly
introduced together with a detailed description of the approximations in the
various quantities involved in the GW formalism. The accuracy and main
shortcomings of our approach are then examined in section \ref{appl}, where it
is applied to a prototype series of $\pi$-bonding molecules, the polyacenes.
\section{Methodology}
\label{meth}
The GW method has been extensively discussed in the literature and several
reviews are available \cite{ary98,aul99,far99,hed99,oni02}. The main goal is
to solve the Dyson equation:
\begin{equation}
\label{dyson}
\left( H_0 + \Sigma(\epsilon^{QP}_i)\right) \ket{\psi_i^{QP}} =
\epsilon^{QP}_i \ket{\psi_i^{QP}},
\end{equation}
for the quasiparticle energies $\epsilon^{QP}_i$ and wavefunctions
$\ket{\psi_i^{QP}}$. Here $H_0$ is the Hartree Hamiltonian and the so-called
self-energy $\Sigma$ is a nonlocal and energy dependent operator, which
accounts for exchange and correlation effects. It thus can be seen as a
replacement for the local exchange-correlation potential $v_{xc}$ in the DFT
Kohn-Sham (KS) equations. In the GW approximation of Hedin, the
self-energy is given as a product of the single-particle Green's function $G$ and
the screened Coulomb interaction $W$. As these quantities, as well as the
Hartree Hamiltonian, depend on the quasiparticle wavefunctions, the Dyson
equation [Eq.~(\ref{dyson})] has to be iterated until self-consistency is
achieved. However, since the DFT one-particle wavefunctions are usually very
similar to the final quasiparticle ones, and moreover self-consistency does not
necessarily improve the result \cite{sch98}, Eq.~(\ref{dyson}) may be simplified to
\begin{equation}
\label{qpe}
\epsilon_i^{QP} = \epsilon_i^{DFT} +
Z_i\langle \psi_i|\Sigma(\epsilon_i^{DFT})-v_{xc}|\psi_i\rangle,
\end{equation}
where $\psi_i$ are KS orbitals, non-diagonal elements of the Dyson Hamiltonian in the basis of DFT
states are neglected and the energy dependence of the self-energy has
partially been accounted for by the renormalization constant $Z_i$:
\begin{equation}
\label{reno}
Z_i = \left(1- \left.\frac{\partial \Sigma(\omega)}{ \partial
\omega}\right|_{\epsilon_i^{DFT}}\right)^{-1}.
\end{equation}
In our approach Eq.~(\ref{qpe}) is now subjected to further approximations which
are presented separately for each term in the following.
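As an illustration of Eqs.~(\ref{qpe}) and (\ref{reno}), the perturbative correction can be sketched in a few lines; all matrix elements below are hypothetical numbers in eV, not taken from any calculation in this work:

```python
def quasiparticle_energy(eps_dft, sigma, dsigma_domega, vxc):
    """First-order quasiparticle correction:
    eps_QP = eps_DFT + Z * (<Sigma(eps_DFT)> - <v_xc>),
    with the renormalization constant Z = 1 / (1 - dSigma/domega)."""
    Z = 1.0 / (1.0 - dsigma_domega)
    return eps_dft + Z * (sigma - vxc)

# hypothetical diagonal matrix elements (eV) for one orbital
eps_qp = quasiparticle_energy(-6.59, -15.02, -0.18, -13.03)
```

A negative slope of $\Sigma(\omega)$ gives $Z<1$ and damps the bare correction $\Sigma-v_{xc}$.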
\subsection{The DFT orbital energies $\epsilon_i^{DFT}$}
\label{DFTBeps}
As already mentioned, the Kohn-Sham energies $\epsilon_i^{DFT}$ and orbitals are
obtained from the DFTB method, which has been presented in detail earlier (for
a review see Ref.~\cite{fra02}). Here we only describe the method to the
extent necessary to motivate the remaining approximations in this work. In the
DFTB, the Kohn-Sham states $\psi_i^{\text{DFTB}}$ are expanded in a linear
combination of atom centered orbitals $\phi_{\mu}$:
\begin{equation}
\label{DFTBbase}
\psi_i({\bf r}) = \sum_{\mu}c_{\mu i}\phi_{\mu}({\bf r}-{\bf R}_A),
\end{equation}
which are obtained from a preceding DFT calculation on neutral atoms. Here
$\mu:=\{Alm\}$ is a compound index indicating the atom on which the
basis function is centered, its angular momentum $l$ and magnetic quantum
number $m$. Later, quantities which depend only on $A$ and $l$ will also
appear. These will be denoted with a corresponding index $\bar{\mu}:=\{Al\}$
throughout the paper.
Since atomic orbitals are usually too long ranged to be used directly in a molecular
calculation, the atomic DFT Hamiltonian is augmented with a confining square
potential to compress the wavefunction outside of a given radius $r_0$ (usually
twice the covalent radius of the element), while ensuring the desired cusp
conditions inside \cite{por95}. From the atomic valence states a minimal basis of $s$ and
$p$ orbitals is then chosen, although also $d$-orbitals are included when
necessary, e.g.~for second row elements \cite{nie01}. With the help of the expansion
(\ref{DFTBbase}) the Kohn-Sham equations of DFT can be written:
\begin{subequations}
\begin{gather}
\label{kse}
\sum_\nu c_{\nu i} (H_{\mu\nu} - \epsilon_i S_{\mu\nu}) =
0 , \;\;\; \forall\;\mu ,\:i\;\\
S_{\mu\nu} = \langle\phi_\mu|\phi_\nu\rangle\label{smunu} \\
H_{\mu\nu} = H_{\mu\nu}^0+H_{\mu\nu}^{SCC},\label{hmunu}
\end{gather}
\end{subequations}
where the overlap matrix $S_{\mu\nu}$ has been introduced and the Hamiltonian
is divided in two parts. The first part $H_{\mu\nu}^0$ is approximated as
follows:
\begin{eqnarray}
\label{DFTBme}
H_{\mu \nu}^0 =
\left\{
\begin{array}{c@{\quad :\quad}l}
\epsilon_{\mu}^{\text{free atom}} & \mu = \nu \\
\langle\phi_{\mu} ( { {\bf r} }) | H_{DFT}[\rho_{A}^0+\rho_{B}^0] |\phi_{\nu} ({\bf r})
\rangle &\mu \in A,\; \nu \in B \\
0 & \text{otherwise}
\end{array}
\right.
\end{eqnarray}
The KS Hamiltonian in Eq.~(\ref{DFTBme}) contains as usual the kinetic
energy, the electron-nuclei attraction and the Hartree as well as
exchange-correlation potential and depends only on the atomic densities of
atoms $A$ and $B$. This means that, besides the crystal field terms, all
three-center terms are also neglected. The onsite elements are given as atomic
orbital energies obtained from a DFT calculation without confining potential
to ensure the right dissociation limit. The integrals in Eq.~(\ref{DFTBme})
are numerically evaluated and tabulated for varying distance between atoms $A$
and $B$.
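The secular equations (\ref{kse}) constitute a generalized eigenvalue problem. A minimal sketch, assuming a hypothetical two-orbital model with given $H$ and $S$ (not actual DFTB matrix elements), reads:

```python
import numpy as np

# Toy two-orbital instance of the secular equations:
# sum_nu c_{nu i} (H_{mu nu} - eps_i S_{mu nu}) = 0.
H = np.array([[-13.6, -9.0],
              [ -9.0, -13.6]])   # hypothetical Hamiltonian (eV)
S = np.array([[1.0, 0.4],
              [0.4, 1.0]])       # overlap of the non-orthogonal AOs

# Loewdin orthogonalization X = S^(-1/2) turns it into an ordinary
# symmetric eigenvalue problem.
w, U = np.linalg.eigh(S)
X = U @ np.diag(w ** -0.5) @ U.T
eps, Cp = np.linalg.eigh(X @ H @ X)
C = X @ Cp                        # columns solve H c = eps S c

# the eigenvectors come out S-orthonormal
assert np.allclose(C.T @ S @ C, np.eye(2))
```

For this symmetric toy model the two levels are $(H_{11}\pm H_{12})/(1\pm S_{12})$, the familiar bonding/antibonding pair of a non-orthogonal dimer.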
The second part of the Hamiltonian (\ref{hmunu}) corrects for the fact that the
molecular density differs from a simple superposition of atomic
densities. In order to estimate this difference, spherical averages over
basis functions belonging to one angular momentum shell are built
\begin{equation}
\label{sbas}
F_{Al}({\bf r}) = \frac{1}{2l+1} \sum_{m=-l}^{m=l} \left| \phi_{Alm}({\bf r}) \right|^2,
\end{equation}
and used in a Mulliken type approximation
\begin{equation}
\label{mul}
\phi_\mu({\bf r}) \phi_\nu({\bf r}) \approx \frac{1}{2} S_{\mu\nu} \left(
F_{\bar{\mu}}({\bf r}) + F_{\bar{\nu}}({\bf r}) \right),
\end{equation}
to represent the molecular density. The latter is then constructed from point
charges
\begin{gather}
\label{rhoq}
\rho({\bf r}) = \sum_i\abs{\psi_i({\bf r})}^2 \approx \sum_{\bar{\mu}} q_{\bar{\mu}}
F_{\bar{\mu}}({\bf r}) \nonumber \\ \text{with\quad}
q_{\bar{\mu}} =
\sum_{m=-l}^{l}\sum_{\nu i}
c_{\mu i}
c_{\nu i}
S_{\mu\nu};
\end{gather}
an expansion which, despite its simplicity, takes the different spatial
localization of, e.g., $s$ and $p$ orbitals into account. Based on these
considerations, the difference between the true molecular density and
superimposed atomic densities can be estimated with net Mulliken charges
$\Delta q_{\bar{\mu}} = q_{\bar{\mu}} - q_{\bar{\mu}}^{\text{atom}}$ and leads
to the correction term \cite{rem1}:
\begin{equation}
H_{\mu\nu}^{SCC}= \frac{1}{2}
S_{\mu\nu}\sum_{\bar{\delta}}(\gamma_{\bar{\mu}\bar{\delta}}+\gamma_{\bar{\nu}\bar{\delta}})\Delta
q_{\bar{\delta}},
\end{equation}
as shown in more detail in Ref.~\cite{els98}.
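The shell-resolved Mulliken populations of Eq.~(\ref{rhoq}) can be sketched as follows; the inputs are hypothetical, and `shell_of_mu` is an assumed index map from basis functions $\mu$ to $\{Al\}$ shells:

```python
import numpy as np

def mulliken_shell_charges(C_occ, S, shell_of_mu):
    """Shell-resolved Mulliken populations:
    q_shell = sum_{mu in shell} sum_{nu, i occ} c_{mu i} c_{nu i} S_{mu nu}."""
    P = C_occ @ C_occ.T            # density matrix in the AO basis
    q_mu = np.sum(P * S, axis=1)   # gross population per basis function
    q = np.zeros(max(shell_of_mu) + 1)
    for mu, sh in enumerate(shell_of_mu):
        q[sh] += q_mu[mu]
    return q
```

By construction the populations sum to the number of occupied orbitals, $\mathrm{Tr}(PS)$.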
The term $\gamma$ describes the interaction of two electrons in the orbitals
$\bar{\mu}$ on atom $A$ and $\bar{\nu}$ on atom $B$, including the effects of
exchange and correlation:
\begin{subequations}
\begin{eqnarray}
\label{gamma}
\gamma_{\bar{\mu}\bar{\nu}} &=& \int\!\!\int F_{\bar{\mu}}({\bf r})\left( \frac{1}{|{\bf r}-{\bf r'}|}
+\frac{\delta v_{xc}[{\rho({\bf r})}]}{\delta \rho({\bf r'})} \right)
F_{\bar{\nu}}({\bf r'}) \,d{\bf r} d{\bf r'}\nonumber\\\\
&\approx& \gamma_{\bar{\mu}\bar{\nu}}(\abs{{\bf R}_A-{\bf R}_B},U^H_{\bar{\mu}},U^H_{\bar{\nu}}),
\end{eqnarray}
\end{subequations}
and is approximated by considering two limiting cases. For large distances
between the two atoms, the integral (\ref{gamma}) simplifies to a pure Coulomb
interaction of two point charges, since the $v_{xc}$ contribution dies off
rapidly. For short distances on the other hand, Eq.~(\ref{gamma}) becomes an
atomic integral $U^H$, which can easily be calculated numerically for each
element. From these limiting cases a simple interpolation formula was derived
in Ref.~\cite{els98}, which is a function of the atomic parameters $U^H$
and the interatomic distance only. Since the Mulliken net charges $\Delta
q_{\bar{\delta}}$ depend on the molecular orbital coefficients $c_{\mu i}$,
Eq.~(\ref{kse}) has to be iterated until self-consistency. As a result the
orbital energies $\epsilon_i^{\text{DFTB}}$ needed in Eq.~(\ref{qpe}) are
obtained.
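The actual interpolation formula of Ref.~\cite{els98} is more elaborate than what follows; the Mataga-Nishimoto-type expression below is only an assumed stand-in that reproduces the same two limits discussed in the text:

```python
# Illustration only: NOT the interpolation of Ref. [els98].  This
# Mataga-Nishimoto-type form merely reproduces the same two limits:
# gamma(0) = U (onsite integral) and gamma(R) -> 1/R (point charges).
HARTREE_EV = 27.2114

def gamma_interp(R, U_a, U_b):
    """R in bohr, U parameters in hartree; returns gamma in hartree."""
    U_bar = 0.5 * (U_a + U_b)
    return 1.0 / (R + 1.0 / U_bar)

U_H = 11.06 / HARTREE_EV           # hydrogen U^H_0 from Tab. I
assert abs(gamma_interp(0.0, U_H, U_H) - U_H) < 1e-12
assert abs(gamma_interp(50.0, U_H, U_H) - 1.0 / 50.0) < 1e-3
```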
\subsection{The self-energy $\Sigma_i(\epsilon)$}
\label{self}
We calculate the self-energy in the GW approximation by:
\begin{equation}
\label{seli}
\Sigma({\bf r},{\bf r'},\epsilon) = \frac{i}{2\pi}\int e^{i\omega 0^+}
G_0({\bf r},{\bf r'},\epsilon-\omega)\; W({\bf r},{\bf r'},\omega)\;d\omega,
\end{equation}
where $G_0$ is the single-particle Green's function built from DFTB
wavefunctions and $W=\epsilon^{-1}v$ is the screened Coulomb interaction,
while $v$ is the bare one. For
the following it is beneficial to divide the self-energy into two parts as
$\Sigma = iG_0v + iG_0(\epsilon^{-1}-1)v$. The latter term, denoted $\Sigma^c$,
is energy dependent and describes dynamical correlation effects, while the
former term, $\Sigma^x$, provides the major part of the self-energy. For
$\Sigma^x$ the frequency integration in Eq.~(\ref{seli}) can be carried out
easily and yields in the KS basis the usual Hartree-Fock exchange energy for
orbital $i$:
\begin{equation}
\label{x}
\Sigma^x_i = -\sum_j^{occ} \int\!\!\int
\frac{\psi_i({\bf r})\psi_j({\bf r})\psi_i({\bf r'})\psi_j({\bf r'})}{|{\bf r}-{\bf r'}|} \,d{\bf r} d{\bf r'}.
\end{equation}
Since in contrast to empirical tight-binding schemes the basis functions
$\phi_\mu$ are available in the DFTB method, one could in principle calculate
Eq.~(\ref{x}) directly from the wavefunctions. In this way, however, the method would
scale like first principles schemes with $N^4$, where $N$ is the number of
basis functions. We therefore seek an approximate solution and
note that after expansion of the KS states in atomic orbitals,
Eq.~(\ref{x}) contains products of basis functions which are in
general located
on different atomic centers. An important simplification can thus be achieved,
when the Mulliken
approximation [Eq.~(\ref{mul})] is applied to the integral. Introducing the
following notation for the matrix elements of a general two-point function in
the basis of squared and spherically averaged DFTB atomic orbitals:
\begin{equation}
\label{twop}
[f]_{\bar{\mu}\bar{\nu}} =
\int\!\!\int F_{\bar{\mu}}({\bf r}) f({\bf r},{\bf r'}) F_{\bar{\nu}}({\bf r'}) \,d{\bf r} d{\bf r'},
\end{equation}
we then arrive at the following simplified expression for $ \Sigma^x_i $:
\begin{equation}
\label{sigx}
\Sigma^x_i = -\sum_j^{occ} \sum_{\bar{\mu}\bar{\nu}}
q^{ij}_{\bar{\mu}} \left[v\right]_{\bar{\mu}\bar{\nu}} q^{ij}_{\bar{\nu}}.
\end{equation}
Here the $ q^{ij}_{\bar{\mu}}$ are generalized Mulliken charges
\begin{equation}
\label{qij}
q^{ij}_{\bar{\mu}} =
\frac{1}{2}
\sum_{m=-l}^{l} \sum_\nu
\left( c_{\mu i}
c_{\nu j}
S_{\mu\nu} +
c_{\nu i}
c_{\mu j} S_{\nu\mu} \right),
\end{equation}
which provide a point charge representation of the overlap between two
molecular orbitals $i$ and $j$. The important observation is now that the
matrix of the Coulomb interaction is equal to the definition of the
$\gamma$-functional in Eq.~(\ref{gamma}), when the contributions stemming from
the XC functional are removed. In other words, the functional form of
$\gamma$ can also be used for $[v]$, if the atomic parameters $U^H$ are
replaced by the parameters $U^{ee}$ which incorporate only the classical
Coulomb interaction. We calculate these electron repulsion integrals directly
from the DFTB basis functions using the algorithms presented in
Ref.~\cite{gus02}. The parameters for each angular momentum are set to
an average over the integrals for different combinations of the magnetic
quantum numbers belonging to that shell.
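Under these approximations, Eqs.~(\ref{sigx}) and (\ref{qij}) can be sketched as follows; the inputs are hypothetical, and the shell-resolved Coulomb matrix `v_shell` built from the $U^{ee}$ parameters is assumed given (exchange contributes with an overall minus sign):

```python
import numpy as np

def transition_charges(C, S, shell_of_mu, i, j):
    """Generalized Mulliken (transition) charges q^{ij} between MOs i and j."""
    ci, cj = C[:, i], C[:, j]
    q_mu = 0.5 * (ci * (S @ cj) + cj * (S @ ci))
    q = np.zeros(max(shell_of_mu) + 1)
    for mu, sh in enumerate(shell_of_mu):
        q[sh] += q_mu[mu]
    return q

def sigma_x(C, S, v_shell, shell_of_mu, i, n_occ):
    """Monopole approximation to the exchange self-energy
    (onsite corrections omitted); exchange enters with a minus sign."""
    sx = 0.0
    for j in range(n_occ):
        q = transition_charges(C, S, shell_of_mu, i, j)
        sx -= q @ v_shell @ q
    return sx
```

The diagonal charges $q^{ii}$ reduce to the ordinary Mulliken populations, so this routine contains the ground-state charge analysis as a special case.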
The main drawback of the Mulliken approximation in Eq.~(\ref{mul}) is that
onsite integrals of the exchange type are completely neglected. These,
however, contribute around 10\% to the final exchange energy. Similar to the
procedure in the quantum chemical INDO approach \cite{rid73}, we therefore
include all non-vanishing onsite integrals, leading to the following final
form for $ \Sigma^x_i $ \cite{rem2}:
\begin{gather}
\label{sigxf}
\Sigma^x_i = -\sum_j^{occ} \sum_{\bar{\mu}\bar{\nu}} q^{ij}_{\bar{\mu}}
\left[v\right]_{\bar{\mu}\bar{\nu}} q^{ij}_{\bar{\nu}} \nonumber\\- \sum_A
\sum_{\mu,\nu \in A}^{\mu\ne\nu} \left(
c_{\mu i}^2 c_{\nu j}^2 + c_{\mu i} c_{\nu j} c_{\nu i} c_{\mu j} \right)
(\phi_\mu \phi_\nu|\phi_\mu \phi_\nu).
\end{gather}
While in the INDO approach the necessary integrals are taken as empirical
fitting parameters, we compute them from the atomic basis functions. More
precisely, the parameters are calculated from an uncompressed wavefunction in
order to be consistent with the onsite definition of the DFTB Hamiltonian
matrix elements. The values used in this study are given in
Tab.~\ref{tab_par} together with the $U^H$ and $U^{ee}$ parameters.
\begin{table}
\caption{The atomic electron-electron interaction integrals $U^H_l$ and
$U^{ee}_l$, as well as the exchange integrals $(\phi_{lm}
\phi_{l'm'}|\phi_{lm} \phi_{l'm'})$ used in this study. Results are given for
free and compressed atomic basis functions, as defined by the confinement
radius $r_0$ (see Sec.~\ref{DFTBeps} and Ref.~\cite{por95}). The
same compression is used in the calculation of the Hamiltonian and overlap
matrix elements. \label{tab_par}}
\begin{ruledtabular}
\begin{tabular}{lccc}
Element & Parameter & $r_0$ [$a.u.$] & Value [eV]\\\colrule
Hydrogen&&&\\
& $U^H_0$ & $\infty$ & 11.06 \\
& $U^{ee}_0$ & $\infty$ & 15.39 \\
& $U^{ee}_0$ & 3.0 & 21.36 \\
Carbon&&&\\
& $U^H_0$ & $\infty$ & 10.81 \\
& $U^H_1$ & $\infty$ & 10.81 \\
& $U^{ee}_0$ &$\infty$ & 15.66 \\
& $U^{ee}_1$ & $\infty$ & 14.15 \\
& $U^{ee}_0$ & 2.7 & 17.98 \\
& $U^{ee}_1$ & 2.7 & 18.72 \\
& $(\phi_{00} \phi_{1m'}|\phi_{00} \phi_{1m'})$ & $\infty$ &3.01 \\
& $(\phi_{1m} \phi_{1m'}|\phi_{1m} \phi_{1m'})$ & $\infty$ &0.75
\end{tabular}
\end{ruledtabular}
\end{table}
Let us now turn to the correlation contribution of the self-energy $\Sigma^c$,
which is much harder to evaluate, since it amounts to a multi-step
procedure. First we construct matrix elements of the electronic polarizability
in the random-phase approximation according to:
\begin{gather}
\label{pol}
[P(\omega)]_{\bar{\mu}\bar{\nu}} = 2 \sum_k^{occ} \sum_l^{virt}
\left(\sum_{\bar{\alpha}} \tilde{S}_{\bar{\mu}\bar{\alpha}}
q^{kl}_{\bar{\alpha}} \right) \left(\sum_{\bar{\beta}} q^{kl}_{\bar{\beta}}
\tilde{S}_{\bar{\beta}\bar{\nu}} \right) \times \\ \left[
\frac{1}{\epsilon^{\text{\tiny DFTB}}_k-\epsilon^{\text{\tiny DFTB}}_l
-\omega +i0^+} + \frac{1}{\epsilon^{\text{\tiny
DFTB}}_k-\epsilon^{\text{\tiny DFTB}}_l + \omega +i0^+} \right],\nonumber
\end{gather}
where we again used the Mulliken approximation of Eq.~(\ref{mul}) and
introduced the overlap matrix of spherically averaged DFTB basis functions $
\tilde{S}_{\bar{\mu}\bar{\nu}}=\int F_{\bar{\mu}}({\bf r}) F_{\bar{\nu}}({\bf r}) d{\bf r}$,
not to be confused with the overlap appearing in the KS equations
(\ref{smunu}).
The quantity $\tilde{S}$ never needs to be constructed, since it appears only
in intermediate quantities and drops out of the final equation for the
screened Coulomb interaction we are aiming at.
In a next step we obtain the dielectric function in matrix notation as:
\begin{equation}
\label{eps}
[\epsilon(\omega)] = \tilde{S} - [v]\, \tilde{S}^{-1}\, [P(\omega)].
\end{equation}
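Equations (\ref{pol}) and (\ref{eps}) can be condensed into a short sketch; the transition charges `q_kl` are hypothetical inputs, and a small broadening `eta` regularizes the poles:

```python
import numpy as np

def dielectric_matrix(omega, eps_occ, eps_virt, q_kl, St, v, eta=1e-3):
    """[eps(omega)] = S~ - [v] S~^{-1} [P(omega)], with the RPA
    polarizability P built from transition charges q_kl[k, l, :]."""
    n = St.shape[0]
    P = np.zeros((n, n), dtype=complex)
    for k, ek in enumerate(eps_occ):
        for l, el in enumerate(eps_virt):
            # the factor 2 and the two resonant/antiresonant poles of Eq. (pol)
            d = 2.0 * (1.0 / (ek - el - omega + 1j * eta)
                       + 1.0 / (ek - el + omega + 1j * eta))
            qt = St @ q_kl[k, l]
            P += d * np.outer(qt, qt)
    return St - v @ np.linalg.solve(St, P)
```

In the static limit the (1,1) element exceeds its bare value, reflecting the usual screening enhancement.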
For systems with translational symmetry the dielectric matrix is hermitian in
reciprocal space. This fact is used in plasmon-pole models
\cite{hyb86,lin88,god89} to simplify the frequency integration in
Eq.~(\ref{seli}), which is numerically demanding due to the complicated pole
structure of $\epsilon^{-1}$ along the real axis. In these models the inverse
of the dielectric matrix is diagonalized and the eigenvalues are assumed to be
a simple function of the frequency, while the eigenvectors are frequency
independent. Free parameters of the model are either obtained from sum rules
or by diagonalizing $\epsilon^{-1}$ at different test frequencies. It is then
easy to perform the frequency integration analytically to obtain the
self-energy.
However, in the present real-space approach the inverse dielectric matrix is
not symmetric. In Ref.~\cite{roh95} this problem was circumvented by
introducing an auxiliary symmetrized dielectric matrix, while we proceed by
noting that the screened Coulomb interaction $W$:
\begin{equation}
\label{W}
[W(\omega)] = \tilde{S}\,[\epsilon(\omega)]^{-1}\,[v],
\end{equation}
has the desired property of being symmetric. Applying the plasmon-pole
approximation to $[W-v]$, we finally arrive at the following expression for the
correlation contribution to the self-energy for orbital $i$:
\begin{multline}
\label{wmv}
\Sigma^c_i(\omega)= \sum_n\sum_{\bar{\delta}} \left(
\sum_{\bar{\mu}} q^{in}_{\bar{\mu}} \Phi_{\bar{\mu}\bar{\delta}} \right)^2 \times \\
\frac{z_{\bar{\delta}}
\omega_{\bar{\delta}}}{2}
\left\{
\begin{array}{c@{\quad :\quad}l}
\frac{1}{\omega-\epsilon_n^{\text{DFTB}}+\omega_{\bar{\delta}}} & n \in \text{occ} \\
\\
\frac{1}{\omega-\epsilon_n^{\text{DFTB}}-\omega_{\bar{\delta}}} & n \in \text{virt},
\end{array}
\right.
\end{multline}
where $\Phi$ denotes the eigenvectors of $[W-v]$, while $z_{\bar{\delta}}$ and
$\omega_{\bar{\delta}}$ are the mentioned parameters of the plasmon-pole model, as
defined in Ref.~\cite{roh95}. They
are determined by diagonalization of $[W-v]$ at zero frequency and one
frequency on the imaginary axis. We checked that the actual values chosen have
little impact ($< 0.1 $ eV) on the final quasiparticle energies.
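Once the pole parameters are fixed, Eq.~(\ref{wmv}) reduces to a simple sum over poles; a sketch with hypothetical pole parameters:

```python
import numpy as np

def sigma_c(omega, q_in, Phi, z, w_pole, eps_n, n_occ):
    """Plasmon-pole correlation self-energy for orbital i.
    q_in[n, :] : transition charges between orbital i and state n
    Phi        : eigenvectors of [W - v]; z, w_pole the pole parameters."""
    sc = 0.0
    for n, en in enumerate(eps_n):
        coup = (q_in[n] @ Phi) ** 2           # (sum_mu q^{in} Phi_{mu d})^2
        sign = 1.0 if n < n_occ else -1.0     # occupied: +w, virtual: -w
        sc += np.sum(coup * 0.5 * z * w_pole
                     / (omega - en + sign * w_pole))
    return sc
```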
Based on the self-energy, the renormalization constant $Z_i$ from
Eq.~(\ref{qpe}) is then obtained by a simple numerical differentiation. For
the molecular structures we studied, $Z_i$ is usually roughly $0.85$, which is
close to the values reported for bulk systems \cite{hyb86}. However, for certain unbound
virtual orbitals, $Z_i$ can decrease to as low as $0.5$.
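The numerical differentiation for $Z_i$ can be sketched as follows, with a hypothetical two-pole model standing in for $\Sigma(\omega)$:

```python
def renorm_Z(sigma_of_omega, eps_dft, h=0.01):
    """Renormalization constant by a central finite difference:
    Z = 1 / (1 - dSigma/domega), evaluated at the orbital energy."""
    dS = (sigma_of_omega(eps_dft + h) - sigma_of_omega(eps_dft - h)) / (2.0 * h)
    return 1.0 / (1.0 - dS)

# hypothetical two-pole model self-energy, poles well away from eps_dft
sig = lambda w: 1.0 / (w - 5.0) + 1.0 / (w + 15.0)
Z = renorm_Z(sig, -7.0)
```

Because $\Sigma(\omega)$ has a negative slope between its poles, $Z$ comes out below one, as expected for bound states.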
\subsection{The exchange-correlation contribution $v^{xc}_i$}
\label{vxc}
We complete the description of our method with an investigation of the
contributions to the quasiparticle energies arising from the
exchange-correlation potential, denoted $v^{xc}_i[\rho_v]$. As indicated,
$v_{xc}$ is evaluated at the valence density $\rho_v$, consistent with the
fact that the summation in the exchange energy [Eq.~(\ref{sigx})] is carried out
over valence orbitals $j$ only. As pointed out in Ref.~\cite{ish01}, the
core contribution to the exchange energy is not negligible, and this holds also
for the core contribution to $v^{xc}$. However, even for
exchange-correlation potentials commonly used today, which are far from exact,
both core contributions cancel to a large degree when computing quasiparticle
energies according to Eq.~(\ref{qpe}).
In analogy to the derivation of the DFTB method, we now expand $v^{xc}$
around the density $\rho_v^0$, which is a superposition of atomic valence
densities. With $\rho_v = \rho_v^0+ \delta\rho$ we obtain:
\begin{subequations}
\begin{gather}
v^{xc}_i[\rho_v] = \int \abs{\psi_i({\bf r})}^2 v^{xc}[\rho_v^0]({\bf r}) \,\,d{\bf r}
\,\,+ \nonumber\\ \int\!\!\int \abs{\psi_i({\bf r})}^2 \,
\frac{\delta v^{xc}[\rho_v]({\bf r})}{\delta\rho_v({\bf r'})} \delta\rho({\bf r'})
\,d{\bf r} d{\bf r'} + {\cal O}(\delta\rho^2)\label{vxcia} \\
\approx \sum_{\mu\nu} c_{\mu i}c_{\nu i} v^{xc}_{\mu\nu}[\rho_v^0] +
\sum_{\bar{\mu}\bar{\nu}} q^{ii}_{\bar{\mu}}
\left[\frac{\delta v^{xc}}{\delta\rho}\right]_{\bar{\mu}\bar{\nu}}
\Delta q_{\bar{\nu}}\label{vxcib}
\end{gather}
\end{subequations}
In going from Eq.~(\ref{vxcia}) to Eq.~(\ref{vxcib}), the Mulliken
approximation was again employed and matrix elements of the exchange-correlation
kernel $\delta v^{xc}/\delta\rho$ were introduced in the notation of
Eq.~(\ref{twop}). The first term of Eq.~(\ref{vxcib}) is now exactly treated
like the Hamiltonian in the DFTB scheme. That is, only the two-center terms
are kept and the onsite values are calculated from uncompressed basis
functions and atomic densities. Then the integrals are numerically evaluated
and tabulated in the usual Slater-Koster form as a function of interatomic
distance.
The second term in Eq.~(\ref{vxcib}) is the counterpart of $H_{\mu\nu}^{SCC}$
in Eq.~(\ref{hmunu}). If we set
\begin{multline}
\label{gxc}
\left[\frac{\delta v^{xc}}{\delta\rho}\right]_{\bar{\mu}\bar{\nu}} =
\gamma_{\bar{\mu}\bar{\nu}}(\abs{{\bf R}_A-{\bf R}_B},U^H_{\bar{\mu}},U^H_{\bar{\nu}})\, -
\\ \gamma_{\bar{\mu}\bar{\nu}}(\abs{{\bf R}_A-{\bf R}_B},U^{ee}_{\bar{\mu}},U^{ee}_{\bar{\nu}}),
\end{multline}
the long-range $1/R$ tail of the two $\gamma$-functions cancels (see Fig.~\ref{gamfig}), and one is
left with a short-ranged representation of the exchange-correlation kernel
without introducing any new parameters or integral approximations. Moreover,
the $v^{xc}$ term in Eq.~(\ref{qpe}) now cancels all related
contributions in the orbital energies $\epsilon^{\text{DFTB}}_i$, as it should \cite{rem3}.
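The cancellation of the $1/R$ tails in Eq.~(\ref{gxc}) is easy to check numerically; the interpolation used below is only an assumed stand-in sharing the two limits of the actual $\gamma$ of Ref.~\cite{els98}:

```python
import numpy as np

# Assumed stand-in for gamma (NOT the formula of Ref. [els98]); any
# interpolation with gamma(0) = U and gamma(R) -> 1/R exhibits the
# cancellation of the long-range tails in the kernel difference.
def gamma(R, U):
    return 1.0 / (R + 1.0 / U)

U_H, U_ee = 11.06 / 27.2114, 15.39 / 27.2114   # hydrogen values, Tab. I
R = np.linspace(0.0, 100.0, 501)               # distance in bohr
kernel = gamma(R, U_H) - gamma(R, U_ee)        # [delta v_xc / delta rho]

assert abs(kernel[0] - (U_H - U_ee)) < 1e-12   # onsite limit survives
assert abs(kernel[-1]) < 1e-4                  # 1/R tails have cancelled
```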
\begin{figure}
\centering
\includegraphics[scale=0.7]{Fig1.eps}
\caption{The electron-electron interaction integrals for hydrogen as a
function of distance. Shown are results including both Coulomb and
exchange-correlation interactions ($\gamma^H$), pure Coulomb
($\gamma^{ee}$) and the {\em negative} of the pure exchange-correlation
interaction ($-\delta v^{xc}/\delta\rho =\gamma^{ee}-\gamma^H$), according to
Eq.~(\ref{gxc}).\label{gamfig}}
\end{figure}
At this point, all the necessary ingredients to calculate quasiparticle
energies within the DFTB scheme have been presented. In the next section we
analyze the strengths and weaknesses of the method by applying it to the
polyacene series.
\begin{table*}
\squeezetable
\caption{The different contributions to the quasiparticle (QP) energies and
the QP energies themselves obtained from the method described in this work (DFTB) as
well as from first principles calculations using Gaussian type orbitals
(GTO). Shown are results for some levels close to the frontier orbitals of
the benzene and anthracene molecules. All energies in eV.\label{tab_vgl}}
\begin{ruledtabular}
\begin{tabular}{ll|cc|cc|cc|cc|cc}
& &\multicolumn{2}{c}{$\epsilon^i$}& \multicolumn{2}{c}{$v_{xc}^i$} &
\multicolumn{2}{c}{$\Sigma_x^i$} &
\multicolumn{2}{c}{$\Sigma_c^i$} &
\multicolumn{2}{c}{$\epsilon_{QP}^i$} \\
State & Sym. & GTO & DFTB & GTO & DFTB & GTO &
DFTB & GTO & DFTB & GTO & DFTB\\
\colrule
\multicolumn{2}{l|}{Benzene} \\
$A_{2u}$ & $\pi$ & -9.37& -8.95&-12.82&-12.39&-17.15&-16.35& 2.03&
2.47&-11.67&-10.45\\
$E_{2g}$ & $\sigma$ & -8.34& -7.71&-15.05&-12.25&-19.49&-15.62& 1.86&1.41
&-10.92& -9.67\\
$E_{1g}$ & $\pi$ & -6.59& -6.64&-13.03&-12.30&-15.61&-14.98& 0.59&0.87 &
-8.58& -8.46\\\colrule
$E_{2u}$ & $\pi^*$ & -1.30& -1.32&-12.64&-11.72& -7.58& -7.41&-1.74&-0.99&
2.01& 2.00\\
$B_{2g}$ & $\pi^*$ & 0.92& 2.29& -6.96&-11.19& -2.89& -6.33&-2.31&-2.84&
2.67& 4.31\\
\colrule\colrule
\multicolumn{2}{l|}{Anthracene}\\
$B_{3u}$ & $\pi$ & -7.97 & -7.62 &-12.98 &-12.25&-16.32&-15.61& 1.98 &2.27
& -9.33 &-8.71\\
$B_{2g}$ &$\sigma$ & -7.85 & -7.28 &-13.07 &-12.17&-16.40&-15.32& 2.01 &1.90
& -9.15 &-8.53\\
$A_{u }$ & $\pi$ & -6.82 & -6.78 &-13.12 &-12.24&-15.81&-15.11& 1.38 &1.66
& -8.12 &-7.98\\
$B_{1g}$ & $\pi$ & -6.51 & -6.40 &-13.30 &-12.10&-15.27&-14.18& 1.01 &1.06
& -7.47 &-7.42\\
$B_{2g}$ & $\pi$ & -5.30 & -5.51 &-13.28 &-12.18&-14.90&-14.37& 0.58 &0.88
& -6.34 &-6.82\\\colrule
$B_{3u}$ & $\pi^*$ & -2.86 & -2.97 &-13.08 &-11.86& -8.78& -8.17&-1.82
&-0.90 & -0.37 &-0.19\\
$A_{u }$ & $\pi^*$ & -1.58 & -1.59 &-13.18 &-11.62& -8.49& -7.92&-2.21
&-1.38 & 0.89 & 0.74\\
$B_{1g}$ & $\pi^*$ & -1.25 & -1.28 &-12.89 &-11.63& -7.63& -7.34&-2.54
&-1.83 & 1.47 & 1.18\\
$B_{3u}$ & $\pi^*$ & -0.52 & 0.01 &-11.90 &-11.41& -6.47& -6.94&-2.84
&-2.45 & 2.07 & 2.03\\
\end{tabular}
\end{ruledtabular}
\end{table*}
\section{Applications}
\label{appl}
\subsection{Comparison to first principles results}
The polyacenes ($C_{4n+2}H_{2n+4}$) are linearly anellated
polycyclic aromatic hydrocarbons,
as shown in Fig. \ref{polyacene}.
The monomer ($n=1$) is benzene; naphthalene corresponds to $n=2$, anthracene
to $n=3$, tetracene to $n=4$, and so on. These systems have received much attention because
of their potential use in efficient organic thin film devices. Theoretically
they have been characterized quite widely
\cite{wha79,tri91,cio93,sab96,wib97,not98,del03} and recently also an
investigation in the context of the GW approximation appeared, where
polymorphs of the pentacene crystal were analyzed in terms of their optical
spectra \cite{tia03}.
Here, the polyacenes are chosen as a prototypical $\pi$-system to
explore the accuracy of our approximations. To this end we fully optimized
the different structures at the DFTB level of theory without imposing
symmetry constraints. The obtained geometries are in excellent agreement with
a recent DFT study on the polyacenes from $n=1$ to $5$, in which the hybrid
functional B3LYP and the accurate 6-311G** basis set were employed \cite{wib97}. For all
the molecules studied, we find a mean deviation in the bond lengths of no more
than 0.005 \AA.
\begin{figure}
\centering
\includegraphics[scale=0.3]{Fig2.eps}
\caption{Schematic view of the polyacenes; $n$ is the number of monomers.\label{polyacene}}
\end{figure}
Then the quasiparticle spectrum was obtained according to the approximations
in the last section. For comparison, also first principles GW calculations in
the Gaussian type orbital (GTO) implementation of Ref.~\cite{roh95} were
carried out.
In the ab initio GW calculation, the wavefunctions have been represented by
$s$ and $p$ Gaussian orbitals on carbon (decay constants: 0.12, 0.4, 1.0, and 2.8
atomic units, i.e., 16 orbitals per atom) and by $s$ and $p$ orbitals on hydrogen
(decay constants 0.15, 0.4 and 1.0, i.e., 12 orbitals per atom). The two-point
functions occurring in the GW scheme are represented by $s$, $p$, $d$, and $s^*$
orbitals on carbon (decay constants 0.2, 0.6, 1.6, and 4.0, i.e., 40 orbitals
per atom) and on hydrogen as well (decay constants 0.25 and 0.7, i.e., 20
orbitals per atom).
In both the DFTB and first principles calculations the LDA
exchange-correlation functional was used.
The results for benzene and anthracene are listed in Tab. \ref{tab_vgl} for
a small number of states around the frontier orbitals, according to the energy
partitioning of Eq.~(\ref{qpe}). Focusing first on the orbital
energies, we find a very good agreement between the DFTB and the first
principles results. This might be surprising considering the limited basis set
employed in the former approach. However,
the DFTB basis consists of optimized atomic orbitals rather than simple Gaussian
type orbitals. Moreover, the approximations underlying the DFTB method seem to
be justified due to a stable error cancellation for the $\pi$-orbitals. This
holds true to a lesser extent for $\sigma$-orbitals, like the $E_{2g}$ state in
benzene, where errors of up to 0.6 eV are found. Not unexpectedly, difficulties
are also observed for unbound virtual orbitals like the $B_{2g}$ state in
benzene, since the description of the continuum is very sensitive to the
quality of a finite basis set.
Next, we turn to the exchange-correlation energy per orbital. Here we
find, in comparison with the first principles results, that the DFTB values are in general
too positive by roughly 10\%. For the $\sigma$-orbitals of benzene the error
even reaches 20\%. We attribute this failure to the neglect of
crystal field terms in our approach, which is
likely to have different effects on orbitals of $\sigma$ and $\pi$
symmetry. In fact, we calculated elements of the type
$\exv{\phi_\mu^A}{v_{xc}(\rho_B)}{\phi^A_\nu}$ and found that integrals where A
represents a hydrogen atom and B a carbon atom are significantly larger than
in the reversed situation or integrals where both A and B stand for
carbon atoms. As the latter two types of integrals occur in the calculation of
$\pi$-orbitals of the polyacenes, while the first one is important for
$\sigma$-orbitals, the missing of the crystal field terms is likely to be the
source of error here.
Considering now the exchange contribution to the self-energy $\Sigma_x^i$,
similar trends are found. Compared to the ab initio results, the DFTB values
are slightly too positive. Since the terms $\Sigma_x^i$ and $v_{xc}^i$
contribute in Eq.~(\ref{qpe}) with opposite signs to the quasiparticle
energies, a stable error cancellation is expected. A larger deviation is found
again for the $E_{2g}$ state in benzene, where an error of up to 4 eV
occurs. Since the exchange integrals depend strongly on the atomic repulsion
integrals $U_{ee}$ in our approximation, the error could be reduced by
enlarging this parameter for hydrogen without losing the good performance for
the $\pi$-orbitals. However, we hesitate to treat the $U_{ee}$ values as
empirical parameters, because of loss of transferability. Instead, one should
look for a better approximation of the two-electron integrals. In
our approximation the density $\phi_\mu({\bf r})\phi_\nu({\bf r})$ is represented by a
superposition of spherical charge densities. Consequently, the two-electron
integrals are given by simple monopole-monopole interactions, thus neglecting
any angular momentum dependence. A natural next step would be to include
higher order terms in a multipole expansion of the density
$\phi_\mu({\bf r})\phi_\nu({\bf r})$, as is done in the semiempirical MNDO method
developed by Dewar and Thiel \cite{dew77}.
Next, the final quasiparticle energies are discussed. For the $\pi$-orbitals
the mean deviation of the DFTB results from the ab initio values is 0.4
eV, with errors decreasing as the system size is increased. As could
already be expected from the foregoing discussion, the description of
$\sigma$-orbitals is less satisfactory in the current state of
approximations. For the $E_{2g}$ orbital of benzene an error of 1.25 eV is
observed. For the unoccupied levels, however, a very nice agreement between
first principles and DFTB results is evident. Clearly, this is a consequence
of an error cancellation between all terms in Eq.~(\ref{qpe}), since e.g. the
correlation contribution $\Sigma_c^i$ is systematically underestimated in the
DFTB scheme.
In this context it is interesting to investigate if a more advanced treatment
of the Dyson equation (\ref{dyson}) leads to better results. In fact, it has
been found that the associated wavefunctions of orbitals which are bound at
the DFT level of theory, but unbound at the QP level, differ
considerably. This is in contrast to the assumptions made in the derivation of
Eq.~(\ref{qpe}) from Eq.~(\ref{dyson}) and hence the full QP Hamiltonian needs
to be diagonalized in these cases and self-consistency with respect to the
energy dependence of $\Sigma$ must be achieved. Following this approach
earlier investigations of this point reported shifts of the LUMO level up to
0.8 eV \cite{roh002,ish01}. We also performed such calculations for benzene
and found that even for the $E_{2u}$ state the diagonalization changes the QP
spectrum by less than 0.01 eV. This can be understood as a consequence of the
minimal basis set we employ, which does not provide enough flexibility to
describe the relaxation towards delocalized states. Considering the energy
dependence of the self-energy, it can be stated that the approximate treatment
of Eq.~(\ref{qpe}) using the renormalization constant $Z$ is quite successful,
as we find deviations less than 0.2 eV from the self-consistent solution of
the Dyson equation.
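The approximate treatment discussed here can be sketched numerically. The snippet below assumes the standard linearized quasiparticle equation, $\epsilon_i^{QP} = \epsilon_i^{DFT} + Z_i\,(\Sigma_x^i + \Sigma_c^i - v_{xc}^i)$ with $Z_i = (1 - \partial\Sigma_c/\partial\omega)^{-1}$; the input numbers in the comment are illustrative only, not values from Tab.~\ref{tab_vgl}.

```python
def qp_energy(eps_dft, sigma_x, sigma_c, vxc, dsigma_dw):
    """Linearized quasiparticle energy using the renormalization constant Z.

    All energies in eV; dsigma_dw is the energy derivative of the
    correlation self-energy evaluated at eps_dft (dimensionless).
    """
    Z = 1.0 / (1.0 - dsigma_dw)  # renormalization constant
    return eps_dft + Z * (sigma_x + sigma_c - vxc)

# Illustrative: with dsigma_dw = 0, the QP shift is simply Sigma - vxc.
# qp_energy(-9.0, -15.0, 2.0, -13.5, 0.0) gives -8.5
```

The opposite signs of $\Sigma_x^i$ and $v_{xc}^i$ in the bracket make the error cancellation discussed above explicit.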
\subsection{Size dependence of the quasiparticle gap}
\begin{figure}
\centering
\rotatebox{0}{ \includegraphics[scale=0.7]{Fig3.eps}}
\caption{Quasiparticle ($\bullet$) and DFT gap
($\blacktriangle$) for the lowest energy conformer of the polyacenes as obtained in the
DFTB approximation. Lines are guides to the eye. Also shown is the
difference of experimentally determined electron affinities and vertical
ionization potentials ($\circ$) from Ref.~\cite{nist}, where they were
available.\label{qpgap}}
\end{figure}
After validation of the method, we now turn to a first application and analyze
in the following the quasiparticle gap $\epsilon^{QP}_{\text{gap}}=
\epsilon^{QP}_{\text{LUMO}}-\epsilon^{QP}_{\text{HOMO}}$ as a
function of chain length. The first observation which can be drawn from
Fig.~\ref{qpgap} is that the DFTB quasiparticle gap is in very nice agreement with
the experimental data, which
provides some confidence that the general trends we are looking for are
correctly described. Furthermore, Fig.~\ref{qpgap} shows that the DFT gap is
continuously decreasing and almost vanishes for n = 19 monomers. As the length
increases, the geometry of the innermost part of the chain resembles more and
more that of two coupled polyacetylene chains with equal bond lengths, as
schematically depicted in Fig.~\ref{peierls}.
\begin{figure}
\centering
\includegraphics[scale=0.3]{Fig4.eps}
\caption{Schematic representation of the lowest energy conformations of the
polyacenes found in this study. The aromatic structure a) is the most stable
for n $\le$ 19, while the Peierls (Z)-distorted structure b) is energetically
favoured for n $\ge$ 20.\label{peierls}}
\end{figure}
The vanishing of the DFT gap can thus be understood in terms of a simple
particle-in-a-box model.
In stark contrast to the DFT gap,
$\epsilon^{QP}_{\text{gap}}$ remains finite for increasing chain length and it
seems worthwhile to explore the physical origin of this different behaviour in
a short digression. In fact, QP energies can be directly compared to results
from photoemission and inverse photoemission measurements, i.e., they include
the effect of an extra particle, while DFT is a pure N-electron
theory. Delerue et al.~pointed out that for nanocrystals the size dependence
of the difference
$\Delta=\epsilon^{QP}_{\text{gap}}-\epsilon^{DFT}_{\text{gap}}$ can be
estimated on the basis of classical electrostatic arguments
\cite{del00}. Considering the interaction between the extra particle and its
induced surface charge on the nanocrystal, they arrived at the following
formula:
\begin{equation}
\label{pol}
\Delta \approx \left( 1 - \frac{1}{\epsilon(R)} \right) \frac{e^2}{R} + 0.94
\frac{e^2}{\epsilon(R) R} \left(\frac{\epsilon(R)-1}{\epsilon(R)+1}\right) + \Delta_b,
\end{equation}
where $\epsilon(R)$ is an effective dielectric constant and $\Delta_b$ is the
bulk value of $\Delta$. In order to apply Eq.~(\ref{pol}) to the
polyacenes, we took $R$ to be half of the chain length and obtained
$\epsilon(R)$ by averaging the microscopic dielectric function in
Eq.~(\ref{eps}). The obtained values increase from 1.72 for n = 1 to 2.14 for
n = 19, which reflects the decreasing band gap. A fit of Eq.~(\ref{pol}) to our
QP results is shown in Fig.~\ref{qppol} and leads to a value of 2.18
eV for $\Delta_b$. Taking into account that the DFT gap is vanishing for
$n\to\infty$, we therefore predict a QP gap of the same value for an infinite
chain in the aromatic structure of Fig.~\ref{peierls}. Inspection of
Fig.~\ref{qppol} further reveals that for n $>$ 4 the agreement between
Delerue's formula and the QP results is excellent. The fact that
Eq.~(\ref{pol}), which was developed in the context of nanocrystals, also
holds for a quasi one-dimensional system like the polyacenes is quite
remarkable.
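Delerue's formula above is straightforward to evaluate. The sketch below assumes Gaussian units with $e^2 \approx 14.4$~eV\,\AA{} and the fitted bulk value $\Delta_b = 2.18$~eV from this work; the radii and dielectric constants passed in are illustrative.

```python
def delta_gap(R, eps_R, delta_b=2.18, e2=14.3996):
    """QP-minus-DFT gap difference (eV) from Delerue's electrostatic model.

    R      : effective radius in angstrom
    eps_R  : effective dielectric constant at that size
    delta_b: bulk value of the difference, in eV
    e2     : squared electron charge in eV*angstrom (Gaussian units)
    """
    surface = (1.0 - 1.0 / eps_R) * e2 / R          # extra-particle term
    induced = 0.94 * e2 / (eps_R * R) * (eps_R - 1.0) / (eps_R + 1.0)
    return surface + induced + delta_b
```

As expected from the formula, the difference decreases monotonically with $R$ and tends to $\Delta_b$ for an infinite system.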
\begin{figure}
\centering
\rotatebox{0}{ \includegraphics[scale=0.7]{Fig5.eps}}
\caption{Fit of Eq.~(\ref{pol}) ($\times$) to the difference between QP and
DFT gap as obtained in this work ($\bullet$).\label{qppol} }
\end{figure}
We now continue the discussion of Fig.~\ref{qpgap}. Between n = 19 and n = 20
HOMO and LUMO cross, which has important implications for the geometrical as
well as electronic structure of the system. Remaining in the picture of
polyacene as coupled polyacetylene, we observe that the equal C-C bond lengths
found for n $<$ 20 turn into alternating single and double bonds for larger n
as depicted in Fig.~\ref{peierls}, i.e., the system undergoes a Peierls
distortion. In contrast to polyacetylene, where the bond alternation is found
to be around 0.08~\AA~\cite{suh83}, the effect is much weaker here, with a
value of less than 0.008~\AA. Nevertheless, the dimerization leads, as in
polyacetylene, to an opening of the DFT gap, which tends towards a small but
finite value for the infinite chain. Inspection of Fig.~\ref{qpgap} shows that
also the quasiparticle gap differs significantly for the distorted and
undistorted structure. This fact could be used in experiment to discriminate
between both polymorphs, since we find, in line with the MP2 results of
Cioslowski \cite{cio93}, that the two forms are energetically quite close and
should therefore coexist in real samples. We should mention, however, that up
to now only polyacenes up to n = 6 could be isolated, since larger chains are
highly vulnerable to photooxidation \cite{not98}.
The results of this section can be summarized as follows. First, the aromatic
form of the infinite polyacene is predicted to be metallic at the DFT level of
theory, but semiconducting at the GW level. It thus provides another example
besides bulk Ge \cite{hyb86}, where the DFT gap is not only quantitatively but
also qualitatively wrong. Second, the difference between the DFT and QP gap
can be understood in terms of the interaction between an extra particle -
which is missing in DFT - and the charge it induces on the molecular surface.
Third, the Peierls distorted polyacene conformer is energetically favoured
only for very long chains and possesses a QP spectrum which is markedly
different from the aromatic form. This also underlines the usefulness of
approximate GW schemes, since in a first principles context the Peierls
transition found here might not be noticed due to the limited treatable system
size \cite{rem4}.
\section{Numerical considerations}
In the following, we shortly discuss the numerical efficiency of our
approximations. The method scales like $N^2N_l^2$, where N is the number of
basis functions and $N_l$ is the number of spherically averaged
basis functions that are used in the representation of two-point functions
($N_l < N$). This has to be compared to first principles implementations,
which usually scale like $N^4$. An additional reduction of computation time is
obtained, since we use a minimal basis of optimized atomic orbitals, while in
a first principles framework a larger number of primitive orbitals is
required. Moreover, the scaling prefactor is reduced in the DFTB scheme,
because the necessary integrals are either precalculated and tabulated or
approximated by simple functions. As an example, the first principles
evaluation of the QP spectrum of anthracene took 170 minutes on a Pentium Xeon
2.20 GHz processor (including 120 minutes for the DFT part of the
calculation), compared to less than 1 second on a Pentium 4 with 2.40 GHz in
our approach. The largest structure we studied, the n = 30 polyacene with 186
atoms, took 10 minutes. The limiting factor in the calculation of very large
systems is therefore not the computation time but rather the memory
requirement. We try to circumvent this problem by computing memory intensive
quantities like the overlap charges $q^{ij}_\mu$ on-the-fly in a direct way.
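Ignoring prefactors, the asymptotic advantage of the $N^2 N_l^2$ scaling over the first principles $N^4$ scaling reduces to a simple ratio; the numbers below are purely illustrative and are not the basis-set sizes of any specific calculation in this work.

```python
def ops_ratio(N, Nl):
    """Ratio of first-principles (N^4) to DFTB-type (N^2 * Nl^2) operation
    counts; algebraically this simplifies to (N / Nl)**2."""
    return N**4 / (N**2 * Nl**2)

# e.g. N = 40 basis functions, Nl = 10 spherically averaged functions
# gives a formal speedup factor of 16, before the reduced prefactor
# from tabulated integrals is even taken into account.
```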
\section{Concluding remarks}
In this work we presented a method to perform quasiparticle calculations of
molecular systems at a highly reduced computational cost compared to first
principles implementations. The scheme was applied to hydrocarbons,
but it can be easily extended to other elements, since all required
parameters are calculated from first principles. The various
approximations of the method are intended to be as
consistent as possible with the underlying DFTB approach to allow for a stable
error cancellation. For benzene and anthracene the results are indeed
comparable with higher level calculations with the exception of
$\sigma$-orbitals. Here, ways to overcome the deficiencies were
outlined. Nevertheless, we think that the scheme could be useful already at
the present stage, since e.g. for optical spectra or in the electronic
transport only a few states around the Fermi level are active and dominate the
physical properties of a system.
\section*{Acknowledgements}
The authors would like to thank Alessandro Pecchia and Alessio
Gagliardi for fruitful discussions related to this work. Further, the EC-Diode-Network is gratefully
acknowledged for financial support, and T.A.N. is much obliged for the use of the
computer facilities at the German Cancer
Research Center in Heidelberg.
{
"timestamp": "2004-11-01T15:08:48",
"yymm": "0411",
"arxiv_id": "cond-mat/0411024",
"language": "en",
"url": "https://arxiv.org/abs/cond-mat/0411024"
}
\section{Introduction}\lbl{int}
The analysis of $K\rightarrow \pi\gamma^*\rightarrow \pi l^{+}l^{-}$ decays within the framework of chiral perturbation theory ($\chi$PT) was first made in refs.~\cite{EPR87,EPR88}. To lowest non-trivial order in the chiral expansion, the corresponding decay amplitudes get contributions both from chiral one-loop graphs and from tree-level contributions of local operators coming from the relevant effective $\Delta S=1$ Lagrangian at ${\cal O} (p^4)$. In fact, in order to combine refs.~\cite{EPR87,EPR88} with our new theoretical viewpoint, it is more convenient to rewrite this Lagrangian as
\begin{align}
&\hspace{-1cm}{\cal L}_{\rm eff}^{\Delta S=1}(x) \doteq \nonumber\\
&\hspace{-1cm}- G_8 \mbox{\rm tr}\left(\lambda L_{\mu}L^{\mu} \right) + eG_8F_{0}^2 A_{\mu}\mbox{\rm tr}[\lambda(L^{\mu}\Delta+\Delta L^{\mu})]\nonumber\\
&\hspace{-1cm}- \frac{ie}{3F_0^2}G_8 F^{\mu\nu}({\mbox{\bf w}}_{1}-{\mbox{\bf w}}_{2})\; \mbox{\rm tr} \left( \lambda L_\mu L_\nu \right) \nonumber\\
&\hspace{-1cm}- \frac{ie}{F_0^2}G_8 {\mbox{\bf w}}_{2}\, \mbox{\rm tr}(\lambda L_\mu \hat Q L_\nu) + \rm{h.c.}\lbl{efflnex}
\end{align}
where $U(x)$ is the matrix field which collects the Goldstone fields ($\pi$'s, $K$'s and $\eta$), $A_{\mu}$ is an external electromagnetic field source, $F^{\mu\nu}$ the corresponding electromagnetic field strength tensor and
\begin{flalign}
&\hspace{-0.8cm}L_{\mu}(x)=-iF_{0}^2 U^{\dagger}(x)\partial_{\mu}U(x)\;,\\
&\hspace{-0.8cm}\Delta(x)=U^{\dagger}(x)[\hat{Q},U(x)]\;,\; G_8=\scriptstyle{\frac{G_{F}}{\sqrt{2}}\,V_{\rm ud}^{\phantom{\ast}}V_{\rm us}^{\ast}\,\mbox{\bf g}_8}\;,\\
&\hspace{-0.8cm}\hat Q = \rm{diag}(1,0,0) \;,\; (\lambda)_{ij} = \delta_{3i}\delta_{2j}\;.
\end{flalign}
As can easily be shown, $\tilde {\bf w}$ is ${\cal O}(N_c^0)$ and $\mbox{\bf w}_2$ is ${\cal O}(N_c)$. Our aim is to obtain values for these two coupling constants. Let us first recall some basic facts concerning these decays.
\section{$K\rightarrow \pi l\bar{l}$ Decays to $O(p^4)$ in the Chiral Expansion}
In full generality, one can predict from ref.~\cite{EPR87} the $K^{+}\rightarrow \pi^{+} l^{+}l^{-}$ decay rates ($l=e,\mu$) as a function of the scale--invariant combination of coupling constants
\begin{align}
\hspace{-1cm}{\mbox{\bf w}}_{+}= &-\frac{(4\pi)^{2}}{3} \,[\tilde {\bf w}+3({\mbox{\bf w}}_{2}-4\mbox{\bf L}_{9})]\nonumber\\
&\hspace{2cm}-\frac{1}{6}\log\frac{M_{K}^2 m_{\pi}^{2}}{\nu^4}\label{eq:kppiplplmw}\,.
\end{align}
The predicted decay rate $\Gamma(K^+\rightarrow\pi^+ e^+ e^-)$ is a quadratic function of ${\mbox{\bf w}}_{+}$; there are therefore two solutions which reproduce the experimental branching ratio~\cite{K+dacrate} (for a value of the overall constant ${\bf g}_8=3.3$):
\begin{equation}\lbl{K+rate}
{\rm Br}(K^+\rightarrow\pi^+ e^+ e^-)=(2.88\pm0.13)\times 10^{-7}\,,
\end{equation}
\begin{equation}
\lbl{solsK+}{\mbox{\bf w}}_{+}=1.69\pm 0.03 \;\; \rm{or}\;\; {\mbox{\bf w}}_{+}=-1.10\pm 0.03\;.
\end{equation}
The $K_{S}\rightarrow \pi^{0}e^{+}e^{-}$ decay rate brings in another scale--invariant combination of constants:
\begin{equation} \lbl{eq:ws}
{\mbox{\bf w}}_{s}= -\frac{1}{3}(4\pi)^{2}\,\tilde {\bf w}-\frac{1}{3}\log\frac{M_{K}^{2}}{\nu^2}\,,
\end{equation}
and it is also quadratic in ${\mbox{\bf w}}_{s}$. From the recent result on this mode, reported by the NA48 collaboration at CERN~\cite{NA48}:
\begin{align}
&\hspace{-0.8cm}{\rm Br}\left(K_S \rightarrow \pi^0 e^+ e^-\right)= \nonumber \\
&\left[5.8^{+2.8}_{-2.3} (\rm stat.) \pm 0.8 (\rm syst.) \right] \times 10^{-9}\lbl{K0rate}\,,
\end{align}
one obtains the two solutions for ${\mbox{\bf w}}_{s}$
\begin{equation}\lbl{solsK0}
{\mbox{\bf w}}_{s}= 2.56 ^{+0.50}_{-0.53} \quad \rm{or} \quad {\mbox{\bf w}}_{s}= -1.90 ^{+0.53}_{-0.50}\,\,.
\end{equation}
At the same ${\cal O}(p^4)$ in the chiral expansion, the branching ratio for the $K_L\rightarrow \pi^0 e^+ e^-$ transition induced by CP--violation reads as follows
\begin{align}
&\hspace{-1cm}{\rm Br}\left(K_L \rightarrow \pi^0 e^+ e^-\right)\vert_{\rm\tiny CPV}= \nonumber \\
&\hspace{-1cm}\left[(2.4\pm 0.2) \left(\frac{{\rm Im}\lambda_t}{10^{-4}}\right)^{2}+(3.9\pm 0.1)\,\left(\frac{1}{3}-{\mbox{\bf w}}_s\right)^2 \right. \nonumber\\
&\hspace{-0.5cm}\left.+ (3.1\pm 0.2)\, \frac{{\rm Im}\lambda_t}{10^{-4}}\left(\frac{1}{3}-{\mbox{\bf w}}_s\right)\right]\times 10^{-12}\lbl{KSCPV}\,.
\end{align}
Here, the first term is the one induced by the {\it direct} source, the second one by the {\it indirect} source and the third one the {\it interference} term. With~\cite{Baetal03} $\mbox{\rm Im}\lambda_t= (1.36\pm 0.12)\times 10^{-4}$, the interference is constructive for the negative solution in Eq.~\rf{solsK0}.
The four solutions obtained in Eqs.~\rf{solsK+} and \rf{solsK0} define four different straight lines in the plane of the coupling constants ${\mbox{\bf w}}_{2}-4{\bf L}_9$ and ${\bf \tilde{w}}$, as illustrated in Fig.~1 below. We next want to discuss which of these four solutions, if any, may be favored by theoretical arguments.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.47\textwidth]{beyond12.eps}
\end{center}
{\bf Fig.~1} {\it The four possible values of the couplings at ${\cal O}(p^4)$ in the chiral expansion are compatible with the experiments. The cross in this figure corresponds to the values in Eqs.~\rf{wtildepred} and \rf{w24L9pred}. }
\end{figure}
\section{Theoretical Considerations}
\subsection{The Octet Dominance Hypothesis}
In ref.~\cite{EPR87}, it was suggested that the couplings ${\mbox{\bf w}}_1$ and ${\mbox{\bf w}}_2$ may satisfy the same symmetry properties as the chiral logarithms generated by the one loop calculation. This selects the octet channel in the transition amplitudes as the only possible channel and leads to the relation
\begin{equation}\lbl{odh}
{\mbox{\bf w}}_2=4{\bf L}_9\;{\mbox{\rm\footnotesize Octet Dominance Hypothesis (ODH)}}\,.
\end{equation}
We now want to show how this {\it hypothesis} can in fact be justified within a simple dynamical framework of resonance dominance, rooted in Large--$N_c$ QCD. For that, let us reduce the Lagrangian in Eq.~\rf{efflnex} to its minimal Goldstone field content:
\begin{align}
&\hspace{-0.8cm}{\cal L}_{\rm eff}^{\Delta S=1}(x) =\nonumber\\
&\hspace{-0.8cm} G_8ie \,{\mbox{\bf w}}_{2} \partial_{\nu}F^{\nu\mu}\mbox{\rm tr}[\lambda(\Phi\hat{Q}\partial_{\mu}\Phi-\partial_{\mu}\Phi\hat{Q}\Phi)] \;\scriptstyle{+ \cdots} \label{efflfield}
\end{align}
showing that the two--field content which, in the term modulated by ${\mbox{\bf w}}_{2}$, couples to $\partial_{\nu}F^{\nu\mu}$ is exactly the same as the one which couples to the gauge field $A^{\mu}$ in the lowest ${\cal O}(p^2)$ Lagrangian, so that the two contributions cancel~\cite{EPR87}. The cancellation is expected because of the mismatch between the minimum number of powers of external momenta required by gauge invariance and the powers of momenta that the lowest order effective chiral Lagrangian can provide. As we shall next explain, it is the reflection of the dynamics of this cancellation which, to a first approximation, is also at the origin of the relation $\mbox{\bf w}_2=4\mbox{\bf L}_{9}$.
The hadronic electromagnetic interaction reads as follows
\begin{equation}
{\cal L}_{\rm em}(x)=-ie\left( A^{\mu}-\frac{2{\bf L}_{9}}{F_0^2} \partial_{\nu} F^{\nu\mu}\right) \mbox{\rm tr}(\hat{Q}\Phi\stackrel{\leftrightarrow}{\partial_{\mu}}\Phi) \;\scriptstyle{+\cdots}\,.
\end{equation}
We can recognize here an electromagnetic form factor of the charged Goldstone bosons, which begins as
\begin{equation}
F_{\rm em}(Q^2)=1-\frac{2{\bf L}_{9}}{F_{0}^2}Q^2 + \cdots
\end{equation}
In the {\it minimal hadronic approximation} (MHA) to Large--$N_c$ QCD, the form factor in question is saturated by the lowest order pole
i.e. the $\rho(770)$~:
\begin{equation}
\label{Fem}
F_{\rm em}(Q^2)=\frac{M_{\rho}^2}{M_{\rho}^2+Q^2}\;\Rightarrow\;{\bf L}_{9}=\frac{F_{0}^2}{2M_{\rho}^2}\,.
\end{equation}
It is well known that this reproduces the observed slope rather well. Applying the same argument to the term in Eq.~(\ref{efflfield}), we have an electroweak form factor
\begin{equation}
F_{\rm ew}(Q^2)= 1-\frac{{\bf w}_{2}}{2F_{0}^2}Q^2 + \cdots \;.
\end{equation}
Here, however, the underlying $\Delta S=1$ form factor structure can have contributions both from the $\rho$ and the $K^*(892)$~:
\begin{equation}
F_{\rm ew}(Q^2)= \frac{\alpha M_{\rho}^2}{M_{\rho}^2+Q^2} + \frac{\beta M_{K^*}^2}{M_{K^*}^2+Q^2},
\end{equation}
with $\alpha+\beta=1$ because at $Q^2\rightarrow 0$ the form factor is normalized to one by gauge invariance. This fixes the slope to
\begin{equation}
\label{w2surF}
\frac{{\bf w}_2}{2F_0^2} =\left(\frac{\alpha}{M_{\rho}^2}+\frac{ \beta}{M_{K^*}^2}\right)\,.
\end{equation}
If, furthermore, one assumes the chiral limit, where $M_\rho =M_{K^*}$, then combining (\ref{Fem}) and (\ref{w2surF}) yields the ODH relation in Eq.~\rf{odh};
a result which, as can be seen in Fig.~1, favours the solution where both ${\bf w}_{+}$ and ${\bf w}_{s}$ are negative, and the interference term in Eq.~\rf{KSCPV} is then constructive.
\subsection{Beyond the ${\cal O}(p^4)$ in $\chi$PT}
Here, we want to show that it is possible to understand the observed $K^+ \rightarrow \pi^+ l^+l^-$ spectrum within a simple MHA picture of Large--$N_c$ QCD which goes beyond the ${\cal O}(p^4)$ framework of $\chi$PT but, contrary to the proposals in refs.~\cite{DEIP98,BDI03}, it does not enlarge the number of free parameters.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.46\textwidth]{beyond13.eps}
\end{center}
{\bf Fig.~2} {\it Plot of the form factor $\left|f_V(z)\right|^2$ versus the invariant mass squared of the $e^+ e^-$ pair normalized to $M_{K}^2$. The crosses are the experimental points ~\cite{zeller}; the dotted curve is the (best) leading ${\cal O}(p^4)$ prediction ($\mbox{\bf w}_+ >$0); the continuous line is the fit of the form factor in Eq.~\rf{fff} below.}
\end{figure}
Following the ideas developed in the previous subsection, we propose a very simple generalization of the ${\cal O}(p^4)$ form factor ~\cite{EPR87}:
\begin{equation}
f_V(z) = \frac{G_8}{G_F} \left\{ \frac{1}{3}-w_+ - \frac{1}{60}z - \chi(z) \right\}\;.
\end{equation}
We keep the lowest order chiral loop contribution as the leading manifestation of the Goldstone dynamics, but replace the local couplings $\mbox{\bf w}_2 -4{\bf L}_9$ and ${\tilde{\bf w}}$ in ${\bf w}_{+}$ by the minimal resonance structure. The form factor we propose is ~\cite{FGEdeR04}
\begin{align}
\lbl{fff}
&\hspace{-0.8cm}f_V(z) = \frac{G_8}{G_F} \left\{ \frac{(4\pi)^2}{3} \left[{\tilde{\bf w}}\frac{M_{\rho}^2}{M_{\rho}^2-M_{K}^2 z} \right. \right. \nonumber\\
&\hspace{-0.4cm}\left.\left.+ 6 F_{\pi}^2 \beta \frac{M_\rho^2 - M_{K^*}^2}{\left(M_\rho^2 - M_K^2 z\right)\left(M_{K^*}^2-M_K^2 z\right)}\right] \right. \nonumber\\
&\hspace{-0.8cm}\left. + \frac{1}{6} \ln \left(\frac{M_K^2 m_\pi^2}{M_\rho^4}\right) +\frac{1}{3} - \frac{1}{60}z - \chi(z) \right\}\,,
\end{align}
where $\chi(z)=\phi_{\pi}(z)-\phi_{\pi}(0)$.
With $\tilde {\bf w}$ and $\beta$ left as free parameters, we make a least-squares fit to the experimental points in Fig.~2. The result is the continuous curve shown in the same figure, which corresponds to a $\chi_{\mbox{\rm\tiny min.}}^2=13.0$ for 18 degrees of freedom. The fitted values (using ${\bf g}_8=3.3$ and $F_{\pi}=92.4~\mbox{\rm MeV}$) are
\begin{equation}\lbl{wtildepred}
\tilde {\bf w}=0.045\pm 0.003\qquad\mbox{\rm and}\qquad \beta=2.8\pm 0.1\,;
\end{equation}
and therefore
\begin{equation}\lbl{w24L9pred}
\mbox{\bf w}_2 -4{\bf L}_9=-0.019\pm 0.003\,.
\end{equation}
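The central value quoted here follows from Eqs.~(\ref{Fem}) and (\ref{w2surF}): with $\alpha = 1-\beta$, one has $\mbox{\bf w}_2 - 4{\bf L}_9 = 2F_\pi^2\,\beta\,(M_{K^*}^{-2} - M_\rho^{-2})$. The sketch below checks this arithmetic with assumed PDG-like meson masses (the precise values are not stated in the text) and the fitted $\beta$; no error propagation is attempted.

```python
F_PI, M_RHO, M_KSTAR = 92.4, 775.5, 891.7  # MeV; masses are assumed PDG-like values
BETA = 2.8                                  # fitted value from Eq. (wtildepred)

alpha = 1.0 - BETA                          # normalization alpha + beta = 1
w2 = 2.0 * F_PI**2 * (alpha / M_RHO**2 + BETA / M_KSTAR**2)  # slope, Eq. (w2surF)
four_L9 = 2.0 * F_PI**2 / M_RHO**2                           # from Eq. (Fem)
print(w2 - four_L9)  # ~ -0.019, consistent with the central value above
```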
These are the values which correspond to the cross in Fig.~1 above. The fitted value for $\tilde{\bf w}$ results in a negative value for $\mbox{\bf w}_s$ in Eq.~\rf{eq:ws}
\begin{equation}
\lbl{fitws}\mbox{\bf w}_s=-2.1\pm 0.2\,,
\end{equation}
which corresponds to the branching ratios (for experimental values see refs.~\cite{K+dacrate,Moriond})
\begin{align}
&\hspace{-0.8cm}{\rm Br}\left(K_S \rightarrow \pi^0 e^+ e^-\right) = (7.7\pm 1.0)\times 10^{-9}\,, \\
&\hspace{-0.8cm}{\rm Br}\left(K_S \rightarrow \pi^0 e^+ e^-\right)\vert_{\scriptscriptstyle >165 {\tiny \mbox{\rm MeV}}} = (4.3\pm 0.6)\times 10^{-9}
\end{align}
and, with Eq.~\rf{w24L9pred}, to
\begin{align}
&\hspace{-0.8cm}{\rm Br}\left(K^+ \rightarrow \pi^+ \mu^+ \mu^-\right) = (1.7\pm 0.2)\times 10^{-9}\,.
\end{align}
Finally, the resulting negative value for ${\bf w}_s$ in Eq.~\rf{fitws}, implies a constructive interference in Eq.~\rf{KSCPV} with a predicted branching ratio
\begin{equation}\lbl{KLCPVP}
{\rm Br}\left(K_L \rightarrow \pi^0 e^+ e^-\right)\vert_{\rm\tiny CPV}=(3.7\pm 0.4)\times 10^{-11}\,,
\end{equation}
where we have used~\cite{Baetal03} $\mbox{\rm Im}\lambda_t= (1.36\pm 0.12)\times 10^{-4}$.
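Evaluating Eq.~\rf{KSCPV} with these central values reproduces the prediction above; the sketch below uses central values only (no propagation of the quoted uncertainties).

```python
IM_LT = 1.36        # Im(lambda_t) in units of 1e-4
WS = -2.1           # fitted value of w_s from Eq. (fitws)

x = 1.0 / 3.0 - WS  # the combination (1/3 - w_s) entering Eq. (KSCPV)
# direct + indirect + (constructive) interference terms of Eq. (KSCPV)
br = (2.4 * IM_LT**2 + 3.9 * x**2 + 3.1 * IM_LT * x) * 1e-12
print(br)  # ~3.8e-11, cf. the predicted (3.7 +- 0.4) x 10^-11
```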
\section{Conclusions}
Earlier analyses of $K\rightarrow\pi~e^+ e^-$ decays within the framework of $\chi$PT have been extended beyond the predictions of ${\cal O}(p^4)$, by replacing the local couplings which appear at that order by their underlying narrow resonance structure in the spirit of the MHA to Large-$N_c$ QCD. The resulting modification of the ${\cal O}(p^4)$ form factor is very simple and does not add new free parameters. It reproduces very well both the experimental decay rate and the invariant $e^+ e^-$ mass spectrum. The predicted Br$(K_S\rightarrow\pi^0 e^+ e^-)$ and Br$(K_S\rightarrow\pi^0 \mu^+ \mu^-)$ are, within errors, consistent with the recently reported result from the NA48 collaboration. The predicted interference between the {\it direct} and {\it indirect} CP--violation amplitudes in $K_L\rightarrow\pi^0 e^+ e^-$ is constructive, with an expected branching ratio (see Eq.~\rf{KLCPVP}) within reach of a dedicated experiment.
{
"timestamp": "2004-11-16T16:21:52",
"yymm": "0411",
"arxiv_id": "hep-ph/0411210",
"language": "en",
"url": "https://arxiv.org/abs/hep-ph/0411210"
}
\section{Introduction}
Our goal in this paper is to describe the coalescent processes that arise
when we consider the genealogy of a population that is affected by
repeated beneficial mutations. The starting point for this analysis
will be the continuous-time population model introduced by Moran (1958).
In this model, the population size is fixed at $2N$. Each individual
independently lives for a time that is exponentially distributed with mean $1$
and then is replaced by a new individual. The parent of the new
individual is chosen at random from the $2N$ individuals, including the
one being replaced. Note that we can think of the population as
consisting of $2N$ chromosomes of $N$ diploid individuals, so each member
of the population has just one parent.
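As an illustration (not part of the original analysis), the Moran model just described can be simulated in a few lines: each entry of the population vector records the label of that individual's original ancestor, and repeated replacement events drive the population to a single common ancestor.

```python
import random

def moran_step(pop, rng):
    """One replacement event in the Moran model: a uniformly chosen
    individual dies and is replaced by the offspring of a parent drawn
    uniformly from all 2N individuals (including the one replaced)."""
    dead = rng.randrange(len(pop))
    parent = rng.randrange(len(pop))
    pop[dead] = pop[parent]

rng = random.Random(1)
pop = list(range(10))  # 2N = 10, each individual initially its own ancestor
steps = 0
while len(set(pop)) > 1 and steps < 100_000:
    moran_step(pop, rng)
    steps += 1
# the population size stays fixed, and eventually every individual
# traces back to a single common ancestor
```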
Suppose we sample $n$ individuals at random from this population at time zero.
To describe the genealogy of the sample, we will define the ancestral process,
which will be a continuous-time Markov process $(\Psi_N(t), t \geq 0)$ whose
state space is the set ${\cal P}_n$ of partitions of $\{1, \dots, n\}$.
The ancestral process describes the coalescence of lineages as we follow
the ancestral lines of the sampled individuals backwards in time.
More precisely, $\Psi_N(0)$ is the partition of $\{1, \dots, n\}$
into $n$ singletons, and $\Psi_N(t)$ is the partition of $\{1, \dots, n\}$ such
that $i$ and $j$ are in the same block of $\Psi_N(t)$ if and only if the $i$th
and $j$th individuals in the sample have the same ancestor at time $-Nt$.
It is well-known that the process $(\Psi_N(t), t \geq 0)$
is Kingman's coalescent, a coalescent process introduced by
Kingman (1982). Kingman's coalescent is a ${\cal P}_n$-valued Markov process
that starts from the partition of $\{1, \ldots, n\}$ into singletons.
All transitions involve exactly two blocks of the partition merging together,
and each such transition occurs at rate one.
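A minimal sketch of Kingman's coalescent, assuming only the description above: with $k$ blocks, each of the $\binom{k}{2}$ pairs merges at rate one, so the total merge rate is $k(k-1)/2$ and the merging pair is chosen uniformly.

```python
import random

def kingman(n, rng):
    """Simulate Kingman's n-coalescent; returns merger times and the
    final partition (a single block containing 1, ..., n)."""
    blocks = [frozenset([i]) for i in range(1, n + 1)]
    times = [0.0]
    while len(blocks) > 1:
        k = len(blocks)
        # holding time: exponential with total rate C(k, 2)
        times.append(times[-1] + rng.expovariate(k * (k - 1) / 2))
        i, j = sorted(rng.sample(range(k), 2))  # uniform pair of blocks
        blocks[i] = blocks[i] | blocks[j]
        blocks.pop(j)
    return times, blocks[0]

times, root = kingman(5, random.Random(0))
# root is frozenset({1, 2, 3, 4, 5}); times is strictly increasing
```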
Within the last decade, progress has been made on describing the genealogy
of populations in models that allow for natural selection. Krone and
Neuhauser (1997) and Neuhauser and Krone (1997) studied a model in which each
individual can be of type $1$ or $2$. An individual of type $i$ produces
offspring at rate $\lambda_i$, with $\lambda_2 > \lambda_1$ so that type
$2$ is advantageous. Each new offspring replaces a randomly chosen
individual from the population, and is the same type as its parent with
probability $1 - u_N$ and the opposite type with probability $u_N$.
Under certain assumptions, they show that the genealogy of a sample from the
population can be described using what they call an ancestral selection graph.
Additional work of Donnelly and Kurtz (1999) and Barton, Etheridge, and Sturm
(2004) has incorporated recombination as well as selection into the model.
The ancestral selection graph arises in the limit as
$N \rightarrow \infty$ in the case of weak selection, where the selective
advantage $\lambda_2/\lambda_1 - 1$ and the mutation rates $u_N$ are $O(1/N)$.
Then, as $N \rightarrow \infty$ the fraction of individuals with the
favored allele can be approximated by a diffusion process. In this paper,
we consider strong selection, where the selective advantage is $O(1)$.
With strong selection, when a beneficial mutation occurs, there is a
positive probability that the beneficial allele will spread to the
entire population, an event known as a selective sweep.
At the end of a selective sweep, the entire population has the favorable
allele, and every member of the population will trace that favorable allele
back to the individual that had the beneficial mutation that caused the
selective sweep. However, the genealogy becomes more complicated when
we consider recombination. Diploid individuals usually do not inherit
an identical copy of one of their parent's chromosomes. Instead, the
inherited chromosome consists of pieces of each of a parent's two
chromosomes. Since a chromosome is coming
from two places, we need to consider the genealogy not of an entire
chromosome but of a particular site of interest on the chromosome.
When a selective sweep is caused by a beneficial mutation at a site other
than the site of interest, many individuals may trace their gene
at the site of interest back to the individual that had the beneficial
mutation at the beginning of the selective sweep, while others may trace
their gene at the site of interest to a different ancestor because
of recombination between the two sites on the chromosome. This effect
was first studied by Maynard Smith and Haigh (1974), who called it
the ``hitchhiking effect.''
As we will show, the typical duration of a selective sweep is only
$O(\log N)$. Therefore, when we speed up time by a factor of $N$ to define
the ancestral process, the selective sweep takes place almost instantaneously.
Consequently, if we sample $n$ individuals some time after a selective
sweep and define the ancestral process as before,
the ancestral process behaves like Kingman's coalescent until
we get back to the time of a selective sweep. At that time, many
lineages may coalesce because they get traced back to the individual
with the mutation that caused the selective sweep. This possibility was
observed by Gillespie (2000), who referred to the resulting coalescent
process as the ``pseudohitchhiking model.'' We will show that if selective
sweeps happen repeatedly throughout the history of a population
at times of a Poisson process, as proposed by Gillespie (2000), then
under suitable assumptions the ancestral processes will converge as $N \rightarrow \infty$
to a coalescent with multiple collisions,
which is a ${\cal P}_n$-valued Markov process in which many blocks
of the partition can merge at once into a single block. These coalescent
processes were introduced by Pitman (1999) and Sagitov (1999).
While coalescents with multiple collisions are the limiting coalescent processes as $N \rightarrow \infty$, an improved approximation for finite $N$ can be obtained using a coalescent with simultaneous multiple collisions. Coalescents with simultaneous multiple collisions, which were introduced by Schweinsberg (2000) and M{\"o}hle and Sagitov (2001), are coalescent processes in which many blocks can merge at once into a single block, and many such mergers can occur simultaneously. They provide a better approximation than coalescents with multiple collisions in this context because, as noted by Barton (1998), Durrett and Schweinsberg (2004a), and Schweinsberg and Durrett (2004),
multiple groups of lineages can coalesce at the time of a selective sweep.
Coalescents with multiple or simultaneous multiple collisions arise as
limits of ancestral processes in populations that occasionally have very
large families because ancestral lines that go back to an individual
with many offspring will coalesce at the same time. Coalescents
with multiple collisions arise when a single large family is possible in
a given generation, while coalescents with simultaneous multiple collisions arise when
one generation can contain many large families. For more details,
see Sagitov (1999, 2003), M{\"o}hle and Sagitov (2001), and Schweinsberg
(2003). The results in this paper provide a different
biological application of these coalescent processes.
The rest of this paper is organized as follows. In section 2, we describe our model for how the population evolves when there can be beneficial mutations. We state our main result, which is that the genealogy of this process converges to a coalescent with multiple collisions. In section 3, we present the improved approximation involving a coalescent with simultaneous multiple collisions. The next two sections are devoted to applications of these results. In section 4, we discuss how multiple mergers affect the number of segregating sites and pairwise differences in a sample of DNA. These quantities are used in Tajima's $D$-statistic (see Tajima (1989)), which can be used to detect departures from the standard Kingman's coalescent. In section 5 we discuss how multiple mergers affect the number of mutations that appear on just a single individual in the sample, which is relevant to the test proposed by Fu and Li (1993) for detecting departures from Kingman's coalescent. Our results suggest that Fu and Li's test should have less power to detect selective sweeps, at least in large samples, than Tajima's $D$-statistic. Finally, in section 6, we prove the convergence and approximation theorems stated in sections 2 and 3.
\section{Convergence to a coalescent with multiple collisions}
In this section, we give a precise description of our model of a population that experiences beneficial mutations, and we state our main convergence theorem. We describe what happens following a single beneficial mutation in subsection 2.1, and we consider recurrent beneficial mutations in subsection 2.2. Then in subsection 2.3, we state the convergence result and give some examples.
\subsection{The effect of a single beneficial mutation}
In this subsection we describe how the population
evolves after one of the $2N$ individuals experiences
a beneficial mutation. We will denote
the new favorable allele by $B$ and the other allele by $b$. We assume
the relative fitnesses of the two alleles are $1$ and $1-s$,
so the $B$ alleles will tend to survive longer. Immediately
after the mutation, one individual has the $B$ allele and $2N-1$ have the
$b$ allele. Kaplan, Hudson, and Langley (1989) and Stephan, Wiehe, and
Lenz (1992) proposed modeling the fraction of individuals $p(t)$ with the $B$
allele at time $t$ by using the logistic differential equation
$$\frac{dp}{dt} = sp(1-p).$$
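As a quick numerical sketch (not part of the original argument; the values of $N$ and $s$ are illustrative), the logistic equation has the closed-form solution $p(t) = p_0/(p_0 + (1-p_0)e^{-st})$, and driving $p$ from $1/2N$ up to $1 - 1/2N$ takes time $(2/s)\log(2N-1)$, consistent with the $O(\log N)$ sweep duration noted in the introduction:

```python
import math

def logistic_p(t, s, p0):
    """Solution of dp/dt = s p (1 - p) with p(0) = p0."""
    return p0 / (p0 + (1 - p0) * math.exp(-s * t))

N = 10_000                          # illustrative population size
s = 0.1                             # illustrative selective advantage
p0 = 1 / (2 * N)                    # sweep starts from a single B chromosome
# time for p to climb from 1/(2N) to 1 - 1/(2N): (2/s) * log(2N - 1)
duration = (2 / s) * math.log(2 * N - 1)
assert abs(logistic_p(duration, s, p0) - (1 - p0)) < 1e-9
```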
This approach has been popular in simulation studies. However,
Durrett and Schweinsberg (2004a) showed that this approximation is not
very accurate. Consequently, we will consider instead a modification to
the Moran model that was studied by Durrett and Schweinsberg (2004a) and
Schweinsberg and Durrett (2004).
At one site, each
chromosome has a $B$ or $b$ allele, but we will be interested in the genealogy
at another neutral site at which all alleles have the same fitness.
As in the Moran model, each individual survives for a time that
is exponentially distributed with mean $1$, and then a replacement is
proposed in which the parent of the proposed new individual is chosen
at random from the $2N$ members of the population. However, to account
for natural selection, whenever a replacement
of a $B$ chromosome with a $b$ chromosome is proposed, the change is
rejected with probability $s$. Also, to incorporate recombination into
the model, we say that when a new individual is born, it inherits its
alleles at both sites from the same parent with probability $1-r$.
However, with probability $r$, there is recombination between the two
sites, so the new individual inherits its allele at the neutral site
from its parent's other chromosome. Because we are treating an
individual's two chromosomes as two separate members of the population,
we model this by saying that, with probability $r$, the new individual
inherits the two alleles from two ancestors chosen independently
at random from the population.
Suppose the beneficial mutation appears on
one chromosome at time $0$, and let $X(t)$ be the number of chromosomes
with the favorable allele at time $t$. Let $\tau = \inf\{t: X(t) \in
\{0, 2N\} \}$ be the time at which either the $B$ or $b$ allele
disappears from the population. Suppose we take a random sample of
$n$ individuals from the population at time $\tau$. Let $\Theta$ be
the partition of $\{1, \dots, n\}$ such that $i$ and $j$ are in the same
block of $\Theta$ if and only if the $i$th and $j$th individuals in the
sample have the same ancestor at time zero when we follow the ancestral
lines associated with the neutral site of interest. The partition $\Theta$ then
describes how the beneficial mutation affects the genealogy of the sample.
We have the following result concerning the distribution of $\Theta$.
Here $Q_{p,n}$, for $p \in [0,1]$, is the distribution of a random partition $\Pi$ obtained as
follows. First, define a sequence of independent random variables
$(\xi_i)_{i = 1}^n$ such that $P(\xi_i = 1) = p$ and
$P(\xi_i = 0) = 1-p$ for $i = 1, \dots, n$. Then define $\Pi$ such that one block
of $\Pi$ consists of $\{i \leq n: \xi_i = 1\}$ and the remaining blocks
of $\Pi$ are singletons.
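The definition of $Q_{p,n}$ translates directly into a sampler (a minimal sketch; the function name is ours, not from the paper):

```python
import random

def sample_Q(p, n, rng=random):
    """Draw a partition of {1,...,n} from Q_{p,n}: flip an independent
    p-coin xi_i for each i; the i with xi_i = 1 form one block and the
    remaining i stay singletons."""
    heads = frozenset(i for i in range(1, n + 1) if rng.random() < p)
    blocks = [heads] if heads else []
    blocks += [frozenset([i]) for i in range(1, n + 1) if i not in heads]
    return blocks

random.seed(0)
part = sample_Q(0.5, 10)
# the blocks always partition {1,...,10}
assert sorted(i for b in part for i in b) == list(range(1, 11))
```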
\begin{Prop}
Fix $n \in \N$, and fix $s \in (0,1)$. Assume there is a constant
$C'$ such that $r \leq C' /(\log N)$ for all $N$. Let $\alpha =
r \log(2N)/s$, and let $p = e^{-\alpha}$.
\begin{enumerate}
\item There exists a positive constant $C$, depending continuously on $s$
and $\alpha$ but not depending on $N$, such that $|P(\Theta = \pi|X(\tau)
= 2N) - Q_{p,n}(\pi)| \leq C/(\log N)$ for all $\pi \in {\cal P}_n$.
\item Let $\kappa_0$ be the partition of $\{1, \dots, n\}$ into singletons.
There exists a constant $C$, depending continuously on $s$ and $\alpha$ but
not depending on $N$, such that $P(\Theta \neq \kappa_0 \mbox{ and }X(\tau) = 0)
\leq CN^{-1/2}$.
\end{enumerate}
\label{sweepprop1}
\end{Prop}
Note that in this proposition, the selective advantage $s$ is assumed to
be fixed, but the recombination probability $r$ depends on $N$. Part 1
of the proposition, which is a restatement of Theorem 1.1 of Schweinsberg and
Durrett (2004), implies that as $N \rightarrow \infty$, the distribution of
$\Theta$, conditional on the event that a selective sweep occurs,
converges to $Q_{p,n}$, where $p$ represents the approximate
fraction of lineages that coalesce at the time of the selective sweep.
Part 2 of the proposition, which we prove in Section 6,
shows that lineages typically do not coalesce when the favorable $B$ allele dies out. The probability that a selective sweep occurs,
and therefore Part 1 of the proposition applies, is
$s/(1 - (1-s)^{2N})$ (see Durrett (2002) or Schweinsberg and Durrett (2004)).
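The quoted sweep probability can be checked against the standard fixation formula for a birth-death chain (a sketch under the assumption, true in this Moran model, that the down/up rate ratio from any state is $1-s$; the values of $N$ and $s$ are illustrative):

```python
# Starting from one B chromosome, the B-count is a birth-death chain with
# q_i / p_i = 1 - s, so P(fixation) = 1 / sum_{j=0}^{2N-1} (1-s)^j, which
# the geometric series collapses to s / (1 - (1-s)^(2N)).
N, s = 50, 0.1
fix = 1 / sum((1 - s) ** j for j in range(2 * N))
assert abs(fix - s / (1 - (1 - s) ** (2 * N))) < 1e-15
```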
\subsection{A model with recurrent beneficial mutations}
To model a population in which beneficial mutations can occur repeatedly,
we assume that beneficial mutations at different points
on the chromosome occur at times of a Poisson process.
The selective advantage that these mutations provide and the rate of
recombination between the site of interest and the site of the mutation
will be random. When there is a beneficial mutation in the population,
the population will evolve as described in the previous subsection.
Between these times, the population will follow the standard Moran model.
To be more precise, we will consider the chromosome to be the line
segment $[-L, L]$. Our goal will be to describe the genealogy of the
site $0$. For each $N$, the beneficial mutations will be governed by
a Poisson process $K_N$ on $\R \times [-L, L] \times [0,1]$.
If $(t, x, s)$ is a point in $K_N$, then at time $t$, a mutation, which
provides a selective advantage of $s$, will appear at location $x$
on one of the $2N$ chromosomes. The intensity measure of $K_N$ will be
$\lambda \times \mu_N$, where $\lambda$ denotes Lebesgue
measure on $\R$ and $\mu_N$ is a finite measure on
$[-L, L] \times [0,1]$ which governs the rates of beneficial mutations.
The recombination probabilities will be determined by a function
$r_N: [-L, L] \rightarrow [0,1]$. We assume that $r_N(0) = 0$
and $r_N$ is nonincreasing on $[-L, 0]$ and nondecreasing on $[0, L]$.
Beginning at time $t$, the population will evolve according
to the model described in the previous subsection of a population with
a beneficial allele having selective advantage $s$ and recombination
probability $r_N(x)$. We let $\tau(t)$ denote the first time
that the beneficial mutation that appears at time $t$ either disappears from
the population or is present on all $2N$ chromosomes.
Let ${\cal T}_N = \{t: (t, x, s) \mbox{ is a point in }K_N \mbox{ for some }
x \mbox{ and }s\}$ be the times at which beneficial mutations are
proposed.
Note, however, that we cannot define the evolution of the population
as explained above
if, for some $t_1, t_2 \in {\cal T}_N$, the intervals $[t_1, \tau(t_1)]$
and $[t_2, \tau(t_2)]$ overlap. There has been some work in the biology
literature on the question of how a selective sweep is affected by another
selective sweep happening at the same time (see, for example, Barton (1995),
Gerrish and Lenski (1998), and Kim and Stephan (2003)). However, as we
will show, in our model this overlap occurs too infrequently to have any
effect on our results, so we avoid the issue of defining the population
during periods of overlap by allowing a new beneficial
mutation to occur only when there is no other beneficial mutation currently
in the population. That is, beneficial mutations will occur at the
times in ${\cal T}_N' = \{t \in {\cal T}_N: \tau(u) < t \mbox{ for all }
u \in {\cal T}_N \mbox{ such that }u < t\}$. Let $${\cal I}_N =
\bigcup_{t \in {\cal T}_N'} [t, \tau(t)].$$ A beneficial mutation
will be present in the population at time $u$ if and only if
$u \in {\cal I}_N$. For the intervals in ${\cal I}_N$, the evolution of
the population was defined in subsection 2.1. For the times in $\R \setminus
{\cal I}_N$, we will say that the population evolves according to the
standard Moran model so that the evolution of the population is
well-defined for all of $\R$.
To define the ancestral process $\Psi_N = (\Psi_N(t), t \geq 0)$, we
sample $n$ of the $2N$ individuals at random from the population at time
zero. We then define $\Psi_N(t)$ to be the partition of $\{1, \dots, n\}$
such that $i$ and $j$ are in the same block of $\Psi_N(t)$ if and only if
the $i$th and $j$th individuals in the sample got
their allele at location $0$ on the chromosome from the same ancestor at
time $-Nt$. Note that we are again speeding up
time by a factor of $N$ so that, if
there are no beneficial mutations (i.e. if $\mu_N$ is the zero measure),
the ancestral process $\Psi_N = (\Psi_N(t), t \geq 0)$ is Kingman's coalescent.
When we do have beneficial mutations, the ancestral processes will converge
as $N \rightarrow \infty$, under suitable conditions, to a coalescent
with multiple collisions.
\subsection{The main convergence theorem and examples}
Pitman (1999) introduced coalescents with multiple collisions, in which
many blocks of the partition can merge into one. These coalescent processes
are in one-to-one correspondence with finite measures $\Lambda$ on $[0,1]$,
and the coalescent process associated with a particular measure $\Lambda$
is called the $\Lambda$-coalescent. We will consider here only
${\cal P}_n$-valued coalescents because they are what we will need to
approximate the genealogy of a sample of size $n$. However, the
constructions can be extended, using Kolmogorov's Extension Theorem, to yield
coalescent processes that take their values in the set of partitions of $\N = \{1, 2, \dots\}$.
Suppose $(\Pi_n(t), t \geq 0)$
is the ${\cal P}_n$-valued $\Lambda$-coalescent. Then $\Pi_n(0)$ is the
partition of $\{1, \dots, n\}$ into singletons. If $\Pi_n(t)$ has $b$ blocks,
then every possible transition involves merging $k$ of the blocks into one,
where $2 \leq k \leq b$. Denoting the rate of this transition by
$\lambda_{b,k}$, we have
\begin{equation}
\lambda_{b,k} = \int_0^1 x^{k-2} (1-x)^{b-k} \: \Lambda(dx).
\label{cmcmain}
\end{equation}
If $\Lambda = \delta_0$, where $\delta_0$ denotes a unit mass at zero,
then every transition that involves
two blocks merging into one happens at rate one, and no other transitions
are possible. Thus, the $\delta_0$-coalescent is Kingman's coalescent.
The theorem below states that when we do have beneficial mutations, the
ancestral processes converge as $N \rightarrow \infty$, under suitable
conditions, to a coalescent with multiple collisions. The multiple mergers
happen at times of selective sweeps.
Note that the convergence is in the sense of
finite-dimensional distributions. Convergence in the stronger Skorohod
topology does not hold because, during the short time intervals
when selective sweeps are taking place, $\Psi_N$ may undergo multiple transitions.
\begin{Theo}
Let $\mu$ be a finite measure on $[-L, L] \times [0,1]$, and let
$r:[-L,L] \rightarrow [0,\infty)$ be a bounded continuous function such that
$r(0) = 0$ and $r$ is
nonincreasing on $[-L, 0]$ and nondecreasing on $[0, L]$. Suppose that, as
$N \rightarrow \infty$, the measures $N \mu_N$ converge weakly to $\mu$ and
the functions $(\log 2N) r_N$ converge uniformly to $r$. Let $\eta$ be the
measure on $(0,1]$ such that
$$\eta([y, 1]) = \int_{-L}^L \int_0^1 s 1_{\{e^{-r(x)/s}
\geq y\}} \: \mu(dx \times ds)$$ for all $y \in (0,1]$. Let $\Lambda$
be the measure on $[0,1]$ defined by $\Lambda = \delta_0 + \Lambda_0$,
where $\Lambda_0(dx) = x^2 \eta(dx)$. Let $\Pi = (\Pi(t), t \geq 0)$ be
the ${\cal P}_n$-valued $\Lambda$-coalescent. Then, as $N \rightarrow \infty$,
the finite-dimensional distributions of $\Psi_N$ converge to the
finite-dimensional distributions of $\Pi$.
\label{mainth}
\end{Theo}
Note that in Theorem \ref{mainth}, the recombination
probability is $O(1/(\log N))$. The function $r$ is assumed
to be monotone on $[-L, 0]$ and $[0, L]$ because the greater
the distance between $0$ and the site of the mutation, the
greater the likelihood of recombination between the two
sites. Also, the rate of beneficial mutations is
$O(1/N)$, so that the multiple mergers caused by
selective sweeps and the ordinary mergers of two lineages at a time are
happening on the same time scale. If the rate of selective sweeps were
$o(1/N)$, then the multiple mergers would disappear in the limit. If selective
sweeps occurred on a faster time scale than $O(1/N)$, then the
multiple mergers would dominate for large $N$ and the limiting coalescent
would have no $\delta_0$ component. Gillespie (2000) considers this possibility and
proposes that it may explain why observed genetic variation does not appear
to be as sensitive to population size as Kingman's coalescent model predicts.
However, in this paper we focus on the case in which both types of
mergers happen on the same time scale.
We now derive the limiting coalescent with multiple collisions in
two natural examples.
\begin{Exm}
{\em Consider the case in which we are concerned only with
mutations at a single site, all of which have the same selective
advantage. Fix $\alpha > 0$, and let $\mu_N = \alpha N^{-1} \delta_{(z,s)}$
for some $s \in (0,1]$ and $z \in [-L, L]$. This
means that beneficial mutations that provide selective advantage $s$
appear on the chromosome at site $z$ at times of a Poisson process.
The measures $N \mu_N$ converge to $\mu = \alpha \delta_{(z,s)}$.
Assume that the recombination functions $r_N$ are defined such that
the sequence $(\log 2N)r_N$ converges uniformly to $r$, and let
$\beta = r(z)$. Then, for all $y \in (0,1]$, we have
$$\eta([y, 1]) = \int_{-L}^{L} \int_0^1 u 1_{\{e^{-r(x)/u} \geq y\}} \:
\mu(dx \times du) = s \alpha 1_{\{e^{-\beta/s} \geq y\}}.$$ Therefore, $\eta$
consists of a mass $s \alpha$ at $p = e^{-\beta/s}$. It follows from
Theorem \ref{mainth} that the limiting coalescent process is the
$\Lambda$-coalescent, where $\Lambda = \delta_0 + s \alpha p^2 \delta_p$.
Thus, in addition to the mergers involving just two blocks, we have
coalescence events at times of a Poisson process in which we flip
$p$-coins for each lineage and merge the lineages whose coins come up heads.}
\label{exm1}
\end{Exm}
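For a purely atomic $\Lambda$, the integral in (\ref{cmcmain}) is a finite sum, so the rates of the limiting coalescent in this example are easy to compute explicitly (a minimal sketch with illustrative values of $s$, $\alpha$, and $p$):

```python
def lam(b, k, atoms):
    """lambda_{b,k} for a purely atomic Lambda given as {x: weight}:
    sum of weight * x^(k-2) * (1-x)^(b-k) over the atoms.  Python's
    0.0 ** 0 == 1.0, so the atom at 0 contributes only when k == 2."""
    return sum(w * x ** (k - 2) * (1 - x) ** (b - k)
               for x, w in atoms.items())

s, alpha, p = 0.1, 1.0, 0.8
Lam = {0.0: 1.0, p: s * alpha * p ** 2}   # Lambda = delta_0 + s*alpha*p^2*delta_p
# pairwise mergers: Kingman rate 1 plus the sweep atom's contribution
assert abs(lam(5, 2, Lam) - (1 + s * alpha * p ** 2 * (1 - p) ** 3)) < 1e-12
# mergers of k > 2 blocks come only from the atom at p
assert abs(lam(5, 4, Lam) - s * alpha * p ** 4 * (1 - p)) < 1e-12
```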
\begin{Exm}
{\em It is also natural to consider the case in which mutations
occur uniformly along the chromosome. For simplicity, we
will assume that the selective advantage $s$ is fixed. Let $\lambda$
denote Lebesgue measure on $[-L, L]$. Suppose $\mu_N = N^{-1}(\alpha \lambda
\times \delta_s)$, so the measures $N \mu_N$ converge to
$\mu = \alpha \lambda \times \delta_s$. To model recombination
occurring uniformly along the chromosome, we assume that the functions
$(\log 2N)r_N$ converge uniformly to the function $r(x) = \beta|x|$, so the
probability of recombination is proportional to the distance between the
two sites on the chromosome. For all $y \in (0,1]$, we have
$$\eta([y, 1]) = \alpha s \int_{-L}^L 1_{\{e^{-r(x)/s} \geq y\}} \: dx
= \alpha s \int_{-L}^L 1_{\{e^{-\beta|x|/s} \geq y\}} \: dx.$$ Since
$e^{-\beta|x|/s} \geq y$ if and only if $|x| \leq -(s/\beta)(\log y)$, we have
$$\eta([y,1]) = \min \bigg\{\frac{-2 \alpha s^2 \log y}{\beta}, \: 2 \alpha s L
\bigg\}.$$ Therefore, for $y \geq e^{-\beta L/s}$, we have
$$\frac{d}{dy} \eta([y,1]) = - \frac{2 \alpha s^2}{\beta y}.$$
Let $c = 2 \alpha s^2/\beta$. It follows that $\eta$ has a density given by
$g_L(y) = c/y$ for $e^{-\beta L/s} \leq y \leq 1$ and $g_L(y) = 0$ otherwise.
By Theorem \ref{mainth}, the finite-dimensional distributions of the
ancestral processes $\Psi_N$ converge to those of the $\Lambda$-coalescent,
where $\Lambda = \delta_0 + \Lambda_0$ and $\Lambda_0$ has density
$h_L(y) = y^2 g_L(y)$. Note that as $L \rightarrow \infty$, the
density $h_L(y)$ converges to $h(y)$, where $h(y) = c y$
for $y \in [0,1]$ and $h(y) = 0$ otherwise. We can think of this as the
limiting coalescent for an infinitely long chromosome.}
\label{exm2}
\end{Exm}
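The closed form for $\eta([y,1])$ in this example can be sanity-checked numerically (a Riemann-sum sketch with illustrative values of $\alpha$, $s$, $\beta$, and $L$):

```python
import math

alpha, s, beta, L = 1.0, 0.1, 2.0, 3.0

def eta_direct(y, m=200_000):
    """Riemann sum of alpha * s * 1{e^(-beta|x|/s) >= y} over [-L, L]."""
    dx = 2 * L / m
    return alpha * s * sum(
        dx for i in range(m)
        if math.exp(-beta * abs(-L + (i + 0.5) * dx) / s) >= y)

def eta_formula(y):
    """min{-2*alpha*s^2*log(y)/beta, 2*alpha*s*L}, as derived in the text."""
    return min(-2 * alpha * s ** 2 * math.log(y) / beta, 2 * alpha * s * L)

for y in (0.9, 0.5, 0.1):
    assert abs(eta_direct(y) - eta_formula(y)) < 1e-3
```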
\begin{Exm}
{\em Finally, we show that any $\Lambda$-coalescent with a unit mass at zero can
arise as a limit of ancestral processes in this model. We first show
how to obtain coalescents of the form $\Lambda = \delta_0 + \Lambda_0$,
where $\Lambda_0$ is a finite measure on $[\epsilon, 1]$
and $0 < \epsilon < 1$. Note that
in Theorem \ref{mainth}, we have $\Lambda_0(dx) = x^2 \eta(dx)$,
so it suffices to show that $\mu$ and $r$ can be chosen to make $\eta$
an arbitrary finite measure on $[\epsilon, 1]$. Let $G: [\epsilon, 1]
\rightarrow [0, \infty)$ be any nonincreasing left-continuous
function. We will choose
$\mu$ and $r$ so that $\eta([y, 1]) = G(y)$ for $\epsilon \leq y \leq 1$
and $\eta([0, \epsilon)) = 0$.
Let $L = -\frac{1}{2} \log \epsilon$, and let $\nu$ be the measure on
$[-L, L]$ such that $\nu([-L,0)) = 0$ and, for $\epsilon \leq y \leq 1$,
$\nu([0, -\frac{1}{2} \log y]) = 2G(y)$. Suppose $r(x) = |x|$ and
$\mu = \nu \times \delta_{1/2}$. Then, for $\epsilon \leq y \leq 1$,
\begin{align}
\eta([y,1]) &= \int_{-L}^L \int_0^1 s 1_{\{e^{-r(x)/s} \geq y \}} \:
\mu(dx \times ds) \nonumber \\
&= \frac{1}{2} \int_0^L 1_{\{e^{-2x} \geq y\}} \: \nu(dx) =
\frac{1}{2} \nu([0, -(\log y)/2]) = G(y), \nonumber
\end{align}
as claimed. Thus, we can get the $\Lambda$-coalescent in the limit
if $\Lambda_0((0, \epsilon)) = 0$.
We can obtain an arbitrary $\Lambda$-coalescent by then taking a limit as
$L \rightarrow \infty$ (or $\epsilon \downarrow 0$) as in Example \ref{exm2}.}
\end{Exm}
\section{Approximation by a coalescent with simultaneous multiple collisions}
A key ingredient in the proof of Theorem \ref{mainth}
is part 1 of Proposition \ref{sweepprop1}. Part 1 of
Proposition \ref{sweepprop1} says that, up to an error of $O(1/(\log N))$,
we can approximate the effect of a selective sweep on the genealogy by
flipping a $p$-coin for each lineage and merging the lineages whose
coins come up heads. However, Durrett and Schweinsberg (2004a) observed in
simulations that for $N$ between 10,000 and 1,000,000, the approximation in
Proposition \ref{sweepprop1} works poorly, largely because it is possible for
multiple groups of lineages to coalesce at the time of a selective sweep.
By taking this into account, they were able to give a more complicated
approximation that works much better in simulations and has an error
of only $O(1/(\log N)^2)$.
Before stating this result, we review Kingman's (1978) paintbox construction
of exchangeable random partitions of $\{1, \dots, n\}$. Let
$$\Delta = \big\{(x_1, x_2, \dots): x_1 \geq x_2 \geq \dots \geq 0,
\sum_{i=1}^{\infty} x_i \leq 1 \big\},$$
and let $G$ be a probability measure on $\Delta$. We define a
$G$-partition $\Pi$ of $\{1, \dots, n\}$ as follows.
Let $Y = (Y_1, Y_2, \dots)$ be a $\Delta$-valued random variable with
distribution $G$. Define a sequence $(Z_i)_{i=1}^n$ to be
conditionally i.i.d. given $Y$ such that $P(Z_i = j|Y) = Y_j$ for all
positive integers $j$ and $P(Z_i = 0|Y) = 1 - \sum_{j=1}^{\infty} Y_j$.
Then define $\Pi$ to be the partition such that
distinct integers $i$ and $j$ are in the same
block if and only if $Z_i = Z_j \geq 1$. We denote the
distribution of a $G$-partition of $\{1, \dots, n\}$ by $Q_{G,n}$.
Note that if $G$ is a unit mass at $(p, 0, 0, \dots)$, then $Q_{G,n} = Q_{p,n}$.
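The paintbox construction is simple to realize in code for $G = \delta_x$, i.e. conditionally on $Y = x$ (a minimal sketch; function names are ours):

```python
import random

def paintbox_partition(x, n, rng=random):
    """Kingman's paintbox for a fixed x = (x_1, x_2, ...) in Delta: each
    i in {1,...,n} independently picks color j with probability x_j, or
    color 0 (its own singleton) with the leftover mass 1 - sum(x); equal
    nonzero colors are grouped into blocks."""
    colored, singletons = {}, []
    for i in range(1, n + 1):
        u, acc, color = rng.random(), 0.0, 0
        for j, xj in enumerate(x, start=1):
            acc += xj
            if u < acc:
                color = j
                break
        if color:
            colored.setdefault(color, []).append(i)
        else:
            singletons.append([i])
    return ([frozenset(b) for b in colored.values()]
            + [frozenset(b) for b in singletons])

random.seed(1)
part = paintbox_partition((0.6, 0.3), 12)
assert sorted(i for b in part for i in b) == list(range(1, 13))
```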
Next, we define a family of distributions $R(\theta, M)$ on $\Delta$
by using a stick-breaking construction.
Let $\theta \in [0,1]$, and let $M$ be a positive integer. Let
$(W_k)_{k=2}^M$ be independent random variables such that $W_k$ has a
Beta($1, k-1$) distribution. Let $(\zeta_k)_{k=2}^M$
be a sequence of independent random variables such that $P(\zeta_k = 1) =
\theta$ and $P(\zeta_k = 0) = 1 - \theta$ for all $k$. For
$k = 2, 3, \dots, M$, let $V_k = \zeta_k W_k$. To perform the
stick breaking, we first break off a fraction $W_M$ of
the unit interval, then break off a fraction $W_{M-1}$ of what
is left over, and so on until we get down to $W_2$. For
$k = 2, \dots, M$, the length of the $k$th fragment is
${\tilde Y}_k = V_k \prod_{j=k+1}^M (1 - V_j)$, and the
length of the first fragment is ${\tilde Y}_1 = \prod_{j=2}^M (1 - V_j)$.
Note that $\sum_{k=1}^M {\tilde Y}_k = 1$. Let
$Y = (Y_1, Y_2, \dots, Y_M, 0, 0, \dots ) \in \Delta$ be the sequence
obtained by ranking the interval lengths ${\tilde Y}_1, \dots, {\tilde Y}_M$ in
decreasing order and then appending an infinite sequence of zeros.
Finally, let $R(\theta, M)$ be the distribution of $Y$.
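The stick-breaking construction translates line by line into a sampler (a minimal sketch; the function name is ours, not from the paper):

```python
import random

def sample_R(theta, M, rng=random):
    """Draw from R(theta, M): V_k = zeta_k * W_k with W_k ~ Beta(1, k-1)
    and zeta_k a theta-coin, k = 2..M; break off fraction V_M of the unit
    stick, then V_{M-1} of what is left, ..., down to V_2, and rank the
    resulting fragment lengths in decreasing order."""
    V = {k: (rng.betavariate(1, k - 1) if rng.random() < theta else 0.0)
         for k in range(2, M + 1)}
    frags, remaining = [], 1.0
    for k in range(M, 1, -1):
        frags.append(remaining * V[k])   # tilde Y_k = V_k * prod_{j>k}(1 - V_j)
        remaining *= 1 - V[k]
    frags.append(remaining)              # tilde Y_1 = prod_{j=2}^M (1 - V_j)
    return sorted(frags, reverse=True)

random.seed(2)
y = sample_R(0.3, 20)
assert abs(sum(y) - 1.0) < 1e-12         # the fragments tile the unit stick
assert all(a >= b for a, b in zip(y, y[1:]))
```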
These distributions $R(\theta, M)$ were studied in Durrett and Schweinsberg
(2004b), who used them to approximate the distribution of family sizes
in a Yule process with infinitely many types. They arise in the
proposition below because, after a beneficial mutation, the number of
lineages with the $B$ allele that do not eventually die out can be approximated
by a Yule process. The result below is Theorem 1.2 of Schweinsberg and Durrett (2004).
\begin{Prop}
Fix $n \in \N$, and fix $s \in (0,1)$. Assume there is a constant
$C'$ such that $r \leq C'/(\log N)$ for all $N$. Let $\alpha =
r \log(2N)/s$, and let $p = e^{-\alpha}$. Then
there exists a positive constant $C$, depending continuously on $s$
and $\alpha$ but not depending on $N$, such that $$|P(\Theta = \pi|X(\tau)
= 2N) - Q_{R(r/s, \lfloor 2Ns \rfloor), n}(\pi)| \leq C/(\log N)^2$$
for all $\pi \in {\cal P}_n$, where $\lfloor m \rfloor$
denotes the greatest integer less than or equal to $m$.
\label{sweepprop2}
\end{Prop}
Because the improved approximation allows many groups of lineages to coalesce at the time of a selective sweep, this result suggests that, for finite $N$, a coalescent with simultaneous multiple collisions should provide a better approximation of the ancestral process than a coalescent with multiple collisions. Coalescents with
simultaneous multiple collisions, which were studied by M{\"o}hle and
Sagitov (2001), Schweinsberg (2000), and Bertoin and Le Gall (2003),
have the property that many blocks can merge at once into
a single block, and many such mergers can occur simultaneously.
Coalescents with simultaneous multiple collisions are in one-to-one
correspondence with finite measures $\Xi$ on $\Delta$.
Suppose $\pi$ is a partition of $\{1, \dots, n\}$ whose blocks are
$B_1, \dots, B_m$, and suppose $\pi'$ is a partition of $\{1, \dots, n'\}$
with $n' \geq m$ whose blocks are $B_1', \dots, B_k'$.
Following Bertoin and Le Gall (2003), define the coagulation of $\pi$ by $\pi'$
to be the partition whose blocks are given by $\bigcup_{j \in B_i'} B_j$
for $i = 1, \dots, k$. Suppose
$(\Pi_n(t), t \geq 0)$ is the ${\cal P}_n$-valued $\Xi$-coalescent.
If there are $b$ blocks at time $t-$ and a merger occurs at time $t$,
then there exists a unique partition $\pi \in {\cal P}_b$ such that
$\Pi_n(t)$ is the coagulation of $\Pi_n(t-)$ by $\pi$. If $\pi$ has
$r+s$ blocks, $s$ of which are singletons and the other $r$ of which have
sizes $k_1, \dots, k_r \geq 2$, where $b = k_1 + \dots + k_r + s$, then
the rate of this transition is
\begin{equation}
\lambda_{b;k_1, \dots, k_r;s} = \int_{\Delta}
Q_{\delta_x, b}(\pi) \bigg( \sum_{j=1}^{\infty} x_j^2 \bigg)^{-1} \: \Xi_0(dx)
+ a 1_{\{r = 1, k_1 = 2\}},
\label{csmcmain}
\end{equation}
where $\delta_x$ denotes a unit mass at $x = (x_1, x_2, \dots) \in \Delta$ and
$\Xi$ has been written as $a \delta_{(0, 0, \dots)} + \Xi_0$ with
$\Xi_0(\{(0, 0, \dots )\}) = 0$.
Coalescents with multiple collisions are a special case in which $\Xi$ is
concentrated on points in which only the first coordinate is nonzero.
Coalescents with multiple and simultaneous multiple collisions can be
constructed from Poisson point processes (see Pitman (1999) and
Schweinsberg (2000)). Consider a Poisson process on $(0, \infty) \times {\cal P}_n$ whose
intensity measure is the product of Lebesgue measure on $(0, \infty)$
and a measure $L$ on ${\cal P}_n$ defined as follows. Let
$S \subset {\cal P}_n$ be the set of all partitions consisting of one block
of size $2$ and $n-2$ singletons. If $\pi \in {\cal P}_n$, let $L(\pi) = 0$
if $\pi$ is the partition consisting of $n$ singletons. Otherwise, let
\begin{equation}
L(\pi) = \int_{\Delta} Q_{\delta_x, n}(\pi) \bigg( \sum_{j=1}^{\infty} x_j^2
\bigg)^{-1} \: \Xi_0(dx) + a 1_{\{\pi \in S\}}.
\label{csmcpois}
\end{equation}
Since $L$ is a finite
measure, it is easy to define $\Pi_n = (\Pi_n(t), t \geq 0)$ such that
$\Pi_n(0)$ is the partition consisting of $n$ singletons and, at the times
of points $(t,\pi)$ of the Poisson point process, the partition
$\Pi_n(t)$ is the coagulation of $\Pi_n(t-)$ by $\pi$, and these are
the only jump times of $\Pi_n$. This coalescent
process is the ${\cal P}_n$-valued $\Xi$-coalescent. The construction of
the $\Lambda$-coalescent is the same, except that if $\pi$ has at least
one block that is not a singleton, we define
\begin{equation}
L(\pi) = \int_0^1
Q_{p,n}(\pi) p^{-2} \: \Lambda_0(dp) + a 1_{\{\pi \in S\}},
\label{cmcpois}
\end{equation}
where $\Lambda = \delta_0 + \Lambda_0$ and $\Lambda_0(\{0\}) = 0$.
Under some additional assumptions, most significantly
restricting the selective advantage resulting from each beneficial mutation
to be at least $\epsilon > 0$, we are able to obtain bounds on the
difference between the finite-dimensional distributions of $\Psi_N$ and
the finite-dimensional distributions of the approximating coalescent
process. Proposition \ref{coalprop} below shows that indeed the
coalescent with simultaneous multiple collisions gives a more accurate
approximation.
\begin{Prop}
Let $\mu$ be a finite measure on $[-L, L] \times [\epsilon, 1]$,
where $\epsilon > 0$, and let
$r:[-L,L] \rightarrow [0,1]$ be a function such that $r(0) = 0$ and
$r$ is nonincreasing on $[-L, 0]$ and nondecreasing on $[0, L]$.
Suppose that, for all $N$, we have $\mu_N = N^{-1} \mu$. Also, assume
that $r_N(x) = r(x)/\log(2N)$ for all $N$ and $x$. Fix times
$0 < u_1 < \dots < u_m$, and let $\pi_1, \dots, \pi_m \in {\cal P}_n$.
\begin{enumerate}
\item Define $\eta$ and $\Lambda$ as in Theorem \ref{mainth}.
Let $\Pi = (\Pi(t), t \geq 0)$ be the ${\cal P}_n$-valued $\Lambda$-coalescent.
Then there exists a constant $C$
such that $$|P(\Psi_N(u_i) = \pi_i \mbox{ for }i = 1, \dots, m) -
P(\Pi(u_i) = \pi_i \mbox{ for }i = 1, \dots, m)| \leq \frac{C}{\log N}.$$
\item Let $G_N$ be the measure on $\Delta$ such that for all measurable
subsets $A \subset \Delta$, we have
$$G_N(A) = \int_{-L}^L \int_0^1 s R(r_N(x)/s, \lfloor 2Ns \rfloor)(A)
\: \mu(dx \times ds).$$ Let $\Xi_N$ be the measure on $\Delta$ given by
$\Xi_N = \delta_{(0, 0, \dots)} + \Xi_{N,0}$, where
$\Xi_{N,0}$ is defined by $\Xi_{N,0}(dx) =
(\sum_{j=1}^{\infty} x_j^2) G_N(dx)$. Let $\Upsilon_N =
(\Upsilon_N(t), t \geq 0)$ be the ${\cal P}_n$-valued $\Xi_N$-coalescent.
Then there exists a constant $C$
such that $$|P(\Psi_N(u_i) = \pi_i \mbox{ for }i = 1, \dots, m) -
P(\Upsilon_N(u_i) = \pi_i \mbox{ for }i = 1, \dots, m)|
\leq \frac{C}{(\log N)^2}.$$
\end{enumerate}
\label{coalprop}
\end{Prop}
\section{Segregating sites and pairwise differences}
One motivation for modeling a population that experiences recurrent
selective sweeps by coalescents with multiple
or simultaneous multiple collisions is that these coalescent models can
provide insight into tests used to detect selective sweeps.
In view of part 2 of Proposition \ref{coalprop} and the simulation
results in Durrett and Schweinsberg (2004a), there should be little loss
of accuracy in studying the behavior of these tests under the assumption
that the genealogy of a sample follows a coalescent with
simultaneous multiple collisions.
One commonly used test is based on Tajima's $D$-statistic
(see Tajima (1989)). Given a sample of $n$ strands of DNA from the same
region on a chromosome, let $\Delta_{ij}$ be the number of sites at which
the $i$th and $j$th segments differ, and let
$\Delta_n = \binom{n}{2}^{-1} \sum_{i < j}
\Delta_{ij}$ be the average number of pairwise differences over the
$\binom{n}{2}$ possible pairs. Let $S_n$ be the number of segregating
sites in the sample, that is, the number of sites at which at least one pair
of segments differs. Tajima's $D$-statistic compares
the statistics $\Delta_n$ and $S_n$.
Suppose the ancestral history of a sample of $N$ individuals is given
by a coalescent with multiple or simultaneous multiple collisions.
Let $\lambda_b$ be the total rate of all mergers when the coalescent
has $b$ blocks. Assume that, on the time scale of the coalescent
process, mutations happen at rate $\theta/2$. Any mutation on the
$i$th or $j$th lineage before these lineages coalesce will cause the
$i$th and $j$th segments to differ at some site. Since the expected
time for these lineages to coalesce is $\lambda_2^{-1}$, we have
$E[\Delta_{ij}] = \theta \lambda_2^{-1}$. Therefore
\begin{equation}
E[\Delta_n] = \theta \lambda_2^{-1}.
\label{pairdiff}
\end{equation}
Note that $\lambda_2 = \Lambda([0,1])$ for coalescents with multiple
collisions and $\lambda_2 = \Xi(\Delta)$ for coalescents with simultaneous
multiple collisions.
To calculate the expected number of segregating sites, we note that any
mutation in the ancestral tree before all $n$ lineages have coalesced
into one adds to the number of
segregating sites. If at some time the coalescent has exactly $b$
blocks, then the expected length of time for which it has $b$ blocks is
$\lambda_b^{-1}$. Let $G_n(b)$ be the probability that
the coalescent, starting with $n$ blocks, will have exactly $b$ blocks
at some time. Then
\begin{equation}
E[S_n] = \frac{\theta}{2} \sum_{b=2}^n b \lambda_b^{-1} G_n(b).
\label{segsites}
\end{equation}
Although we do not have a closed-form expression for $G_n(b)$,
these quantities can be calculated recursively because
(\ref{cmcmain}) and (\ref{csmcmain}) allow us to express
$G_n(b)$ in terms of $G_k(b)$ for $k < n$. As a result, it would
not be difficult to evaluate the expression in (\ref{segsites}) numerically.
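Such a numerical evaluation can be sketched as follows. The code below is an illustration of ours, not part of the paper: it implements the block-counting jump chain for the $\Lambda$-coalescent of Example \ref{exm1} below, assuming that sweep events occur at some rate $r$ (written $s \alpha$ in the text) and that each lineage participates in a sweep event independently with probability $p$; the parameter values and all function names are hypothetical.

```python
from functools import lru_cache
from math import comb

# Hypothetical parameters (this sketch is ours, not from the text): sweep
# events occur at rate R -- written s*alpha in Example exm1 -- and each
# lineage participates in a sweep event independently with probability P.
R, P = 0.5, 0.3

def total_rate(b, r=R, p=P):
    # lambda_b: Kingman pair mergers plus sweep events merging >= 2 lineages
    p_ge2 = 1 - (1 - p)**b - b * p * (1 - p)**(b - 1)
    return comb(b, 2) + r * p_ge2

def jump_probs(b, r=R, p=P):
    # P(jump chain goes from b blocks to b - k + 1 blocks), k = 2, ..., b
    lam = total_rate(b, r, p)
    out = {}
    for k in range(2, b + 1):
        rate = r * comb(b, k) * p**k * (1 - p)**(b - k)
        if k == 2:
            rate += comb(b, 2)
        out[b - k + 1] = rate / lam
    return out

@lru_cache(maxsize=None)
def G(n, b, r=R, p=P):
    # probability that the block-counting chain started from n ever hits b
    if n == b:
        return 1.0
    if n < b:
        return 0.0
    return sum(q * G(m, b, r, p) for m, q in jump_probs(n, r, p).items())

def expected_segregating_sites(n, theta, r=R, p=P):
    # formula (segsites): E[S_n] = (theta/2) * sum_b b * G_n(b) / lambda_b
    return (theta / 2) * sum(b * G(n, b, r, p) / total_rate(b, r, p)
                             for b in range(2, n + 1))
```

Setting $r = 0$ recovers Kingman's coalescent, where every $G_n(b)$ equals $1$ and the sum collapses to $\theta h_{n-1}$; since $\lambda_b \geq \binom{b}{2}$ and $G_n(b) \leq 1$, the output never exceeds $\theta h_{n-1}$.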
Suppose the ancestral process is given by Kingman's coalescent, which
would be the case if there were no selective sweeps. Then
$\lambda_b = \binom{b}{2}$ for all $b \geq 2$. Also, the number of
blocks never decreases by more than one at a time, so
$G_n(b) = 1$ whenever $2 \leq b \leq n$. It follows that
$E[\Delta_n] = \theta$ and
\begin{equation}
E[S_n] = \frac{\theta}{2} \sum_{b=2}^n b \binom{b}{2}^{-1}
= \theta \sum_{b=2}^n \frac{1}{b-1} = \theta h_{n-1},
\label{kingseg}
\end{equation}
where $h_{n-1} = \sum_{i=1}^{n-1} (1/i)$. Thus,
$E[\Delta_n - S_n/h_{n-1}] = 0$. This observation is the basis for
Tajima's $D$-statistic, which is given by
\begin{equation}
D = \frac{\Delta_n - S_n/h_{n-1}}{\sqrt{a_nS_n + b_nS_n(S_n-1)}},
\label{tajeq}
\end{equation}
where $a_n$ and $b_n$ are somewhat complicated constants that are
chosen to make the variance of $D$ approximately one when the ancestral
tree is given by Kingman's coalescent. See section 4.1 of Durrett (2002) for details.
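To make the ingredients of the statistic concrete, here is a minimal sketch (ours, not from the text) that computes the numerator $\Delta_n - S_n/h_{n-1}$ from a hypothetical sample encoded as equal-length 0/1 strings; since the constants $a_n$ and $b_n$ are not reproduced in the text, the denominator is omitted.

```python
from itertools import combinations

def tajima_numerator(sample):
    """Numerator of Tajima's D in (tajeq): Delta_n - S_n / h_{n-1}.

    `sample` is a list of equal-length 0/1 strings, a hypothetical binary
    encoding of the n aligned segments.
    """
    n = len(sample)
    # S_n: number of segregating sites (columns where some pair differs)
    S = sum(1 for col in zip(*sample) if len(set(col)) > 1)
    # Delta_n: pairwise differences averaged over the C(n,2) pairs
    pairs = list(combinations(sample, 2))
    delta = sum(sum(a != b for a, b in zip(x, y)) for x, y in pairs) / len(pairs)
    h = sum(1 / i for i in range(1, n))  # h_{n-1}
    return delta - S / h
```

A single intermediate-frequency site makes the numerator positive, while a single singleton site makes it negative, in line with the discussion of selective sweeps that follows.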
After a selective sweep, the new mutants will tend to have low frequency.
As a result, a recent selective sweep should decrease $\Delta_n$ more
than $S_n$, causing the numerator of Tajima's $D$-statistic to be negative.
Braverman et al.\ (1995) found in simulations that Tajima's $D$-statistic
indeed tends to be negative after a selective sweep. Simonsen, Churchill,
and Aquadro (1995) studied this question further and argued that unless the
selective sweep was recent, Tajima's $D$-statistic had relatively little
power to detect selective sweeps.
See also Przeworski (2002), who discusses the power of Tajima's $D$-statistic
to detect selective sweeps.
Our coalescent approximation allows us to obtain the following result
regarding the expected number of segregating sites when the population
experiences recurrent selective sweeps.
\begin{Prop}
Consider a $\Lambda$-coalescent in which $\Lambda = \delta_0 + \Lambda_0$,
where $\Lambda_0(\{0\}) = 0$, or a $\Xi$-coalescent in which
$\Xi = \delta_{(0, 0, \dots)} + \Xi_0$ and $\Xi_0(\{(0, 0, \dots)\}) = 0$.
Let $\alpha_b = \lambda_b - \binom{b}{2}$.
Suppose
\begin{equation}
\sum_{b=2}^{\infty} \frac{\alpha_b \log b}{b^2} < \infty.
\label{segcond}
\end{equation}
Then, there exists a constant $\rho \geq 0$ such that
\begin{equation}
\lim_{n \rightarrow \infty} \big( E[S_n] - \theta h_{n-1} \big) = -\rho.
\label{segeq}
\end{equation}
Furthermore, defining $G_{\infty}(b) = \lim_{n \rightarrow \infty} G_n(b)$,
we have
\begin{equation}
\rho = \frac{\theta}{2}
\sum_{b=2}^{\infty} b \bigg( \binom{b}{2}^{-1} -
\lambda_b^{-1} \bigg) + \frac{\theta}{2} \sum_{b=2}^{\infty} b \lambda_b^{-1}
(1 - G_{\infty}(b)).
\label{rhodef}
\end{equation}
\label{segprop}
\end{Prop}
The condition (\ref{segcond}) prevents
$\Lambda_0$ or $\Xi_0$ from having too much mass near zero. Note
that (\ref{pairdiff}) implies that $E[\Delta_n]$ decreases
by a constant as a result of the beneficial mutations, while
Proposition \ref{segprop} implies that when (\ref{segcond}) holds, $E[S_n/h_{n-1}]$
decreases by approximately $\rho/h_{n-1}$, which is $O(1/(\log n))$.
Therefore, Proposition \ref{segprop} shows that for
sufficiently large samples we do expect Tajima's
$D$-statistic to be negative when the population is affected by
recurrent selective sweeps. Before proving this proposition, we consider some examples.
\begin{Exm}
{\em Suppose, as in Example \ref{exm1}, we have a $\Lambda$-coalescent
in which $\Lambda = \delta_0 + s \alpha p^{-2} \delta_p$.
Since $p$-mergers occur at rate $s \alpha$, we have
$\lambda_b \leq \binom{b}{2} + s \alpha$ and thus $\alpha_b \leq s \alpha$
for all $b$. Condition (\ref{segcond}) follows immediately.
Suppose instead we have the $\Lambda$-coalescent of Example \ref{exm2},
where $\Lambda = \delta_0 + \Lambda_0$ and $\Lambda_0(dx) = cx \: dx$.
Note that $\alpha_b$ is the same as the total merger rate of the
$\Lambda_0$-coalescent when there are $b$ blocks. Using the fact that
if $Z \sim \mbox{Binomial}(b,x)$ then $P(Z \geq 2) =
1 - (1-x)^b - bx(1-x)^{b-1}$, we have
\begin{align}
\alpha_b &= \int_0^1 (1 - (1-x)^b - bx(1-x)^{b-1}) x^{-2} \: \Lambda_0(dx)
\nonumber \\
&= c \int_0^1 (1 - (1-x)^b - bx(1-x)^{b-1}) x^{-1} \: dx
\leq c \int_0^1 (1 - (1-x)^b) x^{-1} \: dx \nonumber \\
&= c \int_0^{1/b} (1 - (1-x)^b) x^{-1} \: dx +
c \int_{1/b}^1 (1 - (1-x)^b) x^{-1} \: dx \nonumber \\
&\leq c \int_0^{1/b} b \: dx + c \int_{1/b}^1 x^{-1} \: dx =
c(1 + \log b),
\label{alphab}
\end{align}
which implies (\ref{segcond}).
}
\end{Exm}
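The bound in (\ref{alphab}) can be checked numerically. In fact, for $\Lambda_0(dx) = cx \: dx$ the integral evaluates exactly to $\alpha_b = c(h_b - 1)$ (a standard computation not carried out in the text, using $\int_0^1 (1 - (1-x)^b) x^{-1} \, dx = h_b$ and $\int_0^1 b(1-x)^{b-1} \, dx = 1$). The sketch below, ours and purely illustrative, compares midpoint quadrature of the integral against this closed form with $c = 1$.

```python
from math import log

def alpha_b_quadrature(b, c=1.0, m=200000):
    # alpha_b for Lambda_0(dx) = c x dx, by midpoint quadrature of the
    # integrand in (alphab); the integrand extends continuously by 0 at x = 0
    total = 0.0
    for i in range(m):
        x = (i + 0.5) / m
        total += (1 - (1 - x)**b - b * x * (1 - x)**(b - 1)) / x
    return c * total / m

def harmonic(b):
    # h_b = 1 + 1/2 + ... + 1/b
    return sum(1.0 / k for k in range(1, b + 1))
```

For moderate $b$ the quadrature agrees with $c(h_b - 1)$ to several digits, and both stay below the bound $c(1 + \log b)$.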
\begin{Exm}
{\em Although (\ref{segcond}) holds in the natural cases given in
Examples \ref{exm1} and \ref{exm2}, we show here that it does not hold
for all coalescents. Suppose $\Lambda = \delta_0 + \Lambda_0$,
where $\Lambda_0$ is the uniform distribution on $(0,1)$.
Note that there exists a constant $C > 0$ such that if
$Z \sim \mbox{Binomial}(b,x)$ with $x \geq 1/b$ and $b \geq 2$, then
$P(Z \geq 2) \geq C$. Therefore,
\begin{align}
\alpha_b &= \int_0^1 (1 - (1-x)^b - bx(1-x)^{b-1}) x^{-2} \: dx \nonumber \\
&\geq \int_{1/b}^1 (1 - (1-x)^b - bx(1-x)^{b-1}) x^{-2} \: dx \nonumber \\
&\geq C \int_{1/b}^1 x^{-2} \: dx = C(b-1), \nonumber
\end{align}
so (\ref{segcond}) does not hold in this case.
}
\end{Exm}
\begin{proof}[Proof of Proposition \ref{segprop}]
When the coalescent has $n+1$ blocks, the probability
that the next coalescence event will take the coalescent down to
fewer than $n$ blocks is at most
$[\lambda_{n+1} - \binom{n+1}{2}]/\lambda_{n+1}$. Therefore,
if $2 \leq b \leq n$, then
\begin{equation}
|G_{n+1}(b) - G_n(b)| \leq
\frac{\lambda_{n+1} - \binom{n+1}{2}}{\lambda_{n+1}}
= \frac{\alpha_{n+1}}{\lambda_{n+1}} \leq \frac{2 \alpha_{n+1}}{n(n+1)}.
\label{Gneq}
\end{equation}
Therefore, when (\ref{segcond}) holds,
the sequence $(G_n(b))_{n=b}^{\infty}$ is Cauchy and thus
has a limit $G_{\infty}(b)$.
It follows from (\ref{segsites}) and (\ref{kingseg}) that
\begin{align}
E[S_n] - \theta h_{n-1} &= \frac{\theta}{2} \sum_{b=2}^n
b \lambda_b^{-1} G_n(b) - \frac{\theta}{2} \sum_{b=2}^n b \binom{b}{2}^{-1}
\nonumber \\
&= \frac{\theta}{2} \sum_{b=2}^n b \bigg( \lambda_b^{-1} - \binom{b}{2}^{-1}
\bigg) - \frac{\theta}{2} \sum_{b=2}^n b \lambda_b^{-1} (1 - G_{\infty}(b))
\nonumber \\
&\hspace{.5in} + \frac{\theta}{2}
\sum_{b=2}^n b \lambda_b^{-1} (G_n(b) - G_{\infty}(b)).
\label{segmain}
\end{align}
To prove Proposition \ref{segprop}, we need to take the limit as
$n \rightarrow \infty$ of the three terms on the right-hand side
of (\ref{segmain}).
For the first term, we note that
$$\binom{b}{2}^{-1} - \lambda_b^{-1} =
\frac{\lambda_b - \binom{b}{2}}{\binom{b}{2} \lambda_b} \leq
\alpha_b \binom{b}{2}^{-2} = \frac{4 \alpha_b}{b^2(b-1)^2}.$$
Therefore, when (\ref{segcond}) holds, we have a summable series and
\begin{equation}
\lim_{n \rightarrow \infty} \frac{\theta}{2} \sum_{b=2}^n b
\bigg( \lambda_b^{-1} - \binom{b}{2}^{-1} \bigg) =
- \frac{\theta}{2} \sum_{b=2}^{\infty} b
\bigg( \binom{b}{2}^{-1} - \lambda_b^{-1} \bigg).
\label{term1}
\end{equation}
For the second term, note that (\ref{Gneq}) and the fact that
$G_b(b) = 1$ imply
\begin{align}
\sum_{b=2}^{\infty} b \lambda_b^{-1} (1 - G_{\infty}(b))
&\leq \sum_{b=2}^{\infty}
\frac{2}{b-1} \bigg( \sum_{m=b}^{\infty} \frac{2 \alpha_{m+1}}{m(m+1)} \bigg)
\nonumber \\
&= \sum_{m=2}^{\infty} \frac{2 \alpha_{m+1}}{m(m+1)} \sum_{b=2}^m
\frac{2}{b-1} \leq \sum_{m=2}^{\infty}
\frac{4 \alpha_{m+1} (1 + \log (m-1))}{m(m+1)},
\nonumber
\end{align}
which is finite by (\ref{segcond}). Therefore,
\begin{equation}
\lim_{n \rightarrow \infty} \frac{\theta}{2} \sum_{b=2}^n
b \lambda_b^{-1} (1 - G_{\infty}(b)) = \frac{\theta}{2}
\sum_{b=2}^{\infty} b \lambda_b^{-1} (1 - G_{\infty}(b)).
\label{term2}
\end{equation}
Finally, for the third term,
\begin{align}
\limsup_{n \rightarrow \infty} \sum_{b=2}^n b \lambda_b^{-1}
|G_n(b) - G_{\infty}(b)| &\leq \limsup_{n \rightarrow \infty}
\sum_{b=2}^n \frac{2}{b-1} \bigg( \sum_{m=n}^{\infty}
\frac{2 \alpha_{m+1}}{m(m+1)} \bigg) \nonumber \\
&\leq \limsup_{n \rightarrow \infty}
\frac{1}{\log n} \sum_{b=2}^n \frac{2}{b-1} \bigg(
\sum_{m=n}^{\infty} \frac{2 \alpha_{m+1} \log m}{m(m+1)} \bigg) \nonumber \\
&\leq \limsup_{n \rightarrow \infty}
\frac{2(1 + \log (n-1))}{\log n} \sum_{m=n}^{\infty}
\frac{2 \alpha_{m+1} \log m}{m(m+1)} = 0
\label{term3}
\end{align}
by (\ref{segcond}). The proposition follows from
(\ref{segmain}), (\ref{term1}), (\ref{term2}), and (\ref{term3}).
\end{proof}
\section{The number of singletons}
Fu and Li (1993) proposed another test to detect departures from Kingman's coalescent. They considered the ancestral tree in which the leaves are the $n$ individuals in the sample. They defined the branches connecting a leaf to an internal node to be external branches and the other branches to be internal branches. Let $\eta_e$ denote the number of mutations on external branches, and let $\eta_i$ be the number of mutations on internal branches. Every mutation produces a segregating site, so $\eta_e + \eta_i = S_n$. If a mutation occurs on an external branch, the mutant gene appears on just one of the $n$ individuals in the sample, while if a mutation occurs on an internal branch, the mutant gene appears on between $2$ and $n-1$ of the individuals in the sample. Therefore, to determine $\eta_e$, we simply count the number of mutations that appear on just one of the sampled chromosomes. Note that unless an outgroup is available,
it will not be possible to distinguish between a mutation that appears on one of the sampled chromosomes and a mutation that appears on $n-1$ of the sampled chromosomes. Fu and Li (1993) proposed a modification of their test for when there is no outgroup, but for the analysis in this section, we assume that we have an outgroup that enables us to make this distinction.
Let $J_n$ be the sum of the lengths of the external branches. In terms of the associated coalescent process, $J_n$ is the sum, over $i$ between $1$ and $n$, of the amount of time that the integer $i$ is in a singleton block. Let $I_n$ be the sum of the lengths of the internal branches. Assuming, as before, that mutations occur at rate $\theta/2$ on the time scale of the coalescent process, we have $E[\eta_e|J_n] = (\theta/2)J_n$ and $E[\eta_i|I_n] = (\theta/2)I_n$.
Fu and Li's $D$-statistic is based on comparing $\eta_i$ with $(h_{n-1} - 1) \eta_e$. Note that $\eta_i - (h_{n-1} - 1) \eta_e = S_n - h_{n-1} \eta_e$. To see that this has mean zero when the ancestral tree is given by Kingman's coalescent, we follow the explanation on p. 163 of Durrett (2002). In the case of Kingman's coalescent, (\ref{kingseg}) gives $E[S_n] = \theta h_{n-1}$. Therefore,
$E[S_n - h_{n-1} \eta_e] = \theta h_{n-1} - \theta h_{n-1} E[J_n]/2$, so it remains to show that
$E[J_n] = 2$. Let $K_n$ be the amount of time that the integer $1$ is in a singleton block of the partition, so $E[J_n] = n E[K_n]$. Let $T_n$ be the amount of time before the first coalescence event, and note that $E[T_n] = 2/[n(n-1)]$. The probability that $1$ coalesces with another integer at time
$T_n$ is $2/n$, and this event is independent of $T_n$. If $1$ does not coalesce at this time, then the expected additional time that $1$ is a singleton is $E[K_{n-1}]$. Therefore, we get the recursion $$E[K_n] = \frac{2}{n} E[T_n] + \frac{n-2}{n} E[T_n + K_{n-1}] = \frac{2}{n(n-1)} + \frac{n-2}{n} E[K_{n-1}].$$ Note that $E[K_2] = 1$, and then it is easy to show by induction that $E[K_n] = 2/n$ for all $n$, and so $E[J_n] = 2$ for all $n$,
as claimed.
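The recursion for $E[K_n]$ is easy to check in exact arithmetic; the following sketch (ours, not from the text) simply iterates it from the base case $E[K_2] = 1$.

```python
from fractions import Fraction

def expected_singleton_time(n):
    # E[K_n] from the recursion E[K_m] = 2/(m(m-1)) + ((m-2)/m) E[K_{m-1}],
    # with base case E[K_2] = 1 for Kingman's coalescent
    E = Fraction(1)
    for m in range(3, n + 1):
        E = Fraction(2, m * (m - 1)) + Fraction(m - 2, m) * E
    return E
```

One finds $E[K_n] = 2/n$ exactly for every $n \geq 2$, so $E[J_n] = n E[K_n] = 2$, as claimed.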
We can write Fu and Li's $D$-statistic as
\begin{equation}
D = \frac{S_n - h_{n-1} \eta_e}{\sqrt{c_n S_n + d_nS_n^2}},
\label{fulieq}
\end{equation}
where, as in (\ref{tajeq}), $c_n$ and $d_n$ are constants chosen to make the variance of the statistic approximately one when the genealogy is given by Kingman's coalescent. Details of the variance computation are given in section 4.2 in Durrett (2002), where an error of Fu and Li (1993) is corrected.
When multiple mergers cause many lineages to coalesce at once, one expects $I_n$ to be reduced more than $J_n$ because there is still an external branch associated with each leaf, but there are fewer internal branches because of multiple mergers. This would cause Fu and Li's $D$-statistic to be negative. The next proposition shows that this is indeed the case.
\begin{Prop}
Let $(\Pi_n(t), t \geq 0)$ be a ${\cal P}_n$-valued $\Lambda$-coalescent in which
$\Lambda = \delta_0 + \Lambda_0$, where $\Lambda_0(\{0\}) = 0$, or a ${\cal P}_n$-valued
$\Xi$-coalescent in which $\Xi = \delta_{(0, 0, \dots)} + \Xi_0$ and $\Xi_0(\{(0, 0, \dots)\}) = 0$.
Let $\alpha_b = \lambda_b - \binom{b}{2}$, and suppose (\ref{segcond}) holds. Then
\begin{equation}
\lim_{n \rightarrow \infty} E[S_n - h_{n-1} \eta_e] = - \rho,
\label{rho2}
\end{equation}
where $\rho$ is the constant defined in (\ref{rhodef}).
\label{fuliprop}
\end{Prop}
The key to the proof of this proposition is the following lemma.
\begin{Lemma}
Under the assumptions of Proposition \ref{fuliprop}, there is a positive constant $C$ such that
\begin{equation}
0 \leq E[2 - J_n] \leq \frac{C}{n} \sum_{b=2}^n \frac{\alpha_b}{b}
\label{Lneq}
\end{equation}
for all $n \geq 2$.
\label{singlem}
\end{Lemma}
The first inequality in (\ref{Lneq}), which does not require condition (\ref{segcond}), shows that the expected sum of the lengths of the external branches is never greater than $2$, which means that it is largest for Kingman's coalescent. The second inequality gives a rather sharp bound on the difference. Recall that in Example \ref{exm1}, we have $\alpha_b \leq s \alpha$, so
$E[2 - J_n] \leq C' (\log n)/n$ for some other constant $C'$. In Example \ref{exm2},
(\ref{alphab}) gives $\alpha_b \leq c(1 + \log b) \leq c(1 + \log n)$, which implies
$E[2 - J_n] \leq C''(\log n)^2/n$ for some constant $C''$. Thus, in these examples, the lengths of the external branches are affected very little by multiple mergers when the sample size is large. The reason is that, in large samples, a lot of coalescence occurs very quickly, so most ancestral lines have merged with at least one other ancestral line before the first multiple merger takes place.
\begin{proof}[Proof of Lemma \ref{singlem}]
We start by proving the first inequality in (\ref{Lneq}) by induction. As before, let $K_n$ be the amount of time that the integer $1$ is in a singleton block. We need to show that $E[K_n] \leq 2/n$ for all $n \geq 2$. First, note that $E[K_2] = \lambda_2^{-1} \leq 1$. Now, suppose for some $n \geq 3$, we have $E[K_j] \leq 2/j$ for $j = 2, \dots, n-1$, and consider $E[K_n]$. Let $T_n$ be the time of the first merger when the coalescent starts with $n$ blocks, and let $B \geq 2$ be the number of blocks involved in the merger at time $T_n$. Note that $B$ is independent of $T_n$.
Conditional on $B$, the probability that $1$ merges with at least one other block at time $T_n$ is $B/n$. If this does not happen, then at least $n - B + 1$ blocks remain after the merger, so by the induction hypothesis, the expected time after $T_n$ that $\{1\}$ will remain a singleton is at most
$2/(n-B+1)$. Therefore,
$$E[K_n|T_n, B] \leq \bigg( \frac{B}{n} \bigg) T_n + \bigg( \frac{n-B}{n} \bigg) \bigg(T_n +
\frac{2}{n-B+1} \bigg) = T_n + \frac{2(n-B)}{n(n-B+1)}.$$
Since $2 \leq B \leq n$, we have $(n-B)/(n-B+1) \leq (n-2)/(n-1)$. Also,
$E[T_n] = \lambda_n^{-1} \leq 2/[n(n-1)]$, so
$$E[K_n] \leq \frac{2}{n(n-1)} + \frac{2(n-2)}{n(n-1)} = \frac{2}{n},$$
which proves the first inequality.
The proof of the second inequality requires a coupling argument. Let
$(\Pi_n(t), t \geq 0)$ be the coalescent process defined in the statement of Proposition \ref{fuliprop}, and let $(\Upsilon_n(t), t \geq 0)$ be Kingman's coalescent, started from the partition of $1, \dots, n$ into singletons. We may assume that the coalescent processes $\Pi_n$ and $\Upsilon_n$ are constructed
from Poisson processes $N_1$ and $N_2$ respectively on $(0, \infty) \times {\cal P}_n$, as described in section 3. That is, whenever $(t, \pi)$ is a point of $N_1$, the partition $\Pi_n(t)$ is the coagulation of $\Pi_n(t-)$ by $\pi$, and whenever $(t, \pi)$ is a point of $N_2$, the partition $\Upsilon_n(t)$ is the coagulation of $\Upsilon_n(t-)$ by $\pi$. Furthermore, these are the only jump times of $\Pi_n$ and $\Upsilon_n$. Let $L_1$ and $L_2$ be the intensity measures of the second coordinate for the Poisson processes $N_1$ and $N_2$ respectively. Then, for $\pi \in {\cal P}_n$, we have $L_2(\pi) = 1$ if $\pi$ consists of one block of size $2$ and $n-2$ singletons, and $L_2(\pi) = 0$ otherwise. Also, $L_1(\pi) \geq L_2(\pi)$ for all $\pi \in {\cal P}_n$. Therefore, we may assume that the Poisson processes $N_1$ and $N_2$ are coupled such that if $(t, \pi)$ is a point of $N_2$ then $(t, \pi)$ is a point of $N_1$. The points $(t, \pi)$ in both $N_1$ and $N_2$ correspond to mergers in which two blocks coalesce at a time, while the points $(t, \pi)$ in $N_1$ but not $N_2$ correspond to multiple mergers caused by selective sweeps.
To compare the two processes,
note that $K_n = \inf\{t: \{1\} \mbox{ is not a singleton in }\Pi_n(t)\}$, and let
$K_n' = \inf\{t: \{1\} \mbox{ is not a singleton in }\Upsilon_n(t)\}$. We have
$E[J_n] = n E[K_n]$. By our previous results for Kingman's coalescent, we have
$E[K_n'] = 2/n$, and so $E[2 - J_n] = n E[K_n' - K_n]$.
Let $\tau = \inf\{t: \Pi_n(t) \neq \Upsilon_n(t)\}$, where we say $\tau = \infty$ if $\Pi_n(t) = \Upsilon_n(t)$ for all $t$. For $\pi \in {\cal P}_n$, denote by $|\pi|$ the number of blocks in $\pi$.
Since $\Pi_n(t) = \Upsilon_n(t)$ for all $t \leq \tau$, we have
$$E[2 - J_n] = nE[K_n' - K_n] \leq nE[(K_n' - \tau)1_{\{\tau < K_n'\}}]
= n\sum_{b=2}^n E[(K_n' - \tau)1_{\{\tau < K_n'\}} 1_{\{|\Upsilon_n(\tau)| = b\}}].$$
For $b = 1, 2, \dots, n$, define $T_b = \inf\{t: |\Upsilon_n(t)| = b\}$. If $\tau < K_n'$ and
$|\Upsilon_n(\tau)| = b$, then $K_n' > T_b$. Therefore,
\begin{equation}
E[2 - J_n] \leq n \sum_{b=2}^n E[K_n' - \tau|\{\tau < K_n'\} \cap \{ |\Upsilon_n(\tau)| = b\}]
P(\{K_n' > T_b\} \cap \{ |\Upsilon_n(\tau)| = b\}).
\label{fueq1}
\end{equation}
If $\tau < K_n'$ and $|\Upsilon_n(\tau)| = b$, then $\{1\}$ is one of $b$ blocks of $\Upsilon_n(\tau)$, and by our previous results on Kingman's coalescent, the expected time before it merges with another block is $2/b$. Thus, we have
\begin{equation}
E[K_n' - \tau|\{\tau < K_n'\} \cap \{ |\Upsilon_n(\tau)| = b\}] = \frac{2}{b}.
\label{fueq2}
\end{equation}
Note that $K_n' > T_b$ whenever $\{1\}$ remains a singleton at the time that Kingman's coalescent is down to $b$ blocks. Whenever the coalescent goes from $j$ blocks to $j-1$ blocks, the probability that the integer $1$ is involved in the merger is $2/j$, so
\begin{equation}
P(K_n' > T_b) = \prod_{j=b+1}^n \bigg(1 - \frac{2}{j} \bigg) \leq
\exp \bigg( - \sum_{j=b+1}^n \frac{2}{j} \bigg) \leq
\exp \bigg( 1 - 2 \int_b^n \frac{1}{x} \: dx \bigg) = e \bigg( \frac{b}{n} \bigg)^2.
\label{fueq3}
\end{equation}
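Alternatively, the product in (\ref{fueq3}) telescopes, which yields the same bound without the integral comparison; this short computation is ours:

```latex
\prod_{j=b+1}^n \bigg( 1 - \frac{2}{j} \bigg)
  = \prod_{j=b+1}^n \frac{j-2}{j}
  = \frac{(b-1)b}{(n-1)n}
  \leq \bigg( \frac{b}{n} \bigg)^2 \frac{n}{n-1}
  \leq e \bigg( \frac{b}{n} \bigg)^2,
```

using $n/(n-1) \leq 2 \leq e$ for $n \geq 2$.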
If $|\Upsilon_n(\tau)| = b$, then both $\Pi_n$ and $\Upsilon_n$ have the same $b$ blocks at time $T_b$, while at time $\tau$ the process $\Pi_n$ has a transition but $\Upsilon_n$ does not. Since the total merger rate for $\Pi_n$ after time $T_b$ is $\lambda_b = \alpha_b + \binom{b}{2}$ and the total merger rate for $\Upsilon_n$ after time $T_b$ is $\binom{b}{2}$, we have
\begin{equation}
P(|\Upsilon_n(\tau)| = b|K_n' > T_b) \leq \frac{\alpha_b}{\lambda_b} \leq \frac{2 \alpha_b}{b(b-1)}.
\label{fueq4}
\end{equation}
Combining (\ref{fueq1})-(\ref{fueq4}), we get
$$E[2 - J_n] \leq n \sum_{b=2}^n \frac{4e \alpha_b b^2}{b^2(b-1)n^2} \leq \frac{C}{n} \sum_{b=2}^n
\frac{\alpha_b}{b},$$
which is the second inequality in (\ref{Lneq}).
\end{proof}
\begin{proof}[Proof of Proposition \ref{fuliprop}] We have
\begin{equation}
E[S_n - h_{n-1} \eta_e] = (E[S_n] - \theta h_{n-1}) + h_{n-1}(\theta - E[\eta_e]) =
(E[S_n] - \theta h_{n-1}) + \frac{h_{n-1} \theta}{2} E[2 - J_n].
\label{Seta}
\end{equation}
By Proposition \ref{segprop}, $\lim_{n \rightarrow \infty} (E[S_n] - \theta h_{n-1}) = -\rho$.
It thus remains only to show that the second term on the right-hand side of (\ref{Seta}) goes to zero as $n \rightarrow \infty$. Let $\epsilon > 0$. By (\ref{segcond}), there exists a positive integer $N$ such that
$$\sum_{b=N}^{\infty} \frac{\alpha_b (1 + \log b)}{b^2} < \epsilon.$$ Therefore, by Lemma \ref{singlem},
\begin{align}
\limsup_{n \rightarrow \infty} \frac{h_{n-1} \theta}{2} E[2 - J_n] &\leq
\limsup_{n \rightarrow \infty} \frac{C h_{n-1} \theta}{2n} \sum_{b=2}^n \frac{\alpha_b}{b}
= \limsup_{n \rightarrow \infty} \frac{C h_{n-1} \theta}{2n} \bigg( \sum_{b=2}^N \frac{\alpha_b}{b}
+ \sum_{b=N}^n \frac{\alpha_b}{b} \bigg) \nonumber \\
&\leq 0 + \frac{C \theta}{2} \limsup_{n \rightarrow \infty} \sum_{b=N}^n \frac{\alpha_b h_{n-1}}{bn}
\leq \frac{C \theta}{2} \limsup_{n \rightarrow \infty} \sum_{b=N}^n \frac{\alpha_b (1 + \log b)}{b^2}
\leq \frac{C \theta \epsilon}{2}. \nonumber
\end{align}
Since this is true for all $\epsilon > 0$, and since $E[2 - J_n] \geq 0$ for all $n$ by
Lemma \ref{singlem}, we have $$\lim_{n \rightarrow \infty} \frac{h_{n-1} \theta}{2} E[2 - J_n] = 0,$$
which completes the proof of the proposition.
\end{proof}
We conclude this section with some comments about the power of Tajima's $D$-statistic and Fu and Li's $D$-statistic to detect selective sweeps. The numerators of these two statistics, which are $\Delta_n - S_n/h_{n-1}$ and $S_n - h_{n-1} \eta_e$, each have mean zero when the ancestral process is Kingman's coalescent. The expected values of these two numerators both converge to a negative constant as the sample size goes to infinity when multiple mergers can occur. These statistics are used to test for departures from Kingman's coalescent. If the goal is to test for multiple mergers caused by selective sweeps, one would reject the null hypothesis of no selective sweeps if the value of the statistic is too small (i.e. more negative than would be expected with Kingman's coalescent).
A natural question, then, is how much power these tests have to detect selective sweeps. While a full analysis of this question would require a simulation study, we can obtain some insight from the analytical results presented above. From the values of $a_n$ and $b_n$ in (\ref{tajeq}), which can be found in section 4.1 of Durrett (2002), we see that the standard deviation of the numerator of Tajima's $D$-statistic is $O(1)$ when the genealogy is given by Kingman's coalescent. However, from the values of $c_n$ and $d_n$ in (\ref{fulieq}), which can be found in section 4.2 of Durrett (2002), we see that the numerator of Fu and Li's $D$-statistic has a standard deviation which is $O(\log n)$. This means that, for large $n$, moderate negative values for the numerator of Fu and Li's $D$-statistic are not strong evidence against the null model of Kingman's coalescent, and thus a test based on Fu and Li's $D$-statistic will most likely have low power.
These observations are consistent with simulation results of Simonsen, Churchill, and Aquadro (1995), who found that Tajima's $D$-statistic has more power to detect selective sweeps than Fu and Li's $D$-statistic.
Neither of these tests has the desirable feature of many tests in classical statistics, which is that for all $\alpha > 0$, the power of the level $\alpha$ test tends to $1$ as the sample size $n$ tends to infinity. Indeed, for the problem of detecting recurrent selective sweeps, no such test based on the genealogy of the sample can exist because, with positive probability, none of the selective sweeps affects the genealogy of the $n$ sampled lineages before we get back to the most recent common ancestor. We formulate this observation precisely in the following proposition, which uses the coupling in the proof of Lemma \ref{singlem}.
\begin{Prop}
Let $(\Pi_n(t), t \geq 0)$ be the $\Lambda$-coalescent or $\Xi$-coalescent defined in the proof of Proposition \ref{fuliprop}, and assume that
\begin{equation}
\sum_{b=2}^{\infty} \frac{\alpha_b}{b^2} < \infty,
\label{weakcond}
\end{equation}
which is slightly weaker than (\ref{segcond}).
Let $(\Upsilon_n(t), t \geq 0)$ be Kingman's coalescent, coupled with
$(\Pi_n(t), t \geq 0)$ as in the proof of Lemma \ref{singlem}. Then there exists a constant $C > 0$ such that for all $n$, we have $P(\Upsilon_n(t) = \Pi_n(t) \mbox{ for all }t) \geq C$.
\end{Prop}
\begin{proof}
Let $T_b = \inf\{t: |\Upsilon_n(t)| = b\}$. Conditional on $\Pi_n(T_b) = \Upsilon_n(T_b)$, the probability that $\Pi_n(t) \neq \Upsilon_n(t)$ for some $t \in [T_b, T_{b-1}]$ is $\alpha_b/\lambda_b$. It follows that
$$P(\Upsilon_n(t) = \Pi_n(t) \mbox{ for all }t) = \prod_{b=2}^n \bigg(1 - \frac{\alpha_b}{\lambda_b} \bigg).$$ Note that $\alpha_b/\lambda_b \leq 2 \alpha_b/[b(b-1)]$ for all $b$.
By (\ref{weakcond}), we have $\alpha_b/b^2 \rightarrow 0$ as $b \rightarrow \infty$, so there exists a positive integer $N$ such that $6 \alpha_b/[b(b-1)] \leq 1$ for all $b \geq N$. Also, if $0 \leq x \leq 1$, then $1 - x/3 \geq e^{-x}$. Putting these results together, we get
\begin{align}
P(\Upsilon_n(t) = \Pi_n(t) \mbox{ for all }t) &\geq \prod_{b=2}^{N-1} \bigg(1 - \frac{\alpha_b}{\lambda_b} \bigg) \prod_{b=N}^{\infty} \exp \bigg( - \frac{6 \alpha_b}{b(b-1)} \bigg) \nonumber \\
&= \prod_{b=2}^{N-1} \bigg(1 - \frac{\alpha_b}{\lambda_b} \bigg)
\exp \bigg(- \sum_{b=N}^{\infty} \frac{6 \alpha_b}{b(b-1)} \bigg) \geq C,
\nonumber
\end{align}
where the last inequality uses (\ref{weakcond}) again.
\end{proof}
\section{Proofs of convergence theorems}
In this section, we prove Theorem \ref{mainth} and Proposition \ref{coalprop}.
The proofs use Propositions \ref{sweepprop1} and \ref{sweepprop2} in combination with the Poisson
process construction of coalescents with multiple or simultaneous multiple
collisions.
Recall from subsection 2.1 the model of how the population behaves
following a single beneficial mutation. As in subsection 2.1, assume for
now that a beneficial mutation occurs at time $0$. Let $X(t)$ be the
number of chromosomes with the favorable $B$ allele at time $t$, and let
$\tau = \inf\{t: X(t) \in \{0, 2N\}\}$. Let
$0 = \xi_0 < \xi_1 < \xi_2 < \dots$
be the times of the proposed replacements, which occur at times of a rate
$2N$ Poisson process. Let $0 = \xi_0'
< \xi_1' < \xi_2' < \dots$ be the subset of
these times at which the number of individuals with the favorable allele
changes. As observed in Schweinsberg and Durrett (2004), if $1 \leq k
\leq 2N-1$, then $P(X(\xi_{i+1}') = k+1|X(\xi_i') = k) = 1/(2-s)$ and
$P(X(\xi_{i+1}') = k-1|X(\xi_i') = k) = (1-s)/(2-s)$. Thus, the number
of chromosomes with the $B$ allele behaves like an asymmetric random walk
until it reaches $0$ or $2N$. For integers $i$, $j$, and $k$
such that $0 \leq i \leq k \leq j \leq 2N$ and $i < j$, define
$$p(i,j,k) = P(\inf\{s \geq t: X(s) = j\} < \inf\{s \geq t: X(s) = i\}|X(t) = k),$$
which is the probability that if at some time there are $k$ chromosomes
with $B$, the number of $B$'s will reach $j$ before $i$. Using
the fact that $(1-s)^{X(\xi_n')}$ is a martingale and applying
the Optional Sampling Theorem, we get (see also Durrett (2002) or
Lemma 3.1 of Schweinsberg and Durrett (2004))
$$p(i,j,k) = \frac{1 - (1-s)^{k-i}}{1 - (1-s)^{j-i}}.$$ Therefore, the
probability that the beneficial mutation leads to a selective sweep is
$p(0,2N,1) = s/(1 - (1-s)^{2N})$.
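As a sanity check on this hitting probability, the asymmetric jump chain can be simulated directly. The sketch below is illustrative only, with hypothetical values $s = 0.3$ and $2N = 100$; all names are ours.

```python
import random

def fixation_probability(s, two_n):
    # p(0, 2N, 1) = s / (1 - (1-s)^{2N}) from the text
    return s / (1 - (1 - s)**two_n)

def simulate_sweep(s, two_n, rng):
    # jump chain of X: from state k, go up with probability 1/(2-s) and
    # down with probability (1-s)/(2-s), until absorption at 0 or 2N
    k, up = 1, 1 / (2 - s)
    while 0 < k < two_n:
        k += 1 if rng.random() < up else -1
    return k == two_n

def estimate_fixation(s=0.3, two_n=100, trials=20000, seed=1):
    rng = random.Random(seed)
    return sum(simulate_sweep(s, two_n, rng) for _ in range(trials)) / trials
```

For these parameters $(1-s)^{2N}$ is negligible, so the fixation probability is essentially $s$, and the Monte Carlo estimate agrees with the formula to within sampling error.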
Lemma \ref{taulem} below shows that the length of time that the beneficial
allele is present in the population is only $O(\log N)$. Since we speed
up time by a factor of $N$ to define the ancestral process, it will follow
that for large populations, on the time scale of the ancestral process
the lineages that coalesce as a result of a selective sweep coalesce almost
at the same time. It is well-known
(see Durrett (2002)) that a selective sweep takes time approximately
$(2/s) \log(2N)$. However, since a beneficial mutation leads to a
selective sweep with probability approximately $s$, we get a bound on
$E[\tau]$ that does not depend on $s$.
\begin{Lemma}
We have $E[\tau] \leq 4( \log N + 1)$.
\label{taulem}
\end{Lemma}
\begin{proof}
For $1 \leq k \leq 2N - 1$, let $S_k = \# \{i \geq 0: X(\xi_i') = k\}$ and
$T_k = \# \{i \geq 0: X(\xi_i) = k\}$, where $\# S$ denotes the cardinality of
a set $S$. Let $q_k = P(X(\xi_j') \neq k \mbox{ for all }j > i|X(\xi_i') = k)$
be the probability that the asymmetric random walk never returns to $k$.
Note that $E[S_k|S_k \geq 1] = 1/q_k$.
Also, $P(S_k \geq 1) = P(X(\xi_i') = k \mbox{ for some }i) = p(0,k,1) = s/(1 - (1-s)^k)$.
Therefore,
\begin{equation}
E[S_k] = P(S_k \geq 1) E[S_k|S_k \geq 1] =
\frac{s}{q_k(1 - (1-s)^k)}.
\label{ESk}
\end{equation}
We have, for $1 \leq k \leq 2N-1$,
\begin{align}
q_k &= \bigg( \frac{1-s}{2-s} \bigg) [1 - p(0,k,k-1)] +
\bigg( \frac{1}{2-s} \bigg) p(k, 2N, k+1) \nonumber \\
&= \bigg( \frac{1-s}{2-s} \bigg) \bigg[ 1 - \frac{1 - (1-s)^{k-1}}{1 - (1-s)^k}
\bigg] + \bigg( \frac{1}{2-s} \bigg) \frac{1 - (1-s)}{1 - (1-s)^{2N-k}}
\nonumber \\
&= \bigg( \frac{1-s}{2-s} \bigg) \frac{s(1-s)^{k-1}}{1 - (1-s)^k} +
\bigg( \frac{1}{2-s} \bigg) \frac{s}{1 - (1-s)^{2N-k}} \nonumber \\
&\geq \frac{s}{2-s} \bigg( \frac{(1-s)^k}{1 - (1-s)^k} + 1 \bigg)
= \frac{s}{(2-s)(1 - (1-s)^k)}. \nonumber
\end{align}
It follows from this result and (\ref{ESk}) that $E[S_k] \leq 2-s$
for all $k$.
Schweinsberg and Durrett (2004) calculated that $P(X(\xi_{i+1}) \neq
X(\xi_i)|X(\xi_i) = k) = k(2N-k)(2-s)/(2N)^2$. It follows that
$$E[T_k] = E[S_k] \bigg( \frac{(2N)^2}{k(2N-k)(2-s)} \bigg) \leq
\frac{4N^2}{k(2N-k)}.$$ Since $E[\xi_{i+1} - \xi_i] = 1/(2N)$ for all $i$, we
have $$E[\tau] = \frac{1}{2N} \sum_{k=1}^{2N-1} E[T_k] \leq \sum_{k=1}^{2N-1}
\frac{2N}{k(2N-k)} \leq 2 \sum_{k=1}^N \frac{2}{k} \leq 4(\log N + 1),$$
as claimed.
\end{proof}
We now use this result to prove part 2 of Proposition \ref{sweepprop1},
which shows that beneficial mutations do not cause lineages to coalesce
when the beneficial gene dies out.
\begin{proof}[Proof of part 2 of Proposition \ref{sweepprop1}]
Suppose $X(\tau) = 0$ and $\Theta \neq \kappa_0$. Then it cannot be true
that for all $t \in [0, \tau]$, the $n$ individuals sampled at time $\tau$
all have distinct ancestors with the $b$ allele at time $t$. Therefore,
there is an
integer $i$ with $\xi_i \leq \tau$ such that one of the following is true:
\begin{enumerate}
\item The ancestor at time $\xi_i$ of one of the individuals sampled at
time $\tau$ has the $b$ allele, but the ancestor of the same individual
at time $\xi_{i-1}$ has the $B$ allele because of recombination.
\item There are two individuals in the sample at time $\tau$ that have
distinct ancestors with the $b$ allele at time $\xi_i$, but both of them
have the same ancestor at time $\xi_{i-1}$.
\end{enumerate}
We now calculate the probability of these events conditional on
$X(\xi_i) = k$, where $1 \leq k \leq N^{1/2}$. We assume $N \geq 2$.
For a randomly chosen $b$ chromosome at time $\xi_i$ to have a $B$
chromosome as its ancestor at time $\xi_{i-1}$, the chosen $b$ chromosome
must be the new one born at time $\xi_i$ (which has probability at most
$1/(2N-k)$ because $2N-k$ chromosomes have the $b$ allele at time
$\xi_i$), there must be recombination at this
time (which happens with probability $r$), and the ancestor at the site
of interest must be a $B$ chromosome (which happens with probability at
most $(k+1)/2N$ because $X(\xi_{i-1}) \leq k+1$). Therefore, the probability
that all three events occur is at most
$r(k+1)/[(2N-k)(2N)] \leq r/N^{3/2}$. Also, at most one pair of $b$
chromosomes at time $\xi_i$ can have the same ancestor at time $\xi_{i-1}$,
so the probability that two randomly chosen $b$ chromosomes coalesce at this
time is at most $\binom{2N-k}{2}^{-1} = 2/[(2N-k)(2N-k-1)] \leq 2/N^2$.
By Lemma \ref{taulem}, if $M$ is the integer such that $\xi_M = \tau$,
then $E[M] \leq (2N)[4(\log N + 1)] = 8N(\log N + 1)$. Since there
are $n$ individuals and $\binom{n}{2}$ pairs in the sample, combining
these bounds gives
\begin{equation}
P(X(\tau) = 0, X(t) \leq N^{1/2} \mbox{ for all }t,
\mbox{ and } \Theta \neq \kappa_0)
\leq 8N(\log N + 1) \bigg( \frac{nr}{N^{3/2}} + \frac{n(n-1)}{N^2} \bigg).
\label{nosweep1}
\end{equation}
Note that for $1 \leq k \leq 2N-1$, we have
\begin{align}
P(X(\tau) = 0 \mbox{ and }X(t) = k \mbox{ for some }t) &\leq
P(X(\tau) = 0|X(t) = k \mbox{ for some }t) \nonumber \\
&= 1 - p(0, 2N,k) = 1 - \frac{1 - (1-s)^k}{1 - (1-s)^{2N}} \leq (1-s)^k.
\nonumber
\end{align}
Therefore,
\begin{equation}
P(X(\tau) = 0 \mbox{ and }
X(t) > N^{1/2} \mbox{ for some }t) \leq (1-s)^{N^{1/2}}.
\label{nosweep2}
\end{equation}
Combining (\ref{nosweep1}) and (\ref{nosweep2}), we get
$$P(X(\tau) = 0 \mbox{ and } \Theta \neq \kappa_0) \leq
(1 - s)^{N^{1/2}} + 8N(\log N + 1) \bigg( \frac{nr}{N^{3/2}} +
\frac{n(n-1)}{N^2} \bigg).$$
Part 2 of Proposition \ref{sweepprop1} follows because
$r \leq C' \log(2N)$ and $s$ is fixed.
\end{proof}
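As a sanity check (added for illustration), one can evaluate the right-hand side of the last display numerically, taking $r = \log(2N)$ (any $r \leq C' \log(2N)$ behaves similarly), and observe that it decays like $(\log N)^2/N^{1/2}$ as $N \rightarrow \infty$:

```python
import math

def no_sweep_bound(N, n=10, s=0.5, C=1.0):
    """Right-hand side of the last display, taking r = C*log(2N).

    The sample size n, selective advantage s, and constant C are
    illustrative choices, not values fixed by the text.
    """
    r = C * math.log(2 * N)
    return ((1 - s) ** math.sqrt(N)
            + 8 * N * (math.log(N) + 1)
            * (n * r / N ** 1.5 + n * (n - 1) / N ** 2))

vals = [no_sweep_bound(N) for N in (10 ** 3, 10 ** 5, 10 ** 7)]
print(vals)  # strictly decreasing toward zero
```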
We now consider our model of recurrent selective sweeps and work towards
the proof of Theorem \ref{mainth}. We will first define a coalescent
with multiple collisions. We will then show that this process can be
coupled with the ancestral process $(\Psi_N(t), t \geq 0)$ such that,
given a finite number of times $0 < u_1 < \dots < u_m$, the processes
agree at these times with high probability.
Recall that $K_N$ is a Poisson point process on $\R \times [-L, L]
\times [0,1]$ with intensity $\lambda \times \mu_N$. We can define another
Poisson point process $K_N^*$ on $[0, \infty) \times [-L, L] \times [0,1]$
which consists of all the points $(-t/N, x, s)$ such that $(t, x, s)$ is
a point of $K_N$ and $t \leq 0$. By the Mapping Theorem for Poisson processes
(see section 2.3 of Kingman (1993)), $K_N^*$ is a Poisson process with
intensity measure $\lambda \times N \mu_N$. The points in
$K_N^*$ can be ordered by their first coordinate, so we can write the
points as $(t_i, x_i, s_i)$ for positive integers $i$, where
$0 < t_1 < t_2 < \dots$ a.s. Also, define $t_0 = 0$.
We now define a ${\cal P}_n$-valued coalescent process
$\Pi_N = (\Pi_N(t), t \geq 0)$. Let $\Pi_N(0)$ be the partition $\kappa_0$
of $\{1, \dots, n\}$ into singletons. Given $\Pi_N(t_i)$ for some $i \geq 0$,
we define $\Pi_N(t)$ for $t_i < t \leq t_{i+1}$ in two steps. First,
we let the process obey the law of Kingman's coalescent
over the interval $(t_i, t_{i+1})$, meaning that each possible transition
that involves the merging of two blocks happens at rate one.
Second, let $\pi_{i+1}$ be a random partition of $\{1, \dots, n\}$,
independent of $(\Pi_N(t), 0 \leq t < t_{i+1})$, such that for an event
$A_{i+1}$ of probability $s_{i+1}$, we have $\pi_{i+1} = \kappa_0$
on $A_{i+1}^c$ and the conditional distribution of $\pi_{i+1}$ given
$A_{i+1}$ is $Q_{p,n}$, where $p = e^{-r_N(x_{i+1}) \log(2N)/s_{i+1}}$.
We then define $\Pi_N(t_{i+1})$ to be the coagulation of
$\Pi_N(t_{i+1}-)$ by $\pi_{i+1}$.
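For intuition, $Q_{p,n}$ can be sampled directly if one uses the description of Schweinsberg and Durrett (2004): each of the $n$ lineages independently joins a single merged block with probability $p$, and the remaining lineages stay singletons. The sketch below assumes this form of $Q_{p,n}$; see that paper for the precise definition.

```python
import random

def sample_Qpn(p, n, rng=random):
    """Sample a partition of {1,...,n} under the assumed form of Q_{p,n}:
    each element independently joins one common merged block with
    probability p; all other elements remain singletons.
    Returns a sorted list of sorted blocks."""
    merged = [i for i in range(1, n + 1) if rng.random() < p]
    blocks = [merged] if merged else []
    blocks += [[i] for i in range(1, n + 1) if i not in merged]
    return sorted(blocks)

print(sample_Qpn(0.0, 4))  # p = 0: all singletons
print(sample_Qpn(1.0, 4))  # p = 1: one block containing everything
```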
The lemma below states that the coalescent process $\Pi_N$ that we have
just defined is a coalescent with multiple collisions.
\begin{Lemma}
Let $\eta_N$ be the measure on $(0,1]$ such that
$$\eta_N([y,1]) = \int_{-L}^L \int_0^1
s 1_{\{e^{-r_N(x) \log(2N)/s} \geq y\}} \: N \mu_N(dx \times ds)$$
for all $y \in (0,1]$. Let $\Lambda_{0,N}$ be the measure on
$(0, 1]$ such that $\Lambda_{0,N}(dx) = x^2 \eta_N(dx)$, and
let $\Lambda_N = \delta_0 + \Lambda_{0,N}$. Then the process
$(\Pi_N(t), t \geq 0)$ is the ${\cal P}_n$-valued $\Lambda_N$-coalescent.
\label{coallem}
\end{Lemma}
\begin{proof}
Let $K_N'$ be the point process on $[0, \infty) \times {\cal P}_n$
consisting of the points $(t_i, \pi_i)$.
By the Marking Theorem for Poisson processes (see section 5.2 of
Kingman (1993)), $K_N'$ is also a Poisson point process. Given
$(t_i, x_i, s_i)$, the partition $\pi_i$ has distribution
$Q_{p,n}$, where $p = e^{-r_N(x_i) \log(2N)/s_i}$, conditional on an
event of probability $s_i$ and otherwise is $\kappa_0$.
Therefore, the intensity measure of $K_N'$ is given by $\lambda \times H$,
where, for $\pi \neq \kappa_0$,
\begin{align}
H(\pi) &= \int_{-L}^L \int_0^1
s Q_{e^{-r_N(x) \log(2N)/s}, n}(\pi) \: N \mu_N(dx \times ds) \nonumber \\
&= \int_0^1 Q_{p,n}(\pi) \: \eta_N(dp) = \int_0^1 Q_{p,n}(\pi)
p^{-2} \Lambda_{0,N}(dp). \nonumber
\end{align}
By comparing this with (\ref{cmcpois}) and recalling that
$\Pi_N$ follows the law of Kingman's coalescent over the intervals
$(t_{i-1}, t_i)$, we conclude that $\Pi_N$ is the $\Lambda_N$-coalescent.
\end{proof}
The next lemma states that it is unlikely for there to be a beneficial
allele in the population at any fixed time.
Recall that ${\cal T}_N = \{t: (t,x,s) \mbox{ is a point in }K_N
\mbox{ for some } x \mbox{ and }s\}$.
\begin{Lemma}
There exists a constant $C$, not depending on $N$, such that for any
fixed $y \in \R$, we have $P(y \in [t, \tau(t)] \mbox { for some }t \in
{\cal T}_N) \leq (C \log N)/N.$
\label{overlap}
\end{Lemma}
\begin{proof}
The points of ${\cal T}_N$ form a Poisson process on $\R$ of rate
$\gamma_N$, where $\gamma_N = \mu_N([-L,L] \times [0,1])$.
Recall from Lemma \ref{taulem} that if $\tau$ denotes the amount of time for
which a beneficial allele is present in between $1$ and $2N-1$ members of
the population, then $E[\tau] \leq 4(\log N + 1)$. Therefore,
\begin{align}
P(y \in [t, \tau(t)] \mbox{ for some }t \in {\cal T}_N) &\leq
\int_{-\infty}^y P(\tau \geq y - x) \gamma_N \: dx \nonumber \\
&= \gamma_N \int_0^{\infty} P(\tau \geq x) \: dx = \gamma_N E[\tau]
\leq 4 \gamma_N (\log N + 1). \nonumber
\end{align}
Since the measures $N \mu_N$ converge weakly to $\mu$, the sequence
$(N \gamma_N)_{N=1}^{\infty}$ converges to $\mu([-L,L] \times [0,1])$
and therefore is bounded. The lemma follows.
\end{proof}
We now show how to couple the processes $\Psi_N$ and $\Pi_N$ so that
they agree at a given finite set of times with high probability. We first
consider how the ancestral process $\Psi_N$ behaves around the times
$t_1, t_2, \dots$. For positive integers $i$, let $\tau_i = -\tau(-Nt_i)/N$.
We have $-Nt_i \in {\cal T}_N$. However, recall from subsection 2.2 that
not all points in ${\cal T}_N$ are in ${\cal T}'_N$ because some potential
mutations are discarded to avoid overlapping selective sweeps.
When $-Nt_i \in {\cal T}_N'$, there is a beneficial allele in the
population during the time interval $[-Nt_i, \tau(-Nt_i))$, and this
affects the process $\Psi_N$ over the interval $[\tau_i, t_i]$.
For each $i$ such that $-Nt_i \in {\cal T}_N'$, we can define a random
partition $\theta_i \in {\cal P}_n$ by choosing $n$ individuals from the
population at time $\tau(-Nt_i)$ and declaring two integers $j$ and $k$
to be in the same block of $\theta_i$ if and only if the $j$th and
$k$th individuals chosen got their
allele at the neutral site of interest from the same ancestor at
time $-Nt_i$. If $\tau_i > 0$ and the partition $\Psi_N(\tau_i)$ contains
$b_i$ blocks, we can choose the $n$ individuals at time $\tau(-Nt_i)$
by first picking the $b_i$ individuals that are ancestors of the $n$
individuals that were sampled at time zero, and then choosing the
remaining $n - b_i$ at random. This will ensure that, for $i$ such that
$-Nt_i \in {\cal T}_N'$ and $\tau_i > 0$, the random partition
$\Psi_N(t_i)$ is the coagulation of $\Psi_N(\tau_i)$ by $\theta_i$.
Moreover, the conditional distribution of $\theta_i$
given $(t_i, x_i, s_i)$ and given that $-Nt_i \in {\cal T}_N'$
is the same as the distribution of the random partition $\Theta$
defined in subsection 2.1, when the selective advantage is $s_i$ and
the recombination probability is $r_N(x_i)$. Recall that when a beneficial
mutation occurs in the population with selective advantage $s_i$, it spreads
to the entire population with probability $s_i/(1 - (1 - s_i)^{2N})$.
Therefore, by Proposition \ref{sweepprop1},
the distribution of $\Theta$ is approximately that of a random partition
that has distribution $Q_{p,n}$, where $p = e^{-r_N(x_i) \log(2N)/s_i}$,
on an event of probability $s_i/(1 - (1 - s_i)^{2N})$ and is $\kappa_0$
on the complementary event. However, this is the same as the conditional
distribution of $\pi_i$ given $(t_i, x_i, s_i)$, except we have
$s_i/(1 - (1 - s_i)^{2N})$ instead of $s_i$. It thus follows
from Proposition \ref{sweepprop1} that
we can couple the partitions $\theta_i$ and $\pi_i$ such that
for any $\delta > 0$,
\begin{equation}
P(\theta_i \neq \pi_i \mbox{ and }-Nt_i \in {\cal T}_N'|(t_i, x_i, s_i))
\leq \frac{C_{\delta}}{\log N} + 1_{\{s_i < \delta\}},
\label{couppart}
\end{equation}
where $C_{\delta}$ is a constant that depends on $\delta$. Note that
we only get the $O(1/(\log N))$ bound when $s_i \geq \delta$ because
of the assumption in Proposition \ref{sweepprop1} that $s$ is fixed.
Finally, we consider the processes during the intervals
$(t_i, t_{i+1})$. The process $\Pi_N$ behaves like Kingman's coalescent
during these intervals.
Let $${\cal I}_N^* = \bigcup_{i=1}^{\infty} [\tau_i, t_i].$$
The process $\Psi_N$ behaves like Kingman's coalescent during the
intervals in $(0, \infty) \setminus {\cal I}_N^*$ because the population
follows the Moran model during the corresponding intervals.
Therefore, if $\Pi_N(t_i) = \Psi_N(t_i)$, we can couple the processes
so that $\Pi_N(t) = \Psi_N(t)$ for all $t \in [t_i, \phi_i)$,
where $\phi_i = \inf\{t > t_i: t \in {\cal I}_N^* \}$.
\begin{Prop}
Suppose the processes $\Pi_N$ and $\Psi_N$ are coupled in the manner
described above. Let $0 < u_1 < \dots < u_m$ be fixed times.
Let $\epsilon > 0$. For sufficiently large $N$, we have
\begin{equation}
P(\Pi_N(u_i) \neq \Psi_N(u_i) \mbox{ for some }i \in \{1, \dots, m\})
< \epsilon.
\label{maincoup}
\end{equation}
\label{coupprop}
\end{Prop}
\begin{proof}
Let $K = \sup\{k: t_k \leq u_m\}$. Suppose the following conditions hold:
\begin{enumerate}
\item For $i = 1, \dots, m$, we have $u_i \notin {\cal I}_N^*$.
\item For all positive integers $i$, we have $\tau_i > 0$.
\item For $i = 1, \dots, K$, we have $-Nt_i \in {\cal T}_N'$.
\item For $i = 1, \dots, K$, we have $\Pi_N(\tau_i) = \Pi_N(t_i-)$.
\item For $i = 1, \dots, K$, we have $\theta_i = \pi_i$.
\end{enumerate}
Conditions 2 and 3 imply that
$$0 = t_0 < \tau_1 < t_1 < \tau_2 < t_2 < \dots < \tau_K < t_K \leq u_m.$$
Condition 1 with $i = m$ implies further that $\tau_j > u_m$ for all $j > K$,
so $(t_K, u_m] \subset \R \setminus {\cal I}_N^*$.
We know that $\Pi_N(t_0) = \Psi_N(t_0) = \kappa_0$. Suppose, for
some $i \in \{0, \dots, K-1\}$, that $\Pi_N(t_i) = \Psi_N(t_i)$.
Then the coupling gives $\Pi_N(t) = \Psi_N(t)$ for all
$t \in [t_i, \tau_{i+1})$. Condition 4 gives
$\Pi_N(\tau_{i+1}) = \Pi_N(t_{i+1}-)$. Conditions 2 and 3 imply that
$\Psi_N(t_{i+1})$ is the coagulation of $\Psi_N(\tau_{i+1})$ by $\theta_{i+1}$.
Since $\Pi_N(t_{i+1})$ is the coagulation of $\Pi_N(t_{i+1}-)$ by $\pi_{i+1}$,
condition 5 ensures that $\Pi_N(t_{i+1}) = \Psi_N(t_{i+1})$.
Thus, $\Pi_N(t_i) = \Psi_N(t_i)$ for $i = 0, 1, \dots, K$,
and the coupling combined with the fact that $\Pi_N(t_K) = \Psi_N(t_K)$
gives $\Pi_N(t) = \Psi_N(t)$ for all $t \in (t_K, u_m]$. Thus,
we have $\Pi_N(t) = \Psi_N(t)$ for all $t \in [0, u_m] \setminus {\cal I}_N^*$.
Therefore, by condition 1, $\Pi_N(u_i) = \Psi_N(u_i)$ for $i = 1, \dots, m$.
It thus remains only to show that conditions 1 through 5 occur with
high probability. For the rest of the proof, we allow the constant $C$
to change from line to line.
If $u_i \in {\cal I}_N^*$, then there exists $t \in {\cal T}_N$ such that
$-Nu_i \in [t, \tau(t)]$. Therefore, by Lemma \ref{overlap},
$$P(u_i \in {\cal I}_N^* \mbox{ for some }i \in \{1, \dots, m\}) \leq
\frac{C \log N}{N}.$$ Likewise,
if $\tau_i < 0$ for some $i$, then $-Nt_i \leq 0 \leq \tau(-Nt_i)$
and $-Nt_i \in {\cal T}_N$. It follows that $P(\tau_i < 0 \mbox{ for some }i)
\leq C(\log N)/N$ by Lemma \ref{overlap}.
To deal with conditions 3, 4, and 5, let $l_j = ju_m/N$ for
$j = 0, 1, \dots, N$, and define the intervals $I_1, \dots, I_N$
by $I_j = [l_{j-1}, l_j]$. Note that the number of the points $t_i$
in an interval $I_j$ is Poisson with mean $u_m \gamma_N$. Therefore,
the probability that some point $t_i$ falls in $I_j$ is at most
$u_m \gamma_N \leq C/N$.
The probability that two or more points fall in $I_j$ is at most
$u_m^2 \gamma_N^2 \leq C/N^2$.
If there is a point $t_i \in I_j$ with $-Nt_i \notin {\cal T}_N'$,
then either there are two points in $I_j$ or there is one point in
$I_j$ and $l_j \in {\cal I}_N^*$. The event that there is at least
one point in $I_j$ is independent of the event that $l_j \in {\cal I}_N^*$,
so using Lemma \ref{overlap} again, the probability that both occur
is at most $C(\log N)/N^2$.
When the number of blocks in the coalescent is at most $n$, the
total transition rate of the process $\Pi_N$ is bounded by
$\binom{n}{2} + N \gamma_N$. The probability that there is any
point $t_i$ in $I_j$ is at most $C/N$, so by Lemma \ref{taulem},
\begin{equation}
P(\Pi_N(\tau_i) \neq \Pi_N(t_i-) \mbox{ for some }t_i \in I_j) \leq
\frac{C}{N} \bigg(
\binom{n}{2} + N\gamma_N \bigg) E[t_i - \tau_i] \leq \frac{C \log N}{N^2}.
\nonumber
\end{equation}
Finally, we may choose $\delta$ small enough that
$P(s_i \leq \delta) < \epsilon$, and then (\ref{couppart}) gives
$$P(\theta_i \neq \pi_i|t_i \in I_j) < \epsilon + \frac{C_{\delta}}{\log N},$$
where $C_{\delta}$ is a constant that depends on $\delta$. Therefore,
$P(\theta_i \neq \pi_i \mbox{ for some } t_i \in I_j)
\leq \epsilon/N + C_{\delta}/(N \log N)$. Since there are only
$N$ intervals $I_j$, we can add these bounds to show that the probability
that conditions 1 through 5 all hold is at least $1 - \epsilon$ for
sufficiently large $N$, which implies the statement of the proposition.
\end{proof}
\begin{proof}[Proof of Theorem \ref{mainth}]
Let $0 < u_1 < \dots < u_m$ be fixed times. Let $\epsilon > 0$.
Define $\Lambda_N$ as in Lemma \ref{coallem}, and let $\Pi_N$ be
a ${\cal P}_n$-valued $\Lambda_N$-coalescent. In view of Proposition
\ref{coupprop}, it suffices to show that for all $\pi_1, \dots, \pi_m
\in {\cal P}_n$, we have
$$|P(\Pi_N(u_i) = \pi_i \mbox{ for all } i \in \{1, \dots, m\}) -
P(\Pi(u_i) = \pi_i \mbox{ for all } i \in \{1, \dots, m\})| < \epsilon$$
for sufficiently large $N$. Therefore (see Pitman (1999)), it
suffices to show that the measures $\Lambda_N$ converge weakly to
$\Lambda$. Thus, we need to show (see Billingsley (1999), Theorem 2.1)
that for any bounded uniformly continuous function $h$ on $[0,1]$,
we have $\int_0^1 h(x) \: \Lambda_N(dx) \rightarrow \int_0^1 h(x) \: \Lambda(dx)$
as $N \rightarrow \infty$. By the definitions of $\Lambda_N$ and
$\Lambda$, it suffices to show that
$\int_0^1 h(x) \: \eta_N(dx) \rightarrow \int_0^1 h(x) \: \eta(dx)$
as $N \rightarrow \infty$ for
any bounded uniformly continuous function $h$ on $(0,1]$.
By the definitions of the measures
$\eta_N$ and $\eta$, this is equivalent to showing that
\begin{equation}
\lim_{N \rightarrow \infty} \int_{-L}^L \int_0^1 s h \big(
e^{-r_N(x) \log(2N)/s} \big) \: N \mu_N(dx \times ds) =
\int_{-L}^L \int_0^1 s h \big(e^{-r(x)/s} \big) \: \mu(dx \times ds)
\label{weakconv}
\end{equation}
for any bounded uniformly continuous function $h$ on $(0,1]$.
However, it is easy to deduce (\ref{weakconv}) from the
boundedness and uniform continuity of $h$, the uniform convergence of
$(\log 2N) r_N$ to $r$, the continuity of $r$, and the weak convergence
of the measures $N \mu_N$ to $\mu$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{coalprop}] Proposition
\ref{coalprop} can be proved by repeating the proof of Proposition
\ref{coupprop} with minor changes. To prove the
first part of the proposition, we construct the coalescent process $\Pi_N$
as before. Because $N \mu_N = \mu$ and $\log(2N) r_N = r$
for all $N$, we have $\Lambda_N = \Lambda$ for all $N$.
It follows from Lemma \ref{coallem} that $\Pi_N$ is a $\Lambda$-coalescent
for all $N$. Thus, it suffices to show (\ref{maincoup}), but with
$C/(\log N)$ on the right-hand side instead of $\epsilon$.
Because we are assuming that $\mu$ is concentrated on
$[-L, L] \times [\epsilon, 1]$ for some $\epsilon > 0$, we can
choose $\delta = \epsilon$ and drop the indicator from the
right-hand side of (\ref{couppart}) to get a bound of
$C_{\epsilon}/(\log N)$. We then obtain $C/(\log N)$ on the
right-hand side of (\ref{maincoup}) by following the same steps as before.
To prove the second part of Proposition \ref{coalprop}, we modify the
definition of $\Pi_N$. Conditional on $A_i$,
we give $\pi_i$ the distribution
$Q_{R(r_N(x_i)/s_i, \lfloor 2N s_i \rfloor), n}$. We set $\pi_i = \kappa_0$
on $A_i^c$.
The intensity measure of $K_N'$ is then given by $\lambda \times J$, where,
for all $\pi \neq \kappa_0$, we have
\begin{align}
J(\pi) &= \int_{-L}^L \int_0^1 s
Q_{R(r_N(x)/s, \lfloor 2N s \rfloor), n}(\pi) \: N \mu_N(dx \times ds)
\nonumber \\
&= \int_{\Delta} Q_{\delta_x, n}(\pi) \: G_N(dx)
= \int_{\Delta} Q_{\delta_x, n}(\pi)
\bigg( \sum_{j=1}^{\infty} x_j^2 \bigg)^{-1} \: \Xi_{N,0}(dx). \nonumber
\end{align}
By comparing this with (\ref{csmcpois}),
we see that the process $\Pi_N$ is a $\Xi_N$-coalescent.
It follows from Proposition \ref{sweepprop2} that we
can replace $C_{\delta}/(\log N)$ on the right-hand side of (\ref{couppart})
by $C_{\delta}/(\log N)^2$. This gives the second part of the proposition.
\end{proof}
\bigskip
\begin{center}
{\bf {\Large References}}
\end{center}
\bigskip\noindent N. H. Barton (1995). Linkage and the limits to natural selection.
{\it Genetics}, {\bf 140}, 821-841.
\bigskip\noindent N. H. Barton (1998). The effect of hitch-hiking on neutral genealogies.
{\it Genet. Res.} {\bf 72}, 123-133.
\bigskip\noindent N. H. Barton, A. M. Etheridge, and A. K. Sturm (2004). Coalescence in a
random background. {\it Ann. Appl. Probab}. {\bf 14}, 754-785.
\bigskip\noindent J. Bertoin and J.-F. LeGall (2003). Stochastic flows associated
to coalescent processes. {\it Probab. Theory Relat. Fields},
{\bf 126}, 261-288.
\bigskip\noindent P. Billingsley (1999). {\it Convergence of Probability Measures}.
2nd ed. New York, Wiley.
\bigskip\noindent J. M. Braverman, R. R. Hudson, N. L. Kaplan, C. H. Langley,
and W. Stephan (1995). The hitchhiking effect on the site frequency spectrum
of DNA polymorphisms. {\it Genetics}, {\bf 140}, 783-796.
\bigskip\noindent P. Donnelly and T. G. Kurtz (1999). Genealogical processes for
Fleming-Viot models with selection and recombination. {\it Ann. Appl.
Prob.} {\bf 9}, 1091-1148.
\bigskip\noindent R. Durrett (2002). {\it Probability Models for DNA Sequence
Evolution}. New York, Springer-Verlag.
\bigskip\noindent R. Durrett and J. Schweinsberg (2004a). Approximating selective
sweeps. {\it Theor. Popul. Biol.} {\bf 66}, 129-138.
\bigskip\noindent R. Durrett and J. Schweinsberg (2004b). Power laws for family sizes
in a duplication model. Preprint.
Available at http://front.math.ucdavis.edu/math.PR/0406216.
\bigskip\noindent Y. X. Fu and W. H. Li (1993). Statistical tests of neutrality of mutations.
{\it Genetics}, {\bf 133}, 693-709.
\bigskip\noindent P. J. Gerrish and R. E. Lenski (1998). The fate of competing
beneficial mutations in an asexual population. {\it Genetica},
{\bf 102/103}, 127-144.
\bigskip\noindent J. H. Gillespie (2000). Genetic drift in an infinite population:
the pseudohitchhiking model. {\it Genetics}, {\bf 155}, 909-919.
\bigskip\noindent N. L. Kaplan, R. R. Hudson, and C. H. Langley (1989).
The ``hitchhiking effect'' revisited. {\it Genetics}, {\bf 123}, 887-899.
\bigskip\noindent Y. Kim and W. Stephan (2003). Selective sweeps in the presence
of interference among partially linked loci. {\it Genetics},
{\bf 164}, 389-398.
\bigskip\noindent J. F. C. Kingman (1978). The representation of partition
structures. {\it J. London Math. Soc.} {\bf 18}, 374-380.
\bigskip\noindent J. F. C. Kingman (1982). The coalescent.
{\it Stochastic Process. Appl.} {\bf 13}, 235-248.
\bigskip\noindent J. F. C. Kingman (1993). {\it Poisson processes}. Oxford, Clarendon Press.
\bigskip\noindent S. M. Krone and C. Neuhauser (1997). Ancestral processes with
selection. {\it Theor. Popul. Biol.} {\bf 51}, 210-237.
\bigskip\noindent J. Maynard Smith and J. Haigh (1974). The hitchhiking effect of a
favorable gene. {\it Genet. Res.} {\bf 23}, 23-35.
\bigskip\noindent M. M{\"o}hle and S. Sagitov (2001).
A classification of coalescent processes for haploid exchangeable
population models. {\it Ann. Probab.} {\bf 29}, 1547-1562.
\bigskip\noindent P. A. P. Moran (1958). Random processes in genetics.
{\it Proc. Cambridge Philos. Soc.} {\bf 54}, 60-71.
\bigskip\noindent C. Neuhauser and S. M. Krone (1997). The genealogy of samples in
models with selection. {\it Genetics}, {\bf 145}, 519-534.
\bigskip\noindent J. Pitman (1999). Coalescents with multiple collisions.
{\it Ann. Probab.} {\bf 27}, 1870-1902.
\bigskip\noindent M. Przeworski (2002). The signature of positive selection at
randomly chosen loci. {\it Genetics}, {\bf 160}, 1179-1189.
\bigskip\noindent S. Sagitov (1999). The general coalescent with asynchronous
mergers of ancestral lines. {\it J. Appl. Probab.} {\bf 36}, 1116-1125.
\bigskip\noindent S. Sagitov (2003). Convergence to the coalescent with simultaneous
multiple mergers. {\it J. Appl. Probab.} {\bf 40}, 839-854.
\bigskip\noindent J. Schweinsberg (2000). Coalescents with simultaneous multiple
collisions. {\it Electron. J. Probab.} {\bf 5}, 1-50.
\bigskip\noindent J. Schweinsberg (2003). Coalescent processes obtained from
supercritical Galton-Watson processes. {\it Stochastic Process. Appl.}
{\bf 106}, 107-139.
\bigskip\noindent J. Schweinsberg and R. Durrett (2004). Random partitions approximating
the coalescence of lineages during a selective sweep. Preprint.
Available at http://front.math.ucdavis.edu/math.PR/0411069.
\bigskip\noindent K. Simonsen, G. A. Churchill, and C. F. Aquadro (1995).
Properties of statistical tests of neutrality for DNA polymorphism data.
{\it Genetics}, {\bf 141}, 413-429.
\bigskip\noindent W. Stephan, T. Wiehe, and M. W. Lenz (1992). The effect of
strongly selected substitutions on neutral polymorphism: Analytical
results based on diffusion theory. {\it Theor. Popul. Biol.} {\bf 41}, 237-254.
\bigskip\noindent F. Tajima (1989). Statistical method for testing the neutral
mutation hypothesis by DNA polymorphism. {\it Genetics}, {\bf 123}, 585-595.
\end{document}
\section{Introduction}
In the study of the Ricci flow on a Riemannian manifold, one meets Ricci
solitons (see \cite{H95}). Ricci solitons and their relatives,
K\"ahler-Ricci solitons, are important objects in their own right
(see also \cite{B04}, \cite{C96}, and \cite{T04}).
Let $(X,g)$ be a Riemannian manifold of dimension $n$, and let $Rc$
be the Ricci tensor of the metric $g$. The equation for a homothetic Ricci
soliton is
$$
Rc=cg+{L}_Vg
$$
where $c$ is a homothetic constant, $V$ is a smooth vector field
on $X$, and ${L}_Vg$ is the Lie derivative of the metric $g$. When
$c=0$, the soliton is steady. For $c>0$ the soliton is shrinking,
and one can consider the Ricci flow on the sphere as such an
example. For $c<0$ the soliton is expanding. When $V$ is the
gradient of a smooth function, we call such solitons {\em
Gradient Homothetic Ricci Solitons}. Let $R$ be the scalar
curvature of $g$. An important equation in Riemannian geometry and
general relativity theory is the so called Einstein equation:
$$
E_{ij}=T_{ij},
$$
where $E=Rc-\frac{R}{2}g$, and $T$ is the energy momentum tensor
in the space. The tensor $T$ is sometimes itself unknown.
So it is interesting to know whether $T$ is the Hessian matrix of
a smooth function, and to explore more properties about this
function.
In this short paper we restrict ourselves to the following
question about the gradient Ricci soliton on $X$.
We deduce a Liouville type theorem for a smooth
solution $f$ for the equation
$$
Rc=D^2f
$$
on $X$,
where $D^2f$ is the Hessian matrix of the function $f$.
Taking
the trace of the equation above, we get that
$$
R=\Delta f.
$$
Here we denote by $R$ the scalar curvature of the metric
$g$ and $\Delta f$ the Laplacian of $f$. If such a smooth function exists,
we call $(X,g)$ a {\em gradient Ricci soliton}.
In the
following, we write $D$ for the covariant derivative of $g$.
As in \cite{H95}, we can derive (see the next section) that there is a constant $M$
such that
$$
|Df|^2+R=M
$$
on $X$. Then we have the following simple observation.
\begin{Prop} Assume $X$ is compact. Then $(X,g)$ is Ricci-flat.
\end{Prop}
\begin{proof}
In fact, we have
$$
\Delta f+|Df|^2=M.
$$
Set
$$
u=e^f.
$$
Then we have that $u>0$ and
$$
\Delta u=Mu.
$$
Integrating over the compact manifold $X$, we find
$$ M\int_X u=0.
$$
Then we have $M=0$. Hence, $u$ is a harmonic function. By the
Maximum principle, we know that $u$ is a constant, so $f$ is a
constant. This implies that $Rc=0$ and $(X,g)$ is Ricci-flat.
\end{proof}
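The identity $\Delta e^f=(\Delta f+|Df|^2)e^f$ behind the substitution $u=e^f$ can be sanity-checked symbolically; the sketch below (added for illustration) works in flat coordinates on the plane for simplicity, since on $(X,g)$ the same Leibniz computation holds with covariant derivatives.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + sp.sin(x * y)          # an arbitrary test function
u = sp.exp(f)

# Flat Laplacian and squared gradient in Euclidean coordinates
lap = lambda h: sp.diff(h, x, 2) + sp.diff(h, y, 2)
grad_sq = sp.diff(f, x)**2 + sp.diff(f, y)**2

lhs = lap(u)
rhs = (lap(f) + grad_sq) * u
print(sp.simplify(lhs - rhs))  # 0, confirming the identity
```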
We want to generalize the above result. Observe that if $(X,g)$
is non-compact and complete, the situation is not so simple.
However, if $R(x)\to 0$ as $r:=r(x)\to\infty$, then we can get some conclusions.
Here
$r(x)=dist(x,o)$ is the distance of the point $x$ from a fixed point $o$.
Since $M\geq R$ and $R$ can be very small at infinity, we have
$M\geq 0$. If we also assume that $|Df(x)|\to 0$ as $r(x)\to
\infty$, then we must have that $M=0$. Hence $u$ is again a positive
harmonic function.
If we further assume that $Rc\geq
0$, then we must have by the Liouville theorem of Yau \cite{Y} that
$u$ is a constant.
So we have obtained the following result.
\begin{Prop} Assume $X$ is a non-compact complete Riemannian manifold
with non-negative Ricci curvature. If either 1) $\int_X
|R|^p<+\infty$ and $\int_X |f|^p<+\infty$ or $\int_X
|Df|^p<+\infty$ (for some $p\geq 1$), or 2) both $R$ and $f$
decay to zero at infinity, then $f$ is a constant and $(X,g)$ is
Ricci-flat.
\end{Prop}
\begin{proof} In both cases we can easily conclude that $M=0$.
Then we have that
$$
\Delta u=M=0.
$$
By the Liouville theorem of Yau \cite{Y} we have that $u$ is a
constant.
\end{proof}
Now the nontrivial matter is to treat the case when $Rc$ has no
sign assumption. The natural approach is to use the cut-off
function trick. In fact, it works in some cases. We have the
following theorem, which is the main result of this paper.
\begin{Thm}
Assume $X$ is a non-compact complete Riemannian manifold which is
quasi-isometric to Euclidean space at infinity. If $n\geq 3$ and
$Rc$ satisfies that $$\int_X|Rc|^{n/2}<+\infty ,$$ then $f$ is a
constant and $(X,g)$ is Ricci-flat and ALE of order $n-1$. If we
further assume that $n=4$, then $(X,g)$ is ALE of order $4$.
\end{Thm}
The definition about ALE space will be given in the next section.
In our proof of the result above, we will use the Bochner formula,
Moser iteration method, and the interesting result of
Bando-Kasue-Nakajima \cite{BKN89}. Our idea can also be used to
study K\"ahler-Ricci solitons and related questions for the Einstein
equations.
\section{Notations, definitions, and basic facts}
In local coordinates $(x^i)$ of the Riemannian manifold $(X,g)$,
we write the metric $g$ as $(g_{ij})$. The corresponding
Riemannian curvature tensor and Ricci tensor are denoted by
$Rm=(R_{ijkl})$ and $Rc=(R_{ij})$ respectively. Hence,
$$
R_{ij}=g^{kl}R_{ikjl}
$$
and
$$
R=g^{ij}R_{ij}.
$$
We denote the covariant derivative of a smooth function $f$ by
$Df=(f_i)$, and the Hessian matrix of the function $f$ by
$D^2f=(f_{ij})$, where $D$ is the covariant derivative of $g$ on
$X$.
The
higher order covariant derivatives are denoted by $f_{ijk}$, etc.
Similarly, we use $T_{ij,k}$ to denote the covariant
derivative of the tensor $(T_{ij})$. We write
$T_j^i=g^{ik}T_{jk}$. Then the Ricci soliton equation is
$$
R_{ij}=f_{ij}.
$$
Taking covariant derivative we get
$$
f_{ijk}=R_{ij,k}.
$$
So we have
$$
f_{ijk}-f_{ikj}=R_{ij,k}-R_{ik,j}.
$$
By the Ricci formula we have that
$$
f_{ijk}-f_{ikj}=R_{ijk}^lf_l.
$$
Hence we obtain that
$$
R_{ij,k}-R_{ik,j}=R_{ijk}^lf_l.
$$
Recall that the contracted Bianchi identity is
$$
R_{ij,j}=\frac{1}{2}R_i.
$$
Upon taking the trace of the previous equation we get that
$$
\frac{1}{2}R_i+R_{i}^kf_k=0,
$$
i.e.,
$$
R_k=-2R_{k}^jf_j.
$$
Then
$$
D_k(|Df|^2+R)=2f_j(f_{jk}-R_{jk})=0.
$$
So, $|Df|^2+R$ is a constant, which is denoted by $M$.
To analyze the global behavior of the geometry of the manifold
$X$,
we need the following definition of quasi-isometry.
\begin{Def}
We say that $(X,g)$ is quasi-isometric to the Euclidean space
$R^n$ if there is a compact subset $K$ in $X$ such that the metric
on $X-K$ is uniformly equivalent to the Euclidean metric on $R^n-B_1(0)$.
\end{Def}
It is well-known that for such a Riemannian manifold, the Sobolev
inequality is true. We shall use this fact to make the Moser
iteration method work in $X$. A special case of such a manifold
with one end is called {\em asymptotically locally Euclidean}
(ALE) of order $\tau>0$ (see \cite{B86}); namely, there are
constants $\rho>0$, $\alpha \in (0,1)$ and a
$C^{\infty}$-diffeomorphism
$$\Xi: X-K\to R^n-B_{\rho}(0)$$ such that $\phi=\Xi^{-1}$ satisfies
$$
(\phi^{*}g)_{ij}(x)=\delta_{ij}+O(|x|^{-\tau}),
$$
$$
\partial_k(\phi^{*}g)_{ij}(x)=O(|x|^{-1-\tau}),
$$
and
$$
\frac{|\partial_k(\phi^{*}g)_{ij}(x)-\partial_k(\phi^{*}g)_{ij}(z)|}{|x-z|^{\alpha}}
=O(|x|^{-1-\alpha-\tau},|z|^{-1-\alpha-\tau})
$$
for any $x,z\in R^n-B_{\rho}(0)$.
\section{Proof of Theorem 3}
We now recall the Bochner formula for any smooth function $w$ on $X$:
$$
\frac{1}{2}\Delta|D w|^2=|D^2w|^2+<D\Delta w,Dw>+Rc(Dw,Dw).
$$
For our function $f$ in the Ricci soliton equation, we have that
$$
\frac{1}{2}\Delta|Df|^2=|D^2f|^2+<DR,Df>+Rc(Df,Df).
$$
Recall that
$$
R_k=-2R_{kj}f_j.
$$
Then we have
$$ <DR,Df>=-2Rc(Df,Df).
$$
So we have
$$
\frac{1}{2}\Delta|Df|^2=|Rc|^2-Rc(Df,Df).
$$
This equation can also be written as
$$
-\frac{1}{2}\Delta R=|Rc|^2+\frac{1}{2}(DR,Df),
$$
from which we know the following inequality
$$
\frac{1}{2}\Delta|Df|^2\geq-|Rc||Df|^2.
$$
We shall use this differential inequality to study the
behavior of $|Df|$ at infinity. Let $w=|Df|^2$. Then $w=|R|$.
Using the assumption that
$$ \int
|Rc|^{n/2}<+\infty
$$
we get that
$$ \int
|w|^{n/2}<+\infty.
$$
Using Moser's iteration method
(see Lemmas 4.1, 4.2 and 4.6 in \cite{BKN89}), we know that there is a
constant $\alpha>0$ such that
$$
w=O(r^{-\alpha}).
$$
That is,
$$
|R(x)|=|Df|^2(x)=O(r^{-\alpha}).
$$
This implies that $M=0$, and hence $R\leq 0$ on $X$. Applying the maximum
principle to the equation
\begin{equation}
-\frac{1}{2}\Delta R=|Rc|^2+\frac{1}{2}\langle DR,Df\rangle
\end{equation}
and the fact that $$ |Rc|^2\geq \frac{1}{n}R^2,
$$
we find that the minimum of $R$ on $X$ cannot be negative: at an
interior minimum we would have $DR=0$ and $\Delta R\geq 0$, whereas the
right-hand side would be at least $\frac{1}{n}R^2>0$, a contradiction.
Hence $R=0$ on $X$, and then (1) forces
$Rc=0$. Using Theorem 1.5 in \cite{BKN89} we conclude that $(X,g)$ is
ALE of order $n-1$. If moreover $n=4$, then $(X,g)$ is ALE of order $4$.
This proves our Theorem 3.
{\bf Acknowledgement}. The author thanks Mr. D.Z. Chen for
checking spelling mistakes in a previous version of this paper.
\section{Measuring electron spin in quantum dots}
\label{spin}
In quantum dot devices, single electron charges are easily measured. Spin states in quantum dots, however, have only been studied by measuring the average signal from a large ensemble of electron spins [17-22]. In contrast, the experiment presented here aims at a single-shot measurement of the spin orientation (parallel or antiparallel to the field, denoted as spin-$\uparrow$ and spin-$\downarrow$, respectively) of a particular electron; only one copy of the electron is available, so no averaging is possible. The spin measurement relies on spin-to-charge conversion \cite{fujisawa02,rh03} followed by charge measurement in a single-shot mode \cite{lu,fujisawa04}. Figure~\ref{fig5:principle}a schematically shows a single electron spin confined in a quantum dot (circle). A magnetic field is applied to split the spin-$\uparrow$ and spin-$\downarrow$ states by the Zeeman energy. The dot potential is then tuned such that if the electron has spin-$\downarrow$ it will leave, whereas it will stay on the dot if it has spin-$\uparrow$. The spin state has now been correlated with the charge state, and measurement of the charge on the dot will reveal the original spin state.
\section{Implementation}
\label{implementation}
This concept is implemented using a structure \cite{jme03} (Fig.~\ref{fig5:principle}b) consisting of a quantum dot in close proximity to a quantum point contact (QPC). The quantum dot is used as a box to trap a single electron, and the QPC is operated as a charge detector in order to determine whether the dot contains an electron or not. The quantum dot is formed in the two-dimensional electron gas (2DEG) of a GaAs/AlGaAs heterostructure by applying negative voltages to the metal surface gates $M$, $R$, and $T$. This depletes the 2DEG below the gates and creates a potential minimum in the centre, that is, the dot (indicated by a dotted white circle). We tune the gate voltages such that the dot contains either zero or one electron (which we can control by the voltage applied to gate $P$). Furthermore, we make the tunnel barrier between gates $R$ and $T$ sufficiently opaque that the dot is completely isolated from the drain contact on the right. The barrier to the reservoir on the left is set \cite{jme04} to a tunnel rate $\Gamma \approx (0.05$ ms$)^{-1}$. When an electron tunnels on or off the dot, it changes the electrostatic potential in its vicinity, including the region of the nearby QPC (defined by $R$ and $Q$). The QPC is set in the tunnelling regime, so that the current, $I_{QPC}$, is very sensitive to electrostatic changes \cite{field}. Recording changes in $I_{QPC}$ thus permits us to measure on a timescale of about 8 $\mu$s whether an electron resides on the dot or not \cite{LMKV}. In this way the QPC is used as a charge detector with a resolution much better than a single electron charge and a measurement timescale almost ten times shorter than $1/\Gamma$.
\begin{figure}[t]
\centering
\includegraphics[width=11.5cm, clip=true]{Fig1.eps}
\caption{Spin-to-charge conversion in a quantum dot coupled to a quantum point contact.
\textbf{(a)} Principle of spin-to-charge conversion. The charge on the quantum dot, $Q_{dot}$, remains constant if the electron spin is $\uparrow$, whereas a spin-$\downarrow$ electron can escape, thereby changing $Q_{dot}$.
\textbf{(b)} Scanning electron micrograph of the metallic gates on the surface of a GaAs/Al$_{0.27}$Ga$_{0.73}$As heterostructure containing a two-dimensional electron gas (2DEG) 90 nm below the surface. The electron density is ${2.9\times 10^{15}}$ m${^{-2}}$. (Only the gates used in the present experiment are shown, the complete device is described in Ref.~\protect\cite{jme03}.) Electrical contact is made to the QPC source and drain and to the reservoir via Ohmic contacts. With a source-drain bias voltage of 1 mV, $I_{QPC}$ is about 30 nA, and an individual electron tunnelling on or off the dot changes $I_{QPC}$ by $\sim0.3$ nA. The QPC-current is sent to a room temperature current-to-voltage convertor, followed by a gain 1 isolation amplifier, an AC-coupled 40 kHz SRS650 low-pass filter, and is digitized at a rate of $2.2\times 10^6 $ samples/s. With this arrangement, the step in $I_{QPC}$ resulting from an electron tunnelling is clearly larger than the rms noise level, provided it lasts at least 8 $\mu$s. A magnetic field, ${B}$, is applied in the plane of the 2DEG.}
\label{fig5:principle}
\end{figure}
The device is placed inside a dilution refrigerator, and is subject to a magnetic field of 10 T (unless noted otherwise) in the plane of the 2DEG. The measured Zeeman splitting in the dot \cite{rh03}, $\Delta E_Z \approx 200 \mu$eV, is larger than the thermal energy (25 $\mu$eV) but smaller than the orbital energy level spacing (1.1 meV) and the charging energy (2.5 meV).
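As a rough consistency check on these energy scales, the quoted values can be compared numerically (a sketch; the bulk-GaAs g-factor $|g|\approx 0.44$ used below is an assumption, and indeed overestimates the measured splitting, which corresponds to a smaller effective g-factor):

```python
# Energy scales in the spin read-out experiment (values from the text).
# Assumption: bulk-GaAs g-factor |g| = 0.44; the measured splitting
# (~200 ueV at 10 T) corresponds to a somewhat smaller effective |g|.
MU_B = 57.88  # Bohr magneton in ueV/T
K_B = 86.17   # Boltzmann constant in ueV/K

B = 10.0                           # magnetic field in T
delta_E_zeeman = 0.44 * MU_B * B   # ueV, for the assumed bulk g-factor
T_electron = 25.0 / K_B            # K, from the quoted thermal energy of 25 ueV

print(f"Zeeman splitting (bulk g): {delta_E_zeeman:.0f} ueV")
print(f"Electron temperature: {T_electron * 1e3:.0f} mK")
# Quoted hierarchy: 25 ueV (thermal) << ~200 ueV (Zeeman)
#                   << 1.1 meV (orbital) << 2.5 meV (charging)
```

The quoted thermal energy of 25 $\mu$eV thus corresponds to an electron temperature of roughly 290 mK.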
\section{Two-level pulse technique}
\label{2levelpulse}
To test our single-spin measurement technique, we use an experimental procedure based on three stages: (1) empty the dot, (2) inject one electron with unknown spin, and (3) measure its spin state. The different stages are controlled by voltage pulses on gate $P$ (Fig.~\ref{fig5:diagrams}a), which shift the dot's energy levels (Fig.~\ref{fig5:diagrams}c). Before the pulse the dot is empty, as both the spin-$\uparrow$ and spin-$\downarrow$ levels are above the Fermi energy of the reservoir, $E_F$. Then a voltage pulse pulls both levels below $E_F$. It is now energetically allowed for an electron to tunnel onto the dot, which will happen after a typical time $\sim \Gamma^{-1}$. The particular electron can have spin-$\uparrow$ (shown in the lower diagram) or spin-$\downarrow$ (upper diagram). (The tunnel rate for spin-$\uparrow$ electrons is expected to be larger than that for spin-$\downarrow$ electrons \cite{rh04}, i.e. $\Gamma_{\uparrow} > \Gamma_{\downarrow}$, but we do not assume this a priori.) During this stage of the pulse, lasting $t_{wait}$, the electron is trapped on the dot and Coulomb blockade prevents a second electron from being added. After $t_{wait}$ the pulse is reduced, in order to position the energy levels in the read-out configuration. If the electron spin is $\uparrow$, its energy level is below $E_F$, so the electron remains on the dot. If the spin is $\downarrow$, its energy level is above $E_F$, so the electron tunnels to the reservoir after a typical time $\sim \Gamma_{\downarrow}^{-1}$. Now Coulomb blockade is lifted and an electron with spin-$\uparrow$ can tunnel onto the dot. This occurs on a timescale $\sim \Gamma_{\uparrow}^{-1}$ (with $\Gamma=\Gamma_{\uparrow}+\Gamma_{\downarrow}$). After $t_{read}$, the pulse ends and the dot is emptied again.
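The read-out stage described above can be caricatured by a small Monte-Carlo sketch (our illustration, not the authors' analysis): a spin-$\downarrow$ electron escapes after an exponentially distributed time with rate $\Gamma_{\downarrow}$, and a step is seen only if the escape occurs within $t_{read}$; spin relaxation during read-out and the finite detector bandwidth are deliberately ignored here.

```python
import numpy as np

# Toy model of the read-out stage (illustrative sketch only).
# Assumption: escape of a spin-down electron is a Poisson process with
# rate gamma_down; relaxation and detector bandwidth are ignored.
rng = np.random.default_rng(0)

gamma_down = 1 / 0.11   # escape rate in 1/ms (1/Gamma_down ~ 0.11 ms)
t_read = 0.5            # read-out duration in ms

n_shots = 20_000
escape_times = rng.exponential(1 / gamma_down, size=n_shots)
detected = np.mean(escape_times < t_read)  # fraction showing a step

analytic = 1 - np.exp(-gamma_down * t_read)
print(f"Monte Carlo: {detected:.3f}, analytic: {analytic:.3f}")
```

With these numbers the escape probability within $t_{read}$ is close to unity, so (in this idealisation) a spin-$\downarrow$ electron is almost never missed simply because it failed to tunnel out in time.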
\begin{figure}[htbp]
\centering
\includegraphics[width=10.1cm, clip=true]{Fig2.eps}
\caption{Two-level pulse technique used to inject a single electron and measure its spin orientation.
\textbf{(a)} Shape of the voltage pulse applied to gate $P$. The pulse level is 10 mV during $t_{wait}$ and 5 mV during $t_{read}$ (which is 0.5 ms for all measurements).
\textbf{(b)} Schematic QPC pulse-response if the injected electron has spin-$\uparrow$ (solid line) or spin-$\downarrow$ (dotted line; the difference with the solid line is only seen during the read-out stage). Arrows indicate the moment an electron tunnels into or out of the quantum dot.
\textbf{(c)} Schematic energy diagrams for spin-$\uparrow$ ($E_{\uparrow}$) and spin-$\downarrow$ ($E_{\downarrow}$) during the different stages of the pulse. Black vertical lines indicate the tunnel barriers. The tunnel rate between the dot and the QPC-drain on the right is set to zero. The rate between the dot and the reservoir on the left is tuned to a specific value, $\Gamma$. If the spin is $\uparrow$ at the start of the read-out stage, no change in the charge on the dot occurs during $t_{read}$. In contrast, if the spin is $\downarrow$, the electron can escape and be replaced by a spin-$\uparrow$ electron. This charge transition is detected in the QPC-current (dotted line inside red circle in (b)).}
\label{fig5:diagrams}
\end{figure}
The expected QPC-response, $\Delta I_{QPC}$, to such a two-level pulse is the sum of two contributions (Fig.~\ref{fig5:diagrams}b). First, due to a capacitive coupling between pulse-gate and QPC, $\Delta I_{QPC}$ will change proportionally to the pulse amplitude. Thus, $\Delta I_{QPC}$ versus time resembles a two-level pulse. Second, $\Delta I_{QPC}$ tracks the charge on the dot, i.e. it goes up whenever an electron tunnels off the dot, and it goes down by the same amount when an electron tunnels on the dot. Therefore, if the dot contains a spin-$\downarrow$ electron at the start of the read-out stage, $\Delta I_{QPC}$ should go up and then down again. We thus expect a characteristic step in $\Delta I_{QPC}$ during $t_{read}$ for spin-$\downarrow$ (dotted trace inside red circle). In contrast, $\Delta I_{QPC}$ should be flat during $t_{read}$ for a spin-$\uparrow$ electron. Measuring whether a step is present or absent during the read-out stage constitutes our spin measurement.
\begin{figure}[htbp]
\centering
\includegraphics[width=11.5cm, clip=true]{FigS2.eps}
\caption{Tuning the quantum dot into the spin read-out configuration. We apply a two-stage voltage pulse as in Fig.~\ref{fig5:diagrams}a ($t_{wait}$ = 0.3 ms, $t_{read}$ = 0.5 ms), and measure the QPC-response for increasingly negative values of $V_M$.
\textbf{(a)} QPC-response (in colour-scale) versus $V_M$. Four different regions in $V_M$ can be identified (separated by white dotted lines), with qualitatively different QPC-responses.
\textbf{(b)} Typical QPC-response in each of the four regions. This behaviour can be understood from the energy levels during all stages of the pulse.
\textbf{(c)} Schematic energy diagrams showing $E_{\uparrow}$ and $E_{\downarrow}$ with respect to $E_F$ before and after the pulse (blue), during $t_{wait}$ (orange) and during $t_{read}$ (purple), for four values of $V_M$. For the actual spin read-out experiment, $V_M$ is set to the optimum position (indicated by the arrow in a).}
\label{fig5:tuning}
\end{figure}
\section{Tuning the quantum dot into the read-out configuration}
\label{tune}
To perform spin read-out, $V_M$ has to be fine-tuned so that the position of the energy levels with respect to $E_F$ is as shown in Fig.~\ref{fig5:diagrams}c. To find the correct settings, we apply a two-level voltage pulse and measure the QPC-response for increasingly negative values of $V_M$ (Fig.~\ref{fig5:tuning}a). Four different regions in $V_M$ can be identified (separated by white dotted lines), with qualitatively different QPC-responses. The shape of the typical QPC-response in each of the four regions (Fig.~\ref{fig5:tuning}b) allows us to infer the position of $E_{\uparrow}$ and $E_{\downarrow}$ with respect to $E_F$ during all stages of the pulse (Fig.~\ref{fig5:tuning}c).
In the top region, the QPC-response just mimics the applied two-level pulse, indicating that here the charge on the dot remains constant throughout the pulse. This implies that $E_{\uparrow}$ remains below $E_F$ for all stages of the pulse, thus the dot remains occupied with one electron. In the second region from the top, tunnelling occurs, as seen from the extra steps in $\Delta I_{QPC}$. The dot is empty before the pulse, then an electron is injected during $t_{wait}$, which escapes after the pulse. This corresponds to an energy level diagram similar to before, but with $E_{\uparrow}$ and $E_{\downarrow}$ shifted up due to the more negative value of $V_M$ in this region. In the third region from the top, an electron again tunnels on the dot during $t_{wait}$, but now it can escape already during $t_{read}$, irrespective of its spin. Finally, in the bottom region no electron tunnelling is seen, implying that the dot remains empty throughout the pulse.
Since we know the shift in $V_M$ corresponding to shifting the energy levels by $\Delta E_Z$ \cite{jme04}, we can set $V_M$ to the optimum position for the spin read-out experiment (indicated by the arrow). For this setting, the energy levels are as shown in Fig.~\ref{fig5:diagrams}c, i.e. $E_F$ is approximately in the middle between $E_{\uparrow}$ and $E_{\downarrow}$ during the read-out stage.
\section{Single-shot read-out of one electron spin}
\label{exp}
\begin{figure}[htbp]
\centering
\includegraphics[width=11.5cm, clip=true]{Fig3.eps}
\caption{Single-shot read-out of one electron spin.
\textbf{(a)} Time-resolved QPC measurements. Top panel: an electron injected during $t_{wait}$ is declared `spin-up' during $t_{read}$. Bottom panel: the electron is declared `spin-down'.
\textbf{(b)} Examples of `spin-down' traces (for $t_{wait}$ = 0.1 ms). Only the read-out segment is shown, and traces are offset for clarity. The time when $\Delta I_{QPC}$ first crosses the threshold, $t_{detect}$, is recorded to make the histogram in Fig.~\ref{fig5:fidelity}a.
\textbf{(c)} Fraction of `spin-down' traces versus $t_{wait}$, out of 625 traces for each waiting time. Open circle: spin-down fraction using modified pulse shape (d). Red solid line: exponential fit to the data. Inset: $T_1$ versus $B$.
\textbf{(d)} Typical QPC-signal for a `reversed' pulse, with the same amplitudes as in Fig.~\ref{fig5:diagrams}a, but the order of the two stages reversed, so that only a spin-$\uparrow$ electron can be injected. The fraction of traces nevertheless declared `spin-down' gives an independent measure of the `dark count' probability. This fraction is plotted as the open circle in (c) and is used in the exponential fit with an associated value of $t_{wait}$ = 10 ms (i.e. $\gg T_1$). The blue threshold is used in Fig.~\ref{fig5:fidelity}b.}
\label{fig5:singleshot}
\end{figure}
Figure~\ref{fig5:singleshot}a shows typical experimental traces of the pulse-response recorded after proper tuning of the DC gate voltages (see Fig.~\ref{fig5:tuning}). We emphasize that each trace involves injecting one particular electron on the dot and subsequently measuring its spin state. Each trace is therefore a single-shot measurement. The traces we obtain fall into two different classes; most traces qualitatively resemble the one in the top panel of Fig.~\ref{fig5:singleshot}a, some resemble the one in the bottom panel. These two typical traces indeed correspond to the signals expected for a spin-$\uparrow$ and a spin-$\downarrow$ electron (Fig.~\ref{fig5:diagrams}b), a strong indication that the electron in the top panel of Fig.~\ref{fig5:singleshot}a was spin-$\uparrow$ and in the bottom panel spin-$\downarrow$. The distinct signature of the two types of responses in $\Delta I_{QPC}$ permits a simple criterion for identifying the spin~\cite{note}: if $\Delta I_{QPC}$ goes above the threshold value (red line in Fig.~\ref{fig5:singleshot}a and chosen as explained below), we declare the electron `spin-down'; otherwise we declare it `spin-up'. Fig.~\ref{fig5:singleshot}b shows the read-out section of twenty more `spin-down' traces, to illustrate the stochastic nature of the tunnel events.
The random injection of spin-$\uparrow$ and spin-$\downarrow$ electrons prevents us from checking the outcome of any individual measurement. Therefore, in order to further establish the correspondence between the actual spin state and the outcome of our spin measurement, we change the probability of having a spin-$\downarrow$ at the beginning of the read-out stage, and compare this with the fraction of traces in which the electron is declared `spin-down'. As $t_{wait}$ is increased, the time between injection and read-out, $t_{hold}$, will vary accordingly ($t_{hold} \approx t_{wait}$). The probability for the spin to be $\downarrow$ at the start of $t_{read}$ will thus decay exponentially to zero, since electrons in the excited spin state will relax to the ground state ($k_B T \ll \Delta E_Z$). For a set of 15 values of $t_{wait}$ we take 625 traces for each $t_{wait}$, and count the fraction of traces in which the electron is declared `spin-down' (Fig.~\ref{fig5:singleshot}c). The fact that the expected exponential decay is clearly reflected in the data confirms the validity of the spin read-out procedure.
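The exponential fit $\alpha + C\exp(-t_{wait}/T_1)$ used below can be sketched on synthetic, noiseless data (our illustration; the parameter values are representative, not the measured dataset):

```python
import numpy as np

# Exponential-decay model from the text: f(t) = alpha + C * exp(-t / T1).
# Synthetic, noiseless data with representative values (not the measured
# dataset): alpha = 0.07, C = 0.3, T1 = 0.55 ms.
t_wait = np.linspace(0.1, 2.0, 15)               # waiting times in ms
frac_down = 0.07 + 0.3 * np.exp(-t_wait / 0.55)  # 'spin-down' fraction

# With the dark-count level alpha known (in the experiment it is fixed
# independently by the reversed-pulse measurement), a log-linear
# least-squares fit recovers T1 and C.
slope, intercept = np.polyfit(t_wait, np.log(frac_down - 0.07), 1)
T1_fit, C_fit = -1.0 / slope, np.exp(intercept)
print(f"T1 = {T1_fit:.2f} ms, C = {C_fit:.2f}")  # recovers 0.55 ms, 0.30
```

On real, noisy data the saturation level and the decay would of course be fitted jointly rather than fixed beforehand.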
\begin{figure}[t]
\centering
\includegraphics[width=11.5cm, clip=true]{FigS4.eps}
\caption{Measurement of the spin-relaxation time as in Fig.~\ref{fig5:singleshot}c, but at different magnetic fields. Averaging the results of an exponential fit (as shown) over three similar measurements yields
\textbf{(a)}, $T_1 = (0.85 \pm 0.11)$ ms at 8 T and
\textbf{(b)}, $T_1 = (0.12 \pm 0.03)$ ms at 14 T.}
\label{fig5:relaxation}
\end{figure}
We extract a single-spin energy relaxation time, $T_1$, from fitting the datapoints in Fig.~\ref{fig5:singleshot}c (and two other similar measurements) to $\alpha + C \exp(-t_{wait}/T_1)$, and obtain an average value of $T_1 \approx (0.55 \pm 0.07)$ ms at 10 Tesla. This is an order of magnitude longer than the lower bound on $T_1$ established earlier \cite{rh03}, and clearly longer than the time needed for the spin measurement (of order $1/\Gamma_{\downarrow} \approx 0.11$ ms). A similar experiment at 8 Tesla gives $T_1 \approx (0.85 \pm 0.11)$ ms and at 14 Tesla we find $T_1 \approx (0.12 \pm 0.03)$ ms (Fig.~\ref{fig5:relaxation}). More experiments are needed in order to test the theoretical prediction that relaxation at high magnetic fields is dominated by spin-orbit interactions \cite{khaetskii,golovach,woods}, with smaller contributions resulting from hyperfine interactions with the nuclear spins \cite{khaetskii,erlingsson} (cotunnelling is insignificant given the very small tunnel rates). We note that the obtained values for $T_1$ refer to our entire device under active operation: i.e. a single spin in a quantum dot subject to continuous charge detection by a QPC.
\begin{figure}[t]
\centering
\includegraphics[width=11.5cm, clip=true]{FigS3.eps}
\caption{Setting the injection threshold.
\textbf{(a)} Example of QPC-signal for the shortest waiting time used (0.1 ms). The blue horizontal line indicates the injection threshold. Injection is declared successful if the QPC-signal is below the injection threshold for a part or all of the last 45 $\mu$s before the end of the injection stage ($t_{wait}$). Traces in which injection was not successful, i.e. no electron was injected during $t_{wait}$, are disregarded.
\textbf{(b)} Fraction of traces in which injection was successful, out of a total of 625 taken for each waiting time. The threshold chosen for analysing all data is indicated by the vertical blue line.}
\label{fig5:injection}
\end{figure}
\section{Measurement fidelity}
\label{fidelity}
For applications in quantum information processing it is important to know the accuracy, or fidelity, of the single-shot spin read-out. The measurement fidelity is characterised by two parameters, $\alpha$ and $\beta$ (inset to Fig.~\ref{fig5:fidelity}a), which we now determine for the data taken at 10 T.
The parameter $\alpha$ corresponds to the probability that the QPC-current exceeds the threshold even though the electron was actually spin-$\uparrow$, for instance due to thermally activated tunnelling or electrical noise (similar to `dark counts' in a photon detector). The combined probability for such processes is given by the saturation value of the exponential fit in Fig.~\ref{fig5:singleshot}c, $\alpha$, which depends on the value of the threshold current. We analyse the data in Fig.~\ref{fig5:singleshot}c using different thresholds, and plot $\alpha$ in Fig.~\ref{fig5:fidelity}b.
\begin{figure}[t]
\centering
\includegraphics[width=11.5cm, clip=true]{Fig4.eps}
\caption{Measurement fidelity.
\textbf{(a)} Histogram showing the distribution of detection times, $t_{detect}$, in the read-out stage (see Fig.~\ref{fig5:singleshot}b for a definition of $t_{detect}$). The exponential decay is due to spin-$\downarrow$ electrons tunnelling out of the dot (rate $=\Gamma_{\downarrow}$) and due to spin flips during the read-out stage (rate $= 1/T_1$). Solid line: exponential fit with a decay time $(\Gamma_{\downarrow} + 1/T_1)^{-1}$ of 0.09 ms. Given that $T_1$ = 0.55 ms, this yields $\Gamma_{\downarrow}^{-1} \approx 0.11$ ms. Inset: fidelity parameters. A spin-$\uparrow$ electron is declared `up' or `down' with probability $1-\alpha$ or $\alpha$, respectively. A spin-$\downarrow$ electron is declared `down' (d) or `up' (u) with probability $1-\beta$ or $\beta$, respectively.
\textbf{(b)} Filled dark circles represent $\alpha$, obtained from the saturation value of exponential fits as in Fig.~\ref{fig5:singleshot}c for different values of the read-out threshold. A current of 0.54 nA (0.91 nA) corresponds to the average value of $\Delta I_{QPC}$ when the dot is occupied (empty) during $t_{read}$. Open circles: measured fraction of `reverse-pulse' traces in which $\Delta I_{QPC}$ crosses the injection threshold (blue line in Fig.~\ref{fig5:singleshot}d). This fraction approximates $1-\beta_2$, where $\beta_2$ is the probability of identifying a spin-$\downarrow$ electron as `spin-up' due to the finite bandwidth of the measurement setup. Red circles: total fidelity for the spin-$\downarrow$ state, $1-\beta$, calculated using $\beta_1 = 0.17$. The vertical red line indicates the threshold for which the visibility $1-\alpha-\beta$ (difference between filled circles and open squares) is maximal. This threshold value of 0.73 nA is used in the analysis of Fig.~\ref{fig5:singleshot}.}
\label{fig5:fidelity}
\end{figure}
The parameter $\beta$ corresponds to the probability that the QPC-current stays below the threshold even though the electron was actually spin-$\downarrow$ at the start of the read-out stage. Unlike $\alpha$, $\beta$ cannot be extracted directly from the exponential fit (note that the fit parameter $C = p (1-\alpha-\beta)$ contains two unknowns: $p = \Gamma_{\downarrow}/(\Gamma_{\uparrow} + \Gamma_{\downarrow})$ and $\beta$). We therefore estimate $\beta$ by analysing the two processes that contribute to it. First, a spin-$\downarrow$ electron can relax to spin-$\uparrow$ before spin-to-charge conversion takes place. This occurs with probability $\beta_1 = 1/(1 + T_1 \Gamma_{\downarrow})$. From a histogram (Fig.~\ref{fig5:fidelity}a) of the actual detection time, $t_{detect}$ (see Fig.~\ref{fig5:singleshot}b), we find $\Gamma_{\downarrow}^{-1} \approx 0.11$ ms, yielding $\beta_1 \approx 0.17$. Second, if the spin-$\downarrow$ electron does tunnel off the dot but is replaced by a spin-$\uparrow$ electron within about 8 $\mu$s, the resulting QPC-step is too small to be detected. The probability that a step is missed, $\beta_2$, depends on the value of the threshold. It can be determined by applying a modified (`reversed') pulse (Fig.~\ref{fig5:singleshot}d). For such a pulse, we know that in each trace an electron is injected in the dot, so there should always be a step at the start of the pulse. The fraction of traces in which this step is nevertheless missed, i.e. $\Delta I_{QPC}$ stays below the threshold (blue line in Fig.~\ref{fig5:singleshot}d), gives $\beta_2$. We plot $1-\beta_2$ in Fig.~\ref{fig5:fidelity}b (open circles). The resulting total fidelity for spin-$\downarrow$ is given by $1-\beta \approx (1-\beta_1)(1-\beta_2)+(\alpha \beta_1)$. The last term accounts for the case when a spin-$\downarrow$ electron is flipped to spin-$\uparrow$, but there is nevertheless a step in $\Delta I_{QPC}$ due to the dark-count mechanism~\cite{note2}. 
In Fig.~\ref{fig5:fidelity}b we also plot the extracted value of $1-\beta$ as a function of the threshold.
We now choose the optimal value of the threshold as the one for which the visibility $1- \alpha - \beta$ is maximal (red vertical line in Fig.~\ref{fig5:fidelity}b). For this setting, $\alpha \approx 0.07$, $\beta_1 \approx 0.17$, $\beta_2 \approx 0.15$, so the measurement fidelity for the spin-$\uparrow$ and the spin-$\downarrow$ state is $\sim0.93$ and $\sim0.72$ respectively. The measurement visibility in a single-shot measurement is thus at present $65\%$.
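The bookkeeping behind these numbers can be reproduced directly (a check of the quoted arithmetic, using only values stated in the text):

```python
# Reproduce the fidelity numbers quoted in the text.
T1 = 0.55              # spin-relaxation time in ms (at 10 T)
gamma_down_inv = 0.11  # 1/Gamma_down in ms, from the t_detect histogram

alpha = 0.07                           # 'dark count' probability
beta1 = 1 / (1 + T1 / gamma_down_inv)  # relaxation before escape
beta2 = 0.15                           # missed QPC steps at this threshold

fidelity_up = 1 - alpha
fidelity_down = (1 - beta1) * (1 - beta2) + alpha * beta1
visibility = fidelity_up + fidelity_down - 1  # = 1 - alpha - beta

print(f"beta1 ~ {beta1:.2f}")                       # ~0.17
print(f"spin-up fidelity ~ {fidelity_up:.2f}, "
      f"spin-down fidelity ~ {fidelity_down:.2f}")  # ~0.93, ~0.72
print(f"visibility ~ {visibility:.2f}")             # ~0.65
```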
Significant improvements in the spin measurement visibility can be made by lowering the electron temperature (smaller $\alpha$) and especially by making the charge measurement faster (smaller $\beta$). Already, the demonstration of single-shot spin read-out and the observation of $T_1$ of order 1 ms are encouraging results for the use of electron spins as quantum bits.\\
We thank D. P. DiVincenzo, H. A. Engel, T. Fujisawa, V. Golovach, Y. Hirayama, D. Loss, T. Saku, R. Schouten, and S. Tarucha for technical support and helpful discussions. This work was supported by a Specially Promoted Research Grant-in-Aid from the Japanese Ministry of Education, the DARPA-QUIST program, the ONR, the EU-RTN network on spintronics, and the Dutch Organisation for Fundamental Research on Matter (FOM).
\section{Introduction}
The relevance of the initial boundary value problem (IBVP) for
Numerical Relativity has been pointed out many times since the
ground-breaking work of Stewart~\cite{Stewart}.
The origin of the problem is the well-known fact that Einstein's
equations
\begin{equation}\label{Einsteins}
G^{\mu\nu} = 8\pi~T^{\mu\nu},
\end{equation}
when interpreted as second-order field equations for the metric
components $g_{\mu\nu}$, provide only six evolution equations for
the space components $g_{ij}$ whereas the remaining four
Einstein's equations
\begin{equation}\label{Constraints}
G^{\mu 0} = 8\pi~T^{\mu 0}
\end{equation}
are just second-order constraints on $g_{ij}$~\cite{ADM}. The
evolution equations are then a reduction of the full Einstein's
system. Notice that this reduction is not uniquely defined, insofar
as one can add multiples of the constraints to the original
evolution system: this freedom is at the root of the diversity of
the proposed evolution formalisms (for a review, see
Ref.~\cite{Reula}).
The current approach in Numerical Relativity is to use one of
these reductions of Einstein's field equations, plus four
coordinate conditions, as the main evolution system in order to
compute the full set of metric components. This is the
unconstrained, or free evolution~\cite{Cent80}, approach in which
the constraint equations (\ref{Constraints}) are mainly used for
monitoring the accuracy of the simulations. As a consequence, the
evolution system has an extended space of solutions which
contains, in addition to the true ones, constraint-violating
solutions that do not verify the full set of Einstein's equations.
This raises the question of what requirements must be met in order
to obtain, in the free evolution approach, true Einstein solutions in
a consistent and stable way, avoiding any drift towards
extended, constraint-violating solutions~\cite{Stewart}. The key
point is to analyze the subsidiary system
\begin{equation}\label{subsidiary}
\nabla_\nu ( G^{\mu\nu} - 8\pi~T^{\mu\nu} ) = 0,
\end{equation}
which follows from the contracted Bianchi identities and the
conservation of the stress-energy tensor. As is well known, the
subsidiary system ensures that the constraints (\ref{Constraints})
are first integrals of the main evolution system. But it can also
be interpreted as providing evolution equations for the
constraint deviations. Of course, the same is true for any
variant obtained by combining (\ref{subsidiary}) with space
derivatives of either evolution or constraint equations: this
freedom can be easily used in order to get a variant of the
subsidiary system (\ref{subsidiary}) with a strongly
hyperbolic~\cite{KL89}, even symmetric hyperbolic, principal part:
\begin{eqnarray}
\partial_0 ( G^{00} - 8\pi~T^{00} ) +
\partial_k ( G^{0k} - 8\pi~T^{0k} )&=& \cdots \\
\partial_0 ( G_{k}^{~0} - 8\pi~T_{k}^{~0} ) +
\partial_k ( G^{00} - 8\pi~T^{00} )&=& \cdots
\end{eqnarray}
(normal coordinates).
In the case of the pure initial value problem (IVP), the question
about the consistency and stability of free-evolution
true Einstein solutions has been given a precise answer in
Ref.~\cite{Stewart}:
\begin{itemize}
\item Consistency: The initial data must verify the constraint
equations.
\item Stability: The principal part of both the main evolution
system and the subsidiary system must be strongly hyperbolic.
\end{itemize}
The considerations above suggest that the strong hyperbolicity
requirement on the subsidiary system is always fulfilled in the
cases in which the only constraints are the energy-momentum ones
(\ref{Constraints}).
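To illustrate, the one-dimensional restriction of the principal part displayed above, $\partial_0 u + \partial_z v = \cdots$, $\partial_0 v + \partial_z u = \cdots$ (with $u$, $v$ standing for the energy and momentum constraint deviations), has the symmetric characteristic matrix $A=\begin{pmatrix}0&1\\1&0\end{pmatrix}$, so it is symmetric hyperbolic with characteristic speeds $\pm 1$. A minimal numerical check (our sketch, not part of the original analysis):

```python
import numpy as np

# Characteristic matrix of the 1-D principal part
#   d_t (u, v) + A d_z (u, v) = lower-order terms,  A = [[0, 1], [1, 0]].
A = np.array([[0.0, 1.0], [1.0, 0.0]])

# Symmetric => symmetric hyperbolic; the eigenvalues are the
# characteristic speeds, the eigenvectors give the characteristic
# (incoming/outgoing) modes u +- v.
assert np.allclose(A, A.T)
speeds, modes = np.linalg.eigh(A)
print("characteristic speeds:", speeds)  # [-1, 1]
```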
In the case of the IBVP, which is the main subject of this paper,
a number of developments are currently under way. Let us briefly
summarize the main ones:
\subsubsection{Constraint-preserving boundary conditions}
The original work in Ref.~\cite{Stewart} was focused on the
Frittelli-Reula evolution system~\cite{FR94,FR96}. More
recently~\cite{CLT02, CPRST03, CS03,Calabrese04}, the
constraint-preserving boundaries approach has been extended to
other symmetric-hyperbolic systems of the Kidder-Scheel-Teukolsky
(KST) type~\cite{KST01,ST02}. The full programme consists of the
following steps:
\begin{itemize}
\item Writing down the subsidiary system (it can be
(\ref{subsidiary}) or any variant of it) as a first order
evolution system for constraint deviations, with symmetric
hyperbolic principal part.
\item Providing algebraic (Dirichlet) boundary conditions for the
incoming modes of the subsidiary system in such a way that the
total amount of constraint deviations (as measured with a suitable
energy estimate) remains bounded.
\item Interpreting these algebraic boundary conditions of the
subsidiary system as differential (Neumann) boundary
conditions for the constraint-related incoming modes of the main
evolution system.
\item Completing the resulting subset of boundary conditions by
adding suitable conditions for the remaining incoming modes.
\item Checking the stability of the final set of the main system
boundary conditions. In Ref.~\cite{Stewart}, the
Majda-Osher theory~\cite{MO75} is applied and the uniform Kreiss
condition~\cite{KL89} is obtained as a result.
\end{itemize}
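The steps above can be illustrated in a toy setting (our sketch, not the relativistic system): for the model $\partial_t u+\partial_x v=0$, $\partial_t v+\partial_x u=0$ on $[0,1]$, the characteristic modes are $w_\pm=u\pm v$ with speeds $\pm 1$, and imposing homogeneous (maximally dissipative) data on the incoming mode at each boundary keeps the energy $\frac12\int(u^2+v^2)\,dx$ from growing:

```python
import numpy as np

# Toy 1-D check of a maximally dissipative boundary condition.
# System: d_t u + d_x v = 0, d_t v + d_x u = 0 on [0, 1].
# Characteristic modes: w_plus = u + v (speed +1), w_minus = u - v (speed -1).
n, dx = 200, 1.0 / 200
c = 0.5  # CFL number dt/dx
x = np.linspace(0.0, 1.0, n)

u = np.exp(-100 * (x - 0.5) ** 2)  # initial pulse
v = np.zeros(n)
wp, wm = u + v, u - v

energy0 = 0.5 * np.sum(wp**2 + wm**2) * dx
for _ in range(400):
    # Upwind transport of each characteristic mode.
    wp[1:] = wp[1:] - c * (wp[1:] - wp[:-1])    # moves right
    wm[:-1] = wm[:-1] + c * (wm[1:] - wm[:-1])  # moves left
    # Boundary conditions: zero incoming characteristic at each side.
    wp[0], wm[-1] = 0.0, 0.0
energy = 0.5 * np.sum(wp**2 + wm**2) * dx

print(f"energy: {energy0:.3f} -> {energy:.3f}")  # non-increasing
```

Here the energy estimate plays the role of the bound on constraint deviations in the second step of the programme.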
\subsubsection{Einstein's boundary conditions}
An alternative approach has been proposed by Frittelli and
G\'{o}mez~\cite{FG03a, FG03b,FG04a, FG04b}. For a better
understanding, let us start by writing down the constraint
equations (\ref{Constraints}) in a covariant form
\begin{equation}\label{covconstraints}
n_\nu (G^{\mu\nu} - 8\pi~T^{\mu\nu}) = 0\,,
\end{equation}
where $n_\nu$ is the normal to the constant-time hypersurfaces. In
local adapted coordinates we have
\begin{equation}\label{timenormal}
n_\nu = \alpha~\delta_\nu^{~0}\,,
\end{equation}
so that the original form (\ref{Constraints}) is recovered. The
absence of second time derivatives in (\ref{Constraints}) can now
be rephrased as the absence of second derivatives normal to the
spacelike constant-time hypersurfaces.
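As a quick consistency check (a standard 3+1 relation, recalled here for completeness), the lapse factor in (\ref{timenormal}) is precisely the normalization factor: using $g^{00}=-1/\alpha^2$ in adapted coordinates,
\begin{equation}
n^\mu n_\mu = g^{\mu\nu}~n_\mu\, n_\nu = \alpha^2~g^{00} = -1\,,
\end{equation}
so that $n_\nu$ is indeed the unit timelike normal to the constant-time hypersurfaces.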
Now let us invoke the general covariance of Einstein's theory. We
will realize that the same kind of result must hold for other kinds
of hypersurfaces, not just the spacelike ones. We can then start
again from the covariant form (\ref{covconstraints}) but with
$n_\nu$ being now the normal to any timelike hypersurface, like
the ones corresponding to the boundaries of our computational
domain. In local adapted coordinates, we could take for instance
\begin{equation}\label{spacenormal}
n_\nu \sim \delta_\nu^{~z}\,,
\end{equation}
so that no second derivatives normal to the constant-z
hypersurfaces would appear in (\ref{covconstraints}).
If the main evolution system is written in first order form, the
absence of second order normal derivatives ensures that four
combinations of the first-order dynamical fields can be
consistently computed at the boundaries without any recourse to
outside information. The Frittelli-G\'{o}mez idea is to use these
combinations in order to get consistent boundary conditions for a
subset of four incoming modes (Einstein's boundaries). Of course,
suitable conditions for the remaining incoming modes must also be
provided and the stability of the full set must be checked.
\subsubsection{Harmonic coordinates}
Still another approach is due to Szil\'{a}gyi, Winicour and
coworkers~\cite{SGBW00, SSW02, SW03, BSW04}. Their formulation
looks quite different from the preceding ones, so that we will
need to rephrase some statements in order to point out the
underlying similarities.
For instance, instead of the reduction of Einstein's equations, we
will consider the equivalent extension of the solution space. In
the harmonic coordinates approach, this extension is achieved by
writing down the principal part of Einstein's equations as a set
of generalized wave equations with some extra terms (de
Donder-Fock decomposition~\cite{DeDo21,Fock59}) and then getting
rid of these extra terms by requiring the four spacetime
coordinates to be harmonic functions, that is
\begin{equation}\label{4harmonic}
\Box~x^\mu = 0
\end{equation}
($x^\mu$ is considered as a set of four scalar functions here).
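Writing the wave operator out on the coordinate scalars makes explicit which terms (\ref{4harmonic}) removes: since
\begin{equation}
\Box~x^\mu = g^{\rho\sigma}~\nabla_\rho \nabla_\sigma~ x^\mu
= -~g^{\rho\sigma}~{\Gamma^\mu}_{\rho\sigma}\,,
\end{equation}
the harmonic condition amounts to the vanishing of the contracted Christoffel symbols, which carry precisely the extra principal terms suppressed in the de Donder-Fock decomposition.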
Let us compare now the Harmonic Coordinates approach with the
preceding ones:
\begin{itemize}
\item The resulting (relaxed) system is used as the main
evolution system for the full set of metric components. By
construction, its principal part amounts to a set of wave
equations, so that symmetric hyperbolicity is ensured.
\item True Einstein's solutions are recovered only when
imposing the coordinate conditions (\ref{4harmonic}), which play
here the role of the constraints. The extra principal terms that
were suppressed from the original Einstein's system contained
first derivatives of these coordinate constraints (second
derivatives of the metric components)~\cite{SW03}.
\item The subsidiary system can again be obtained from
(\ref{subsidiary}). Terms containing the main evolution system
equations, or their derivatives, vanish separately so that only
the contribution of the extra principal terms remains. Notice that
the resulting subsidiary system is of second order in the
coordinate constraints (\ref{4harmonic}), in contrast with the
former approaches.
\item The principal part of the (second order) subsidiary
system is symmetric hyperbolic: it amounts again to a set of
wave equations on the coordinate constraints (\ref{4harmonic}).
\end{itemize}
In Ref.~\cite{SW03}, a boundary condition derived from the local
reflection symmetry requirement is analyzed. Constraint
preservation is explicitly shown and a theorem of
Secchi~\cite{Secchi96} is used in order to show that the resulting
IBVP is well posed. This theoretical result is checked by means of
the numerical robust-stability test~\cite{Mexico1}, which is
adapted so that reflection boundary conditions are applied along
just one space axis, while keeping periodic boundary conditions
along the other two.
Due to the limited use of reflection symmetry boundary conditions
in practical applications, a proposal is made~\cite{BSW04} for
extending these results to boundary conditions of the Sommerfeld
type, as was done previously in a different
framework~\cite{FN99}.
\subsubsection{The Z4 case}\label{Z4case}
The Z4 approach~\cite{Z4,Z48} uses an extra dynamical four-vector,
along the track of previous formulations which contained extra
dynamical quantities~\cite{BM92,BM95,SN95,BS99}. It has strong
similarities with the harmonic coordinates approach, while
providing much more flexibility regarding the gauge choices.
\begin{itemize}
\item The main evolution system is obtained by modifying
Einstein's equations with the help of the extra four-vector
$Z^\mu$, namely
\begin{equation}\label{EinsteinZ4}
G_{\mu\nu} + \nabla_{\mu} Z_{\nu} + \nabla_{\nu} Z_{\mu} -
( \nabla_{\rho} Z^{\rho} )~g_{\mu\nu} = 8~\pi~T_{\mu\nu}\,.
\end{equation}
This system provides ten evolution equations for the set formed
by the six space components of the metric plus the four $Z^\mu$
components. A first order version can be easily obtained which,
when supplemented with suitable gauge conditions, has been shown
to have a strongly hyperbolic principal part~\cite{Z48}.
\item True Einstein's solutions can be recovered by requiring
the vanishing of the extra four-vector
\begin{equation}\label{Zis0}
Z^{\mu} = 0\,,
\end{equation}
so that this condition can be considered as a set of four
algebraic constraints. Notice that the main evolution system
(\ref{EinsteinZ4}) is of a mixed type: it contains second order
derivatives of the metric, but only first order derivatives of the
extra four-vector.
\item The subsidiary system can be obtained from the covariant
divergence of the main system (\ref{EinsteinZ4}): it will be of
third order in the metric and of second order in the constraint
variables $Z_\mu$. Allowing for (\ref{subsidiary}), the Einstein's
tensor contribution vanishes separately, so that only the
contribution of the extra terms remains, namely
\begin{equation}\label{divZ4}
\nabla_{\nu}~[~\nabla^{\mu} Z^{\nu} + \nabla^{\nu} Z^{\mu} -
( \nabla_{\rho} Z^{\rho} )~g^{\mu\nu}~]~ = 0~,
\end{equation}
which can be also expressed in the equivalent form~\cite{Z48}
\begin{equation}\label{WaveZis0}
\Box~ Z_{\mu} + R_{\mu\nu} Z^\nu = 0~.
\end{equation}
\item It follows from (\ref{WaveZis0}) that the subsidiary
system, when considered as a second order system for the algebraic
constraints deviations $Z_\mu$, has a symmetric hyperbolic
principal part: it amounts here again to an uncoupled set of wave
equations.
\end{itemize}
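The equivalence between (\ref{divZ4}) and (\ref{WaveZis0}) quoted above follows from the commutator of covariant derivatives: expanding (\ref{divZ4}) term by term gives
\begin{equation}
\nabla_\nu \nabla^\mu Z^\nu + \Box~Z^\mu - \nabla^\mu~(\nabla_\rho Z^\rho) = 0\,,
\end{equation}
and the first and third terms combine, via $\nabla_\nu \nabla^\mu Z^\nu - \nabla^\mu \nabla_\nu Z^\nu = {R^\mu}_\nu~Z^\nu$, into the curvature term of (\ref{WaveZis0}).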
The fact that the subsidiary system is of second order means that
the vanishing of both $Z_\mu$ and its first time derivative must
be imposed on the initial data if one wants to ensure \textit{a
priori} that the resulting solution will be a true Einstein's one.
This amounts to imposing the usual energy and momentum constraints
(\ref{Constraints}) on the initial data hypersurface, so that it
can seem that the Z4 formalism is not of much help. On
the other hand, if one is checking a given solution \textit{a
posteriori}, the vanishing of $Z_\mu$ in a given spacetime domain
ensures that the same is true for its derivatives, so that this
solution is necessarily a true Einstein's one. This is why one can
monitor constraint violations by looking just at the values of
$Z_\mu$ and, more important, this is why one can devise
constraint-preserving strategies by aiming at the vanishing of
$Z_\mu$. Here is where the Z4 formalism shows its main advantages.
In what follows, we will consider the fully first-order version of
the Z4 system~\cite{Z48}, as summarized in Section 2. In Section
3, we will apply the constraint-preserving boundary conditions
programme, obtaining as a result conditions of the Sommerfeld type
for the main evolution system. The stability of these conditions
is studied in Section 4, including the use of the robust-stability
numerical test. As a result, our Sommerfeld-like conditions will
be shown to behave in the same way as the reflection symmetry ones
proposed in Refs.~\cite{SW03,BSW04}.
\section{First order Z4 system}
The general-covariant equations (\ref{EinsteinZ4}) can be written
in the equivalent 3+1 form \cite{Z4}
\begin{eqnarray}
\label{dtgamma}
(\partial_t -{\cal L}_{\beta})~ \gamma_{ij}
&=& - {2~\alpha}~K_{ij}
\\
\label{dtK}
(\partial_t - {\cal L}_{\beta})~K_{ij} &=& -\nabla_i\alpha_j
+ \alpha~ [{}^{(3)}\!R_{ij}
+ \nabla_i Z_j+\nabla_j Z_i
\nonumber \\
&~&-~ 2\,K^2_{ij}+({\rm tr}\, K-2\Theta)~K_{ij}
\nonumber \\
&~&- ~S_{ij}+\frac{1}{2}\,({\rm tr}\, S - \tau)~\gamma_{ij}~]
\\
\label{dtTheta} (\partial_t -{\cal L}_{\beta})~\Theta &=&
\frac{\alpha}{2}~
[{}^{(3)}\!R + 2~ \nabla_k Z^k + ({\rm tr}\, K - 2~ \Theta)~{\rm tr}\, K
\nonumber \\
&~&- ~{\rm tr}\,(K^2) - 2\tau~] - Z^k {\alpha}_k
\\
\label{dtZ}
(\partial_t -{\cal L}_{\beta})~Z_i &=& \alpha~ [~\nabla_j\,({K_i}^j
-{\delta_i}^j {\rm tr}\, K) + \partial_i \Theta
\nonumber \\
&~&- ~2\,{K_i}^j~ Z_j - S_i~] - \Theta\,{\alpha}_i
\end{eqnarray}
where we have defined
\begin{eqnarray}\label{tauSdef}\nonumber
\tau \equiv 8 \pi \alpha^2~ T^{00}\,,~
S_i &\equiv& 8 \pi \alpha ~ T^0_{~i}\,,~
S_{ij} \equiv 8 \pi ~T_{ij}\,,~ \\
\Theta \equiv \alpha ~ Z^0\,,&~& \alpha_i \equiv
\partial_i~\alpha\,.
\end{eqnarray}
In the form (\ref{dtgamma}-\ref{dtZ}), it is evident that the Z4
evolution system is fully relaxed: it consists only of evolution
equations. The original constraints (\ref{Zis0}), which can be
translated into
\begin{equation}\label{ZThis0}
\Theta~=~0,\qquad Z_i~=~0,
\end{equation}
are algebraic so that the full set of field equations
(\ref{EinsteinZ4}) is actually used during evolution, like in the
harmonic coordinates case.
But now we do not have to impose the harmonic coordinate conditions
(\ref{4harmonic}). We will consider instead a wider class of gauge
conditions, in which the time slicing will be of the
form~\cite{Z48}
\begin{equation}\label{dtAlpha}
(\partial_t -{\cal L}_{\beta})~\ln \alpha = -~f\alpha~({{\rm tr}\,} K
- m \Theta)
\end{equation}
(generalized harmonic slicing). Although more general cases can be
considered~\cite{BP04}, we will use here normal coordinates (zero
shift) for simplicity.
A first order version of the Z4 evolution system
(\ref{dtgamma}-\ref{dtZ}) can be obtained by introducing the first
space derivatives
\begin{equation}\label{AkDkij}
A_k~\equiv~\alpha_k/\alpha,~~D_{kij}~\equiv~\frac{1}{2}~\partial_k \gamma_{ij}
\end{equation}
as independent dynamical quantities, so that the full set of
dynamical fields can be given by
\begin{equation}\label{uvector}
\mathbf{u} ~ = ~ \{\alpha,~\gamma_{ij},~ K_{ij},~ A_k,~D_{kij},~\Theta,~Z_k \}
\end{equation}
(38 independent fields).
Of course, one must provide evolution equations for the new
quantities (\ref{AkDkij}): the simplest way is to take
\begin{eqnarray}\label{dtA}
\partial_t A_k~&+&~\partial_k [~ f \alpha~( {{\rm tr}\,}K - m \Theta) ~]~=~0
\\\label{dtD}
\partial_t D_{kij}~&+&~\partial_k [~\alpha~K_{ij}~]~=~0~.
\end{eqnarray}
Notice that one could add to (\ref{dtA}, \ref{dtD}) a number of
terms involving first derivatives of either $\Theta$ or $Z_k$.
This would amount to introducing coupling terms with either the
Energy or the Momentum constraints, as in the KST
system~\cite{KST01, ST02}, each one with its own free parameter.
We have chosen instead to keep the simplest form (\ref{dtA},
\ref{dtD}) because the first order constraints (\ref{AkDkij})
evolve in a trivial way, that is
\begin{eqnarray}\label{dtAbis}
\partial_t [~ A_k~-~\partial_k \ln \alpha ~]~&=&~0
\\\label{dtDbis}
\partial_t [~ D_{kij}~-~\partial_k~ \gamma_{ij}~]~&=&~0~,
\end{eqnarray}
so that the relationship between the first and the second order
versions of the evolution system is more transparent. We are
losing in this way the possibility of playing with a number of
extra free parameters.
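The triviality of (\ref{dtAbis}, \ref{dtDbis}) can be seen in a toy numerical experiment. The following sketch (our own illustration, not part of the formalism: a single metric component, constant lapse, periodic one-dimensional grid and arbitrary initial profiles) checks that the first order constraint $D - \frac{1}{2}\partial_x\gamma$ is preserved to round-off by the simple choice (\ref{dtD}), provided the same discrete derivative operator is used throughout:

```python
# Toy 1D illustration (our own sketch: a single metric component gamma(x),
# constant lapse, periodic grid, arbitrary smooth initial profiles) of why the
# simple choice  d_t D + d_x(alpha K) = 0  preserves the first order constraint
# C = D - (1/2) d_x gamma exactly, provided the same discrete derivative
# operator is used for the evolution and for the constraint monitor.
import math

N, dx, dt, alpha = 64, 1.0 / 64, 1.0e-3, 1.0

def ddx(f):
    """Centered periodic finite difference."""
    return [(f[(i + 1) % N] - f[(i - 1) % N]) / (2 * dx) for i in range(N)]

x = [i * dx for i in range(N)]
gamma = [1.0 + 0.1 * math.sin(2 * math.pi * xi) for xi in x]  # arbitrary profile
K = [0.05 * math.cos(2 * math.pi * xi) for xi in x]           # arbitrary profile
D = [0.5 * d for d in ddx(gamma)]                             # enforce D = (1/2) d_x gamma

for _ in range(100):                                          # forward-Euler steps
    dK = ddx([alpha * k for k in K])
    gamma = [g - dt * 2.0 * alpha * k for g, k in zip(gamma, K)]
    D = [d - dt * q for d, q in zip(D, dK)]
    # K is deliberately frozen here: we only probe the (gamma, D) sector.

violation = max(abs(d - 0.5 * g) for d, g in zip(D, ddx(gamma)))
print(violation < 1e-9)   # True: the constraint stays at round-off level
```

The cancellation would fail, already at the continuum level, if extra $\Theta$ or $Z_k$ derivative terms were added to (\ref{dtD}): the first order constraints would then vanish only on-shell instead of evolving trivially.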
Care must be taken, however, when expressing the Ricci tensor
${}^{(3)}\!R_{ij}$ in (\ref{dtK}) in terms of the derivatives of
$D_{kij}$: once the definitions (\ref{AkDkij}) are no
longer enforced, the identities
\begin{equation}\label{dDdD}
C_{kl} \equiv \partial_{[k}~A_{l]} = 0\,\qquad
C_{klij} \equiv \partial_{[k}~D_{l]ij} = 0
\end{equation}
cannot be taken for granted in first order systems. As a
consequence of these ordering ambiguities, the principal part of
the evolution equation (\ref{dtK}) leads to a one-parameter family
of non-equivalent first-order versions, namely
\begin{equation}\label{PrincipalK}
\partial_t K_{ij} ~+~\partial_k~[~\alpha~\lambda^k_{ij}~]~=~...~
\end{equation}
where
\begin{eqnarray}\label{deflambda}
\lambda^k_{ij} &=& {D^k}_{ij}
-{\frac{1+\zeta}{2}}~ (D_{ij}^{~~k}+D_{ji}^{~~k}
-\delta^k_i E_j-\delta^k_j E_i)
\nonumber \\
&+& \frac{1}{2}\, \delta^k_i(A_j-D_j+2V_j)
+ \frac{1}{2}\, \delta^k_j(A_i-D_i+2V_i)\qquad
\end{eqnarray}
and we have defined
\begin{equation}\label{defVk}
D_i \equiv \gamma^{rs}D_{irs}\,,~ E_i \equiv
\gamma^{rs}D_{rsi}\,,~
V_k \equiv D_k-E_k-Z_k\,.
\end{equation}
Notice that the parameter choice $\zeta = +1$ corresponds to the
standard Ricci decomposition
\begin{equation}\label{Def3R}
{}^{(3)}\!R_{ij}~=~\partial_k~{\Gamma^k}_{ij}-\partial_i~{\Gamma^k}_{kj}
+{\Gamma^r}_{rk}{\Gamma^k}_{ij}-{\Gamma^k}_{ri}{\Gamma^r}_{kj}
\end{equation}
whereas the opposite choice $\zeta = -1$ corresponds to the
de~Donder-Fock \cite{DeDo21,Fock59} decomposition
\begin{eqnarray}\label{Def3dDF}
{}^{(3)}\!R_{ij}&=&-\partial_k~{D^k}_{ij}+\partial_{(i}~{\Gamma_{j)k}}^{k}
- 2 {D_r}^{rk} D_{kij} \nonumber \\
&+& 4 {D^{rs}}_i D_{rsj} - {\Gamma_{irs}}
{\Gamma_j}^{rs}-{\Gamma_{rij}} {\Gamma^{rk}}_k
\end{eqnarray}
which is most commonly used in Numerical Relativity formalisms.
The ordering ambiguities do not affect the principal part of
Eq.~(\ref{dtTheta}), namely
\begin{equation}
\partial_t~\Theta + \partial_k ~[~\alpha~ V^k~] = ...
\end{equation}
The resulting first order system has been shown to be strongly
hyperbolic~\cite{Z48} provided that the first gauge parameter $f$
is greater than zero. In the harmonic slicing case ($f=1$), the
second gauge parameter is fixed ($m=2$). The full list of
eigenvectors is given in Appendix A.
\section{Constraint-preserving boundary conditions}
We have seen in the Introduction that the simple equation
(\ref{WaveZis0}) provides the subsidiary system for the deviations
of the algebraic constraints (\ref{Zis0}). This would be the whole
story if we were planning to use the second order version
(\ref{EinsteinZ4}) of the evolution system. But we prefer to focus
here on the first-order-in-space version, as described in the
previous section. The reason is that the mathematical theory of
first order systems seems to be more developed, both at the
continuum and at the discrete level, so more powerful tools are
available: Energy methods, Total-Variation-Diminishing algorithms
and so on (see for instance Refs.~\cite{KL89, GKO95}).
There is a price to pay for this. We have found in the previous
Section new constraints, like (\ref{dDdD}), arising from ordering
ambiguities in the space derivatives. The ordering parameter
$\zeta$ appeared precisely from the coupling of these ordering
constraints with the evolution system. The original subsidiary
system (\ref{WaveZis0}) must then be extended in order to include
both these coupling terms and the evolution of the ordering
constraints themselves.
\setcounter{subsubsection}{0}
\subsubsection{First-order subsidiary system.}
The easiest way of obtaining the full subsidiary system in the
first-order case is just by computing the time derivative of the
full Z4 first-order system. We give here (the principal part of)
the resulting subsidiary system
\begin{eqnarray}
\partial_t~C_{kl} &=& 0
\label{Asub} \\
\partial_t~C_{klij} &=& 0
\label{Dsub}\\
1/\alpha^2~\partial^2_{tt}~\Theta - \triangle~\Theta&=& \cdots
\label{Thetasub}\\ \nonumber
1/\alpha^2~\partial^2_{tt}~Z_i - \triangle~Z_i &=&
\gamma^{kl}~\partial_k~[~C_{il}
+~\gamma^{rs}~(~C_{ilrs} \\
+ ~(\zeta-1)~C_{rlsi}
&+& ~(\zeta+1)~C_{risl}~)~] + \cdots \qquad
\label{Zsub}
\end{eqnarray}
(the dots stand for non-principal terms).
The subsidiary system (\ref{Asub} - \ref{Zsub}) can be put in
first order form in the usual way, by considering the first
derivatives of ($\Theta$, $Z_i$) as new independent variables. The
following evolution conditions
\begin{eqnarray}
\partial_t~ (\partial_k\Theta)
- \partial_k~ [~\partial_t\Theta~] &=& 0 \label{dtdtTheta}\\
\partial_t~ (\partial_k Z_i)
- \partial_k~ [~\partial_t Z_i~] &=& 0
\label{dtdtz}
\end{eqnarray}
can then be added to complete (the first order version of) the
subsidiary system.
Notice that the evolution equations (\ref{Asub}, \ref{Dsub}) for
the ordering constraints are trivial. This means that the ordering
constraints themselves are eigenfields of the full subsidiary
system (\ref{Asub} - \ref{dtdtz}) with zero characteristic speed.
Moreover, the evolution equations (\ref{Thetasub},
\ref{dtdtTheta}) for (the derivatives of) $\Theta$ form a separate
subsystem with the structure of the wave equation. Concerning the
remaining equations (\ref{Zsub}, \ref{dtdtz}), one can express
them in terms of the quantities
\begin{eqnarray}\nonumber
Z_{ki} &\equiv& \partial_k Z_i + ~C_{ik} +~\gamma^{rs}~[~C_{ikrs}
\\ \label{Zki} &+& ~(\zeta-1)~C_{rksi}+ ~(\zeta+1)~C_{risk}~]\,,
\end{eqnarray}
so that they read
\begin{eqnarray}
1/\alpha^2~\partial_t~(\partial_t Z_i) - \partial^k~ [~Z_{ki}~]
&=& \cdots\\
\partial_t~ Z_{ki} - \partial_k~ [~\partial_t Z_i~] &=& \cdots
\label{dtdtzki}\,,
\end{eqnarray}
and we get again the structure of the wave equation.
It follows that the principal part of the subsidiary system
(\ref{Asub} - \ref{dtdtz}) can be put in symmetric hyperbolic
form. The characteristic speeds are either zero or the light
speed. A simple energy estimate is provided by
\begin{eqnarray}\nonumber
\mathbb{E} \equiv 1/\alpha^2~[~(\partial_t \Theta)^2 &+& \gamma^{ij}~
(\partial_t Z_i)(\partial_t Z_j)~] \\
+~(\partial_k \Theta)(\partial^k \Theta) &+& Z_{ij}Z^{ij}\,.
\label{estimate}
\end{eqnarray}
We are now in a position to take the second step in the
constraint-preserving boundary conditions programme. We will
impose the vanishing of all the incoming modes of (\ref{Asub} -
\ref{dtdtz}) at the boundaries, that is
\begin{eqnarray}
1/\alpha~\partial_t~\Theta + n^k~\partial_k~\Theta &=& 0
\label{Thetabound}\\ \label{Zboun}
1/\alpha~\partial_t~Z_i + n^k Z_{ki} &=& 0\,,
\end{eqnarray}
where $\vec{n}$ stands here for the outwards-pointing unit normal
to the boundary surface.
Equations (\ref{Thetabound}, \ref{Zboun}) meet the two
requirements we were looking for:
\begin{itemize}
\item They provide maximally-dissipative algebraic boundary
conditions for the subsidiary system (\ref{Asub} -\ref{dtdtz}). In
this way, no constraint-violating modes are allowed to enter
across the selected boundary.
\item They will provide, as we will see in what follows, four
boundary conditions of the Sommerfeld type for the evolution
system (\ref{dtgamma}-\ref{dtZ}), which can be consistently
imposed in order to obtain true solutions of Einstein's field
equations. Notice that the extra terms in the definition
(\ref{Zki}) consist of ordering constraints, which would not
appear in a second order in space formulation.
\end{itemize}
\subsubsection{Boundary conditions implementation.}
The third step in the programme is to use the resulting values
($\Theta^{(boun)}$, $Z_i^{(boun)})$, as computed from
(\ref{Thetabound}, \ref{Zboun}), in order to obtain four of the
main system's incoming fields at the boundary. This process is not
free from ambiguities, such as the choice of a suitable basis for the
dynamical fields.
For a symmetric hyperbolic evolution system, one could find a
(positive definite) quadratic form which would provide a metric
for the space of dynamical fields. The natural choice would be
then to build an orthogonal basis of dynamical fields containing
both $\Theta$ and $Z_i$ (or some equivalent combinations).
Imposing boundary conditions would then consist in prescribing the
values (\ref{Thetabound}, \ref{Zboun}) for these fields, while
leaving the remaining ones unchanged.
But the evolution system (\ref{dtgamma}-\ref{dtZ}) is not
symmetric hyperbolic. This means that we do not have a unique
prescription for imposing the boundary conditions, since we
have many ways of selecting an appropriate set of dynamical fields
at the boundary. A convenient starting point in this case is to
replace the original basis
\begin{equation}\label{Zbasis}
(\Theta\,,~K_{ij}\,,~Z_i\,,~D_{kij}\,,~A_i)
\end{equation}
by one which is more adapted to the characteristic decomposition
at the boundary, namely
\begin{equation}\label{Vbasis}
(\Theta\,,~\tilde{K}_{ij}\,,~Z_i\,,
~D_{\bot ij}\,,~\tilde{D}_{n\bot\bot}\,,~V_i\,,~D_i\,,~A_i)\,,
\end{equation}
where the symbol $\bot$ replacing an index means the projection
orthogonal to $\vec{n}$. We denote by $\tilde{D}_{n\bot\bot}$
the traceless part of $D_{n\bot\bot}$ and
\begin{equation}\label{tildeK}
\tilde{K}_{ij} \equiv K_{ij}-\frac{\Theta}{2}~\gamma_{ij}~.
\end{equation}
Notice that the quantities $D_{nn\bot}$, ${\rm tr}\,(D_{n\bot\bot})$ do not
appear explicitly in the new basis. These components must be
computed instead from $(Z+V)_i$ and the $D_{\bot ij}$ components.
Allowing for the definition (\ref{defVk}), we actually get
\begin{eqnarray}\label{DtZperp}
D_{nn\bot} &=& D_\bot
-h^{rs}\,D_{rs\bot} - (Z+V)_\bot \qquad\\
\label{DtZn}
h^{rs}D_{nrs} &=&
h^{rs}\,D_{rsn} + (Z+ V)_n\,,
\end{eqnarray}
where $h^{rs}$ stands for the (inverse) metric on the boundary
surface, namely
\begin{equation}\label{2metric}
h^{rs} \equiv \gamma^{rs}-n^rn^s~.
\end{equation}
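As an elementary check of (\ref{2metric}) (a flat 3-metric and a coordinate-aligned normal are assumed purely for illustration), $h^{rs}$ annihilates the normal and acts as the identity on vectors tangent to the boundary:

```python
# Elementary check of the boundary projector (flat 3-metric and a z-aligned
# normal are assumed purely for illustration): h^{rs} = gamma^{rs} - n^r n^s
# annihilates the normal and acts as the identity on tangent vectors.
import numpy as np

gamma_inv = np.eye(3)                 # flat inverse 3-metric (assumption)
n = np.array([0.0, 0.0, 1.0])         # unit outward normal to a constant-z face
h = gamma_inv - np.outer(n, n)        # the (inverse) boundary metric

t = np.array([0.3, -1.2, 0.0])        # an arbitrary tangent vector (t . n = 0)
print(np.allclose(h @ n, 0.0), np.allclose(h @ t, t))   # True True
```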
The new basis (\ref{Vbasis}) has been chosen in such a way that,
as we can easily verify, the values of ($\Theta$, $Z_i$) appear in
only eight eigenfields (four characteristic cones), namely:
\begin{eqnarray}\label{E-bound}
E^{\pm} &=& \Theta \pm V^n\,, \\
\nonumber
L_{n\bot}^{\pm} &=& \tilde{K}_{n\bot}
\pm [~\frac{1}{2}\;(A_\bot+D_\bot- 2 ~Z_\bot)
\\ \qquad &~& -~{\frac{\zeta+1}{2}}\,D_{\bot nn}
+~{\frac{\zeta-1}{2}}\, h^{rs}\,D_{rs\bot}~]\,,
\label{L-nAbound}\\ \nonumber
\label{L-trbound}
L^{\pm}&=&
h^{rs}\tilde{K}_{rs} \pm [~Z_n
- \zeta ~h^{rs}D_{rsn}~]\,,
\end{eqnarray}
where we have noted
\begin{equation}\label{Ltilde}
L^{\pm} \equiv h^{rs}L_{rs}^{\pm} - E^{\pm}\,.
\end{equation}
In order to set up the required four boundary conditions, we will
simply replace the original values for $(\Theta~,Z_i)$ by
$(\Theta^{(bound)}~,Z^{(bound)}_i)$, while leaving the other
fields in the basis (\ref{Vbasis}) unchanged. To be more specific:
\begin{itemize}
\item The original values for $(\Theta~,Z_i)$ are replaced by
$(\Theta^{(bound)}~,Z^{(bound)}_i)$, as computed from
(\ref{Thetabound}, \ref{Zboun}), respectively. This amounts,
modulo some linear combinations with tangent fields (transverse
derivatives), to prescribing the first term in $E^\pm$ and the
second terms in $(L_{n\bot}^{\pm},L^\pm)$.
\item The values of their ``counterpart'' fields
$(V_n\,,~\tilde{K}_{n\bot}\,,~h^{rs}\tilde{K}_{rs})$ are not
changed by the boundary conditions.
\end{itemize}
It is clear then that the values of the four characteristic cones
(\ref{E-bound},~\ref{L-nAbound},~\ref{Ltilde}) have been
prescribed in such a way that the four equations
(\ref{Thetabound}, \ref{Zboun}) hold true at the selected
boundary.
A further source of ambiguity comes from the prescription of the
remaining incoming eigenfields (the gauge and the transverse
traceless ones). We will use here a convenient generalization of
the maximally dissipative boundary conditions, namely
\begin{equation}\label{G-bound}
\partial_t~G^- = 0\,, \qquad
\partial_t~[~L_{\bot\bot}^{-}
-\frac{1}{2}~(h^{rs}\,L_{rs}^{-})~\gamma_{\bot\bot}~] = 0\,,
\end{equation}
although we are aware that more sophisticated choices could be
required in physical applications.
\section{Constraints stability}
The final step in the proposed programme is to check the stability
of the constraint-preserving boundary conditions
(\ref{Thetabound}, \ref{Zboun}, \ref{G-bound}).
Notice however that the main evolution system is just strongly
hyperbolic, but not symmetric hyperbolic (at least not in the
generic case~\cite{Z4}). This means that the Majda-Osher
theory~\cite{MO75} can not be directly applied, and the same is
true for the Secchi theorems~\cite{Secchi96}. This is why we will
check the stability of (\ref{Thetabound}, \ref{Zboun},
\ref{G-bound}) by other methods, both at the theoretical and the
numerical level.
From the theoretical point of view, the well-known Fourier-Laplace
method~\cite{KL89} could provide necessary conditions for
stability~\cite{CS03}. We prefer here a simpler approach, by
analyzing the system of equations verified by the dynamical fields
at the boundary. We will call it the modified system in order to
distinguish it from the original evolution system, which is being
used at the interior points. We will see that this approach
provides some insight about the behavior at the boundary points.
The drawback is that boundary points form just the outermost layer
of the computational domain. It follows that the modified system
analysis has to be considered at this stage just as a heuristic
approach, so that the stability of the boundary conditions must be
confirmed by other means. More details are provided in
Ref.~\cite{Foundations}.
\subsection{The modified system approach}
For the sake of clarity, let us focus first on the subset of
dynamical fields spanned by $(\Theta,~V_i)$. As stated in the
previous Section, the boundary conditions do not affect any of
the $V_i$ components. This means that the boundary values of $V_i$
verify the main evolution system equations, namely
\begin{equation}
1/\alpha~\partial_t~V_i + \partial_i~\Theta
= \cdots \label{dtV}
\end{equation}
The original equation (\ref{dtTheta}) for $\Theta$, however, no
longer holds at the boundary, where one is imposing instead the
advection equation (\ref{Thetabound}). This means that, even at
the continuum level, the evolution system is being modified at the
boundaries. The modified system for the subset of dynamical fields
($\Theta$,~$V_i$) is given by (\ref{Thetabound}, \ref{dtV}).
The modified subsystem (\ref{Thetabound}, \ref{dtV}) has real
non-negative characteristic speeds along any direction $\vec{r}$,
oblique to $\vec{n}$. They are actually
\begin{equation}\label{speeds_energy}
\{~0,~\alpha~(\vec{n}\cdot \vec{r})~\}\,.
\end{equation}
It follows that (the principal part of) the modified subsystem
(\ref{Thetabound}, \ref{dtV}) can be interpreted on physical
grounds as describing the outwards propagation of both $\Theta$
and $V_i$ at the boundary.
We can push one step further our analysis by considering the
particular case in which $\vec{r}\,$ is tangent to the boundary,
that is orthogonal to $\vec{n}$. In this case the speeds
(\ref{speeds_energy}) are fully degenerate, and a non-diagonal
coupling term remains in (\ref{dtV}), so that the modified
subsystem is just weakly hyperbolic. This has some relevant
consequences. Let us assume for instance that $\vec{n}$ is aligned
with the $x$ coordinate axis and that we get a static profile for
$\Theta$ of the form
\begin{equation}\label{Thetaprof}
\Theta = g(y,z)~,
\end{equation}
which trivially satisfies equation (\ref{Thetabound}). The
derivative coupling in (\ref{dtV}) allows then modes in the $V_y$
and $V_z$ components which grow in time in a linear way. These
linearly growing modes will actually show up in numerical tests,
as we will see below.
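These statements can be verified directly on the principal symbol. The following sketch (our own numerical check; the $4\times 4$ matrix encodes only the principal part of (\ref{Thetabound}, \ref{dtV}) for the ordered fields $(\Theta, V_x, V_y, V_z)$) recovers the speeds (\ref{speeds_energy}) for an oblique direction, and shows that for a tangent direction the symbol is a nonzero nilpotent matrix, that is, only weakly hyperbolic:

```python
# Our own numerical check (not from the paper's code) of the principal symbol
# of the modified (Theta, V_i) subsystem at the boundary:
#     d_t Theta = -alpha (n . grad) Theta ,   d_t V_i = -alpha d_i Theta + ...
# For a plane wave along a unit direction r, the symbol is the matrix M below,
# acting on the ordered fields (Theta, V_x, V_y, V_z).
import numpy as np

def symbol(n, r):
    M = np.zeros((4, 4))
    M[0, 0] = np.dot(n, r)   # advection of Theta along the normal
    M[1:, 0] = r             # derivative coupling of Theta into V_i
    return M

n = np.array([1.0, 0.0, 0.0])                    # boundary normal along x

# Oblique direction: speeds {0 (triple), n.r}, as quoted in the text.
r = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
speeds = sorted(np.linalg.eigvals(symbol(n, r)).real)
print(abs(speeds[3] - np.dot(n, r)) < 1e-12)     # True

# Tangent direction: all speeds collapse to zero, but the symbol is a nonzero
# nilpotent matrix, so there is no complete eigenbasis: weak hyperbolicity.
Mt = symbol(n, np.array([0.0, 1.0, 0.0]))
print(np.allclose(np.linalg.eigvals(Mt), 0.0) and np.count_nonzero(Mt) > 0)  # True
```

The nontrivial Jordan block in the tangent case is exactly what produces the linearly growing modes discussed above.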
The analysis of the full modified system can be simplified by
writing down (the principal part of) the modified evolution
equations for the combinations corresponding to incoming modes of
the original system. Allowing for (\ref{Thetabound}, \ref{Zboun}),
we have
\begin{eqnarray}\label{Emod}
1/\alpha~\partial_t~E^- &=& 0 \\
\nonumber
1/\alpha~\partial_t~L_{n\bot}^{-} &=&
h^{rs}\partial_r[~D_{ns\bot}-D_{sn\bot}
+\frac{\zeta-1}{2}~\tilde{K}_{s\bot}] \\
\nonumber &-&~\partial_\bot\,[~\frac{\zeta+1}{2}~\tilde{K}_{nn}
-\frac{f+1}{2}~{\rm tr}\,\tilde{K}+A_n\qquad\\
&~& \qquad +~V_n -\frac{(3-2m)f+1}{4}~\Theta~] \\
\nonumber
1/\alpha~\partial_t~L^{-} &=&
-h^{rs}\partial_r\,[~D_{nns} - D_{snn}
+ \zeta\,\tilde{K}_{ns}\\
&~& \qquad +~ A_s + V_s~]\,,
\label{Lmod}
\end{eqnarray}
plus the trivial evolution equations (\ref{G-bound}) for the gauge
and the transverse traceless incoming modes.
Notice that only derivatives tangent to the boundary appear in the
modified system equations (\ref{Emod} - \ref{Lmod}) for the
incoming modes. This means that all the characteristic speeds
along the longitudinal direction $\vec{n}$ are real and
non-negative: they are actually
\begin{equation}\label{charmod}
\{~0,~\alpha,~\alpha\sqrt{f}~\}\,.
\end{equation}
The corresponding eigenvectors are either standing fields ($v=0$):
\begin{equation}\label{modstanding}
A_\bot\,,~~D_{\bot ij}\,,~~A_k-f D_k+f m
V_k\,,~~E^-\,,~~L^-_{ij}\,,~~G^-
\end{equation}
or outgoing fields ($v=\alpha,~\alpha\sqrt{f}$):
\begin{equation}\label{modoutgoing}
\Theta\,,~~Z_i\,,~~\tilde{L}^+_{\bot\bot}\,,~~G^+.
\end{equation}
These fields span the whole dynamical space; the modified system
is then strongly hyperbolic along the direction $\vec{n}$ normal
to the boundary.
Computing the characteristic speeds along a generic direction
$\vec{r}$, oblique to $\vec{n}$, and for an arbitrary value of the
ordering parameter, is a much harder task, even using an algebraic
computing program. We have just checked the particular cases
\begin{equation}\label{zetachoices}
\zeta = 0\,,~\pm 1
\end{equation}
and we have found that the modified system is at least weakly
hyperbolic (real characteristic speeds) only in the $\zeta=0$
case. This suggests that the $\zeta=0$ case, corresponding to a
symmetric ordering of the space derivatives, could be free of
boundary instabilities, as we will confirm below.
\subsection{The robust stability test}
The robust stability numerical test~\cite{Mexico1} amounts to
considering small perturbations of Minkowski space-time, which are
generated by taking random initial data for every dynamical field
in the system. The level of the random noise must be small enough
to make sure that we remain in the linear regime even for
hundreds of crossing times (the time that a light ray will take to
cross the longest way along the numerical domain). We are taking
advantage in this way of the peculiar nature of the Einstein's
equations, where the principal part is quasilinear and the
non-principal (source) terms are quadratic in the dynamical
fields. Checking the linear regime of Einstein's equations amounts
then to testing the behavior of their principal part.
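A minimal sketch of this setup (the field list, the resolution and the noise amplitude below are our illustrative assumptions, not the parameters of Ref.~\cite{Mexico1}):

```python
# Minimal sketch of robust-stability initial data: Minkowski background plus
# small uniform random noise on every dynamical field. The field list, the
# resolution and the noise amplitude here are illustrative assumptions, not
# the actual parameters of the published test.
import random

random.seed(0)
N = 16            # grid points per axis (toy resolution)
eps = 1e-10       # noise amplitude: small enough to stay in the linear regime

def noisy(background):
    """A flat background value plus uniform noise on an N^3 grid."""
    return [[[background + random.uniform(-eps, eps)
              for _ in range(N)] for _ in range(N)] for _ in range(N)]

fields = {"alpha": noisy(1.0)}                    # lapse around 1
for i in range(3):
    for j in range(i, 3):                         # symmetric index pairs
        fields[f"gamma_{i}{j}"] = noisy(1.0 if i == j else 0.0)  # flat 3-metric
        fields[f"K_{i}{j}"] = noisy(0.0)          # vanishing extrinsic curvature

# Quadratic (source) terms are then of order eps^2 ~ 1e-20, so the evolution
# effectively probes only the (linear) principal part.
max_dev = max(abs(v) for plane in fields["K_00"] for row in plane for v in row)
print(max_dev <= eps)   # True
```

With a noise level of order $10^{-10}$, any growth observed over hundreds of crossing times can be attributed to the principal part, which is exactly what the test is designed to probe.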
This test has been previously used~\cite{Z48} to check
numerically the stability properties of the Z4 evolution system
for interior points. In order to avoid boundary effects, the grid
had the topology of a three-torus, with periodic boundaries along
every axis. We will now open the $x$ faces and impose the
constraint-preserving boundary conditions there, while keeping
periodic boundary conditions along the other two axes.
\begin{figure}[t]
\begin{center}
\epsfxsize=8cm
\epsfbox{fig1.ps}
\end{center}
\caption{$L_{\infty}$ norm of $trK$ for different values of the
ordering parameter $\zeta$ and constraint-preserving boundary
conditions along one single direction (periodic boundaries along
the other two). The values of $\zeta$ are shown with an interval
of $0.5$ for the sake of clarity, although the survey has been
made with a finer interval of $0.1$. The stable range is given by
$\zeta$ values in the interval $[-0.5,0]$. The decrease of the
norm in the stable regions is due to the dissipative boundary
conditions (\ref{G-bound}) for the gauge modes.}\label{1face}
\end{figure}
We show in Fig.~\ref{1face} the $L_{\infty}$ norm of $trK$ for
different values of the ordering parameter $\zeta$. A spacing
$\Delta\zeta=0.5$ is used in the plot for the sake of clarity,
although the numerical survey has been made with a finer spacing
of $\Delta\zeta=0.1$. Our results show that the
constraint-preserving boundary conditions (\ref{Thetabound},
\ref{Zboun}) are stable if and only if $\zeta\,$ is in the range
$[-0.5,0]$. The behavior is the same in all the stable regions:
the different values of $\zeta$ just determine when the
instabilities (if any) will appear.
Notice that the robust stability analysis predicted the appearance of
linearly growing modes related to the transverse $V_i$ components.
In terms of the original basis, one can expect to see these modes
in the quantities $D_{xxy}$, $D_{xxz}$, which are derived from
$V_y$, $V_z$ by the relationships (\ref{DtZperp}, \ref{DtZn}),
respectively. We can see for instance in Fig.~\ref{1faceD} a
linearly growing mode in the $L_{\infty}$ norm of $D_{xxy}$. This
confirms that the modified system analysis can be useful to
anticipate the behavior of the boundary conditions under the
robust stability test. Further evidence in this direction is
provided in the Appendix B, where maximally dissipative boundary
conditions are considered.
\begin{figure}[t]
\begin{center}
\epsfxsize=8cm
\epsfbox{Dxxx_chi0_Zrad2.ps}
\end{center}
\caption{Same as Fig.~\ref{1face}, but now for the $L_{\infty}$
norm of $D_{xxy}$ and $\zeta=0$. We plot here the results
corresponding to three different resolutions, focusing on the
first $10$ crossing times. Notice that this is a logarithmic plot,
so that the resolution-independent linear growth with unit slope
actually corresponds to a linearly growing dynamical
mode.}\label{1faceD}
\end{figure}
The robust stability test is also useful for checking the
constraint-preserving character of the proposed boundary
conditions (\ref{Thetabound}, \ref{Zboun}). Since true
Einstein solutions are recovered by setting both
$\Theta$ and $Z_i$ to zero, the values of these quantities can be
considered good indicators of constraint violations. We can
monitor the norm of these quantities to check whether constraint
violations are being injected into the computational domain
through the open boundaries.
We can see in Fig.~\ref{Zradevol} that this is actually not the
case. The values of $\Theta$ and $Z_i$ are not growing at all,
contrary to what happens to the $D_{xxy}$ components, as seen in
Fig.~\ref{1faceD}. Moreover, their norm is diminishing: we can
understand this decrease by noticing that the boundary
conditions (\ref{Thetabound}, \ref{Zboun}) are, modulo some
coupling with ordering constraints, advection equations. This
means that the values of $\Theta$ and $Z_i$ are just
flowing out of the computational domain through the open boundary.
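This outflow mechanism can be illustrated with a toy model. The one-dimensional upwind advection sketch below (grid size, speed, and noise amplitude are arbitrary choices, not taken from the paper's code) shows how an advected quantity with zero inflow simply drains through the open boundary:

```python
import numpy as np

N = 100
c, dx = 1.0, 1.0 / 100   # advection speed and grid spacing
dt = 0.5 * dx / c        # time step with CFL factor 0.5

u = 1e-3 * np.random.default_rng(1).random(N)  # random initial "constraint" data
u0_norm = np.max(np.abs(u))

for _ in range(400):     # about two crossing times
    u[1:] -= c * dt / dx * (u[1:] - u[:-1])  # first-order upwind advection
    u[0] = 0.0                               # nothing enters through the inflow boundary

# After the data has been advected out through the open boundary, the norm collapses.
assert np.max(np.abs(u)) < 1e-6 * u0_norm
```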
\begin{figure}[h]
\begin{center}
\epsfxsize=8cm \epsfbox{fig3_n1.ps}
\end{center}
\caption{Same as in the previous figures, but now the norms of
$\Theta$ and $Z_x$ are plotted in order to monitor constraint
violations. No growth can be seen, confirming that constraint
violations are not being injected through the
boundary.}\label{Zradevol}
\end{figure}
\section{Discussion and outlook}
One can wonder whether the multi-dimensional character of the
problem is lost when applying boundary conditions to just one
face. This is not the case: the $x=\mathrm{constant}$ boundary surfaces are
actually surfaces, not just points, so that oblique modes are
still present and can lead to instabilities, as actually occurs
when $\zeta=\pm 1$. The only thing we are avoiding in this way is
applying constraint-preserving boundary conditions to corner
points (which are assigned here to the $y$ and $z$ boundary
surfaces, where periodic boundary conditions are applied). If we
try to apply conditions (\ref{Thetabound} - \ref{Zboun}) along
every space direction, then instabilities appear even for the
$\zeta=0$ choice of the ordering parameter.
One could argue as well that the difficulties with corners and
edges could be related to the fact that the main evolution
system has just a strongly hyperbolic, but not symmetric
hyperbolic, principal part. This is not the case, as shown
in Appendix B, where boundary conditions of the maximally
dissipative type are successfully implemented and applied to all
the boundary points of a Cartesian-like numerical grid, including
corners and edges.
Our results are very similar to those of Ref.~\cite{SW03}, where
the same test is applied to reflection boundary conditions: a
stable linearly growing mode is detected, which becomes unstable
only when the boundary conditions are applied also to the other
faces, so that the numerical grid gets corners and edges. Our work
can then be understood as an extension (in a different formalism)
of the results of Ref.~\cite{SW03} to boundary conditions of the
Sommerfeld type.
In our opinion, the main problem at corner points comes from the
inconsistency inherent in the choice of a (unique) normal
direction there. Different faces get different normal vectors, but
corner points belong to two different faces at the same time. This
is not just a theoretical caveat: corners and edges pose a real
problem in practical applications, where more work should be done
along any of the following lines:
\begin{itemize}
\item Devising a specific treatment for corner points. The
correct implementation of constraint-preserving boundary
conditions in the presence of corners is still an unsolved issue.
For symmetric hyperbolic evolution systems, using finite
difference operators satisfying the summation by parts rule with
respect to a diagonal scalar product leads to stable schemes with
maximally dissipative boundary conditions~\cite{CLRST04, CN04}. In
our case, with a strongly hyperbolic evolution system, we have
obtained the same result, as presented in Appendix B.
But these results do not extend to constraint-preserving boundary
conditions. A major difficulty is that compatibility conditions
between boundary data at adjacent faces need to be satisfied if
one wishes to obtain smooth solutions. Necessary conditions for
continuous solutions have been derived in Ref.~\cite{CPRST03} for
a symmetric hyperbolic evolution system. But more conditions are
needed in order to obtain smooth solutions. Compatibility issues
are also present at corner points between initial and boundary
data (see for instance Chapter 9 in Ref.~\cite{GKO95}).
\item Building numerical grids with smooth boundaries (without
corners and edges), so that the constraint-preserving boundary
conditions (\ref{Thetabound} - \ref{Zboun}) can be applied
consistently in a stable way, as we have confirmed numerically by
means of the robust stability test. The construction of
``multi-patch'' numerical schemes, which would allow for smooth
boundaries, has become a major research topic in Numerical
Relativity. See for instance Refs.~\cite{Thorn04, CN04prep}.
\end{itemize}
\renewcommand{\theequation}{A.\arabic{equation}}
\setcounter{equation}{0}
\section*{Appendix A: Characteristic decomposition of
the Z4 first order system}
Let us consider the propagation of perturbations with wavefront
surfaces given by the unit (normal) vector $n_i$. We can then write the
principal part of the Z4 first order system in matrix form
\begin{equation}\label{B_dtu}
\frac{1}{\alpha}~\partial_t~\mathbf{u}
~+~ \mathbf{M} ~n^k\partial_k ~\mathbf{u} = \cdots~,
\end{equation}
where $\mathbf{u}$ is the full array of dynamical fields
(\ref{uvector}). Notice that derivatives tangent to the wavefront
surface play no role here.
A straightforward analysis of the characteristic matrix $\mathbf
{M}$ provides the following list of eigenfields~\cite{Z48}:
\begin{itemize}
\item Standing eigenfields (zero eigenvalues)
\begin{equation}\label{B_EF0}
\alpha\,,~~ \gamma_{ij}\,,~~A_\perp\,,~~
D_{\perp ij}\,,~~A_k-f D_k+f m V_k\,,
\end{equation}
where the symbol $\perp$ replacing an index means the projection
orthogonal to $n_i$
\begin{equation}\label{B_DAij}
D_{\perp ij} \equiv D_{kij} - n_k n^r D_{rij}.
\end{equation}
\item Light-cone eigenfields (eigenvalues $\pm \alpha $)
\begin{eqnarray}\label{B_EFLij}
{L^{\pm}}_{ij} &\equiv& [K_{ij}-n_i n_j ~{{\rm tr}\,}K ]
\nonumber \\
&~& \pm~[{\lambda^n}_{ij} - n_i n_j~{{\rm tr}\,}\lambda^n]
\\
\label{B_EFL}
E^{\pm} &\equiv& \Theta \pm V^n~,
\end{eqnarray}
where the symbol $n$ replacing the index means the contraction
with $n_i$
\begin{equation}
\lambda^n_{ij}~\equiv~n_k~\lambda^k_{ij}\qquad V^n~\equiv~n_k~V^k.
\end{equation}
\item Gauge eigenfields (eigenvalues $\pm \alpha \sqrt{f}$)
\begin{equation}\label{B_EFG}
G^{\pm} \equiv \sqrt{f} \left[~{{\rm tr}\,}K -\mu\,\Theta ~\right]
\pm \left[~A^n + (2-\mu)\,V^n ~\right]
\end{equation}
where we have noted for short
\begin{equation}\label{muparam}
\mu \equiv \frac{f m-2}{f-1}~.
\end{equation}
In the degenerate case ($f=1$), one must have $m=2$, so that the
value of $\mu$ is not fixed. The degeneracy allows for any
combination with (\ref{B_EFL}), as expected.
\end{itemize}
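The projection (\ref{B_DAij}) can be checked numerically. The following sketch, with arbitrary random data, verifies that $D_{\perp ij}$ is indeed orthogonal to $n_i$ in its projected index:

```python
import numpy as np

rng = np.random.default_rng(2)

# Unit normal vector n_i and an arbitrary field D_{kij}.
n = rng.random(3)
n /= np.linalg.norm(n)
D = rng.random((3, 3, 3))

# D_perp_{kij} = D_{kij} - n_k n^r D_{rij}
D_perp = D - np.einsum('k,r,rij->kij', n, n, D)

# Contracting the projected field with n^k gives zero.
assert np.allclose(np.einsum('k,kij->ij', n, D_perp), 0.0)
```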
\renewcommand{\theequation}{B.\arabic{equation}}
\setcounter{equation}{0}
\section*{Appendix B: Maximally dissipative boundary conditions}
A convenient generalization of the maximally dissipative boundary
conditions can be implemented by just imposing the vanishing of
(the time derivatives of) all the incoming modes, that is
\begin{equation}
\label{maxdiss}
\partial_t~E^{-}
=~ \partial_t~L_{ij}^{-} =~ \partial_t~G^{-} = 0~.
\end{equation}
The principal part of the modified system is much simpler than
in the constraint-preserving case. It is, by construction,
strongly hyperbolic along the direction $\vec{n}$ normal to the
boundary, with characteristic speeds given again by
(\ref{charmod}).
We will compute the characteristic speeds along a generic
direction $\vec{r}$, oblique to $\vec{n}$, where the vector
$\vec{r}$ is related to $\vec{n}$ by
\begin{equation}\label{omega}
\vec{r} = ~\vec{n} ~\cos\varphi + \vec{s} ~\sin\varphi\,,
\end{equation}
and we have taken
\begin{equation}\label{omega_cond}
\vec{n}^{~2} = \vec{s}^{~2} = 1\,, \qquad \vec{n} \cdot \vec{s} = 0\,.
\end{equation}
The hyperbolicity requirement amounts to demand that all the
resulting characteristic speeds be real for any value of the angle
$\varphi$.
The trivial equations (\ref{maxdiss}) provide $7$ (remember that
$L_{ij}^{~\pm}$ is traceless) standing eigenfields (zero
characteristic speed) of the modified system. Another set of $17$
standing eigenfields is given by
\begin{equation}\label{ADV}
A_p~, \qquad D_{p\,ij}~, \qquad A_k-f D_k+fm\, V_k~,
\end{equation}
where $\vec{p}~$ is the direction orthogonal to both vectors
$\vec{n}$ and $\vec{s}$.
The remaining $14$ dynamical fields can be grouped into the
following sectors:
\begin{itemize}
\item \textbf{Energy sector} \{$E^+,~V_s$\}. The corresponding
evolution equations are (principal part only):
\begin{eqnarray}\label{bound_dV} \nonumber
\frac{1}{\alpha}~\partial_t~V_s &=&
- \sin\varphi~\partial_s ~\Theta =
- \frac{1}{2}~\sin\varphi~\partial_r~E^+ \cdots\\ \nonumber
\frac{1}{\alpha}~\partial_t~E^+ &=&
- \partial_r[~V_r + \Theta~\cos\varphi~] \\ \nonumber
&=& - \partial_r[~E^+ \cos\varphi + V_s ~\sin\varphi+\cdots~]\,,
\label{bound_dE}\end{eqnarray} where the dots stand for coupling
terms with the standing eigenfields (which are irrelevant for the
eigenvalues calculation). It follows that the characteristic
speeds are given by the solutions of the algebraic equation
\begin{equation}\label{lambdaE}
\lambda(\lambda-\alpha~\cos\varphi) =
\frac{1}{2}\,\alpha^2~\sin^2\varphi~,
\end{equation}
so that real characteristic speeds are obtained for every
value of $\varphi$.
\item \textbf{Gauge sector} \{$G^+,~A_s$\}. The corresponding
evolution equations are (principal part only):
\begin{eqnarray}\nonumber
\frac{1}{\alpha}~\partial_t A_s &=&
- \sin\varphi~\partial_s [~f({\rm tr}\,K - m~\Theta)] \\ \nonumber
&=& - \frac{1}{2}\sqrt{f}\,\sin\varphi\,
\partial_r~G^+ +~\cdots \label{bound_dA}\\ \nonumber
\frac{1}{\alpha}~\partial_t~G^+ &=&
- \partial_r[~\sqrt{f}\,A_r + f~\cos\varphi~{\rm tr}\,K~] \\ \nonumber
&=& - \sqrt{f}\,\partial_r[~G^+~\cos\varphi~
+ A_s ~\sin\varphi ~+ \cdots]\label{bound_dG}
\end{eqnarray}
where the dots stand for coupling terms with the previous
sectors. It follows that the characteristic speeds are
given by the solutions of the algebraic equation
\begin{equation}\label{lambdaG}
\lambda(\lambda-\alpha~\sqrt{f}\,\cos\varphi) =
\frac{1}{2}\,f~\alpha^2~\sin^2\varphi~,
\end{equation}
so that, allowing for the positivity of the gauge parameter $f$,
real characteristic speeds are obtained again for every value
of $\varphi$.
\item \textbf{Metric sector} \{$L_{ij}^{~+},~D_{sij}$\}. The
corresponding evolution equations can be written as (principal
part only)
\begin{eqnarray}\nonumber
\frac{1}{\alpha}&\partial_t& D_{sij} =
- \sin\varphi~\partial_s ~K_{ij}
= - \frac{1}{2}\,\sin\varphi\,
\partial_r[~L_{ij}^{~+}+\cdots~] \label{bound_dD}\\
\frac{1}{\alpha}&\partial_t&L_{ij}^{~+} =
- \partial_r[~\lambda^r_{ij} + \cos\varphi~K_{ij} + \cdots\\ \nonumber
&~& - \frac{1+\zeta}{2}~(r_i K_{nj}+r_j K_{ni} -
n_i K_{rj} - n_j K_{ri}) ~] \\ \nonumber
&=& - \partial_r[~L_{ij}^{~+}~\cos\varphi + D_{sij}~\sin\varphi
+ \cdots\\ \nonumber
&~& - \frac{1+\zeta}{2}~\sin\varphi~(s_i K_{nj}+s_j K_{ni} -
n_i K_{sj} - n_j K_{si}) \\ \nonumber
&~& - \frac{1+\zeta}{2}~\sin\varphi~(D_{ijs}+D_{jis} -
s_i E_{j} - s_j E_{i})~]\,,
\label{bound_dL}\end{eqnarray}
where the dots stand again for coupling terms with the previous sectors.
\end{itemize}
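Before turning to the metric sector in detail, note that the realness claims for the energy and gauge sectors can be verified directly. The sketch below (with illustrative values of $\alpha$ and $f$, chosen arbitrarily) checks that the discriminants of the quadratics (\ref{lambdaE}) and (\ref{lambdaG}) stay nonnegative for every angle $\varphi$:

```python
import numpy as np

alpha, f = 1.0, 2.0            # illustrative lapse and gauge parameter (f > 0)
phi = np.linspace(0.0, np.pi, 1801)

# Energy sector: lambda^2 - alpha*cos(phi)*lambda - (1/2)*alpha^2*sin^2(phi) = 0
disc_E = (alpha * np.cos(phi))**2 + 2.0 * alpha**2 * np.sin(phi)**2

# Gauge sector: lambda^2 - alpha*sqrt(f)*cos(phi)*lambda - (1/2)*f*alpha^2*sin^2(phi) = 0
disc_G = f * alpha**2 * np.cos(phi)**2 + 2.0 * f * alpha**2 * np.sin(phi)**2

# Both discriminants are nonnegative, so the characteristic speeds are real for all angles.
assert np.all(disc_E >= 0.0) and np.all(disc_G >= 0.0)
```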
The evolution equation (\ref{bound_dL}) for these outgoing
``metric'' fields contains (unless $\zeta = -1$) crossed coupling
terms that complicate the analysis. One gets three variants of the
same algebraic equation
\begin{eqnarray}
\lambda(\lambda-\alpha~\cos\varphi) &=&
\frac{1}{2}~\alpha^2~\sin^2\varphi \\
\lambda(\lambda-\alpha~\cos\varphi) &=&
\frac{1}{2}~\alpha^2~\sin^2\varphi ~\left[1-\left(\frac{1+\zeta}{2}\right)^2\right]
\qquad\\
\lambda(\lambda-\alpha~\cos\varphi) &=&
\frac{1}{2}~\alpha^2~\sin^2\varphi ~[1-(1+\zeta)^2]\,,
\label{lambda_conds}
\end{eqnarray}
depending on the particular set of components considered (the last
two equations appear twice, so that one gets $10$ characteristic
speeds that complete the full set of $38$). The most restrictive
is the last one (\ref{lambda_conds}): it implies that we get
complex characteristic speeds for some values of $\varphi$ unless
\begin{equation}\label{zeta_cond}
\zeta \leq 0\,,
\end{equation}
so that the standard ordering case ($\zeta=+1$) is excluded.
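This stability bound can be cross-checked numerically. The sketch below scans the discriminant of the last equation (\ref{lambda_conds}) over $\varphi$ (with $\alpha=1$, an illustrative choice): it is nonnegative for all angles when $\zeta\leq 0$, but turns negative for $\zeta=+1$:

```python
import numpy as np

def min_discriminant(zeta, alpha=1.0):
    """Minimum over phi of the discriminant of
    lambda^2 - alpha*cos(phi)*lambda - (1/2)*alpha^2*sin^2(phi)*[1 - (1+zeta)^2] = 0."""
    phi = np.linspace(0.0, np.pi, 1801)
    disc = (alpha * np.cos(phi))**2 \
         + 2.0 * alpha**2 * np.sin(phi)**2 * (1.0 - (1.0 + zeta)**2)
    return disc.min()

# Real characteristic speeds for zeta in [-1, 0] ...
assert min_discriminant(0.0) >= 0.0
assert min_discriminant(-0.5) >= 0.0
assert min_discriminant(-1.0) >= 0.0

# ... but complex speeds (negative discriminant) for the standard ordering zeta = +1.
assert min_discriminant(1.0) < 0.0
```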
\begin{figure}[t]
\begin{center}
\epsfxsize=8cm
\epsfbox{fig5.ps}
\end{center}
\caption{Same as Fig.~\ref{1face}, but now with maximally
dissipative boundary conditions enforced along every axis. Again,
the values of $\zeta$ are shown with an interval of $0.5$ for the
sake of clarity, although the survey has been made with a finer
interval of $0.1$. It confirms that the $\zeta \leq 0$ choices are
stable, as expected from the modified system analysis. Notice
that, in the stable cases, the decrease after $100$ crossing times
is more than one order of magnitude greater than in
Fig.~\ref{1face}. This shows the effect of the maximally
dissipative boundary conditions: they are actually dissipating all
the dynamical fields.}\label{bound_strong}
\end{figure}
We can check these results by using again the robust stability
test-bed. We will enforce the maximally dissipative conditions
(\ref{maxdiss}) along every axis in a Cartesian-like numerical
grid, including corners and edges. We will survey the values of
the $\zeta$ parameter in the interval $[-1,1]$, with a spacing
$\Delta\zeta=0.1$.
\begin{figure}[t]
\begin{center}
\epsfxsize=8cm \epsfbox{Zdisevol.ps}
\end{center}
\caption{Same as in the previous figure, but now the norms of
$\Theta$ and $Z_x$ are plotted in order to monitor constraint
violations. Their decay indicates that the constraint-violating
modes are diminishing, even much faster than in
Fig.~\ref{Zradevol}. But now the reason is a different one: the
boundary conditions are dissipating all the dynamical fields.
}\label{Zdisevol}
\end{figure}
We show in Fig.~\ref{bound_strong} the time evolution of the
maximum of the absolute value of $tr K$ (a spacing
$\Delta\zeta=0.5$ is used in the plot for the sake of clarity).
Our results show that the positive choices of the ordering
parameter are actually unstable, whereas the choices in the range
$[-1,0]$ are stable and behave in the same way. Notice that the
norm of $tr K$ in Fig.~\ref{bound_strong} is decreasing, in the
stable cases, much faster than in the constraint-preserving case
(Fig.~\ref{1face}). This cannot be explained just by the fact
that boundary conditions are now being applied along the three
coordinate axes: the boundary conditions are actually dissipating
all the dynamical fields.
We show in Fig.~\ref{Zdisevol} the time evolution of the maximum
of the absolute value of both $\Theta$ and $Z_i$ for the symmetric
ordering ($\zeta=0$) case. Since their values are diminishing,
one can conclude that no constraint-violating modes are being
produced at the boundaries. But notice that this comes at the price
of dissipating all the dynamical fields. One cannot then conclude
that maximally dissipative boundary conditions are
constraint-preserving: constraint-related fields are flowing out
through the boundaries in the same way as the remaining degrees of
freedom.
{\em Acknowledgements: This work has been supported by the EU
Programme ``Improving the Human Research Potential and the
Socio-Economic Knowledge Base'' (Research Training Network Contract
HPRN-CT-2000-00137), by the Spanish Ministry of Science and
Education through the research project number FPA2004-03666 and by
a grant from the Department of Innovation and Energy of the
Balearic Islands Government.}
\bibliographystyle{prsty}
\section{\ Introduction} \label{sec.intro}
Let $p$ be an odd prime. Based on integrality results for the quantum
$SO(3)$-invariant of closed oriented $3$-manifolds
\cite{Mu2,MR}, an
integral TQFT-functor ${\mathcal{S}_p}$ was defined in \cite{G} and
\cite{GMW}. It associates to a closed surface ${\Sigma}$ a free
lattice\footnote{A lattice over a Dedekind domain is a finitely generated
torsion-free module.
In general, a lattice need not be free,
and the freeness of the lattices ${\mathcal{S}_p}({\Sigma})$ is a non-trivial fact.}
${\mathcal{S}_p}({\Sigma})$ over the cyclotomic ring
\begin{equation*}\label{cyclo}
{\mathcal{O}}=\left\{
\begin{array}{cl}
{\mathbb{Z}}[\zeta_p] &\text{if $p \equiv -1 \pmod{4}~,$} \\
{\mathbb{Z}}[\zeta_{p},i]={\mathbb{Z}}[\zeta_{4p}] &\text{if $p \equiv 1 \pmod{4}~,$}\\
\end{array}\right.
\end{equation*}
where $\zeta_n$ is a primitive $n$-th root of unity.
The lattice ${\mathcal{S}_p}({\Sigma})$ carries an ${\mathcal{O}}$-valued non-degenerate
hermitian form which will be denoted by $(\ ,\ )_{\Sigma}$. Here,
non-degenerate means that the form induces an injective adjoint map
${\mathcal{S}_p}({\Sigma})\rightarrow {\mathcal{S}_p}({\Sigma})^*$. The lattice ${\mathcal{S}_p}({\Sigma})$ also carries
a linear representation of an
appropriately extended
mapping class group of ${\Sigma}$; moreover, this representation preserves the hermitian form on ${\mathcal{S}_p}({\Sigma})$.
The integral lattices ${\mathcal{S}_p}({\Sigma})$ have a natural definition in terms of the
vector-valued quantum $SO(3)$-invariants for $3$-manifolds with boundary (see
below). Their existence thus reflects interesting
structural properties of quantum invariants and TQFT's.
The main aim of the present paper is to give an explicit description of
these lattices for surfaces of arbitrary genus,
possibly disconnected,
by describing bases for them.
Here, we allow surfaces to be equipped with a (possibly empty) collection of
colored banded points (a banded point is an
embedded
oriented arc), where a color is an integer $i\in\{0,1,\ldots, p-2\}$.
But we require the sum of colors on the colored points of any component
of a surface to be even (this is a feature of the $SO(3)$-theory we
consider). Previously, explicit bases of ${\mathcal{S}_p}({\Sigma})$ were known
only in the case where the surface ${\Sigma}$
is connected and
has genus one or two with no colored points \cite{GMW}.
The bases we find display some nice ``graph-like'' structure which we
believe should generalize to other TQFT's, at least
for those associated to integral modular categories as defined in \cite{MW}.
One can ask
whether the graph-like bases we have found might be related to canonical bases in representation theory.
An interesting new feature is that the usual tensor
product axiom of TQFT holds only with some modification for the
lattices ${\mathcal{S}_p}({\Sigma})$: it turns out that the lattice associated to a
disjoint union of surfaces is sometimes bigger than the tensor product
of the lattices associated to the individual components, although this
phenomenon does not happen for surfaces without colored points.
We remark that Kerler \cite{K} has announced an integral
version
of the $SO(3)$ TQFT at $p=5$, but details have not yet appeared.
Chen and Le have recently constructed integral TQFTs from quantum
groups \cite{CL, C}, although without describing explicit bases for the
modules associated to surfaces.
We give two topological applications of our theory.
In Section \ref{sec.cut}, we prove a conjecture which relates the cut number of a $3$-manifold $M$ to
divisibility properties of its quantum invariants. The cut number is the same as the co-rank of $\pi_1(M)$. Thus this result relates quantum invariants to the fundamental group of a $3$-manifold.
Our second application concerns the
Frohman and Kania-Bartoszynska ideal of $3$-manifolds with boundary \cite{FK}, which can be used to show that one $3$-manifold does not embed in another. This ideal is hard to compute directly, except in very special circumstances, as its definition involves the quantum invariants of infinitely many $3$-manifolds.
However our integral bases allow us to give an explicit finite set of generators for this ideal (see Section \ref{sec.FKB}). We apply this method to exhibit a family of examples of
$3$-manifolds with the homology of a solid torus which cannot embed in $S^3.$
The integral TQFT-functor ${\mathcal{S}_p}$ is a refinement of the
$SO(3)$-TQFT-functor $V_p$ constructed in \cite{BHMV2}, which in turn
is, in some sense, an alternative version of a special case of the
Reshetikhin-Turaev theory \cite{RT} of quantum invariants of $3$-manifolds. In
particular, the lattice ${\mathcal{S}_p}({\Sigma})$ is defined in \cite{G,GMW} as an
${\mathcal{O}}$-submodule of the TQFT-vector space $V_p({\Sigma})$. The latter is
defined over the quotient field of ${\mathcal{O}}$ and exists also when $p$ is
not prime. In fact, $V_p({\Sigma})$ can be defined (and is a free module) already over the ring ${\mathcal{O}}[\frac 1 p]$, see \cite{BHMV2},
and this is the version of $V_p$ that we refer to in the rest of the paper.
As a submodule of $V_p({\Sigma})$, the lattice ${\mathcal{S}_p}({\Sigma})$
is simply defined as the ${\mathcal{O}}$-span of the
vectors associated to $3$-manifolds $M$ with boundary $\partial M
={\Sigma}$, with the condition that $M$ should have no closed
components. It is clear from this definition that the extended mapping
class group acts on ${\mathcal{S}_p}({\Sigma})$. The fact that ${\mathcal{S}_p}({\Sigma})$ is a free
lattice of rank equal to the dimension of $V_p({\Sigma})$ is shown in
\cite{G}. The hermitian form $(\ ,\ )_{\Sigma}$ is obtained by rescaling
the natural form on $V_p({\Sigma})$ given by gluing manifolds together
along their boundary and computing the invariant of the closed
manifold so obtained. That such a rescaling is possible depends
crucially on the integrality result for this invariant of closed
$3$-manifolds due
to H.~Murakami \cite{Mu2} and Masbaum-Roberts
\cite{MR} (see \cite{G} for more details).
\footnote{Using deep results of Habiro, integrality of the $SO(3)$-invariant for all $3$-manifolds
and any odd $p$ was proven in 2006 by Beliakova-Le
\cite{BL} (see also Le \cite{L}).}
Bases of the free ${\mathcal{O}}[\frac 1 p]$-module $V_p({\Sigma})$ are well
understood in terms of admissible colorings of uni-trivalent graphs. Here,
any uni-trivalent graph which is the spine of a handlebody with boundary
${\Sigma}$, and with the univalent vertices meeting ${\Sigma}$ in
the colored points, may be used. The ${\mathcal{O}}$-span of such a
{\em graph basis} is a sublattice of ${\mathcal{S}_p}({\Sigma})$, but this sublattice
is almost never invariant under the mapping class
group, and hence cannot be equal to the whole integral lattice
${\mathcal{S}_p}({\Sigma})$. One might hope that a basis of ${\mathcal{S}_p}({\Sigma})$ could be
obtained by rescaling the elements of a graph basis in some way, but
this is not
the case. Still, the situation is actually rather nice. We will show that
the lattice ${\mathcal{S}_p}({\Sigma})$ admits what we call {\em graph-like} bases
associated to a special kind of uni-trivalent graph which we call a
{\em lollipop tree}. Roughly speaking, a graph-like basis is obtained
from the usual graph basis associated to the lollipop tree by taking certain linear combinations,
followed by some overall rescaling depending on the colors. The nice
thing is that the linear
combinations are taken independently in each handle. For precise
definitions and a statement of the result in the case of connected
surfaces, see
Section~\ref{sec.thm}.
We remark here that
for connected surfaces,
${\mathcal{S}_p}({\Sigma})$ has a simple skein-theoretical description shown in \cite{GMW}, namely as the ${\mathcal{O}}$-span
in $V_p({\Sigma})$
of banded links and graphs in a handlebody colored in a certain way.
This description will be given below in
Proposition \ref{vgraph}.
For the purpose of the present paper, this description
can
be taken as the definition of ${\mathcal{S}_p}({\Sigma})$.
Moreover, our method of constructing a basis will give an independent proof that ${\mathcal{S}_p}({\Sigma})$ is indeed a free lattice.
For disconnected surfaces, we describe a basis of ${\mathcal{S}_p}({\Sigma})$ in Section
\ref{discon.sec},
where we also discuss the modified tensor product axiom.
In the case of surfaces of genus one and two without colored
points, the natural hermitian form $(\ ,\ )_{\Sigma}$ was shown in
\cite{GMW} to be
unimodular (here, unimodular means that the adjoint map is not only
injective but is an isomorphism).
As already observed there, this property no longer holds in higher
genus.
It is then natural to consider the dual lattice ${{\mathcal{S}}_p^{\sharp}}({\Sigma})$, defined as
\[ {{\mathcal{S}}_p^{\sharp}}({\Sigma}) =\{x\in
V_p({\Sigma})\
|\ (x,y)_{\Sigma}\in {\mathcal{O}} \text{ for
all } y\in {\mathcal{S}_p}({\Sigma})\}~. \]
Note that ${\mathcal{S}_p}({\Sigma})\subset {{\mathcal{S}}_p^{\sharp}}({\Sigma})$, with equality if and only if the form is unimodular.\footnote{It is easy to check that since ${\mathcal{S}_p}({\Sigma})$ is a
free
lattice, so is ${{\mathcal{S}}_p^{\sharp}}({\Sigma})$. In fact, ${{\mathcal{S}}_p^{\sharp}}({\Sigma})$ is isomorphic
as an ${\mathcal{O}}$-module to the dual of
the conjugate of
${\mathcal{S}_p}({\Sigma})$, justifying the terminology.}
The main result about the dual lattices $ {{\mathcal{S}}_p^{\sharp}}({\Sigma})$ is that they also admit
graph-like bases. In fact, such bases can be described as rescalings of
graph-like bases for ${\mathcal{S}_p}({\Sigma})$ (see again Section~\ref{sec.thm} for a
precise statement). Moreover, the dual lattices play an important
role in the proof that our bases are indeed bases,
as the proof
proceeds by studying the two lattices simultaneously.
The dual lattice ${{\mathcal{S}}_p^{\sharp}}({\Sigma})$ is, of course, also preserved by the
action of the mapping class group. Note that when one expresses the
mapping class group representation on $ {\mathcal{S}_p}({\Sigma})$ or on ${{\mathcal{S}}_p^{\sharp}}({\Sigma})$ in our integral bases, all matrix
coefficients will be algebraic integers.
One also obtains a representation
by isometries
(of the associated torsion ``linking form'')
on the quotient ${{\mathcal{S}}_p^{\sharp}}({\Sigma})/{\mathcal{S}_p}({\Sigma})$, which admits a simple
description,
at least if $p\equiv -1\pmod{4}$:
When ${\Sigma}$ is connected and has no colored points,
${{\mathcal{S}}_p^{\sharp}}({\Sigma})/{\mathcal{S}_p}({\Sigma})$
is a skew-symmetric inner product space over the finite field ${\mathbb{F}}_p$.
If $p\equiv 1\pmod{4}$, a similar statement holds with a little more
effort for a refined theory, see
Section~\ref{torsion.sec}
for details.
It should be mentioned, however, that here we merely begin the study of these
representations of mapping class groups on torsion modules; we hope to
return to this matter elsewhere.
{\em Acknowledgments.} We would like to thank the following for hospitality and support:
the visiting experts program in mathematics of the Louisiana Board of Regents (LEQSF(2002-04)-ENH-TR-13),
Knots in Poland 2003 at The Banach Center and at the Mathematical Research and Conference Center in
Bedlewo, the Mini-Workshop: Quantum Topology in Dimension Three at
Mathematisches Forschungsinstitut Oberwolfach, and also the
Mathematics Institute of Aarhus University and the MaPhySto network of
the Danish National Research Foundation.
{\em Note added March 2006.} As a further application of the integral lattices ${\mathcal{S}_p}({\Sigma})$, it will be shown in \cite{M?} that when one expresses the mapping class group representations in our graph-like bases associated to lollipop trees, then the representations (at least when restricted to the Torelli group) have a perturbative limit as $p\rightarrow \infty$, in much the same sense as Ohtsuki's power series invariant \cite{Oh} of homology spheres is a limit of the quantum $SO(3)$-invariants at roots of prime order.
{\em Notations and Conventions.}
Throughout the paper,
$p\geq 5$
will be a
prime integer, and we
put $d=(p-1)/2$.
\footnote{We assume $p\geq 5$ because the color $2$, which will play an important
role in our construction, does not exist for $p=3$. But the theory
for $p=3$ is trivial anyway \cite{BHMV2}.}
Recall that our ground ring ${\mathcal{O}}$ contains a
primitive $p$-th root of unity $\zeta_p$. We define $h\in {\mathcal{O}}$ by
\begin{equation} h=1-\zeta_p~. \end{equation} One has that $h^{p-1}$ is
a unit times $p$, so that ${\mathcal{O}}[\frac 1 p]={\mathcal{O}}[\frac 1 h]$.
For skein
theory purposes, we put $A= -
\zeta_p^{d+1}$. This is a primitive $2p$-th root of
unity such that $A^2=\zeta_p$. The quantum integer $[n]$ is defined by $[n]=(\zeta_p^{n}-\zeta_p^{-n})/(\zeta_p-\zeta_p^{-1})$.
We also fix ${\mathcal{D}}\in{\mathcal{O}}$ such that
\begin{equation} \label{defD} {\mathcal{D}}^2 = \frac{-p }{(\zeta_p-\zeta_p^{-1})^2}~.
\end{equation} One has that
$\mathcal{D}$ is a unit times $h^{d-1}.$
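These facts are easy to verify numerically for a given prime. The sketch below takes $p=5$ and checks that the norm of $h=1-\zeta_p$ equals $p$ (the reason $h^{p-1}$ is a unit times $p$, since the factors $1-\zeta_p^k$ are associates of $h$), and that the right-hand side of (\ref{defD}) is a positive real number, because $(\zeta_p-\zeta_p^{-1})^2=-4\sin^2(2\pi/p)$:

```python
import cmath, math

p = 5
zeta = cmath.exp(2j * cmath.pi / p)  # primitive p-th root of unity

# Norm of h = 1 - zeta_p: the product of (1 - zeta^k) over k = 1, ..., p-1 equals p.
norm_h = 1.0
for k in range(1, p):
    norm_h *= (1 - zeta**k)
assert abs(norm_h - p) < 1e-9

# D^2 = -p / (zeta - zeta^{-1})^2 is a positive real number: p / (4 sin^2(2 pi / p)).
D2 = -p / (zeta - zeta**(-1))**2
assert abs(D2.imag) < 1e-9 and D2.real > 0
assert abs(D2.real - p / (4 * math.sin(2 * math.pi / p)**2)) < 1e-9
```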
Unless otherwise stated all manifolds that we consider are assumed to be compact and oriented.
If a surface ${\Sigma}$ is equipped with a particular collection of colored
banded points, we denote the latter by $\ell({\Sigma}).$
When writing $V_p({\Sigma})$, ${\mathcal{S}_p}({\Sigma})$, {\em etc.}, we let ${\Sigma}$ stand for the surface equipped with its given colored
points.
As usual in TQFT, our surfaces and $3$-manifolds are equipped with
an additional structure to resolve the ``framing anomaly''. As it is
well-known
how to do this \cite{BHMV2,W,T}, and the additional structure is
basically irrelevant for integrality questions, we postpone further
discussion of
this ``framing'' issue until Section~\ref{mcg.sec} where some details
of the construction will be needed.
\section{\ Skein theory and the definition of ${\mathcal{S}_p}({\Sigma})$}\label{skein.sec1}
Unless otherwise stated,
by skein module we will mean the Kauffman Bracket skein module over ${\mathcal{O}}[\frac 1 p]={\mathcal{O}}[\frac 1 h]$.
Recall that this skein module of a $3$-manifold $M$
is the free ${\mathcal{O}}[\frac 1 h]$ module on the banded links in $M$
modulo the well-known Kauffman relations and isotopy. We denote this
module by
$K(M).$ Elements of $K(M)$ can be described by colored trivalent
banded graphs, which can be expanded to linear combinations of banded
links in the familiar way. In particular, a strand colored, say $a$,
is replaced by $a$ parallel strands with the Jones-Wenzl idempotent,
denoted $f_a$, inserted. This is a particular linear combination of $a$-$a$ tangles, and is sometimes denoted as a rectangle with $a$ inputs on each of the long sides. See for instance \cite{BHMV2}.
We also need to consider the relative
skein module of a $3$-manifold whose boundary is equipped with some
banded colored points $\ell(\partial M)$. Here one takes the free
${\mathcal{O}}[\frac 1 h]$-module on those links
(or rather, tangles)
which are expansions of colored
graphs which meet the boundary nicely in
the colored points.
In this case
we only use isotopy relative to the boundary in the relations. This
relative skein module is denoted $K(M, \ell(\partial M))$.
An element of this module is often denoted $(M,L)$ where $L$ stands
for the colored link or graph in $M$.
Suppose the surface
${\Sigma}$ is
equipped with a (possibly empty) collection of colored points
$\ell({\Sigma})$.
The ${\mathcal{O}}[\frac 1 p]$-module $V_p({\Sigma})$ can be described as follows.
For any
$3$-manifold $M$ with boundary ${\Sigma}$,
there is a surjective
map from $K(M,\ell({\Sigma}))$ to $V_p({\Sigma})$.\footnote{There is no
connectivity hypothesis on $M$ since $V_p$ satisfies the tensor
product axiom. Here we use our assumption that the sum of the
colors of the banded points on every connected component of every
surface is even.}
The image of the skein class represented by $(M,L)$ is denoted
$[(M,L)]$. Here we think of $(M,L)$ as a cobordism from $\emptyset$ to
${\Sigma}.$
Suppose $M'$ is a second
$3$-manifold with boundary ${\Sigma}$, then
a sesquilinear form
$\langle \ ,\ \rangle_{M, M'}: K(M,\ell({\Sigma})) \times K(M',\ell({\Sigma})) \rightarrow {\mathcal{O}}[\frac 1 p]$ is defined by \[ \langle(M,L), (M',L')\rangle_{M, M'} =
\langle (M \cup_{\Sigma} -M', L \cup_{\ell({\Sigma})} -L')\rangle_p~.
\] Here
$\langle \ \rangle_p$ denotes the quantum invariant of a
closed $3$-manifold,
and the minus sign indicates reversal of orientation.
The kernel of the map $K(M,\ell({\Sigma})) \rightarrow V_p({\Sigma})$ is the left
radical of the form $\langle \ ,\ \rangle_{M, M'}$. Moreover,
this form
induces the canonical
nonsingular hermitian form
$\langle \ ,\ \rangle_{{\Sigma}}:V_p({\Sigma}) \times V_p({\Sigma}) \rightarrow
{\mathcal{O}}[\frac 1 p]$
(which is independent of $M$ and $M'$).
All the results of this paragraph appear in
\cite{BHMV2}.
\begin{de}\label{defS}{\em Given a closed surface ${\Sigma}$ with possibly a collection
of colored banded points $\ell({\Sigma})$, we define ${\mathcal{S}_p}({\Sigma})$ to be the
${\mathcal{O}}$-submodule of $V_p({\Sigma})$ generated by all vectors $[(M,L)]$
where
$M$ is any $3$-manifold with boundary ${\Sigma}$ having no closed connected components, and the colored graph $L\subset M$ meets ${\Sigma}$ nicely in $\ell({\Sigma})$.
}\end{de}
As shown in \cite{GMW}, ${\mathcal{S}_p}({\Sigma})$ has a skein-theoretical
description, as follows. First, we let $v$ denote the skein element in $K(S^1 \times D^2)$ described by
$h^{-1} ( 2+ z)$ where $z$ is $S^1 \times \{0\}$ with
standard framing.
Thus $v$ denotes a skein class and also the element
in
$V_p(
S^1 \times S^1)$ which
it
represents, depending on context.
(The $v$ used in \cite{GMW} differs by a unit from the $v$ used
here.)
Coloring a link component $v$ is shorthand for replacing this component by the linear combination $h^{-1} ( 2+ z)$ and expanding linearly.
Next, by a {\em $v$-graph} in a $3$-manifold $M$, possibly with some colored
points in the boundary, we will mean a banded colored graph in $M$
which agrees with $\ell(\partial M)$ on the boundary, together with
possibly some other banded link components which have been colored
$v$. By \cite[Prop.~5.6 and Cor.~7.5]{GMW}, we have the following:
\begin{prop} \label{vgraph} Suppose ${\Sigma}$ is a connected surface.
Choose a connected $3$-manifold $M$
with boundary ${\Sigma}$. Then ${\mathcal{S}_p}({\Sigma})$ is generated over ${\mathcal{O}}$ by elements represented by $v$-graphs in $M$ which meet the boundary in the colored points of ${\Sigma}$.
\end{prop}
For connected surfaces, the above can be taken as an alternative
definition of ${\mathcal{S}_p}({\Sigma})$.
There is also a version of this for surfaces which are not connected,
but we delay stating it until Section~\ref{discon.sec}, where it is
needed.
\section{\ Lollipop trees and the small graph basis of $V_p({\Sigma})$}
We let ${\Sigma}$ denote the boundary of a
genus $g$ handlebody $H_g$ and fix a particular collection of colored
banded points in ${\Sigma}$ which we denote by $\ell({\Sigma}).$
A basis of $V_p({\Sigma})$ can be described by
the $p$-admissible
(see below)
colorings of any
uni-trivalent
banded graph $G$ having the same homotopy type as
the handlebody $H_g$
and which meets the boundary at $\ell({\Sigma})$
in the univalent vertices of $G$.
Moreover, the colorings
should extend the given colorings at these banded colored points
\cite{BHMV2}.
As usual, a colored graph with an edge colored zero is identified with the same graph without that edge (similarly for zero-colored points in $\ell({\Sigma})$).
For example, we can take the graph in Figure \ref{t} when $g=5$ and $\ell({\Sigma})$ has 6 points.
\begin{figure}[h]
\includegraphics[width=2in]{graphy/t.eps}
\caption{The graph $G$ for $g=5$ and six colored points} \label{t}
\end{figure}
The banding of this graph lies in the plane.
The points $\ell({\Sigma})$ are depicted
at the bottom of the diagram. The x's in this and later diagrams denote holes in $H_g$.
A $p$-admissible coloring of $G$ is an assignment of colors to the edges of $G$ such that at every vertex of $G$, the three colors $i,j,k$, say, meeting at that vertex satisfy the conditions
\begin{eqnarray*}\label{adm1} i+j+k &\equiv& 0 \pmod 2\\
\label{adm2} |i-j|\ \leq& k & \leq \ i+j\\
\label{adm3} i+j+k &\leq& 2p-4=4d-2~.
\end{eqnarray*}
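For instance, for $p=5$ (so $d=2$) the color triple $(2,2,2)$ is $5$-admissible at a vertex, since
\[
2+2+2\equiv 0 \pmod 2,\qquad |2-2|\leq 2\leq 2+2,\qquad 2+2+2=6\leq 2p-4=6~,
\]
whereas $(3,3,2)$ is not, since $3+3+2=8>2p-4=6$.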
To a $p$-admissible coloring of $G$ one associates in the usual way a skein element in the handlebody $H_g$, by replacing the edges of $G$ with appropriate Jones-Wenzl idempotents. Identifying now the boundary of the handlebody with our surface ${\Sigma}$, this skein element defines in turn a vector in $V_p({\Sigma})$.
The vectors associated to $p$-admissible colorings where, in addition,
the colors
satisfy a parity condition, form a basis of $V_p({\Sigma})$
\cite[Theorem 4.14]{BHMV2}.\footnote{In the exceptional case where ${\Sigma}$ is a two-sphere, with
only one
even-colored
point, there is no such graph. Thus
$V_p({\Sigma})$ is zero if the color is nonzero, and ${\mathcal{O}}[\frac 1 p]$ if the color is zero.}
In Proposition~\ref{2.1} below, we will describe a different basis of $V_p({\Sigma})$, where the parity
condition is replaced by a ``smallness'' condition. For this, we must
restrict our graph to be a lollipop tree, defined as follows.
\begin{de}{\em Let $G$ be a uni-trivalent graph as above. Let $g$ be
the first Betti number of $G$ and let
${s}$ be the number of its
univalent vertices. (Of course, $g$ is the genus of $\Sigma$ and
${s}$ is the number of colored points in $\ell({\Sigma})$.) Then $G$ is
called a {\em lollipop tree} provided it satisfies the following
conditions.
\begin{itemize}
\item[(i)] $G$ has exactly $g$ loop edges, so that the complement of
these loop edges in $G$ is a tree $T$.
\item[(ii)] If ${s}>0$, there must be a single edge of the tree $T$
called
the {\em trunk edge} (or simply the {\em trunk})
with the property that if we remove the interior of the trunk from $T$,
we obtain the disjoint union of two trees: one which meets every loop
edge and one which
contains every univalent vertex. (If ${s}=1$, this second tree consists of a single point.)
\end{itemize}
}\end{de}
Note that a lollipop tree is not actually a tree.
We chose to call it so because of the special case where there is only one loop edge and the tree $T$ consists of just one edge; in this case the graph $G$ looks somewhat like a lollipop.
For example, the
graph
of Figure \ref{t} is a lollipop tree.
\begin{prop}\label{2.1} The vectors associated to $p$-admissible
colorings of a lollipop tree $G$, where the loop edges are assigned colors in the
interval $[0,d-1]$, form a basis of $V_p({\Sigma})$.
\end{prop}
We will refer to this basis as the {\em small graph basis} and denote it by ${\mathcal{G}}$. Here, the adjective ``small'' refers to the colors on the loop edges.
\begin{proof}
By \cite[Theorem 4.14]{BHMV2}, we have a basis by taking all
$p$-admissible colorings with even colors on the loop edges. (Observe
that the parity of the colors of the edges of the sub-graph
$T$ is imposed by the colors of $\ell({\Sigma})$.) Lemma 8.2 of \cite{GMW}
shows how to replace even colors on the loop edges by small ones,
{\em i.e.,} colors in $[0,d-1]$. Strictly speaking, that lemma dealt with
the case $g=2$, ${s}=0$, but the same argument works in general. \end{proof}
\section{\ Bases for ${\mathcal{S}_p}({\Sigma})$ and ${{\mathcal{S}}_p^{\sharp}}({\Sigma})$ for connected surfaces}\label{sec.thm}
We are now ready to state our results concerning graph-like bases of
${\mathcal{S}_p}({\Sigma})$ and ${{\mathcal{S}}_p^{\sharp}}({\Sigma})$. Fix a lollipop tree $G$ and define $g$ and
${s}$ as in the preceding
section.
We will use the following labelling of the edges of $G$. Recall that
the complement of the loop edges of $G$ is a tree $T$. We call an edge
of $T$ {\em a stick edge} if it is incident with a loop edge of $G$.
An edge is called {\em ordinary} otherwise.
We denote their colors by $2a_1, \ldots, 2a_g$ for the stick edges
(which will always have an even color), and by
$c_1,c_2, \ldots $
for
the ordinary edges. Here
\begin{equation} \label{smallbasis1} 0\leq a_i \leq d-1~.
\end{equation}
The case $g=2,{s}=0$ is special as there is only one stick edge. In this
case we put $a_1=a_2$ but both $a_1$ and $a_2$ are to be entered in
Eqs.~(\ref{basedef0}),
(\ref{basedef}), and (\ref{basedefsharp}) below.
The color of the trunk is always
even; we denote it by $2e$. If ${s}=0$,
we also set $e$ to be zero. We note that the trunk is usually an
ordinary edge but it is a stick edge when $g=1$ and
${s}\geq 1$.
We denote the colors of the loop edges by $a_i+b_i$ ($i=1,\ldots,
g$). Here, the loop edge colored $a_i+b_i$ is incident to the
stick
edge of $T$ labelled $2a_i$.
Note $b_i\geq 0$ by $p$-admissibility. Moreover, since loop edges should have small colors, we have
\begin{equation} \label{smallbasis2} 0\leq b_i\leq d-1-a_i~.
\end{equation}
The elements of the small graph basis ${\mathcal{G}}$ will be denoted by
${\mathfrak{g}}(a,b,c)$, where $a=(a_1,\ldots, a_g)$, $b=(b_1,\ldots, b_g)$, and
$c=(c_1,c_2, \ldots )$. The index set is precisely the set of
$(a,b,c)$ satisfying conditions \eqref{smallbasis1} and \eqref{smallbasis2}, and such that $(2a,c)$ is a $p$-admissible coloring of the tree $T$
extending the given coloring of $\ell({\Sigma})$.
We will refer to $(a,b,c)$ as a small coloring
of $G$.
\vskip 8pt
{\em The basis of ${\mathcal{S}_p}({\Sigma})$.} Our basis of ${\mathcal{S}_p}({\Sigma})$ will be denoted by ${\mathcal{B}}$. It consists of vectors ${\mathfrak{b}}(a,b,c)$ indexed by the same set as the small graph basis ${\mathcal{G}}$.
They are defined as follows. Recall that $h=1-\zeta_p$.
For $x\in \mathbb R$, we use the notation $\lfloor x
\rfloor$ (resp. $\lceil x \rceil$) to denote the greatest integer
$\leq x$ (resp. the smallest integer $\geq x$).
If $b=0$, the vector ${\mathfrak{b}}(a,0,c)$ is just a rescaling of ${\mathfrak{g}}(a,0,c)$:
\begin{equation} \label{basedef0} {\mathfrak{b}}(a,0,c) =
h^{- \lfloor \frac 1 2 (-e+\sum_i a_i)
\rfloor } {\mathfrak{g}}(a,0,c) \end{equation}
(Observe that one always has $e\leq \sum_i a_i$.)
If $b>0$, the vector ${\mathfrak{b}}(a,b,c)$
is conveniently described using multiplicative notation, as follows.
Think of the handlebody $H_g$ as $P\times I$ where $P$ is a $g$-holed disk thought of as a regular neighborhood of a planar embedding of the banded graph $G$. This endows the
absolute
skein module of $H_g$ with an algebra structure, where multiplication is given by putting one skein element on top of the other.
Similarly, relative skein modules of $H_g$ are modules over the absolute skein module.
Note that this multiplication is non-commutative in general. It also induces an algebra structure on $V_p({\Sigma})$ in the case where $\ell({\Sigma})$ is empty. Although these algebra/module structures are not canonically associated to the surface ${\Sigma}$, they are well-defined once
${\Sigma}$ has been identified with the boundary of $P\times I$
with all colored points, say, at level one-half.
Similarly, we obtain ${\mathcal{O}}$-algebra/module structures on the lattices ${\mathcal{S}_p}({\Sigma})$, which will be used throughout the paper.
For $i=1,\ldots,g$, let $z_i$ denote the skein element represented by a circle around the $i$-th hole of the $g$-holed disk $P$ and with framing parallel to $P$. We have
\[ z_i={\mathfrak{g}}((0,\ldots,0),(0,\ldots, 0,1,0,\ldots, 0), (0,\ldots ))~. \]
(The only non-zero entry of the $g$-tuple $b$ sits at the $i$-th place.) We define
\begin{equation*} v_i=h^{-1}(2+z_i) \end{equation*}
which is, of course, the same as $z_i$ cabled by the skein element $v$
defined in Section~\ref{skein.sec1}.
Using the
module
structure on ${\mathcal{S}_p}({\Sigma})$ discussed above, we can now define the elements of our basis ${\mathcal{B}}$ as follows:
\begin{eqnarray}\label{basedef} {\mathfrak{b}}(a,b,c) &=& v_1^{b_1} \cdots v_g^{b_g}
\,{\mathfrak{b}}(a,0,c)\\
\nonumber &=& h^{-\sum_i b_i - \lfloor \frac 1 2 ( -e +\sum_i a_i ) \rfloor } (2+z_1)^{b_1}\cdots (2+z_g)^{b_g} \,{\mathfrak{g}}(a,0,c)~. \end{eqnarray}
\begin{thm} \label{basis}${\mathcal{B}}$ is a basis of ${\mathcal{S}_p}({\Sigma})$.
\end{thm}
The proof will be completed in Section~\ref{proofsharp}. Note that it
is enough to show that these vectors lie in ${\mathcal{S}_p}({\Sigma})$ and generate it
over ${\mathcal{O}}$. As their number is equal to the dimension of
$V_p({\Sigma})$, they must then form a basis, and ${\mathcal{S}_p}({\Sigma})$ must be a free
lattice.
We call ${\mathcal{B}}$ a {\em graph-like} basis. Note that ${\mathfrak{b}}(a,b,c)$
is a linear combination of vectors ${\mathfrak{g}}(a,b',c)$ with $b'_i\leq b_i$
for all $i=1,\ldots, g$. Moreover, the multiplicative expression in Eq.~(\ref{basedef}) shows that the
linear combinations are taken independently in each handle. Observe
also that the rescaling factor is a power of $h$ whose exponent
depends on the
numbers
$a_i$ and $b_i$ and the trunk color $2e$, but
does not explicitly depend on the colors $c_i$.
In genus one with no colored points, the basis ${\mathcal{B}}$ is
up to units
the same as the second integral basis $\{1,v,\ldots,v^{d-1}\}$ of \cite{GMW}.
By functoriality,
it follows
that the vectors $v_i$ and their products lie in ${\mathcal{S}_p}({\Sigma})$. However, it is not obvious {\em a priori} that the vectors ${\mathfrak{b}}(a,0,c)$ lie in ${\mathcal{S}_p}({\Sigma})$. (Even in genus two the basis ${\mathcal{B}}$ is different from the one obtained in \cite{GMW}.)
\begin{cor} The extended mapping class group
(see Section~\ref{mcg.sec})
acts on ${\mathcal{S}_p}({\Sigma})$ in the basis ${\mathcal{B}}$ by matrices with coefficients
in the cyclotomic ring ${\mathcal{O}}$. Moreover, these matrices preserve a
non-degenerate ${\mathcal{O}}$-valued hermitian form.
\end{cor}
{\em The basis of ${{\mathcal{S}}_p^{\sharp}}({\Sigma})$.}
Recall that ${{\mathcal{S}}_p^{\sharp}}({\Sigma})\subset V_p({\Sigma})$ is the dual lattice to ${\mathcal{S}_p}({\Sigma})$ with respect to the ${\mathcal{O}}$-valued hermitian form $(\ ,\ )_{\Sigma}$. (We will review the
definition of this form in Section~\ref{pB}.) Our basis of ${{\mathcal{S}}_p^{\sharp}}({\Sigma})$ will be
denoted by ${\mathcal{B}}^{\sharp}$. Using
the
algebra/module
structure discussed above, we
define the elements of ${\mathcal{B}}^\sharp$ as follows:
\begin{equation}\label{basedefsharp} {\mathfrak{b}}^{\sharp}(a,b,c) = h^{-\sum_i b_i - \lceil \frac 1 2 ( e+\sum_i a_i ) \rceil } (2+z_1)^{b_1}\cdots (2+z_g)^{b_g} \,{\mathfrak{g}}(a,0,c)~, \end{equation}
where $(a,b,c)$ varies over the same index set as before. Note that the only
difference between Eqs.~(\ref{basedef}) and (\ref{basedefsharp}) is
that the trunk half-color $e$ appears with the opposite sign, and
$\lfloor \ \rfloor$ is replaced with $\lceil \ \rceil$. Thus
${\mathfrak{b}}^{\sharp}(a,b,c)$ is a rescaling of ${\mathfrak{b}}(a,b,c)$.
\begin{thm}\label{sharpbasis} ${\mathcal{B}}^\sharp$ is a basis of ${{\mathcal{S}}_p^{\sharp}}({\Sigma})$.
\end{thm}
The following sections until Section~\ref{proofsharp} will be devoted
to the proofs of Theorems~\ref{basis} and~\ref{sharpbasis}.
\section{\ The $3$-ball lemma.}\label{B3.sec}
Let $K_{\mathcal{O}}(D^3, \ell_{n, 2})$ denote the skein module of $D^3$
relative $n$ points in $S^2=\partial D^3$ colored $2$. Here, the
notation $K_{\mathcal{O}}$ means that we consider skein modules with
coefficients in ${\mathcal{O}}$, not in ${\mathcal{O}}[\frac 1 p]$. (But the result is
actually more general; see the remark at
the end of this section.) The following
result will be needed in
proving that the vectors ${\mathfrak{b}}(a,b,c)$ lie in
${\mathcal{S}_p}({\Sigma})$.
\begin{thm}[$3$-ball lemma]\label{skeind3} If $n$ is even, $K_{\mathcal{O}}(D^3, \ell_{n, 2})$ is generated by skein elements which can be represented by a collection of $n/2$ disjoint arcs colored $2$. If $n$ is odd, $K_{\mathcal{O}}(D^3, \ell_{n, 2})$ is generated by skein elements which can be represented by a collection of $(n-3)/2$ disjoint arcs colored $2$ together with one $Y$-shaped component, also colored $2$.
\end{thm}
Recall that the module $K_{\mathcal{O}}(D^3, \ell_{n, 2})$ is generated by
colored graphs which meet the boundary nicely in the given $n$
points colored $2$. We remark here that the required Jones-Wenzl
idempotents are all defined over ${\mathcal{O}}$, because the only
denominators needed in their definition
are
the quantum integers
$[n]$ ($n\leq p-2$) which
are invertible in ${\mathcal{O}}$.
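Explicitly, the idempotents may be computed from Wenzl's recursion (see, {\em e.g.,} \cite{KL}):
\[
f_1=1,\qquad f_{n+1}=f_n-\frac{[n]}{[n+1]}\,f_n e_n f_n~,
\]
where $e_n$ is the $n$-th Temperley-Lieb generator; the only denominators occurring are the quantum integers $[k]$ with $k\leq n+1$, which, as noted above, are invertible in ${\mathcal{O}}$ for $k\leq p-2$.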
The proof of Theorem~\ref{skeind3} proceeds by a series of lemmas.
The first is an exercise in skein theory whose proof is left to
the reader (represent elements of $K_{\mathcal{O}}(D^3, \ell_{n, 2})$ by diagrams in a disk and apply the usual fusion formulas \cite{KL,MV}).
\begin{lem}\label{L0} $K_{\mathcal{O}}(D^3, \ell_{n, 2})$ is generated by unions of tree graphs
where all edges are colored~$2$.
\end{lem}
In the remainder of this section, unless otherwise stated, unlabeled arcs in figures
are assumed to be colored $2$. As usual, we put $\delta = -A^2-A^{-2}$. The
following two lemmas are simple skein-theoretical calculations.
\begin{lem}\label{L1} $ (A^4 -1 + A^{-4})
\hskip.1in
\begin{minipage}{0.2in}\includegraphics[width=0.2in]{graphy/L0.eps}\end{minipage}
= A^{-4} \delta
\hskip.1in \begin{minipage}{0.2in}\includegraphics[width=0.2in]{graphy/hibar.eps}\end{minipage}
+ (1-A^{-4}) \ \delta
\hskip.1in \begin{minipage}{0.2in}\includegraphics[width=0.2in]{graphy/vibar.eps}\end{minipage}
+
\hskip.1in \begin{minipage}{0.2in}\includegraphics[width=0.2in]{graphy/L+.eps}\end{minipage}.$
\end{lem}
\begin{proof} We use the abbreviation:
$\begin{minipage}{0.2in}\includegraphics[width=0.2in]{graphy/x.eps}\end{minipage} = \begin{minipage}{0.2in}\includegraphics[width=0.2in]{graphy/xe.eps}\end{minipage}$, where the arcs in the right hand diagram are colored $1$.
To prove Lemma~\ref{L1}, it is enough to expand the right hand side using
\[\begin{minipage}{0.2in}\includegraphics[width=0.2in]{graphy/hibar.eps}\end{minipage}
= \hskip.1in
\begin{minipage}{0.2in}\includegraphics[width=0.2in]{graphy/x.eps}\end{minipage}
- \delta^{-1}
\begin{minipage}{0.2in}\includegraphics[width=0.2in]{graphy/Linf.eps}\end{minipage}
\quad \text{and \cite[p.35]{KL}} \quad
\begin{minipage}{0.2in}\includegraphics[width=0.2in]{graphy/L+.eps}\end{minipage}
= A^4 \hskip.1in
\begin{minipage}{0.2in}\includegraphics[width=0.2in]{graphy/L0.eps}\end{minipage}
+ A^{-4} \hskip.1in
\begin{minipage}{0.2in}\includegraphics[width=0.2in]{graphy/Linf.eps}\end{minipage}
-\delta \hskip.1in
\begin{minipage}{0.2in}\includegraphics[width=0.2in]{graphy/x.eps}\end{minipage}
\]
\end{proof}
\begin{lem}\label{L2} $
\begin{minipage}{0.2in}\includegraphics[width=0.2in]{graphy/vibar.eps}\end{minipage}
= (A^{4}-1) \delta^{-1}
\hskip.1in \begin{minipage}{0.2in}\includegraphics[width=0.2in]{graphy/L0.eps}\end{minipage}
+ A^{-4} \delta^{-1}
\hskip.1in \begin{minipage}{0.2in}\includegraphics[width=0.2in]{graphy/Linf.eps}\end{minipage}
- \delta^{-1}
\hskip.1in \begin{minipage}{0.2in}\includegraphics[width=0.2in]{graphy/L+.eps}\end{minipage}.$
\end{lem}
\begin{proof}
Rewrite the right hand side and simplify using the equations in the proof of Lemma \ref{L1}:
\[ \delta^{-1} \left( A^4 \hskip.1in \begin{minipage}{0.2in}\includegraphics[width=0.2in]{graphy/L0.eps}\end{minipage} + A^{-4} \hskip.1in \begin{minipage}{0.2in}\includegraphics[width=0.2in]{graphy/Linf.eps}\end{minipage} - \begin{minipage}{0.2in}\includegraphics[width=0.2in]{graphy/L+.eps}\end{minipage} \right) - \delta^{-1} \hskip.1in \begin{minipage}{0.2in}\includegraphics[width=0.2in]{graphy/L0.eps}\end{minipage} =
\begin{minipage}{0.2in}\includegraphics[width=0.2in]{graphy/x.eps}\end{minipage} - \delta^{-1} \hskip.1in \begin{minipage}{0.2in}\includegraphics[width=0.2in]{graphy/L0.eps}\end{minipage} = \begin{minipage}{0.2in}\includegraphics[width=0.2in]{graphy/vibar.eps}\end{minipage}.\]
\end{proof}
We refer to the diagram on the left hand side of Lemma \ref{L2} as an
I-bar. Using this lemma repeatedly to expand I-bars in tree graphs colored $2$,
we see from Lemma~\ref{L0} that
$K_{\mathcal{O}}(D^3, \ell_{n, 2})$ is generated by skein elements which can
be represented as disjoint unions of arcs colored $2$ and $Y$-shaped
graphs colored $2$. Here we use that $\delta=-[2]$ is invertible in ${\mathcal{O}}$.
The crucial step is now the following lemma which
shows how to replace two $Y$-shaped
graphs colored $2$ by three arcs; clearly this will be enough to
complete the proof of Theorem~\ref{skeind3}.
\begin{lem} The element \ \
$
\hskip.1in
\begin{minipage}{0.4in}\includegraphics[width=0.3in]{graphy/L0ex.eps}\end{minipage}$
of $K_{\mathcal{O}}(D^3, \ell_{6,2})$ is an ${\mathcal{O}}$-linear combination of diagrams
consisting of three arcs colored $2$. \end{lem}
\begin{proof}
Extending all the diagrams in Lemma \ref{L1} by the same wiring, we obtain:
\[ (A^4 -1 + A^{-4})
\hskip.1in
\begin{minipage}{0.3in}\includegraphics[width=0.3in]{graphy/L0ex.eps}\end{minipage}
= A^{-4} \delta
\hskip.1in \begin{minipage}{0.3in}\includegraphics[width=0.3in]{graphy/hibarex.eps}\end{minipage}
+ (1-A^{-4}) \delta
\hskip.1in \begin{minipage}{0.3in}\includegraphics[width=0.3in]{graphy/vibarex.eps}\end{minipage}
+
\hskip.1in \begin{minipage}{0.3in}\includegraphics[width=0.3in]{graphy/L+ex.eps}\end{minipage}.\]
Using Lemma \ref{L2} to expand two I-bars in the first diagram on the right hand side, two I-bars in the second diagram on the right hand side,
and one I-bar in the third diagram on the right hand side, we see that
the right hand side is an ${\mathcal{O}}$-linear combination of diagrams
consisting of $3$ arcs colored $2$. But $A^4 -1 + A^{-4}$ (which, up
to units, is the sixth cyclotomic polynomial in $A^4$) is easily
seen to be invertible
in ${\mathcal{O}}$. This proves the Lemma, and completes
the proof of Theorem~\ref{skeind3}.
\end{proof}
\begin{rem}{\em We never used that $A$ is a root of unity in this
proof. Thus the result also holds for the skein
module with coefficients in ${\mathbb{Q}}(A)$, the ring of rational
functions in $A$. In fact, it would be enough to work with the
subring ${\mathbb{Z}}[A,A^{-1}]$ with inverses of the relevant quantum
integers and of $A^4 - 1 +A^{-4}$ adjoined to it.
}\end{rem}
\section{\ Proof that ${\mathfrak{b}}(a,b,c)\in {\mathcal{S}_p}({\Sigma})$.}
We fix some notation and terminology. Let ${\Sigma}_g$ denote the boundary of the handlebody $H_g$ with no
colored points.
The colored graph that represents
${\mathfrak{g}}((1,1),(0,0))
\in {\mathcal{S}_p}({\Sigma}_2)$ is called an {\em eyeglass} and the colored graph that
represents
${\mathfrak{g}}((1,1,1), (0,0,0))\in {\mathcal{S}_p}({\Sigma}_3)$ is called a
{\em tripod.} See Figure~\ref{eyetri}.
\begin{figure}[h]
\includegraphics[width=2in]{graphy/eyetri2.eps}
\caption{An eyeglass and a tripod} \label{eyetri}
\end{figure}
\begin{lem}\label{eyeglasses} Eyeglasses and tripods are divisible by $h$ in ${\mathcal{S}_p}$.
\end{lem}
Since by definition ${\mathfrak{b}}((1,1), (0,0))=h^{-1} {\mathfrak{g}}((1,1), (0,0))$ and ${\mathfrak{b}}((1,1,1), (0,0,0))=h^{-1} {\mathfrak{g}}((1,1,1),
(0,0,0))$, it follows that ${\mathfrak{b}}((1,1), (0,0))
\in {\mathcal{S}_p}({\Sigma}_2)$ and
${\mathfrak{b}}((1,1,1), (0,0,0))
\in {\mathcal{S}_p}({\Sigma}_3)$.
\begin{proof} Recall that $z_{i}$ denotes a simple loop which encloses the
$i$th hole. Let $z_{i,j}$ denote a simple loop which encloses just
the $i$th and $j$th holes, etc. If we want these curves colored
$v$, we just change the $z$ to a $v$. A scalar stands for that
scalar times the empty
link.
If we expand the
idempotent
$f_2$ in ${\mathfrak{g}}((1,1),(0,0))$, we get $z_{1,2} +
[2]^{-1} z_1 z_2$. Making the substitution $z= hv-2$, we get
$${\mathfrak{g}}((1,1),(0,0))= h v_{1,2} + [2]^{-1} \left(h^2
v_1 v_2 -2 h v_1 -2h v_2 -2 \zeta_p^{-1} h^2 \right)~.$$
(We have used $2-[2]=2-\zeta_p-\zeta_p^{-1}=-\zeta_p^{-1}h^2$.)
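(For the reader's convenience: substituting $z_{1,2}=hv_{1,2}-2$ and $z_i=hv_i-2$ gives
\[
z_{1,2}+[2]^{-1}z_1z_2=(hv_{1,2}-2)+[2]^{-1}\bigl(h^2v_1v_2-2hv_1-2hv_2+4\bigr)~,
\]
and the constant terms combine as $-2+4[2]^{-1}=[2]^{-1}(4-2[2])=-2\zeta_p^{-1}h^2\,[2]^{-1}$, which yields the displayed formula.)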
As $[2]$ is a unit of ${\mathcal{O}}$ and $v$-graphs are in
${\mathcal{S}_p}$
(Proposition \ref{vgraph}), this shows ${\mathfrak{g}}((1,1),(0,0))$ is
divisible by $h$ in
${\mathcal{S}_p}({\Sigma}_2)$. The divisibility of tripods is proved in the same way.
\end{proof}
Now consider again an arbitrary connected surface ${\Sigma}$, possibly with colored points $\ell({\Sigma})$.
Fix a lollipop tree $G$ and consider the elements ${\mathfrak{b}}(a,b,c)$ defined
in Section~\ref{sec.thm}.
\begin{prop}\label{62} One has ${\mathfrak{b}}(a,b,c) \in {\mathcal{S}_p}({\Sigma})$.
\end{prop}
\begin{proof} As already observed in Section~\ref{sec.thm}, it is enough
to show this for $b=0$, since ${\mathfrak{b}}(a,b,c)$ is obtained from ${\mathfrak{b}}(a,0,c)$
by multiplying it by some $v$-colored curves. In other words, we need to
show that ${\mathfrak{g}}(a,0,c)$ is divisible by $h^{\lfloor \frac 1 2 ( -e +\sum_i a_i) \rfloor }$ in ${\mathcal{S}_p}({\Sigma})$.
The colored graph representing ${\mathfrak{g}}(a,0,c)$ contains $g$ lollipops, where we
mean by lollipop a subgraph consisting of a colored loop
joined
to a stick at a single point. We now perform
a local change at the $g$ lollipops and obtain a new skein element $w(a,0,c),$
as is done in Figure \ref{w}. To do this, we can view the $i$th lollipop in ${\mathfrak{g}}(a,0,c)$ as $a_i$ arcs starting and ending at the
idempotent $f_{2a_i}$ and looping around the $i$th hole. Here we refer to the usual device of representing an idempotent by a rectangle. Specifically, the $j$th arc connects the $j$th and the
$(2a_i-j+1)$th
input of the idempotent. The modification from ${\mathfrak{g}}(a,0,c)$ to $w(a,0,c)$
consists of inserting a braid so that the $a_i$ arcs now connect consecutive inputs on the $f_{2a_i}$. By a well-known property of the idempotent $f_{2a_i}$, this changes a given skein element only
by multiplying it
by some power of $A$.
Hence it is enough to show that $w(a,0,c)$ is divisible by
the above-mentioned power of $h$.
\begin{figure}[h]
\includegraphics[width=3in]{graphy/w.eps}
\caption{a lollipop in ${\mathfrak{g}}(a,0,c)$, its expansion as a skein diagram, and the corresponding portion of $w(a,0,c)$} \label{w}
\end{figure}
We can draw $w(a,0,c)$ slightly differently: we also insert $a_i$ idempotents $f_2$ (as is also done in Figure~\ref{w}). This does not change the skein element at all (the $f_2$'s are ``absorbed'' by the $f_{2a_i}$ by another well-known property of the idempotent). We refer to this last operation as {\em spawning off} $f_2$'s from $f_{2a_i}$. If $\ell({\Sigma})$ is nonempty, we also spawn
off $e$
idempotents $f_2$
from the $f_{2e}$ on the trunk in the diagram for $w(a,0,c)$.
There is a 3-ball in $H_g$ whose boundary intersects $w(a,0,c)$ in
$e+ \sum_i a_i $ points colored $2$ corresponding to the idempotents we spawned off.
By the $3$-ball lemma~\ref{skeind3},
we can replace this part of the diagram with a linear combination over ${\mathcal{O}}$ of diagrams each with
$\lfloor \frac 1 2 ( e+ \sum_i a_i ) \rfloor $ arcs and Y's colored $2$. At most $e$ of
these arcs and Y's meet the trunk. Thus the rest of the arcs and Y's are completed to eyeglasses and tripods in the larger diagram. Hence $w(a,0,c)$ is also represented as a linear combination over ${\mathcal{O}}$ of diagrams
with
\[
\lfloor \tfrac 1 2 ( e+ \textstyle\sum_i a_i ) \rfloor - e=
\lfloor \tfrac 1 2 (-e + \textstyle\sum_i a_i ) \rfloor
\]
eyeglasses and tripods.
As each eyeglass and tripod is divisible by $h$ this gives exactly the required divisibility of $w(a,0,c)$. This completes the proof.
\end{proof}
\section{\ The lollipop lemma.}
The lollipop lemma (Theorem~\ref{ll} below)
will be used to show that the elements ${\mathfrak{b}}^\sharp(a,b,c)$ lie in
${{\mathcal{S}}_p^{\sharp}}({\Sigma})$.
Recall that a $v$-graph in
a $3$-manifold is a usual colored graph together with some banded
link components colored $v$, where $v=h^{-1} (2+z)$
(see Section~\ref{skein.sec1}).
As before, a subgraph in a $v$-graph
consisting of a colored loop meeting an edge (called the stick)
is called a lollipop. A stick can take part in two lollipops. The color
of the loop edge is at least one half of the
stick color, but is allowed to be greater than that. Note that we have imposed no condition on how the loop edge of a lollipop is embedded into the ambient manifold.
\begin{thm}[Lollipop lemma] \label{ll} Let $L$ be a $v$-graph in $S^3$
containing $N$
lollipops with stick colors $2a_1, 2a_2, \ldots, 2a_N$.
Then its evaluation $\langle L \rangle$ is divisible by $h^{\lceil \frac 1 2 {\sum_{i=1}^N a_i} \rceil }.$
\end{thm}
Here, the evaluation $\langle L\rangle $ of a skein element $L$ in
$S^3$ is defined to be the ordinary Kauffman bracket.
Thus the empty link evaluates to $1$ and a
zero-framed
unknotted loop colored one
evaluates to $ -\zeta_p -\zeta_p^{-1}$, for example. As shown in \cite{GMW}, the evaluation
of a $v$-graph in $S^3$ lies in ${\mathcal{O}}$.
Let us first prove a special case. A {\em basic lollipop} is a lollipop where the stick is colored $2$ and the loop edge is colored $1$.
\begin{lem}\label{6.1} The evaluation of a $v$-graph in $S^3$ with $N$ basic
lollipops is divisible by $h^{\lceil N/ 2 \rceil}$ in ${\mathcal{O}}$.
\end{lem}
\begin{proof} If we have a basic lollipop whose loop spans a disk which misses the rest of the $v$-graph $L$ then
$\langle L\rangle =0$, by a well-known property of the Jones-Wenzl
idempotents. More generally, if $L$ intersects a 3-ball only in a basic lollipop, then $\langle L\rangle =0$.
Let us show that we may reduce to this case by changing crossings using the relation
\[
\begin{minipage}{0.2in}\includegraphics[width=0.2in]{graphy/L+.eps}\end{minipage}
-
\begin{minipage}{0.2in}\includegraphics[width=0.2in]{graphy/L-.eps}\end{minipage}
= (A -A^{-1})
\left( \hskip.1in
\begin{minipage}{0.2in}\includegraphics[width=0.2in]{graphy/L0} \end{minipage}
-
\begin{minipage}{0.2in} \includegraphics[width=0.2in]{graphy/Linf.eps} \end{minipage}
\hskip.1in \right)
\]
at the price of error terms which are divisible by $h^{\lceil N/ 2
\rceil}$.
(In this relation, the strands are ordinary strands, {\em i.e.,} colored
$1$.)
By expanding all idempotents except one at the stick of each lollipop, we may assume that all arcs of the graph are colored $1$. When we change a crossing between a $v$-colored link component and a strand colored $1$,
the skein relation shows that the error term
is given by
$h^{-1}(A- A^{-1} ) =-A^{-1}$
times the difference of two evaluations of $v$-graphs, each of which
satisfies again the hypothesis of the lemma
but
has
one fewer $v$-colored link component. By induction it follows that we can assume that all $v$-colored link components are unlinked from the rest of the graph. Since a $v$-colored link evaluates to something integral, this shows that we may assume that there are no $v$-colored link components.
Next if we change crossings between a loop colored $1$ of a basic lollipop and any arc colored $1$, the error term is
$-A^{-1}h$
times a $v$-graph with at least $N-2$ basic lollipops. By induction on the number of lollipops the error term satisfies the conclusion of the lemma. Hence we may reduce to the case where one basic lollipop is not linked with anything.
\end{proof}
\begin{rem}\label{altv}{\em In a similar way, we can give a new proof
of the fact that the evaluation
of a $v$-graph lies in ${\mathcal{O}}$. Namely, one checks that it is true for $v$-colored unlinks and then reduces to this case
by
changing crossings, observing that error terms always lie in ${\mathcal{O}}$.
}\end{rem}
\begin{proof}[Proof of Theorem \ref{ll}] We may expand the loop idempotents over ${\mathcal{O}}$ into terms where each stick idempotent has $a_i$ arcs that meet it at $2a_i$ points along an edge of the idempotent. Here we refer again to the device of representing an idempotent by a rectangle. As in the previous section,
we insert
a braid
in the strands that meet this edge so that the arcs now join points to their immediate neighbors. This only changes the evaluation by a power of $A.$
Then without changing the evaluation
one can spawn from each $2a_i$-stranded idempotent $a_i$
2-stranded idempotents. Now each term is a $v$-graph with $\sum_{i=1}^N a_i$ basic lollipops
so the result follows from Lemma~\ref{6.1}.
\end{proof}
\section{\ Proof that ${\mathfrak{b}}^\sharp(a,b,c)\in {{\mathcal{S}}_p^{\sharp}}({\Sigma})$.}\label{pB}
In this section, we use the Lollipop Lemma to show that the elements
${\mathfrak{b}}^\sharp(a,b,c)$ defined
in Section~\ref{sec.thm} lie in ${{\mathcal{S}}_p^{\sharp}}({\Sigma})$. Let us first review
the definition of this module in more detail.
The hermitian form $( \ ,\ )_{{\Sigma}}$ is simply a rescaling of the
canonical hermitian form $\langle \ ,\ \rangle_{{\Sigma}}$ on $V_p({\Sigma})$
(see Section~\ref{skein.sec1}):
$$(x,y )_{{\Sigma}} = {\mathcal{D}}^{\beta_0({\Sigma})} \langle x,y\rangle_{{\Sigma}}~.$$
Here
$\beta_0({\Sigma})$ is the number of components of ${\Sigma}$, and ${\mathcal{D}}$ is
defined in Eq.~(\ref{defD}) in Section~\ref{sec.intro}. In fact, ${\mathcal{D}}$ is the inverse of the quantum invariant $\langle S^3
\rangle_p$.
When restricted to ${\mathcal{S}_p}({\Sigma}) \subset V_p({\Sigma})$, the form $(
\ ,\ )_{{\Sigma}}$ takes values in ${\mathcal{O}}$. This follows from the definition
of ${\mathcal{S}_p}({\Sigma})$ and the integrality result for quantum invariants of
closed
connected manifolds
\cite{Mu2,MR}. (See \cite{G,GMW} for more details.)
\begin{de} {\em The lattice ${{\mathcal{S}}_p^{\sharp}}({\Sigma})$ is the dual lattice to
${\mathcal{S}_p}({\Sigma})$ with respect to the hermitian form
$( \ ,\ )_{{\Sigma}}$.
}\end{de}
Now assume ${\Sigma}$ is connected. If $M$ has boundary ${\Sigma}$, and $M$ is
also connected,
let $K_v(M, \ell({\Sigma}))$ denote the ${\mathcal{O}}$-submodule of $K(M, \ell({\Sigma}))$
spanned by $v$-graphs in $M.$
Proposition~\ref{vgraph} is equivalent to saying that for connected
$M$ and ${\Sigma}$, the natural map $$K_v(M, \ell({\Sigma})) \rightarrow {\mathcal{S}_p}({\Sigma})$$
is surjective.
\begin{rem}{\em The inclusions $K_{\mathcal{O}}(M, \ell({\Sigma})) \subset K_v(M, \ell({\Sigma}))\subset K(M, \ell({\Sigma}))$ are strict in general.
}\end{rem}
The following Proposition~\ref{testingSsharp} is a useful device for
describing ${{\mathcal{S}}_p^{\sharp}}({\Sigma})$. Let $H$ and $H'$ be two complementary handlebodies in $S^3$ with
$\partial H = {\Sigma} = -\partial H'$. We define a bilinear
form
\begin{center}
$( ( \ ,\ ) )_{H, H'}: K(H,\ell({\Sigma})) \times K(H',-\ell({\Sigma}))
\ \longrightarrow \ {\mathcal{O}}[\frac 1 h]$
\end{center}
by
\[ ((L, L'))_{H, H'} =
\langle L \cup_{\ell({\Sigma})} L'\rangle. \]
Here $\langle \ \rangle$ is the usual Kauffman bracket of a colored
graph in $S^3$. When restricted to
$K_v(H,\ell({\Sigma})) \times K_v(H',-\ell({\Sigma}))$, this form takes values in ${\mathcal{O}}$.
\begin{prop} \label{testingSsharp}
A skein element
$x \in K(H,\ell({\Sigma}))$ represents an element of ${{\mathcal{S}}_p^{\sharp}}({\Sigma})$ if and only if $((x,x'))_{H,H'} \in {\mathcal{O}}$ for all $x' \in K_v(H',
-\ell({\Sigma}))$.
\end{prop}
\begin{proof} The proof is essentially a standard argument in the skein-theoretical
approach to TQFT's.
The skein element $x$ defines the vector $[(H,x)]\in V_p({\Sigma})$.
The hermitian form
is given by
\begin{equation}\label{revers}
([(H,x)],[(H,y)])_{\Sigma}=
{\mathcal{D}} \langle (H\cup_{\Sigma} -H, x \cup_{\ell({\Sigma})}
y^\star
)\rangle_p~,
\end{equation}
where $y^\star$ denotes the skein element in $-H$ obtained from $y$ by reversing orientation.
This can also be viewed as a bilinear pairing of
$[(H,x)] \in V_p({\Sigma})$ with $[(-H,y^\star)] \in V_p(-{\Sigma})$.
But
$[(-H,y^\star)]$
can also be represented by some skein element $y'\in K(H',-\ell({\Sigma}))$. Thus
the hermitian pairing (\ref{revers}) is equal to $$
{\mathcal{D}} \langle (H\cup_{\Sigma} H', x \cup_{\ell({\Sigma})} y')\rangle_p
=\langle x \cup_{\ell({\Sigma})} y'\rangle
=((x,y'))_{H,H'}$$ since $H\cup_{\Sigma} H'=S^3$. Now
$$[(H,y)] \in {\mathcal{S}_p}({\Sigma})\ \iff \ \
[(-H,y^\star)]
\in {\mathcal{S}_p}(-{\Sigma}) \
\iff
\ y'\in
K_v(H',-\ell({\Sigma}))~.$$
Thus one has
$[(H,x)]
\in {{\mathcal{S}}_p^{\sharp}}({\Sigma})$ if and only if $((x,y'))_{H,H'} \in {\mathcal{O}}$ for all $y' \in K_v(H',-\ell({\Sigma}))$.
\end{proof}
We are now ready to prove the following.
\begin{prop}\label{8.4} The elements ${\mathfrak{b}}^\sharp (a, b,c)$ lie in
${{\mathcal{S}}_p^{\sharp}}({\Sigma}).$
\end{prop}
\begin{proof} Embed the handlebody $H_g$ into $S^3$ so that its exterior is also a handlebody
$H_g'$.
By Proposition \ref{testingSsharp}, we only need to show that
if ${\mathfrak{b}}^\sharp (a,b,c)$ in $H_g$ is completed by any $v$-graph in
$H_g'$,
then the evaluation of the result lies in ${\mathcal{O}}$. However, we
may isotope the
$v$-colored curves
$v_i^{b_i}$ ($i=1,\ldots, g$)
in
${\mathfrak{b}}^\sharp (a,b,c)$
across ${\Sigma}.$ Thus we only need to prove this statement in the case $b=0.$ In other words, we only need to show that if ${\mathfrak{g}}(a,0,c)$ is completed by any $v$-graph in the complementary handlebody then the evaluation of the result is
divisible by $h^{ \lceil \frac 1 2 ( e+\sum_i a_i ) \rceil }$ in ${\mathcal{O}}.$
If $e=0$, this follows immediately from the Lollipop Lemma~\ref{ll}, as in
any completion of ${\mathfrak{g}}(a,0,c)$ we have the $g$ lollipops of ${\mathfrak{g}}(a,0,c)$
with the sum of the stick half-colors equal to $\sum_i a_i.$
If $e>0$, then in any completion of ${\mathfrak{g}}(a,0,c)$ we first modify
the part
in $H_g'$ which is glued to
the trunk edge
of ${\mathfrak{g}}(a,0,c)$
in a now familiar way.
To do this, we represent the idempotent $f_{2e}$ on the trunk edge diagrammatically by the usual rectangle
and expand all the idempotents
in the glued-on part of the graph
into strands
colored $1$. In every term of this expansion, we have $e$ arcs that start and end at the
``bottom'' of this
rectangle.
Inserting an appropriate braid we can arrange that the arcs now join consecutive points on the ``bottom'' of
the rectangle, and then we spawn off
$e$ idempotents $f_2$. As before, we can compensate for these changes by changing the coefficients in the expansion over ${\mathcal{O}}$ by some powers of $A$. In each term, we now see $e$ basic lollipops with stick color $2$
below the trunk edge.
Using these
and the $g$ lollipops in ${\mathfrak{g}}(a,0,c)$ itself, we see that any completion of
${\mathfrak{g}}(a,0,c)$ can be expanded over ${\mathcal{O}}$ as a
linear combination
of $v$-graphs containing
lollipops with the sum of the stick
half-colors equal to
$e+\sum_i a_i.$
By the Lollipop Lemma~\ref{ll} we are done.
\end{proof}
\section{\ Index counting
}\label{proofsharp}
Let ${\mathbb{B}}$ be the
${\mathcal{O}}$-lattice in $V_p({\Sigma})$ spanned by the ${\mathfrak{b}}(a,b,c).$ Let ${\mathbb{B}}'$ be the
${\mathcal{O}}$-lattice
spanned by the ${\mathfrak{b}}^\sharp(a,b,c)$. \footnote{We don't use the notation ${\mathbb{B}}^\sharp$ because it is not clear {\em a priori} that ${\mathbb{B}}'$ is the dual lattice of ${\mathbb{B}}$.} We know ${\mathbb{B}} \subset {\mathcal{S}_p}({\Sigma})$
and ${\mathbb{B}}' \subset {{\mathcal{S}}_p^{\sharp}}({\Sigma})$ by
Propositions~\ref{62} and~\ref{8.4}.
In this section, we will use order ideals to show that both inclusions are equalities (Theorem~\ref{9.3}).
This will prove Theorems~\ref{basis} and~\ref{sharpbasis}. We will sometimes abbreviate
${\mathcal{S}_p}({\Sigma})$ by ${{\mathcal{S}}}$ and ${{\mathcal{S}}_p^{\sharp}}({\Sigma})$ by ${{\mathcal{S}}^{\sharp}}$.
Let ${\mathbb{G}}$ denote the
${\mathcal{O}}$-lattice
spanned by the small graph basis ${\mathcal{G}}$ of $V_p({\Sigma})$, {\em i.e.}
the elements ${\mathfrak{g}}(a,b,c)$.
One has ${\mathbb{G}} \subset {\mathbb{B}}\subset {\mathbb{B}}'$ and all three of these lattices are free.
If $L \subset L'$ is an inclusion of free lattices of the same rank over ${\mathcal{O}}$, define their {\em index} $[L':L]$ to be the determinant up to units in ${\mathcal{O}}$ of a matrix representing a basis for $L$ in terms of a basis for $L'.$
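To illustrate the definition in the simplest case: if $L'$ is free of rank $r$ over ${\mathcal{O}}$ and $L = hL'$, then a basis of $L$ is expressed in terms of a basis of $L'$ by $h$ times the identity matrix, so
\[ [L' : hL'] = \det( h\, \mathrm{Id}_r ) = h^r \]
up to units in ${\mathcal{O}}$.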
\begin{prop} \label{numer}$[{\mathbb{B}}:{\mathbb{G}}] [{\mathbb{B}}' :{\mathbb{G}}] = [{\mathbb{G}}^\sharp : {\mathbb{G}}]$~.
\end{prop}
\begin{proof}
By \cite[5.4]{GMW}, we have
\[ [{\mathbb{G}}^\sharp: {\mathbb{G}}]= h ^{g(d-1)\dim(V_p({\Sigma}))}. \]
Actually \cite[5.4]{GMW} only deals with the case $\ell({\Sigma})= \emptyset,$ but this equation holds in general by the same argument.
By construction: $[{\mathbb{B}} : {\mathbb{G}}]=h^{N}$ where
$N= \sum_{(a,b,c)}\left( {\sum_i b_i + \lfloor \frac 1 2 ( -e +\sum_i a_i ) \rfloor}\right)$.
Also by
construction: $[{\mathbb{B}}' : {\mathbb{G}}]=h^{N'}$ where
$N'= \sum_{(a,b,c)} \left( {\sum_i b_i + \lceil \frac 1 2 ( e+ \sum_i a_i ) \rceil}\right).$ Of course, $e$ depends on $(a,b,c)$ in these expressions. Luckily, $e$ drops out when we look at $N+N'$.
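Indeed, writing $S = \sum_i a_i$ for a fixed $(a,b,c)$, one has the elementary identity
\[ \left\lfloor \tfrac 1 2 (S-e) \right\rfloor + \left\lceil \tfrac 1 2 (S+e) \right\rceil = S, \]
valid whether $S-e$ is even or odd (note that $S-e$ and $S+e$ have the same parity). Hence the contribution of each $(a,b,c)$ to $N+N'$ is $2\sum_i b_i + \sum_i a_i$, independently of $e$.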
We need to show $N+N'= g(d-1)\dim(V_p({\Sigma})),$ or
\[ \sum_{(a,b,c)} \left( 2\sum_i b_i + \sum_i a_i \right) = g(d-1)\sum_{(a,b,c)} 1. \]
It suffices to prove this with $a$ and $c$ held constant:
\[ \sum_{b}\ \left( 2\sum_{i=1}^g b_i + \sum_{i=1}^g a_i \right) = g(d-1)\sum_{b} 1. \]
Recall that the set of $b$ being summed over is the set of $(b_1, \ldots, b_g)$
such that $0 \le b_i \le d-1-a_i$, and the cardinality of this set is
$\prod_{j=1}^g (d-a_j)$.
So it suffices to show: \[ 2\sum_{b}\ \sum_{i=1}^g b_i + \left( \sum_{i=1}^g a_i\right) \prod_{j=1}^g (d-a_j) =
g (d-1) \prod_{j=1}^g (d-a_j) \]
Noting that $g (d-1) = \sum_{i=1}^g ( d-1),$ we only need to show
\[ 2\sum_{b}\ \sum_{i=1}^g b_i = \sum_{i=1}^g (d-1-a_i) \prod_{j=1}^g (d-a_j) \]
This equation expresses the fact that the average value of $\sum_{i=1}^g b_i$
over the index of summation $b$ is $ \frac 1 2 \sum _i (d-1-a_i). $ To see this, we consider the involution on the index set $\{b\}$ defined by sending $b_i$ to $d-1 -a_i -b_i$ for each $i.$
On each orbit the average value of $\sum_{i=1}^g b_i$
is $ \frac 1 2 \sum _i (d-1-a_i).$ This establishes the identity.
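As a check in the simplest case $g=1$, the last identity reads
\[ 2 \sum_{b_1=0}^{d-1-a_1} b_1 = (d-1-a_1)(d-a_1), \]
which holds since $\sum_{b_1=0}^{m} b_1 = \frac 1 2 m(m+1)$ with $m = d-1-a_1$.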
\end{proof}
If $M$ is a finitely generated torsion module over ${\mathcal{O}}$, we denote its order ideal by
${\mathfrak{o}}(M).$ If $L \subset L'$ is an inclusion of free lattices of the same rank over ${\mathcal{O}}$, then ${\mathfrak{o}}(L'/L)$ is the
principal
ideal
in ${\mathcal{O}}$
generated by
$[L':L]$.
We will need the following result (here $\overline{M}$ denotes the conjugate module of $M$).
\begin{prop} \label{S/G} ${\mathbb{G}}^\sharp / {{\mathcal{S}}^{\sharp}} \cong \overline{{{\mathcal{S}}}/{\mathbb{G}}}$~.
\end{prop}
Proposition~\ref{S/G} is a general fact about inclusions of
(not necessarily free) lattices of the same rank
equipped with a non-singular hermitian form over a Dedekind domain. We were unable to find this fact stated in the literature, but it is not hard to deduce it from
\cite[Theorem 4.12]{R}. We omit the details.
The proof of Theorems~\ref{basis} and~\ref{sharpbasis} is completed with the following
\begin{thm}\label{9.3} $ {\mathcal{S}_p}({\Sigma})= {\mathbb{B}}$ and $ {{\mathcal{S}}_p^{\sharp}}({\Sigma}) ={\mathbb{B}}'$. In particular ${\mathcal{S}_p}({\Sigma})$ and ${{\mathcal{S}}_p^{\sharp}}({\Sigma})$ are free.
\end{thm}
\begin{proof}
As ${\mathbb{G}} \subset {\mathbb{B}} \subset {{\mathcal{S}}}$, we have that
\begin{equation} \label{E1}
{\mathfrak{o}}({{\mathcal{S}}}/{\mathbb{G}}) = {\mathfrak{o}}({{\mathcal{S}}}/{\mathbb{B}}) {\mathfrak{o}}({\mathbb{B}}/{\mathbb{G}}).
\end{equation}
Similarly ${\mathbb{G}} \subset {\mathbb{B}}' \subset {{\mathcal{S}}^{\sharp}}$ gives us:
\begin{equation} \label{E2}
{\mathfrak{o}}({{\mathcal{S}}^{\sharp}}/{\mathbb{G}}) = {\mathfrak{o}}({{\mathcal{S}}^{\sharp}}/{\mathbb{B}}') {\mathfrak{o}}({\mathbb{B}}'/{\mathbb{G}}).
\end{equation}
\begin{align*}
{\mathfrak{o}}({\mathbb{G}}^\sharp / {\mathbb{G}}) & = {\mathfrak{o}}({\mathbb{G}}^\sharp / {{\mathcal{S}}^{\sharp}}) {\mathfrak{o}}({{\mathcal{S}}^{\sharp}}/{\mathbb{G}}) \\
& = \overline{ {\mathfrak{o}}({{\mathcal{S}}} / {\mathbb{G}} ) } {\mathfrak{o}}({{\mathcal{S}}^{\sharp}}/{\mathbb{G}}) \text{ by Proposition \ref{S/G} } \\
& = \overline{ {\mathfrak{o}}({{\mathcal{S}}}/{\mathbb{B}})} \overline{ {\mathfrak{o}}({\mathbb{B}}/{\mathbb{G}})} {\mathfrak{o}}({{\mathcal{S}}^{\sharp}}/{\mathbb{B}}') {\mathfrak{o}}({\mathbb{B}}'/{\mathbb{G}})
\text{ by Equations (\ref{E1}) and (\ref{E2})} \\
& = \overline{ {\mathfrak{o}}({{\mathcal{S}}}/{\mathbb{B}})} {\mathfrak{o}}({\mathbb{B}}/{\mathbb{G}}) {\mathfrak{o}}({{\mathcal{S}}^{\sharp}}/{\mathbb{B}}') {\mathfrak{o}}({\mathbb{B}}'/{\mathbb{G}}) \text{ as $h$ is
self-conjugate
up to units} \\
& = \overline{ {\mathfrak{o}}({{\mathcal{S}}}/{\mathbb{B}})} {\mathfrak{o}}({{\mathcal{S}}^{\sharp}}/{\mathbb{B}}'){\mathfrak{o}}({\mathbb{G}}^\sharp / {\mathbb{G}}) \text{ by Proposition \ref{numer}}.
\end{align*}
Comparing the first and last lines of this computation, and cancelling the nonzero ideal ${\mathfrak{o}}({\mathbb{G}}^\sharp / {\mathbb{G}})$ in the Dedekind domain ${\mathcal{O}}$, we get $\overline{ {\mathfrak{o}}({{\mathcal{S}}}/{\mathbb{B}})}\, {\mathfrak{o}}({{\mathcal{S}}^{\sharp}}/{\mathbb{B}}') = (1)$. Thus $ {\mathfrak{o}}({{\mathcal{S}}}/{\mathbb{B}})= {\mathfrak{o}}({{\mathcal{S}}^{\sharp}}/{\mathbb{B}}')= (1), $ hence ${{\mathcal{S}}}= {\mathbb{B}}$ and ${{\mathcal{S}}^{\sharp}}={\mathbb{B}}'.$
\end{proof}
\section{The necessity of a trunk}
In this section we give an example which explains why we have insisted that lollipop trees have trunks. Let ${\Sigma}$ be the boundary of a regular neighborhood of a lollipop tree $G.$ A graph-like basis for $G$ is a basis for ${\mathcal{S}_p}({\Sigma})$ that is obtained from the small graph basis for ${V_p}({\Sigma})$ by ``peeling'' off any excess $b$ from each loop color of a small basis vector, inserting a compensating $(z+2)^b$ and then rescaling the resulting elements by some factor.
Now let ${\Sigma}$ denote a surface of genus two with two banded points colored $2$. Consider the graphs pictured in Figure \ref{gg}.
\begin{figure}[h]
\includegraphics[width=2.1in]{graphy/gg.eps}
\caption{ The graphs $G$ and $G'$} \label{gg}
\end{figure}
While $G$ is a lollipop tree, $G'$ is not, as it does not have a trunk. Still, admissible colorings with small colors on $G'$ also yield a basis for $V_p({\Sigma})$, and the notion of graph-like basis makes sense for $G'$. However, we have the following
\begin{thm} ${{\mathcal{S}}}_5({\Sigma})$ does not have a graph-like basis associated to the graph $G'$.
\end{thm}
\begin{proof}
Figure \ref{gs} illustrates two basis elements from $G$ and two from $G'.$ In this figure, the loops are all colored $1$ and the edges all colored $2$.
\begin{figure}[h]
\includegraphics[width=4in]{graphy/gs2.eps}
\caption{ The basis elements ${\mathfrak{g}}_0$, ${\mathfrak{g}}_1$, ${\mathfrak{g}}_0^\prime$ and ${\mathfrak{g}}_1^\prime$} \label{gs}
\end{figure}
Using the fact that
${\mathcal{S}_p}$
of a 2-sphere with four points colored $2$ is two-dimensional, one can see that
\begin{align} {\mathfrak{g}}_0 &= a {\mathfrak{g}}_0^\prime+ b {\mathfrak{g}}_1^\prime \label{e4} \\
{\mathfrak{g}}_1 &= \alpha {\mathfrak{g}}_0^\prime+ \beta {\mathfrak{g}}_1^\prime, \label{e5}
\end{align}
for some $a,b,\alpha, \beta \in {\mathcal{O}}$. The exact values of these are not important, but we will need to use that $a,b\in {\mathcal{O}}^\star$, which is easily checked.
As $G$ is a lollipop tree, we know that \begin{equation} h^{-1} {\mathfrak{g}}_0 \in {{\mathcal{S}}}_5({\Sigma}) \text{ and } h^{-1} {\mathfrak{g}}_1 \notin {{\mathcal{S}}}_5({\Sigma})~. \label{e6}
\end{equation}
Let us try to express $h^{-1} {\mathfrak{g}}_0$ in terms of a graph-like basis for $G'$,
assuming such a basis exists. We can modify the elements ${\mathfrak{g}}_0^\prime$ and ${\mathfrak{g}}_1^\prime$ only by a rescaling, as no peeling off is possible for these elements. Hence the required expression for $h^{-1} {\mathfrak{g}}_0$ would be
$$h^{-1} {\mathfrak{g}}_0 = a h^{-1} {\mathfrak{g}}_0^\prime+ bh^{-1} {\mathfrak{g}}_1^\prime~.$$ Since $a$ and $b$ are units of ${\mathcal{O}}$, both $h^{-1} {\mathfrak{g}}_0^\prime$ and $h^{-1} {\mathfrak{g}}_1^\prime$ would have to exist in ${{\mathcal{S}}}_5({\Sigma})$.
But then Eq.~(\ref{e5}) implies that $h^{-1}{\mathfrak{g}}_1$ would also exist in ${{\mathcal{S}}}_5({\Sigma})$, which contradicts the second half of Statement~(\ref{e6}). Thus ${{\mathcal{S}}}_5({\Sigma})$ does not have a graph-like basis for $G'$.
\end{proof}
\section{Integral modular functor}
The collection of
${\mathcal{O}}[\frac 1 p]$-modules
$V_p({\Sigma})$ associated to surfaces with
colored points form a modular functor (see {\em
e.g.} \cite{T}). This means
that the
$V_p({\Sigma})$ satisfy certain
axioms describing their
behaviour when
surfaces are cut into pieces in various ways. These axioms reflect
the semi-simplicity of the underlying modular category \cite{T}.
The integral lattices ${\mathcal{S}_p}({\Sigma})$
should be a guiding example towards the notion of an ``integral
modular functor''. But their behaviour under such ``cut and paste''
operations is considerably more
complicated. In some sense, the reason is that the integral theory is
no longer semi-simple.
We hope to develop these ideas elsewhere. Here, we limit ourselves
to describing
one axiom in a particularly simple situation, where a
rescaling of the usual modular functor axiom suffices.
Let $\gamma$ be a separating simple closed curve on the connected
surface ${\Sigma}$. The curve $\gamma$ cuts ${\Sigma}$ into two subsurfaces
${\Sigma}'$ and ${\Sigma}''$. If ${\Sigma}$ has colored points, we assume $\gamma$
misses these points.
For simplicity, we
also assume that the sum of the colors on ${\Sigma}'$ (hence also on ${\Sigma}''$) is even.
Let ${\Sigma}'_i$ and ${\Sigma}''_i$ be the subsurfaces
with their boundary capped off by a disk containing an additional $i$-colored
banded point.
Then, by \cite[1.14]{BHMV2}, the obvious gluing map along the disks induces an isomorphism
\begin{equation}\label{IF1}
\bigoplus_{i=0}^{d-1} V_p({\Sigma}'_{2i}) \otimes V_p({\Sigma}''_{2i})
\mapright\approx V_p({\Sigma})~.
\end{equation}
This induces an injective map
\begin{equation}\label{IF2}
\bigoplus_{i=0}^{d-1} {\mathcal{S}_p}({\Sigma}'_{2i}) \otimes {\mathcal{S}_p}({\Sigma}''_{2i})
\longrightarrow
{\mathcal{S}_p}({\Sigma})~,
\end{equation}
whose image is a free sublattice of ${\mathcal{S}_p}({\Sigma})$.
\begin{thm}\label{11.1} If at most one of ${\Sigma}'$ and ${\Sigma}''$ has colored points,
then there exist bases ${\mathcal{B}}'_{2i}$ of ${\mathcal{S}_p}({\Sigma}'_{2i})$, ${\mathcal{B}}''_{2i}$ of
${\mathcal{S}_p}({\Sigma}''_{2i})$, and ${\mathcal{B}}$ of ${\mathcal{S}_p}({\Sigma})$, such that ${\mathcal{B}}$ is a
rescaling of the tensor product basis $\sqcup_i {\mathcal{B}}_{2i}'\otimes {\mathcal{B}}_{2i}''$. (Here,
$\sqcup_i {\mathcal{B}}_{2i}'\otimes {\mathcal{B}}_{2i}''$ is viewed as a basis of $V_p({\Sigma})$ via the map \eqref{IF1}.)
More precisely, there are functions $\varphi_i:{\mathcal{B}}'_{2i} \times {\mathcal{B}}_{2i}''
\rightarrow
{\mathbb{Z}}_{\geq 0}$ such that
\begin{equation} \label{IF3}
{\mathcal{B}} = \sqcup_i \{ h^{-\varphi_i({\mathfrak{b}}',{\mathfrak{b}}'')} {\mathfrak{b}}'\otimes {\mathfrak{b}}'' \,\vert\, {\mathfrak{b}}'\in {\mathcal{B}}_{2i}',
{\mathfrak{b}}'' \in {\mathcal{B}}_{2i}'' \}~.
\end{equation}
\end{thm}
\begin{proof} Write ${\Sigma}$ as the boundary of a handlebody $H$ such
that the curve $\gamma$ bounds a disk $D$ in $H$. This disk cuts
$H$ in handlebodies $H'$ and $H''$ with boundary
${\Sigma}'\cup D$ and ${\Sigma}''\cup D$.
We can find a
graph-like basis ${\mathcal{B}}$ of
${\mathcal{S}_p}({\Sigma})$ with respect to a lollipop tree $G$ in $H$ such that $G$
meets the disk $D$ transversely in one edge,
and
such that, moreover, $G'=G\cap H'$ is a lollipop tree in $H'$, and
$G''=G\cap H''$ is a lollipop tree in $H''$. Here we use the
hypothesis that at most one of ${\Sigma}'$ and ${\Sigma}''$ has colored points.
Taking for ${\mathcal{B}}_{2i}'$ and ${\mathcal{B}}_{2i}''$ the graph-like bases associated to $G'$
and $G''$, with color $2i$ on the edge meeting the disk $D$, the result follows.
\end{proof}
It is easy to write down an explicit formula for the
rescaling factors $\varphi_i :{\mathcal{B}}_{2i}' \times {\mathcal{B}}_{2i}'' \rightarrow
{\mathbb{Z}}_{\geq 0}$ in this situation; this is left to the reader.
\begin{rem} {\em (i) If both ${\Sigma}'$ and ${\Sigma}''$ have colored points, then
in general
there is no basis of ${\mathcal{S}_p}({\Sigma})$ which is just a
rescaling
of a tensor product basis as in \eqref{IF3}. This follows from the
example in the previous section.
(ii) If $\gamma$ is non-separating, the modular functor axiom also needs
more modification than just a rescaling.}
\end{rem}
\section{Disconnected surfaces and the tensor product axiom
}\label{discon.sec}
Let ${\Sigma}$ and ${\Sigma}'$ be two closed surfaces.
We have compatible natural maps ${\mathcal{S}_p}({\Sigma}) \otimes {\mathcal{S}_p}({\Sigma}') \rightarrow {\mathcal{S}_p}({\Sigma} \sqcup {\Sigma}')$ and ${V_p}({\Sigma}) \otimes {V_p}({\Sigma}') \rightarrow {V_p}({\Sigma} \sqcup {\Sigma}')$
specified by sending
$[(M,L)] \otimes [(M',L')]$ to $[(M \sqcup M',L \sqcup L')]$ where $(M,L)$ and $(M',L')$ are $3$-manifolds with colored links whose boundary is ${\Sigma}$ and ${\Sigma}'$, respectively.
The map ${V_p}({\Sigma}) \otimes {V_p}({\Sigma}') \rightarrow {V_p}({\Sigma} \sqcup {\Sigma}')$
is an isomorphism \cite{BHMV2}, and this property is called the tensor
product axiom.
We are interested in the extent that ${\mathcal{S}_p}({\Sigma}) \otimes
{\mathcal{S}_p}({\Sigma}') \rightarrow {\mathcal{S}_p}({\Sigma} \sqcup {\Sigma}')$ is also an isomorphism.
It follows from Corollary \ref{tpa}
below
that this map is an isomorphism if
${\Sigma}$ and ${\Sigma}'$ have no colored points. If there are colored points,
the image of this
map
may only be a sublattice of ${\mathcal{S}_p}({\Sigma} \sqcup
{\Sigma}')$. But we shall see that a basis of
${\mathcal{S}_p}({\Sigma} \sqcup {\Sigma}')$ can always be obtained as a rescaling of a
tensor product basis.
The following definition allows for a convenient expression of the
needed rescaling.
Let ${\mathcal{B}}$ be
the
basis of ${\mathcal{S}_p}({\Sigma})$,
for a connected surface ${\Sigma}$,
associated to some
lollipop tree $G$ in a handlebody $H$.
We define the {\em oddity} $\varepsilon({\mathfrak{b}})$ of a
basis element ${\mathfrak{b}}\in{\mathcal{B}}$ as follows. \footnote{A different notion of
parity will be defined in Section~\ref{torsion.sec}.} Let $2a_1, \ldots, 2a_k$ be the
colors of the stick edges, and let $2e$ be the trunk color of
${\mathfrak{b}}$. Let $A({\mathfrak{b}}) =\sum_i a_i $, and
$e({\mathfrak{b}}) =e$. Define
$\varepsilon({\mathfrak{b}})=1$ if $e({\mathfrak{b}})=d-1$ and $A({\mathfrak{b}}) -
e({\mathfrak{b}})$ is odd. Otherwise, $\varepsilon({\mathfrak{b}})=0$.
\begin{thm}\label{DC}
Let ${\Sigma}_1, \ldots, {\Sigma}_n$ be connected surfaces, and
let ${\mathcal{B}}_1, \ldots,
{\mathcal{B}}_n$ be graph-like bases of ${\mathcal{S}_p}({\Sigma}_1), \ldots, {\mathcal{S}_p}({\Sigma}_n)$. Then
the set
\begin{equation} \label{DC1}
{\mathcal{B}} = \{ h^{-\lfloor \frac 1 2 \sum_i \varepsilon({\mathfrak{b}}_i) \rfloor }
{\mathfrak{b}}_1\otimes \cdots \otimes {\mathfrak{b}}_n \,\vert\, {\mathfrak{b}}_i\in {\mathcal{B}}_i \}
\end{equation}
is a basis of
${\mathcal{S}_p}(\sqcup_i {\Sigma}_i)$.
\end{thm}
\begin{rem}{\em If ${\Sigma}$ is connected, the lattice ${\mathcal{S}_p}({\Sigma})$ has
basis vectors with non-trivial oddity if and only if ${\Sigma}$ has
genus at least two and the sum of the colors of the banded points
on ${\Sigma}$ is at least $2(d-1)=p-3$. For example, the rightmost diagram in
Figure~\ref{exa} represents an element with non-trivial oddity
in ${{\mathcal{S}}}_5$ of a genus $2$ surface with one colored point colored $2$.}
\end{rem}
\begin{cor} \label{tpa} The natural map ${\mathcal{S}_p}({\Sigma}) \otimes {\mathcal{S}_p}({\Sigma}') \rightarrow {\mathcal{S}_p}({\Sigma} \sqcup {\Sigma}')$
is always injective with a cokernel isomorphic to a direct sum of
cyclic modules ${\mathcal{O}}/h{\mathcal{O}}$. It is an isomorphism if one of the surfaces has no colored points.
\end{cor}
\begin{rem}{\em In fact, the map ${\mathcal{S}_p}({\Sigma})
\otimes {\mathcal{S}_p}({\Sigma}') \rightarrow {\mathcal{S}_p}({\Sigma} \sqcup {\Sigma}')$ has a
non-trivial cokernel if and
only if both surfaces have a connected component whose
${\mathcal{S}_p}$-lattice has basis vectors with non-trivial oddity. This
follows from Theorem \ref{DC}.}
\end{rem}
Since the tensor product axiom for ${\mathcal{S}_p}$ does not always hold for surfaces with colored points, it seems that any proof of the tensor product axiom for surfaces without colored points must ultimately depend on detailed knowledge of bases for ${\mathcal{S}_p}$ for connected surfaces.
For the proof of Theorem \ref{DC}, we need
to state the analog of Proposition \ref{vgraph} for disconnected
surfaces.
The proof is the same as in the connected case.
A little terminology is convenient. Let $\pi_0({\Sigma})$ be the set of connected components of ${\Sigma}$. We say
that a $3$-manifold $M$ with boundary ${\Sigma}$ represents a partition $P$ of
$\pi_0({\Sigma})$ if $P$ coincides with the partition given by the fibers of the natural map
$\pi_0({\Sigma})\rightarrow \pi_0(M)$.
\begin{prop} \label{vgraph2} If ${\Sigma}$ is not connected, choose any collection $\mathcal M$ of
$3$-manifolds $M$ with boundary ${\Sigma}$ such that every partition of
$\pi_0({\Sigma})$ is represented by some $M\in \mathcal M$. Then ${\mathcal{S}_p}({\Sigma})$ is
generated over ${\mathcal{O}}$ by elements in $V_p({\Sigma})$ represented by $v$-graphs in
the $3$-manifolds $M\in \mathcal M$, where, as before, the
$v$-graphs
must meet the boundary in the colored points of
${\Sigma}$.
\end{prop}
\begin{proof}[Proof of Theorem \ref{DC} in the case $n=2$]
We suppose ${\Sigma}_i$ is the boundary of $H_{g_i}.$
We wish to find a basis for $ {\mathcal{S}_p}({\Sigma}_1 \sqcup {\Sigma}_2)$, which we recall is defined as a subset of ${V_p}({\Sigma}_1 \sqcup {\Sigma}_2)$; the latter we identify with ${V_p}({\Sigma}_1) \otimes {V_p}({\Sigma}_2)$. Thus any element of $ {\mathcal{S}_p}({\Sigma}_1 \sqcup {\Sigma}_2)$ may be described as a linear combination over ${\mathcal{O}}[\frac 1 h]$ of elements of the form ${\mathfrak{b}}_1 \otimes {\mathfrak{b}}_2.$ Here ${\mathfrak{b}}_i$ is defined with respect to some lollipop tree $G_i$ in $H_{g_i}$.
Let $\#$ denote the operation of (interior)
connected sum of connected $3$-manifolds. The surface ${\Sigma}_1\sqcup {\Sigma}_2$ is the boundary of $H_{g_1} \# H_{g_2}.$ According to Proposition \ref{vgraph2},
${\mathcal{S}_p}({\Sigma}_1 \sqcup {\Sigma}_2)$ is generated by $v$-graphs in $H_{g_1} \sqcup H_{g_2}$ and $v$-graphs in $H_{g_1} \# H_{g_2}$. The $v$-graphs in $H_{g_1} \sqcup H_{g_2}$
generate the subspace of ${\mathcal{S}_p}({\Sigma}_1 \sqcup {\Sigma}_2)$ that has as basis the set of ${\mathfrak{b}}_1 \otimes {\mathfrak{b}}_2$ associated to small colorings of $G_1$ and $G_2$.
We want to see if $v$-graphs in $H_{g_1} \# H_{g_2}$ can give any elements not in this subspace.
Let $\#_{\partial}$ denote the operation of boundary
connected sum of connected $3$-manifolds with connected boundaries.
The manifold $H_{g_1} \# H_{g_2}$ is homeomorphic to $H_{g_1} \#_\partial H_{g_2}$ with a 2-handle attached along the curve which bounds the disk used to form the boundary connected sum. We indicate such curves in figures as dotted curves.
Thus dotted curves always indicate that a 2-handle should be attached to a handlebody along the curve. The handlebodies are not actually drawn in our figures but are regular neighborhoods of the lollipop trees. We identify $H_{g_1} \#_\partial H_{g_2}$ with $H_{g_1 +g_2}$.
\begin{figure}[h]
\includegraphics[width=2.5in]{graphy/graft2b.eps}
\caption{ Lollipop trees $G_1$ in $H_{g_1}$, $G_2$ in $H_{g_2}$ and $G$ in $H_{g_1} \# H_{g_2}.$ We say $G$ is the grafting of $G_1$ and $G_2$ along their trunks (only the parts near the trunks are shown, and all loop edges are above the trunks in this figure).} \label{graft}
\end{figure}
A $v$-graph in $H_{g_1} \# H_{g_2}$ can be isotoped into $H_{g_1 +g_2}$ where we know a graph-like basis for ${\mathcal{S}_p}(\partial (H_{g_1 +g_2}))$ associated to the lollipop tree $G$ obtained by the grafting of $G_1$ and $G_2$ along their trunks. See Figure \ref{graft}.
If
${\mathfrak{b}}$ is such a basis element of ${\mathcal{S}_p}(\partial (H_{g_1+g_2}))$, we let $\hat {\mathfrak{b}}$ denote the element that ${\mathfrak{b}}$ represents in ${\mathcal{S}_p}({\Sigma}_1 \sqcup {\Sigma}_2 ).$ Our task is to compute $\hat {\mathfrak{b}}$.
Suppose the coloring of $G$ that gives ${\mathfrak{b}}$ is as shown on the right of Figure \ref{graft}. Note that the 2-sphere composed of the disk spanning the dotted curve and the core of the 2-handle which is attached along this dotted curve can be isotoped to intersect the colored lollipop tree in either two points colored $2e_1$ and $2e'_1$ or two points colored $2e_2 $ and $2e'_2$. As $V_p$ of a 2-sphere with two points with distinct even colors is the zero module, we see that $\hat {\mathfrak{b}}$ is zero unless $e_1 = e'_1$ and $e_2 = e'_2$. In this case, we may consider the small colorings of the $G_i$ which agree with the colorings of $G$ except on the edges already labelled $2e_i$ in Figure \ref{graft}. Let ${\mathfrak{b}}_i$ denote the basis elements of ${\mathcal{S}_p}({\Sigma}_i)$ indexed by these small colorings of $G_i.$
In this case,
$\hat {\mathfrak{b}}$ is up to units $ h^E\ {\mathfrak{b}}_1 \otimes {\mathfrak{b}}_2$ where
\begin{center} $E= { d-1 - \lfloor \frac 1 2 ( A({\mathfrak{b}}) -e({\mathfrak{b}})) \rfloor+
\lfloor \frac 1 2 ( A({\mathfrak{b}}_1) -e({\mathfrak{b}}_1) ) \rfloor +
\lfloor \frac 1 2 ( A({\mathfrak{b}}_2) -e({\mathfrak{b}}_2)) \rfloor
}.$\end{center}
To see this one uses fusion and the fact that $V_p$ of a $2$-sphere with a single point with a nonzero even color is the zero module. (When applying fusion, one encounters certain coefficients, but these are products of quantum integers and their inverses, hence units in ${\mathcal{O}}$.)
One also must take into account the powers of $h$ in the definitions of ${\mathfrak{b}}$, ${\mathfrak{b}}_1$, and ${\mathfrak{b}}_2.$ The $d-1$
term comes from the surgery axiom (S1) of \cite{BHMV2} \footnote{In \cite{BHMV2}, ${\mathcal{D}}$ is denoted by $\eta^{-1}$.} which implies
that attaching a 1-handle to a $3$-manifold has the effect of multiplying its quantum invariant by ${\mathcal{D}}$, which we recall is, up to units, $h^{d-1}.$
One checks that if $E$ is negative, it must be $-1$, and this happens precisely when
$e({\mathfrak{b}})=0$, $e({\mathfrak{b}}_1)=e({\mathfrak{b}}_2)=d-1$, and $A({\mathfrak{b}}_1) \equiv A({\mathfrak{b}}_2) \equiv d \pmod{2},$ in other words, when $\varepsilon({\mathfrak{b}}_1)= \varepsilon({\mathfrak{b}}_2)=1$. Conversely, if $\varepsilon({\mathfrak{b}}_1)= \varepsilon({\mathfrak{b}}_2)=1$ we can find a ${\mathfrak{b}}$ such that $\hat {\mathfrak{b}}$ is $h^{-1} {\mathfrak{b}}_1\otimes {\mathfrak{b}}_2$ up to units.
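Indeed, under these conditions the floors in the formula for $E$ evaluate exactly: $A({\mathfrak{b}})=A({\mathfrak{b}}_1)+A({\mathfrak{b}}_2)\equiv 2d\equiv 0 \pmod 2$ and $e({\mathfrak{b}})=0$, while each $A({\mathfrak{b}}_i)-e({\mathfrak{b}}_i)=A({\mathfrak{b}}_i)-(d-1)$ is odd, so that
$$E= d-1 - \tfrac 1 2 \big( A({\mathfrak{b}}_1)+A({\mathfrak{b}}_2)\big) + \tfrac 1 2 \big(A({\mathfrak{b}}_1)-d\big) + \tfrac 1 2 \big(A({\mathfrak{b}}_2)-d\big) = -1~.$$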
Thus the elements $h^{-\lfloor \frac 1 2 (\varepsilon({\mathfrak{b}}_1) + \varepsilon({\mathfrak{b}}_2)) \rfloor }
{\mathfrak{b}}_1\otimes {\mathfrak{b}}_2$ indeed form a basis of ${\mathcal{S}_p}({\Sigma}_1\sqcup {\Sigma}_2)$.
\end{proof}
\begin{figure}[h]
\includegraphics[width=2.5in]{graphy/ex0.eps}
\caption{} \label{exa}
\end{figure}
\begin{ex}\label{ex}{\em Let ${\Sigma} = \partial H_2$ with one point colored $2$. On the left of Figure \ref{exa} is pictured a skein element in $H_2 \# H_2$ representing an element of ${{\mathcal{S}}}_5({\Sigma} \sqcup {\Sigma})$ which is divisible by $h^2$. On the right of the figure is a skein element in $H_2 \sqcup H_2$ representing an element of ${{\mathcal{S}}}_5({\Sigma} \sqcup {\Sigma})$ which (by the argument in the proof above) is $h$ times the element on the left (up to units) and thus is divisible by $h$ in ${{\mathcal{S}}}_5({\Sigma} \sqcup {\Sigma})$. This latter element is of the form ${\mathfrak{b}}\otimes{\mathfrak{b}}$ where ${\mathfrak{b}}\in {{\mathcal{S}}}_5({\Sigma})$ is not divisible by $h$. This shows that the cokernel of the homomorphism ${{\mathcal{S}}}_5({\Sigma}) \otimes {{\mathcal{S}}}_5({\Sigma}) \rightarrow {{\mathcal{S}}}_5({\Sigma} \sqcup {\Sigma})$ contains a non-trivial element which is annihilated by $h$. Of course, one has $\varepsilon({\mathfrak{b}})=1$.} \end{ex}
\begin{proof}[Proof of Theorem \ref{DC} for $n > 2$]
As before, we suppose ${\Sigma}_i$ is the boundary of $H_{g_i}.$
Given a partition of $\pi_0( \sqcup_i {\Sigma}_i)$, we can take a sequence of internal connected sums of the $H_{g_i}$ together to form a manifold, say $H$, that represents the given partition. Using partitions into at most pairs and singletons, it follows from the $n=2$ case that rescaled tensor products of the form $$h^{-\lfloor \frac 1 2 \sum_i \varepsilon({\mathfrak{b}}_i) \rfloor }
{\mathfrak{b}}_1\otimes \cdots \otimes {\mathfrak{b}}_n \ \ \ \ \ \ ({\mathfrak{b}}_i\in {\mathcal{B}}_i)$$
lie in ${\mathcal{S}_p}(\sqcup_i {\Sigma}_i)$. Thus we only have to show that the other partitions give rise only to elements in the ${\mathcal{O}}$-span of these.
By induction on $n$, it is enough to consider the case where $H$ is connected.
For this, we apply the same strategy as used in the proof for $n=2$.
Let $G_i$ be lollipop trees in $H_{g_i}$. We identify the boundary connected sum of all the $H_{g_i}$ with $H_{g}$ where $g= \sum g_i$. The boundary of $H_{g}$ is $\#_i {\Sigma}_i$, the connected sum of all the ${\Sigma}_i.$ Our $H$ is the interior connected sum $\# H_g$ with boundary $\sqcup_i {\Sigma}_i$; it is obtained from $H_g$ by adding $n-1$ $2$-handles along the $n-1$ connecting circles in $\#_i {\Sigma}_i$.
\begin{figure}[h]
\includegraphics[width=1.5in]{graphy/3Si5.eps}
\caption{Grafting three lollipop trees. If $n>3$, one generalizes this pattern in the obvious way.} \label{3graft}
\end{figure}
We graft the $G_i$ together as indicated in Figure \ref{3graft} to form a lollipop tree $G$ in $H_{g}$. As in the case $n=2$,
if ${\mathfrak{b}}$ is a basis element of ${\mathcal{S}_p}(\#_i {\Sigma}_i)$ given by a small coloring of $G$, we denote by $\hat {\mathfrak{b}}$ the induced element of ${\mathcal{S}_p}(\sqcup_i {\Sigma}_i)$ {\em via} the inclusion $H_g \subset H$. Our task is to compute the space spanned by $v$-graphs in $H$, but since they all can be isotoped into $H_g$, we only have to compute the span of the elements
$\hat {\mathfrak{b}}$ as ${\mathfrak{b}}$ ranges over the basis associated to the small colorings of $G$.
Again as in the case $n=2$, $\hat {\mathfrak{b}}$ will be zero unless $n-1$ specified pairs of edges have the same color. If these colors agree, then we may define a related small coloring on each $G_i$. Let ${\mathfrak{b}}_i$ denote the
basis vector for ${\mathcal{S}_p}({\Sigma}_i)$ associated to this coloring. As above,
if $\hat {\mathfrak{b}}$ does not represent zero,
$\hat {\mathfrak{b}} $ is, up to units of ${\mathcal{O}}$, given by
$ h^E {\mathfrak{b}}_1\otimes \cdots \otimes{\mathfrak{b}}_n$, where
\begin{align*}
&E= (n-1)(d-1) - \lfloor \frac 1 2 ( A({\mathfrak{b}}) -e({\mathfrak{b}})) \rfloor + \sum_i \lfloor
\frac 1 2 ( A({\mathfrak{b}}_i) -e({\mathfrak{b}}_i) ) \rfloor \\
& \ge (n-1)(d-1) - \frac 1 2 ( A({\mathfrak{b}}) -e({\mathfrak{b}})) - \frac n 2 +
\sum_i \frac 1 2 ( A({\mathfrak{b}}_i) -e({\mathfrak{b}}_i) ) \\
& = (n-1)(d-1) - \frac n 2 + \frac 1 2 ( e({\mathfrak{b}}) - \sum_i e({\mathfrak{b}}_i) )
\text{, as $A({\mathfrak{b}})=\sum_i A({\mathfrak{b}}_i)$} \\
& \ge (n-1)(d-1) - \frac n 2 - \frac {n (d-1)} 2
\text{, as $e({\mathfrak{b}}) \ge 0$, and $e({\mathfrak{b}}_i) \le d-1
$} \\
& = \frac {(n-2)(d-2) } {2} -1.
\end{align*}
If $p >5$, then $d>2,$ and $E$ is non-negative (since $n\geq 3$ here). Thus $v$-graphs in $H$ do not give rise to any new elements of
${\mathcal{S}_p}(\sqcup_i {\Sigma}_i)$.
We are reduced to the case that $p=5$
and $d=2$,
when $E$ perhaps can be negative but is never less than $-1$. One has $E=-1$ exactly when
all $\geq$ signs in the computation above are equalities. This requires in particular that $A({\mathfrak{b}}_i) -e({\mathfrak{b}}_i)$ must be odd for all $i$, and that $e({\mathfrak{b}}_i) =d-1$ for all $i$. In other words, we must have $\varepsilon({\mathfrak{b}}_i)=1$ for all $i$.
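Conversely, these conditions do force $E=-1$: with $d=2$, $e({\mathfrak{b}})=0$, $e({\mathfrak{b}}_i)=1$, and every $A({\mathfrak{b}}_i)$ even, all the floors evaluate exactly, and
$$E= (n-1) - \tfrac 1 2 \sum_i A({\mathfrak{b}}_i) + \sum_i \tfrac 1 2 \big(A({\mathfrak{b}}_i)-2\big) = (n-1) - n = -1~.$$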
From this we conclude that, for $p=5$, connect-summing $n\geq 3$ handlebodies together may force an element of the form ${\mathfrak{b}}_1\otimes \cdots \otimes {\mathfrak{b}}_n$ to be divisible by $h$; but in every such case, the same divisibility already arises from a partition of these handlebodies into at most pairs and singletons. This completes the proof.
\end{proof}
\begin{rem}{\em It might have occurred to the reader that perhaps we should have defined ${\mathcal{S}_p}$ of a disconnected surface by a tensor product formula, rather than as in Definition \ref{defS}. This would have definite drawbacks when we consider how cobordisms act in the ${\mathcal{S}_p}$-theory. Here is an example.
Suppose we let ${\Sigma}= \partial H_2$ with a point colored $p-3$ and let $C$ be the cobordism from ${\Sigma} \# {\Sigma}$ to ${\Sigma} \sqcup {\Sigma}$ constructed by adding a 2-handle as in Example \ref{ex}. The TQFT associates to $C$ a linear map $Z_p(C)$ from $V_p({\Sigma} \# {\Sigma})$ to $V_p({\Sigma} \sqcup {\Sigma} ).$ If we had defined ${\mathcal{S}_p}({\Sigma} \sqcup {\Sigma} )$ to be ${\mathcal{S}_p}({\Sigma}) \otimes {\mathcal{S}_p}({\Sigma})$,
then $Z_p(C)$ would not map ${\mathcal{S}_p}({\Sigma} \# {\Sigma} )$ into ${\mathcal{S}_p}({\Sigma} \sqcup {\Sigma} ).$
On the other hand, with Definition \ref{defS} of ${\mathcal{S}_p}$, for any cobordism $C$ from, say ${\Sigma}'$ to ${\Sigma}''$ such that $\beta_0(C,{\Sigma}'')=0,$ the induced map $Z_p(C)$ will map
${\mathcal{S}_p}({\Sigma} ')$ into ${\mathcal{S}_p}({\Sigma}'').$ Such cobordisms are called {\em targeted} in \cite{G}.
More generally, we have that if $C$ is any cobordism from ${\Sigma}'$ to ${\Sigma}''$, then $Z_p(C)$ maps ${\mathcal{S}_p}({\Sigma}')$ into ${\mathcal{D}}^{-\beta_0(C,{\Sigma}'')} {\mathcal{S}_p}({\Sigma}'').$
We remark that our theory is not half-projective with parameter ${\mathcal{D}}$ in the sense of Kerler \cite{K}. Nor does it seem possible to make it so by rescaling.} \end{rem}
\section{Extra
structure, mapping
class group representations, and ${\mathcal{S}^+_p}({\Sigma})$}\label{mcg.sec}
The TQFTs we have been studying require that surfaces and $3$-manifolds be equipped with some extra structure in order to avoid having a framing anomaly.
Until now this extra structure did not play any essential role in our arguments. Thus a detailed discussion was not required.
However some of the results in the following sections are improved by using the refined theory $V^+_p$ and its integral version ${\mathcal{S}^+_p}$ which are defined and discussed in \cite{G,GMW}. For this reason, we now discuss the extra structures which we employ.
For simplicity, we will focus on the case where ${\Sigma}$ is connected in this and the next section (but we allow ${\Sigma}$ to have colored points).
As in \cite{G,QG}, we
follow the method originally developed by Walker \cite{W} and Turaev \cite{T}. This approach equips $3$-manifolds with integer weights, and surfaces with Lagrangian subspaces of their first
real
homology. The cobordisms from ${\Sigma}$ to ${\Sigma}$ which are
mapping cylinders
form a central extension, denoted
$\widetilde \Gamma({\Sigma})$,
of the mapping class group. Forgetting the
weight on these cobordisms
defines a projection onto the
ordinary mapping class group $\Gamma({\Sigma}).$ The kernel of this homomorphism is the group of integers ${\mathbb{Z}}$.
The extended mapping class group $\widetilde \Gamma({\Sigma})$ acts on ${\mathcal{S}_p}({\Sigma})$ preserving the form $(\ ,\ )_{\Sigma}.$ Thus $\widetilde \Gamma({\Sigma})$ acts on ${{\mathcal{S}}_p^{\sharp}}({\Sigma})$ as well. We note that elements of $\widetilde \Gamma({\Sigma})$ which map to the identity of $\Gamma({\Sigma})$ act on ${\mathcal{S}_p}({\Sigma})$ by multiplication by some power of $\zeta_p$,
if $p \equiv -1 \pmod{4} $, and by some power of $\zeta_{4p}$, if $p \equiv 1 \pmod{4}$.
By a standard handlebody, we mean one equipped with the weight $0$. The boundary of a standard handlebody $H$ will be equipped with the Lagrangian given by the kernel of the map induced on first real homology by the inclusion of the boundary into $H$.
In \cite[ \S 7]{G}, the notion of an even cobordism is defined. The even
cobordisms from ${\Sigma}$ to ${\Sigma}$ which are
mapping cylinders form an index two subgroup of $\widetilde \Gamma({\Sigma})$ denoted $\widetilde \Gamma({\Sigma})^+$ which still surjects onto $\Gamma({\Sigma})$.
In the remainder of this section, we assume $p \equiv 1 \pmod{4}$. Even cobordisms were used in \cite{G} to reduce coefficients from ${\mathcal{O}}={\mathbb{Z}}[\zeta_{4p}]$ to ${\mathbb{Z}}[\zeta_{p}]$ which we will denote by ${\mathcal{O}}^+$. One obtains free ${\mathcal{O}}^+$-modules ${\mathcal{S}^+_p}({\Sigma})$ so that $${\mathcal{S}^+_p}({\Sigma})\otimes_{{\mathcal{O}}^+}{\mathcal{O}}={\mathcal{S}_p}({\Sigma})~.$$ The module ${\mathcal{S}^+_p}({\Sigma})$ is again generated by $v$-graphs in a standard handlebody $H$, but coefficients are now required to lie in ${\mathcal{O}}^+$. (Here we are using the fact that a standard handlebody $H$ when viewed as a morphism from the empty set to ${\Sigma}$ is even.) Thus
our ${\mathfrak{b}}(a,b,c)$ also define elements of ${\mathcal{S}^+_p}({\Sigma})$. Moreover the set of ${\mathfrak{b}}(a,b,c)$ associated to small colorings forms a basis for ${\mathcal{S}^+_p}({\Sigma})$ by the same proof.
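The equality ${\mathbb{Z}}[\zeta_{4p}]={\mathbb{Z}}[\zeta_p,i]$ behind this coefficient reduction comes from $\gcd(4,p)=1$: choosing integers $u,v$ with $4u+pv=1$ gives $\zeta_{4p}=(\zeta_{4p}^4)^u(\zeta_{4p}^p)^v=\zeta_p^u\, i^v$. The following small Python sketch (an illustration only, not part of the argument) checks this numerically for $p=5$, where one may take $u=-1$, $v=1$.

```python
import cmath

# gcd(4, p) = 1 gives integers u, v with 4u + p*v = 1, hence
# zeta_{4p} = (zeta_{4p}^4)^u * (zeta_{4p}^p)^v = zeta_p^u * i^v.
p, u, v = 5, -1, 1
assert 4 * u + p * v == 1

zeta_p = cmath.exp(2j * cmath.pi / p)
zeta_4p = cmath.exp(2j * cmath.pi / (4 * p))

# zeta_5^{-1} * i = e^{2 pi i (1/4 - 1/5)} = e^{2 pi i / 20} = zeta_20.
assert abs(zeta_p ** u * 1j ** v - zeta_4p) < 1e-12
```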
There is a sesquilinear form
\[ (\ ,\ )^+_{\Sigma}: {\mathcal{S}^+_p}(\Sigma)
\times
{\mathcal{S}^+_p}(\Sigma) \rightarrow \mathcal{O}^+\]
obtained by multiplying the form $(\ ,\ )_{\Sigma}$
by $i^{\delta ({\Sigma})},$ where ${\delta ({\Sigma})}$ is zero or one
depending on whether the genus of ${\Sigma}$ is even or odd. The reason why this form takes values in ${\mathcal{O}}^+$ is explained in \cite[Remark 9.6]{GMW}. Note that $(\ ,\ )^+_{\Sigma}$ is hermitian or skew-hermitian, depending on the parity of the genus.
We define ${{\mathcal{S}}_p^{+\sharp}}({\Sigma})$ in the same way as ${{\mathcal{S}}_p^{\sharp}}({\Sigma})$ but using the form $(\ ,\ )^+_{\Sigma}$. Then the ${\mathfrak{b}}^\sharp(a,b,c)$ form a basis for ${{\mathcal{S}}_p^{+\sharp}}({\Sigma}).$
The even extended mapping class group $\widetilde \Gamma({\Sigma})^+$ acts on ${\mathcal{S}^+_p}({\Sigma})$ preserving the form $(\ ,\ )^+_{\Sigma},$ and therefore also acts on ${{\mathcal{S}}_p^{+\sharp}}({\Sigma})$. Elements of $\widetilde \Gamma({\Sigma})^+$ which map to the identity of $\Gamma({\Sigma})$ act by multiplication by some power of $\zeta_p.$
\section{Associated finite torsion modules}\label{torsion.sec}
The extended mapping class group $\widetilde \Gamma({\Sigma})$ acts on ${\mathcal{S}_p}({\Sigma})/h^N {\mathcal{S}_p}({\Sigma})$ ($N\geq 1$) and also on ${{\mathcal{S}}_p^{\sharp}}({\Sigma})/{\mathcal{S}_p}({\Sigma}).$ These are finitely generated torsion ${\mathcal{O}}$-modules. The hermitian form ${(\ , \ )}_{\Sigma}$ on ${\mathcal{S}_p}({\Sigma})$ induces in the obvious way an ${\mathcal{O}}/h^N {\mathcal{O}}$-valued form on ${\mathcal{S}_p}({\Sigma})/h^N {\mathcal{S}_p}({\Sigma})$ and an
${\mathcal{O}}(\frac 1 h) /{\mathcal{O}}$-valued form on ${{\mathcal{S}}_p^{\sharp}}({\Sigma})/{\mathcal{S}_p}({\Sigma})$, and $\widetilde \Gamma({\Sigma})$ acts preserving these forms. The structure of these modules and forms follows easily from our bases. Let us describe them in some interesting cases. If $p \equiv 1 \pmod{4}$, we will mainly look at the modules coming from the refined theory ${\mathcal{S}^+_p}$.
Let $G$ be a lollipop tree and consider the basis vectors ${\mathfrak{b}}(a,b,c)$
for ${\mathcal{S}_p}({\Sigma})$, where $(a,b,c)$ runs through the small colorings of $G$.
Clearly, ${\mathcal{S}_p}({\Sigma})/h^N {\mathcal{S}_p}({\Sigma})$ is a free
${\mathcal{O}}/h^N {\mathcal{O}}$-module with the induced basis.
\begin{prop} One has an orthogonal decomposition $${\mathcal{S}_p}({\Sigma}) = \bigoplus_{a,c}
{\mathcal{S}_p}({\Sigma})_{G;a,c}$$ with respect to the form ${(\ , \ )}_{\Sigma}$, where ${\mathcal{S}_p}({\Sigma})_{G;a,c}$ is the ${\mathcal{O}}$-span of the basis vectors ${\mathfrak{b}}(a,b,c)$ where $b$ varies so that $(a,b,c)$ is a small coloring of $G$.
\end{prop}
\begin{proof} This follows from the fact that the small graph basis vectors ${\mathfrak{g}}(a,b,c)$ are an orthogonal basis of $V_p({\Sigma})$ \cite[Theorem 4.11]{BHMV2}.
\end{proof}
Clearly, this also induces an orthogonal decomposition of the induced form on ${\mathcal{S}_p}({\Sigma})/h^N {\mathcal{S}_p}({\Sigma})$.
Let us now look at ${{\mathcal{S}}_p^{\sharp}}({\Sigma})/{\mathcal{S}_p}({\Sigma})$. One has ${\mathfrak{b}}^\sharp(a,b,c)=h^{-n(a,c)}{\mathfrak{b}}(a,b,c)$, where $n(a,c)= e$, if $\sum a_i$ is even, and $n(a,c)= e+1$, if $\sum a_i$ is odd (here, as always, $e$ is the trunk half-color of $(a,b,c)$). One has an orthogonal decomposition
$${{\mathcal{S}}_p^{\sharp}}({\Sigma})/{\mathcal{S}_p}({\Sigma})=
\bigoplus_{(a,c)}{{\mathcal{S}}_p^{\sharp}}({\Sigma})_{G;a,c}/{\mathcal{S}_p}({\Sigma})_{G;a,c}~,$$ and each summand is a free ${\mathcal{O}}/h^{n(a,c)}{\mathcal{O}}$-module.
If $p \equiv 1 \pmod{4}$, similar statements hold for the ${\mathcal{O}}^+$-modules
${\mathcal{S}^+_p}({\Sigma})/h^N {\mathcal{S}^+_p}({\Sigma})$ ($N\geq 1$) and ${{\mathcal{S}}_p^{+\sharp}}({\Sigma})/{\mathcal{S}^+_p}({\Sigma}).$
We will say $(a,b,c)$ is an {\em odd (even)} small coloring if $\sum a_i$ is odd (respectively even).
If $p \equiv -1 \pmod{4},$ then ${\mathcal{O}}={\mathbb{Z}}[\zeta_p]$ and ${\mathcal{O}}/h {\mathcal{O}} = {\mathbb{F}}_p$, the finite field with $p$ elements. If $p \equiv 1 \pmod{4}$, then ${\mathcal{O}}={\mathbb{Z}}[\zeta_{4p}]={\mathbb{Z}}[\zeta_{p},i]$ and ${\mathcal{O}}/h {\mathcal{O}} = {\mathbb{F}}_p [i]$.
In this case, we mainly consider ${\mathcal{O}}^+/h {\mathcal{O}}^+$ which is again equal to ${\mathbb{F}}_p$.
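As a quick numerical illustration (not needed for the arguments), these residue field identifications rest on the fact that $N(1-\zeta_p)=\Phi_p(1)=p$, so that $h$ generates a prime ideal of norm $p$. The following Python sketch verifies this norm computation for a few small primes via the complex embeddings.

```python
import cmath

def norm_of_h(p):
    # N(1 - zeta_p) = prod_{k=1}^{p-1} (1 - zeta_p^k) = Phi_p(1) = p,
    # so Z[zeta_p]/(1 - zeta_p) is the field with p elements.
    prod = complex(1, 0)
    for k in range(1, p):
        prod *= 1 - cmath.exp(2j * cmath.pi * k / p)
    return round(prod.real)  # the norm is a rational integer

assert all(norm_of_h(p) == p for p in (5, 7, 11, 13))
```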
\begin{prop}\label{14.2} If $p \equiv -1 \pmod{4}$, the hermitian form ${(\ , \ )}_{\Sigma}$ induces a symmetric form on the ${\mathbb{F}}_p$-vector space ${\mathcal{S}_p}({\Sigma})/h{\mathcal{S}_p}({\Sigma})$. If $p \equiv 1 \pmod{4}$, the $(-1)^g$-hermitian form $(\ , \ )^+_{\Sigma}$ induces a $(-1)^g$-symmetric form on the ${\mathbb{F}}_p$-vector space ${\mathcal{S}^+_p}({\Sigma})/h{\mathcal{S}^+_p}({\Sigma})$.
(Here $g$ is the genus of ${\Sigma}$.)
In both cases, one has an action of the (ordinary) mapping class group $\Gamma({\Sigma})$ preserving these forms.
\end{prop}
\begin{proof} The symmetry properties of the induced forms follow from the fact that the conjugation on ${\mathbb{Z}}[\zeta_p]$ induces the trivial involution on ${\mathbb{F}}_p$. The action of the (even) extended mapping class group descends to the ordinary mapping class group $\Gamma({\Sigma})$ because $\zeta_p$ acts as the identity on
${\mathbb{F}}_p$.
\end{proof}
\begin{rem}\label{14.3}{\em If $\gamma \subset {\Sigma}$ is a separating simple closed curve, and all the colored points of ${\Sigma}$ lie on one side of $\gamma$, then the associated Dehn twist $T_\gamma\in \Gamma({\Sigma})$ acts trivially on the ${\mathbb{F}}_p$-vector spaces above. This follows from Theorem~\ref{11.1}. Indeed, $T_\gamma$ acts diagonally in the basis given by that theorem, and its eigenvalues are the twist
coefficients $\mu_{2e}$ (see {\em e.g.} \cite[Remark 7.6(ii)]{BHMV2}) which are powers of $\zeta_p$, hence congruent to
$1 \pmod{h}$. This argument is due to Kerler.}\end{rem}
The forms given by Proposition~\ref{14.2} are singular in general. Of course
the radical of such a form is preserved by the mapping class group. In this way we obtain, as well, a subrepresentation of characteristic $p$ given by the radical, and a quotient representation of characteristic $p.$ The quotient representation preserves the induced non-singular inner product space structure. We denote the radical of the form by $\rad_p({\Sigma}).$
For the rest of this section, we consider only the important special case that ${\Sigma}$ has no colored points. Then the radical is spanned by the images of the ${\mathfrak{b}}(a,b,c)$ associated to odd small colorings. The quotient representation has dimension given by the number of even small colorings.
Observe that ${{\mathcal{S}}_p^{\sharp}}({\Sigma})/{\mathcal{S}_p}({\Sigma})$ is also the free ${\mathcal{O}}/h {\mathcal{O}}$-module on the odd small colorings of $G.$
\begin{prop}\label{14.4} If $p \equiv -1 \pmod{4}$, there is an isomorphism between the ${\mathbb{F}}_p$-vector spaces
${{\mathcal{S}}_p^{\sharp}}({\Sigma})/{\mathcal{S}_p}({\Sigma})$ and $\rad_p({\Sigma}) $ intertwining $\Gamma({\Sigma})$ representations. If $p \equiv 1 \pmod{4}$, there is an isomorphism between
${{\mathcal{S}}_p^{+\sharp}}({\Sigma})/{\mathcal{S}^+_p} ({\Sigma})$ and $\rad_p({\Sigma}) $ intertwining the $\Gamma({\Sigma})$ representations.
\end{prop}
\begin{proof} In the first case, consider the composition of the map from ${{\mathcal{S}}_p^{\sharp}}({\Sigma})$ to
${\mathcal{S}_p}({\Sigma})$ given by multiplication by $h$ followed by the quotient map to
${\mathcal{S}_p}({\Sigma})/h{\mathcal{S}_p}({\Sigma}).$ This is onto $\rad_p({\Sigma}) $, and has ${\mathcal{S}_p}({\Sigma})$ as kernel. The second case is seen similarly.
\end{proof}
Recall that the hermitian form ${(\ , \ )}_{\Sigma}$ on ${\mathcal{S}_p}({\Sigma})$
induces a non-singular ${\mathcal{O}}(\frac 1 h) /{\mathcal{O}}$-valued form on ${{\mathcal{S}}_p^{\sharp}}({\Sigma})/{\mathcal{S}_p}({\Sigma})$.
In our special situation, it suffices to multiply this form by $h$ to get an
${\mathcal{O}} /h{\mathcal{O}}$-valued form on ${{\mathcal{S}}_p^{\sharp}}({\Sigma})/{\mathcal{S}_p}({\Sigma})$. We denote this new form on ${{\mathcal{S}}_p^{\sharp}}({\Sigma})$ by $h.{(\ , \ )}_{\Sigma}$.
\begin{prop} If $p \equiv -1 \pmod{4}$, the form $h. {(\ , \ )}_{\Sigma}$ induces a skew-symmetric form on the ${\mathbb{F}}_p$-vector space ${{\mathcal{S}}_p^{\sharp}}({\Sigma})/{\mathcal{S}_p}({\Sigma})$. If $p \equiv 1 \pmod{4}$, the form $h. (\ , \ )^+_{\Sigma}$ induces a $(-1)^{g+1}$-symmetric form on the ${\mathbb{F}}_p$-vector space ${{\mathcal{S}}_p^{+\sharp}}({\Sigma})/{\mathcal{S}^+_p}({\Sigma})$. In both cases, the forms are non-degenerate, and one has an action of the mapping class group $\Gamma({\Sigma})$ preserving these forms.
\end{prop}
\begin{proof} The proof is basically the same as for Proposition~\ref{14.2}. One just needs to observe that $ \bar h = 1- \zeta_p^{-1}= - \zeta_p^{-1} h $. Assume $p \equiv -1 \pmod{4}$. For $x,y\in {{\mathcal{S}}_p^{\sharp}}({\Sigma})$ one has $$h.(y,x)_{\Sigma}= h. \overline{(x,y)_{\Sigma}}= - \zeta_p\, \overline{h.(x,y)_{\Sigma}}$$ as elements of ${\mathcal{O}}$. As before, $\zeta_p$ acts trivially on ${\mathbb{F}}_p$ and the induced conjugation is the identity. Only the minus sign remains. It is the reason why one gets a skew-symmetric form on ${{\mathcal{S}}_p^{\sharp}}({\Sigma})/{\mathcal{S}_p}({\Sigma})$ while the form in Proposition~\ref{14.2} was symmetric. The case $p \equiv 1 \pmod{4}$ is proved in the same way.
\end{proof}
Via the isomorphism of Proposition~\ref{14.4}, one gets in this way an inner product structure
on $\rad_p({\Sigma}) $ as well.
\section{An upper bound for the cut number of a $3$-manifold}\label{sec.cut}
The {\em co-rank} of a group is the maximal $k$ such that there is an epimorphism of that group onto the free group on $k$ letters \cite{S}.
If $M$ is an oriented connected closed
$3$-manifold, define $c(M),$ the {\em cut number} of $M,$ to be the
maximal number of closed oriented surfaces that one can place in $M$ and still
have a connected complement.
Jaco \cite{Jaco} showed that the co-rank of $\pi_1(M)$ is the cut number
of $M.$ It is easy to see that $c(M) \le {\beta_1(M)}.$
In this section, we show that another upper bound on the cut number is given by quantum
$SO(3)$-invariants. Let $M$ be an oriented connected closed
$3$-manifold and $L$ a banded colored graph in $M$. We use the normalization
$I_p(M,L) = {\mathcal{D}} \, \langle (M,L)\rangle_p$ where $\langle \ \rangle_p$ is the invariant defined in \cite{BHMV2}, and
$M$ is given the weight zero. This
is
the same normalization as in \cite{MR}.
For example, $I_p(S^3)=1$ (since ${\mathcal{D}}=\langle S^3\rangle_p^{-1}$). The invariant $I_p$ takes values in ${\mathcal{O}}$ by \cite{Mu2,MR}.
We let $\bo_p(M,L)$ denote the largest integer $N$ such that $h^N$ divides $I_p(M,L)$.
\begin{thm} \label{cut} Let $M$ be an oriented connected closed
$3$-manifold,
$L$ a banded colored graph in $M$, and
$p= 2d+1 \ge 5$
a prime.
Then $$c (M) \le \frac {\bo_p(M, L)} {d-1}~.$$
\end{thm}
Varying $L$, we obtain many upper bounds on the cut number of $M$. So far, no example has been found where choosing $L$ nonempty yields an interesting upper bound on $c(M)$ that cannot also be obtained by choosing $L$ empty.
Theorem \ref{cut} was conjectured by the first author. He and Kerler then obtained Theorem \ref{cut} for $p=5$ and $L$ empty.
The proof we give of the more general result incorporates some ideas
that were
used in
the
proof of
this
special case.
Cochran and Melvin showed \cite{CM} that
$\beta_1(M)/3 \le \bo_p(M)/(d-1)$.
Three recent papers
show that $\beta_1(M)/3$ can be larger than the cut number $c(M)$ \cite{H, LR,Si}. In this case the result of Cochran and Melvin gives a better lower bound on $\bo_p(M)$ than Theorem \ref{cut}.
On the other hand, when $\beta_1(M)/3 < c(M) $ (which may also happen), Theorem \ref{cut} gives a better lower bound on $\bo_p(M)$.
In the proof, we will use the following easy proposition which follows from the functoriality properties of the TQFT. We will also use it in Section~\ref{sec.FKB}.
\begin{prop}\label{exp} Suppose $N \subset M$ and $N'$ is the exterior of $N$ in $M$.
Further suppose $[N] \in {\mathcal{S}_p}(\partial N)$ can be written as a linear combination $\sum_i a_i [N_i]$. We have that
$I_p(M) = \sum_i a_i I_p(N_i \cup_{\partial N} N')$.
\end{prop}
\begin{proof}[Proof of Theorem \ref{cut}]
Let $c$ be the cut number of $M.$
We can find $c$ disjoint connected oriented surfaces in $M$ which do not disconnect $M.$ As these surfaces do not disconnect, we may tube together parallel copies of them to obtain a copy of their connected sum, disjoint from the original surfaces but homologous to their union. If we add this new surface to the collection, we obtain a collection of $c+1$ connected oriented surfaces which disconnects $M$, while no sub-collection of $c$ of them does. These $c+1$ surfaces disconnect $M$ into two components $Y$ and $Y'$ and each of the
$c+1$ surfaces is a boundary component of the closures of both $Y$ and $Y'$.
Let ${\Sigma}_i$ for $1 \le i \le c +1$ denote the individual surfaces. We can find $c$ disjoint arcs $ \alpha_i$ in $Y$ for $1 \le i \le c$ which join points $x_i \in {\Sigma}_i$ to points $y_i \in {\Sigma}_{i+1}$. Similarly we can find $c$ disjoint arcs $ \alpha'_i$ in $Y'$ which join
the points $x_i\in {\Sigma}_i$ to the points $y_i \in {\Sigma}_{i+1}$.
A neighborhood of the union of the ${\Sigma}_i$ and the $\alpha_i$ and the $\alpha'_i$ is a cobordism $P$ from the connected sum of the ${\Sigma}_i$ to itself. We isotope $L$ so that it is transverse to the ${\Sigma}_i$'s. We assign colored points to each ${\Sigma}_i$ according to this intersection.
We denote the connected sum of the ${\Sigma}_i$ by ${\Sigma}.$ On ${\Sigma}$ we have $c$ simple closed curves $\gamma_i$
such that if we perform surgery along them we recover the disjoint union $\sqcup_i {\Sigma}_i.$ The cobordism $P$ has a nice handle decomposition as follows: ${\Sigma} \times I$ union $c$ 2-handles along $\{ \gamma_i \}_{ 1 \le i \le c}$ union $c$ 1-handles which ``reconnect''
the ${\Sigma}_i$. The core of each 1-handle can be completed to a circle which meets exactly one core of a 2-handle transversely in a single point. We can identify $P$ with the result of framed surgery
on
${\Sigma} \times I$ along $\{ \gamma_i \times \frac 1 2 \}_{ 1 \le i \le c}$ with the framing along ${\Sigma} \times \frac 1 2$. Let $\gamma(\omega)$ denote $\{ \gamma_i \times \frac 1 2 \}_{ 1 \le i \le c}$ colored with the skein element $\omega$ appearing in the surgery axiom (S2) of \cite{BHMV2}. The surgery axiom says that $({\Sigma} \times I, \gamma(\omega))$ induces the same endomorphism of $V_p({\Sigma})$ under the TQFT as does $P.$
Place ${\Sigma} \times I$ in $S^3$ so that its complement is the disjoint union of two handlebodies $H$ and $H'$ and the $ \gamma_j$ bound disks $D_j$ in $H$. Express $[Y] \in {\mathcal{S}_p} ({\Sigma})$ in terms of the basis ${\mathcal{B}}$ associated to a lollipop tree $G$ for $H$ with respect to $\ell({\Sigma})$. \footnote{Warning: we cannot assume that $G$ meets each $D_j$ in a single point, as there may be colored points on each ${\Sigma}_i$. But this causes no problem in the argument.} Similarly express $[Y'] \in {\mathcal{S}_p} (-{\Sigma})$ in terms of the basis ${\mathcal{B}}'$ associated to a lollipop tree $G'$ for $H'$ with respect to $\ell({\Sigma})$.
Applying Proposition \ref{exp} twice, once to expand $Y$ and once to expand $Y'$, we have that $I_p(M,L)$ is a linear combination over ${\mathcal{O}}$ of evaluations of skein classes in $S^3$. Recall that every basis element ${\mathfrak{b}}={\mathfrak{b}}(a,b,c)\in {\mathcal{B}}$ is given by $ h^{- \lfloor\frac 1 2 ( A({\mathfrak{b}})-e({\mathfrak{b}}))\rfloor} {\mathfrak{g}}(a,0,c)$ union some $v$-colored curves. Thus the term corresponding to $({\mathfrak{b}},{\mathfrak{b}}')\in {\mathcal{B}}\times {\mathcal{B}}'$ in the expansion of $I_p(M,L)$ is an integral multiple of $ h^{- \lfloor\frac 1 2 ( A({\mathfrak{b}})-e({\mathfrak{b}}) ) \rfloor - \lfloor\frac 1 2 (A({\mathfrak{b}}')-e({\mathfrak{b}}')) \rfloor }$ times the evaluation of the union of
(i) a colored trivalent graph, namely the gluing of the graphs ${\mathfrak{g}}(a,0,c)$ in $H$ and ${\mathfrak{g}}(a',0,c')$ in $H'$ along their univalent vertices (which correspond to $\ell({\Sigma})$),
(ii) the $ \gamma_i$'s each colored by $\omega$, and
(iii) some extra $v$-colored curves.
We can do fusion along the strands which pass through the spanning disks $D_j$ for the $\omega$-colored $ \gamma_j$'s (none of the $v$-colored curves do), and discard the terms arising with a single non-zero color passing through these
disks, as for all these terms the color must be even.
Then each $\omega$-colored $ \gamma_j$ spanning a disk may be replaced by
an
extra scalar factor of ${\mathcal{D}}$. (This is the same argument as in \cite[Proof of Lemma 4.1]{MR}.) Up to units this gives an extra factor of $h^{(d-1)c}.$ On the other hand, the Lollipop Lemma~\ref{ll}
gives an extra factor of
$h^{ \lceil\frac 1 2 (A({\mathfrak{b}})
+ A({\mathfrak{b}}') )\rceil }$ which compensates the above-mentioned negative power of $h$. Thus each term in the expansion of $I_p(M,L)$ is divisible by $h^{(d-1)c}.$ The result follows.
\end{proof}
\begin{ex}{\em Let ${\Sigma}$ be a closed surface of genus $2$, and let $T$ be the Dehn twist along an essential separating curve
$\gamma$.
Let $M_n$ be the mapping torus of $T^n$. Essential curves disjoint from $\gamma$ sweep out
tori
in $M_n$. Picking one such curve on each side of $\gamma$, we see that the cut number $c(M_n)\geq 2$. On the other hand, computing the trace of the map induced by $T^n$ on $V_p({\Sigma})$, we find $$I_p(M_n)={\mathcal{D}} \sum_{j=0}^{d-1} \zeta_p^{2nj(j+1)} (d-j)^2~.$$ If $n$ is not a multiple of $p$, this can be evaluated using Gauss sums. One finds that
$\bo_p(M_n)=2d-2$, and $c(M_n)\leq 2$ by Theorem~\ref{cut}. \footnote{If $n$ is a multiple of $p$, then $I_p(M_n)$ is ${\mathcal{D}}$ times $\dim V_p({\Sigma})=d(d+1)(2d+1)/6$; this does not lead to a good upper bound for $c(M_n)$.} Varying $p$, this shows $c(M_n)=2$ for every non-zero $n$. We remark that this can also be seen classically by computing the co-rank of $\pi_1(M_n)$.}\end{ex}
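One can test the valuation claim numerically in small cases. Since $h=1-\zeta_p$ generates the unique prime of ${\mathbb{Z}}[\zeta_p]$ above $p$ and $N(h)=p$, the $h$-valuation of an element of ${\mathbb{Z}}[\zeta_p]$ equals the $p$-adic valuation of its norm. The Python sketch below (an illustration using floating-point embeddings, so reliable only for small $p$) checks that the sum above has $h$-valuation $d-1$ for $p=7$, $n=1$, in agreement with $\bo_p(M_n)=2d-2$ once the factor ${\mathcal{D}}$ (which is $h^{d-1}$ up to units) is included.

```python
import cmath

def h_valuation_of_sum(p, n):
    # S = sum_{j=0}^{d-1} zeta_p^{2nj(j+1)} (d-j)^2, with d = (p-1)/2.
    # v_h(S) = v_p(N(S)), where N(S) is the product of the Galois
    # conjugates of S (zeta_p replaced by zeta_p^k, k = 1..p-1).
    d = (p - 1) // 2
    norm = complex(1, 0)
    for k in range(1, p):
        conj = sum(cmath.exp(2j * cmath.pi * (2 * n * j * (j + 1) * k) / p)
                   * (d - j) ** 2 for j in range(d))
        norm *= conj
    N = round(norm.real)  # the norm is a positive rational integer
    v = 0
    while N % p == 0:
        N //= p
        v += 1
    return v

# For p = 7 (so d = 3) and n = 1, the sum has h-valuation d - 1 = 2.
assert h_valuation_of_sum(7, 1) == 2
```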
\section{The Frohman Kania-Bartoszynska ideal invariant}
\label{sec.FKB}
\begin{de} [\cite{FK}] Given a connected $3$-manifold $N$ with boundary,
let ${\mathcal{J}}_p(N)$ be the ideal in ${\mathcal{O}}$ generated by
\[
\{\, I_p(M) \mid \text{$M$ is a closed connected oriented $3$-manifold containing $N$} \,\}. \]
In the case $p\equiv 1 \pmod{4},$ we also define
${\mathcal{J}}_p^+(N)= {\mathcal{J}}_p(N)\cap {\mathcal{O}}^+$. \end{de}
\begin{rem}{ \em If $p\equiv 1 \pmod{4},$ the ideal ${\mathcal{J}}_p(N)$ is generated by scalars which are either in ${\mathcal{O}}^+$ or in $i {\mathcal{O}}^+$. (In fact, $I_p(M)$ lies in $i^{\beta_1(M)}{\mathcal{O}}^+$ \cite[Remark 5.2]{MR}.) Thus ${\mathcal{J}}_p(N)$ is generated over ${\mathcal{O}}$ by ${\mathcal{J}}_p^+(N)$. This is why we prefer to use ${\mathcal{J}}_p^+(N)$
if $p\equiv 1 \pmod{4}.$
}\end{rem}
This ideal is interesting because of the following immediate proposition.
\begin{prop} [\cite{FK}] \label{FK} If $N_1$ embeds in $N_2$, then ${\mathcal{J}}_p(N_2) \subset {\mathcal{J}}_p(N_1),$
and ${\mathcal{J}}_p^+(N_2) \subset {\mathcal{J}}^+_p(N_1),$ if $p\equiv 1 \pmod{4}$. \end{prop}
\begin{rem}{ \em Frohman and Kania-Bartoszynska actually made this definition using the $SU(2)$ theory in place of the $SO(3)$ theory used here.} \end{rem}
The ideal ${\mathcal{J}}_p(N)$ is hard to compute from its definition, because it involves the quantum invariants of infinitely many manifolds.
Frohman and Kania-Bartoszynska were able to show that a related ideal (associated to the Turaev-Viro invariant at the third root of unity) is non-trivial
\footnote{ We take the trivial ideal of a ring to be the ring itself.}
for the union of two solid tori glued together by identifying neighborhoods of $(2,1)$ curves on their boundary.
But it seems that ${\mathcal{J}}_p(N)$ has never
been computed exactly, except
for manifolds $N$ with boundary a $2$-sphere.\footnote{However, sometimes it is possible to compute that the
ideal is trivial or a power of $(h)$,
without using Theorem~\ref{calc},
by making use of known theorems about the quantum invariants of closed
3-manifolds. See \cite{G2} for examples of this.}
We can give a finite set of generators for ${\mathcal{J}}_p(N)$ using our bases.
\begin{thm}\label{calc} Let $N$ be an oriented connected $3$-manifold with boundary ${\Sigma}$. Then ${\mathcal{J}}_p(N)$ is generated by the scalar products $([N], {\mathfrak{b}})_{{\Sigma}}$ as ${\mathfrak{b}}$ varies over a basis for ${\mathcal{S}_p}({\Sigma}).$
\end{thm}
This follows immediately from Proposition \ref{exp}.
An example where ${\mathcal{J}}_p(N)$ is computed using Theorem~\ref{calc} will be given below.
\begin{rem}\label{algol} {\em In practice, if $N$ has connected boundary, we may present it as surgery on a link in a handlebody $H$ standardly embedded in $S^3$. Then we can just as well replace the form $([N], {\mathfrak{b}})_{{\Sigma}}$ with the generalized Hopf pairing $(([N], {\mathfrak{b}}'))_{H,H'}$ (see Section \ref{pB}) where ${\mathfrak{b}}'$ are elements of the graph-like basis for ${\mathcal{S}_p}(-{\Sigma})$ associated to a lollipop tree in the complementary handlebody $H'$. Thus ${\mathcal{J}}_p(N)$ can be computed from the evaluation of a finite number of explicitly given skein elements (consisting of $v$-graphs together with some $\omega$-colored curves) in $S^3$.}\end{rem}
When one computes examples of
${\mathcal{J}}_p(N)$,
one frequently gets the trivial ideal or some power of $(h)$. Here is an example where it is neither.
\begin{figure}[h]
\includegraphics[width=1in]{graphy/link.eps}
\caption{L9a12 in
Thistlethwaite's
list of prime links \cite{B}
} \label{9a12}
\end{figure}
Let $L$ be the link in Figure \ref{9a12}. Let $K$ be the knotted
component
($K$ is a $(5,2)$ torus knot),
and $J$ the unknotted component. Let $N(k)$ be the result of surgery to the exterior of $J$ along $K$ with framing $k$.
If $k$ is odd, $N(k)$ is a homology circle.
\begin{prop}\label{16.7} One has
\begin{equation*}
{\mathcal{J}}^+_5(N(k))=\left\{
\begin{array}{cl}
(1 +2 \zeta_5^3)&\text{if $k \equiv 0 \pmod{5}$,} \\
(1)&\text{otherwise.}\\
\end{array}\right.
\end{equation*}
\end{prop}
\begin{proof}
Proceeding as in Remark \ref{algol}, the computation of the ideal
${\mathcal{J}}^+_5(N(k))$ in ${\mathbb{Z}}[ \zeta_5]$ reduces to the evaluation of two skein elements in $S^3$
(since ${{\mathcal{S}}}_5$ of a torus has rank two). We used data in Bar-Natan's Knot Atlas
and his Mathematica package KnotTheory \cite{B} to help calculate this ideal.
\end{proof}
The ideal $(1 +2 \zeta_5^3)$ is
a
non-trivial ideal as the norm of $1 +2 \zeta_5^3$ is $11$. Thus one has the following immediate
\begin{cor}\label{16.8} The manifold $N(5n)$ does not embed into any closed $3$-manifold $M$ whose $I_5(M)$ is not divisible by
$1 +2 \zeta_5^3.$
In particular $N(5n)$ does not embed into the 3-sphere
(since $I_p(S^3)=1)$.
\end{cor}
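The norm computation behind this corollary is elementary, and can be checked numerically by multiplying over the Galois conjugates of $\zeta_5$ (an illustrative sanity check, not part of the original argument):

```python
import cmath

# Numerical check (not from the paper): the norm of 1 + 2*zeta_5^3 in
# Z[zeta_5] is the product over the Galois conjugates zeta_5 -> zeta_5^k.
zeta5 = cmath.exp(2j * cmath.pi / 5)
norm = 1.0 + 0.0j
for k in range(1, 5):                 # k runs over the units mod 5
    norm *= 1 + 2 * zeta5 ** (3 * k)  # conjugate of 1 + 2*zeta_5^3

print(round(norm.real))  # 11
```

Since the norm is $11$, a rational prime, the ideal $(1+2\zeta_5^3)$ is neither trivial nor a power of $(h)$.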
Further such examples are explored in \cite{G2}. Some more general results about the ideals ${\mathcal{J}}_p(N)$ are given there as well.
\section{Introduction: The Role of Mergers In Current Cosmological Models}
In cosmological models consistent with current observations of the
cosmic microwave background (CMB), large-scale structure, and estimates
of the mean baryon density, roughly 85\% of the matter in
the universe is non-baryonic cold dark matter (CDM). Small
fluctuations in the initial distribution of CDM are known to lead,
via gravitational instability, to the formation of dense,
gravitationally bound structures at later times. These dark matter
`haloes' should reach masses and densities sufficient to allow gas
to cool and condense within them, thereby permitting efficient and
sustained star formation, at redshifts of $\sim$20 or higher. Thus
they represent a natural structural framework within which visible
galaxies can form, from the earliest epoch onwards.
A generic prediction of cosmological models based on CDM is that the
dark matter haloes around galaxies continue to grow through accretion
and mergers with smaller haloes, right up to the present day.
The merger rate for galaxy haloes reaches a peak at a redshift of
$z\sim$1--3, however, and more recent mergers between galaxy haloes
and their associated galaxies will typically be minor ones, adding
little to the total mass of the larger component. The average merger
in the local universe may consist of a small, low surface-brightness
dwarf galaxy being stretched out and disrupted over the course of several
orbits around its parent.
The only direct signature of such an event would be minor fluctuations
in the surface brightness of the stellar halo around the larger galaxy,
lasting for a few orbital periods (Johnston et al.~2001).
Thus, while low-redshift mergers provide an important empirical test of
the CDM framework, they may be extremely difficult to detect in practice.
In the case of the Milky Way, stellar kinematics provide a
unique opportunity to study the merger history in much greater
detail, identifying debris from older or more minor mergers
thanks to its clustering in phase-space
(Tremaine 1993; Johnston 1998; Helmi \& White 1999). Ground-based
radial velocity surveys such as RAVE\footnote{http://www.aip.de/RAVE}
and future astrometric
missions such as GAIA\footnote{http://astro.estec.esa.nl/GAIA}
will explore a completely new range of
parameter space in the merger history of this one particular galaxy,
and should thereby provide an important link between local stellar
populations and the properties of small galaxies forming at
the highest redshifts. To explore the potential of these programmes,
we will review current observations of debris from minor mergers in
external galaxies, and attempt to extend these results to older or
smaller satellites using a semi-analytic model of galaxy formation.
\section{Observed Streams In Nearby Galaxies}
Given their theoretical significance, there has been much
recent interest in detecting ongoing minor mergers in local galaxies.
The most spectacular discoveries of this kind were those made in
our own galaxy and in M31. In 1994, Ibata et al.~announced the
discovery of the Sagittarius Dwarf, a small system in the process of
being disrupted by the Milky Way, whose tidal debris is now spread
out over a large fraction of the sky. More recently, an equally
spectacular tidal stream was discovered around M31 (Ibata et al.\
2001; McConnachie et al.~2003), and there is growing evidence for
another stream in the plane of the Milky Way
(e.g.~Newberg et al.~2002; Ibata et al.~2003).
The LMC-SMC system can also be considered an ongoing
minor merger, since it has already experienced strong tidal effects
due to its proximity to the Milky Way (e.g.~Harris \& Zaritsky 2004),
and will eventually merge with the Galaxy as dynamical friction
acts on its orbit.
Much of the tidal debris detected locally has a very low surface brightness,
and would be difficult or impossible to detect in distant
systems. Nonetheless, there has been some progress in detecting debris
around galaxies outside the Local Group. Early work by Malin
\& Hadley (1997) found faint features in the stellar haloes of
several galaxies, including M83, M104 and NGC2855, some or all
of which may be tidal debris from minor mergers. A clear example
of a tidal stream was discovered around the edge-on spiral NGC5907
by Shang et al.~(1998), and the nearby active galaxy Centaurus~A has a
tidal feature of young blue stars, possibly associated with the recent
merger that triggered its central activity (Peng et al.~2002). These
results were reviewed and discussed recently by Pohlen et al.~(2004),
who also report newly detected streams in four other galaxies. Their
systematic search around 80 edge-on disk galaxies yielded only one of
these examples, however, suggesting the incidence of obvious streams
is low. More recently, Forbes et al.~(2004) have reported the
serendipitous discovery of a dwarf in the process of disruption, with
well-defined tidal streams, in the background of a Hubble Advanced
Camera image of the Tadpole system. These discoveries confirm that
minor mergers are still taking place, but suggest that it may be hard to
obtain a large sample to compare with theoretical models. Finally, we note
that stellar streams are not the only tracer of minor mergers in external
galaxies; others include distinct populations of globular clusters, gas
or dust lanes in early-type galaxies, and kinematic or structural anomalies.
The theoretical interpretation of these features is less straightforward,
however, as their properties may depend more on the long-term response of
the main system (e.g.~via AGN or starbursts) than on the nature of the initial
perturber.
\section{The Unresolved Problem of Dwarf Galaxy Formation}
Given this growing body of information on minor mergers and tidal streams,
it is worth estimating how often these
events are expected to occur in current galaxy formation models.
In recent years several groups have begun to study the formation of
the stellar halo in its full cosmological context (e.g.~Bullock,
Kravtsov, \& Weinberg 2001; Bekki \& Chiba 2001; Brook et al.~2004;
Bullock \& Johnston 2004). The problem is not
as straightforward as it seems, however. The properties of the stellar
halo will depend on when and how the satellites that merged into it
first formed. While simulations of structure formation have now
converged on the properties of dark matter structures on the relevant
scales, it is still not clear how these structures are populated with
visible stars.
It has been known since the early days of CDM that galaxy formation
must be systematically less efficient in smaller
haloes, since the luminosity function of field galaxies flattens
below some magnitude, whereas the halo mass function should be close
to a power law over the corresponding range (e.g.~Kauffmann et al.~1993). More
recently, high-resolution simulations by Klypin et al.~(1999) and
Moore et al.~(1999) have demonstrated that this discrepancy is even
more dramatic for satellites within systems like the Local
Group. Using internal velocities to relate dark matter subhaloes
to visible systems, they showed that down to the scale of
the smallest systems, there should be almost 100 times more subhaloes
around the Milky Way than there are detected satellites. Subsequently,
there has been much discussion of whether dwarf
galaxies populate a small subset of all subhaloes over a wide range of
mass/internal circular velocity, as proposed by Moore et al.~(1999),
or whether they correspond to kinematically cold stellar cores within
the most massive subhaloes, as suggested by Stoehr et al.~(2002) and
Hayashi et al.~(2003).
The second of these solutions is appealing, as it could indicate that
there is a single characteristic halo mass below which star formation
ceases. This solution is strongly disfavoured, however, if not ruled out,
by the strong spatial clustering of the satellites of the Milky
Way. The best estimates of the mass of the Milky Way put its virial
radius somewhere around 300 kpc. If this is the case, then two-thirds
of its satellites lie within the central third of the halo. The
chance of this happening if satellites populated only the most
massive haloes is less than 1\% (Taylor et al.~2003).
The disagreement in the cumulative
radial distributions is shown in Figure~\ref{fig_1}, where we compare the
average results for the dozen most massive satellites from each of a
large set of semi-analytic models of halo substructure, generated
using the method outlined in Taylor \& Babul (2004a, 2004b, 2004c)
(solid line + dotted 99\% contours) with the distribution for the
satellites of the Milky Way (upper line \& triangles). Thus at least
some of the satellites must reside in smaller haloes.
\begin{figure}[h]
\begin{center} \leavevmode
\centerline{\epsfig{file=taylor_figure1c.eps, width=0.3 \textwidth}}
\end{center} \caption{The cumulative radial distribution of visible
satellites around the Milky Way (upper line/triangles), compared with the
predicted distribution of the most massive subhaloes (lower
solid line, with dotted 99\% contours).} \label{fig_1}
\end{figure}
There may be additional clues to the origin of the
local dwarfs in their kinematics, which appear very different from
those of randomly selected subhaloes. In Figure~\ref{fig_2}, for
instance, we show how the positions and radial velocities of known
satellites (points with error bars) compare with the distribution
of massive haloes taken from a large set of semi-analytic haloes.
There is a clear mis-match between the two distributions in
projection along either axis.
\begin{figure}[h] \begin{center} \leavevmode
\centerline{\epsfig{file=taylor_figure2c.eps, width=0.3 \textwidth}}
\end{center} \caption{Radial velocity versus position for the
satellites of the Milky Way (points with error bars) and the most
massive subhaloes from a series of semi-analytic haloes (smaller symbols).}
\label{fig_2}
\end{figure}
One possible explanation, both for the spatial clustering and for the observed
velocity distribution, is that most of the dwarfs are old, and that
the lower cutoff to dwarf formation increases with time. In particular,
Kravtsov et al.~(2004) have produced a detailed model
of dwarf formation where this occurs naturally, through a combination of
internal and external feedback. In the next section we will consider
a simplified version of this model and discuss its implications for
stream formation.
\section{A Semi-analytic Model of Stream Formation}
Given the complexity of modelling dwarf galaxy formation
self-consistently, it is useful to construct a simplified
semi-analytic model of the process, to get a preliminary estimate of
how often minor mergers with dwarf satellites would have occurred in
a system like the Milky Way. Our model is based on the semi-analytic
model of halo formation presented in Taylor \& Babul (2004a,b,c),
which predicts the numbers, orbits, internal properties and dynamical
evolution of dark matter subhaloes within a galaxy halo. To make
predictions about visible satellites, we will suppose that stars form
preferentially in subhaloes with deeper potential wells, but also
that the lower threshold to this process increases with time, such
that some of the largest subhaloes around the Milky Way have never
formed stars. Specifically, we choose a time-varying threshold in peak
circular velocity of the form $V_{\rm p} > V_{\rm
p,0}\,(1+z)^{-\alpha}$. Subhaloes that exceed this threshold at any
time before they merge with the main halo are assumed to host visible
dwarf galaxies. The two parameters in this model can be adjusted to give
the right total number and spatial distribution of surviving satellites at
the present day.
Having identified which systems form stars, we can then estimate when
stellar material will be stripped from them and added to the stellar
halo. We assume the stars within a given subhalo are restricted to its
central region, in keeping with the observed sizes of dwarf galaxies,
and consequently that tidal stripping does not produce visible streams
until a system has lost most of its original dark matter. Considering
the stellar-to-total mass ratios of larger galaxies, we set a mass-loss
threshold at 83\%, and refer to systems that have lost less than this
fraction of their original mass as `surviving satellites', and those
that have lost more as `tidal streams'. Systems that lose
more than 99\% of their mass may become completely unbound (Taylor \&
Babul 2004a), so we refer to them as `disrupted' in what follows.
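The bookkeeping just described can be summarized in a few lines; this is an illustrative sketch of the classification only, with the function name chosen here rather than taken from the model code:

```python
def classify(mass_lost_fraction):
    """Label a subhalo by the fraction of its original mass lost.

    Thresholds follow the text: <83% lost -> surviving satellite,
    83-99% lost -> tidal stream, >99% lost -> disrupted.
    """
    if mass_lost_fraction < 0.83:
        return "surviving satellite"
    elif mass_lost_fraction < 0.99:
        return "tidal stream"
    return "disrupted"

print(classify(0.50))   # surviving satellite
print(classify(0.95))   # tidal stream
print(classify(0.999))  # disrupted
```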
The distribution of stripped material from systems containing stars
is shown in Figure \ref{fig_3}, for one particular semi-analytic halo.
The debris is plotted at the point where it was first stripped off,
to highlight the underlying orbital structure.
Over time this material will be heated and phase-mixed, producing a
smoother final halo. The colours in the figure indicate the epoch at which
the stripped system first merged with the main halo.
There is a strong age gradient in the final system, with ancient mergers
contributing most of the material within the solar radius, while more
recent mergers add streams to the outer halo.
\begin{figure}[h] \begin{center} \leavevmode
\centerline{\epsfig{file=taylor_figure3c2.ps, width=0.3 \textwidth}}
\end{center} \caption{Tidal debris from merging satellites in a
semi-analytic model halo, plotted at the point where it was first
stripped. The material is colour-coded by age, the central material
being systematically older. The figure is in the plane of
the disk and the scale is in kpc.} \label{fig_3}
\end{figure}
\section{Results: Predicted Stream Properties}
Using the model outlined above, we have generated streams in a large
set of model haloes. We find that we can produce an average of ten
`surviving' satellites per halo, roughly the number known to date for
the Milky Way, by setting $V_{\rm p,0} = 135$ km\,s$^{-1}$ and $\alpha
= 2/3$ in the expression given above.
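For concreteness, the star-formation criterion with these fitted parameters can be evaluated at any redshift (a sketch of the threshold only; the function name is ours, not the model's):

```python
def v_threshold(z, v_p0=135.0, alpha=2.0/3.0):
    """Minimum peak circular velocity (km/s) for a subhalo to host a
    visible dwarf: V_p > V_p0 * (1+z)^(-alpha), with the fitted values
    V_p0 = 135 km/s and alpha = 2/3 as defaults."""
    return v_p0 * (1.0 + z) ** (-alpha)

print(v_threshold(0.0))  # 135.0 km/s today
print(v_threshold(7.0))  # ~33.75 km/s at z = 7, since 8^(2/3) = 4
```

The threshold dropping by a factor of four between $z=7$ and the present day is what allows small, early-forming haloes to host stars while comparably massive late-forming ones remain dark.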
With this choice of parameters, our model predicts $\sim$8 `tidal
streams' (i.e.\ heavily stripped stellar systems) per halo, and
more than 100 disrupted dwarf galaxies, as well as 15 that have merged
directly into the centre of the main galaxy on very radial
orbits. Figure~\ref{fig_4} shows the distributions of various parameters,
including the original total mass, formation redshift, merger
redshift, and final radius (four columns, from left to right) for
these four classes of objects (four rows, from top to bottom).
Various interesting trends appear in these distributions; in particular,
the progenitors of tidal streams (second row) lie at somewhat smaller radii
($\sim$ 30 kpc), are slightly older, and have larger original
masses than the surviving satellites (top row).
\begin{figure}[h] \begin{center} \leavevmode
\centerline{\epsfig{file=taylor_figure4c.eps, width=0.32 \textwidth}}
\end{center} \caption{Distribution of original total mass,
formation redshift, merger redshift, and final radial position
(columns from left to right) of stellar systems that survive at the
present-day, tidal stream progenitors, disrupted systems and central
mergers (four rows, from top to bottom) in a large set of semi-analytic
haloes.} \label{fig_4}
\end{figure}
We can compare these predictions with the observations discussed
above. On the one hand, only a few streams have been detected in
the Milky Way (Sagittarius, Canis Major, and possibly a few older streams
associated with objects such as the massive globular cluster
$\omega$ Cen; cf.~Bekki \& Freeman 2003).
On the other hand, given the difficulty of detecting older streams,
substantial incompleteness is not implausible. Our results suggest
that GAIA may expect to sample stars from a much larger population of
disrupted systems -- on the order of 100. Most of these systems will
have merged with the halo at redshifts of 2 or higher, however,
and will consist of old stars, strongly mixed in phase-space
by repeated scattering from their initial orbits. It will take much
more detailed modelling of the disruption and mixing process
(cf.~Johnston 1998; Helmi \& White 1999) to determine how many
distinct structures should be detectable in practice.
\section*{Acknowledgements}
The author gratefully acknowledges helpful discussions with
A.\ Babul, A.\ Ferguson, A.\ Kravtsov and J.\ Silk, as well as
financial support from the U.K.\ Particle Physics and Astronomy
Research Council (PPARC).
\section{DYBR and the relation between factorized $L$-matrix and minimal
$L$-matrix }
~~~~
It is well known that the Boltzmann weight of the $A^{(1)}_{n-1}$
IRF\cite{jim1,jim2,jim3} model can be written as
\begin{equation}\begin{array}{ll}
R(a|z)_{ii}^{ii}=\displaystyle\frac{\sigma(z+w)}{\sigma(w)},&
R(a|z)_{ij}^{ij}=\displaystyle\frac{\sigma(z)\sigma(
a_{ij}-w)}{\sigma(w)\sigma(a_{ij})}\quad
\mbox{for}\;\;i\ne j,\\
R(a|z)_{ij}^{ji}=\displaystyle\frac{\sigma(z+a_{ij})}{\sigma(a_{ij})}
\quad \mbox{for}\;\;i\ne j,&
R(a|z)_{i'j'}^{i\ j}=0 \quad\mbox{for other cases}, \label{R}
\end{array}\end{equation}
where $a\equiv(m_0,m_1,\cdots,m_{n-1})$ is an $n$-vector, and $
a_{ij}= a_i-a_j$, $a_i=w(m_i-\frac{1}{n}\sum_l m_l+w_i)$, $m_i$
($i=0,1,\cdots,n-1$) are integers which describe the state of the model, while
$\{w,w_i\}$ are generic c-numbers which are the parameters of the model,
and $\sigma(z)\equiv\theta\left[^{\frac{1}{2}}_{\frac{1}{2}}\right](z,\tau),$
with
$$\theta\left[^a_b\right](z,\tau)\equiv
\sum_{m\in Z}e^{i\pi(m+a)^2\tau+2i\pi(m+a)(z+b)}.$$
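As a cross-check of these conventions (an illustrative sketch, not part of the paper), the series can be truncated and evaluated numerically; the quasi-periodicity $\theta[^a_b](z+1,\tau)=e^{2\pi ia}\,\theta[^a_b](z,\tau)$ holds term by term, and $\sigma(z)=\theta[^{1/2}_{1/2}](z,\tau)$ vanishes at $z=0$:

```python
import cmath

def theta(a, b, z, tau, nterms=40):
    """Truncated theta series theta[a;b](z, tau); assumes Im(tau) > 0
    so the Gaussian factor exp(i*pi*(m+a)^2*tau) decays rapidly."""
    return sum(
        cmath.exp(1j * cmath.pi * (m + a) ** 2 * tau
                  + 2j * cmath.pi * (m + a) * (z + b))
        for m in range(-nterms, nterms + 1)
    )

tau = 1j
# sigma(z) = theta[1/2; 1/2](z, tau) is odd, so sigma(0) = 0.
assert abs(theta(0.5, 0.5, 0.0, tau)) < 1e-12
# Quasi-periodicity: z -> z + 1 multiplies by exp(2*pi*i*a) = -1 for a = 1/2.
z = 0.3
assert abs(theta(0.5, 0.5, z + 1, tau) + theta(0.5, 0.5, z, tau)) < 1e-9
```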
We define an $n$-dimensional vector $\hat j=(0,0,\cdots,0,1,0,\cdots)$,
whose $j$th component is 1.
We consider a matrix whose elements are linear operators. We
denote the elements of the matrix as $ L(^a_b|z)^j_i$. The $R$-matrix and
the $L$-matrix can also be depicted by the following figures,\\
\begin{picture}(50,50)(0,0)
\put(150,-20){\vector(-2,1){80}}
\put(150,20){\vector(-2,-1){80}}
\put(107,-10){$a$}\put(127,-20){$i'$}\put(87,16){$i$}
\put(127,16){$j'$}\put(87,-20){$j$}\put(65,25){$z_1$}\put(65,-26.5){$z_2$}
\end{picture}
\begin{picture}(50,50)(0,0)
\put(300,5){\vector(-1,0){80}}
\put(260,35){\vector(0,-1){60}}\put(260.5,35){\vector(0,-1){60}}
\put(261,35){\vector(0,-1){60}}
\put(230,20){$b+{\hat i}$}
\put(280,-10){$a$}\put(210,-15){$b\equiv a+{\hat h}$}\put(240,8){$i$}
\put(275,8){$j$}\put(265,-20){$h$}\put(213,2.5){$z$}
\put(270,25){$a+{\hat j}$}
\end{picture}
\\
\bigskip
\\
$$\begin{array}{cc}
Figure\; 1: The\;elements \;of\;R\mbox{-}matrix\;\;\;\;&\;\;\;\;
Figure\; 2: The\; element \;of\;L\mbox{-}matrix,\\
R(a|z_1-z_2)_{ij}^{i'j'}.&L(a,h|z)_i^j\equiv L(_b^a|z)_i^j.
\end{array}$$
\\
\\
$$
\begin{picture}(50,50)(0,0)
\put(-10,-32.5){\vector(-2,1){100}}
\put(-10,32.5){\vector(-2,-1){100}}
\put(-30,50){\vector(0,-1){100}}\put(-30.5,50){\vector(0,-1){100}}
\put(-31,50){\vector(0,-1){100}}
\put(-27,-35){$a$}\put(-95,-35){$b\equiv a+{\hat h}$}\put(-100,-21){$j$}
\put(-100,17){$i$}\put(-40,-40){$h$}\put(-120,19){$z_1$}
\put(-60,-5){$b+\hat i'$}
\put(-120,-19){$z_2$}\put(-20,-25){$i''$}\put(-25,30){$j''$}
\put(-23,0){$a+{\hat i''}$}\put(-65,10){$j'$}\put(-65,-18){$i'$}
\end{picture}
=
\begin{picture}(50,50)(0,0)
\put(150,-15.5){\vector(-2,1){100}}
\put(150,15.5){\vector(-2,-1){100}}
\put(80,50){\vector(0,-1){100}}\put(80.5,50){\vector(0,-1){100}}
\put(81,50){\vector(0,-1){100}}
\put(116,-10){$a$}\put(30,-50){$b\equiv a+{\hat h}$}\put(60,18){$i$}
\put(38,0){$b+\hat j$}\put(82,-5){$a+\hat j'$}
\put(60,-24){$j$}\put(84,-40){$h$}\put(38,33){$z_1$}
\put(38,-37){$z_2$}\put(140,-25){$i''$}\put(140,17){$j''$}
\put(95,15){$i'$}\put(95,-25){$j'$}
\end{picture}$$
\\
\\
\\$$ Figure\; 3: The\;\; \;dynamical \;\;Yang\mbox{-}Baxter \;\;relation.$$
The dynamical Yang-Baxter relation (DYBR) is written as (also see figure
3)
\begin{equation}
\sum_{i',j'}R(b|z_1-z_2)_{ij}^{i'j'}L(_b^a|z_1)_{i'}^{i''}L(_{b+\hat
i'}^{a+\hat i''}|z_2)_{j'}^{j''}
=\sum_{i',j'}L(_b^a|z_2)_{j}^{j'}L(_{b+\hat j}^{a+\hat j'}|z_1)_{i}^{i'}
R(a|z_1-z_2)_{i'j'}^{i''j''},\label{DYBR}
\end{equation}
where $b\equiv(m_0^b,m_1^b,\cdots,m_{n-1}^b),$
$a\equiv(m_0^a,m_1^a,\cdots,m_{n-1}^a).$ We note that
Eq.(\ref{DYBR}) gives the quadratic relation of the elements of
$L$. If we let $b=a+h$, the form of the equation will be the same
as that given in Refs.\cite{fe1,fe2}, and the relation which
$L$ satisfies is the defining relation of the elliptic quantum
group proposed by Felder and Varchenko. Here, the elements of the
$L$-matrix are operators, and Eq.(\ref{DYBR}) is the algebra of
these operators. In this paper, we only discuss the minimal form
of the operators; namely, we only consider the simplest case in which
all elements are c-numbers. In this situation,
$L(_b^a|z)_i^j$ is a scalar function of $(a,b,z,i,j)$. We will try
to find the general form of such $L$-matrix. From Eq.(\ref{DYBR}),
we have
\begin{eqnarray}
\frac{L(_{b+\hat i}^{a+\hat
i'}|z_1)_{i}^{i'}}{L(_b^a|z_1)_{i}^{i'}}&=&
\frac{L(_{b+\hat
i}^{a+\hat i'}|z_2)_{i}^{i'}}{L(_b^a|z_2)_{i}^{i'}},\label{DYBR-1}\\
R(b|z_1-z_2)_{ii}^{ii}&=&R(a|z_1-z_2)_{i'j'}^{i'j'}
\frac{L(_b^a|z_2)_i^{j'}}{L(_{b+\hat i}^{a+\hat i'}|z_2)_i^{j'}}
\frac{L(_{b+\hat i}^{a+\hat
j'}|z_1)_i^{i'}}{L(_b^a|z_1)_i^{i'}}\nonumber\\
&+&R(a|z_1-z_2)^{i'j'}_{j'i'}
\frac{L(_b^a|z_2)_i^{i'}}{L(_{b+\hat i}^{a+\hat i'}|z_2)_i^{j'}}
\frac{L(_{b+\hat i}^{a+\hat i'}|z_1)_i^{j'}}{L(_b^a|z_1)_i^{i'}}
\quad (i'\ne j'),\label{DYBR-2}\\
R(a|z_1-z_2)_{i'i'}^{i'i'}&=&R(b|z_1-z_2)_{ij}^{ij}
\frac{L(_b^a|z_1)_i^{i'}}{L(_{b+\hat j}^{a+\hat i'}|z_1)_i^{i'}}
\frac{L(_{b+\hat i}^{a+\hat
i'}|z_2)_j^{i'}}{L(_b^a|z_2)_j^{i'}}\nonumber\\
&+&R(b|z_1-z_2)_{ij}^{ji}
\frac{L(_b^a|z_1)_j^{i'}}{L(_{b+\hat j}^{a+\hat i'}|z_1)_i^{i'}}
\frac{L(_{b+\hat j}^{a+\hat i'}|z_2)_i^{i'}}{L(_b^a|z_2)_j^{i'}}
\quad (i\ne j).\label{DYBR-3}
\end{eqnarray}
By solving Eq.(\ref{DYBR-2}) and Eq.(\ref{DYBR-3}), we can
determine $L(^a_b|z)_i^j$ as a function of $z$. Let
\begin{eqnarray}
&&\frac{L(_b^a|z_2)_i^{j'}}{L(_{b+\hat i}^{a+\hat
i'}|z_2)_i^{j'}}\equiv
g(z_2),\;\;\;\;\;\;
\frac{L(_{b+\hat i}^{a+\hat j'}|z_1)_i^{i'}}{L(_b^a|z_1)_i^{i'}}\equiv
h(z_1),\;\;\;\;\;\;
\frac{L(_{b+\hat i}^{a+\hat i'}|z)_i^{j'}}{L(_b^a|z)_i^{i'}}\equiv
f(z),\\
&&\frac{R(b|z_1-z_2)_{ii}^{ii}}{R(a|z_1-z_2)_{i'j'}^{i'j'}}\equiv
A(z_1-z_2),\;\;\;\;\;\;\;\;\;
-\frac{R(a|z_1-z_2)_{j'i'}^{i'j'}}{R(a|z_1-z_2)_{i'j'}^{i'j'}}\equiv
B(z_1-z_2).
\end{eqnarray}
We then rewrite Eq.(\ref{DYBR-2}) as
\begin{equation}
g(z_2)h(z_1)=A(z_1-z_2)+B(z_1-z_2)f(z_1)/f(z_2)\equiv F(z_1,z_2).\label{bas}
\label{eq1-bas}
\end{equation}
We find that the left-hand side of the above equation factorizes
into a function of $z_1$ times a function of $z_2$. So, taking the
logarithm of both sides and differentiating with
respect to $z_1$ and $z_2$, we have
\begin{equation}
\frac{\partial^2}{\partial z_1\partial z_2}\ln F(z_1,z_2)=0.
\end{equation}
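Indeed, since $F(z_1,z_2)=g(z_2)h(z_1)$ by Eq.(\ref{eq1-bas}), the vanishing of the mixed logarithmic derivative is just the separation of variables:

```latex
\ln F(z_1,z_2)=\ln h(z_1)+\ln g(z_2)
\quad\Longrightarrow\quad
\frac{\partial^2}{\partial z_1\,\partial z_2}\ln F(z_1,z_2)=0 .
```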
Hence the above equation gives
\begin{equation}
F(z_1,z_2)\frac{\partial^2}{\partial z_1 \partial z_2}F(z_1,z_2)
-\frac{\partial}{\partial z_1}F(z_1,z_2)
\frac{\partial}{\partial z_2}F(z_1,z_2)=0.
\label{eq2-bas}
\end{equation}
By using Eq.(\ref{eq1-bas}) and Eq.(\ref{eq2-bas}), we can get a
second-order algebraic equation for $f(z_1)$,
\begin{eqnarray}
& & f(z_1)^2\left [d_1f'(z_2)+d_2f(z_2)\right ]+f(z_1)\left
[d_3f'(z_2)^2+d_4f'(z_2)f(z_2)+d_5f(z_2)^2\right ]\nonumber \\
&+&d_6f'(z_2)f(z_2)^2+d_7f(z_2)^3=0, \label{eq-2order}
\end{eqnarray}
where $d_i \;\; (\; i=1,\;2,\;\cdots, \; 7 \;)$ are known
functions of $z_1-z_2$. Define
\begin{eqnarray*}
y&=&\frac{f(z_1)}{f(z_2)},\\
\theta&\equiv& \frac{f'(z_2)}{f(z_2)}=\displaystyle
\frac{\partial}{\partial z_2}
\ln\left\{\frac{L(_{b+\hat i}^{a+\hat i'}|z_2)_i^{j'}}
{L(_b^a|z_2)^{i'}_i} \right\}.
\end{eqnarray*}
Then, Eq.(\ref{eq-2order}) can be rewritten as
\begin{equation}
y^2(d_1\theta+d_2)+y(d_3\theta^2 +d_4\theta+d_5)+(d_6\theta+d_7)=0.
\label{eq-2order1}
\end{equation}
When $z_2$ is fixed, the coefficients of Eq.(\ref{eq-2order1}) are
functions of $z_1$, so $y$ is also a function of $z_1$. Since
Eq.(\ref{eq-2order1}) is quadratic in $y$, it can have at most two
solutions $y_1(z_1,z_2)$ and $y_2(z_1,z_2)$. If we can
find two different $L$-matrices $L_1(^a_b|z)$ and $L_2(^a_b|z)$
which satisfy the DYBR with the same $\theta$, we must have
$f(z_1)/f(z_2)=f_1(z_1)/f_1(z_2)$ or
$f(z_1)/f(z_2)=f_2(z_1)/f_2(z_2)$, where $f_1$ and $f_2$ are
obtained from the two different $L$'s.
Then, we can obtain
$f(z_1)\sim f_1(z_1)$ or $f(z_1)\sim f_2(z_1)$, where $``\sim"$
means that, as functions of $z_1$, the two sides
differ only by a constant independent of $z_1$. Thus, we can conclude
that if there are two $L_i(^a_b|z)$ ($i=1,2$) which satisfy the
DYBR and are not proportional to each other, and when $z=z_2$ they
have the same $\theta=\theta_1(z_2)=\theta_2(z_2)$, then every $f(z)$
related with an $L(_b^a|z)$ satisfying $f'(z)/f(z)=\theta$ when
$z=z_2$ must satisfy
\begin{equation}
f(z)=f_1(z)const. \quad \mbox{or}\quad f(z)=f_2(z)const. \;,\label{f(z)}
\end{equation}
where the constants do not depend on $z$.
Now we consider the factorized $L$-matrix[14-17],
which has an adjustable parameter $\delta$. We will show that for
given $z_2$ and $\theta$, there are generally two different $\delta$'s
which can give
$f'_{\delta_1}(z_2)/f_{\delta_1}(z_2)
=f'_{\delta_2}(z_2)/f_{\delta_2}(z_2)=\theta$.
Considering the intertwiner of the $Z_n$ Belavin model and the
$A^{(1)}_{n-1}$ IRF model\cite{bax,jim4}, we have
\begin{eqnarray*}
\varphi_{a+\hat i,a}^{(j)}(z)&=&\theta\left[\begin{array}{c}
\frac{1}{2}-\frac{j}{n}\\ \frac{1}{2}\end{array}\right
](z+n(a+\hat i)_i,n\tau)\equiv
\theta^{(j)}(nz_i),\\
(a+\hat i)_i&=&w(m_i+1-\frac{1}{n}\sum_l(m_l+\delta_{il})+w_i)
=a_i+w(1-\frac{1}{n}).
\end{eqnarray*}
Define $\tilde\varphi_{a+\hat\nu,a}^{(j)}(z)$ which satisfies
$$
\sum_{j=0}^{n-1}\tilde\varphi_{a+\hat\nu,a}^{(j)}(z)
\varphi_{a+\hat\mu,a}^{(j)}(z)=\delta_{\mu\nu}.$$
Let
\begin{equation}
\bar L_s(_b^a|z)^\nu_\mu
=\sum_{j=0}^{n-1}\tilde\varphi_{a+\hat\nu,a}^{(j)}(z)
\varphi_{b+\hat\mu,b}^{(j)}(z+s),
\end{equation}
where $s$ is an arbitrary parameter. Then by using the
correspondence relation between face and vertex\cite{jim4}, we can
prove that the $L$-matrix above satisfies the DYBR
Eq.(\ref{DYBR}). After some derivation, we have\cite{shi}
$$
\bar L_s(_b^a|z)^\nu_\mu
=\frac{\sigma(z+\Delta+(n-1)w-\frac{n-1}{2}+\frac{s}{n}+b_\mu- a_\nu)}
{\sigma(z+\Delta+(n-1)w-\frac{n-1}{2})}
\prod_{j(\ne \nu)}\frac{\sigma(\frac{s}{n}+b_\mu-a_j)}
{\sigma(a_\nu-a_j)}
$$
with $\Delta=w\sum_jw_j$. Let
$\delta=\Delta+(n-1)w-(n-1)/2+s/n=\delta(s),\ \delta'=s/n$. Since
$\sigma(z+\Delta+(n-1)w-(n-1)/2)$ is independent of
$a,b,\mu,\nu$, from the above formula we can prove that
\begin{eqnarray}
L_\delta(_b^a|z)^\nu_\mu
&=&\bar L_s(_b^a|z)^\nu_\mu\sigma(z+\Delta+(n-1)w-\frac{n-1}{2})\nonumber\\
&=&\sigma(z+\delta+b_\mu-a_\nu)
\prod_{j\ne \nu}\frac{\sigma(\delta'+b_\mu-a_j)}{\sigma(a_\nu-a_j)} \label{factor}
\end{eqnarray}
also satisfies the DYBR (Eq.(\ref{DYBR})).
Considering the definition of $\theta$, we have
\begin{equation}
\theta(z)=\frac{f'_\delta(z)}{f_\delta(z)}
=\frac{\sigma'(z+\delta+b_i-a_{j'}+w)}{\sigma(z+\delta+b_i-a_{j'}+w)}
-\frac{\sigma'(z+\delta+b_i-a_{i'})}{\sigma(z+\delta+b_i-a_{i'})}.\label{theta}
\end{equation}
By using the properties of the $\theta$-function, one can show that for a given $\theta$,
there generally exist two different $\delta$'s satisfying Eq.(\ref{theta}).
From Eq.(\ref{f(z)}), we know that for the $L$-matrix which
satisfies the DYBR,
\begin{equation}
f(z)\sim f_\delta(z) \label{f}
\end{equation}
must hold for a certain $\delta$. And from Eq.(\ref{bas}), we
know that $g(z)$ and $h(z)$ are completely determined by $f(z)$
up to a scale factor. So we have
\begin{equation}
g(z)\sim g_\delta(z), \quad h(z)\sim h_\delta(z). \label{g}
\end{equation}
Here the parameter $\delta$ is the same as that in Eq.(\ref{f}). Then, from
Eq.(\ref{f}) and Eq.(\ref{g}), we have
\begin{eqnarray}
& &\frac{L(_{b+\hat i}^{a+\hat i'}|z)_i^{j'}}{L(_b^a|z)_i^{i'}}\sim
\frac{\sigma(z+\delta+b_i-a_{j'}+w)}{\sigma(z+\delta+b_i-a_{i'})},
\label{L/L-1}\\
& &\frac{L(_{b+\hat i}^{a+\hat j'}|z)_i^{i'}}{L(_b^a|z)_i^{i'}}\sim
\frac{\sigma(z+\delta+b_i-a_{i'}+w)}{\sigma(z+\delta+b_i-a_{i'})},
\label{L/L-2}\\
& &\frac{L(_b^a|z)_i^{j'}}{L(_{b+\hat i}^{a+\hat i'}|z)_i^{j'}}\sim
\frac{\sigma(z+\delta+b_i-a_{j'})}{\sigma(z+\delta+b_i-a_{j'}+w)}.
\label{L/L-3}
\end{eqnarray}
So from Eq.(\ref{L/L-1}) and Eq.(\ref{L/L-3}), we can obtain
\begin{equation}
\frac{L(_b^a|z)_i^{j'}}{L(_b^a|z)_i^{i'}}\sim
\frac{\sigma(z+\delta+b_i-a_{j'})}{\sigma(z+\delta+b_i-a_{i'})}.\label{L/L-4}
\end{equation}
In Eqs.(\ref{L/L-1})-(\ref{L/L-4}), all $\delta$'s are the same.
We note here that $\delta$ may depend on $i,i',j',a,b$, but it
does not depend on $z$, i.e. $\delta=\delta_i(abi'j')$. One sees
from Eqs.(\ref{L/L-1}), (\ref{L/L-2}) and (\ref{L/L-4}) that
$\delta_i(i'j')\cong\delta_i(j'i')$ (mod $\Lambda_\tau$).
Similarly, from the Eq.(\ref{DYBR-3}), we have
\begin{eqnarray}
& &\frac{L(_{b+\hat j}^{a+\hat i'}|z)_i^{i'}}{L(_b^a|z)_j^{i'}}\sim
\frac{\sigma(z+\delta+b_i-a_{i'}-w)}{\sigma(z+\delta+b_j-a_{i'})},
\label{L/L-5}\\
& &\frac{L(_{b+\hat i}^{a+\hat i'}|z)_j^{i'}}{L(_b^a|z)_j^{i'}}\sim
\frac{\sigma(z+\delta+b_j-a_{i'}-w)}{\sigma(z+\delta+b_j-a_{i'})},
\label{L/L-6}\\
& &\frac{L(_{b+\hat j}^{a+\hat i'}|z)_i^{i'}}{L(_b^a|z)_i^{i'}}\sim
\frac{\sigma(z+\delta+b_i-a_{i'}-w)}{\sigma(z+\delta+b_i-a_{i'})},
\label{L/L-7}\\
& &\frac{L(_b^a|z)_j^{i'}}{L(_b^a|z)_i^{i'}}\sim
\frac{\sigma(z+\delta+b_j-a_{i'})}{\sigma(z+\delta+b_i-a_{i'})}.
\label{L/L-8}
\end{eqnarray}
Here the dependence of the $\delta$'s is similar to the former case.
We also have $\delta=\delta^{i'}(abij)$ and
$\delta^{i'}(ij)\cong\delta^{i'}(ji)$ (mod $\Lambda_\tau$).
\section{Dependence of the elements of the $L$-matrix on the spectral parameter $z$}
~~ In this section, we study the dependence of $L(^a_b|z)^j_i$
with respect to $z$. It is found that there are only five possible
forms of $L$-matrices in the whole lattice. We prove this in the
following steps.
Step 1. Assume $i\ne i',\;j\ne j'$. From Eq.(\ref{L/L-4}) and
Eq.(\ref{L/L-8}), we have
\begin{eqnarray}
\displaystyle \frac{L(_b^a|z)_i^j}{L(_b^a|z)_{i'}^{j'}}&=&
\frac{L(_b^a|z)_i^j L(_b^a|z)_i^{j'}}
{L(_b^a|z)_i^{j'}L(_b^a|z)_{i'}^{j'}}\sim
\frac{\sigma(z+\delta_i+b_i-a_{j})}{\sigma(z+\delta_i+b_i-a_{j'})}
\frac{\sigma(z+\delta^{j'}+b_i-a_{j'})}{\sigma(z+\delta^{j'}+b_{i'}-a_{j'})},
\label{L/L-9}\\
\displaystyle \frac{L(_b^a|z)_i^j}{L(_b^a|z)_{i'}^{j'}}&=&
\frac{L(_b^a|z)_i^j L(_b^a|z)_{i'}^{j}}
{L(_b^a|z)_{i'}^{j}L(_b^a|z)_{i'}^{j'}}\sim
\frac{\sigma(z+\delta^j+b_i-a_{j})}{\sigma(z+\delta^j+b_{i'}-a_{j})}
\frac{\sigma(z+\delta_{i'}+b_{i'}-a_j)}{\sigma(z+\delta_{i'}+b_{i'}-a_{j'})}
\label{L/L-10}
\end{eqnarray}
giving
\begin{eqnarray}
&&\frac{\sigma(z+\delta_i+b_i-a_{j})}{\sigma(z+\delta_i+b_i-a_{j'})}
\frac{\sigma(z+\delta_{i'}+b_{i'}-a_{j'})}{\sigma(z+\delta_{i'}+b_{i'}-a_j)}
\frac{\sigma(z+\delta^{j'}+b_i-a_{j'})}{\sigma(z+\delta^{j'}+b_{i'}-a_{j'})}
\frac{\sigma(z+\delta^j+b_{i'}-a_{j})}{\sigma(z+\delta^j+b_i-a_{j})}
\nonumber \\
&&\equiv \frac{(1)\;(2)\;(3)\;(4)}{(1')(2')(3')(4')}\sim 1, \label{1}
\end{eqnarray}
where $$\begin{array}{ll}
\delta_i=\delta_i(a,b,j,j'),& \quad \delta^j=\delta^j(a,b,i,i'),\\
\delta_{i'}=\delta_{i'}(a,b,j,j'),&\quad
\delta^{j'}=\delta^{j'}(a,b,i,i').
\end{array}$$
Obviously, in Eq.(\ref{1}) the zeros of the numerator must coincide
with those of the denominator. From this fact, and noticing that $a_j$
and $a_{j'}$, $b_i$ and $b_{i'}$ are generic complex numbers, we
analyze all cases and obtain
\begin{equation}
\delta_i-\delta_{i'}\cong K(b_{i'}-b_i)
\quad \mbox{and}\quad \delta^j-\delta^{j'}\cong K(a_j-a_{j'})\quad
K=0,1,2,\label{K}
\end{equation}
where $\delta_i=\delta_i(j\ j')$, $\delta_{i'}=\delta_{i'}(j\
j')$, $\delta^{j}=\delta^{j}(i\ i')$, $\delta^{j'}=\delta^{j'}(i\
i')$ and $K=K(i\ i'\ j\ j')$. From Eq.(\ref{1}), we also have
\begin{eqnarray}
&& \delta_i\cong\delta^{j'}\cong\delta_{i'}\cong\delta^j,\quad
\mbox{when} \ K=0,\label{K=0}\\
&&\delta_i(j\ j')-\delta^{j'}(i\ i')\cong
b_{i'}-b_i+a_j-a_{j'},\quad \mbox{when} \ K=2.\label{K=2}
\end{eqnarray}
Step 2. Since the dimension $n\ge 4$, we may choose three
different indices $i_1, i_2, i_3$ and substitute $\{i_1, i_2\}$, $\{i_2,
i_3\}$, $\{i_1, i_3\}$ for $\{i, i'\}$ in Eq.(\ref{K}). This
leads to the conclusion that $K$ is independent of the indices
$i,i',j$ and $j'$.
These are the rules for the differences between $\delta_i(jj')$ and
$\delta_{i'}(jj')$ and for the differences between $\delta^j(ii')$ and
$\delta^{j'}(ii')$.
Step 3. Now let us study the differences between
$\delta_i(j_1j_2)$ and $\delta_{i}(j_3j_4)$. Consider different
indices $j_1,\ j_2,\ j_3$. We have from Eq.(\ref{L/L-4})
\begin{eqnarray*}
&&\frac{L(_b^a|z)_{i}^{j_1}}{L(_b^a|z)_{i}^{j_2}}
\frac{L(_b^a|z)_{i}^{j_2}}{L(_b^a|z)_{i}^{j_3}}
\sim\frac{\sigma(z+\delta_i(j_1j_2)+b_i-a_{j_1})}
{\sigma(z+\delta_i(j_1j_2)+b_i-a_{j_2})}
\frac{\sigma(z+\delta_i(j_2j_3)+b_i-a_{j_2})}
{\sigma(z+\delta_i(j_2j_3)+b_i-a_{j_3})}.\\
&&\mbox{LHS}=\frac{L(_b^a|z)_{i}^{j_1}}{L(_b^a|z)_{i}^{j_3}}\sim
\frac{\sigma(z+\delta_i(j_1j_3)+b_i-a_{j_1})}
{\sigma(z+\delta_i(j_1j_3)+b_i-a_{j_3})}.
\end{eqnarray*}
This implies
\begin{eqnarray*}
&&\frac{\sigma(z+\delta_i(j_1j_2)+b_i-a_{j_1})}
{\sigma(z+\delta_i(j_1j_2)+b_i-a_{j_2})}
\frac{\sigma(z+\delta_i(j_2j_3)+b_i-a_{j_2})}
{\sigma(z+\delta_i(j_2j_3)+b_i-a_{j_3})}
\frac{\sigma(z+\delta_i(j_1j_3)+b_i-a_{j_3})}
{\sigma(z+\delta_i(j_1j_3)+b_i-a_{j_1})}\\
&&\equiv \frac{(1)\ (2)\ (3)}{(1')\ (2')\ (3')}\sim 1.
\end{eqnarray*}
From this equation, we obtain
\begin{equation}
\delta_i(j_1j_2)-k(a_{j_1}+a_{j_2})\cong
\delta_i(j_2j_3)-k(a_{j_2}+a_{j_3})\cong
\delta_i(j_1j_3)-k(a_{j_1}+a_{j_3})\quad k=0,1,\label{del-j}
\end{equation}
where $k=k(ij_1j_2j_3)$.
Step 4. Consider unequal $j_a, j_b, j_c, j_d$. Substituting
$\{j_a,j_b,j_c\},\{j_b,j_c,j_d\},\{j_a,j_c,j_d\}$ for
$\{j_1,j_2,j_3\}$ in Eq.(\ref{del-j}), we can show that $k$ is
also independent of the indices.
Therefore, from Eq.(\ref{K}) and Eq.(\ref{del-j}), we conclude
that one can always find a number $C$ independent of indices $i\
j\ j'$ such that
\begin{equation}
C\cong \delta_i(jj')-k_0(a_j+a_{j'})+Kb_i.\label{C}
\end{equation}
Similarly, we also find a number $D$ satisfying
\begin{equation}
D\cong \delta^j(ii')+k^0(b_i+b_{i'})-Ka_j,\label{D}
\end{equation}
where $D, K,\ k_0,\ k^0$ are independent of indices, and are fixed
for a given lattice point $(a,b)$.
Step 5. In the following, we discuss the cases $K=0$ and 2. For
$K=0$, one has from Eq.(\ref{K}), Eq.(\ref{K=0}), Eq.(\ref{C}) and
Eq.(\ref{D})
\begin{eqnarray*}
&&\delta_i(jj')\cong C+k_0(a_j+a_{j'})\cong \delta^j(ii')
\cong D-k^0(b_i+b_{i'})\\
&\Rightarrow& D-C=k_0(a_j+a_{j'})+k^0(b_i+b_{i'}).
\end{eqnarray*}
Thus $k_0$ and $k^0$ must be zero, since $C$ and $D$ are independent
of the indices. We have
\begin{equation}
\delta\cong C\cong D\cong\delta_i\cong\delta^j.\label{delj-deli}
\end{equation}
When $K=2$, from Eq.(\ref{K=2}) we can find a number $E$
satisfying
\begin{equation}
E\cong C\cong D \quad\mbox{and}\quad k_0=k^0=1.\label{E}
\end{equation}
Step 6. We next study the relations for $C,D,K,k_0,k^0$ between
the adjacent lattice points $(a,b)$ and $(a+\hat i', b+\hat i)$.
Eq.(\ref{L/L-1}) and Eq.(\ref{L/L-5}) intertwine two lattice
points. Notice that in Eqs.(\ref{L/L-1})-(\ref{L/L-4}) (or in
Eqs.(\ref{L/L-5})-(\ref{L/L-8})) the $\delta$'s are the same. By
using these equations, we can prove that $K, k_0,k^0$ are
unchanged for adjacent lattice points, while
\begin{eqnarray}
&& C(a+\hat i',b+\hat i)-C(a,b)= C'-C\cong
-k_0w(1-\frac{2}{n})+Kw(1-\frac{1}{n}),\label{C'-C}\\
&& D(a+\hat i',b+\hat i)-D(a,b)= D'-D\cong
k^0w(1-\frac{2}{n})-Kw(1-\frac{1}{n}).\label{D'-D}
\end{eqnarray}
These equations imply that Eq.(\ref{E}) cannot be realized at two
adjacent lattice points. Thus $K=2$ must be discarded.
According to the values of $K,\ k_0,\ k^0$, for a given $(a,b)$ the
elements of the $L$-matrix can take five forms.\\
(1). Form A(1). $K=1$, $k_0=k^0=0$. From Eq.(\ref{L/L-4}),
Eq.(\ref{C}) and Eq.(\ref{D}), we have
\begin{eqnarray*}
\frac{L(^a_b|z)^j_i}{L(^a_b|z)^0_i}\sim
\frac{\sigma(z+\delta_i(0j)+b_i-a_j)}{\sigma(z+\delta_i(0j)+b_i-a_0)}\sim
\frac{\sigma(z+C-b_i+b_i-a_j)}{\sigma(z+C-b_i+b_i-a_0)}
=\frac{\sigma(z+C-a_j)}{\sigma(z+C-a_0)},
\end{eqnarray*}
and from Eq.(\ref{L/L-8}), we have
\begin{eqnarray*}
\frac{L(^a_b|z)^0_i}{L(^a_b|z)^0_0}\sim
\frac{\sigma(z+\delta^0(i0)+b_i-a_0)}{\sigma(z+\delta^0(i0)+b_0-a_0)}\sim
\frac{\sigma(z+D+a_0+b_i-a_0)}{\sigma(z+D+a_0+b_0-a_0)}
=\frac{\sigma(z+D+b_i)}{\sigma(z+D+b_0)}.
\end{eqnarray*}
Therefore, we obtain
\begin{eqnarray}
L(_b^a|z)_i^j&\sim&\frac{\sigma(z+C-a_j)}{\sigma(z+C-a_0)}
\frac{\sigma(z+D+b_i)}{\sigma(z+D+b_0)} L(_b^a|z)_0^0\nonumber\\
&\sim&\sigma(z+C-a_j)\sigma(z+D+b_i)F(^a_b|z). \label{A1}
\end{eqnarray}
By using Eq.(\ref{L/L-9}), Eq.(\ref{C}) and Eq.(\ref{D}), we can
similarly derive the other forms as follows.\\
(2). Form A(2). $K=1,\ k_0=0,\ k^0=1$, we have
\begin{eqnarray}
L(_b^a|z)_i^j\sim
\frac{\sigma(z+C-a_j)}{\sigma(z+D-b_i)}F(^a_b|z).\label{A2}
\end{eqnarray}
(3). Form A(3). $K=1,\ k_0=1,\ k^0=1$, we have
\begin{eqnarray}
L(_b^a|z)_i^j\sim
\frac{1}{\sigma(z+C+a_j)\sigma(z+D-b_i)}F(^a_b|z). \label{A3}
\end{eqnarray}
(4). Form A(4). $K=1,\ k_0=1,\ k^0=0$, we have
\begin{eqnarray}
L(_b^a|z)_i^j\sim
\frac{\sigma(z+D+b_i)}{\sigma(z+C+a_j)}F(^a_b|z).\label{A4}
\end{eqnarray}
(5). Form B. $K=0,\ k_0=0,\ k^0=0$. From Eq.(\ref{L/L-9})
and Eq.(\ref{delj-deli}), one obtains
\begin{eqnarray}
L(_b^a|z)_i^j\sim
\sigma(z+\delta+b_i-a_j)F(^a_b|z).\label{B}
\end{eqnarray}
The relation of $F(z)$ between adjacent lattice points $(a,b)$ and
$(a',b')$ is discussed in Appendix A.
In conclusion, there can be at most five classes of $L$-matrices in the
whole lattice. Each of them is of the same form at all lattice points.
We must check whether these inductive relations are integrable in the
whole lattice, that is, whether going from $(a,b)$ to
$(a''=a+\hat i'+\hat j',b''=b+\hat i+\hat j)$ via different paths
yields the same $C'',\ D'',\ F''(z)$. The conclusion is
affirmative.
For $a\equiv (m_0,m_1,\cdots,m_{n-1})$, define $m\equiv \sum_i
m_i$. Then $m(a'=a+\hat i',b'=b+\hat i)=m(a,b)+1.$ We can express
the five forms as follows; they satisfy all relations between adjacent lattice points.\\
(1). Form A(1). Let $C=C_0+mw(1-1/n),\ D=D_0-mw(1-1/n)$. Then
\begin{equation}
L(^a_b|z)^l_k\sim\sigma(z+C_0+mw(1-\frac{1}{n})-a_l)
\sigma(z+D_0-mw(1-\frac{1}{n})+b_k)F_0(z)
\end{equation}
and $C_0,\ D_0,\ F_0(z)$ are unchanged in the whole lattice.\\
(2). Form A(2). Let
\begin{eqnarray*}
C&=&C_0+mw(1-\frac{1}{n}),\quad\quad D=D_0-m\frac{w}{n},\nonumber\\
F(z)&=&F_0(z)\prod_{j=0}^{n-1}\sigma(z+D_0-m\frac{w}{n}-b_j).
\end{eqnarray*}
We then have
\begin{eqnarray*}
\frac{F'(z)}{F(z)}=
\frac{\sigma(z+D_0-(m+1)\frac{w}{n}-b_i-w(1-\frac{1}{n}))}
{\sigma(z+D_0-m\frac{w}{n}-b_i)} =
\frac{\sigma(z+D-b_i-w)}{\sigma(z+D-b_i)}.
\end{eqnarray*}
Thus,
\begin{eqnarray}
L(^a_b|z)^l_k\sim\sigma(z+C_0+mw(1-\frac{1}{n})-a_l)\prod_{j(\ne k)}
\sigma(z+D_0-m\frac{w}{n}-b_j)F_0(z)
\end{eqnarray}
and $C_0,\ D_0,\ F_0(z)$ are unchanged in the whole lattice.\\
(3). Form A(3). Let
\begin{eqnarray*}
C&=&C_0+m\frac{w}{n},\quad\quad D=D_0-m\frac{w}{n},\\
F(z)&=&F_0(z)\prod_{j=0}^{n-1}\sigma(z+C_0+m\frac{w}{n}+a_j)
\sigma(z+D_0-m\frac{w}{n}-b_j).
\end{eqnarray*}
We then have
\begin{eqnarray*}
\frac{F'(z)}{F(z)}=
\frac{\sigma(z+C+a_{i'}+w)}{\sigma(z+C+a_{i'})}
\frac{\sigma(z+D-b_i-w)}{\sigma(z+D-b_i)}.
\end{eqnarray*}
Thus,
\begin{eqnarray}
L(^a_b|z)^l_k\sim\prod_{j(\ne l)}\sigma(z+C_0+m\frac{w}{n}+a_j)
\prod_{j(\ne k)}\sigma(z+D_0-m\frac{w}{n}-b_j)F_0(z)
\end{eqnarray}
and $C_0,\ D_0,\ F_0(z)$ are unchanged in the whole lattice.\\
(4). Form A(4). Let
\begin{eqnarray*}
C&=&C_0+m\frac{w}{n},\quad\quad D=D_0-mw(1-\frac{1}{n}),\\
F(z)&=&F_0(z)\prod_{j=0}^{n-1}\sigma(z+C_0+m\frac{w}{n}+a_j).
\end{eqnarray*}
We then have
\begin{eqnarray}
L(^a_b|z)^l_k\sim\sigma(z+D_0-mw(1-\frac{1}{n})+b_k)\prod_{j(\ne l)}
\sigma(z+C_0+m\frac{w}{n}+a_j)F_0(z)
\end{eqnarray}
and $C_0,\ D_0,\ F_0(z)$ are unchanged in the whole lattice.\\
(5). Form B.
\begin{equation}
L(^a_b|z)^l_k\sim\sigma(z+\delta_0+b_k-a_l)F_0(z)
\end{equation}
and $\delta_0, F_0(z)$ are unchanged in the whole lattice.
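As a mechanical consistency check (ours, not part of the derivation; the function and variable names are illustrative), the per-step shifts of $C$ and $D$ read off from forms A(1)-A(4) above can be compared with Eqs.(\ref{C'-C}) and (\ref{D'-D}):

```python
# A mechanical consistency check (ours; names are illustrative): the per-step
# shifts of C and D listed for forms A(1)-A(4) should agree with
# Eqs. (C'-C) and (D'-D) for the corresponding (K, k_0, k^0).
from fractions import Fraction

def shift_rule(K, k0, kup, n, w=1):
    """C' - C and D' - D according to Eqs. (C'-C) and (D'-D)."""
    dC = -k0 * w * (1 - Fraction(2, n)) + K * w * (1 - Fraction(1, n))
    dD = kup * w * (1 - Fraction(2, n)) - K * w * (1 - Fraction(1, n))
    return dC, dD

n, w = 5, 1  # arbitrary illustrative lattice rank and step
forms = {    # form name: ((K, k0, k^0), (C shift, D shift) read off above)
    "A(1)": ((1, 0, 0), (w * (1 - Fraction(1, n)), -w * (1 - Fraction(1, n)))),
    "A(2)": ((1, 0, 1), (w * (1 - Fraction(1, n)), -Fraction(w, n))),
    "A(3)": ((1, 1, 1), (Fraction(w, n), -Fraction(w, n))),
    "A(4)": ((1, 1, 0), (Fraction(w, n), -w * (1 - Fraction(1, n)))),
}
for name, (params, expected) in forms.items():
    assert shift_rule(*params, n) == expected, name
```

All four forms pass, confirming that the per-form shifts are exactly the specializations of Eqs.(\ref{C'-C}) and (\ref{D'-D}).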
Thus we can establish the $L$-matrix in the whole lattice,
provided we can properly choose the coefficients of the elements of the $L$-matrix.
We discuss this problem in the next section.
\section{The $z$-independent coefficients of the elements of the
$L$-matrix}
~~~~
In this section, we study a sufficient condition on the $L$-matrices
for the DYBR and derive the equations satisfied by the $z$-independent
coefficients of the elements of the $L$-matrix.
As an example, we study form B, which will be useful later.
From Eq.(\ref{B}), the form-B $L$-matrix takes the
form
\begin{eqnarray}
&&
L(^a_b|z)^j_i=(^a_b)^j_i\sigma(z+\delta+b_i-a_j)F(z),\\
&&L(^{a+\hat i'}_{b+\hat i}|z)^{j'}_j=(^{a+\hat i'}_{b+\hat i})^{j'}_j
\sigma(z+\delta+b'_j-a'_{j'})F(z).
\end{eqnarray}
Then, substituting the above equations and Eq.(\ref{R}) for
the $R$-matrix into the DYBR Eq.(\ref{DYBR}), and noticing that
$$ a'_{j'}=a_{j'}+w(\delta_{i'j'}-{1\over n}),\quad
b'_j=b_j+w(\delta_{ij}-{1\over n}) \quad
(\mbox{for }a'=a+\hat i',\ b'=b+\hat i),
$$
we obtain the equations for the coefficients:
\begin{eqnarray}
& & \left(\begin{array}{l}a\\b\end{array}\right)^{i'}_i
\left(\begin{array}{l} a+\hat i' \\ b+\hat i \end{array}\right)^{i'}_i=
\left(\begin{array}{l}a\\b\end{array}\right)^{i'}_i
\left(\begin{array}{l} a+\hat i' \\ b+\hat i
\end{array}\right)^{i'}_i, \label{B1}
\end{eqnarray}
which is trivially satisfied, and
\begin{eqnarray}
(^a_b)^{i'}_i(^{a+\hat i'}_{b+\hat i})^{j'}_i
-\frac{\sigma(a_{i'j'}-w)}{\sigma(a_{i'j'}+w)}
(^a_b)^{j'}_i(^{a+\hat j'}_{b+\hat i})^{i'}_i=0 \quad (i'\neq j'),\label{sig-B1}
\end{eqnarray}
\begin{eqnarray}
(^a_b)^{i'}_i(^{a+\hat i'}_{b+\hat i})^{i'}_j
-(^a_b)^{i'}_j(^{a+\hat i'}_{b+\hat j})^{i'}_i=0\quad (i\neq j),
\label{sig-B2}
\end{eqnarray}
\begin{eqnarray}
&&(^a_b)^{i'}_j(^{a+\hat i'}_{b+\hat j})^{j'}_i
\sigma(a_{i'j'}+b_{ij})\sigma(w)
+ (^a_b)^{i'}_i(^{a+\hat i'}_{b+\hat i})^{j'}_j
\sigma(b_{ij}-w)\sigma(a_{i'j'}) \nonumber \\
&&-\;(^a_b)^{j'}_j(^{a+\hat j'}_{b+\hat j})^{i'}_i
\sigma(a_{i'j'}-w)\sigma(b_{ij})=0 \quad (i\neq j,\ i'\neq j'), \label{sig-B3}
\end{eqnarray}
respectively. In the derivation, we have used the addition
formula
\begin{eqnarray}
&& \sigma(u+x)\sigma(u-x)\sigma(v+y)\sigma(v-y)
-\sigma(u+y)\sigma(u-y)\sigma(v+x)\sigma(v-x)\nonumber\\
&& \;\;=\sigma(u+v)\sigma(u-v)\sigma(x+y)\sigma(x-y).\label{add}
\end{eqnarray}
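The addition formula Eq.(\ref{add}) can be verified numerically. Since the sums of the squares of the arguments agree term by term across the three products, the quadratic exponential factor relating the Weierstrass $\sigma$-function to the Jacobi theta function $\theta_1$ cancels, so the identity may be tested with $\theta_1$ in place of $\sigma$. The following sketch (our own check, with illustrative names and test points) does this in double precision:

```python
# Numerical check of the addition formula: since the sums of squares of the
# arguments agree term by term, the quadratic exponential relating the
# Weierstrass sigma-function to the Jacobi theta function theta_1 cancels,
# so the identity can be tested with theta_1 in place of sigma.
import cmath

def theta1(z, q=0.05 + 0.02j, terms=40):
    """theta_1(z, q) = 2 * sum_{k>=0} (-1)^k q^{(k+1/2)^2} sin((2k+1) z)."""
    return 2 * sum((-1) ** k * q ** ((k + 0.5) ** 2) * cmath.sin((2 * k + 1) * z)
                   for k in range(terms))

# Generic complex test points (arbitrary illustrative values).
u, v, x, y = 0.31 + 0.11j, -0.42 + 0.07j, 0.23 - 0.18j, 0.57 + 0.29j
lhs = (theta1(u + x) * theta1(u - x) * theta1(v + y) * theta1(v - y)
       - theta1(u + y) * theta1(u - y) * theta1(v + x) * theta1(v - x))
rhs = theta1(u + v) * theta1(u - v) * theta1(x + y) * theta1(x - y)
assert abs(lhs - rhs) < 1e-10
```

The rapidly convergent $\theta_1$ series makes the check accurate to essentially machine precision.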
Define
\begin{eqnarray}
&&(^a_b)^{i'}_i\times \prod_{l(\ne i')}\sigma(a_l-a_{i'})=[^a_b]^{i'}_i,
\nonumber\\
&&[^a_b]^{i'}_i[^{a+\hat i'}_{b+\hat i}]^{j'}_j=Y^{i'j'}_{i\;j}.
\end{eqnarray}
Then, for form B, we rewrite Eqs.(\ref{sig-B1})-(\ref{sig-B3})
as
\begin{eqnarray}
&&Y^{i'j'}_{i\;i}-Y^{j'i'}_{i\;i}=0 \quad (i'\ne j'),
\label{coff-B1}\\
&&Y^{i'i'}_{i\;j}-Y^{i'i'}_{j\;i}=0 \quad(i\ne j),
\label{coff-B2}\\
&&\sigma(w)\sigma(a_{i'j'}+b_{ij})Y^{i'j'}_{j\;i}
+\sigma(a_{i'j'})\sigma(b_{ij}-w)Y^{i'j'}_{i\;j}\nonumber\\
&&-\ \sigma(a_{i'j'}+w)\sigma(b_{ij})Y^{j'i'}_{j\;i}=0 \quad (i\ne
j,\ i'\ne j').
\label{coff-B3}
\end{eqnarray}
By the same procedure, one can show that all A forms (forms
A(1)-A(4)) share the common coefficient relations
\begin{eqnarray}
&&Y^{i'j'}_{i\ i}-\frac{\sigma(a_{i'j'}-w)}
{\sigma(a_{i'j'}+w)}Y^{j'i'}_{i\ i}=0
\quad (i'\ne j'),
\label{coff-A1}\\
&&Y^{i'i'}_{i\ j}-Y^{i'i'}_{j\ i}=0 \quad (i\ne j),
\label{coff-A2}\\
&&Y^{i'j'}_{j\ i}=Y^{i'j'}_{i\ j}=\frac{\sigma(a_{i'j'}-w)}
{\sigma(a_{i'j'}+w)}Y^{j'i'}_{j\ i} \quad (i\ne j,\ i'\ne
j').
\label{coff-A3}
\end{eqnarray}
For the coefficients of form A(i) ($i=1,2,3,4$), we can easily find
the rule. Consider a function $G(a,b)$ on the lattice points
$(a=\sum_jm^a_j\hat j,b=\sum_im^b_i\hat i)$. Starting from the lattice
point $(a,b)$ and using the relation $G(a+\hat i',b+\hat
i)=G(a,b)[^a_b]^{i'}_i,$ we can construct the function on the
other lattice points. Because of
Eqs.(\ref{coff-A1})-(\ref{coff-A3}), we obtain the same $G(a+\hat
i'+\hat j',b+\hat i+\hat j)$ through different paths from $(a,b)$
to $(a+\hat i'+\hat j',b+\hat i+\hat j)$, so this procedure is
integrable. This implies that there exists a function $G(a,b)$
which determines $[^a_b]^{i'}_i$ via
\begin{equation}
[^a_b]^{i'}_i=G(a+\hat i',b+\hat i)/G(a,b).
\end{equation}
Hence, the problem of form A is solved completely. However, the rule
for the form-B coefficients is more complicated; we will
discuss it in the next section.
Obviously, under the gauge transformation
$$ [^a_b]^j_i\longrightarrow
\overline {[^a_b]}^j_i=[^a_b]^j_i\frac{g(a+\hat j,b+\hat
i)}{g(a,b)},$$ if $[^a_b]^j_i$ satisfies
Eqs.(\ref{coff-A1})-(\ref{coff-A3}), then $\overline {[^a_b]}^j_i$ also
satisfies these equations. In this sense, all form A coefficients
are gauge equivalent to a constant.
\section{The algebra for form (B) coefficients}
\subsection{The PBW basis of the algebra}
~~
In this section, we give the PBW basis of the algebra of the form (B)
coefficients. The main result is Theorem 1. We also give the center of
this algebra. Eqs.(\ref{coff-B1})-(\ref{coff-B3}) can
be regarded as the algebraic relations
satisfied by the operators on the lattice
$(a=\sum_{j=0}^{n-1}m^a_j\hat j, b=\sum_{i=0}^{n-1}m^b_i\hat i)$.
We define a new operator
\begin{equation}
A^{i'}_i\equiv [^a_b]^{i'}_i\Gamma^{i'}_i,
\end{equation}
where
\begin{equation}
\Gamma^{i'}_if(a,b)=f(a+\hat i',b+\hat i)\Gamma^{i'}_i.
\end{equation}
Namely, we regard $a,b$ as operators, so $\Gamma^{i'}_i$ does not
commute with functions of $a,b$. In this way, we have the
following exchange relations for the operators $\{A^{i'}_i\}$:
\begin{eqnarray}
(a)& &A^{i'}_iA^{j'}_i=A^{j'}_iA^{i'}_i\quad\quad (i'\ne j'), \nonumber\\
(b)& &\sigma(a_{i'j'}+w)\sigma(b_{ij})A^{j'}_jA^{i'}_i \nonumber\\
&=&\sigma(a_{i'j'})\sigma(b_{ij}-w)A^{i'}_iA^{j'}_j
+\sigma(w)\sigma(a_{i'j'}+b_{ij})A^{i'}_jA^{j'}_i
\ (i\ne i',\ j\ne j'), \label{coff-A}\\
(c)& &A^{i'}_iA^{i'}_j=A^{i'}_jA^{i'}_i \quad\quad (i\ne
j).\nonumber
\end{eqnarray}
These equations are equivalent to the relations of Felder and
Varchenko's elliptic quantum algebra under a special condition. It
is worth noting that in Eq.(\ref{coff-A}b), the coefficients
should be regarded as functions of operators, and they do not
commute with $A^j_i$. These equations are independent of the
parameter $z$. This situation is similar to the relation between
the Sklyanin algebra[21-25] and the YBR of the Belavin
model[26-28]. In form, Eq.(\ref{coff-A}b) is also similar
to the function $R$-matrices given by Shibukawa and
Ueno\cite{shibu}.
Using rules (a) and (b) of Eq.(\ref{coff-A}), we can exchange the
order of the up-indices of a pair of operators $A^{j'}_iA^{i'}_j$,
so $A^{j'}_iA^{i'}_j$ can be written as a linear combination of
$A^{i'}_iA^{j'}_j$ and $A^{i'}_jA^{j'}_i$. Therefore, we can write
the product of three operators $A^{i'}_iA^{j'}_jA^{k'}_k$ as a
linear combination of $A^{k'}_\cdot A^{j'}_\cdot A^{i'}_\cdot$.
This procedure can be done in two different ways. By using
Eqs.(\ref{coff-B1})-(\ref{coff-B3}), we can show
that if, according to the rules Eq.(\ref{coff-A}a) and
Eq.(\ref{coff-A}b) (which we abbreviate as (ab)), the product
of three operators $A^{i'}_iA^{j'}_jA^{k'}_k$ is changed to a
linear combination of $A^{k'}_\cdot A^{j'}_\cdot A^{i'}_\cdot$ by
two different paths, the results are equal. The paths are as
follows:
\begin{eqnarray*}
(A)&& i'j'k'\longrightarrow i'k'j'\longrightarrow k'i'j' \longrightarrow
k'j'i',\\
(B)&& i'j'k'\longrightarrow j'i'k' \longrightarrow j'k'i' \longrightarrow
k'j'i'.
\end{eqnarray*}
In the above transformation, we require that the (ab)
transformation on two adjacent operators with the same up-indices does not
change their order, namely,
$A^{i'}_iA^{i'}_j\Rightarrow A^{i'}_iA^{i'}_j$; and we
assume that the associative and distributive laws hold in the
transformation.
Furthermore, if we take into account rule Eq.(\ref{coff-A}c), the
linear expansions of the operator products $A^{i'}_iA^{j'}_jA^{j'}_k$
and $A^{i'}_iA^{j'}_kA^{j'}_j$ in $A^{j'}_\cdot A^{j'}_\cdot
A^{i'}_\cdot$ via the (ab) transformation are equal. Therefore,
we also call this fact the Yang-Baxter equation (YBE).
Similarly, after $A^{j'}_iA^{j'}_jA^{i'}_k$ and
$A^{j'}_jA^{j'}_iA^{i'}_k$ are changed to linear combinations of
$A^{i'}_\cdot A^{j'}_\cdot A^{j'}_\cdot $ by (ab), the two
expansions are equal by rule Eq.(\ref{coff-A}c).
For the coefficient algebra (or Yang-Baxter algebra) discussed
above, we now give a PBW basis. We
first give some definitions needed to establish the basis.
{\bf Definition 1: Bunch.} {\sl A bunch is a polynomial (or
monomial) of operator $A$'s, in which all terms have the same
number of $A$'s and the up-indices of $A$'s in all terms are
arranged in the same way.}
{\bf Example:} $$ B=\sum_{i_1i_2i_3i_4}C_{i_1i_2i_3i_4}
A^{j_1}_{i_1}A^{j_2}_{i_2}A^{j_3}_{i_3}A^{j_4}_{i_4}$$
is a bunch. A monomial is always a bunch.
{\bf Definition 2: Inverse order number.} {\sl For any two integers
$i',j'$ in a given order, the inverse order number is $1$
if $i'>j'$ and $0$ if $i'\le j'$. The inverse order number of a
successive product $A^{i'}_\cdot A^{j'}_\cdot A^{k'}_\cdot\cdots$
is the sum of the inverse order numbers of all pairs of up-indices.}
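Definition 2 can be transcribed directly (the helper name below is ours). Note that the word of up-indices $(446522)$ used in the example of Appendix B has inverse order number $9$, matching the nine $Q$-exchanges in each path there:

```python
# A direct transcription of Definition 2 (the helper name is ours).
def inverse_order_number(up_indices):
    """Count pairs (p, q), p < q, with up_indices[p] > up_indices[q]."""
    return sum(1
               for p in range(len(up_indices))
               for q in range(p + 1, len(up_indices))
               if up_indices[p] > up_indices[q])

# The word of up-indices from the example of Appendix B:
assert inverse_order_number([4, 4, 6, 5, 2, 2]) == 9
# A normal order product has inverse order number zero:
assert inverse_order_number([2, 2, 4, 4, 5, 6]) == 0
```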
{\bf Definition 3: Normal order product}. {\sl An (ab) normal
order product is a successive product of operators in which the
up-indices are arranged in nondecreasing order from left to
right, while the arrangement of the
down-indices can be arbitrary. An (abc) normal order product is
one in which the up-indices are arranged in nondecreasing order
and the down-indices of the operators with the same up-indices are
also arranged in nondecreasing order. The inverse order numbers of
both are zero.}
{\bf Example:} $A^1_2A^1_1A^2_1A^2_3A^3_5A^4_1A^5_3A^5_1$ is an
(ab) normal order product but is not an (abc) normal order
product. By using the rule Eq.(\ref{coff-A}c), we can change it
to the (abc) normal order product
$A^1_1A^1_2A^2_1A^2_3A^3_5A^4_1A^5_1A^5_3$.
{\bf Definition 4: Normal order expansion}. {\sl The (ab) normal
order expansion of a polynomial of $A$'s is a procedure in which
we change each term of the polynomial into a bunch of (ab) normal
order products by using only rules Eq.(\ref{coff-A}a) and
Eq.(\ref{coff-A}b). We also call the final resulting polynomial
the (ab) normal order expansion of the original polynomial.
The (abc) normal order expansion is a procedure in which we first
perform the (ab) normal order expansion and then rearrange each
term of the resulting polynomial into an (abc) normal order product
by using rule Eq.(\ref{coff-A}c). We also call the final result
the (abc) normal order expansion of the original polynomial.}
We then have the following theorem.
{\bf Theorem 1:} {\sl Transforming a polynomial of operators
$A^j_i$ by using the rules (abc) of Eq.(\ref{coff-A}) does not
change its (abc) normal order expansion. }\\
It is worth noting that the coefficients of the expansions are
functions of the parameters $\{a,b\}$; they do not commute with
the operators $A^j_i$.
The detailed proof of the theorem is given in Appendix B.
{\bf Corollary:} {\sl The (abc) normal order products are linearly
independent.}
{\bf Proof:} Suppose there were a linear relation $\sum_i C_ig_i=0$
with the $C_i$ not all zero, where the $g_i$ are (abc) normal order
products. The left-hand side must be reducible
to zero via Eq.(\ref{coff-A}). However, no such
operation changes the (abc) expansion, a
contradiction. $\quad\quad {\bf \Delta}$
Thus the set of all (abc) normal order products is a PBW basis of
the algebra defined by Eq.(\ref{coff-A}).
\subsection{The center of the algebra}
~~ By a standard procedure, we may obtain the center of the elliptic
quantum group (the details will be given elsewhere):
$$ I=\frac{\Delta(a)}{\Delta(b)}\mbox{Det}\ L(^a_b|z),$$
where $\Delta(a)=\prod_{i<j}\sigma(a_{ij}),\
\Delta(b)=\prod_{i<j}\sigma(b_{ij})$,
\begin{eqnarray*}
&&\mbox{Det}\ L(^a_b|z)=\sum_P
(-1)^{\left[\mbox{Sign}P(^{0\ 1\ \cdots\
n-1}_{\mu_0\mu_1\cdots\mu_{n-1}})\right]}\\
&&\times\ L(^a_b|z)^0_{\mu_0}L(^{a+\hat 0}_{b+\hat\mu_0}|z+w)^1_{\mu_1}
\cdots L(^{a+\hat 0+\hat 1+\cdots+\hat{n-2}}
_{b+\hat\mu_0+\hat\mu_1+\cdots+\hat\mu_{n-2}}|z+(n-1)w)
^{n-1}_{\mu_{n-1}},
\end{eqnarray*}
and $P$'s are permutations of integers $0,\ 1,\ \cdots,\ n-1$.
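The combinatorial skeleton of $\mbox{Det}\ L$ can be sketched as follows (our own illustration; the operator-valued factors and the cumulative lattice shifts of $(a,b)$ are left symbolic):

```python
# The combinatorial skeleton of Det L (ours; operator factors left symbolic):
# one term per permutation (mu_0, ..., mu_{n-1}), with sign (-1)^P, where the
# j-th factor carries up-index j, down-index mu_j, and spectral shift j*w.
# The cumulative lattice shifts of (a, b) are omitted here.
from itertools import permutations

def parity(perm):
    """(-1)^{number of inversions} of a permutation of 0..n-1."""
    inv = sum(1 for p in range(len(perm)) for q in range(p + 1, len(perm))
              if perm[p] > perm[q])
    return (-1) ** inv

def det_terms(n):
    """Yield (sign, [(up-index j, down-index mu_j, shift multiple j)])."""
    for mu in permutations(range(n)):
        yield parity(mu), [(j, mu[j], j) for j in range(n)]

terms = list(det_terms(3))
assert len(terms) == 6                      # n! terms
assert sum(sign for sign, _ in terms) == 0  # equally many even and odd P's
```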
This agrees with that of Ref.\cite{fe2} for $n=2$.
In the case of
$$L(^a_b|z)^{i'}_{i}=\sigma(z+\delta+b_i-a_{i'})A^{i'}_i,$$
the quantum determinant can be written as
\begin{eqnarray*}
&&I(^a_b|z)=\sum_P(-1)^{\left[\mbox{Sign} P(^{0\ 1\ \cdots\
n-1}_{\mu_0\mu_1\cdots\mu_{n-1}})\right]}\\
&&\times\ \sigma(z+\delta+b_{\mu_0}-a_0)
\sigma(z+w+\delta+b_{\mu_1}-a_1)\cdots \\
&&\times\ \sigma(z+(n-1)w+\delta+b_{\mu_{n-1}}-a_{n-1})
A^0_{\mu_0}A^1_{\mu_1}\cdots A^{n-1}_{\mu_{n-1}}.
\end{eqnarray*}
It is easy to check that
$$\Phi(z)_{\mu_0\cdots\mu_{n-1}}\equiv \sigma(z+\delta+b_{\mu_0}-a_0)
\cdots \sigma(z+(n-1)w+\delta+b_{\mu_{n-1}}-a_{n-1}) $$
is quasi-doubly periodic with periods $1$ and $\tau$:
\begin{eqnarray*}
&&\Phi(z+1)=(-1)^n\Phi(z),\\
&&\Phi(z+\tau)\\
&=&\exp\left[-2\pi i\left(\frac{n\tau}{2}+n\delta+nz+\frac{n(n-1)}{2}w
+\frac{n}{2}+\sum_ib_{\mu_i}-\sum_ia_i\right)\right]\Phi(z) \\
&=&\exp\left[-2\pi i\left(\frac{n\tau}{2}+n\delta+nz+\frac{n(n-1)}{2}w
+\frac{n}{2}\right)\right]\Phi(z)
\end{eqnarray*}
for every permutation $(\mu_0,\cdots,\mu_{n-1})$ of
$(0,1,\cdots,n-1)$. Therefore, by a theorem on such functions (see D.
Mumford, Tata Lectures on Theta, Birkh\"auser, 1983), we have
\begin{equation}
\Phi(z)_{\mu_0,\cdots,\mu_{n-1}}=
\sum_{i=0}^{n-1}C^i_{\mu_0,\cdots,\mu_{n-1}}f_i(z),
\end{equation}
where $\{f_i(z)\}$ are basis functions of the space of such
quasi-doubly-periodic functions. For example, we may choose
$$f_i(z)=\theta\left[^{\frac{1}{2}-\frac{i}{n}}_{\ \ \frac{1}{2}}\right]
\left(nz+n\delta+\frac{n(n-1)w}{2}+\frac{n-1}{2},\ n\tau\right).$$
One can obtain $C^i_{\mu_0,\cdots,\mu_{n-1}}$ by choosing
$n$ points $z_1,\cdots,z_n$ in the above equation and solving a set of $n$
linear equations. We then obtain the $n$ central elements of the algebra.
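The quasi-periodicity $\Phi(z+1)=(-1)^n\Phi(z)$ can be checked numerically. As a model of $\sigma$ we use $z\mapsto\theta_1(\pi z,q)$, which has the assumed period-1 sign behaviour exactly; the lattice data in this sketch are arbitrary illustrative values and the helper names are ours:

```python
# Numerical check (ours) of Phi(z + 1) = (-1)^n Phi(z).  We model sigma by
# z -> theta_1(pi z, q), which has the period-1 sign behaviour assumed in the
# text exactly; the lattice data are arbitrary illustrative values.
import cmath

def sig(z, q=0.04, terms=30):
    """theta_1(pi z, q); satisfies sig(z + 1) = -sig(z)."""
    return 2 * sum((-1) ** k * q ** ((k + 0.5) ** 2)
                   * cmath.sin((2 * k + 1) * cmath.pi * z) for k in range(terms))

def Phi(z, mu, a, b, delta, w):
    prod = 1
    for j in range(len(mu)):
        prod *= sig(z + j * w + delta + b[mu[j]] - a[j])
    return prod

n, w, delta = 3, 0.07, 0.11
a, b = [0.13, 0.29, 0.41], [0.05, 0.19, 0.37]
mu = (2, 0, 1)                  # a permutation of (0, 1, 2)
z = 0.23 + 0.06j
assert abs(Phi(z + 1, mu, a, b, delta, w)
           - (-1) ** n * Phi(z, mu, a, b, delta, w)) < 1e-9
```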
From
\begin{eqnarray*}
&&I(^a_b|z)\\
&=&\sum_i f_i(z)\left\{
\sum_P (-1)^{\left[\mbox{Sign}P(^{0\ 1\ \cdots\
n-1}_{\mu_0\mu_1\cdots\mu_{n-1}})\right]}C^i_{\mu_0,\cdots,\mu_{n-1}}
A^0_{\mu_0}A^1_{\mu_1}\cdots A^{n-1}_{\mu_{n-1}}\right\}\\
&\equiv& \sum_i f_i(z)J_i,
\end{eqnarray*}
we see that the $[\Delta(a)/\Delta(b)]J_i$ are central elements of
the algebra.
\section{A known solution for the form B coefficients}
~~
The equations for the form B coefficients
(Eqs.(\ref{coff-B1})-(\ref{coff-B3})) seem simple, but they
interrelate the values of the coefficients $[^a_b]^{i'}_i$ at
different lattice points. To the best of our knowledge, the only
known analytic solution is
\begin{equation}
[^a_b]^{i'}_i=\prod_{j(\ne i')}\sigma(\delta'+b_i-a_{j}),
\label{solution}
\end{equation}
which can be derived by the factorized operator of Eq.(\ref{factor})
\begin{eqnarray*}
L_\delta(^a_b|z)^{i'}_i
&=&\sigma(z+\delta+b_i-a_{i'}) \prod_{j(\ne i')}
\frac{\sigma(\delta'+b_i-a_{j})}{\sigma(a_{i'}-a_j)}\\
&\equiv&(-1)^{n-1}\sigma(z+\delta+b_i-a_{i'})(^a_b)^{i'}_i
\end{eqnarray*}
and
$$ [^a_b]^{i'}_i=(^a_b)^{i'}_i\prod_{j(\ne i')}\sigma(a_j-a_{i'}).$$
The corresponding $Y^{i'j'}_{i\;j}$ is,
\begin{eqnarray}
Y^{i'j'}_{i\;j}&=&[^a_b]^{i'}_i[^{a+\hat i'}_{b+\hat i}]^{j'}_j\nonumber\\
&=&\prod_{l(\ne i')}\sigma(\delta'+b_i-a_l)\prod_{m(\ne j')}
\sigma(\delta'+b'_j-a'_m).
\end{eqnarray}
By using the addition formula Eq.(\ref{add}), we can check that
the solution satisfies Eqs.(\ref{coff-B1})-(\ref{coff-B3})
directly.
This solution can be
proved to be equivalent to the result obtained by the
symmetry fusion method for the $A^{(1)}_{n-1}$ model in
Ref.\cite{jim3}. It is also equivalent to the evaluation
modules ($n=2$) obtained by Felder and Varchenko in
Ref.\cite{fe2}.
Eq.(\ref{solution}) is the only known solution for the form B
coefficients. Whether other analytic solutions exist remains an
open question worth studying.
\section*{Appendix A $\quad$ The relation of $F(z)$ between adjacent lattice points}
\setcounter{equation}{0}
\renewcommand{\theequation}{A.\arabic{equation}}
~~ Suppose we go from $(a,b)$ to $(a+\hat i',b+\hat i)$, then we
have $a'_j=a_j+w(\delta_{i'j}-{1\over n}),$
$b'_j=b_j+w(\delta_{ij}-{1\over n})$. From Eq.(\ref{C'-C}) and
(\ref{D'-D}), we may choose
\begin{eqnarray}
C'-C&=&-k_0w(1-\frac{2}{n})+Kw(1-\frac{1}{n}),\label{B.1}\\
D'-D&=&k^0w(1-\frac{2}{n})-Kw(1-\frac{1}{n}) \label{B.2}
\end{eqnarray}
without loss of generality. These are the explicit relations of $C,\
D\ (\delta,\ E)$ between adjacent lattice points for each form of
$L$-matrix. From Eq.(\ref{L/L-1})
$$ \frac{L(^{a+\hat i'}_{b+\hat i}|z)^{j'}_i}
{L(^{a}_{b}|z)^{i'}_i}\sim
\frac{\sigma(z+\delta_{i}(i'j')+b_{i}-a_{j'}+w)}
{\sigma(z+\delta_{i}(i'j')+b_{i}-a_{i'})}$$
and Eq.(\ref{C})
$$ \delta_{i}(i'j')\cong
C-Kb_i+k_0(a_{i'}+a_{j'}),$$ we have
\begin{equation}
\frac{L(^{a+\hat i'}_{b+\hat i}|z)^{j'}_i}
{L(^{a}_{b}|z)^{i'}_i}\sim
\frac{\sigma(z+C+(1-K)b_i+k_0a_{i'}+(k_0-1)a_{j'}+w)}
{\sigma(z+C+(1-K)b_i+(k_0-1)a_{i'}+k_0a_{j'})}. \label{B.3}
\end{equation}
The relation between $F(z)$ and $F'(z)$ (the new function at the lattice
point $(a',b')$) can be obtained by substituting the explicit
expressions of the five forms of $L$-matrices (Eqs.(\ref{A1})-(\ref{B})) into
Eq.(\ref{B.3}). For example, we
study the A(1) form.\\
(1) $A(1)\stackrel{i\ i'}{\longrightarrow}A(1)$, $K=1,\ k_0=k^0=0$.
From Eq.(\ref{B.1}) and Eq.(\ref{B.2}), one has
\begin{equation} C'= C+w(1-\frac{1}{n}),\quad
D'= D-w(1-\frac{1}{n}).
\end{equation}
Then Eq.(\ref{A1}) and Eq.(\ref{B.3}) yield
\begin{eqnarray}
&&\frac{L(^{a'}_{b'}|z)^{j'}_i}{L(^{a}_{b}|z)^{i'}_i}\sim
\frac{\sigma(z+C'-a'_{j'})\sigma(z+D'+b'_{i})F'(z)}
{\sigma(z+C-a_{i'})\sigma(z+D+b_{i})F(z)}\nonumber\\
&&\quad\quad\quad\quad \sim
\frac{\sigma(z+C-a_{j'}+w)\sigma(z+D+b_{i})F'(z)}
{\sigma(z+C-a_{i'})\sigma(z+D+b_{i})F(z)} \nonumber\\
&&\quad\quad\quad\quad \sim
\frac{\sigma(z+C-a_{j'}+w)}{\sigma(z+C-a_{i'})}\nonumber\\
&&\Rightarrow\frac{F'(z)}{F(z)}\sim 1.
\end{eqnarray}
Other A(i)'s are similar. We list them in the following.
(2) $A(2)\stackrel{i\ i'}{\longrightarrow}A(2)$, $K=1,\ k_0=0,\
k^0=1$.
Eq.(\ref{B.1}) and Eq.(\ref{B.2}) give
\begin{equation}
C'= C+w(1-\frac{1}{n}),\quad D'= D-\frac{w}{n}.
\end{equation}
From Eq.(\ref{A2}) and Eq.(\ref{B.3}), we have
\begin{eqnarray*}
\frac{F'(z)}{F(z)}\sim
\frac{\sigma(z+D-b_{i}-w)}{\sigma(z+D-b_{i})}.
\end{eqnarray*}\\
(3) $A(3)\stackrel{i\ i'}{\longrightarrow}A(3)$, $K=1,\
k_0=k^0=1$:
\begin{equation}
{F'(z)\over F(z)}\sim {\sigma(z+C+a_{i'}+w)\sigma(z+D-b_i-w)\over
\sigma(z+C+a_{i'})\sigma(z+D-b_i)}.
\end{equation}\\
(4) $A(4)\stackrel{i\ i'}{\longrightarrow}A(4)$, $K=1,\ k_0=1,\
k^0=0$:
\begin{equation}
{F'(z)\over F(z)}\sim {\sigma(z+C+a_{i'}+w)\over
\sigma(z+C+a_{i'})}.
\end{equation}\\
(5) $B\stackrel{i\ i'}{\longrightarrow}B$, $K=0,\ k_0=k^0=0$.
From Eq.(\ref{B.1}) and noting $\delta\cong C\cong D$ for this class, we have
$C'\cong D'\cong\delta'\cong C\cong D\cong\delta$. We may choose
$\delta'=\delta$ without loss of generality. Eq.(\ref{B}) and Eq.(\ref{B.3}) imply
\begin{eqnarray}
&&\frac{L(^{a'}_{b'}|z)^{j'}_i}{L(^{a}_{b}|z)^{i'}_i}\sim
\frac{\sigma(z+\delta'+b'_i-a'_{j'})F'(z)}
{\sigma(z+\delta+b_i-a_{i'})F(z)}\nonumber\\
&&\quad\quad\quad\quad =
\frac{\sigma(z+\delta+b_i-a_{j'}+w)F'(z)}
{\sigma(z+\delta+b_i-a_{i'})F(z)}\nonumber\\
&&\quad\quad\quad\quad \sim
\frac{\sigma(z+\delta+b_i-a_{j'}+w)}
{\sigma(z+\delta+b_i-a_{i'})}\nonumber\\
&&\Rightarrow \frac{F'(z)}{F(z)}\sim 1.
\end{eqnarray}
\section*{Appendix B\quad The proof of the theorem 1}
\setcounter{equation}{0}
\renewcommand{\theequation}{B.\arabic{equation}}
~~ To prove the theorem, we first prove the following lemma.
{\bf Lemma 1:} {\sl For any successive product of operators, if we
transform it by using Eq.(\ref{coff-A}a) and Eq.(\ref{coff-A}b)
such that at each step its inverse order number is reduced (two
adjacent up-indices are exchanged only when the left one is bigger
than the right one), then the final result of the (ab) normal order
expansion is unique.}
Here we assume that in this transformation two adjacent
operators with the same up-indices do not change their order, and
that at every step of the transformation the locations of the two
exchanged operators are the same in all terms of the linear
combination produced by the previous step.
{\bf Proof:} We can carry out the procedure along different paths. For
example, to obtain the (ab) normal order expansion of
${A^4_{\cdot}}{A^4_{\cdot}}{A^6_{\cdot}}{A^5_{\cdot}}{A^2_{\cdot}}{A^2_{\cdot}}$,
we may proceed along the following different paths:\\
(1). $A^4_\cdot A^4_\cdot A^6_\cdot A^5_\cdot A^2_\cdot
A^2_\cdot\equiv (446522)\stackrel{Q_{4,5}}{\longrightarrow}
(446252)\stackrel{Q_{3,4}}{\longrightarrow}
(442652)\stackrel{Q_{5,6}}{\longrightarrow}
(442625)\stackrel{Q_{4,5}}{\longrightarrow}
(442265)\stackrel{Q_{2,3}}{\longrightarrow}
(424265)\stackrel{Q_{3,4}}{\longrightarrow}
(422465)\stackrel{Q_{1,2}}{\longrightarrow}
(242465)\stackrel{Q_{2,3}}{\longrightarrow}
(224465)\stackrel{Q_{5,6}}{\longrightarrow}
(224456),$ \\
(2). $A^4_\cdot A^4_\cdot A^6_\cdot A^5_\cdot A^2_\cdot
A^2_\cdot\equiv (446522)\stackrel{Q_{3,4}}{\longrightarrow}
(445622)\stackrel{Q_{4,5}}{\longrightarrow}
(445262)\stackrel{Q_{3,4}}{\longrightarrow}
(442562)\stackrel{Q_{2,3}}{\longrightarrow}
(424562)\stackrel{Q_{1,2}}{\longrightarrow}
(244562)\stackrel{Q_{5,6}}{\longrightarrow}
(244526)\stackrel{Q_{4,5}}{\longrightarrow}
(244256)\stackrel{Q_{3,4}}{\longrightarrow}
(242456)\stackrel{Q_{2,3}}{\longrightarrow} (224456)$,\\
where $Q_{i,i+1}$ denotes the exchange of the $i$th operator $A$
and the $(i+1)$th operator $A$ by using the rules Eq.(\ref{coff-A}a) and
Eq.(\ref{coff-A}b). We may denote such a procedure by a product of a
set of exchange operators $\{Q_{i,i+1}\}$ acting on the bunch. For
path (1) in the example, we have
$$ Q_{5,6}Q_{2,3}Q_{1,2}Q_{3,4}Q_{2,3}Q_{4,5}Q_{5,6}Q_{3,4}Q_{4,5}
A^4_\cdot A^4_\cdot A^6_\cdot A^5_\cdot A^2_\cdot A^2_\cdot
=\sum\cdots A^2_\cdot A^2_\cdot A^4_\cdot A^4_\cdot A^5_\cdot
A^6_\cdot. $$ For the path (2), we have
$$ Q_{2,3}Q_{3,4}Q_{4,5}Q_{5,6}Q_{1,2}Q_{2,3}Q_{3,4}Q_{4,5}Q_{3,4}
A^4_\cdot A^4_\cdot A^6_\cdot A^5_\cdot A^2_\cdot A^2_\cdot
=\sum\cdots A^2_\cdot A^2_\cdot A^4_\cdot A^4_\cdot A^5_\cdot
A^6_\cdot. $$
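Both example paths can be checked mechanically. The following sketch (illustrative Python; the up-index sequences are written as plain integer lists and each $Q_{i,i+1}$ acts as an adjacent swap) confirms that both paths end at the same arrangement $(224456)$:

```python
# Each Q_{i,i+1} swaps the i-th and (i+1)-th entries (1-based, as in the text).
def apply_path(seq, path):
    """Apply a sequence of adjacent swaps Q_{i,i+1}, given by their i's."""
    s = list(seq)
    for i in path:
        s[i - 1], s[i] = s[i], s[i - 1]
    return s

start = [4, 4, 6, 5, 2, 2]                 # the bunch (446522)
path1 = [4, 3, 5, 4, 2, 3, 1, 2, 5]        # path (1) from the text
path2 = [3, 4, 3, 2, 1, 5, 4, 3, 2]        # path (2) from the text

assert apply_path(start, path1) == [2, 2, 4, 4, 5, 6]
assert apply_path(start, path2) == [2, 2, 4, 4, 5, 6]
```

This only tracks the rearrangement of the up-indices; the uniqueness of the coefficients generated by the rules (ab) is the content of the proof below.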
In the general case, a path of this procedure is denoted by
\begin{eqnarray}
Q_{i_1,i_1+1}Q_{i_2,i_2+1}\cdots Q_{i_s,i_s+1}
\left(A^{j_1}_{k_1}A^{j_2}_{k_2}\cdots A^{j_l}_{k_l}\right)
=\sum_{j'k'}c^{j_{t_1}\cdots
k'_1\cdots}_{j_1\cdots
k_1\cdots}A^{j_{t_1}}_{k'_1}A^{j_{t_2}}_{k'_2}\cdots
A^{j_{t_l}}_{k'_l} \label{Q-c}
\end{eqnarray}
with $j_{t_1}\leq j_{t_2}\leq\cdots\leq j_{t_l}$. Note that the
original arrangement $\{j_1j_2\cdots j_l\}$ and the final
arrangement $\{j_{t_1}j_{t_2}\cdots j_{t_l}\}$ are the same for
whichever path of the (ab) normal product expansion we choose.
Assume there is another path for (ab) normal product expansion
\begin{eqnarray}
Q_{i'_1,i'_1+1}Q_{i'_2,i'_2+1}\cdots Q_{i'_s,i'_s+1}
\left(A^{j_1}_{k_1}A^{j_2}_{k_2}\cdots
A^{j_l}_{k_l}\right)
=\sum_{j'k'}d^{j_{t_1}\cdots
k'_1\cdots}_{j_1\cdots
k_1\cdots}A^{j_{t_1}}_{k'_1}A^{j_{t_2}}_{k'_2}\cdots
A^{j_{t_l}}_{k'_l}. \label{Q-d}
\end{eqnarray}
Consider the corresponding two products of exchange operators in
the permutation group
$$ P^{(1)}=P_{i_1,i_1+1}P_{i_2,i_2+1}\cdots P_{i_s,i_s+1}$$
and
$$ P^{(2)}=P_{i'_1,i'_1+1}P_{i'_2,i'_2+1}\cdots P_{i'_s,i'_s+1}.$$
Both must be able to permute the arrangement $\{j_1\cdots
j_l\}$ into $\{j_{t_1}j_{t_2}\cdots j_{t_l}\}$. Although some of
the $j$'s may be equal, the permutation $\{^{1\ 2\cdots
l}_{t_1t_2\cdots t_l}\}$ is nevertheless unique; this follows from the
rule that we do not exchange adjacent operators with the same upper
indices. In the permutation group, an arbitrary element can be
expressed as a product of exchange operators in different ways;
however, any two such expressions can always be made equal step by
step using the following relations.
\begin{eqnarray}
&& P_{i,i+1}P_{i,i+1}=id,\label{P-1}\\
&& P_{i,i+1}P_{j,j+1}=P_{j,j+1}P_{i,i+1}\quad (i+1<j),\label{P-2}\\
&&
P_{i,i+1}P_{i+1,i+2}P_{i,i+1}=P_{i+1,i+2}P_{i,i+1}P_{i+1,i+2}.\label{P-3}
\end{eqnarray}
Thus $P^{(1)}$ can be changed to $P^{(2)}$ by using these
equations step by step.
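For concreteness, the relations Eqs.(\ref{P-1})-(\ref{P-3}) for adjacent transpositions can be verified directly; the following sketch (illustrative Python, with permutations stored as tuples) checks all three for a small symmetric group:

```python
# Check that adjacent transpositions P_{i,i+1} of S_n satisfy
# P^2 = id, commutation of disjoint swaps, and the braid relation.
def transposition(i, n):
    """P_{i,i+1} as a permutation of {0,...,n-1} (0-based i)."""
    p = list(range(n))
    p[i], p[i + 1] = p[i + 1], p[i]
    return tuple(p)

def compose(p, q):
    """(p o q)(x) = p(q(x))."""
    return tuple(p[q[x]] for x in range(len(p)))

n = 5
ident = tuple(range(n))
for i in range(n - 1):
    Pi = transposition(i, n)
    assert compose(Pi, Pi) == ident                # P_{i,i+1} P_{i,i+1} = id
    for j in range(i + 2, n - 1):
        Pj = transposition(j, n)
        assert compose(Pi, Pj) == compose(Pj, Pi)  # disjoint swaps commute
    if i < n - 2:
        Pk = transposition(i + 1, n)
        assert (compose(compose(Pi, Pk), Pi)
                == compose(compose(Pk, Pi), Pk))   # braid relation
```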
On the other hand, the $\{Q_{i,i+1}\}$ operators have the same
properties. We have checked
\begin{equation}
Q_{i,i+1}Q_{i,i+1}=id \label{Q-1}
\end{equation}
for two adjacent operators $A^{j_1}_{k_1}A^{j_2}_{k_2'}\ (j_1\neq
j_2)$; by the distributive law it is then also valid for all
bunches. We also have
\begin{equation}
Q_{i,i+1}Q_{j,j+1}=Q_{j,j+1}Q_{i,i+1}\quad (i+1<j) \label{Q-2}
\end{equation}
because of the distributive law. Finally, we have
\begin{equation}
Q_{i,i+1}Q_{i+1,i+2}Q_{i,i+1}=Q_{i+1,i+2}Q_{i,i+1}Q_{i+1,i+2}
\label{Q-3}
\end{equation}
due to the YBE for any polynomial
$A^{j_1}_{k_1}A^{j_2}_{k_2}A^{j_3}_{k_3}$ with different indices.
By the distributive law, this equation is also true for any bunch.
Therefore, we can also change
$Q^{(1)}=Q_{i_1,i_1+1}Q_{i_2,i_2+1}\cdots Q_{i_s,i_s+1}$ into
$Q^{(2)}=Q_{i'_1,i'_1+1}Q_{i'_2,i'_2+1}\cdots Q_{i'_s,i'_s+1}$ in
Eq.(\ref{Q-c}) and Eq.(\ref{Q-d}), respectively, by using
Eqs.(\ref{Q-1})-(\ref{Q-3}) step by step, since $P^{(1)}$ and
$P^{(2)}$ can be made equal in this way by using
Eqs.(\ref{P-1})-(\ref{P-3}). Thus we have
$c^{j_{t_1}\cdots k'_1\cdots}_{j_{1}\cdots k_1\cdots}
=d^{j_{t_1}\cdots k'_1\cdots}_{j_{1}\cdots k_1\cdots}.$
We then conclude that the (ab) normal order expansions resulting from
the two paths give the same result. Therefore, all paths give
the same result. $\quad\quad {\bf \Delta}$
Two corollaries then follow:
{\bf Corollary 1:} {\sl If in a successive product of
operators $CA^{i'}_iA^{j'}_jD$, where $C,D$ are themselves products of
operators, we obtain the combination $CA^{j'}_\cdot
A^{i'}_\cdot D$ (that is, $C(\alpha A^{j'}_iA^{i'}_j+\beta
A^{j'}_jA^{i'}_i)D$) by exchanging (with rule (ab) in
Eq.(\ref{coff-A})) two adjacent operators whose up-indices are
unequal, then the results of their (ab) normal order expansions are
the same, provided the procedure is done according to the rules
described in Lemma 1.}
{\bf Proof:} If $i'>j'$, we can regard this exchange as the
first step of the (ab) normal order expansion, which proves the
claim. If $i'<j'$, we can perform the (ab) normal order expansion of
$C(\alpha A^{j'}_iA^{i'}_j+\beta A^{j'}_jA^{i'}_i)D$
and take as its first step the exchange of
$A^{j'}_\cdot A^{i'}_\cdot$ into $A^{i'}_\cdot A^{j'}_\cdot$.
Using rule (ab), one checks that $i'j'\rightarrow
j'i'\rightarrow i'j'$ is the identity transformation. Hence, by the
distributive law, the (ab) normal order expansion of the bunch
$C(\alpha A^{j'}_iA^{i'}_j+\beta A^{j'}_jA^{i'}_i)D$ equals the (ab)
normal order expansion of $CA^{i'}_iA^{j'}_jD$, which proves the
corollary.\quad\quad ${\bf \Delta}$
{\bf Corollary 2:} {\sl With the rules Eq.(\ref{coff-A}a)
and Eq.(\ref{coff-A}b), if a polynomial (a linear combination of
products) of operators $C$ can be changed to $D$
$(C\stackrel{(ab)}{\rightarrow} D)$, then the (ab) normal order
expansions of $C$ and $D$ are the same, provided the expansion is done
according to the rules described in Lemma 1.}
{\bf Proof:} Each step of the transformation does not affect
the result of the expansion. \quad\quad${\bf \Delta}$
Thus Eq.(\ref{coff-A}a) and Eq.(\ref{coff-A}b) are compatible with the
(ab) normal order expansion and the (abc) normal order expansion.
Note that equal results of the (ab) normal order expansion give
equal results of the (abc) normal order expansion, so the above two
corollaries are also true for the (abc) normal order expansion.
Next, we prove the following lemma.
{\bf Lemma 2:} {\sl The (abc) normal order expansions of the bunch
$CA^{i'}_jA^{i'}_kD$ and the bunch $CA^{i'}_kA^{i'}_jD$ are the same.}
{\bf Proof:} We only need to prove this when they are monomials.
We prove the following propositions by mathematical
induction:\\
Proposition (i). The lemma is true when the inverse order number
is
zero.\\
Proposition (ii). If the lemma is true when the inverse order number is
smaller than $m$, it is also true when the inverse order number equals
$m$.
The first proposition is obvious, because in this case
$CA^{i'}_jA^{i'}_kD$ and $CA^{i'}_kA^{i'}_jD$ are both (ab) normal
order products. To obtain the (abc) normalization, we only need to
rearrange, by rule Eq.(\ref{coff-A}c), the down-indices of the parts
of the product where the up-indices coincide, from the smaller to the
bigger. Both bunches have the same set of down-indices attached to the
up-index $i'$; therefore, their (abc) normal order products are the
same.
For the second proposition, we distinguish the following cases:
($\alpha$). If in $C$ or $D$ we can rearrange the up-indices $\{i'\}$
to reduce the inverse order number, for example
$D\stackrel{(ab)}{\longrightarrow}D'$, we obtain $CA^{i'}_jA^{i'}_kD'$
and $CA^{i'}_kA^{i'}_jD'$. According to Corollary 2 of Lemma 1,
the (ab) normal order expansions of both remain unchanged.
Because the inverse order number is now smaller than $m$,
the assumption of proposition (ii) implies that their (abc) normal
order expansions are the same. Therefore, the (abc) normal order
expansions of $CA^{i'}_jA^{i'}_kD$ and $CA^{i'}_kA^{i'}_jD$ are the same.
($\beta$). Suppose $C$ and $D$ have already been normalized but the
inverse order number of the bunch as a whole can still be reduced,
i.e.\ the bunch is not an (ab) normal order product. Write
$C=C_1A^{i'_c}_{i_c}$, $D=A^{i'_d}_{i_d}D_1$. Then we must have
$i'_c>i'$ or (and) $i'>i'_d$. Let us assume $i'_c>i'$. These two
bunches can be rewritten as $T_1=C_1
A^{i'_c}_{i_c}A^{i'}_jA^{i'}_kD$ and $T_2=C_1
A^{i'_c}_{i_c}A^{i'}_kA^{i'}_jD$, respectively. According to
rule (ab) in Eq.(\ref{coff-A}), we can change them into
$T_1\Rightarrow T'_1=C_1\sum_{rst}a_{rst} A^{i'}_r A^{i'}_s
A^{i'_c}_t D$ and $T_2\Rightarrow T'_2=C_1\sum_{rst}b_{rst}
A^{i'}_r A^{i'}_s A^{i'_c}_t D$, where $a_{rst}$ and $b_{rst}$ are
some coefficients. With the help of the YBE which we studied in
section 5, one can see that these two combinations
$\sum_{rst}a_{rst} A^{i'}_r A^{i'}_s A^{i'_c}_t$ and
$\sum_{rst}b_{rst} A^{i'}_r A^{i'}_s A^{i'_c}_t$ are the same if
we take the rule Eq.(\ref{coff-A}c) into account. Thus we must
have
\begin{eqnarray*}
\sum_{rst}a_{rst}A^{i'}_r A^{i'}_s A^{i'_c}_t -
\sum_{rst}b_{rst}A^{i'}_r A^{i'}_s A^{i'_c}_t
=\sum_t\left(\sum_{rs}(a_{rst}-b_{rst})A^{i'}_r
A^{i'}_s\right)A^{i'_c}_t
\end{eqnarray*}
with $\sum_{rs}(a_{rst}-b_{rst})A^{i'}_r A^{i'}_s=0$ if we take
rule Eq.(\ref{coff-A}c) into account. That is to say,
\begin{equation}
a_{rst}+a_{srt}=b_{rst}+b_{srt}=2c_{rst}\quad \mbox{for each }t.
\label{c-rst}
\end{equation}
Thus we have
\begin{eqnarray*}
T'_1=\sum_{rst}C_1a_{rst}A^{i'}_r A^{i'}_s A^{i'_c}_t D \equiv
\sum_t\sum_{rs}\left(C_1a_{rst}A^{i'}_r A^{i'}_s D_t\right),
\quad\mbox{where } D_t\equiv A^{i'_c}_t D,
\end{eqnarray*}
and
\begin{eqnarray*}
T'_2=\sum_{rst}C_1b_{rst}A^{i'}_r A^{i'}_s A^{i'_c}_t D \equiv
\sum_t\sum_{rs}\left(C_1b_{rst}A^{i'}_r A^{i'}_s D_t\right).
\end{eqnarray*}
From Eq.(\ref{c-rst}) and the assumption of proposition
(ii), the (abc) normal order expansions of $T'_1$ and $T'_2$ are
the same. According to the procedure of the (abc) normal order
expansion, we see that the (abc) normal order expansions of $T_1$
and $T_2$ are the same.
If $i'>i'_d$, the proof is similar. So we see that proposition
(ii) is true.
Thus, by mathematical induction, we have proved Lemma
2. $\quad\quad {\bf \Delta}$
From Corollary 2 of Lemma 1 and Lemma 2, we obtain Theorem 1.
{
"timestamp": "2004-11-26T08:28:40",
"yymm": "0411",
"arxiv_id": "hep-th/0411243",
"language": "en",
"url": "https://arxiv.org/abs/hep-th/0411243"
}
\section{Introduction}
\def\theequation{\arabic{section}.\arabic{equation}}
\label{intro}
Chiral perturbation theory (CHPT) is the effective field theory of the
Standard Model at low energy \cite{GL1,GL2}. It is based on the spontaneously broken
approximate chiral symmetry of QCD. The pions, kaons and the eta can be
identified with the Goldstone bosons of chiral symmetry breaking. Their
interactions are weak and vanish in the chiral limit of zero quark masses
when the energy goes to zero. This is a consequence of Goldstone's theorem
and allows for a consistent power counting. Consequently, any amplitude can be written
as a sum of terms with increasing powers of external momenta and quark masses,
symbolically
\begin{equation}
{\mathcal A} = p^D \, f\left( p/\mu , g\right)~,
\end{equation}
where $f$ is a function of order one,
$p$ collects the small parameters, $D$ is the chiral dimension, $\mu$ a
regularization scale (related to the UV divergences in loop graphs)
and $g$ a collection of coupling constants, the so-called
low-energy constants (LECs). Weinberg \cite{Wein} first established this power
counting, in particular, the expansion in $p$ (the chiral expansion)
can be mapped onto an expansion in terms
of tree and loop graphs, with $n$ loop graphs being parametrically suppressed by
powers of $p^{2n}$ compared to the leading trees. The explicit expression for $D$
reads:
\begin{equation}\label{chiraldim}
D({\mathcal A}) = \sum_n V_n (n-2) + 2L + 2~,
\end{equation}
with $L$ the number of Goldstone boson loops and $V_n$ the number of
vertices of order ${O}(p^n)$.
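For illustration, the counting of Eq.(\ref{chiraldim}) can be encoded in a few lines (a hypothetical helper, with the vertex orders and their multiplicities supplied by hand):

```python
# Chiral dimension D = sum_n V_n (n - 2) + 2 L + 2 of a CHPT diagram,
# where V_n is the number of vertices of order O(p^n) and L the number
# of Goldstone boson loops.
def chiral_dimension(vertices, loops):
    """vertices: dict mapping vertex order n -> multiplicity V_n."""
    return sum(V * (n - 2) for n, V in vertices.items()) + 2 * loops + 2

# Tree graphs with only O(p^2) vertices sit at leading order D = 2 ...
assert chiral_dimension({2: 3}, loops=0) == 2
# ... while adding a loop costs two powers of p, as stated in the text.
assert chiral_dimension({2: 2}, loops=1) == 4
```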
In essence, this power counting
works because the pion mass vanishes in the chiral limit and thus the only dimensionful
scale in this limit is the pion decay constant $F_\pi$
(more precisely, its value in the chiral limit).
Utilizing a symmetry preserving regularization scheme
like e.g. dimensional regularization leads to homogeneous functions in the small
parameters (for more details, see the reviews \cite{ulf,pich,ecker,BKMrev,scherer}).
The precise relation between this effective field theory (EFT) and the chiral Ward
identities of QCD was firmly
established in Refs.~\cite{Heiri,SWMIT}.
The active degrees of freedom in CHPT are the Goldstone bosons, chirally coupled to
external sources. It has, however, long been known that vector and axial-vector
mesons also play an important role in low-energy hadron physics; as one example
we mention the fairly successful description of the pion charge form factor in
terms of the (extended) vector dominance approach (for more details, see e.g. the
reviews \cite{UlfV,KoichiV}). These heavy degrees of freedom decouple in the chiral
limit and at low energy from the Goldstone bosons. Nevertheless, they leave their
imprint in the low-energy EFT of QCD by saturating the LECs, which has been termed
resonance saturation \cite{NuB,Donoghue,Eck}. We will discuss this issue briefly
in Sect.~\ref{sec:tree}. Here, we are interested in an extension of CHPT where these
spin-1 fields are accounted for explicitly. While it is straightforward to construct
the corresponding chiral effective Lagrangian (as briefly reviewed also
in Sect.~\ref{sec:tree}), the computation of loop diagrams
is not. This is related to the appearance of a large mass scale in this EFT,
namely the non-vanishing chiral limit mass of the spin-1 fields. This scale destroys the
one-to-one mapping between the chiral expansion and the loop expansion, as discussed
in more detail in Sect.~\ref{sec:problem}. To be able to proceed in a systematic fashion,
one has to be able to separate the contributions to loop diagrams originating from this
large mass scale in a controllable and symmetry-preserving fashion. This problem also
appears in the EFT when nucleons (baryons) are included (in fact, it has been analyzed
first in this context \cite{GSS}), and various solutions have been
suggested, like heavy baryon CHPT \cite{JM,BKKM}, subtraction schemes for the hard
momenta \cite{Tang}, infrared regularization \cite{BL} or extended on-mass shell
regularization \cite{Mainz1}. For the case of the vector mesons considered here
an additional complication arises - these particles can decay into Goldstone bosons
and thus appear in loops without appearing in external lines. Here, we will
present an extension of infrared regularization that allows one to treat such diagrams.
In Sect.~\ref{sec:softhard}, we briefly review the intuitive approach of
Ref.~\cite{Tang} to taming the contributions from the hard scale. The more elegant infrared
regularization \cite{BL} is introduced in Sect.~\ref{sec:IR}. In Sect.~\ref{sec:IRnew},
we discuss the new contributions to the Goldstone boson
self-energy graph when the heavy particle
only appears in the loop. The singularity structure of these loop graphs is analyzed in
Sect.~\ref{sec:sing}. Based on that, we show how the infrared singular part can
be obtained for such type of one-loop diagrams in Sect.~\ref{sec:IRcorr}. The method is then
applied to the self-energy graph where the spin-1 field only appears inside the
loop, see Sect.~\ref{sec:self1}. The corresponding
triangle graph is discussed in Sect.~\ref{sec:tria}.
Section~\ref{sec:Vself} contains the analysis of the vector meson self-energy
diagram with a pure Goldstone boson loop, which does not only involve soft momenta.
As an application, we discuss the chiral extrapolation of lattice QCD data for
the rho meson mass and related topics
in Sect.~\ref{sec:mrho}. A brief summary is given in Sect.~\ref{sec:summ}.
Some technicalities are relegated to the appendices. For other works on the problem
of vector mesons in chiral EFT, we refer
to \cite{JM95,BM95,Urech,bijnens,PichV,Koichi,MainzV}.
\section{Prelude: Vector mesons in trees}
\setcounter{equation}{0}
\label{sec:tree}
In this section we show how vector mesons can be treated in chiral
perturbation theory when only tree graphs are considered. This material is not
new, but is needed to set up the formalism and to set the stage for the
discussion of vector mesons in loops. The reader familiar with this material
might skip this section. Also, our considerations are more general, they
really refer to the coupling of heavy degrees of freedom to the Goldstone
boson fields. Furthermore, when talking about vector mesons, we really mean vector
{\em and} axial-vector mesons (spin-1 fields).
Our aim is to write down an effective Lagrangian containing the vector meson
resonances explicitly. The word 'explicitly' refers to the fact that the
vector meson resonances are present {\em implicitly} in the Goldstone boson
effective Lagrangians through their contributions to (some of) the pertinent
low-energy constants, see Refs.~\cite{NuB,Donoghue,Eck}.
We want to state this on a more formal level. Assume that we have constructed a Lagrangian ${\mathcal L}_{Res}(R,U,v,a,s,p)$ where $R$ are some resonance fields which might be the vector mesons for example. $U$ collects the Goldstone bosons and $v,a,s,p$ are vector,
axial-vector, pseudoscalar and scalar sources. The latter also include the quark
masses, $s(x) = {\mathcal M}$, with ${\mathcal M} = {\rm diag}(m_u,m_d,m_s)$ the
quark mass matrix. The resonances are all very much heavier than the Goldstone bosons (e.g. $M_{\rho} \sim 770$ MeV), and therefore it is a consistent procedure for a low-energy effective theory to integrate out these heavy degrees of freedom by means of a path integration over $R$:
\begin{equation}
\int [dR] e^{i\int d^{4}x {\mathcal L}_{Res}(R,U,v,a,s,p)} = e^{iZ_{ind}(U,v,a,s,p)} .
\end{equation}
By doing the path integral, a Goldstone boson theory (containing only $U$ and
the external fields) is recovered. $Z_{ind}(U,v,a,s,p)$ may be called the
generating functional {\em induced} by
${\mathcal L}_{Res}(R,U,v,a,s,p)$ through the path integration. Such a step may be visualized in a Feynman graph by shrinking the lines symbolizing the $R$-propagators to a point, and attaching contact terms (interactions between the remaining fields) to these points.
What is the physical content of all this? Computing the $R$-induced Goldstone boson field theory means that some interaction terms between the Goldstone (and external) fields are computed by the path integration over $R$ and are therefore expressed through couplings of the resonance to these fields and the resonance mass $M_{R}$, which can be measured in processes where the resonance occurs as an external state. The interactions generated in this way can be compared with the couplings determined by the LECs of the original Goldstone boson field theory. This will show us how important the resonance contributions to the processes under consideration are. We get a microscopic information on these processes which we could not achieve with the theory of the light fields alone.
Numerically, however, one could as well work with the original effective field theory and simply include higher and higher orders. As should be clear by now, the resonance contributions are {\em implicitly} present in this theory, influencing the values of the LECs.
This can be illustrated by considering a diagram with a resonance line through which a small ($O(p)$) momentum flows. The resonance propagator can be expanded in this case using
\begin{equation}
\frac{1}{p^{2} - M_{R}^{2}} = -\frac{1}{M_{R}^{2}}\biggl( 1+\frac{p^{2}}{M_{R}^{2}}+ \ldots \biggr)~,
\end{equation}
leading to an infinite series generating terms of arbitrarily high order. Including the resonance field explicitly, one takes care of {\em all} terms in this series and not just the first few terms of it. This can be advantageous. A nice example can be found in \cite{Kubis}, where the inclusion of vector mesons substantially improves the results for the nucleon form factors computed in that work.
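The convergence of this geometric expansion is easy to see numerically; in the sketch below the values of $M_R$ and $p^2$ are merely illustrative choices:

```python
# Truncations of 1/(p^2 - M^2) = -(1/M^2) * sum_k (p^2/M^2)^k
# for p^2 << M^2 (illustrative numbers, in GeV^2).
M2 = 0.770 ** 2          # a rho-like resonance mass squared
p2 = 0.02                # a typical small O(p^2) scale

exact = 1.0 / (p2 - M2)
errors = []
for order in range(1, 6):
    # keep the first `order` terms of the geometric series
    approx = -sum((p2 / M2) ** k for k in range(order)) / M2
    errors.append(abs(approx - exact))

# each extra term shrinks the error by roughly a factor p^2/M^2 ~ 3%
assert all(e2 < e1 for e1, e2 in zip(errors, errors[1:]))
```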
With this motivation, let us now concentrate on the vector mesons. Following
\cite{NuB}, we will (first) use an antisymmetric Lorentz tensor-field $W_{\mu
\nu} = - W_{\nu \mu}$ to describe the vector meson. This has six degrees of
freedom, but we can dispose of three of them in a systematic way, for details
see App.~\ref{app:ten} and Ref.~\cite{NuB}.
The spin-1 fields transform as any matter field under the
non-linearly realized chiral symmetry,
\begin{equation}
W_{\mu\nu}(x) \rightarrow h W_{\mu\nu}(x) h^{\dagger}~ ,
\end{equation}
where the compensator field $h$ is defined via
\begin{equation}
u(x)\rightarrow g_{R} u(x) h^{\dagger} = h u(x) g_{L}^{\dagger}
\end{equation}
where $g_I$ is an element of SU(3)$_I$, $I=L,R$, and $u^2 = U$.
The kinetic and mass terms of the effective
Lagrangian for vector mesons have the form
\begin{equation}\label{Lkin}
{\mathcal L}_{W}^{kin} =
-\frac{1}{2}\langle \nabla^{\mu}W_{\mu \nu}\nabla_{\rho}W^{\rho \nu}\rangle
+ \frac{1}{4}M_{V}^{2}\langle W_{\mu \nu}W^{\mu \nu}\rangle~,
\end{equation}
where
\begin{equation}\label{WT}
W_{\mu \nu} = \frac{1}{\sqrt{2}}W_{\mu \nu}^{a}T^{a}
\end{equation}
for an octet of vector mesons, where summation over the flavor index $a=1,
\ldots , 8$ is implied and $\langle \,\, \rangle$ denotes the trace in flavor space.
The pertinent covariant derivative is
\begin{equation}
\nabla^{\mu}R = \partial^{\mu}R + [\Gamma^{\mu}, R]~.
\end{equation}
It transforms as $W_{\mu\nu}$ under the chiral group. Here, $\Gamma^{\mu}$ is the connection,
\begin{equation}
\Gamma^{\mu} = \frac{1}{2}(u^{\dagger}[\partial^{\mu}-i(v^{\mu}+a^{\mu})]u
+ u[\partial^{\mu}-i(v^{\mu}-a^{\mu})]u^{\dagger})\, .
\end{equation}
For the $SU(3)$ case we consider here, the $T^{a}$ are the usual
Gell-Mann matrices, which obey $\langle T^{a}T^{b}\rangle = 2\delta^{ab}$ as well as
$[T^{a},T^{b}] = 2if^{abc}T^{c}$,
where $f^{abc}$ are the totally antisymmetric structure constants of $SU(3)$.
The mass $M_V$ appearing in Eq.(\ref{Lkin}) is strictly speaking the vector meson
mass in the chiral limit.
The vector meson octet we consider here may be given in matrix form:
\begin{equation}
\mathbf{W} = \left( \begin{array}{ccc}
\frac{\rho^{0}}{\sqrt{2}}+\frac{\omega_{8}}{\sqrt{6}} & \rho^{+} & K^{\ast+} \\
\rho^{-} & -\frac{\rho^{0}}{\sqrt{2}}+\frac{\omega_{8}}{\sqrt{6}} & K^{\ast0} \\
K^{\ast-} & \bar K^{\ast0} & -\frac{2\omega_{8}}{\sqrt{6}}
\end{array} \right)~.
\end{equation}
From the above Lagrangian, one can derive the propagator,
\begin{eqnarray}
\langle 0\mid T(W_{\mu \nu}^{a}(x)W_{\rho \sigma}^{b}(y))\mid 0\rangle =
\frac{i\delta^{ab}}{M_{V}^{2}}\int
\frac{d^{4}k}{(2\pi)^{4}}\frac{e^{ik(x-y)}}{M_{V}^{2}-k^{2}-i\epsilon} \nonumber\\
\times [g_{\mu \rho}g_{\nu \sigma}(M_{V}^{2}-k^{2})+g_{\mu
\rho}k_{\nu}k_{\sigma}-g_{\mu \sigma}k_{\nu}k_{\rho} - (\mu \leftrightarrow
\nu)]\, . &&\nonumber \\ &&
\end{eqnarray}
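As a cross-check of this tensor structure (a sketch using numpy, assuming the metric signature $(+,-,-,-)$; the momentum components are arbitrary off-shell values), one can verify that the numerator is antisymmetric in $(\mu,\nu)$ and $(\rho,\sigma)$ and symmetric under exchange of the two index pairs:

```python
import numpy as np

# Numerator of the tensor-field propagator:
# N = g_{mu rho} g_{nu sigma} (M^2 - k^2)
#     + g_{mu rho} k_nu k_sigma - g_{mu sigma} k_nu k_rho - (mu <-> nu)
g = np.diag([1.0, -1.0, -1.0, -1.0])
k = np.array([0.3, 0.1, -0.2, 0.05])   # arbitrary off-shell momentum
k2 = k @ g @ k
M2 = 0.770 ** 2

N = np.zeros((4, 4, 4, 4))
for mu in range(4):
    for nu in range(4):
        for rho in range(4):
            for sig in range(4):
                term = (g[mu, rho] * g[nu, sig] * (M2 - k2)
                        + g[mu, rho] * k[nu] * k[sig]
                        - g[mu, sig] * k[nu] * k[rho])
                # subtract the same expression with mu <-> nu
                term -= (g[nu, rho] * g[mu, sig] * (M2 - k2)
                         + g[nu, rho] * k[mu] * k[sig]
                         - g[nu, sig] * k[mu] * k[rho])
                N[mu, nu, rho, sig] = term

assert np.allclose(N, -N.transpose(1, 0, 2, 3))   # antisymmetric in (mu,nu)
assert np.allclose(N, -N.transpose(0, 1, 3, 2))   # antisymmetric in (rho,sigma)
assert np.allclose(N, N.transpose(2, 3, 0, 1))    # symmetric under pair exchange
```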
Now we examine the interaction of the vector meson field with the other fields of the theory, especially the Goldstone boson fields. We actually already have interaction terms coming from the covariant derivative, but they are bilinear in $W_{\mu \nu}$. For the simplest diagrams we want to consider, we need vertices with only one vector meson line attached to them, i.e. couplings linear in $W_{\mu \nu}$. Therefore we neglect the 'connection terms' in the following.
From the philosophy of effective field theories, we are required to construct the most general interaction terms consistent with Lorentz invariance, chiral symmetry, parity, charge conjugation and hermiticity. The building blocks with which this can be done are, in principle, at hand: the Goldstone boson fields (collected in $U$ or its square root $u$), the covariant derivative $D^{\mu}$ acting on $U$, the object $\chi$ which contains the scalar and pseudoscalar fields (in particular, the quark mass matrix), the field $W_{\mu \nu}$ (and the pertinent covariant derivative $\nabla_{\mu}$ acting on it) and the field strength tensor $F_{\mu \nu}$ for the external fields $v_{\mu}$ and $a_{\mu}$.
For our purposes, it is more convenient to collect these blocks in the combinations
\begin{eqnarray*}
u_{\mu} &=& iu^{\dagger}D_{\mu}Uu^{\dagger} = u_{\mu}^{\dagger}~, \\
u_{\mu \nu} &=& iu^{\dagger}D_{\mu}D_{\nu}Uu^{\dagger}~, \\
\chi_{\pm} &=& u^{\dagger}\chi u^{\dagger} \pm u\chi^{\dagger}u~, \\
F^{\pm}_{\mu \nu}&=&uF^{L}_{\mu \nu}u^{\dagger}\pm u^{\dagger}F^{R}_{\mu \nu}u~.
\end{eqnarray*}
This is better from a practical point of view because the so-defined objects all transform like $W_{\mu \nu}$ under chiral transformations. This makes it easy to find chirally invariant expressions: Just take some of the above objects and put them inside a trace $\langle \ldots \rangle$. This will then be invariant by the cyclicity property of the trace. Of course, one can further reduce these possibilities by imposing the other symmetries mentioned above, and using the antisymmetry of $W_{\mu \nu}$.
Concerning power counting, it is clear that $u_{\mu}$ is a quantity of order $O(p)$, since the covariant derivative $D^{\mu}$ supplies one factor of momentum or an external (axial-)vector source. Similarly, the other objects in the list are of $O(p^{2})$.
At order $O(1)$, there is no term linear in $W$ because $W_{\alpha \alpha}=0$.
The lowest order interaction terms turn out to be of order $O(p^{2})$ and read \cite{NuB}
\begin{equation}\label{Wint}
L^{int}_{W} = \frac{F_{V}}{2\sqrt{2}}\langle F^{+}_{\mu \nu}W^{\mu \nu}\rangle + \frac{iG_{V}}{2\sqrt{2}}\langle [u_{\mu},u_{\nu}]W^{\mu \nu}\rangle .
\end{equation}
This is more complicated than it looks, because both terms contain interactions with an arbitrary high (even) number of Goldstone boson fields. One must carefully expand the objects $F^{\pm}_{\mu \nu}$ and $u_{\mu}$ to obtain the vertex for a particular amplitude.
It is now clear why vector meson singlets can be neglected: The sources to
which they would be allowed to couple are $\langle F^{\pm}_{\mu \nu}\rangle$
and $\langle [u_{\mu},u_{\nu}]\rangle$, but both traces are zero.
Note that a discussion of the numerical values of $F_V$ and $G_V$ is given
in Refs.~\cite{NuB,Eck}. Furthermore, the vector field formulation is
summarized in App.~\ref{app:vec}.
\section{Vector mesons in loops: Statement of the problem}
\setcounter{equation}{0}
\label{sec:problem}
Difficulties arise when in a Feynman diagram lines representing a heavy matter field are part of a loop. The corresponding amplitude will in general not be of the chiral order expected from power counting. This has been noted many years ago when the nucleon was incorporated in CHPT \cite{GSS}.
We already saw in the last chapter that power counting in CHPT is not as straightforward as in the purely Goldstone bosonic sector, and we already noted the reason for this, namely, the presence of a new large scale: The mass of the heavy matter field.
In \cite{GSS}, it was shown that the parameters of the lowest order pion-nucleon Lagrangian already were (infinitely) renormalized by loop graphs in the chiral limit. There exists a mismatch between the loop expansion in $\hbar$ and the chiral expansion in small parameters of order $O(p)$. The loop graphs in general generate power counting violating terms, confusing the perturbative scheme suggested by CHPT. For example, a graph with dozens of loops might give a contribution as low as $O(p^{2})$. Clearly, we will have to get rid of these power counting violating terms if we want to keep this scheme when including heavy matter fields.
For graphs where the heavy field only shows up in internal tree lines, the problem concerning power counting was not urgent, because the chiral expansion of the amplitude corresponding to such a graph at least {\em started} with the correct order. We saw this in the last chapter: We counted the vector meson propagator as $O(1)$, and in the low-energy region, the momentum transfer variable $t$ was much smaller than the square of the heavy mass, allowing an expansion of the propagator in the small dimensionless variable $t/M_{V}^{2}$, which starts at $O(1)$.
\section{Soft and hard poles}
\setcounter{equation}{0}
\label{sec:softhard}
If the heavy particle line belongs to a loop, an integration over the four-momentum flowing through this line takes place. Due to the pole structure of the propagator, the integral will pick up a large contribution from the region where the line momentum squared $k^{2} \approx M_{V}^{2}$, with $M_{V}$ the heavy mass (we keep the index $V$ to remind us that we will concentrate on vector mesons, but the present discussion is more general). This is the region of the 'hard poles' in the terminology of \cite{Tang}. These contributions are of high-energy origin and clearly do not fit in a low-energy effective theory - they must be identified as generating the part of the loop integral which violates the power counting scheme, because this scheme would clearly be valid if these 'hard poles' were missing. The only hard-momentum effects for loop integrals in the Goldstone boson sector are the ultraviolet divergences, which are handled by dimensional regularization.
Before discussing the infrared regularization method, for illustrative purposes
we will briefly present the idea of \cite{Tang}. We remarked that the power counting violating terms stem from the region $k^{2}\approx M_{V}^{2}$. Far off that region, for $k^{2} \ll M_{V}^{2}$, it would be allowed to expand the propagator in the small variable $k^{2}/M_{V}^{2}$, as we did for tree lines in the last section. Of course, this is not allowed under a momentum integral which extends to arbitrarily high momenta $k^{\mu}$. But it is too much of a temptation to do so, {\em because in this way one destroys the 'hard poles' responsible for the power counting violating terms}. Doing the expansion, treating the loop momentum $k^{\mu}$ over which one integrates as a small quantity, and interchanging integration and summation of this expansion, one ends up with an expression which obeys power counting, because the hard poles are not present in any individual term of the series which one integrates term by term. Only the soft poles from the Goldstone boson (or perhaps also photon) propagators will be present in the individual integrals.
Clearly, this is not the old integral any more, but a certain part of it, collecting only the contributions from the 'soft poles' - it is called the 'soft part' of the full integral in the terminology of \cite{Tang}. The remaining part, which was dropped by this procedure, collects the contributions from the 'hard poles' and stems from the high-energy-momentum region. It is argued that this part is expandable in the small chiral parameters and, truncated at a sufficiently high order (depending on the order to which one calculates) it is a local polynomial in these small parameters and can be taken care of by a renormalization of the parameters of the most general effective Lagrangian. The power counting violating terms are then hidden in the renormalization of the LECs, and the 'soft part', with subtracted residual high-energy divergences, is taken as the renormalized amplitude appearing as a part of the perturbation series. If this argument is correct, and the part of the full integral which is dropped is indeed expandable in the small parameters, the 'soft part' contains all the terms of the full integral which are non-analytic in the expansion parameters, like terms of the typical form
$\ln (M_{\phi}/\mu)$, the so-called 'chiral log' terms, where $M_{\phi}$ is the Goldstone boson mass,
and $\mu$ is the renormalization scale (we will use dimensional regularization throughout).
This becomes large if one lets the Goldstone boson mass go to zero. The physical interpretation is that in this limit the range of the interaction mediated by the Goldstone bosons becomes infinite, so that, for example, the scalar radius of the pion (or other hadrons) is infrared divergent in the chiral limit, which makes sense because it measures the spatial extension of the Goldstone boson cloud.
We keep in mind that the 'soft poles' in the region where the loop momentum is $k\sim O(p)$ are responsible for the terms non-analytic in the small parameters, like chiral log terms, and that all these terms {\em obey a simple power counting}. This follows from the argument given in \cite{Tang}. If one ever encounters a power counting violating term non-analytic in the quark masses, for example, this would invalidate the above argument - such terms cannot be hidden in the renormalization of the parameters of an effective Lagrangian.
We would like to demonstrate that arguments such as the one cited above are indeed more than just wishful thinking. The basic features of the argument can be exhibited by a toy model loop integral (it is straightforward to generalize it - the formulas would only look a bit more complicated).
Consider the integral
\begin{equation}
I = \oint_{C} \frac{dk}{2\pi i} \frac{f(k)}{(k-a)(k-b)}
\end{equation}
where $C$ is some contour in $\mathbf{C} \setminus \{a,b\}$ (enclosing the poles or not) and $f(k)$ is an analytic function. Such an integral might e.g. be the $k^{0}$-integration over a full loop integral. The soft pole is called $a$ and is of $O(p)$, while the hard pole is $b$. In fact, we will only need $|a| < |b|$.
In the spirit of the argument of \cite{Tang}, we destroy the hard pole in two steps
\begin{eqnarray}\label{soft}
I &\rightarrow& \oint_{C} \frac{dk}{2\pi i}\frac{f(k)}{(k-a)}
\left(-\frac{1}{b}\right)
\left( 1+\frac{k}{b} + \frac{k^{2}}{b^{2}} + \ldots \right)\nonumber \\
&\rightarrow& -\frac{1}{b}\sum_{n=0}^{\infty}\oint_{C}\frac{dk}{2\pi i}\frac{f(k)}{(k-a)}\frac{k^{n}}{b^{n}} \equiv I_{\rm soft}~.
\end{eqnarray}
Depending on the form of $f(k)$, this will be ultraviolet divergent if the contour extends to infinity, but this divergence has nothing directly to do with the pole structure and can be handled by a regularization scheme.
By the method of residues, $I_{\rm soft}$ is computed to be
\begin{equation}
I_{\rm soft} = \frac{f(a)}{(a-b)}\oint_{C}\frac{dk}{2\pi i}\frac{1}{(k-a)} .
\end{equation}
We were allowed to sum the geometric series because $|a| < |b|$. Therefore $I_{\rm soft}$ is just the first summand in the decomposition
\begin{eqnarray}
I &=& \oint_{C}\frac{dk}{2\pi i}\frac{f(k)}{(k-a)(k-b)} \nonumber\\
&=& \frac{1}{a-b}\biggl(\oint_{C} \frac{dk}{2\pi i}\frac{f(a)}{(k-a)}
- \oint_{C}\frac{dk}{2\pi i}\frac{f(b)}{(k-b)}\biggr) \nonumber \\
&=& I_{\rm soft} + I_{\rm hard}~,
\end{eqnarray}
which clearly separates the contributions from the soft and the hard pole, respectively. We have thus demonstrated that the expansion of the hard pole structures followed by an interchange of summation and integration indeed gives {\em exactly} the contribution from the soft pole. Note that the above argument can be iteratively used for multiple poles.
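The toy decomposition is straightforward to verify numerically. The following sketch (not part of the original text; the test function $f(k)=e^{k}$ and the pole positions are arbitrary choices, and the contour is taken to enclose both poles, so that $\oint_C \frac{dk}{2\pi i(k-a)}=1$) compares the truncated geometric series of Eq.(\ref{soft}) with the exact residue values:

```python
import cmath

# Toy check of the soft/hard pole decomposition: f is an arbitrary
# analytic test function, a the 'soft' pole, b the 'hard' pole (|a| < |b|).
f = cmath.exp
a, b = 0.1, 2.0

# Exact contour integral (contour enclosing both poles) via residues.
I_exact = f(a) / (a - b) + f(b) / (b - a)

# Contribution of the soft pole alone.
I_soft = f(a) / (a - b)
I_hard = f(b) / (b - a)

# The same soft part from the truncated expansion of the hard pole,
# -(1/b) * sum_n f(a) (a/b)^n  (residue of each term at k = a).
I_soft_series = -(1 / b) * sum(f(a) * (a / b) ** n for n in range(60))

assert abs(I_soft_series - I_soft) < 1e-12
assert abs(I_soft + I_hard - I_exact) < 1e-12
```

The truncation error of the series is of order $(a/b)^{60}$ and thus negligible here, illustrating why the interchange of summation and integration is harmless for $|a|<|b|$.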
A complementary approach would be to treat $k$ as large, expand the soft pole structure in the small variable $a$ and again interchange summation and integration, thereby isolating the hard pole contribution which is by construction analytic in the small parameter $a$. This may be called the 'regular' part of the loop integral. Again, divergences can show up which have nothing to do with the details of the pole structure (infrared divergences if the contour $C$ encloses $k=0$), but apart from this, the calculation analogous to the preceding one will give nothing but $I_{\rm hard}$. In this sense, the approaches of \cite{Tang} and \cite{Mainz1} can be said to be complementary to each other.
We will see in the next section that the arguments of \cite{Tang} can be refined or, as one should say, modified in a rather elegant way. In particular, the method
described in this section does not always work, because not all integrals converge
in the low-energy region; for more details on that issue, see e.g. \cite{BL}.
\section{Infrared regularization}
\setcounter{equation}{0}
\label{sec:IR}
In order to find a more elegant way to separate the low-energy part of the loop integrals, we will examine a loop consisting of one 'heavy' propagator and one Goldstone propagator. We use dimensional regularization to handle the ultraviolet divergence of such an integral, i.e. we give a meaning to the notion of an integral in $d$ dimensions, where $d$ may be fractional, negative, etc.
We consider the scalar loop integral
\begin{equation}\label{IVphidef}
I_{V\phi}(q) = i\int \frac{d^{d}k}{(2\pi)^{d}}\frac{1}{((k-q)^{2}-M_{\phi}^{2})(k^{2}-M_{V}^{2})} .
\end{equation}
Here $q$ denotes an external momentum flowing into the loop, $M_{\phi}$ is the Goldstone boson mass and $M_{V}$ is the mass of the heavy particle. The '$i\epsilon$'-prescription, giving the masses a small negative imaginary part, should be understood here. We leave it out because it will play no role in the following discussion.
There are two cases which have to be distinguished: 1) the momentum $q$ belongs to the heavy particle line of the loop, in the sense that this line is connected to external heavy particle lines (this would necessarily be the case if the heavy particle is a baryon, because of baryon number conservation). For the soft processes we consider, this would mean that
\begin{displaymath}
q^{2}-M_{V}^{2} = O(p) .
\end{displaymath}
The second case is that 2) the loop is connected only to Goldstone boson lines, which is the case for the vector meson contribution to the Goldstone boson self energy (Fig.~\ref{fig:self}b).
This cannot happen if the heavy particle line represents a baryon. Then
\begin{displaymath}
q^{2} = O(p^{2}) .
\end{displaymath}
We first investigate case 1). To this end we use a Feynman representation
\begin{displaymath}
\frac{1}{ab} = \int_{0}^{1} dz \frac{1}{(a(1-z)+bz)^{2}}
\end{displaymath}
to write
\begin{eqnarray}
&&I_{V\phi}= \nonumber \\
&&i\int \frac{d^{d}k}{(2\pi)^{d}}\int_{0}^{1}dz \frac{1}{[((k-q)^{2}-M_{\phi}^{2})(1-z) + (k^{2}-M_{V}^{2})z]^{2}}~.
\end{eqnarray}
Note from this expression that, for $z=0$, the integrand is a pure 'soft pole', while for $z=1$ it is a 'hard pole'. We see that the soft pole structure which we want to extract is associated with the region $z\rightarrow 0$.
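The Feynman parameter formula used above is easy to check numerically; a minimal sketch (not part of the paper; the values of $a$ and $b$ are arbitrary positive test numbers, kept positive so that the integrand has no pole on the integration path):

```python
# Check the Feynman representation 1/(ab) = ∫_0^1 dz / (a(1-z) + b z)^2
# by a midpoint-rule quadrature with two arbitrary positive test values.
a, b = 1.7, 0.3

N = 200_000
h = 1.0 / N
integral = sum(h / (a * (1 - (i + 0.5) * h) + b * (i + 0.5) * h) ** 2
               for i in range(N))

assert abs(integral - 1.0 / (a * b)) < 1e-6
```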
\begin{figure}[htb]
\centerline{\psfig{file=self.eps,width=7.cm}}
\caption{Self-energy graphs. Solid and dashed lines denote vector mesons (heavy particles)
and Goldstone bosons, respectively. While a) can be treated by the IR method of
Ref.\protect\cite{BL},
to deal with graphs of type b), the method developed here has to be used.
\label{fig:self}}
\end{figure}
\noindent
After a shift $k\rightarrow k+q(1-z)$, the denominator of the integrand becomes
\begin{displaymath}
[k^{2}-A(z)]^{2}
\end{displaymath}
where
\begin{eqnarray}\label{abdef}
A(z) &=& M_{V}^{2}C(z) , \nonumber\\
C(z) &=& bz^{2} - (b+a-1)z + a ,\nonumber\\
a &=& \frac{M_{\phi}^{2}}{M_{V}^{2}}~,~
b = \frac{q^{2}}{M_{V}^{2}} ,
\end{eqnarray}
and the $d$-dimensional $k$-integral can be done in a standard way to give
\begin{equation}
I_{V\phi}= -\frac{M_{V}^{d-4}}{(4\pi)^{\frac{d}{2}}}\Gamma \left(2-\frac{d}{2}\right)
\int_{0}^{1}dz (C(z))^{\frac{d}{2}-2} .
\end{equation}
This will develop an infrared singularity as $M_{\phi}\rightarrow 0$ for negative enough dimension $d$. From the expression for $C(z)$, we see that this singularity is located at $z=0$, because in that region we will have
\begin{displaymath}
(C(z))^{\frac{d}{2}-2} \rightarrow \biggl(\frac{M_{\phi}}{M_{V}}\biggr)^{d-4}
\end{displaymath}
generating a part non-analytic in the Goldstone boson mass (remember that $d$ can also be fractional). These findings are consistent with the above observation that the region $z\rightarrow 0$ is to be associated with the soft pole structure, coming from the Goldstone boson propagator.
We come to the conclusion that, in order to extract the soft pole contribution, we should isolate the part of the loop integral proportional to noninteger powers of the Goldstone boson mass (for noninteger $d$).
Becher and Leutwyler have proposed a way to achieve this \cite{BL}.
They find that the decomposition of the loop integral into a part non-analytic in $a$ (and therefore $M_{\phi}$) and a part regular in $a$ is given by
\begin{displaymath}
I_{V\phi} = I + R ,
\end{displaymath}
where
\begin{eqnarray}\label{IRsplit}
I &=& -\frac{M_{V}^{d-4}}{(4\pi)^{\frac{d}{2}}}\Gamma\left(2-\frac{d}{2}\right)
\int_{0}^{\infty}(C(z))^{\frac{d}{2}-2}dz \nonumber\\
R &=& +\frac{M_{V}^{d-4}}{(4\pi)^{\frac{d}{2}}}\Gamma\left(2-\frac{d}{2}\right)
\int_{1}^{\infty}(C(z))^{\frac{d}{2}-2}dz .
\end{eqnarray}
From the above remarks it is clear that $R$, which contains the parameter integral starting at $z=1$, will not produce infrared singular terms for any value of the dimension parameter $d$. It will therefore be analytic in the Goldstone boson mass. On the other hand, the so-called 'infrared singular part' $I$ is exactly the part proportional to noninteger powers of the Goldstone boson mass, for noninteger dimension $d$. Moreover, it is shown in \cite{BL} that this part of the loop integral fulfills power counting (as we would have by now expected for the part associated with the soft pole structure, see the discussion of the last section). We will not repeat the proof for these assertions here, because we are going to do a very similar calculation in the next section.
The parameter integrals for $I$ and $R$ do not converge for $d=4$. To give them an unambiguous value, a partial integration is performed to express them through convergent integrals and a part that is divergent for $d=4$, but can be left away by an analytic continuation argument (analytic continuation from negative values of the parameter $d$). For details, see Ref.\cite{BL}. This leads to an explicit representation of $I$ and $R$ for the case of four dimensions.
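The qualitative behaviour of $I$ and $R$ can be illustrated numerically. The sketch below (our own illustration, not from \cite{BL}) uses the analytically tractable exponent $\frac{d}{2}-2=-1$, i.e. $d=2$, where the parameter integrals over $1/C(z)$ can be done by partial fractions; it shows that the $\int_{0}^{\infty}$ piece diverges logarithmically as $a=M_{\phi}^{2}/M_{V}^{2}\rightarrow 0$ while the $\int_{1}^{\infty}$ piece stays smooth:

```python
import math

# C(z) = b z^2 - (b+a-1) z + a = b (z-x1)(z-x2);  for exponent -1,
# 1/C = [1/(z-x1) - 1/(z-x2)] / (b (x1-x2)) integrates to a log.
b = 0.25  # q^2 / M_V^2, held fixed (arbitrary test value)

def roots(a, b):
    disc = math.sqrt((b + a - 1) ** 2 - 4 * a * b)
    x1 = ((b + a - 1) + disc) / (2 * b)   # soft root, O(a)
    x2 = ((b + a - 1) - disc) / (2 * b)   # hard root, O(1/b)
    return x1, x2

def F(a):   # ∫_0^∞ dz / C(z): the 'infrared singular' parameter integral
    x1, x2 = roots(a, b)
    return -math.log(x1 / x2) / (b * (x1 - x2))

def T(a):   # ∫_1^∞ dz / C(z): the 'regular' parameter integral
    x1, x2 = roots(a, b)
    return -math.log((1 - x1) / (1 - x2)) / (b * (x1 - x2))

# 1) the split is exact: ∫_0^1 = ∫_0^∞ - ∫_1^∞ (midpoint-rule check)
a = 1e-3
N = 100_000
h = 1.0 / N
x1, x2 = roots(a, b)
J_num = sum(h / (b * (z - x1) * (z - x2))
            for z in ((i + 0.5) * h for i in range(N)))
assert abs(J_num - (F(a) - T(a))) < 1e-3

# 2) F diverges logarithmically as a -> 0 ...
assert F(1e-6) - F(1e-3) > 5.0
# ... while T is essentially unchanged (analytic in a)
assert abs(T(1e-6) - T(1e-3)) < 0.05
```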
The infrared regularization (IR) scheme can now be implemented by simply dropping the regular part $R$, arguing that it can be taken care of by an appropriate renormalization of the most general effective Lagrangian. This can be done because it is analytic in the small parameter $M_{\phi}$ (and in external momenta).
The regular part contains the power counting violating terms originating from the 'hard pole' of the loop integral, which have now been abandoned with the dropping of $R$. We are left with the infrared singular part $I$ which obeys power counting. It still contains a pole in $(d-4)$, which can be dealt with using e.g. the (modified) minimal subtraction scheme. Having done this, we have achieved the goal of a finite loop correction where power counting allows to compute correctly the order with which this correction will appear in the perturbation series.
The method as presented here strongly relies on dimensional regularization. In particular, the separation of the loop integrals into two parts with fractional versus integer powers of $M_{\phi}$ {\em for a fractional dimension parameter} $d$ allows the argument that chiral symmetry (or more specifically, the Ward identities) have to be obeyed by both parts separately. Therefore dropping one part of it (the regular part) is a chirally symmetric procedure because it leaves us with a regularized amplitude that is again chirally symmetric for itself. It is important here that the scheme of dimensional regularization leaves chiral symmetry untouched, because the validity of the Ward identities does not depend on the space--time dimension parameter. Of course one must deal with {\em all} loop integrals occurring in the perturbation series in the same way to keep the physical content of the theory unchanged. The regular part of any loop integral that one computes will have to be dropped.
If one lets $d \rightarrow$ integer $n$, we will get terms proportional to
\begin{displaymath}
\biggl(\frac{M_{\phi}}{M_{V}}\biggr)^{n+\epsilon} = \biggl(\frac{M_{\phi}}{M_{V}}\biggr)^{n}\biggl(1+\epsilon \ln \biggl(\frac{M_{\phi}}{M_{V}}\biggr)+\ldots \biggr).
\end{displaymath}
In the complete expression, one will leave out terms of $O(\epsilon)$, so that
for an integer dimension, the expression for the infrared part $I$ (up to
$O(\epsilon)$) may well contain a piece analytic in the Goldstone boson mass.
Only after the separation into the two pieces of different analyticity character
has been performed is it allowed to let $d$ approach an integer value. This is
why dimensional regularization, permitting noninteger valued dimension
parameters, is essential for this approach. Note that an elegant extension
of infrared regularization to multi-loop graphs is given in Ref.\cite{PL}.
\section{Another case of IR regularization}
\setcounter{equation}{0}
\label{sec:IRnew}
The last section, where the infrared regularization scheme was introduced, dealt with the case that the momentum squared $q^{2}$, coming from outside the loop, is of the same order as $M_{V}^{2}$. This applies, for example, to a diagram contributing to the self-energy of a nearly on-shell baryon with a Goldstone boson loop, and it is the case considered in \cite{BL}. However, we have seen in the last chapter that vector mesons can occur as strongly virtual resonance states in processes where only soft Goldstone bosons or photons appear as external particles. If the same loop we considered in the last section is connected only to Goldstone boson lines, the momentum flowing into the loop will be small. Since this is a Goldstone boson momentum, we must require that the 'regular part' which we want to drop from the regularized amplitude is analytic also in this parameter.
Becher and Leutwyler introduce in their original paper the variable
\begin{equation}
\Omega = \frac{q^{2}-M_{\phi}^{2}-M_{V}^{2}}{2M_{\phi}M_{V}}
\end{equation}
which is $O(1)$ for the processes they examine, which correspond to the first of the two cases we distinguished in the last section. They consider the chiral expansion for fixed $\Omega$. But if
\begin{displaymath}
|q^{2}| \ll M_{V}^{2}, \qquad q^{2}= O(p^{2}),
\end{displaymath}
the variable $\Omega$ will be $O(p^{-1})$! This already signals that this case will probably have to be treated differently.
Let us first evaluate the integral directly, for $d=4-\epsilon$. We find (omitting terms of $O(\epsilon)$)
\begin{eqnarray*}
I_{V\phi} &=& -\frac{M_{V}^{d-4}}{(4\pi)^{\frac{d}{2}}}\Gamma\left(2-{\frac{d}{2}}\right)
\int_{0}^{1} dz [b(z-x_{1})(z-x_{2})]^{\frac{d}{2}-2} \\
&=& 2\lambda + \frac{1}{16\pi^{2}} + \frac{1}{16\pi^{2}}\int_{0}^{1} dz \ln (b(z-x_{1})(z-x_{2})) \\
&=& 2\lambda - \frac{1}{16\pi^{2}} \\
&& \quad+ \frac{1}{16\pi^{2}}\biggl(x_{1}\ln \biggl(\frac{x_{1}}{x_{1}-1}\biggr) + x_{2}\ln \biggl(\frac{x_{2}}{x_{2}-1}\biggr)\biggr).
\end{eqnarray*}
Here we have introduced the zeroes of $C(z)$,
\begin{equation}\label{zeros}
x_{1,2} = \frac{b+a-1}{2b} \pm \sqrt{\frac{(b+a-1)^{2}-4ab}{4b^{2}}}
\end{equation}
and the standard notation
\begin{displaymath}
\lambda = \frac{M_{V}^{d-4}}{16\pi^{2}}\biggl(\frac{1}{d-4}-\frac{1}{2}(\ln(4\pi)-\gamma+1)\biggr) .
\end{displaymath}
We have selected the mass of the heavy particle as a natural choice
for the renormalization scale $\mu$ here (this particular choice also leads to
suppression of higher order divergences that appear in the loop integrals and
should thus be made).
Furthermore we have used $C(z=1) = b(1-x_{1})(1-x_{2}) = 1\,$.
To examine this further we write the expansions
\begin{eqnarray}
x_{1} &=& -\frac{a}{1-a} - \frac{ab}{(1-a)^{3}} -\ldots~, \\
\frac{1}{x_{2}} &=& -\frac{b}{1-a} - \frac{b^{2}}{(1-a)^{3}} - \ldots~,
\end{eqnarray}
showing that, for $b\rightarrow 0$, $x_{2}$ behaves like
\begin{displaymath}
x_{2} \rightarrow \frac{b+a-1}{b} .
\end{displaymath}
Remember that in the case considered now, $a$ and $b$ are by definition both small variables, of chiral order $O(p^{2})$.
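These expansions are easy to verify against the exact roots; a quick numerical sketch (not part of the paper; the values of $a$ and $b$ are arbitrary small test numbers):

```python
import math

# Check the expansion x1 = -a/(1-a) - ab/(1-a)^3 - ... against the exact
# roots of C(z) = b z^2 - (b+a-1) z + a  for small a and b.
a, b = 0.01, 0.001

disc = math.sqrt((b + a - 1) ** 2 - 4 * a * b)
x1 = ((b + a - 1) + disc) / (2 * b)      # the small root, O(p^2)
x2 = ((b + a - 1) - disc) / (2 * b)      # the 'hard' root, ~ (b+a-1)/b

x1_exp = -a / (1 - a) - a * b / (1 - a) ** 3

assert abs(x1 - x1_exp) < 1e-7            # remainder is higher order in b
assert abs(b * x1 * x2 - a) < 1e-10       # C(0) = a = b x1 x2
assert abs(x2 - (b + a - 1) / b) < 0.02   # x2 -> (b+a-1)/b as b -> 0
```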
Using these expansions, and the relation
\begin{displaymath}
C(z=0) = a = bx_{1}x_{2} ,
\end{displaymath}
we see that $I_{V\phi}$ contains a term non-analytic in the Goldstone boson mass
\begin{displaymath}
I_{V\phi} = \frac{1}{16\pi^{2}}x_{1}\ln(a) +\ldots,
\end{displaymath}
and that it is analytic in the second small variable $b$ (for $|b| \ll 1$).
We want to check the power counting for this case: The non-analytic terms are proportional to $x_{1}$, whose expansion in $a$ and $b$ starts at order $O(p^{2})$. This is the expected chiral order for the integral: The loop integration in 4 dimensions gives 4 powers of small momentum, whereas the Goldstone boson propagator gives $-2$. The hard pole structure is of order $O(1)$ here, because the vector meson appears just as an internal resonance line. So the power counting for the non-analytic terms is fine, as expected.
We remark that these non-analytic terms are also produced when one proceeds after the prescription of \cite{Tang}: Expanding the hard pole structure and integrating term by term, one gets $x_{1}$ as the coefficient of the $\ln a$-terms, order by order.
Now we want to perform infrared regularization. But here we note an important point: we can {\em not} simply take over the formulas of the last section. The expressions for $I$ and $R$ will contain pieces non-analytic in the other small variable $b$, which was not small in the case treated in the last section. But the original integral does not contain such a non-analyticity in $b$. We conclude that the extension of the parameter integrals to $z\gg 1$ is responsible for this non-analyticity: somewhere in that region, we must pick up a pole for $b\rightarrow 0$. So we have the problem that, loosely speaking, the 'regular part' is not regular. We will have to modify the method of \cite{BL} and find a way to separate off the terms non-analytic in the small variable $b$.
To do this, we will have to examine the nature of the singularity we encounter here. This will be done in the next section.
\section{Singularities in parameter space}
\setcounter{equation}{0}
\label{sec:sing}
We will consider an integral which has exactly the same features as the one we need, but is a bit simpler. Let
\begin{equation}
\tilde I = \int_{0}^{1}dz\ (z+a)^{-\frac{3}{2}}(1+bz)^{-\frac{3}{2}}
\end{equation}
where $a$ and $b$ are again small parameters in the sense that $|a|, |b| \ll 1$.
This will develop a singularity at $z=0$ if $a \rightarrow 0$. What if we extend the integration to infinity? To examine this we first change variables,
\begin{displaymath}
z = \frac{1}{u} ,
\end{displaymath}
and compute
\begin{equation}
\tilde I = \int_{1}^{\infty}u\ du\ (1+au)^{-\frac{3}{2}}(u+b)^{-\frac{3}{2}} .
\end{equation}
This shows the close similarity between the '$a$-singularity' at $z=0$ and the '$b$-singularity' at $u=0$. If we extend the integration to $z\rightarrow \infty$, we will pick up a non-analytic contribution from $u=0$. A 'regular part' defined as
\begin{equation}\label{Rtilde}
\tilde R = -\int_{1}^{\infty}dz\ (z+a)^{-\frac{3}{2}}(1+bz)^{-\frac{3}{2}}
\end{equation}
will be analytic in $a$, but not in $b$. This is the situation encountered in the last section, for a slightly different integral. How do we get rid of this non-analyticity?
Remembering what we have learned so far, we know how we can get rid of certain non-analytic terms: We must 'destroy the poles' by expanding the pole structures and integrating term by term.
Let us do this:
\begin{eqnarray*}
\tilde I &=& \int_{0}^{1} dz (z+a)^{-\frac{3}{2}}\biggl(1-\frac{3}{2}bz + \frac{15}{8}(bz)^{2}\pm \ldots \biggr) \\ &=& \int_{0}^{1} dz (z+a)^{-\frac{3}{2}}\sum_{m=0}^{\infty} \frac{\Gamma(-\frac{1}{2})}{\Gamma(-\frac{1}{2}-m)} \frac{(bz)^{m}}{m!} \\ &=& \sum_{m=0}^{\infty} \frac{\Gamma(-\frac{1}{2})}{\Gamma(-\frac{1}{2}-m)}\frac{b^m}{m!}\int_{0}^{1}dz (z+a)^{-\frac{3}{2}}z^{m} .
\end{eqnarray*}
Here we were allowed to interchange summation and integration without changing the value of the integral, because in the interval of integration the expansion of the integrand is absolutely convergent when $a,b$ are smaller than 1.
The dependence on $a$ now resides in the simpler integrals of the coefficients in the expansion. The idea is now to find the 'infrared singular part' of each of these coefficients. If the sum of these infrared singular parts converges, it must be the infrared singular part of the full integral $\tilde I$, because the expansion of the pole structure that we have performed did not change the integral.
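The interchange of summation and integration can be checked numerically; a sketch (not part of the paper; $a$, $b$, the truncation order and the quadrature grid are arbitrary choices):

```python
# Check that expanding (1+bz)^(-3/2) and integrating term by term
# reproduces the direct value of  I~ = ∫_0^1 (z+a)^(-3/2) (1+bz)^(-3/2) dz.
a, b = 0.05, 0.1

N = 100_000
h = 1.0 / N
grid = [(i + 0.5) * h for i in range(N)]

# direct evaluation by midpoint rule
direct = sum(h * (z + a) ** -1.5 * (1 + b * z) ** -1.5 for z in grid)

# elementary moments ∫_0^1 (z+a)^(-3/2) z^m dz, same quadrature
def moment(m):
    return sum(h * (z + a) ** -1.5 * z ** m for z in grid)

# binomial coefficients Gamma(-1/2)/(Gamma(-1/2-m) m!) via their recursion
series, coeff = 0.0, 1.0
for m in range(25):
    series += coeff * b ** m * moment(m)
    coeff *= (-1.5 - m) / (m + 1)

assert abs(series - direct) < 1e-6
```

Since $|bz|<1$ on the whole interval, the truncated series converges rapidly and the two evaluations agree to the quadrature accuracy.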
To find the infrared singular part of
\begin{equation}
I_{m} = \int_{0}^{1}dz (z+a)^{d}z^{m}
\end{equation}
with an arbitrary parameter $d$ (possibly negative) and an integer $m$, we follow the method of \cite{BL} and scale $z=ay$ to find
\begin{equation}
I_{m} = a^{d+m+1}\int_{0}^{\frac{1}{a}}dy\ (1+y)^{d}y^{m} .
\end{equation}
We would now like to take the upper limit of the integration to infinity. Whenever the extended integral converges, it gives
\begin{displaymath}
\int_{0}^{\infty}dy\ (1+y)^{d}y^{m} = \frac{\Gamma(m+1)\Gamma(-d-(m+1))}{\Gamma(-d)} .
\end{displaymath}
There will be a convergence problem if $m$ is large enough. But the divergence comes from large values of $z$ and has nothing to do with the infrared singular part. To see this in detail, let $K\gg 1$ , $m>0$, and do a partial integration:
\begin{eqnarray*}
\int_{0}^{K}dz\ (z+a)^{d}z^{m} &=& K^{m}\frac{(K+a)^{d+1}}{d+1} \nonumber\\
&-& \int_{0}^{K}\frac{m}{d+1}(z+a)^{d+1}z^{m-1}dz~.
\end{eqnarray*}
Doing this $m$ times, and dropping all terms proportional to
$(K+a)^{d+n}$ with $ n \in \mathbf{N}$ ,
because these are clearly expandable around $a=0$ and will therefore not contribute to the piece non-analytic in $a$, we end up with the same result as above, namely, the 'infrared singular part' of $I_{m}$ is {\em always}
\begin{equation}
I_{m, {\rm IR}} = a^{d+m+1}\frac{\Gamma(m+1)\Gamma(-d-(m+1))}{\Gamma(-d)}.
\end{equation}
This form clearly shows the character of the infrared singular part as being proportional to noninteger powers of $a$ for noninteger parameter $d$.
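For a convergent case, the whole construction can be verified in closed form. A sketch of our own check (with the arbitrary choice $d=-\frac{3}{2}$, $m=0$, where all three pieces are elementary):

```python
import math

# Infrared singular part of I_0 = ∫_0^1 (z+a)^(-3/2) dz: after scaling
# z = a y and extending the integration to infinity,
#   I_{0,IR} = a^(d+m+1) Gamma(m+1) Gamma(-d-(m+1)) / Gamma(-d) = 2/sqrt(a),
# while the dropped tail is analytic in a.
a, d, m = 0.01, -1.5, 0

I0_IR = a ** (d + m + 1) * (math.gamma(m + 1) * math.gamma(-d - (m + 1))
                            / math.gamma(-d))

# direct integral in closed form
I0 = 2.0 / math.sqrt(a) - 2.0 / math.sqrt(1 + a)

# dropped tail a^(d+1) ∫_(1/a)^∞ (1+y)^(-3/2) dy = 2/sqrt(1+a): analytic in a
tail = a ** (d + 1) * 2.0 / math.sqrt(1 + 1 / a)

assert abs(I0_IR - 2.0 / math.sqrt(a)) < 1e-12
assert abs(tail - 2.0 / math.sqrt(1 + a)) < 1e-12
assert abs(I0 - (I0_IR - tail)) < 1e-12
```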
We have now performed the relevant step to find the infrared singular part of each coefficient in the expansion of $\tilde I$. We insert this in the series, setting $d=-\frac{3}{2}$ for definiteness:
\begin{eqnarray*}
\tilde I &=& \sum_{m=0}^{\infty}\frac{\Gamma(-\frac{1}{2})}{\Gamma(-\frac{1}{2}-m)}\frac{b^{m}}{m!}\int_{0}^{1}dz\ (z+a)^{-\frac{3}{2}}z^{m} \\ &\rightarrow & \sum_{m=0}^{\infty}\frac{\Gamma(-\frac{1}{2})}{\Gamma(-\frac{1}{2}-m)}\frac{\Gamma(m+1)\Gamma(\frac{1}{2}-m)}{\Gamma(\frac{3}{2})}\frac{(ab)^{m}a^{-\frac{1}{2}}}{m!} \\ &=& \tilde I_{{\rm IR}} .
\end{eqnarray*}
One can easily sum the series,
\begin{eqnarray*}
\tilde I_{{\rm IR}} &=& \frac{4}{\sqrt{a}}\sum_{m=0}^{\infty}(ab)^{m}\biggl((m+1)-\frac{1}{2}\biggr) \\ &=& \frac{4}{\sqrt{a}}\biggl(\frac{1}{(1-ab)^{2}}-\frac{1}{2(1-ab)}\biggr) \\ &=& \frac{2}{\sqrt{a}}\frac{1+ab}{(1-ab)^{2}} .
\end{eqnarray*}
The reason why we have selected the value $d=-\frac{3}{2}$ is that it is very easy to compute the integral $\tilde I$ directly. The part non-analytic at $a\rightarrow 0$ can be read off and is the same as the result just given. We have checked this for various other values of the parameter $d$.
In a 'naive' application of the IR method, one would be tempted to simply take
\begin{eqnarray*}
\tilde I_{{\rm BL}} &=&\int_{0}^{\infty} dz\ (z+a)^{-\frac{3}{2}}(1+bz)^{-\frac{3}{2}} \\
&=& \frac{2}{(1-ab)^{2}}\biggl(\frac{1+ab}{\sqrt{a}}-2\sqrt{b}\biggr) ,
\end{eqnarray*}
which contains a part non--analytic in the second small variable $b$. This is the part we have separated off by our procedure.
We note that this '$b$-singular' part can be extracted by proceeding in exact analogy to the steps just performed: Expand the other pole structure, proportional to a power of $(z+a)$, in the integral $\tilde R$, Eq.(\ref{Rtilde}). The expansion is in powers of $a/z$, which is smaller than one in the pertinent interval of integration.
We can now give the result for arbitrary $d$. The general '$a$-singular part' is
\begin{equation}
\tilde I_{{\rm IR}} = \sum_{m=0}^{\infty}\frac{\Gamma(d+1)\Gamma(-d-(m+1))}{\Gamma(-d)\Gamma(d+1-m)}(ab)^{m}a^{d+1} ,
\end{equation}
while the '$b$-singular part' is
\begin{equation}
\tilde I_{{\rm b}} = \sum_{m=0}^{\infty}\frac{\Gamma(d+1)\Gamma(m-2d-1)}{\Gamma(-d)}\frac{(ab)^{m}}{m!}b^{-(d+1)} .
\end{equation}
In the case $d=-\frac{3}{2}$, the last expression yields indeed
\begin{eqnarray*}
\tilde I_{{\rm b}} &=& \sum_{m=0}^{\infty}\frac{\Gamma(-\frac{1}{2})}{\Gamma(\frac{3}{2})}(m+1)(ab)^{m}\sqrt{b}
= \frac{-4\sqrt{b}}{(1-ab)^{2}}~ ,
\end{eqnarray*}
which is confirmed by the result of the direct calculation. We have
\begin{equation}
\tilde I_{{\rm BL}} = \tilde I_{{\rm IR}} + \tilde I_{{\rm b}},
\end{equation}
where the second part is the one we do not want in a genuine regular part (it will appear in $\tilde R$ because the non-analytic behaviour for $b\rightarrow 0$ is not present in the original integral $\tilde I$, with which we started).
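The decomposition for $d=-\frac{3}{2}$ can be confirmed numerically; a sketch (not part of the paper; the values of $a$ and $b$ are arbitrary):

```python
import math

# Check of the d = -3/2 decomposition I~_BL = I~_IR + I~_b, with
#   I~_BL = ∫_0^∞ (z+a)^(-3/2) (1+bz)^(-3/2) dz,
#   I~_IR = (2/sqrt(a)) (1+ab)/(1-ab)^2,   I~_b = -4 sqrt(b)/(1-ab)^2 .
a, b = 0.04, 0.09

I_IR = 2.0 / math.sqrt(a) * (1 + a * b) / (1 - a * b) ** 2
I_b = -4.0 * math.sqrt(b) / (1 - a * b) ** 2
I_BL_closed = 2.0 / (1 - a * b) ** 2 * ((1 + a * b) / math.sqrt(a)
                                        - 2.0 * math.sqrt(b))

# I~_BL by quadrature, mapping z = t/(1-t) onto the finite interval (0,1)
N = 200_000
h = 1.0 / N
I_BL_num = 0.0
for i in range(N):
    t = (i + 0.5) * h
    z = t / (1 - t)
    I_BL_num += h * (z + a) ** -1.5 * (1 + b * z) ** -1.5 / (1 - t) ** 2

assert abs(I_IR + I_b - I_BL_closed) < 1e-12
assert abs(I_BL_num - I_BL_closed) < 1e-3 * I_BL_closed
```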
What we will need in the next section is the result for $d=-\epsilon$, where, as always, $\epsilon$ is considered sufficiently small that terms of $O(\epsilon^{2})$ can be neglected. In this case,
\begin{equation}
\tilde I_{{\rm IR}} = \sum_{m=0}^{\infty} \frac{\Gamma(1-\epsilon)\Gamma(-(m+1)+\epsilon)}{\Gamma(\epsilon)\Gamma(1-m-\epsilon)}(ab)^{m}a^{1-\epsilon}\, .
\end{equation}
After some $\Gamma$-function algebra, one gets for the sum
\begin{eqnarray*}
\tilde I_{{\rm IR}} &=& a^{1-\epsilon}(-1-\epsilon) -\epsilon \sum_{m=1}^{\infty}\frac{a(ab)^{m}}{m(m+1)} + O(\epsilon^{2}) \\ &=& a(-1-\epsilon + \epsilon \ln(a)) - \epsilon a \sum_{m=1}^{\infty}\biggl(\frac{1}{m}-\frac{1}{m+1}\biggr)(ab)^{m}\\
&& \qquad \qquad \qquad \qquad\qquad \qquad +O(\epsilon^{2}).
\end{eqnarray*}
The series can easily be summed
\begin{equation}
\tilde I_{{\rm IR}} = -a-2\epsilon a + \epsilon a\ln(a)+\epsilon\biggl(\frac{ab-1}{b}\biggr)\ln(1-ab) .
\end{equation}
Please note that the last term is expandable in $b$, and that we have left out terms of $O(\epsilon^{2})$.
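The $\epsilon$-expansion can be tested by summing the $\Gamma$-function series of the infrared singular part directly at a small but finite $\epsilon$; a sketch (our own numerical check, with arbitrary small values for $a$, $b$ and $\epsilon$):

```python
import math

# Sum the series for I~_IR at d = -eps and compare with the closed form
#   -a - 2 eps a + eps a ln(a) + eps ((ab-1)/b) ln(1-ab)   up to O(eps^2).
a, b, eps = 0.02, 0.05, 1e-4

series = 0.0
for m in range(40):
    series += (math.gamma(1 - eps) * math.gamma(-(m + 1) + eps)
               / (math.gamma(eps) * math.gamma(1 - m - eps))
               * (a * b) ** m * a ** (1 - eps))

closed = (-a - 2 * eps * a + eps * a * math.log(a)
          + eps * ((a * b - 1) / b) * math.log(1 - a * b))

# agreement up to terms of O(eps^2)
assert abs(series - closed) < 1e-6
```

The $\Gamma$-functions are evaluated slightly off their poles at non-positive integers, which is exactly the mechanism that produces the finite coefficients $-\epsilon/(m(m+1))$ of the series.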
With the very same method, we can also compute the '$b$-singular part' for the case $d=-\epsilon$. The result is
\begin{eqnarray}
\tilde I_{{\rm b}} &=& \frac{1}{2}\biggl(a-\frac{1}{b}\biggr)+\frac{1}{2}\epsilon \biggl(a-\frac{1}{b}\biggr)\ln(b) \nonumber \\
&+&\epsilon\biggl(\frac{1-ab}{b}\biggr)(\ln(1-ab)-1).
\label{Itb}
\end{eqnarray}
We have checked that $\tilde I_{{\rm BL}}$, calculated with the method of Becher and Leutwyler, is again the sum of the '$a$-singular part' and the '$b$-singular part', as was the case for $d=-\frac{3}{2}$.
We have now arrived at a method which allows us to split integrals of the form of $\tilde I_{{\rm BL}}$ into an 'infrared singular part', behaving non-analytically as $a\rightarrow 0$, and a part showing such behaviour for $b\rightarrow 0$. Note that this decomposition is unique, and that both parts are of a different analyticity character (for fractional $d$), concerning the small parameters $a$ and $b$, respectively. In the next section, we will see how this method can be applied to the scalar loop integral $I_{V\phi}$.
\section{Corrected infrared singular part}
\setcounter{equation}{0}
\label{sec:IRcorr}
The integral we need for the calculation of $I_{V\phi}$ is
\begin{displaymath}
I = \int_{0}^{1}dz\ (b(z-x_{1})(z-x_{2}))^{\frac{d}{2}-2}.
\end{displaymath}
Extracting a factor
\begin{displaymath}
(-bx_{2})^{\frac{d}{2}-2} = (1-(a+b)+\ldots)^{\frac{d}{2}-2}~,
\end{displaymath}
from the integral, which is expandable in $a$ and $b$, the remainder is of the form of $\tilde I$ treated in the last section, because $x_{1}$ and $x_{2}^{-1}$ are small parameters of $O(p^{2})$:
\begin{displaymath}
I = (-bx_{2})^{\frac{d}{2}-2}\tilde I'
\end{displaymath}
where
\begin{displaymath}
\tilde I' = \int_{0}^{1}dz\ (z+(-x_{1}))^{\frac{d}{2}-2}(1+(-x_{2})^{-1}z)^{\frac{d}{2}-2}.
\end{displaymath}
Doing the appropriate substitutions in Eq.(\ref{Itb}), the infrared singular part of $I$ becomes
\begin{equation}
I_{{\rm IR}}= x_{1}+\epsilon x_{1}-\frac{\epsilon}{2}x_{1}\ln(a)-\frac{\epsilon}{2}(x_{1}-x_{2})\ln\biggl(1-\frac{x_{1}}{x_{2}}\biggr),
\end{equation}
where we have used $a=bx_{1}x_{2}$.
For completeness, we also rewrite the '$b$-singular part':
\begin{eqnarray}
I_{{\rm b}} &=& \frac{x_{2}-x_{1}}{2}-\frac{\epsilon}{4}(x_{2}-x_{1})\ln(bx_{2}^{2})
\nonumber \\
&+& \frac{\epsilon}{2}(x_{2}-x_{1})\biggl(1-\ln\biggl(1-\frac{x_{1}}{x_{2}}\biggr)\biggr).
\end{eqnarray}
As a check, we add it to the infrared singular part:
\begin{equation}
I_{{\rm IR}}+I_{{\rm b}}=z_{0}\biggl(1+\epsilon-\frac{\epsilon}{2}\ln(a)\biggr)-\frac{\epsilon}{4}(x_{1}-x_{2})\ln\biggl(\frac{x_{1}}{x_{2}}\biggr).
\end{equation}
Here we have used the notation
\begin{displaymath}
z_{0} = \frac{x_{1}+x_{2}}{2}
\end{displaymath}
as in Ref.~\cite{BL}. Again, the sum of the '$a$-singular part' and the '$b$-singular part' is the result for the integral
\begin{displaymath}
\int_{0}^{\infty}dz (b(z-x_{1})(z-x_{2}))^{-\frac{\epsilon}{2}}
\end{displaymath}
when it is computed using the standard IR prescription.
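This sum can be verified numerically; the following sketch (our own check, with arbitrary small test values for $a$, $b$ and $\epsilon$, and $a=bx_{1}x_{2}$ holding automatically by Vieta's formulas) evaluates both sides at finite $x_{1}$, $x_{2}$:

```python
import math

# Check that I_IR + I_b reproduces the closed-form combination
#   z0 (1 + eps - (eps/2) ln a) - (eps/4)(x1-x2) ln(x1/x2),  z0=(x1+x2)/2,
# to first order in eps, with a = b x1 x2.
a, b, eps = 0.005, 0.01, 1e-3

disc = math.sqrt((b + a - 1) ** 2 - 4 * a * b)
x1 = ((b + a - 1) + disc) / (2 * b)   # small root, O(p^2)
x2 = ((b + a - 1) - disc) / (2 * b)   # large root
z0 = (x1 + x2) / 2

L = math.log(1 - x1 / x2)
I_IR = (x1 + eps * x1 - 0.5 * eps * x1 * math.log(a)
        - 0.5 * eps * (x1 - x2) * L)
I_b = ((x2 - x1) / 2 - 0.25 * eps * (x2 - x1) * math.log(b * x2 ** 2)
       + 0.5 * eps * (x2 - x1) * (1 - L))

rhs = (z0 * (1 + eps - 0.5 * eps * math.log(a))
       - 0.25 * eps * (x1 - x2) * math.log(x1 / x2))

assert abs((I_IR + I_b) - rhs) < 1e-9
```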
But the correct infrared singular part for our case is only a certain part of it, namely, $I_{{\rm IR}}$.
The part which must be split off here, $I_{{\rm b}}$, vanishes if
\begin{displaymath}
x_{1}=x_{2} \Rightarrow (b+a-1)^{2}-4ab = 0 ,
\end{displaymath}
(see Eq.(\ref{zeros})), which is the case for
\begin{displaymath}
q^{2} = (M_{V}\pm M_{\phi})^{2} \equiv q^{2}_{\pm} .
\end{displaymath}
We cannot trust our procedure for $q^{2} > M_{V}^{2}$. Therefore the value $q^{2}_{-}$ should be considered as the point where the standard infrared singular part of \cite{BL} and the representation given here, i.e. $I_{{\rm IR}}$, can be joined together.
We emphasize that the kind of argument we have given here is completely in the spirit of the method of Becher and Leutwyler, in that we examined the analyticity properties of the parameter integrals for a general dimension parameter $d$.
The calculation of the infrared singular part of the scalar loop integral $I_{V\phi}$ can now be completed:
\begin{eqnarray}
I_{V\phi}^{{\rm IR}} &=& -\frac{M_{V}^{d-4}}{(4\pi)^{\frac{d}{2}}}
\Gamma\left(2-\frac{d}{2}\right) \, I_{{\rm IR}}
\nonumber\\
&=& 2x_{1}\lambda -\frac{1}{16\pi^{2}}\biggl(x_{1}-x_{1}\ln(a)\nonumber \\
&& -(x_{1}-x_{2})\ln\biggl(1-\frac{x_{1}}{x_{2}}\biggr)\biggr) .
\label{IVphiIR}
\end{eqnarray}
Terms of $O(\epsilon)$ have been omitted. We claim that the difference
\begin{displaymath}
R' \equiv I_{V\phi}-I_{V\phi}^{{\rm IR}}
\end{displaymath}
is the appropriate regular part. This means that it is expandable in the small parameters $a$ and $b$ around zero. The proof consists of two observations:
1) Both $I_{V\phi}$ and $I_{V\phi}^{{\rm IR}}$ contain the same terms non-analytic in $a$, namely,
\begin{displaymath}
\frac{1}{16\pi^{2}}x_{1}\ln(a) .
\end{displaymath}
The difference is therefore expandable in $a$.
2) $I_{V\phi}$ was expandable in $b$ from the start, whereas $I_{V\phi}^{{\rm IR}}$ is expandable in $b$ {\em by construction}. Therefore the difference is of course also expandable in $b$.
Moreover, $R'$ is unique, because we extracted exactly the part of $I_{V\phi}$ proportional to fractional powers of $a$ for fractional dimension parameter $d$. We conclude that $R'$ is a well-defined regular part, and that it can be absorbed in a renormalization of the LECs of the effective Lagrangian.
We add the remark that the expansion of Eq.(\ref{IVphiIR}) is reproduced by
using the procedure of \cite{Tang}, i.e. expanding the 'hard pole structure'
and interchanging summation and integration. We have checked this
to order $O(p^{8})$, but a formal proof that it will give the same
result to {\em all} orders is still missing. It seems that both procedures are
indeed consistent (remember, however, the remarks made at the end of
Sect.~\ref{sec:softhard}). This means that the 'low-energy-portion' of loop integrals is,
in this sense, unambiguous.
This result is not really surprising: From the arguments of
Sect.~\ref{sec:softhard}, it is seen that an integral like
$I_{{\rm soft}}$ (see eq.(\ref{soft})) is a pure 'soft pole' integral,
i.e. only involving the pole structure associated with the Goldstone boson
propagator, and thus having no regular part, while an integral like
$I_{{\rm hard}}$ is regular in the Goldstone boson mass, and does not contain
fractional powers of $M_{\phi}$ for any choice of the dimension parameter $d$.
\section{Goldstone boson self-energy}
\setcounter{equation}{0}
\label{sec:self1}
We are now ready to compute the vector meson contribution of the Goldstone boson self-energy.
We consider first the novel type of diagrams where the heavy mass line appears in the loop,
see Fig.~\ref{fig:self}b.
We get for this amplitude
\begin{equation}
\frac{12iG_{V}^{2}}{F^{4}}\delta^{ab} I_{\Sigma} \ ,
\end{equation}
using the notation
\begin{equation}\label{Isigma}
I_{\Sigma}(q) = i\int \frac{d^{d}k}{(2\pi)^{d}}\frac{q^{2}k^{2}-(q\cdot k)^{2}}{(k^{2}-M_{V}^{2}+i\epsilon)((k-q)^{2}-M_{\phi}^{2}+i\epsilon)} .
\end{equation}
This integral may be decomposed in a linear combination of scalar loop integrals by standard techniques:
\begin{equation}\label{Isigmad}
I_{\Sigma}= c_{\phi}I_{\phi}+c_{V}I_{V}+c_{V\phi}I_{V\phi},
\end{equation}
where the coefficients $c_{i}$ are given by
\begin{eqnarray}
c_{\phi} &=& \frac{q^{2}+M_{\phi}^{2}-M_{V}^{2}}{4},\nonumber\\
c_{V} &=& \frac{q^{2}-M_{\phi}^{2}+M_{V}^{2}}{4},\nonumber \\
c_{V\phi} &=&\frac{4q^{2}M_{V}^{2}-(q^{2}-M_{\phi}^{2}+M_{V}^{2})^{2}}{4},
\end{eqnarray}
while the scalar loop integrals are defined as
\begin{eqnarray}\label{Iphi}
I_{\phi} &=& i\int \frac{d^{d}k}{(2\pi)^{d}}\frac{1}{k^{2}-M_{\phi}^{2}+i\epsilon} = 2M_{\phi}^{2}\lambda + \frac{M_{\phi}^{2}}{16\pi^{2}}\ln(a), \\
I_{V} &=& i\int \frac{d^{d}k}{(2\pi)^{d}}\frac{1}{k^{2}-M_{V}^{2}+i\epsilon} = 2M_{V}^{2}\lambda , \label{IV}
\end{eqnarray}
and the scalar loop integral $I_{V\phi}$ is defined in Eq.(\ref{IVphidef}).
We repeat the remark that we use $\mu = M_{V}$ for the renormalization scale.
What power would we like to have for this self-energy amplitude? We have one loop integration, two vertices of order $O(p^{2})$ and one Goldstone boson propagator. The vector meson propagator is counted as $O(1)$ here, since the vector meson line is not connected to any external heavy particle lines. So we end up with the 'expected' power
\begin{displaymath}
D(\Sigma) = 4 + 2\times2 - (2 + 0) = 6~,
\end{displaymath}
using Eq.(\ref{chiraldim}).
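The arithmetic of this counting can be packaged in a small helper; the following is an illustrative sketch (the function and its argument structure are our own bookkeeping, not the general formula of Eq.(\ref{chiraldim})):

```python
def chiral_dim(loops, vertex_orders, gb_props, heavy_props):
    # naive chiral dimension: 4 per loop, plus the chiral orders of the
    # vertices, minus 2 per Goldstone boson propagator; heavy (vector
    # meson) propagators inside the loop are counted as O(1), i.e. 0
    return 4*loops + sum(vertex_orders) - 2*gb_props - 0*heavy_props

# self-energy graph: one loop, two O(p^2) vertices,
# one Goldstone boson and one vector meson propagator
print(chiral_dim(1, [2, 2], 1, 1))  # -> 6
```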
The word 'expected' was used from a naive point of view, because we are already sophisticated enough to expect that the 'hard pole structure' associated with the vector meson propagator will give the loop integral a high energy contribution that spoils the power counting.
Indeed, using the decomposition into scalar loop integrals and the expansions of the variables $x_{1}$ and $x_{2}$ given in Eqs.~(3.10) and (3.11), respectively, it is straightforward to see that $I_{\Sigma}$ contains the following terms which violate the power counting:
\begin{displaymath}
I_{\Sigma}= \frac{1}{4}M_{V}^{4}\biggl(\lambda(6b-2b^{2}+6ab)+\frac{1}{16\pi^{2}}\biggl(\frac{b}{2}+\frac{ab}{2}-\frac{5}{6}b^{2}\biggr)\biggr) + \ldots,
\end{displaymath}
where the dots stand for terms satisfying the power counting, i.e. they are of order $O(p^{6})$ or higher.
To find the 'infrared singular part' of $I_{\Sigma}$, it is sufficient to find the infrared singular part of each of the scalar loop integrals. This is because the coefficients $c_{i}$ do not contain any fractional powers of $M_{\phi}$ for any dimension parameter $d$.
The infrared singular part of $I_{V\phi}$ has been computed in the last section. The integrand of $I_{V}$ is a pure hard pole structure without any dependence on a small parameter like $a$ or $b$, and will therefore not have an infrared singular part. Finally, the integral $I_{\phi}$ is proportional to a fractional power of $M_{\phi}$, as a direct calculation using dimensional regularization shows, so it has no regular part (it does not contain a hard pole structure which could be expanded).
The 'infrared regularized' self-energy amplitude is thus
\begin{equation}
I_{\Sigma}^{{\rm IR}}= \frac{12iG_{V}^{2}}{F^{4}}\delta^{ab}(c_{\phi}I_{\phi} + c_{V\phi}I_{V\phi}^{{\rm IR}}),
\end{equation}
where $I_{V\phi}^{{\rm IR}}$ is given in eq.(\ref{IVphiIR}).
Using the expansions of the $x_{i}$ and $\ln(1-y) = -y - {y^{2}}/{2} - \ldots$
($|y| < 1$),
it can easily be checked that the 'infrared regularized' amplitude obeys power counting.
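The quoted logarithmic series itself is easily spot-checked numerically; this is only a sanity check of the standard expansion, with an arbitrary value $y=0.2$:

```python
import math

y = 0.2  # any |y| < 1
# ln(1 - y) = -y - y^2/2 - y^3/3 - ...
series = -sum(y**n / n for n in range(1, 60))
assert abs(series - math.log(1 - y)) < 1e-12
```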
Before we go on and apply our modified version of infrared regularization to other graphs, we want to mention one more thing. Using the vector field approach, cf.
App.~\ref{app:vec}, the self-energy graph leads to the expression
\begin{displaymath}
\frac{12iG_{V}^{2}}{M_{V}^{2}F^{4}}\delta^{ab}I'_{\Sigma}\ ,
\end{displaymath}
where now
\begin{displaymath}
I'_{\Sigma} = i\int \frac{d^{d}k}{(2\pi)^{d}}\frac{k^{2}(k^{2}q^{2}-(k\cdot q)^{2})}{(k^{2}-M_{V}^{2}+i\epsilon)((k-q)^{2}-M_{\phi}^{2}+i\epsilon)}.
\end{displaymath}
Subtracting the amplitude computed in the vector field approach from the amplitude computed in the tensor field approach, we get
\begin{eqnarray*}
&&A(W)-A(V) =\frac{12iG_{V}^{2}}{M_{V}^{2}F^{4}}\delta^{ab}i \nonumber\\
&&\times
\int \frac{d^{d}k}{(2\pi)^{d}}\frac{(M_{V}^{2}-k^{2})(k^{2}q^{2}-(k\cdot q)^{2})}{(k^{2}-M_{V}^{2}+i\epsilon)((k-q)^{2}-M_{\phi}^{2}+i\epsilon)}\\ &&= \frac{12G_{V}^{2}}{M_{V}^{2}F^{4}}\delta^{ab}\int \frac{d^{d}k}{(2\pi)^{d}}\frac{k^{2}q^{2}-(k\cdot q)^{2}}{(k-q)^{2}-M_{\phi}^{2}+i\epsilon}
\end{eqnarray*}
But this is the same result as one would get for a self-energy diagram where the vector meson line is replaced by a contact term interaction
\begin{displaymath}
\frac{G_{V}^{2}}{8M_{V}^{2}}\langle [u_{\mu},u_{\nu}][u^{\mu},u^{\nu}]\rangle,
\end{displaymath}
leading to a four-$\phi$ interaction
\begin{displaymath}
-\frac{G_{V}^{2}}{M_{V}^{2}F^{4}}f^{abk}f^{cdk}\partial_{\mu}\phi^{a}\partial_{\nu}\phi^{b}\partial^{\mu}\phi^{c}\partial^{\nu}\phi^{d} .
\end{displaymath}
This confirms the 'duality' between the vector and the tensor field approach \cite{Eck}.
Note that this result does not depend on the regularization scheme:
it was derived without even computing the integrals. Since the difference of the
amplitudes is a pure 'soft pole' Goldstone boson loop diagram, we see that
the 'hard part' (i.e. the regular part) is representation independent.
In particular, both descriptions produce the same power counting violating terms.
\section{Triangle graph}
\setcounter{equation}{0}
\label{sec:tria}
There is one more one-loop diagram of $O(p^{6})$ where the vector meson line
shows up as a loop line: the triangle diagram of Fig.~\ref{fig:tri}.
\begin{figure}[htb]
\centerline{\psfig{file=tri.eps,height=2.1cm}}
\caption{Triangle graph as it contributes e.g. to the pion vector form factor.
Solid, dashed and wiggly lines denote vector mesons (heavy particles),
Goldstone bosons and external sources (fields), respectively.
\label{fig:tri}}
\end{figure}
\noindent
One gets the following expression for the triangle graph:
\begin{equation}
{\mathcal A}_\Delta
= \frac{6G_{V}^{2}}{F^{4}}\biggl(f^{ab3}+\frac{1}{\sqrt{3}}f^{ab8}\biggr)I_{\Delta}^{\tau},
\end{equation}
where the integral is
\begin{eqnarray}
&& -i \, I_{\Delta}^{\tau}(p,p+k)= \int \frac{d^{d}q}{(2\pi)^{d}} \nonumber\\
&&\times\frac{((2p+k) - 2q)^{\tau}
(p\cdot (p+k)q^{2}-(p\cdot q)((p+k)\cdot q))}{(q^{2}-M_{V}^{2})((q-p)^{2}-M_{\phi}^{2})
((q-p-k)^{2}-M_{\phi}^{2})}\, . \nonumber \\
\end{eqnarray}
A decomposition of $I_{\Delta}^{\tau}$ as a linear combination of scalar loop
integrals is given in App.~\ref{app:int} (for the Goldstone boson momenta on mass shell).
What concerns us here is the question how the infrared singular part of this integral
can be obtained. The decomposition in scalar loop integrals contains an
integral we have not yet treated, namely
\begin{eqnarray}
&&I_{V\phi\phi}(p,p+k)\equiv \int \frac{d^{d}q}{(2\pi)^{d}} \nonumber \\
&&\times \frac{i}{(q^{2}-M_{V}^{2})((q-p)^{2}-M_{\phi}^{2})((q-(p+k))^{2}-M_{\phi}^{2})}~.
\nonumber \\ &&
\end{eqnarray}
Fortunately, it is possible to reduce the problem of finding the infrared singular part of this integral to the case we have already examined. The procedure can in full generality be found in section 6.1 of \cite{BL}. We show how this works in the above example: Introducing one more Feynman parametrization, we write $I_{V\phi\phi}$ as
\begin{eqnarray*}
&&\int \frac{d^{d}q}{(2\pi)^{d}}\frac{i}{q^{2}-M_{V}^{2}}\int_{0}^{1}dw \times \\&&
\frac{\partial}{\partial M_{\phi}^{2}}\frac{1}{(1-w)((q-p)^{2}-M_{\phi}^{2})+w((q-(p+k))^{2}-M_{\phi}^{2})} \\ &&= \int\frac{d^{d}q}{(2\pi)^{d}}\frac{i}{q^{2}-M_{V}^{2}}\int_{0}^{1}dw \times \\
&&\frac{\partial}{\partial M_{\phi}^{2}}\frac{1}{(q-(p+wk))^{2}-(M_{\phi}^{2}-k^{2}w(1-w))}.
\end{eqnarray*}
The momentum integral is now of the form of $I_{V\phi}$, with the operator
\begin{equation}
\Delta(\ldots) \equiv \int_{0}^{1}dw\ \frac{\partial}{\partial M_{\phi}^{2}}(\ldots)
\end{equation}
acting on it. We can insert our result for $I_{V\phi}^{{\rm IR}}$, with the substitutions
\begin{eqnarray}
a = \frac{M_{\phi}^{2}}{M_{V}^{2}} &\rightarrow& \frac{M_{\phi}^{2}-k^{2}w(1-w)}{M_{V}^{2}} = a'~,
\\ b=\frac{p^{2}}{M_{V}^{2}} &\rightarrow& \frac{(p+wk)^{2}}{M_{V}^{2}} = b'~.
\end{eqnarray}
Please note that the external Goldstone boson momentum is now called $p$ instead of $q$.
Also note that the new variables $a'$ and $b'$ are also of $O(p^{2})$,
which allows us to take over the treatment of infrared regularization presented in the foregoing sections.
We must show that the operator $\Delta$ does not disturb the properties of infrared singularity
and power counting. The 'dangerous' part of this operator is the derivative with
respect to $M_{\phi}^{2}$, since it changes the chiral order.
It is clear from the above definitions that (for fixed $k^{2}$)
\begin{displaymath}
M_{V}^{2}\frac{\partial}{\partial M_{\phi}^{2}}= \frac{\partial}{\partial a}
= \frac{\partial}{\partial a'} \ .
\end{displaymath}
We know from the derivation of the infrared singular part that it can be written in the general form
\begin{displaymath}
I_{V\phi}^{{\rm IR}}(a',b') = (a')^{\frac{d}{2}-1}\sum_{m=0}^{\infty}\sum_{n=0}^{\infty}c_{mn}a'^{m}b'^{n}
\end{displaymath}
with some numerical coefficients $c_{mn}$ that depend only on the dimension $d$. For $d\rightarrow 4$, this gives the correct order $O(p^{2})$ for $I_{V\phi}^{{\rm IR}}$. Letting the operator $\Delta$ act on this expression, we get
\begin{eqnarray}
I_{V\phi\phi}^{{\rm IR}} &=& \Delta I_{V\phi}^{{\rm IR}}(a',b') \nonumber\\
&=& \int_{0}^{1}dw\ \frac{1}{M_{V}^{2}}\frac{\partial}{\partial a'}
\biggl((a')^{\frac{d}{2}-1}\sum_{m=0}^{\infty}
\sum_{n=0}^{\infty}c_{mn}a'^{m}b'^{n}\biggr) \nonumber \\
&=& \int_{0}^{1}dw\ (a')^{\frac{d}{2}-2}\sum_{m=0}^{\infty}
\sum_{n=0}^{\infty}\left(\frac{d}{2}-1+m\right)c_{mn}a'^{m}b'^{n}~.\nonumber \\&&
\end{eqnarray}
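The term-by-term differentiation performed in the last step, $\partial_{a'}\,a'^{\frac{d}{2}-1+m} = (\frac{d}{2}-1+m)\,a'^{\frac{d}{2}-2+m}$, can be spot-checked by a finite difference for a fractional dimension parameter; the numerical values below are arbitrary:

```python
d, m, a = 3.7, 2, 0.3          # illustrative fractional d, integer m, 0 < a < 1
h = 1e-6
f = lambda x: x**(d/2 - 1 + m)
numeric = (f(a + h) - f(a - h)) / (2*h)   # central difference
exact = (d/2 - 1 + m) * a**(d/2 - 2 + m)
assert abs(numeric - exact) < 1e-6
```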
This shows that the expansion of the thus defined infrared singular part of $I_{V\phi\phi}$ starts with $M_{\phi}^{d-4}$, as one expects for such an integral by simple power counting. We learn from the last expression that, in principle, it is sufficient to know the chiral expansion of $I_{V\phi}^{{\rm IR}}$ to arrive at the chiral expansion of $I_{V\phi\phi}^{{\rm IR}}$. The only problem for practical calculations is that the parameter integrals over $w$ are not at all of a simple form, because $a'$, $b'$, and therefore also $x'_{1}$ and $x'_{2}$, defined in analogy to Eq.~(3.9), are nontrivial functions of $w$.
For $d\rightarrow 4$, $I_{V\phi\phi}^{{\rm IR}}$ is
\begin{eqnarray*}
&&\int_{0}^{1}dw\ \frac{1}{M_{V}^{2}}\frac{\partial}{\partial a'}\biggl(2x'_{1}\lambda-\frac{1}{16\pi^{2}}\biggl(x'_{1}-x'_{1}\ln(a') \\
&&\qquad -(x'_{1}-x'_{2})\ln\biggl(1-\frac{x'_{1}}{x'_{2}}\biggr)\biggr)\biggr) \\
&&= \int_{0}^{1} \frac{dw}{M_{V}^{2}}\biggl(2y_{1}\lambda-\frac{1}{16\pi^{2}}\biggl(y_{1}-y_{1}\ln(a')-\frac{x'_{1}}{a'} \\
&&\qquad -(y_{1}-y_{2})\ln\biggl(1-\frac{x'_{1}}{x'_{2}}\biggr)-\frac{y_{1}x'_{2}-y_{2}x'_{1}}{x'_{2}}\biggr)\biggr),
\end{eqnarray*}
where we defined
\begin{displaymath}
y_{1,2}=\frac{\partial}{\partial a'}x'_{1,2} = \frac{1}{2b'}\biggl(1 \pm \frac{b'-a'-1}{\sqrt{(b'+a'-1)^{2}-4a'b'}}\biggr).
\end{displaymath}
The singularity structure of $I_{V\phi\phi}$ is richer than the one of $I_{V\phi}$ because by the definition of $a'$, a term like $\ln(a')$ not only contains the infrared singularity for $M_{\phi}\rightarrow 0$, but also a cut for $k^{2}=t > 4M_{\phi}^{2}$, which is associated with the two-Goldstone boson production threshold.
The important point is that, having the prescription for $I_{V\phi}$, we can find the infrared singular part of any loop integral where a small momentum of order $O(p)$ flows through the heavy particle line(s) (the case treated in this paper) or where a nearly on-mass-shell heavy particle is involved (in which case the results of \cite{BL} can be used directly). The principle is now well understood, but practical calculations will be difficult for complicated diagrams, because one needs a parameter integration for every pair of propagators which are combined into one (parameter-dependent) pole structure; an example has been shown in the treatment of $I_{V\phi\phi}$. Moreover, the decomposition of a complicated loop integral involving many vertices into scalar loop integrals will be lengthy and complicated. But these are no longer {\em conceptual} problems. The conceptual problem of the power-counting violating terms has been solved by dropping the regular parts of all loop integrals, retaining the infrared singular parts that stem from the region where the loop momentum is $O(p)$. As a further consequence, many diagrams, namely those where the loops are formed of heavy particle lines only, can be dropped from the start, because the respective loop integrals only contain 'hard pole' structures and do not lead to an infrared singular part.
Using this scheme, we can proceed and treat all kinds of diagrams where heavy resonances (not only vector mesons) occur in loops, and whose momenta are to be counted as either nearly on-shell or $O(p)$ by the perturbative scheme of power counting. We finally remark
that the treatment of the vector meson loop graphs in the analysis of the nucleon
electromagnetic form factors performed in \cite{Kubis} is consistent with the procedure
we have established.
\section{Vector meson self-energy graph}
\setcounter{equation}{0}
\label{sec:Vself}
In the last section we have considered diagrams where the vector meson
shows up as a strongly virtual intermediate state, with a small momentum
flowing through the vector meson line. We found the 'infrared singular' part
of the corresponding amplitude, and we saw that it was necessary to modify
the method
of {\cite{BL}}, where all particles in the intermediate state
were considered as being close to their respective mass shell.
It is now natural to ask: What happens if the 'light' particles
(the Goldstone bosons) are far from their mass shell? In principle,
the 'hard momentum part' of the pionic intermediate states has been
integrated out in the effective theory. But in analogy to the case of
the treatment of the 'heavy' vector meson in the last chapter, it
might be useful to take these degrees of freedom into account in a
systematic fashion, because in this way one sums up (infinitely many)
higher order graphs. We will encounter such a situation in the following section.
\subsection{One more case of IR regularization}
\label{subsec:IRmore}
In Fig.~\ref{fig:Vself} we show a graph contributing to the vector meson
self energy. In the case where there is a 'small' ($O(p)$)
momentum flowing through the vector meson line, the corresponding
amplitude would be a homogeneous function of small parameters
(external momentum and quark masses), since the large scale (in this case,
the vector meson mass) does not show up in the loop line propagators
and thus cannot produce a 'hard pole' contribution.
The loop integral is the same as in the Goldstone boson sector,
and therefore has no 'regular part'.
\begin{figure}[htb]
\centerline{\psfig{file=vself.eps,height=1.7cm}}
\caption{The vector meson self-energy diagram with a pure
Goldstone boson loop. Solid (dashed) lines denote
vector mesons (Goldstone bosons).
\label{fig:Vself}}
\end{figure}
\noindent
When computing self-energy contributions, one is usually interested in the
case where the external momentum $P$ is close to the mass shell
of the corresponding particle. This leads to the appearance of the large scale
in the denominator of the integrand of the loop integral, and we expect
a power-counting violating contribution stemming from a 'hard pole' of the
integrand. The integral will develop a regular part in the terminology of
Becher and Leutwyler.
We start our analysis for the case $P^{2} \gg M_{\phi}^{2}$ with the question:
What is the 'soft part' of a diagram like Fig.~\ref{fig:Vself}? That is, which region of
the loop momentum integration produces the infrared singular part? Obviously,
the case where both Goldstone boson propagators are of $O(p^{-2})$ is
excluded by four-momentum conservation at the two vertices. The region
where both Goldstone boson lines are far from their mass shell is a pure
'hard-momentum effect' and thus belongs to the 'regular part'. The
soft part can only come from the region of the loop integration where
one line is soft (i.e. it carries an $O(p)$-momentum), and the other
Goldstone boson line carries the large momentum $P$ and is thus far
from its mass shell.
As an illustration of this argument, let us first try to extract the
'soft pole contribution' following the method of {\cite{Tang}}.
We examine the scalar loop integral
\begin{equation}\label{Idef}
I = i\int \frac{d^{d}k}{(2\pi)^{d}}\frac{1}{(k^{2}-M_{\phi}^{2})((k-P)^{2}-M_{\phi}^{2})}.
\end{equation}
Following the above reflections and the recipe of {\cite{Tang}},
we treat the momentum of one line as 'soft' and expand the propagator
associated with the other line. Then we interchange integration
and summation of the series, thereby 'destroying the hard pole':
\begin{eqnarray*}
I &\rightarrow& i\int\frac{d^{d}k}{(2\pi)^{d}}\frac{1}{k^{2}-M_{\phi}^{2}}\frac{1}{P^{2}}\sum_{n=0}^{\infty}\frac{(2P\cdot k + M_{\phi}^{2} - k^{2})^{n}}{(P^{2})^{n}}\\ &=& i\int\frac{d^{d}k}{(2\pi)^{d}}\frac{1}{k^{2}-M_{\phi}^{2}}\frac{1}{P^{2}}\sum_{n=0}^{\infty}\frac{(2P\cdot k)^{n}}{(P^{2})^{n}} \\ &\rightarrow& \sum_{n=0}^{\infty} \frac{i}{(P^{2})^{n+1}}\int\frac{d^{d}k}{(2\pi)^{d}}\frac{(2P\cdot k)^{n}}{k^{2}-M_{\phi}^{2}}\\ &=& \frac{1}{2}I_{\rm{soft}}.
\end{eqnarray*}
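The expansion step itself is just a geometric series, $((k-P)^{2}-M_{\phi}^{2})^{-1} = (P^{2})^{-1}\sum_{n}\bigl((2P\cdot k+M_{\phi}^{2}-k^{2})/P^{2}\bigr)^{n}$, which can be checked in a one-dimensional toy setting (all numbers below are arbitrary, chosen so that the expansion parameter is small):

```python
P, k, M2 = 1.0, 0.1, 0.02       # illustrative one-dimensional values
exact = 1.0 / ((k - P)**2 - M2)
r = (2*P*k + M2 - k**2) / P**2  # expansion parameter, |r| < 1 here
series = sum(r**n for n in range(200)) / P**2
assert abs(series - exact) < 1e-12
```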
The factor of $\frac{1}{2}$ appears because the full soft
part also includes the part where the other line (with momentum $P-k$)
is considered as 'soft'. Of course this part is equal to the above result
due to the symmetry of the graph. We note further that it is {\em not}
legitimate to resum the series in the above result to get
\begin{eqnarray*}
I' &=& i\int
\frac{d^{d}k}{(2\pi)^{d}}\frac{1}{(k^{2}-M_{\phi}^{2})(P^{2}-2k\cdot P)}
\\ &=& i\int\frac{d^{d}k}{(2\pi)^{d}}\int_{0}^{\infty}dz
\\ &\times&\frac{1}{[(k^{2}-M_{\phi}^{2})(1-z)+z((k-P)^{2}-M_{\phi}^{2})]^{2}}
\\ &=& -i\int\frac{d^{d}k}{(2\pi)^{d}}\int_{1}^{\infty}dz
\\ &\times&\frac{1}{[(k^{2}-M_{\phi}^{2})(1-z)+z((k-P)^{2}-M_{\phi}^{2})]^{2}}.
\end{eqnarray*}
This contains a 'hard pole' contribution and will not satisfy the power
counting scheme, which requires the scalar loop integral to be $O(p^{d-2})$,
because only one Goldstone boson propagator is booked as $O(p^{-2})$, while
the other Goldstone boson must be far off its mass shell (its momentum must
be of the order of $P$ by momentum conservation). To repeat, this power
counting is strictly valid only for the 'soft pole' part of the integral,
which we have identified as $I_{\rm soft}$.
The alert reader will note that the result $I_{\rm soft}$ corresponds
to a series of tadpole graphs, involving only one Goldstone boson propagator.
This can of course not be the whole story, because the amplitude of
Fig.~\ref{fig:Vself} has an imaginary part due to the production of two
Goldstone bosons in the intermediate state, while the tadpole sum does
not have such an imaginary part. In order to take only $I_{\rm soft}$ as
the regularized amplitude, one would have to write complex coefficients
in the effective Lagrangian, which we do not want. A direct calculation
of the full scalar loop integral shows that the imaginary part does not
satisfy the power counting mentioned above. This is related to the fact
that for large $P^{2}$ of the heavy external particle, the Goldstone
bosons produced in the decay of this particle are not to be considered
as 'soft'. Below the threshold, we have $P^{2}< 4M_{\phi}^{2}$, so $P^{2}$
cannot be considered as being very large compared to the
scale $M_{\phi}^{2}$ in that region, and we would have to take the
full integral $I$ as the soft part, and not $I_{\rm soft}$.
This phenomenon of the 'missing imaginary part' is consistent with
the findings of Ref.\cite{bijnens}, where this was noted using the
Heavy Vector Meson approach. We will not discuss this
further at this point and turn to the scheme of infrared regularization.
Doing the usual steps, we obtain
\begin{equation}
I = -\frac{\Gamma(2-\frac{d}{2})}{(4\pi)^{\frac{d}{2}}}(P^{2})^{\frac{d}{2}-2}
\int_{0}^{1}dz (D(z))^{\frac{d}{2}-2},
\end{equation}
where
\begin{equation}
D(z) = z^{2}-z+\frac{M_{\phi}^{2}}{P^{2}} .
\end{equation}
Motivated by the remarks made in the last paragraph, we will consider the
case that $P^{2}> 4M_{\phi}^{2}$. This is fulfilled for the case we are
interested in, where $P^{2}$ is close to the physical vector meson mass
squared, and $M_{\phi}$ is the mass of the particles we consider as Goldstone bosons.
Obviously, fractional powers of $M_{\phi}$ are produced in the parameter regions where
\begin{displaymath}
z^{2}-z = 0 \Rightarrow z=0 \ {\rm or}\ z=1,
\end{displaymath}
corresponding to the fact that either one or the other Goldstone boson
line in the loop carries soft momentum. In accord with the procedure of
Sect.~\ref{sec:IRnew} (cf. Eq.~(\ref{zeros})), we introduce the zeroes of $D(z)$,
\begin{eqnarray}
d_{1,2} &=& \frac{1}{2}(1 \mp \sigma), \nonumber \\
\sigma &=& \sqrt{1-\frac{4M_{\phi}^{2}}{P^{2}}}~.
\end{eqnarray}
Note that $\sigma \in \mathbf{R}$ and
\begin{displaymath}
0 < d_{2}-d_{1} = \sigma \leq 1 .
\end{displaymath}
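These properties of $D(z)$ are easy to verify numerically; the masses below are illustrative $\rho/\pi$-like values (an assumption, not taken from the text):

```python
import math

Mphi2, P2 = 0.14**2, 0.77**2   # illustrative values with P^2 > 4 M_phi^2
sigma = math.sqrt(1 - 4*Mphi2/P2)
d1, d2 = 0.5*(1 - sigma), 0.5*(1 + sigma)

D = lambda z: z*z - z + Mphi2/P2

# d_{1,2} are the zeros of D(z), and 0 < d2 - d1 = sigma <= 1
assert abs(D(d1)) < 1e-12 and abs(D(d2)) < 1e-12
assert 0 < d2 - d1 <= 1 and abs((d2 - d1) - sigma) < 1e-12
# D(z) is symmetric about z = 1/2, which is what permits the
# 'folding' of the parameter interval used in the text
assert abs(D(0.3) - D(0.7)) < 1e-12
```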
We can simplify our analysis by 'folding' the parameter interval
symmetrically,
\begin{displaymath}
\int_{0}^{1}dz (D(z))^{\frac{d}{2}-2} = 2\int_{0}^{\frac{1}{2}}dz
(D(z))^{\frac{d}{2}-2},
\end{displaymath}
allowing us to expand the pole due to the zero $d_{2}>\frac{1}{2}$:
\begin{eqnarray*}
&&\int_{0}^{1}dz (D(z))^{\frac{d}{2}-2} = \int_{0}^{1}dz
(z-d_{1})^{\frac{d}{2}-2}(z-d_{2})^{\frac{d}{2}-2} \\ &=&
2\sum_{m=0}^{\infty}(-d_{2})^{\frac{d}{2}-2-m}\frac{1}{m!}\frac{\Gamma(\frac{d}{2}-1)}{\Gamma(\frac{d}{2}-1-m)}\\
&&\qquad\times\int_{0}^{\frac{1}{2}}(z-d_{1})^{\frac{d}{2}-2}z^{m} dz .
\end{eqnarray*}
We have not yet changed the value of the parameter integral.
To find the 'infrared singular' part of the parameter integral
in the last line, we note that $d_{1}$ is proportional to $M_{\phi}^{2}$
and of $O(p^{2})$, and shift the integration variable to write
\begin{eqnarray*}
\int_{0}^{\frac{1}{2}}(z-d_{1})^{\frac{d}{2}-2}z^{m}dz &=&
\int_{-d_{1}}^{0} z^{\frac{d}{2}-2}(z+d_{1})^{m}dz \ \\
&+& \ \int_{0}^{\frac{1}{2}-d_{1}} z^{\frac{d}{2}-2}(z+d_{1})^{m}dz .
\end{eqnarray*}
Terms proportional to $d_{1}^{\frac{d}{2}}$ will only be produced
by the first term on the right-hand side (remember $m\in \mathbf{N}$).
Scaling the variable of integration with $d_{1}$, it takes the form
\begin{eqnarray*}
&&\int_{-d_{1}}^{0}z^{\frac{d}{2}-2}(z+d_{1})^{m}dz \\
&=&
(-1)^{m+1}(-d_{1})^{\frac{d}{2}-1+m}\int_{0}^{1}t^{\frac{d}{2}-2}(1-t)^{m}dt
\\ &=& (-1)^{m+1}(-d_{1})^{\frac{d}{2}-1+m}\frac{\Gamma(\frac{d}{2}-1)\Gamma(m+1)}{\Gamma(\frac{d}{2}+m)},
\end{eqnarray*}
where we substituted $z = -td_{1}$.
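The Euler integral used in the last step, $\int_{0}^{1}t^{\frac{d}{2}-2}(1-t)^{m}dt = \Gamma(\frac{d}{2}-1)\Gamma(m+1)/\Gamma(\frac{d}{2}+m)$, can be confirmed numerically for a non-integer dimension parameter (the values of $d$ and $m$ below are arbitrary):

```python
import math

d, m = 5.0, 3                   # illustrative non-integer d/2, integer m
N = 200000                      # midpoint rule
numeric = sum(((i + 0.5)/N)**(d/2 - 2) * (1 - (i + 0.5)/N)**m
              for i in range(N)) / N
exact = math.gamma(d/2 - 1) * math.gamma(m + 1) / math.gamma(d/2 + m)
assert abs(numeric - exact) < 1e-6
```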
The 'infrared singular part' of $I$ is thus
\begin{eqnarray}
&&I_{\rm{IR}} =
-\frac{2\Gamma(2-\frac{d}{2})(P^{2})^{\frac{d}{2}-2}}{(4\pi)^{\frac{d}{2}}}
\\&&\times
\sum_{m=0}^{\infty}\frac{(-1)^{m}(d_{1})^{\frac{d}{2}-1+m}(d_{2})^{\frac{d}{2}-2-m}(\Gamma(\frac{d}{2}-1))^{2}}{\Gamma(\frac{d}{2}-1-m)\Gamma(\frac{d}{2}+m)} \, .\nonumber
\end{eqnarray}
This expansion starts with $d_{1}^{\frac{d}{2}-1} \sim M_{\phi}^{d-2}$ and obeys low-energy power counting. The series could be summed up, but this is not necessary. Reviewing what we have done so far, it becomes clear that we have just selected a certain range of integration which produces the fractional powers of $M_{\phi}$. This step may be symbolized by
\begin{eqnarray}
I &=& \int_{0}^{1}dz (\ldots) \rightarrow \int_{0}^{d_{1}}dz (\ldots)
+ \int_{d_{2}}^{1}dz (\ldots)\nonumber\\
& =& 2\int_{0}^{d_{1}}dz (\ldots) = I_{\rm{IR}}.
\end{eqnarray}
Applying this to $I$ with $d=4-\epsilon$, we find
\begin{equation}\label{VselfIR}
I_{\rm{IR}}= 4d_{1}\lambda+\frac{1}{16\pi^{2}}\biggl(-2d_{1}+\ln(a)+\sigma\ln\biggl(\frac{1+\sigma}{1-\sigma}\biggr)-2\sigma\ln(\sigma)\biggr),
\end{equation}
while the 'regular part' is
\begin{equation}\label{VselfR}
I-I_{\rm{IR}}= (2-4d_{1})\lambda +\frac{1}{16\pi^{2}}\biggl(-(1-2d_{1})+2\sigma\ln(\sigma)-i\pi\sigma\biggr),
\end{equation}
which is indeed expandable in $M_{\phi}^{2}$ for $P^{2}>4M_{\phi}^{2}$. We have again used $M_{V}$ for the renormalization scale, and the variable $a$ defined in
Eq.~(\ref{abdef}).
It may be checked by expanding $d_{1}$ and $\sigma$ in powers of $M_{\phi}^{2}$ that the infrared singular part indeed satisfies the power counting rules, and also that
\begin{displaymath}
I_{\rm{IR}} = I_{\rm{soft}}.
\end{displaymath}
We have already remarked in the last chapter that the low-energy part of a loop integral is unambiguously defined in this sense.
The imaginary part of the scalar loop integral $I$ is
\begin{displaymath}
\frac{-i\sigma}{16\pi},
\end{displaymath}
whose chiral expansion starts at $O(1)$ and therefore does not obey the power counting rules. But it cannot be subtracted from the full amplitude, since it is not real. The corresponding width of the vector meson due to its possible decay into a pair of Goldstone bosons cannot simply be neglected. In principle, one should give the denominator of the vector meson propagator an imaginary part to deal with this fact.
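As a cross-check on Eqs.~(\ref{VselfIR}) and (\ref{VselfR}), their $\lambda$-subtracted finite parts should add up to an independent evaluation of $I$ from its Feynman-parameter representation, $\frac{1}{16\pi^{2}}\bigl(1+\int_{0}^{1}dz\,\ln((M_{\phi}^{2}-z(1-z)P^{2}-i0)/M_{V}^{2})\bigr)$; this normalization and the $\rho/\pi$-like mass values below are our own assumptions for the purpose of the numerical check:

```python
import cmath, math

MV, Mphi = 0.77, 0.14          # illustrative masses in GeV (assumption)
P2 = MV**2                      # a point above the two-Goldstone-boson threshold
a = Mphi**2 / MV**2
sigma = math.sqrt(1 - 4*Mphi**2/P2)
d1 = 0.5*(1 - sigma)

# lambda-subtracted finite parts of I_IR and I - I_IR, with mu = M_V
I_IR = (-2*d1 + math.log(a) + sigma*math.log((1 + sigma)/(1 - sigma))
        - 2*sigma*math.log(sigma)) / (16*math.pi**2)
I_reg = (-(1 - 2*d1) + 2*sigma*math.log(sigma)) / (16*math.pi**2) \
        - 1j*sigma/(16*math.pi)

# independent midpoint-rule evaluation of the parameter integral
N = 400000
J = sum(cmath.log((Mphi**2 - z*(1 - z)*P2 - 1e-12j)/MV**2)
        for z in ((i + 0.5)/N for i in range(N))) / N
I_direct = (1 + J) / (16*math.pi**2)

assert abs(I_IR + I_reg - I_direct) < 1e-5
# imaginary part -sigma/(16 pi), as quoted in the text
assert abs(I_direct.imag + sigma/(16*math.pi)) < 1e-5
```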
The result of Eq.~(\ref{VselfIR}) is valid above the two-Goldstone-boson threshold. At
$P^{2}=4M_{\phi}^{2}$, the 'regular part', Eq.~(\ref{VselfR}), vanishes, and remains
zero below the threshold, since as we remarked above the integral $I$ then has
no regular part and is completely 'infrared singular'. The two representations
for the infrared singular part, valid for different ranges of the parameter
$P^{2}$, may be `joined together' at the threshold singularity. A similar
thing happened in the last chapter for the two representations of the infrared
singular part of the scalar loop integral $I_{V\phi}$.
\subsection{Application to the self-energy}
The major problem in finding the 'soft part' of the amplitude of
Fig.~\ref{fig:Vself}
has been solved in the last paragraph. For the full expression, we need
to add some vertex structure from the local effective Lagrangian.
We choose to work with the interaction Lagrangian of Eq.~(\ref{Wint}) and
refrain from constructing interaction terms with a higher number of
derivatives, though not all momenta in the present problem can be
considered as 'soft'. Since the coupling constant $G_{V}$ may be
measured from $\rho$--meson decay, where the Goldstone bosons are also
not of soft momentum, this can be seen as a valid approximation.
Applying the usual Feynman rules, we obtain
\begin{eqnarray}
&&(-i)\Sigma_{V}^{\mu\nu,\rho\sigma} = \frac{1}{2}\frac{G_{V}^{2}}{F^{4}}
f^{abc}f^{bad} \qquad\nonumber \\
&&\times \int\frac{d^{d}k}{(2\pi)^{d}}\frac{(k^{\mu}P^{\nu}
-k^{\nu}P^{\mu})(P^{\rho}k^{\sigma}-P^{\sigma}k^{\rho})}
{(k^{2}-M_{\phi}^{2})((k-P)^{2}-M_{\phi}^{2})}.
\label{Sigint}
\end{eqnarray}
Before further evaluating this, we have to discuss the
power counting. The vertices are both of
the order $O(p)$, since only one momentum in the product $k\cdot P$
is a small momentum in the sense of the power counting scheme.
Remembering the discussion of the last paragraph, we want the amplitude
to be of 'chiral order' $d+1+1-2 = d$. We will see that the infrared
regularized amplitude will indeed respect this power counting. Using
the tensor integral of App.~\ref{app:int}, we get
\begin{eqnarray}
&&\Sigma_{V}^{\mu\nu,\rho\sigma} = \\
&&\frac{3G_{V}^{2}}{2F^{4}}
\delta^{cd}P^{\mu\nu,\rho\sigma}\frac{1}{d-1}\biggl(\frac{1}{2}
I_{\phi}+\frac{1}{4}(4M_{\phi}^{2}-P^{2})I\biggr)~. \nonumber
\end{eqnarray}
Here $I$ is the scalar loop integral of Eq.~(\ref{Idef}), and we defined
\begin{equation}\label{P}
P^{\mu\nu,\rho\sigma} = g^{\mu\rho}P^{\nu}P^{\sigma}-g^{\mu\sigma}
P^{\nu}P^{\rho} - (\mu \leftrightarrow \nu)~.
\end{equation}
The infrared regularized amplitude is obtained from (\ref{Sigint}) by simply
letting $I \rightarrow I_{\rm{IR}}$. The infrared part $I_{\rm{IR}}$
was of $O(p^{d-2})$. In order to check that the terms of $O(p^{d-2})$
cancel in the soft part of (\ref{Sigint}), it is easiest to use the fact that the first
term in the chiral expansion of $I_{\rm{IR}}$ is also the first term
of the series for $I_{\rm{soft}}$, which was given in
Sect.~\ref{subsec:IRmore}:
\begin{displaymath}
I_{\rm{soft}} = \frac{2}{P^{2}}I_{\phi} + \ldots
\end{displaymath}
Inserting this in $(-i)\Sigma_{V,\rm{IR}}^{\mu\nu,\rho\sigma}$, i.e.\ the
infrared part of Eq.~(\ref{Sigint}), it is clearly seen that the infrared part
of the amplitude is indeed of order $O(p^{d})$, as required by
low-energy power counting.
\subsection{Contributions to the Vector Meson Mass}
First we introduce some notation. We define
\begin{equation}
\mathbf{1} \equiv \mathbf{1}^{\mu\nu,\rho\sigma} =
\frac{1}{2}(g^{\mu\rho}g^{\nu\sigma}-g^{\mu\sigma}g^{\nu\rho}).
\end{equation}
Furthermore, we write
\begin{displaymath}
\mathbf{P} \equiv P^{\mu\nu,\rho\sigma},
\end{displaymath}
see Eq.~(\ref{P}). It is easy to calculate
\begin{eqnarray*}
\mathbf{1}\cdot \mathbf{1} &=& \mathbf{1}~,~~
\mathbf{1}\cdot \mathbf{P} = \mathbf{P}~, \\
\mathbf{P}\cdot \mathbf{1} &=& \mathbf{P}~,~~
\mathbf{P}\cdot \mathbf{P} = 2P^{2}\mathbf{P}~ ,
\end{eqnarray*}
where the multiplication works as e.g.
\begin{displaymath}
\mathbf{1}_{\mu\nu,\alpha\beta}\cdot\mathbf{P}^{\alpha\beta,\rho\sigma}
= \mathbf{P}_{\mu\nu}^{\rho\sigma}.
\end{displaymath}
The tensor field propagator may then be written
\begin{equation}
\mathbf{D} = \frac{i}{M_{V}^{2}}\biggl(2\mathbf{1}
+\frac{\mathbf{P}}{M_{V}^{2}-P^{2}}\biggr),
\end{equation}
while its inverse (in the sense of the above multiplication) is
\begin{equation}
\mathbf{D}^{-1} = \frac{1}{i}\biggl(\frac{M_{V}^{2}}{2}\mathbf{1}
- \frac{1}{4}\mathbf{P}\biggr).
\end{equation}
The one-particle irreducible self-energy amplitude may be parametrized as
\begin{equation}
\mathbf{\Sigma} = \frac{M_{V}^{2}}{2}A\mathbf{1}-\frac{B}{4}\mathbf{P}~,
\end{equation}
where $A$ and $B$ are scalar functions of $P^{2}$ and the meson masses.
The procedure is now standard: Summing over the number of self-energy
insertions, we find that the full propagator
\begin{displaymath}
\mathbf{D}_{\rm full}
= \mathbf{D} + \mathbf{D}(-i)\mathbf{\Sigma}\mathbf{D} + \ldots
\end{displaymath}
is given by
\begin{eqnarray}\label{propfull}
\mathbf{D}_{\rm full} &=& (\mathbf{D}^{-1}+i\mathbf{\Sigma})^{-1}
\\
&=& \frac{i}{M_{V}^{2}(1-A)}\left(2\mathbf{1}
+\frac{\mathbf{P}}{M_{V}^{2}\biggl(\frac{1-A}{1-B}\biggr)-P^{2}}\right)~.
\nonumber
\end{eqnarray}
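The resummed form of Eq.~(\ref{propfull}) can also be verified numerically in the same two-component representation (again a sketch of ours, with arbitrary sample values for $M_V^2$, $P^2$, $A$ and $B$): inverting $\mathbf{D}^{-1}+i\mathbf{\Sigma}$ in the $\{\mathbf{1},\mathbf{P}\}$ algebra reproduces the closed-form expression.

```python
# Verify that (D^{-1} + i Sigma)^{-1} equals the closed form of the full
# propagator, Eq. (propfull), in the {1, P} algebra.

def mult(x, y, P2):
    a1, b1 = x
    a2, b2 = y
    return (a1 * a2, a1 * b2 + b1 * a2 + 2 * P2 * b1 * b2)

def inv(x, P2):
    """Inverse of a*1 + b*P: solve (a,b).(x,y) = (1,0)."""
    a, b = x
    return (1 / a, -b / (a * (a + 2 * P2 * b)))

M2, P2, A, B = 0.60, 0.10, 0.05, 0.08  # sample values, assumptions

Dinv = (M2 / (2 * 1j), -1 / (4 * 1j))
Sigma = (M2 * A / 2, -B / 4)

full = inv((Dinv[0] + 1j * Sigma[0], Dinv[1] + 1j * Sigma[1]), P2)
closed = (2j / (M2 * (1 - A)),
          1j / (M2 * (1 - A) * (M2 * (1 - A) / (1 - B) - P2)))

assert abs(full[0] - closed[0]) < 1e-12
assert abs(full[1] - closed[1]) < 1e-12
```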
We have to look for the poles of this expression. Since $A$ is a
small perturbation of $O(p^{2})$, the only pole will be at
\begin{equation}
P^{2} = M_{V}^{2}\biggl(\frac{1-A}{1-B}\biggr)= M_{V,{\rm ph}}^{2}~,
\end{equation}
with $ M_{V,{\rm ph}}$ the physical mass of the vector meson.
Before we use this formula to compute the contribution of
Fig.~\ref{fig:Vself} to the vector meson mass,
let us make a very rough estimate of the
expected size of the contribution. The most general effective
Lagrangian for the tensor field contains a term
\begin{displaymath}
c\langle W_{\mu\nu}W^{\mu\nu}\chi_{+}\rangle,
\end{displaymath}
yielding, among other terms, a contact term contribution of $O(p^{2})$,
which gives rise to a shift of the propagator pole:
\begin{displaymath}
M_{V}^{2}\rightarrow M_{V}^{2} + 8cM_{\phi}^{2}~.
\end{displaymath}
Since the coupling constant $c$ is not known, for the purpose of our
estimate we make a naturalness assumption concerning this coupling,
and set $c=1$, which gives us a value of 100 MeV for the mass shift.
If power counting is a consistent perturbative scheme here, we would
expect for an $O(p^{4})$ correction a number of size of roughly
$(M_{\phi}^{2}/M_{V}^{2})\,(100~{\rm MeV}) \sim 3\,{\rm MeV}$
(for the pion contribution).
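Both numbers in this estimate follow from simple arithmetic, which the following sketch (ours, using the linearized pole shift and the physical masses $M_V\approx 776\,$MeV, $M_\phi\approx 140\,$MeV as inputs) reproduces.

```python
# Rough-estimate arithmetic: pole shift M_V^2 -> M_V^2 + 8 c M_phi^2
# linearizes to Delta M_V = 8 c M_phi^2 / (2 M_V); the expected O(p^4)
# correction is suppressed by a further factor M_phi^2 / M_V^2.

M_V, M_phi, c = 0.776, 0.140, 1.0  # GeV; c = 1 is the naturalness guess

shift_p2 = 8 * c * M_phi**2 / (2 * M_V)     # ~ 100 MeV, O(p^2) shift
shift_p4 = (M_phi**2 / M_V**2) * shift_p2   # ~ 3 MeV, O(p^4) estimate

assert abs(shift_p2 - 0.100) < 0.005  # about 100 MeV
assert abs(shift_p4 - 0.003) < 0.001  # about 3 MeV
```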
Now let us compare this estimate with the
(infrared regularized) amplitude corresponding to Fig.~\ref{fig:Vself}.
It will contribute to $B$, defined above, with
\begin{eqnarray}
B_{V} &=& -\frac{6G_{V}^{2}}{F^{4}}\biggl(\frac{1}{6}I_{\phi}
+\frac{4M_{\phi}^{2}-M_{V}^{2}}{12}I_{\rm{IR}}\nonumber \\
&+& \frac{1}{144\pi^{2}}(d_{1}M_{V}^{2}-(1+4d_{1})M_{\phi}^{2})\biggr),
\end{eqnarray}
giving a mass shift of $1.2\,$MeV, which is really only a small correction,
and also of the size expected by the (very rough) estimate made above.
If we had used the full (real part of the) integral $I$, we would get a
result that is comparable to a correction of $O(p^{2})$ (of course,
there {\em{are}} such terms of $O(p^{2})$ in the full integral, i.e.
the power-counting violating terms). We conclude that the main effect
of the graph of Fig.~\ref{fig:Vself} (at the physical pion mass)
is due to the imaginary part of this diagram,
associated with the width of the vector meson propagator.
\section{Chiral extrapolation of the rho meson mass}
\setcounter{equation}{0}
\label{sec:mrho}
In this section, we analyze the quark mass dependence of the $\rho$-meson
mass and related topics.
This is not entirely new, see e.g. Refs.~\cite{Cohen,Ausrho}, but we do not want
to rely on any model or the assumption of `dominating' contributions to the
$\rho$ self-energy. In fact, there are many different contributions to the
self-energy of the vector mesons, and only a few of the corresponding LECs
are known from phenomenology. Of course, one could resort to models like
the massive Yang-Mills approach or the extended NJL model to estimate these
parameters (as it is done e.g. in the work of Bijnens and collaborators \cite{bijnens}),
but our goal is more modest. We resort to parameterizing the pion mass dependence of
$M_\rho$ and fix the combinations of LECs from existing lattice data \cite{CPPACS}.
This allows us, e.g., to analyze the value of $M_\rho$ in the chiral limit.
First, let us discuss the many different contributions to the vector meson
mass. We restrict ourselves to terms at most quadratic in the quark masses.
The first type of contribution stems from tree diagrams with quark mass
insertions, i.e. operators $\sim \chi_+$ or $\sim \chi^2_+$, like e.g.
\begin{equation}\label{break}
\langle {\mathbf W} \cdot {\mathbf W} \, \chi_+ \rangle~,
\langle {\mathbf W} \cdot {\mathbf W} \rangle \langle \chi_+ \rangle~,
\ldots,
\langle {\mathbf W} \cdot {\mathbf W} \, \chi_+^2 \rangle~, \ldots~.
\end{equation}
The LECs accompanying such explicit symmetry breaking terms are in
general difficult to determine, as it is well known from the analysis
of the nucleon mass in chiral perturbation theory, see e.g. \cite{FMS,BuM,BL}.
Such tree
graphs lead to the following vector meson mass terms:\footnote{To avoid
notational clutter, we absorb all prefactors like $1/F^2$ etc. in the
coefficients $k_i$.}
\begin{equation}\label{Mtree}
M_V^{\rm tree} = k_1 \, M_\phi^2 + k_2 \, M_\phi^4~,
\end{equation}
with $k_1$ ($k_2$) a combination of dimension two (four) LECs.
There is also a tree graph without quark mass insertion; it corresponds
to the vector meson mass in the chiral limit, denoted as $M_V^0$ in what
follows.
Next, we consider the various one-loop graphs.
Tadpole diagrams with an insertion of the second order effective
chiral Lagrangian also have to be considered; some of the pertinent
structures are
\begin{eqnarray*}
\langle {\mathbf W} \cdot {\mathbf W} \, \chi_+ \rangle~,
\langle {\mathbf W}\cdot {\mathbf W} \, u_\alpha u^\alpha \rangle~,
\langle W^{\alpha\mu} W^{\beta\nu} g_{\mu\nu} \, u_\alpha u_\beta \rangle~,
\ldots~.
\end{eqnarray*}
Note that in addition to the symmetry breakers of the type given in
Eq.~(\ref{break}), kinetic terms $\sim \partial_\mu \phi \, \partial^\mu \phi$
from the second order effective Lagrangian also contribute, thus increasing
the number of LECs to be determined. In the comparable case of the nucleon
mass, these can be determined to good accuracy from the analysis of pion-nucleon
scattering in the low energy regime. The total contribution of the tadpoles
to the vector meson mass takes the form
\begin{equation}\label{Mtad}
M_V^{\rm tadpole} = k_3 \, M_\phi^4 \, \ln \left( \frac{M_\phi^2}{M_V^2}
\right)~,
\end{equation}
with $k_3$ another combination of dimension two LECs. The sunrise diagram
(cf. Fig.~\ref{fig:self}a) starts to contribute at order $p^3$ because there
are one-derivative vertices of the form
\begin{displaymath}
\langle \epsilon^{\mu\nu\rho\sigma} \, W_{\mu\nu} \, \nabla^\alpha \,
W_{\alpha\rho} \, u_\sigma \rangle~, \ldots~.
\end{displaymath}
A famous example of such a vertex is the $\omega\rho\pi$ coupling, which is generated
in meson field theory from the Wess-Zumino-Witten term, see e.g. \cite{UlfV,KoichiV}.
It was e.g. considered in the analysis of \cite{Ausrho} as one of what these
authors call `dominating contributions'. Since there are various such $VV\phi$
couplings, we write the sunrise contribution to the vector meson mass as
\begin{equation}\label{Msun}
M_V^{\rm sunrise} = k_4 \, M_\phi^3
+ k_5 \, M_\phi^4 \,
\ln \left( \frac{M_\phi^2}{M_V^2} \right) + \ldots ~,
\end{equation}
which is again reminiscent of the leading non-analytic contribution to the
nucleon mass. The ellipsis denotes analytic terms $\sim M_\pi^4$ and higher
order contributions. Finally, we have to consider the self-energy graph considered
in the preceding section. It leads only to a fourth order contribution of
the form
\begin{equation}\label{Mself}
M_V^{\rm self} = k_6 \, M_\phi^4 \, \ln \left( \frac{M_\phi^2}{M_V^2}
\right) + \ldots~.
\end{equation}
To be specific, we consider now the pion mass expansion of the $\rho$-meson
mass, i.e. we set $M_V = M_\rho$ and $M_\phi = M_\pi$ in the above formulae.
Including {\em only} the non-analytic terms from the fourth order, it takes the form
\begin{equation}\label{extra}
M_\rho = M_\rho^0 + c_1 \, M_\pi^2 + c_2\, M_\pi^3 +
c_3 \, M_\pi^4 \, \ln \left(\frac{M_\pi^2}{M_\rho^2}\right)
+ {\mathcal O}(M_\pi^4)~,
\end{equation}
where $M_\rho^0$ is the mass in the chiral limit, and the $c_i$ $(i=1,2,3)$ are
combinations of coupling constants as discussed before. In the absence of a
detailed phenomenological analysis of these couplings, we will use the CP-PACS
data \cite{CPPACS} for the $\rho$-meson mass as a function of the pion (average light quark)
mass to determine the parameters $M_\rho^0$, $c_1$, $c_2$ and $c_3$.
We only employ lattice data with $M^2_\pi \lesssim 0.5\,$GeV$^2$.
In fit~1, we fit these parameters by demanding that the physical
$\rho$-mass is obtained for $M_\pi = 140\,$MeV. For fits~2 and 3, however,
this restriction is lifted. In these
fits, we input the chiral limit mass. Throughout, the fits are subjected
to the further restriction that one obtains
natural values for the combinations of LECs, that is, we enforce $|c_i| \leq 3$.
The corresponding fit parameters (obtained by least-square
fits) are collected in Tab.~\ref{tab:1}.
\begin{table}[b]
\caption{Fit parameters. $^\star$ denotes an input quantity. \label{tab:1}}
\begin{center}
\begin{tabular}{|l|ccc|}
\hline
& ~~Fit~1 & Fit~2 & ~~Fit~3 \\
\hline
$M_\rho^0$ [GeV] & ~~0.776 & ~~0.650$^\star$ & ~~0.800$^\star$ \\
$c_1$ [GeV$^{-1}$] & $-$0.662 & ~~2.200 & $-$1.215 \\
$c_2$ [GeV$^{-2}$] & ~~1.291 & $-$1.934 & ~~1.915 \\
$c_3$ [GeV$^{-3}$] & $-$1.723 & ~~1.572 & $-$2.367 \\
\hline
\end{tabular}
\end{center}
\end{table}
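As a consistency check on fit~1 (our own sketch, not part of the analysis), one can evaluate Eq.~(\ref{extra}) with the Table~\ref{tab:1} parameters at the physical pion mass. The table entries are rounded to three digits, and using $M_\rho = 776\,$MeV inside the chiral logarithm is our assumption, so we only test agreement with the physical $\rho$ mass at the level of a few tens of MeV.

```python
import math

# Fit 1 parameters from Table 1 (GeV units, rounded entries)
M0, c1, c2, c3 = 0.776, -0.662, 1.291, -1.723
M_pi, M_rho = 0.140, 0.776  # physical pion mass; M_rho in the log: assumption

# Evaluate Eq. (extra) at the physical pion mass
M = (M0 + c1 * M_pi**2 + c2 * M_pi**3
     + c3 * M_pi**4 * math.log(M_pi**2 / M_rho**2))

# Should come out close to the physical rho mass (~775 MeV), within the
# precision of the rounded table entries.
assert 0.75 < M < 0.80
```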
\noindent
\begin{figure}[t]
\centerline{\psfig{file=mrho.eps,width=6.5cm}}
\caption{The rho meson mass as a function of the light quark mass,
$M_\pi^2 \sim (m_u+m_d)$. The solid (dot-dashed) line(s) refers to fit~1~(2,3)
as described in the text. The lattice data are from CP-PACS \protect
\cite{CPPACS}. The diamond denotes the physical rho mass.
\label{fig:mrho}}
\end{figure}
\noindent
The corresponding curves are shown in Fig.~\ref{fig:mrho}. To get a
better handle on the theoretical uncertainty, we also allow the fits to
stay within the theoretical uncertainty of the lowest point at $M_\pi^2 =
0.1\,$GeV$^2$, as shown in Fig.~\ref{fig:mrho2}.
If we insist again on naturalness of the coupling constants,
we can bound the $\rho$-mass in the chiral limit by
\begin{equation}
650~{\rm MeV} \leq M_\rho^0 \leq 800~{\rm MeV}~.
\end{equation}
These results are similar to what was found in the pioneering
work in Ref.~\cite{Ausrho}, but they are less model-dependent.
The range for $M_\rho^0$ is also consistent with the numbers derived by Bijnens and
collaborators in their study of vector mesons in chiral perturbation theory
\cite{bijnens}. It would be interesting to extend these studies in two
directions, first to include also more recent lattice data and second to
try to give more stringent limits on the combinations of LECs by incorporating
more phenomenological constraints.
\begin{figure}[htb]
\centerline{\psfig{file=mrho2.eps,width=6.5cm}}
\caption{The rho meson mass as a function of the pion mass:
Theoretical uncertainty as described in the text. For further
notations, see Fig.~\ref{fig:mrho}.
\label{fig:mrho2}}
\end{figure}
\noindent
The quark mass expansion of the $\rho$-mass, Eq.~(\ref{extra}), allows one
to deduce the corresponding $\sigma$-term,
\begin{equation}
\sigma_{\pi \rho} = \hat m \, \frac{\partial M_\rho}{\partial \hat m} =
M_\pi^2 \, \frac{\partial M_\rho}{\partial M_\pi^2}~,
\end{equation}
with $\hat m$ the average light quark mass.
From the numbers collected in Table~\ref{tab:1}, we find
\begin{equation}
-1.9\, M_\pi^2 \leq \sigma_{\pi \rho} \leq 1.5\, M_\pi^2~.
\end{equation}
This shows again that the rho as a massive particle has a very different
quark mass expansion than the pion, where $\sigma_{\pi} \simeq M_\pi^2$ \cite{GL1}.
In magnitude, the rho $\sigma$-term is similar to the pion-nucleon one,
$\sigma_{\pi N} \simeq 45\,$MeV.
\section{Summary and outlook}
\setcounter{equation}{0}
\label{sec:summ}
In this paper, we have considered chiral perturbation theory in the presence
of vector and axial-vector meson (spin-1) fields and presented an extension
of the infrared regularization scheme originally developed for baryon chiral
perturbation theory. The pertinent results of this investigation can be
summarized as follows:
\begin{itemize}
\item[1)]The most economic way to deal with vector mesons in chiral
perturbation theory is to utilize the antisymmetric tensor field formulation
as stressed in \cite{NuB}. When vector mesons appear in tree graphs only,
calculations are straightforward as summarized in Sect.~\ref{sec:tree} and
App.~\ref{app:ten}. Of
course, other formulations like the vector field approach can also be used,
see Apps.~\ref{app:vec} and~\ref{app:path}.
\item[2)]When vector mesons appear in loops, the appearance of the large mass
scale complicates the power counting, as discussed in Sect.~\ref{sec:problem}
and Sect.~\ref{sec:softhard}. In essence, loop diagrams pick up large
contributions when the loop momentum is close to the vector meson mass. To the
contrary, the contribution from the soft poles (momenta of the order of the
pion mass) that leads to the interesting chiral terms of the low-energy
EFT (chiral logs and the like) obeys power counting. We have briefly summarized
the method proposed in \cite{Tang} to extract the `soft pole' contribution from
one-loop integrals.
\item[3)]The standard case of infrared regularization \cite{BL},
where the heavy particle line is conserved in the (one-loop) graphs,
is recapitulated in Sect.~\ref{sec:IR}. For these cases a very elegant
splitting of a Feynman parameter integral allows one to unambiguously
separate the infrared singular part from the regular part, cf. Eq.~(\ref{IRsplit}).
\item[4)]In the case of spin-1 fields, new classes of self-energy graphs
appear. The case of lines with small external momenta but a vector meson
line appearing inside the diagram is analyzed in Sect.~\ref{sec:IRnew}, and
the singularity structure of the corresponding integrals is discussed in
Sect.~\ref{sec:sing}. In Sect.~\ref{sec:IRcorr} the infrared singular part for such types
of integrals is explicitly constructed, cf. Eq.~(\ref{IVphiIR}).
As explicit examples, the Goldstone
boson self-energy and the triangle diagram are worked out in Sect.~\ref{sec:self1}
and Sect.~\ref{sec:tria}, respectively.
\item[5)]A different type of one-loop graphs appears in the vector meson
self-energy, where only light particles (Goldstone bosons) run in the loop.
This is discussed in detail in Sect.~\ref{sec:Vself}, where the corresponding
infrared singular part is extracted, see Eq.~(\ref{VselfIR}), and the
contribution to the vector meson mass is worked out. We briefly discuss the
problems related to the imaginary part of such type of diagrams.
\item[6)]As an application, we consider the pion mass dependence of the $\rho$-meson
mass in Sect.~\ref{sec:mrho}. We show that there are many contributions with unknown
LECs; still, one is able to derive a compact formula for $M_\rho (M_\pi)$,
see Eq.~(\ref{extra}). We analyze
existing lattice data \cite{CPPACS} and conclude that the $\rho$-meson mass in the
chiral limit is bounded between 650 and 800~MeV. We have also discussed the
$\pi\rho$ sigma term.
\end{itemize}
The methods outlined here can be applied to many interesting problems, for example
one could systematically analyze vector meson effects on Goldstone properties like
form factors or polarizabilities or extend these considerations to systems including
baryons (for a first step see e.g. \cite{BM95}).
\section*{Acknowledgements}
We thank Hans Bijnens and J\"urg Gasser for useful comments and
communications.
{
"timestamp": "2004-11-17T08:22:41",
"yymm": "0411",
"arxiv_id": "hep-ph/0411223",
"language": "en",
"url": "https://arxiv.org/abs/hep-ph/0411223"
}
\section{Introduction and preliminaries}
\thispagestyle{empty}
Let $V$ be a finite set and let $\mathbf{2}^{V}$ denote the {\em
simplex\/} $\{F:\ F\subseteq V\}$. A family
$\Delta\subseteq\mathbf{2}^{V}$ is called an {\em abstract
simplicial complex\/} (or a {\em complex}) on the {\em vertex\/}
set $V$ if, given subsets $A$ and $B$ of $V$, the inclusions
$A\subseteq B\in\Delta$ imply $A\in\Delta$, and if
$\{v\}\in\Delta$, for any $v\in V$; see, e.g.,
\cite{BB,B,BH,BP,Hibi,MS,St1,Z}. If $\Gamma$ is a complex such
that $\Gamma\subset\Delta$ (that is, $\Gamma$ is a {\em
subcomplex\/} of $\Delta$) then the family $\Delta-\Gamma$ is
called a {\em relative simplicial complex},
see~\cite[\S{}III.7]{St1}.
If $\Psi$ is a relative complex then the sets $F\in\Psi$ are
called the {\em faces\/} of $\Psi$. The {\em dimension\/}
$\dim(F)$ of a face $F$ by definition equals $|F|-1$; the
cardinality $|F|$ is called the {\em size} of $F$. Let $\#$ denote
the number of sets in a family. If $\#\Psi>0$ then the {\em
size\/} $\size(\Psi)$ of $\Psi$ is defined by
$\size(\Psi):=\max_{F\in\Psi}|F|$, and the {\em dimension\/}
$\dim(\Psi)$ of $\Psi$ by definition is $\size(\Psi)-1$.
The row vector
$\pmb{f}(\Psi):=\bigl(f_0(\Psi),f_1(\Psi),\ldots,f_{\dim(\Psi)}(\Psi)\bigr)
\in\mathbb{N}^{\size(\Psi)}$, where $f_i(\Psi):=\#\{F\in\Psi:\
|F|=i+1\}$, is called the {\em $f$-vector\/} of $\Psi$. The row
{\em $h$-vector}
$\pmb{h}(\Psi):=\bigl(h_0(\Psi),h_1(\Psi),\ldots,h_{\size(\Psi)}(\Psi)\bigr)
\in\mathbb{Z}^{\size(\Psi)+1}$ of $\Psi$ is defined by
\begin{equation}
\sum_{i=0}^{\size(\Psi)}h_i(\Psi)\cdot\mathrm{y}^{\size(\Psi)-i}:=
\sum_{i=0}^{\size(\Psi)}f_{i-1}(\Psi)\cdot(\mathrm{y}-1)^{\size(\Psi)-i}\
.
\end{equation}
In this note we consider redundant analogues
$\pmb{f}(\Phi;|V|)\in\mathbb{N}^{|V|+1}$ and
$\pmb{h}(\Phi;|V|)\in\mathbb{Z}^{|V|+1}$ of the $f$- and
$h$-vectors that can be used in some situations for describing the
combinatorial properties of arbitrary {\em face systems}
$\Phi\subseteq\mathbf{2}^{V}$.
For a positive integer $m$, let $[m]$ denote the set
$\{1,2,\ldots,m\}$. We relate to a face system
$\Phi\subseteq\mathbf{2}^{[m]}$ the row vectors
\begin{align}
\label{eq:6}
\pmb{f}(\Phi;m):&=\bigl(f_0(\Phi;m),f_1(\Phi;m),\ldots,f_m(\Phi;m)\bigr)
\in\mathbb{N}^{m+1}\ ,\\ \label{eq:7}
\pmb{h}(\Phi;m):&=\bigl(h_0(\Phi;m),h_1(\Phi;m),\ldots,h_m(\Phi;m)\bigr)
\in\mathbb{Z}^{m+1}\ ,
\end{align}
where $f_i(\Phi;m):=\#\{F\in\Phi:\ |F|=i\}$, for $0\leq i\leq m$,
and the vector $\pmb{h}(\Phi;m)$ is defined by
\begin{equation}
\sum_{i=0}^m h_i(\Phi;m)\cdot\mathrm{y}^{m-i}:=\sum_{i=0}^m
f_i(\Phi;m)\cdot(\mathrm{y}-1)^{m-i}\ .
\end{equation}
Note that if $\Psi\subset\mathbf{2}^{[m]}$ is a relative complex
then we set $f_0(\Psi;m):=f_{-1}(\Psi):=\#\{F\in\Psi:\
|F|=0\}\in\{0,1\}$, $f_i(\Psi;m):=f_{i-1}(\Psi)$, for $1\leq
i\leq\size(\Psi)$ and, finally, $f_i(\Psi;m):=0$, for
$\size(\Psi)+1\leq i\leq m$.
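To make the definitions concrete, the following Python sketch (ours, not from the note) computes the long $f$- and $h$-vectors of a face system by expanding each $(\mathrm{y}-1)^{m-i}$ binomially, and checks two facts stated in the text: $\pmb{h}(\mathbf{2}^{[m]};m)=(1,0,\ldots,0)$ and $\pmb{h}(\Phi;m)\cdot\boldsymbol{\tau}(m)^{\top}=\#\Phi$.

```python
from itertools import combinations
from math import comb

def f_vec(Phi, m):
    """Long f-vector: f_i = #{F in Phi : |F| = i}, 0 <= i <= m."""
    f = [0] * (m + 1)
    for F in Phi:
        f[len(F)] += 1
    return f

def h_vec(Phi, m):
    """h_l from the defining identity, by expanding (y-1)^(m-k):
    h_l = sum_k (-1)^(l-k) C(m-k, l-k) f_k."""
    f = f_vec(Phi, m)
    return [sum((-1) ** (l - k) * comb(m - k, l - k) * f[k]
                for k in range(l + 1))
            for l in range(m + 1)]

m = 4
simplex = [frozenset(c) for i in range(m + 1)
           for c in combinations(range(1, m + 1), i)]

assert h_vec(simplex, m) == [1] + [0] * m   # h(2^[m]; m) = (1,0,...,0)

# h . tau = #Phi, with tau(m) = (2^m, 2^(m-1), ..., 1)
tau = [2 ** (m - i) for i in range(m + 1)]
assert sum(h * t for h, t in zip(h_vec(simplex, m), tau)) == len(simplex)
```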
Vectors~(\ref{eq:6}) and~(\ref{eq:7}) go back to analogous
constructions that appear, e.g., in~\cite{McMSh,McMWa}. In some
situations, these ``long'' $f$- and $h$-vectors either can be used
as an intermediate description of face systems or they can
independently be involved in combinatorial problems and
computations, see, e.g.,~\cite{M2}. Since the maps
$\Phi\mapsto\pmb{f}(\Phi;m)$ and $\Phi\mapsto\pmb{h}(\Phi;m)$ from
the Boolean lattice $\mathcal{D}(m)$ of all face systems (ordered
by inclusion) to $\mathbb{Z}^{m+1}$ are {\em valuations\/} on
$\mathcal{D}(m)$, the long $f$- and $h$-vectors can also be used
in the study of decomposition problems; here, a basic construction
is a {\em Boolean interval}, that is, the family
$[A,C]:=\{B\in\mathbf{2}^{[m]}:\ A\subseteq B\subseteq C\}$, for
some faces $A\subseteq C\subseteq[m]$.
We consider the vectors $\pmb{f}(\Phi;m)$ and $\pmb{h}(\Phi;m)$ as
elements from the real Euclidean space $\mathbb{R}^{m+1}$ of row
vectors. We present several bases of $\mathbb{R}^{m+1}$ related to
face systems and list the corresponding change of basis matrices.
See, e.g.,~\cite[\S{}IV.4]{Aigner} on valuations,
\cite[Chapter~5]{MS} on Alexander duality,
\cite[\S{}VI.6]{BarvinokConv}, \cite[Chapter~5]{BR},
\cite[\S{}II.5]{BH}, \cite[\S\S{}1.2, 3.6, 8.6]{BP},
\cite[\S{}III.11]{Hibi}, \cite[\S{}5.1]{McMSh}, \cite[\S\S{}II.3,
II.6, III.6]{St1}, \cite[\S{}3.14]{St2}, \cite[\S{}8.3]{Z} on the
Dehn-Sommerville relations, and~\cite{HJ} on matrix analysis.
\section{Notation}
Throughout this note, $m$ means a positive integer; all vectors
are of dimension $(m+1)$, and all matrices are $(m+1)\times(m+1)$
matrices. The components of vectors as well as the rows and
columns of matrices are indexed starting with zero. For a vector
$\pmb{w}$, $\pmb{w}^{\top}$ denotes its transpose.
If $\Phi$ is a face system, $\#\Phi>0$, then its {\em size\/}
$\size(\Phi)$ is defined by $\size(\Phi):=\max_{F\in\Phi}|F|$.
We denote the empty set by $\hat{0}$, and we use the notation
$\emptyset$ to denote the family containing no sets. We have
$\#\emptyset=0$, $\#\{\hat{0}\}=1$, and
\begin{align*}
\pmb{f}(\emptyset;m)&=\pmb{h}(\emptyset;m)=(0,0,\ldots,0)\ ,\\
\pmb{f}(\{\hat{0}\};m)&=\pmb{h}(\mathbf{2}^{[m]};m)=(1,0,\ldots,0)\
.
\end{align*}
$\boldsymbol{\iota}(m):=(1,1,\ldots,1)$;
$\boldsymbol{\tau}(m):=(2^m,2^{m-1},\ldots,1)$.
$\mathbf{I}(m)$ is the {\em identity matrix}.
$\mathbf{U}(m)$ is the {\em backward identity matrix} whose
$(i,j)$th entry is the Kronecker delta $\delta_{i+j,m}$.
$\mathbf{T}(m)$ is the {\em forward shift matrix} whose $(i,j)$th
entry is $\delta_{j-i,1}$.
If $\boldsymbol{\mathfrak{B}}:=(\pmb{b}_0,\ldots,\pmb{b}_m)$ is a
basis of $\mathbb{R}^{m+1}$ then, given a vector
$\pmb{w}\in\mathbb{R}^{m+1}$, we denote by
$[\pmb{w}]_{\boldsymbol{\mathfrak{B}}}:=\bigl(
\kappa_0(\pmb{w},\boldsymbol{\mathfrak{B}}),\ldots,
\kappa_m(\pmb{w},\boldsymbol{\mathfrak{B}})\bigr)\in\mathbb{R}^{m+1}$
the $(m+1)$-tuple satisfying the equality
$\sum_{i=0}^m\kappa_i(\pmb{w},\boldsymbol{\mathfrak{B}})\cdot\pmb{b}_i=\pmb{w}$.
\section{The long $f$- and $h$-vectors}
We recall the properties of vectors~(\ref{eq:6}) and~(\ref{eq:7})
described in~\cite{M1}.
\begin{itemize}
\item[\rm(i)]
The maps $\Phi\mapsto\pmb{f}(\Phi;m)$ and
$\Phi\mapsto\pmb{h}(\Phi;m)$ are valuations
$\mathcal{D}(m)\to\mathbb{Z}^{m+1}$ on the Boolean lattice
$\mathcal{D}(m)$ of all face systems (ordered by inclusion)
contained in $\mathbf{2}^{[m]}$.
\item[\rm(ii)]
Let $\Psi\subseteq\mathbf{2}^{[m]}$ be a relative complex.
\begin{align}
h_l(\Psi)&=\sum_{k=0}^l\binom{m-\size(\Psi)-1+l-k}{l-k}h_k(\Psi;m)\
,\ \ \ 0\leq l\leq\size(\Psi)\ ;\\ h_l(\Psi;m)&=(-1)^l\sum_{k=0}^l
(-1)^k\binom{m-\size(\Psi)}{l-k}h_k(\Psi)\ ,\ \ \ 0\leq l\leq m\ .
\end{align}
\item[\rm(iii)]
Let $\Phi\subseteq\mathbf{2}^{[m]}$.
\begin{itemize}
\item[\rm(a)]
\begin{align}
h_l(\Phi;m)&=(-1)^l\sum_{k=0}^l(-1)^k\binom{m-k}{l-k}f_k(\Phi;m)\
,\\ f_l(\Phi;m)&=\sum_{k=0}^l\binom{m-k}{l-k}h_k(\Phi;m)\ ,\ \ \
0\leq l\leq m\ .
\end{align}
\item[\rm(b)]
\begin{align}
h_0(\Phi;m)&=f_0(\Phi;m)\ ,\\
h_1(\Phi;m)&=f_1(\Phi;m)-mf_0(\Phi;m)\ ,\\
h_m(\Phi;m)&=(-1)^m\sum_{k=0}^m(-1)^k f_k(\Phi;m)\ ,\\
\pmb{h}(\Phi;m)\cdot\boldsymbol{\iota}(m)^{\top}&=f_m(\Phi;m)\ .
\end{align}
\item[\rm(c)]
\begin{equation}
\pmb{h}(\Phi;m)\cdot\boldsymbol{\tau}(m)^{\top}
=\pmb{f}(\Phi;m)\cdot\boldsymbol{\iota}(m)^{\top}=\#\Phi\ .
\end{equation}
\item[\rm(d)]
Consider the face system
\begin{equation}
\Phi^{\star}:=\{[m]-F:\ F\in\mathbf{2}^{[m]},\ F\not\in\Phi\}
\end{equation}
``dual'' to the system $\Phi$.
\begin{align}
h_l(\Phi;m)+(-1)^l\sum_{k=l}^m\binom{k}{l}h_k(\Phi^{\star};m)&=\delta_{l,0}\
,\ \ \ 0\leq l\leq m\ ;\\
h_m(\Phi;m)&=(-1)^{m+1}h_m(\Phi^{\star};m)\ .
\end{align}
If $\Delta$ is a complex on the vertex set $[m]$ then the complex
$\Delta^{\star}$ is called its {\em Alexander dual}. If
$\#\Delta>0$ and $\#\Delta^{\star}>0$ then
\begin{align}
h_l(\Delta;m)&=0\ ,\ \ \ 1\leq l\leq m-\size(\Delta^{\star})-1\
,\\
h_{m-\size(\Delta^{\star})}(\Delta;m)&=-f_{\size(\Delta^{\star})}
(\Delta^{\star};m)\ .
\end{align}
\end{itemize}
\end{itemize}
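The duality relations in (iii)(d) are easy to test numerically; the sketch below (ours) draws a random face system $\Phi\subseteq\mathbf{2}^{[4]}$, builds the ``dual'' system $\Phi^{\star}$, and checks both stated identities.

```python
from itertools import combinations
from math import comb
import random

def h_vec(Phi, m):
    """Long h-vector via h_l = sum_k (-1)^(l-k) C(m-k, l-k) f_k."""
    f = [0] * (m + 1)
    for F in Phi:
        f[len(F)] += 1
    return [sum((-1) ** (l - k) * comb(m - k, l - k) * f[k]
                for k in range(l + 1))
            for l in range(m + 1)]

m = 4
ground = [frozenset(c) for i in range(m + 1)
          for c in combinations(range(1, m + 1), i)]

random.seed(0)
Phi = set(random.sample(ground, 9))                      # a random face system
dual = {frozenset(range(1, m + 1)) - F                   # Phi^star
        for F in ground if F not in Phi}

h, hd = h_vec(Phi, m), h_vec(dual, m)

# h_l(Phi;m) + (-1)^l sum_{k=l}^m C(k,l) h_k(Phi*;m) = delta_{l,0}
for l in range(m + 1):
    lhs = h[l] + (-1) ** l * sum(comb(k, l) * hd[k] for k in range(l, m + 1))
    assert lhs == (1 if l == 0 else 0)

# h_m(Phi;m) = (-1)^(m+1) h_m(Phi*;m)
assert h[m] == (-1) ** (m + 1) * hd[m]
```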
\section{Bases, and change of basis matrices}
We relate to the simplex $\mathbf{2}^{[m]}$ three pairs of bases
of the space $\mathbb{R}^{m+1}$. Let $\{F_0,\ldots,
F_m\}\subset\mathbf{2}^{[m]}$ be a face system such that
$|F_k|=k$, for $0\leq k\leq m$; here, $F_0:=\hat{0}$ and
$F_m:=[m]$.
The first pair consists of the bases $\bigl(\
\pmb{f}(\{F_0\};m),\pmb{f}(\{F_1\};m),\ldots,$
$\pmb{f}(\{F_m\};m)\ \bigr)$ and $\bigl(\
\pmb{h}(\{F_0\};m),\pmb{h}(\{F_1\};m),\ldots,\pmb{h}(\{F_m\};m)\
\bigr)$.
The bases $\bigl(\
\pmb{f}([F_0,F_0];m),\pmb{f}([F_0,F_1];m),\ldots,\pmb{f}([F_0,F_m];m)\
\bigr)$ and $\bigl(\
\pmb{h}([F_0,F_0];m),\pmb{h}([F_0,F_1];m),\ldots,
\pmb{h}([F_0,F_m];m)\ \bigr)$ compose the second pair.
The third pair consists of the bases $\bigl(\
\pmb{f}([F_m,F_m];m),\pmb{f}([F_{m-1},F_m];m),$
$\ldots,\pmb{f}([F_0,F_m];$ $m)\ \bigr)$ and $\bigl(\
\pmb{h}([F_m,F_m];m),\pmb{h}([F_{m-1},F_m];m),\ldots,\pmb{h}([F_0,$
$F_m];m)\ \bigr)$:
\begin{itemize}
\item[1)]
We use the notation $\bS_{m}$ to denote the {\em standard basis\/}
$\bigl(\boldsymbol{\sigma}(i;m):\ 0\leq i\leq m\bigr)$ of
$\mathbb{R}^{m+1}$, where
\begin{equation}
\boldsymbol{\sigma}(i;m):=(1,0,\ldots,0)\cdot\mathbf{T}(m)^i\ .
\end{equation}
We define a basis
$\bH^{\bullet}_m:=\bigl(\boldsymbol{\vartheta}^{\bullet}(i;m):\
0\leq i\leq m\bigr)$ of $\mathbb{R}^{m+1}$, where
\begin{equation}
\boldsymbol{\vartheta}^{\bullet}(i;m):=
\bigl(\vartheta^{\bullet}_0(i;m),\vartheta^{\bullet}_1(i;m),\ldots,
\vartheta^{\bullet}_m(i;m)\bigr)\in\mathbb{Z}^{m+1}\ ,
\end{equation}
by
\begin{equation}
\vartheta^{\bullet}_j(i;m):=(-1)^{j-i}\tbinom{m-i}{j-i}\ ,\ \ \
0\leq j\leq m\ .
\end{equation}
\item[2)]
Bases $\bF^{\blacktriangle}_m:=
\bigl(\boldsymbol{\varphi}^{\blacktriangle}(i;m):\ 0\leq i\leq
m\bigr)$ and $\bH^{\blacktriangle}_m:=
\bigl(\boldsymbol{\vartheta}^{\blacktriangle}(i;m):\ 0\leq i\leq
m\bigr)$ of $\mathbb{R}^{m+1}$ are defined in the following way:
\begin{equation}
\boldsymbol{\varphi}^{\blacktriangle}(i;m):=\bigl(\varphi^{\blacktriangle}_0(i;m),
\varphi^{\blacktriangle}_1(i;m),\ldots,\varphi^{\blacktriangle}_m(i;m)\bigr)
\in\mathbb{N}^{m+1}\ ,
\end{equation}
where
\begin{equation}
\varphi^{\blacktriangle}_j(i;m):=\tbinom{i}{j}\ ,\ \ \ 0\leq j\leq
m\ ,
\end{equation}
and
\begin{equation}
\boldsymbol{\vartheta}^{\blacktriangle}(i;m):=
\bigl(\vartheta^{\blacktriangle}_0(i;m),
\vartheta^{\blacktriangle}_1(i;m),\ldots,\vartheta^{\blacktriangle}_m(i;m)\bigr)
\in\mathbb{Z}^{m+1}\ ,
\end{equation}
where
\begin{equation}
\vartheta^{\blacktriangle}_j(i;m):=(-1)^{j}\tbinom{m-i}{j}\ ,\ \ \
0\leq j\leq m\ .
\end{equation}
The notations $\boldsymbol{\varphi}(i;m)$ and
$\boldsymbol{\vartheta}(i;m)$ were used in~\cite{M1} instead of
$\boldsymbol{\varphi}^{\blacktriangle}(i;m)$ and
$\boldsymbol{\vartheta}^{\blacktriangle}(i;m)$, respectively.
\item[3)]
The third pair consists of bases $\bF^{\blacktriangledown}_m:=
\bigl(\boldsymbol{\varphi}^{\blacktriangledown}(i;m):\ 0\leq i\leq
m\bigr)$ and $\bH^{\blacktriangledown}_m:=
\bigl(\boldsymbol{\vartheta}^{\blacktriangledown}(i;m):\ 0\leq
i\leq m\bigr)$ of $\mathbb{R}^{m+1}$ defined as follows:
\begin{equation}
\boldsymbol{\varphi}^{\blacktriangledown}(i;m):=
\bigl(\varphi^{\blacktriangledown}_0(i;m),
\varphi^{\blacktriangledown}_1(i;m),\ldots,
\varphi^{\blacktriangledown}_m(i;m)\bigr)\in\mathbb{N}^{m+1}\ ,
\end{equation}
where
\begin{equation}
\varphi^{\blacktriangledown}_j(i;m):=\tbinom{i}{m-j}\ ,\ \ \ 0\leq
j\leq m\ ,
\end{equation}
and
\begin{equation}
\boldsymbol{\vartheta}^{\blacktriangledown}(i;m):=
\bigl(\vartheta^{\blacktriangledown}_0(i;m),
\vartheta^{\blacktriangledown}_1(i;m),\ldots,
\vartheta^{\blacktriangledown}_m(i;m)\bigr)\in\mathbb{Z}^{m+1}\ ,
\end{equation}
where
\begin{equation}
\vartheta^{\blacktriangledown}_j(i;m):=\delta_{m-i,j}\ ,\ \ \
0\leq j\leq m\ .
\end{equation}
Note that $\bH^{\blacktriangledown}_m$ is, up to rearrangement, the
standard basis $\bS_{m}$.
\end{itemize}
Let $\mathbf{S}(m)$ be the change of basis matrix from $\bS_m$ to
$\bH^{\bullet}_m$: {\footnotesize
\begin{equation}
\mathbf{S}(m):=\begin{pmatrix}
\boldsymbol{\vartheta}^{\bullet}(0;m)\\ \vdots\\
\boldsymbol{\vartheta}^{\bullet}(m;m)
\end{pmatrix}\ ;
\end{equation}
} the $(i,j)$th entry of the inverse matrix $\mathbf{S}(m)^{-1}$
is $\tbinom{m-i}{j-i}$.
For any $i\in\mathbb{N}$, $i\leq m$, we have
\begin{align}
\boldsymbol{\vartheta}^{\bullet}(i;m)&=
\boldsymbol{\sigma}(i;m)\cdot\mathbf{S}(m)\ ,\\
\boldsymbol{\vartheta}^{\blacktriangle}(i;m)&=
\boldsymbol{\varphi}^{\blacktriangle}(i;m)\cdot\mathbf{S}(m)\ ,\\
\boldsymbol{\vartheta}^{\blacktriangledown}(i;m)&=
\boldsymbol{\varphi}^{\blacktriangledown}(i;m)\cdot\mathbf{S}(m)\
.
\end{align}
For any face system $\Phi\subseteq\mathbf{2}^{[m]}$, we have
\begin{align}
\label{eq:5} \pmb{h}(\Phi;m)&= \pmb{f}(\Phi;m)\cdot\mathbf{S}(m)
=\sum_{l=0}^m
f_l(\Phi;m)\cdot\boldsymbol{\vartheta}^{\bullet}(l;m) \ ,\\
\pmb{f}(\Phi;m)&= \pmb{h}(\Phi;m)\cdot\mathbf{S}(m)^{-1}\ .
\end{align}
The change of basis matrices corresponding to the bases defined
above are collected in Table~\ref{table:1}.
\section{Representations of the long $f$- and $h$-vectors
with respect to some bases}
If $\Phi\subseteq\mathbf{2}^{[m]}$ then, by~(\ref{eq:5}), we have
\begin{equation}
\pmb{f}(\Phi;m)=[\pmb{h}(\Phi;m)]_{\bH^{\bullet}_m}\ ,
\end{equation}
and several observations follow:
\begin{align}
[\pmb{h}(\Phi;m)]_{\bH^{\blacktriangle}_m}&=
[\pmb{f}(\Phi;m)]_{\bF^{\blacktriangle}_m}\ ;\\
[\pmb{h}(\Phi;m)]_{\bH^{\blacktriangledown}_m}&=
[\pmb{f}(\Phi;m)]_{\bF^{\blacktriangledown}_m}\\ \nonumber
&=\pmb{h}(\Phi;m)\cdot \mathbf{U}(m)\ ;\\
[\pmb{f}(\Phi;m)]_{\bH^{\blacktriangledown}_m}&=\pmb{f}(\Phi;m)\cdot
\mathbf{U}(m)\\ \nonumber
&=[\pmb{h}(\Phi;m)]_{\bH^{\bullet}_m}\cdot \mathbf{U}(m)\ .
\end{align}
\section{Partitions of face systems into Boolean intervals,
and the long $f$- and $h$-vectors}
If
\begin{equation}
\label{eq:9}
\Phi=[A_1,B_1]\dot\cup\cdots\dot\cup[A_{\theta},B_{\theta}]
\end{equation}
is a partition of a face system $\Phi\subseteq\mathbf{2}^{[m]}$,
$\#\Phi>0$, into Boolean intervals $[A_k,B_k]$, $1\leq
k\leq\theta$, then we call the collection $\mathsf{P}$ of positive
integers $\mathsf{p}_{ij}$ defined by
\begin{equation}
\mathsf{p}_{ij}:=\#\{[A_k,B_k]:\ |B_k-A_k|=i,\ |A_k|=j\}>0
\end{equation}
the {\em profile} of partition~(\ref{eq:9}). If $\theta=\#\Phi$
then $\mathsf{p}_{0l}=f_l(\Phi;m)$ whenever $f_l(\Phi;m)>0$.
Table~\ref{table:2} collects the representations of the vectors
$\pmb{f}(\Phi;m)$ and $\pmb{h}(\Phi;m)$ with respect to various
bases.
\section{Appendix: Dehn-Sommerville type relations}
The $h$-vector of a complex $\Delta$ satisfies the {\em
Dehn-Sommerville relations\/} if
\begin{equation}
h_l(\Delta)=h_{\size(\Delta)-l}(\Delta)\ ,\ \ \ 0\leq l\leq
\size(\Delta)
\end{equation}
or, equivalently (see, e.g.,~\cite[p.~171]{McMSh}),
\begin{equation}
h_l(\Delta;m)=(-1)^{m-\size(\Delta)}h_{m-l}(\Delta;m)\ ,\ \ \
0\leq l\leq m\ .
\end{equation}
We say, for brevity, that a face system
$\Phi\subset\mathbf{2}^{[m]}$ is a {\em DS-system} if the
Dehn-Sommerville type relations
\begin{equation}
\label{eq:2} h_l(\Phi;m)=(-1)^{m-\size(\Phi)}h_{m-l}(\Phi;m)\ ,\ \
\ 0\leq l\leq m
\end{equation}
hold. The systems $\emptyset$ and $\{\hat{0}\}$ are DS-systems.
If $\#\Phi>0$, then define the integer
\begin{equation}
\label{eq:3} \eta(\Phi):=\begin{cases}|\bigcup_{F\in\Phi}F|,
&\text{if $|\bigcup_{F\in\Phi}F|\equiv \size(\Phi)\pmod{2}$,}\\
|\bigcup_{F\in\Phi}F|+1, &\text{if
$|\bigcup_{F\in\Phi}F|\not\equiv \size(\Phi)\pmod{2}$.}
\end{cases}
\end{equation}
Note that, given a complex $\Delta$ with $v$ vertices, $v>0$, we
have
\begin{equation}
\eta(\Delta)=\begin{cases}v, &\text{if $v\equiv
\size(\Delta)\pmod{2}$,}\\ v+1, &\text{if $v\not\equiv
\size(\Delta)\pmod{2}$.}
\end{cases}
\end{equation}
Equality~(\ref{eq:2}) and definition~(\ref{eq:3}) lead to the
following observation: A face system $\Phi$ with $\#\Phi>0$ is a
DS-system if and only if for any $n\in\mathbb{P}$ such that
\begin{equation}
\label{eq:8}
\begin{split}
\eta(\Phi)&\leq n\ ,\\ n&\equiv\size(\Phi)\pmod{2}\ ,
\end{split}
\end{equation}
we have
\begin{equation}
h_l(\Phi;n)=h_{n-l}(\Phi;n)\ ,\ \ \ 0\leq l\leq n\ ,
\end{equation}
or, equivalently,
\begin{equation}
\label{eq:11} \pmb{h}(\Phi;n)=\pmb{h}(\Phi;n)\cdot\mathbf{U}(n)\ ,
\end{equation}
that is, $\pmb{h}(\Phi;n)$ is a {\em left eigenvector\/} of the
$(n+1)\times(n+1)$ backward identity matrix corresponding to the
eigenvalue $1$.
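A concrete DS-system is the boundary complex of a triangle on $[3]$ (all proper subsets of $\{1,2,3\}$), with $\size(\Phi)=2$, three vertices, and hence $\eta(\Phi)=4$. The sketch below (ours) checks relation~(\ref{eq:2}) for $m=3$ and the palindromic condition~(\ref{eq:11}) for $n=4$.

```python
from math import comb

def h_vec(Phi, n):
    """Long h-vector via h_l = sum_k (-1)^(l-k) C(n-k, l-k) f_k."""
    f = [0] * (n + 1)
    for F in Phi:
        f[len(F)] += 1
    return [sum((-1) ** (l - k) * comb(n - k, l - k) * f[k]
                for k in range(l + 1))
            for l in range(n + 1)]

# Boundary complex of a triangle on [3]: all proper subsets of {1,2,3}.
bd = [frozenset(s) for s in
      [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3)]]
size = 2   # size(Phi) = max |F|; 3 vertices, so eta(Phi) = 4

# Relation (eq:2) with m = 3: h_l = (-1)^(m - size) h_(m-l)
h3 = h_vec(bd, 3)
assert all(h3[l] == (-1) ** (3 - size) * h3[3 - l] for l in range(4))

# For n = 4 (n >= eta(Phi), n = size(Phi) mod 2), h(Phi; n) is palindromic,
# i.e. a left eigenvector of U(4) with eigenvalue 1:
h4 = h_vec(bd, 4)
assert h4 == h4[::-1]
```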
We come to the following conclusion:
Let $\Phi$ be a DS-system with $\#\Phi>0$, and let $n$ be a
positive integer satisfying conditions~(\ref{eq:8}). Let
$l\in\mathbb{N}$, $l\leq n$.
\begin{itemize}
\item[\rm(i)]
\begin{align}
\nonumber \kappa_l\bigl(\pmb{h}(\Phi;n),
\bH^{\blacktriangle}_n\bigr)&=\kappa_l\bigl(\pmb{f}(\Phi;n),
\bF^{\blacktriangle}_n\bigr)\\ &=(-1)^{n-l}f_l(\Phi;n)\ ;\\
\nonumber \kappa_l\bigl(\pmb{h}(\Phi;n),
\bH^{\blacktriangledown}_n\bigr)&=\kappa_l\bigl(\pmb{f}(\Phi;n),
\bF^{\blacktriangledown}_n\bigr)\\ &=h_l(\Phi;n)=h_{n-l}(\Phi;n)\
;\\ \kappa_l\bigl(\pmb{h}(\Phi;n),
\bF^{\blacktriangle}_n\bigr)&=\kappa_l\bigl(\pmb{h}(\Phi;n),
\bF^{\blacktriangledown}_n\bigr)\ .
\end{align}
\item[\rm(ii)] If $\mathsf{P}$ is the profile of a partition of
$\Phi$ into Boolean intervals, then the following equalities hold:
\begin{gather}
\sum_{i,j}{\mathsf p}_{ij}\cdot(-1)^{i+j}
\binom{j}{l-i}=(-1)^n\sum_{i,j}{\mathsf
p}_{ij}\cdot\binom{i}{l-j}\ ;\\ \sum_{i,j}{\mathsf
p}_{ij}\cdot(-1)^j\binom{n-i-j}{l-i}=(-1)^n\sum_{i,j}{\mathsf
p}_{ij}\cdot(-1)^j\binom{n-i-j}{l-j}\ ;\\ \nonumber
\sum_s\binom{s}{l}\sum_{i,j}{\mathsf p}_{ij}\cdot(-1)^j
\binom{n-i-j}{s-j}
\\ =(-1)^n\sum_s\binom{n-s}{l}\sum_{i,j}{\mathsf p}_{ij}\cdot(-1)^j
\binom{n-i-j}{s-j}\ .
\end{gather}
\end{itemize}
\newpage
{\footnotesize
\begin{table}[ht]
\caption{Change of basis matrices} \label{table:1}
\begin{center}
\begin{tabular}{|c|c|c|c|} \hline
\em Change of basis matrix & \em $(i,j)$th entry & \em Notation &
\em Case $m=3$\\ \hline\hline
from $\bS_m$ to $\bF^{\blacktriangle}_m$ & &
$\left(\begin{smallmatrix}\boldsymbol{\varphi}^{\blacktriangle}(0;m)\\
\vdots\\
\boldsymbol{\varphi}^{\blacktriangle}(m;m)\end{smallmatrix}\right)$
& \\
from $\bH^{\bullet}_m$ to $\bH^{\blacktriangle}_m$ &
$\binom{i}{j}$ & & $\left(\begin{smallmatrix}1&0&0&0\\ 1&1&0&0\\
1&2&1&0\\ 1&3&3&1\end{smallmatrix}\right)$ \\
from $\bH^{\blacktriangledown}_m$ to $\bF^{\blacktriangledown}_m$
& & &
\\ \hline
from $\bF^{\blacktriangle}_m$ to $\bS_m$ & &
$\left(\begin{smallmatrix}\boldsymbol{\varphi}^{\blacktriangle}(0;m)\\
\vdots\\
\boldsymbol{\varphi}^{\blacktriangle}(m;m)\end{smallmatrix}\right)^{-1}$
&
\\
from $\bH^{\blacktriangle}_m$ to $\bH^{\bullet}_m$ &
$(-1)^{i+j}\binom{i}{j}$ & & $\left(\begin{smallmatrix}1&0&0&0\\
-1&1&0&0\\ 1&-2&1&0\\ -1&3&-3&1\end{smallmatrix}\right)$
\\
from $\bF^{\blacktriangledown}_m$ to $\bH^{\blacktriangledown}_m$
& & &
\\ \hline\hline
from $\bS_m$ to $\bF^{\blacktriangledown}_m$ & &
$\left(\begin{smallmatrix}\boldsymbol{\varphi}^{\blacktriangledown}(0;m)\\
\vdots\\
\boldsymbol{\varphi}^{\blacktriangledown}(m;m)\end{smallmatrix}\right)$
& \\
from $\bH^{\bullet}_m$ to $\bH^{\blacktriangledown}_m$ &
$\binom{i}{m-j}$ & & $\left(\begin{smallmatrix}0&0&0&1\\ 0&0&1&1\\
0&1&2&1\\ 1&3&3&1\end{smallmatrix}\right)$ \\
from $\bH^{\blacktriangledown}_m$ to $\bF^{\blacktriangle}_m$ & &
&
\\ \hline
from $\bF^{\blacktriangledown}_m$ to $\bS_m$ & &
$\left(\begin{smallmatrix}\boldsymbol{\varphi}^{\blacktriangledown}(0;m)\\
\vdots\\
\boldsymbol{\varphi}^{\blacktriangledown}(m;m)\end{smallmatrix}\right)^{-1}$
& \\
from $\bH^{\blacktriangledown}_m$ to $\bH^{\bullet}_m$ &
$(-1)^{m-j-i}\binom{m-i}{j}$ & &
$\left(\begin{smallmatrix}-1&3&-3&1\\ 1&-2&1&0\\ -1&1&0&0\\
1&0&0&0\end{smallmatrix}\right)$ \\
from $\bF^{\blacktriangle}_m$ to $\bH^{\blacktriangledown}_m$ & &
& \\ \hline\hline
from $\bS_m$ to $\bH^{\blacktriangle}_m$ & $(-1)^j\binom{m-i}{j}$
&
$\left(\begin{smallmatrix}\boldsymbol{\vartheta}^{\blacktriangle}(0;m)\\
\vdots\\
\boldsymbol{\vartheta}^{\blacktriangle}(m;m)\end{smallmatrix}\right)$
& $\left(\begin{smallmatrix}1&-3&3&-1\\ 1&-2&1&0\\ 1&-1&0&0\\
1&0&0&0\end{smallmatrix}\right)$ \\ \hline
from $\bH^{\blacktriangle}_m$ to $\bS_m$ &
$(-1)^{m-j}\binom{i}{m-j}$ &
$\left(\begin{smallmatrix}\boldsymbol{\vartheta}^{\blacktriangle}(0;m)\\
\vdots\\
\boldsymbol{\vartheta}^{\blacktriangle}(m;m)\end{smallmatrix}\right)^{-1}$
& $\left(\begin{smallmatrix}0&0&0&1\\ 0&0&-1&1\\ 0&1&-2&1\\
-1&3&-3&1\end{smallmatrix}\right)$ \\ \hline\hline
from $\bS_m$ to $\bH^{\blacktriangledown}_m$ & &
$\left(\begin{smallmatrix}\boldsymbol{\vartheta}^{\blacktriangledown}(0;m)\\
\vdots\\
\boldsymbol{\vartheta}^{\blacktriangledown}(m;m)\end{smallmatrix}\right)$,
or &
\\
& $\delta_{m-i,j}$ & $\mathbf{U}(m)$, or &
$\left(\begin{smallmatrix}0&0&0&1\\ 0&0&1&0\\ 0&1&0&0\\
1&0&0&0\end{smallmatrix}\right)$
\\
from $\bH^{\blacktriangledown}_m$ to $\bS_m$ & &
$\left(\begin{smallmatrix}\boldsymbol{\vartheta}^{\blacktriangledown}(0;m)\\
\vdots\\
\boldsymbol{\vartheta}^{\blacktriangledown}(m;m)\end{smallmatrix}\right)^{-1}$
&
\\ \hline
\end{tabular}
\end{center}
\end{table}
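The first two blocks of Table~\ref{table:1} pair each change of basis matrix with its inverse; for instance, the binomial matrix with entries $\binom{i}{j}$ and its signed counterpart with entries $(-1)^{i+j}\binom{i}{j}$ are mutually inverse for any $m$. A quick numerical check of this classical identity (Python sketch, for the $m=3$ case displayed in the table):

```python
from math import comb

def matmul(a, b):
    """Plain list-of-lists matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

m = 3
lower_pascal = [[comb(i, j) for j in range(m + 1)]
                for i in range(m + 1)]                      # entries binom(i,j)
signed_pascal = [[(-1) ** (i + j) * comb(i, j) for j in range(m + 1)]
                 for i in range(m + 1)]                     # (-1)^{i+j} binom(i,j)
identity = [[int(i == j) for j in range(m + 1)] for i in range(m + 1)]
```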
$\quad$\newline $\quad$\newline $\quad$\newline $\quad$\newline
$\quad$\newline
\begin{center}
\begin{tabular}{|c|c|c|c|} \hline
\em Change of basis matrix & \em $(i,j)$th entry & \em Notation &
\em Case $m=3$\\ \hline\hline
from $\bS_m$ to $\bH^{\bullet}_m$ & $(-1)^{j-i}\binom{m-i}{j-i}$ &
$\left(\begin{smallmatrix}\boldsymbol{\vartheta}^{\bullet}(0;m)\\
\vdots\\
\boldsymbol{\vartheta}^{\bullet}(m;m)\end{smallmatrix}\right)$, or
& $\left(\begin{smallmatrix}1&-3&3&-1\\ 0&1&-2&1\\ 0&0&1&-1\\
0&0&0&1\end{smallmatrix}\right)$ \\ & & $\mathbf{S}(m)$&\\ \hline
from $\bH^{\bullet}_m$ to $\bS_m$ & $\binom{m-i}{j-i}$ &
$\left(\begin{smallmatrix}\boldsymbol{\vartheta}^{\bullet}(0;m)\\
\vdots\\
\boldsymbol{\vartheta}^{\bullet}(m;m)\end{smallmatrix}\right)^{-1}$,
or & $\left(\begin{smallmatrix}1&3&3&1\\ 0&1&2&1\\ 0&0&1&1\\
0&0&0&1\end{smallmatrix}\right)$\\ & & $\mathbf{S}(m)^{-1}$&\\
\hline\hline
from $\bF^{\blacktriangle}_m$ to $\bH^{\blacktriangle}_m$ &
$(-1)^j2^{m-j-i}\binom{m-i}{j}$ & &
$\left(\begin{smallmatrix}8&-12&6&-1\\ 4&-4&1&0\\ 2&-1&0&0\\
1&0&0&0\end{smallmatrix}\right)$ \\ \hline
from $\bH^{\blacktriangle}_m$ to $\bF^{\blacktriangle}_m$ &
$(-1)^{m-j}2^{i+j-m}\binom{i}{m-j}$ & &
$\left(\begin{smallmatrix}0&0&0&1\\ 0&0&-1&2\\ 0&1&-4&4\\
-1&6&-12&8\end{smallmatrix}\right)$\\ \hline\hline
from $\bF^{\blacktriangle}_m$ to $\bF^{\blacktriangledown}_m$ & &
& \\
from $\bF^{\blacktriangledown}_m$ to $\bF^{\blacktriangle}_m$ & &
& \\
& $(-1)^{m-j}\binom{m-i}{m-j}$ & &
$\left(\begin{smallmatrix}-1&3&-3&1\\ 0&1&-2&1\\ 0&0&-1&1\\
0&0&0&1\end{smallmatrix}\right)$ \\
from $\bH^{\blacktriangle}_m$ to $\bH^{\blacktriangledown}_m$ & &
& \\
from $\bH^{\blacktriangledown}_m$ to $\bH^{\blacktriangle}_m$ & &
& \\ \hline\hline
from $\bH^{\blacktriangle}_m$ to $\bF^{\blacktriangledown}_m$ &
$(-1)^{m-j}\sum_{s=\max\{m-i,m-j\}}^m\binom{i}{m-s}\binom{s}{m-j}$
& & $\left(\begin{smallmatrix}-1&3&-3&1\\ -1&4&-5&2\\ -1&5&-8&4\\
-1&6&-12&8\end{smallmatrix}\right)$\\ \hline
from $\bF^{\blacktriangledown}_m$ to $\bH^{\blacktriangle}_m$ &
$(-1)^{m-j}\sum_{s=0}^{\min\{m-i,m-j\}}\binom{m-i}{s}\binom{m-s}{j}$
& & $\left(\begin{smallmatrix}-8&12&-6&1\\ -4&8&-5&1\\ -2&5&-4&1\\
-1&3&-3&1\end{smallmatrix}\right)$ \\ \hline\hline
from $\bH^{\bullet}_m$ to $\bF^{\blacktriangle}_m$ &
$\sum_{s=0}^{\min\{i,j\}}\binom{i}{s}\binom{m-s}{m-j}$ & &
$\left(\begin{smallmatrix}1&3&3&1\\ 1&4&5&2\\ 1&5&8&4\\
1&6&12&8\end{smallmatrix}\right)$\\ \hline
from $\bF^{\blacktriangle}_m$ to $\bH^{\bullet}_m$ &
$(-1)^{i+j}\sum_{s=\max\{i,j\}}^m\binom{m-i}{m-s}\binom{s}{j}$ & &
$\left(\begin{smallmatrix}8&-12&6&-1\\ -4&8&-5&1\\ 2&-5&4&-1\\
-1&3&-3&1\end{smallmatrix}\right)$\\ \hline\hline
from $\bH^{\bullet}_m$ to $\bF^{\blacktriangledown}_m$ &
$2^{i+j-m}\binom{i}{m-j}$ & & $\left(\begin{smallmatrix}0&0&0&1\\
0&0&1&2\\ 0&1&4&4\\ 1&6&12&8\end{smallmatrix}\right)$\\ \hline
from $\bF^{\blacktriangledown}_m$ to $\bH^{\bullet}_m$ &
$(-2)^{m-j-i}\binom{m-i}{j}$ & &
$\left(\begin{smallmatrix}-8&12&-6&1\\ 4&-4&1&0\\ -2&1&0&0\\
1&0&0&0\end{smallmatrix}\right)$\\ \hline
\end{tabular}
\end{center}
}
{\footnotesize
\begin{table}[ht]
\caption{Representations (based on the profile of a partition of
$\Phi\subseteq\mathbf{2}^{[m]}$, $\#\Phi>0$, into Boolean
intervals) of $\pmb{f}(\Phi;m)$ and $\pmb{h}(\Phi;m)$ with respect
to various bases} \label{table:2}
\begin{center}
\begin{tabular}{|c|c|} \hline
\em $l$th component & \em Expression \\ \hline\hline
$f_l(\Phi;m)$ & $\sum_{i,j}{\mathsf p}_{ij}\cdot\binom{i}{l-j}$\\
\hline
$\kappa_l\bigl(\boldsymbol{\pmb{f}}(\Phi;m),
\bH^{\bullet}_m\bigr)$ & $\sum_s\binom{m-s}{m-l}\sum_{i,j}{\mathsf
p }_{ij}\cdot \binom{i}{s-j} $\\ \hline
$\kappa_l\bigl(\boldsymbol{\pmb{f}}(\Phi;m),
\bF^{\blacktriangle}_m\bigr)$ & $(-1)^l\sum_{i,j}{\mathsf p
}_{ij}\cdot(-1)^{i+j}\binom{j}{l-i}$\\ \hline
$\kappa_l\bigl(\boldsymbol{\pmb{f}}(\Phi;m),
\bH^{\blacktriangle}_m\bigr)$ &
$(-1)^{m-l}\sum_s\binom{s}{m-l}\sum_{i,j}{\mathsf
p}_{ij}\cdot\binom{i}{s-j} $\\ \hline
$\kappa_l\bigl(\boldsymbol{\pmb{f}}(\Phi;m),
\bF^{\blacktriangledown}_m\bigr)$ & $(-1)^{m-l}\sum_{i,j}{\mathsf
p}_{ij}\cdot(-1)^j\binom{m-i-j}{l-i}$\\ \hline
$\kappa_l\bigl(\boldsymbol{\pmb{f}}(\Phi;m),
\bH^{\blacktriangledown}_m\bigr)$ & $\sum_{i,j}{\mathsf p
}_{ij}\cdot\binom{i}{m-l-j}$\\ \hline \hline
$h_l(\Phi;m)$ & $(-1)^l\sum_{i,j}{\mathsf
p}_{ij}\cdot(-1)^j\binom{m-i-j}{l-j}$\\ \hline
$\kappa_l\bigl(\boldsymbol{\pmb{h}}(\Phi;m),
\bH^{\bullet}_m\bigr)$ & $\sum_{i,j}{\mathsf
p}_{ij}\cdot\binom{i}{l-j}$\\ \hline
$\kappa_l\bigl(\boldsymbol{\pmb{h}}(\Phi;m),
\bF^{\blacktriangle}_m\bigr)$ &
$(-1)^l\sum_s\binom{s}{l}\sum_{i,j}{\mathsf p}_{ij}\cdot(-1)^j
\binom{m-i-j}{s-j}$\\ \hline
$\kappa_l\bigl(\boldsymbol{\pmb{h}}(\Phi;m),
\bH^{\blacktriangle}_m\bigr)$ & $(-1)^l \sum_{i,j}{\mathsf
p}_{ij}\cdot(-1)^{i+j}\binom{j}{l-i}$\\ \hline
$\kappa_l\bigl(\boldsymbol{\pmb{h}}(\Phi;m),
\bF^{\blacktriangledown}_m\bigr)$ &
$(-1)^{m-l}\sum_s\binom{m-s}{l}\sum_{i,j}{\mathsf
p}_{ij}\cdot(-1)^j \binom{m-i-j}{s-j}$\\ \hline
$\kappa_l\bigl(\boldsymbol{\pmb{h}}(\Phi;m),
\bH^{\blacktriangledown}_m\bigr)$ & $(-1)^{m-l} \sum_{i,j}{\mathsf
p}_{ij}\cdot(-1)^j\binom{m-i-j}{l-i}$\\ \hline
\end{tabular}
\end{center}
\end{table}
}
\newpage
\section{INTRODUCTION}
Lattice QCD (LQCD) is the only complete definition of
perturbative and non-perturbative QCD, but is also a technique
with a history of results that deviate from experiment by 10-20\%.
This is beginning to change. Recent advances in LQCD culminated in
the precision calculations of nine, previously measured, diverse
quantities~\cite{Davies}, that agree with experiment within a
few per cent. This could not have come at a better time, as the
era of experimental precision quark flavor physics we are now in,
depends crucially on the precise calculation of non-perturbative
quantities in the beauty sector. How will the community know if
the lattice calculations of these quantities are correct? Charm at
threshold can provide the data necessary to test the calculations,
and an experiment operating there, CLEO-c, has just begun.
\subsection{Big Questions in Flavor Physics}
The big questions in quark flavor physics are: (1) ``What is the
dynamics of flavor?'' The gauge forces of the standard model (SM)
do not distinguish between fermions in different generations. The
electron, muon, and tau all have the same electric charge; quarks
of different generations have the same color charge. Why
generations? Why three? (2) ``What is the origin of
baryogenesis?'' Sakharov gave three criteria, one is
$CP$-violation~\cite{Sakharov}. There are only three known
examples of $CP$-violation: the Universe, and the beauty and kaon
sectors. However, SM $CP$-violation is too small, by many orders
of magnitude, to give rise to the baryon asymmetry of the
Universe. Additional sources of $CP$-violation are needed. (3)
``What is the connection between flavor physics and electroweak
symmetry breaking?''
Extensions of the SM, for example
supersymmetry, contain flavor and $CP$-violating couplings that
should show up at some level in flavor physics but precision
measurements and precision theory are required to detect the new
physics.
\subsection{Flavor Physics Today}
This is the decade of precision flavor physics. In the
``$\sin 2 \beta$ era'', the goal is to over-constrain the CKM
matrix with a range of measurements in the quark flavor changing
sector of the SM at the per cent level. If inconsistencies are
found between, for example, measurements of the sides and angles
of the $B_d$ unitarity triangle, it will be evidence for new
physics. Many experiments will contribute including BaBar and
Belle, CDF, D0, and BTeV at Fermilab, ATLAS, CMS, and LHC-b at the
LHC, CLEO-c, and experiments studying rare kaon decays.
However, the study of weak interaction phenomena, and the
extraction of quark mixing matrix parameters remain limited by our
capacity to deal with non-perturbative strong interaction
dynamics.
Current constraints on the CKM matrix are shown in Fig.~\ref{CKM}(a).
The widths of the constraints, except that of $\sin 2 \beta$,
are dominated by the error bars on the calculation of
hadronic matrix elements. Techniques such as lattice QCD directly
address strongly coupled theories and may eventually set the pace
of progress in many areas of particle
physics. Recent advances in LQCD have produced calculations of
non-perturbative quantities such as $f_\pi$, $f_K$, and heavy
quarkonia mass splittings that agree with
experiment~\cite{Davies}.
Several per cent precision in charm and beauty decay constants and
form factors is hoped for, but the path to higher precision is
hampered by the absence of accurate charm data against which to
test lattice techniques.
\begin{figure*}[btp]
\includegraphics[width=0.47\textwidth]{CKM_fitter_2004_aug12}
\hfill
\includegraphics[width=0.47\textwidth]{CKM_fitter_2004_theory2percentaug12}
\vskip -2em
\caption{Lattice impact on the $B_d$ unitarity
triangle from $B_d$ and $B_s$ mixing, $|V_{ub}| / |V_{cb}|$,
$\epsilon_K$, and $\sin 2 \beta$.
(a) Summer 2004 status of the constraints.
(b) Prospects under the assumption that LQCD
calculations of $B$ system decay constants and semileptonic form
factors achieve the projections in Table~\ref{table:combine}.}
\label{CKM}
\vskip -0.5em
\end{figure*}
\subsection{CLEO-c and the Lattice}
To meet this challenge the CLEO collaboration has converted CLEO
and CESR into a charm and QCD factory operating at charm threshold
where the experimental conditions are
optimal~\cite{CLEO-cyellowbook}. In a pilot run in 2003 CLEO-c
recorded a data sample about one fiftieth of the design size
that has already allowed the most precise measurements of several
quantities that are important tests of LQCD, including $f_{D^+}$
and ${\cal B}(D^0 \rightarrow \pi^- e^+ \nu_e)$, or that set the
scale for heavy quark physics, including ${\cal B}(D^+
\rightarrow K^- \pi^+ \pi^+)$. Beginning September 2004 CLEO-c
will obtain charm data samples one to two orders of magnitude
larger than any previous experiment.
This data has the potential to provide unique and crucial tests of
LQCD with accuracies of 1-2\%.
If LQCD passes the CLEO-c test,
the community will have much greater confidence in lattice
calculations of decay constants and semileptonic form factors in
beauty physics. When these calculations are combined with
500~fb$^{-1}$ of $B$ factory data, and improvement in the direct
measurement of $|V_{tb}|$ expected from the Tevatron
experiments~\cite{Swain}, they will allow a significant reduction
in the size of the errors on the quark couplings $|V_{ub}|, |V_{cb}|,
|V_{td}| {\rm ~and~} |V_{ts}|$, quantitatively and qualitatively
transforming knowledge of the $B_d$ unitarity triangle, see
Fig.~\ref{CKM}(b), and thereby maximizing the sensitivity of
heavy quark physics to new physics.
Of equal importance, LQCD combined with CLEO-c allows a
significant advance in understanding and control over
strongly-coupled, non-perturbative quantum field theories in
general. Field theory is generic, but weak coupling is not. Two of
the three known interactions are strongly coupled: QCD and gravity
(string theory). An understanding of strongly coupled theories may
well be a crucial element in helping to interpret new phenomena at
the
high energy frontier.
\section{TESTS OF LQCD WITH CHARM}
\subsection{Decay Constants}
The $B_d$ $(B_s)$ meson mixing probability can be used to
determine $ |V_{td}|$ $(|V_{ts}|)$.
\begin{equation}
\Delta m_d \propto |V_{tb}V_{td}|^2 f_{B_d}^2 B_{B_d}
\end{equation}
The $B_d$ mixing rate is measured with exquisite precision
(1.4\%)~\cite{PDG2004} but the decay constant is calculated with a
precision of about 15\%. If theoretical precision could be
improved to 3\%, $|V_{td}|$ would be known to about 5\% without
any need for improvement in the experimental measurement.
Since LQCD hopes to predict $f_B/f_{D^+}$ with a small error,
measuring $f_{D^+}$ would allow a precision prediction for $f_B$.
Hence a precision extraction of $|V_{td}|$ from the $B_d$ mixing
rate becomes possible. Similar considerations apply to $B_s$
mixing once it is measured, i.e., a precise determination of
$f_{D_s^+}$ would allow a precision prediction for $f_{B_s}$ and
consequently a precision measurement of $|V_{ts}|$. Finally, the
ratio of the two neutral $B$ meson mixing rates determines $|V_{td}| /
|V_{ts}|$; since $|V_{ts}| \simeq |V_{cb}|$ by unitarity and $|V_{cb}|$ is known
to a few per cent, the ratio effectively determines $|V_{td}|$. Which
method of determining $|V_{td}|$ will have the greater utility
depends on which combination of hadronic matrix elements has the
smallest error.
Charm leptonic decays can be used to measure the charm decay
constants $f_{D_s^+}$ and $f_{D^+}$ because $|V_{cs}|$ and
$|V_{cd}|$ are known from unitarity to 0.1\% and 1\% respectively.
\begin{equation}
{ {{\cal B}(D^+ \rightarrow \mu \nu_\mu) }\over {\tau_{D^+} } }=
{\rm (const.)} f_{D^+}^2 |V_{cd}|^2
\end{equation}
(Charge conjugation is implied throughout this paper.) The
measurements also provide a precision test of the lattice
calculations of $f_{D_s^+}$ and $f_{D^+}$. At the start of 2004
$f_{D^+}$ was experimentally undetermined and $f_{D_s^+}$ was
known to 33\%.
\subsection{Semileptonic form factors}
$V_{ub}$ measures the length of the side opposite the angle
$\beta$ in the $B_d$ unitarity triangle and consequently it is a
powerful check of the consistency of the CKM matrix paradigm of
$CP$-violation. $|V_{ub}|$ is determined from beauty semileptonic
decay
\begin{equation}
{{d\Gamma(B \rightarrow \pi e^- \bar\nu_e)} \over {dq^2}} = { \rm
(const.)} |V_{ub}|^2f_+(q^2)^2
\label{eq:Bsemi}
\end{equation}
The differential rate depends on a form factor, $f_+(q^2)$ that
parameterizes the strong interaction non-perturbative effects. A
recent representative value of $|V_{ub}|$ determined from $B
\rightarrow \pi \ell^- \bar{\nu_e}$ is~\cite{Ali}:
\begin{equation}
|V_{ub}| = (3.27 \pm 0.70 \pm 0.22^{+0.85}_{-0.51}) \times 10^{-3}
\end{equation}
where the uncertainties are experimental statistical and
systematic, and from the LQCD calculation of the form factor,
respectively.
The large experimental errors are expected to
be reduced to 5\% with $B$ factory data samples of
$500 {\rm fb}^{-1}$ each, and the theory error will dominate.
Again, because the charm CKM matrix elements are known from unitarity,
the differential charm semileptonic rate
\begin{equation}
{{d\Gamma(D \rightarrow \pi e^+ \nu_e)} \over {dq^2}} ={\rm
(const.)} |V_{cd}|^2f_+(q^2)^2
\end{equation}
tests calculations of charm semileptonic form factors.
Thus, a precision measurement tests the LQCD
calculation of the $D \rightarrow \pi$ form factor.
As the form
factors governing $B \rightarrow \pi e^- \bar{\nu_e}$ and $D
\rightarrow \pi e^+ \nu_e$ are related by heavy quark symmetry,
the charm test gives confidence in the accuracy of the $B
\rightarrow \pi$ calculation. The $B$ factories can then use a
tested LQCD prediction of the $B \rightarrow \pi$ form factor to
extract a precise value of $|V_{ub}|$ from Eq.~(\ref{eq:Bsemi}).
At the start of 2004,
${\cal B} (D \rightarrow \pi e^+ \nu_e)$ had been determined to
45\%~\cite{PDG2004,PDG2004_C}, but the absolute value of the $D
\rightarrow \pi$ form factor had not been measured.
\section{FIRST RESULTS FROM CESR-c AND CLEO-c}
The Cornell Electron Storage Ring (CESR) has been upgraded to
CESR-c with the installation of 12 wiggler magnets to increase
damping at low energies. Six wigglers were installed in the summer
of 2003 and the remainder this summer. Between September 2003 and
March 2004 a CLEO-c pilot run accumulated $57.1~{\rm pb}^{-1}$ at the
$\psi(3770)$, about three times larger than any previous sample
collected at this energy. The accelerator achieved a luminosity of
$L = 4.6\times 10^{31}~{\rm cm}^{-2}{\rm s}^{-1}$, as anticipated.
Starting in September 2004 CLEO-c will take data at
$\sqrt{s} \sim 3770$~MeV,
$\sqrt{s} \sim 4140$~MeV, and
$\sqrt{s} \sim 3100$~MeV ($J/\psi$).
The design luminosity at these energies
ranges from $5 \times 10^{32}~{\rm cm^{-2} s^{-1}}$ down to about
$1 \times 10^{32}~{\rm cm^{-2} s^{-1}}$ yielding 3~fb$^{-1}$ each
at the $\psi^{\prime\prime}$ and at $\sqrt{s} \sim 4140$~MeV above
$D_s \bar{D_s}$ threshold, and 1~fb$^{-1}$ at the $J/\psi$ in a
Snowmass year of $10^7~{\rm s}$. These integrated luminosities
correspond to samples of 20 million $D \bar{D}$ pairs, 1.5 million
$D_s \bar{D_s}$ pairs, and one billion $J/\psi$
decays~\cite{CLEO-cyellowbook}. These datasets will exceed those
of the BESII (Mark III) experiment by factors of 130 (480), 110
(310) and 20 (170), respectively.
The CLEO-c detector is a minimal modification of the well
understood CLEO III detector. A silicon vertex detector was
replaced with a small-radius low-mass drift chamber, and the
magnetic field was lowered to 1.0~T from $1.5~{\rm T}$.
CLEO-c is the first
modern detector to operate at charm threshold.
\subsection{Analysis Technique }
There are significant advantages to running at charm threshold. As
$\psi(3770) \rightarrow D \bar{D}$, the strategy is to fully
reconstruct one $D$ meson in a hadronic final state, which is
referred to as the tag, and then to analyze the decay of the
second $D$ meson in the event to extract inclusive or exclusive
properties. A typical event, in which both $D$ mesons have been
reconstructed, is shown in Fig.~\ref{event_hadronic}.
\begin{figure}[btp]
\centerline{\epsfig{figure=kpipi_kpipi_bw.eps,width=0.45\textwidth}}
\vskip -1.0em \caption{ A CLEO-c event where $D^+ \rightarrow K^-
\pi^+ \pi^+, D^- \rightarrow K^+ \pi^- \pi^-$.} \vskip -1.5em
\label{event_hadronic}
\end{figure}
As $E_{\rm beam}=E_D$, a requirement that the candidate have energy
close to the beam energy is made, and the beam-constrained
candidate mass,
$M(D) = \sqrt{E_{{\rm beam}}^2 - p_{{\rm cand}}^2}$,
is computed. The $M(D)$ distribution for the mode $D^+
\rightarrow K^- \pi^+ \pi^+$ is shown in Fig.~\ref{single}.
The signal to noise,
which is optimal at threshold, is about 50:1.
\begin{figure}[btp]
\centerline{\epsfig{figure=0140804-032.eps,width=0.45\textwidth}}
\vskip -2.0em
\caption{Distribution of calculated $M(D)$ values for single tag
$D$ candidates in the mode $D^+ \rightarrow K^- \pi^+ \pi^+$.
Preliminary. } \label{single}
\vskip -1.5em
\end{figure}
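The beam-constrained mass defined above can be sketched numerically; the inputs below are illustrative round numbers for the $\psi(3770)$, not CLEO-c measurements.

```python
import math

def beam_constrained_mass(e_beam, p_cand):
    """M(D) = sqrt(E_beam^2 - |p_cand|^2), in GeV (natural units).
    Substituting the beam energy for the measured candidate energy
    sharpens the mass resolution, because E_beam is known far more
    precisely than the reconstructed energy."""
    return math.sqrt(e_beam ** 2 - p_cand ** 2)

# Illustrative values: at the psi(3770), E_beam ~ 1.885 GeV, and a
# correctly reconstructed D has momentum of a few hundred MeV,
# giving M(D) near the D mass of ~1.87 GeV.
m_d = beam_constrained_mass(1.885, 0.26)
```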
Charm mesons have many large branching ratios to low multiplicity
final states. In consequence the tagging efficiency is very high,
about 25\%; this compares to less than 1\% for $B$
tagging at a $B$ factory.
Tagging creates a single $D$ meson beam of known momentum. This is
a particularly favorable experimental situation.
Figure~\ref{double} shows $M(D)$ of the second $D$ meson in events
where both $D$ mesons have decayed into the $K^- \pi^+ \pi^+$
final state.
\begin{figure}[btp]
\centerline{\epsfig{figure=0140804-027.eps,width=0.45\textwidth}}
\vskip -2.0em
\caption{Projection of the double tag $D^+D^-$ candidate masses
onto the $M(D)$ axis for $D^+ \rightarrow K^- \pi^+ \pi^+, D^-
\rightarrow K^+ \pi^- \pi^-$. Preliminary.} \label{double}
\vskip -1.5em
\end{figure}
These double tag events are pristine.
They are key to making absolute branching fraction measurements:
\begin{equation}
{\cal B} (D^+ \rightarrow K^- \pi^+ \pi^+) = { {N(K^- \pi^+
\pi^+)}\over { \epsilon (K^- \pi^+ \pi^+) \times N(D^-)}}
\end{equation}
where $N(K^- \pi^+ \pi^+)$ is the number of $D^+ \rightarrow K^-
\pi^+ \pi^+$ observed in tagged events, $\epsilon (K^- \pi^+
\pi^+)$ is the reconstruction efficiency and $N(D^-)$ is the
number of tagged events. In a method similar to that pioneered by
Mark III~\cite{Balt,Adler}, CLEO fits to the observed single
tag and double tag yields for five $D^+$ and $D^0$ modes, and
finds the preliminary branching ratios listed in
Table~\ref{table:br}. The statistical errors are comparable to
previous measurements, while the preliminary systematic errors are
likely to be reduced in the near future. This is the most precise
measurement of ${ \cal B} (D^+\to K^-\pi^+\pi^+) $.
\begin{table}[tbp]
\caption{Preliminary CLEO-c absolute charm branching ratios.
Further detail in Ref.~\cite{CLEO_had}.}
\label{table:br}
\begin{tabular}{@{}ll}
\hline
Mode & ${\cal B}$ (\%) \\
\hline
$D^0 \rightarrow K^- \pi^+$ & $3.92 \pm 0.08 \pm 0.23$ \\
$D^0\to K^-\pi^+\pi^0 $ & $14.3 \pm 0.3 \pm 1.0 $ \\
$D^0\to K^-\pi^+\pi^+\pi^- $ & $8.1 \pm 0.2 \pm 0.9$ \\
$D^+\to K^-\pi^+\pi^+ $ & $9.8 \pm 0.4 \pm 0.8$ \\
$ D^+\to K_S\pi^+ $ & $1.61 \pm 0.08 \pm 0.15$ \\
\hline
\end{tabular}
\vskip -1.5em
\end{table}
The fit also returns the number of $D$ meson pairs, from which the
cross section is obtained:
\begin{equation}
\sigma(e^+ e^- \rightarrow D \bar D) =
(6.06 \pm 0.13 \pm 0.32)~{\rm nb}
\end{equation}
where the uncertainties are statistical and systematic,
respectively. The cross section is independent of charm branching
ratios.
The CLEO-c $\psi(3770)$ integrated luminosity goal of $3~{\rm
fb^{-1}}$ may sound small compared to the $500~{\rm fb^{-1}}$
expected at each of the $B$ factories.
The ability to
perform a tagged analysis is comparable at the two facilities, however,
because the tagging efficiency is about 25 times larger at a charm
factory than at a $B$ factory, and the cross section is about six
times larger. Hence,
\begin{equation}
{ N(B~{\rm tags~at~a~}B~{\rm factory}) \over
N(D~{\rm tags~at~a~charm~factory}) } \sim 1.
\end{equation}
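The tag-yield comparison can be made concrete with round numbers, since $N({\rm tags}) \propto {\cal L}\,\sigma\,\epsilon$; the $B$-factory cross section and the exact efficiencies below are assumptions for illustration only.

```python
# Rough tag-yield comparison, N ~ luminosity * cross_section * efficiency,
# in relative units. Assumed round numbers: B factory 500 fb^-1,
# sigma(BBbar) ~ 1.1 nb, tag efficiency ~ 1%; charm factory 3 fb^-1,
# sigma(DDbar) ~ 6 nb, tag efficiency ~ 25%.
b_factory_tags = 500 * 1.1 * 0.01
charm_factory_tags = 3 * 6.0 * 0.25
ratio = b_factory_tags / charm_factory_tags   # of order unity
```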
The absolute branching ratios ${\cal B}(D^+\to K^-\pi^+\pi^+)$, ${\cal B}(D^0
\rightarrow K^-\pi^+)$, and ${\cal B} (D_s^+ \rightarrow \phi
\pi^+ )$ are important as, currently, all other $D^+$, $D^0$ and
$D_s^+$ branching ratios are determined from ratios to one or the
other of these branching fractions~\cite{PDG2004}. In consequence,
nearly all branching fractions in the $B$ and $D$ sectors depend
on these reference modes. Projections for the expected precision
with which the reference branching ratios will be measured with
the full CLEO-c data set are given in Table~\ref{table:brproj}.
CLEO-c will set the scale for all heavy quark measurements.
\begin{table}[tbp]
\caption{CLEO-c hadronic branching ratio projections.
Further detail in Ref.~\cite{CLEO-cyellowbook}.}
\label{table:brproj}
\begin{tabular}{@{}lll}
\hline
Mode & \multicolumn{2} {c} { $ \delta {\cal B} / {\cal B} $ (\%) } \\
& PDG 2004 & CLEO-c \\ \hline
$D^0 \rightarrow K^- \pi^+$ & 2.4\% & 0.6\% \\
$D^+\to K^-\pi^+\pi^+ $ & 6.1\% & 0.7\% \\
$D_s^+ \rightarrow \phi \pi $ & 12.5\%~\cite{BaBar} & 1.9\% \\
\hline
\end{tabular}
\vskip -1.5em
\end{table}
\subsection{Measurement of the Charm Decay Constant }
The measurement of the leptonic decay $D^+ \rightarrow \mu^+
\nu_\mu$ benefits from the fully tagged $D^-$ at the $\psi(3770)$.
One observes a single charged track recoiling against the tag that
is consistent with a muon of the correct sign. Events with
energetic electromagnetic showers not associated with the tag are
rejected. The missing mass $MM^2 = m_{\nu}^2$ is computed; it
peaks at zero for a decay where only a neutrino is unobserved. The
clear definition of the initial state, the cleanliness of the tag
reconstruction, and the absence of additional fragmentation tracks
make this measurement straightforward and nearly background-free.
The $MM^2$ distribution is shown in Fig.~\ref{missmass}.
\begin{figure}[btp]
\centerline{\epsfig{figure=mm2_new_2.eps,width=0.45\textwidth}}
\vskip -1.0em
\caption{ The $MM^2$ distribution in events with $D^-$ tag, a
single charged track of the correct sign, and no additional
(energetic) showers. The insert shows the signal region for $D^+
\rightarrow \mu \nu_\mu$.
A $\pm 2 \sigma$ range is
indicated by the arrows.
Preliminary. } \label{missmass}
\vskip -1.5em
\end{figure}
There are 8 candidate signal events, and $1.07 \pm 1.07$ background
events. After correcting for efficiency, CLEO-c finds
\begin{equation}
{\cal B} (D^+ \rightarrow \mu^+ \nu_\mu) = (3.5 \pm 1.4 \pm 0.6)
\times 10^{-4},
\end{equation}
where the uncertainties are statistical and systematic,
respectively. Under the assumption of three generation unitarity,
and using the precisely known $D^+$ lifetime, CLEO-c obtains
\begin{equation}
f_{D^+} = (201 \pm 41 \pm 17) {\rm ~MeV}.
\end{equation}
This is the most precise measurement of $f_{D^+}$~\cite{BESIII_f}.
The combined experimental error is 22\% while the LQCD error
reported at this conference is 10\% ~\cite{Wingate}. With the full
CLEO-c data sample a 2\% error for $f_{D^+}$ is expected. Similar
precision is expected for $f_{D_s^+}$ at $\sqrt{s}= 4140$~MeV.
\subsection{Measurement of the Charm Semileptonic Form Factors }
The measurement of semileptonic decays is also based on the use of
tagged events.
A tagged event where the second $D$ decays semileptonically is
shown in Fig.~\ref{event_semilep}.
\begin{figure}[btp]
\centerline{
\includegraphics[width=0.45\textwidth]{kenu_bw}
} \vskip -1.0em \caption{ A CLEO-c event where $D^0 \rightarrow
K^- e^+ \nu_e, \bar{D^0}\rightarrow K^+ \pi^-$. }
\label{event_semilep} \vskip -1.5em
\end{figure}
The analysis procedure, using
$D^0 \rightarrow \pi^- e^+ \nu_e$ as an example, is as follows. A
positron and a hadronic track are identified recoiling against the
tag. The quantity $U = E_{miss}- P_{miss}$ is calculated, where
$E_{miss}$ and $P_{miss}$ are the missing energy and missing
momentum in the event. $U$ peaks at zero if only a neutrino is
missing. The $U$ distribution in data is shown in
Fig.~\ref{fig:pi_rhoenu}(a) where a remarkably clean signal of about
100 events is observed for $D \rightarrow \pi e^+ \nu_e$.
\begin{figure*}[btp]
\centerline{
\epsfig{figure=pienu_U_plot.eps,width=0.45\textwidth}
\hfill
\epsfig{figure=rhoenu_U_plot.eps,width=0.45\textwidth}
}
\vskip -1.0em
\caption{
The $U=E_{miss}-P_{miss}$ distribution in events with a
$\bar{D^0}$ tag, a positron, either
(a) a single charged track of the correct sign
or
(b) a $\rho^- \rightarrow \pi^- \pi^0$, \emph{and}
no additional (energetic) showers.
The peaks at zero and 0.13~GeV correspond to
(a) $D^0 \rightarrow \pi^- e^+ \nu_e$ and
$D^0 \rightarrow K^- e^+ \nu_e$
or
(b) $D^0 \rightarrow \rho^- e^+ \nu_e$ and
$ D^0 \rightarrow K^{*-} e^+ \nu_e$.
Preliminary. }
\label{fig:pi_rhoenu}
\end{figure*}
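The $U$ variable behaves as described: when only a massless neutrino is missing, $E_{miss}=|\vec{p}_{miss}|$ and $U$ peaks at zero. A minimal sketch (illustrative values only):

```python
import math

def u_variable(e_miss, p_miss):
    """U = E_miss - |p_miss|, with p_miss a 3-vector (GeV).
    For a decay missing only a massless neutrino, E_miss = |p_miss|
    and U = 0; a missing massive or multi-particle system shifts U."""
    return e_miss - math.sqrt(sum(p * p for p in p_miss))

# A massless neutrino carrying 0.5 GeV along x: U vanishes.
u_nu = u_variable(0.5, (0.5, 0.0, 0.0))
# A missing massive system with the same momentum: U shifts positive.
u_massive = u_variable(0.7, (0.5, 0.0, 0.0))
```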
The kinematic power of running at threshold also allows previously
unobserved modes such as $D^0 \rightarrow \rho^- e^+ \nu_e$ to be
easily identified; see Fig.~\ref{fig:pi_rhoenu}(b).
CLEO-c results are given in Table~\ref{table:slbr}.
\begin{table}[tbp]
\caption{CLEO-c charm semileptonic branching ratios.
Further detail in Ref.~\cite{CLEO_semileptonic}.}
\label{table:slbr}
\begin{tabular}{@{}ll}
\hline
Mode & ${\cal B}$ (\%) \\
\hline
$D^0\to \pi^-e^+\nu_e $ & $0.25 \pm 0.03 \pm 0.25 $ \\
$D^0\to K^-e^+\nu_e $ & $3.52 \pm 0.10 \pm 0.9$ \\
$D^0\to \rho^-e^+\nu_e $ & $2.07 \pm 0.28 \pm 0.18$ \\
$ D^0\to K^{*-}e^+\nu_e $ & $0.19 \pm 0.04 \pm 0.02$ \\
\hline
\end{tabular}
\vskip -1.5em
\end{table}
This modest data sample has
already produced the most precise determination of
${\cal B} (D^0 \rightarrow \pi^- e^+ \nu_e ) $.
With the full data set,
CLEO-c will make a significant improvement in the precision
with which each absolute charm semileptonic branching ratio is
known, see Table~\ref{table:slbr_proj}.
\begin{table}[tbp]
\caption{CLEO-c absolute semileptonic branching ratio projections.
(Some PDG2004 values are an average of $e$ and $\mu$.)
Further detail in Ref.~\cite{CLEO-cyellowbook}.}
\label{table:slbr_proj}
\begin{tabular}{@{}lll}
\hline
Mode & \multicolumn{2} {c} { $ \delta {\cal B} / {\cal B} $ (\%) } \\
& PDG 2004 & CLEO-c \\ \hline
$ D^0\to K^-e^+\nu_e$ &5 &0.4 \\
$D^0\to \pi^-e^+\nu_e $ & 45 & 1.0 \\
$D^+ \rightarrow \pi^0 e^+ \nu_e $ & 48 & 2.0 \\
$D_s^+ \rightarrow \phi e^+ \nu_e $ & 25 & 3.1 \\
\hline
\end{tabular}
\vskip -1.5em
\end{table}
The $q^2$ resolution is about 0.025 GeV$^{2}$, more than a
factor of 10 better than the 0.4 GeV$^{2}$ achieved by
CLEO III~\cite{Hsu}. This huge improvement is due to the
unique kinematics at the $\psi(3770)$ resonance, i.e. that the $D$
mesons are produced almost at rest. The combination of large
statistics and excellent kinematics will enable the absolute
magnitudes and shapes of the form factors in every charm
semileptonic decay to be measured, in many cases to a precision of
a few per cent. This is a stringent test of LQCD.
By taking ratios of semileptonic and leptonic rates, CKM factors
can be eliminated. Two such ratios are ${\Gamma(D^+ \rightarrow
\pi^0 e^+ \nu_e)} / {\Gamma (D^+ \rightarrow \mu \nu_\mu)} $ and $
{\Gamma(D_s^+ \rightarrow (\eta {\rm ~or~} \phi ) e^+ \nu_e)} /
{\Gamma (D_s^+ \rightarrow \mu \nu_\mu)} $.
These ratios depend purely on hadronic matrix elements and can be
determined to 4\% and so will test amplitudes at the 2\% level.
This is an exceptionally stringent test of LQCD.
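The factor of two between the rate-ratio precision and the amplitude precision simply reflects the quadratic dependence of a rate on its amplitude:

```latex
\Gamma \propto |A|^2
\quad\Longrightarrow\quad
\frac{\delta\Gamma}{\Gamma} \simeq 2\,\frac{\delta A}{A},
```

so a 4\% determination of a rate ratio probes the hadronic amplitudes at the 2\% level.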
If LQCD passes the experimental tests outlined above it will be
possible to use the LQCD calculation of the $B \rightarrow \pi$
form factor with confidence at the $B$ factories to extract a
precision $|V_{ub}|$ from $B \rightarrow \pi e^- \bar\nu_e$. BaBar
and Belle will also be able to compare the LQCD prediction of the
shape of the $B \rightarrow \pi$ form factor to data as an
additional cross check.
Successfully passing the experimental tests will also allow CLEO-c
to use LQCD calculations of the charm semileptonic form factors to
directly measure $|V_{cd}|$ and $|V_{cs}|$, currently known to
7\% and 11\%~\cite{PDG2004}, with a greatly improved
precision of better than 2\% for each element. This in turn allows
new unitarity tests of the CKM matrix. For example, the second row
of the CKM matrix can be tested at the 3\% level;
the first column of the CKM matrix
will be tested with similar precision to the first row (which is
currently the most stringent test of CKM unitarity); finally, the
ratio of the long sides of the $uc$ unitarity triangle will be
tested to 1.3\%.
Table~\ref{table:combine} provides a summary of projections for
the precision with which the CKM matrix elements will be
determined if LQCD passes the CLEO-c tests in the $D$ system. In
the tabulation the current precision of the CKM matrix elements is
obtained by considering methods applicable to LQCD, for example
the determination of $|V_{cb}|$ and $|V_{ub}|$ from inclusive decays
and OPE is not included. The projections are made assuming $B$
factory data samples of 500~fb$^{-1}$ and improvement in the
direct measurement of $|V_{tb}|$ expected from the Tevatron
experiments~\cite{Swain}.
\begin{table}[tbp]
\caption{LQCD impact (in per cent) on the precision of CKM matrix elements.
Further detail in Ref.~\cite{CLEO-cyellowbook}.}
\label{table:combine}
\centering
\begin{tabular}{lrrrrrr}
\hline
~~ & $V_{cd}$ & $V_{cs}$ & $V_{cb}$ & $V_{ub}$ & $V_{td}$ & $V_{ts}$ \\
\hline
2004 & 7 & 11 & 4 & 15 & 36 & 39 \\
LQCD & 1.7 & 1.6 & 3 & 5 & 5 & 5 \\
\hline
\end{tabular}
\vskip -1.5em
\end{table}
\subsection{Probing QCD with Heavy Quarkonia}
Here the twin goals are to verify the theoretical tools for
strongly coupled field theories and quantify the accuracy for
application to flavor physics. As the same actions are used in
both onia and $B/D$ calculations, onia provide an independent
calibration of $c$ and $b$ quark actions used in $B/D$ physics.
Heavy quarkonium is the richest calibration and testing ground for
lattice techniques.
In the $\psi$ and $\Upsilon$ systems there are more than thirty
gold plated (few \%) lattice calculations now possible.
Measurements of masses and spin fine structure for $S$, $P$, and $D$
states reveal the magnitude of relativistic corrections and the
nature of confinement. The measurement of leptonic widths for $S$
states tests wave-function techniques that are important for
calculating decay constants, while electromagnetic transitions for
$P \rightarrow S$ and $S \rightarrow P$ matrix elements are
related to calculations of semileptonic form factors.
Recently, there has been an order of magnitude increase in the
data available to test predictions: bottomonium at CLEO III and
charmonium at BES II and CLEO-c. One noteworthy discovery has been
the observation of the $1^3D_J$ states. The $b\bar{b}$ system is
unique as it has states with $L = 2$ that lie below the open-flavor
threshold.
These states are of considerable theoretical
interest~\cite{bib:EFI01-14}.
The mass of the $\Upsilon(1^3D_2)$
tests the lattice at large~$L$. CLEO has observed the
$\Upsilon(1^3D_2)$ state in the four-photon cascade $\Upsilon(3S)
\rightarrow \gamma_1\chi'_b \rightarrow \gamma_1\gamma_2
\Upsilon(1^3D_J) \rightarrow \gamma_1\gamma_2\gamma_3 \chi_b
\rightarrow \gamma_1\gamma_2\gamma_3\gamma_4\ell^+\ell^-$,
finding~\cite{Upsilon1D}
$M(\Upsilon(1^3D_2)) = (10161.1 \pm 0.6 \pm 1.6)$~MeV/$c^2$,
in good agreement
with~\cite{Davies}. Some other important goals are the observation
of the $\eta_b$ and $h_b$ in the $\Upsilon$ system and the $h_c$
in the charmonium system.
\subsection {Glueballs and hybrid states}
QCD is the only known theory in nature where gauge particles can
also be constituents. Glueballs and hybrids are fundamental
states of the theory and the current lack of strong, unambiguous
evidence for their existence is a challenge to QCD. If glueballs
are observed this will be a major discovery in particle physics
and a highly nontrivial test of lattice QCD~\cite{Morningstar}.
The approximately one billion $J/\psi$ produced at CLEO-c will be
a glue factory to search for glueballs and other glue-rich states
via $J/\psi \rightarrow \gamma gg \rightarrow \gamma X$ decays.
The region $1 < M_X < 3$~GeV/$c^2$ will be explored with partial
wave analyses for evidence of scalar or tensor glueballs,
glueball-$q\bar{q}$ mixtures, quark-glue hybrids and other new
forms of matter. The goals include the establishment of masses,
widths, spin-parity quantum numbers, decay modes and production
mechanisms for any identified states, a detailed exploration of
reported glueball candidates such as
the scalar states $f_0(1370)$, $f_0(1500)$ and $f_0(1710)$, and
the examination of the inclusive photon spectrum $J/\psi
\rightarrow \gamma X$ with photon resolution better than 20~MeV,
allowing identification of states with widths up to 100~MeV and
inclusive branching ratios above $1 \times 10^{-4}$.
\section{ THE EXPERIMENTER'S VIEW}
\subsection{The bottom line}
How can we be sure that if LQCD works for $D$ mesons it will work
for $B$ mesons? Or, equivalently, are CLEO-c data sufficient to
demonstrate that lattice systematic errors are under control?
There are a number of reasons to answer this question in the
affirmative. (1) There are two independent effective field
theories: NRQCD and the Fermilab method. (2) The CLEO-c data
provide many independent tests in the $D$ system; leptonic decay
rates, and semileptonic modes with rate and shape information. (3)
The $B$ factory data provide additional independent cross checks
such as $ { d \Gamma(B \rightarrow \pi \ell \nu ) / d p_\pi}$. (4)
Unlike models, methods used for the $D/B$ system can be tested in
heavy onia with measurements of masses, and mass splittings,
$\Gamma_{ee}$ and electromagnetic transitions. (5) The main
systematic errors limiting accuracy in the $D/B$ systems are:
chiral extrapolations in $m_{\rm light}$, perturbation
theory, and finite lattice spacing. These are similar for
charm and beauty quarks. In my opinion a combination of CLEO-c
data in the $D$ systems and onia, plus information on the light
quark hadron spectrum, can clearly establish whether or not
lattice systematic errors are under control.
While this picture is encouraging, experimentalists also have
concerns. The lattice technique is all encompassing but LQCD
practitioners are very conservative about what can be calculated.
Much of the excitement this summer in flavor physics revolves
around whether $\sin 2\beta (\psi K_S^0) = \sin 2 \beta (\phi
K_S^0 )$, and also the observation of $A_{CP}$ in $B \rightarrow K
\pi$~\cite{ACP}. The lattice is not yet able to contribute in
these areas. There is a need to move beyond gold-plated quantities
in the next few years: for example, resonances such as $\rho$, $\phi$,
and $K^*$ may be difficult to treat on the lattice, but they
feature in many important $D$ semileptonic decays which will be
well measured by CLEO-c. There is also a pressing need to be able
to calculate for states near threshold such as $\psi(2S)$ and
$D_s(0)^+$, and hadronic weak decays as well.
\subsection {Systematic Errors}
It will take accurate and precise experimental measurements
combined with accurate and precise theoretical calculations to
search for new physics in the CKM matrix. Therefore, it is
essential to chase down each and every source of systematic error
in lattice calculations.
Usually, by far the most demanding part of a precision
experimental measurement is the careful evaluation of the
systematic errors. Therefore one way an experimenter evaluates the
quality of a measurement is by the completeness of the systematic
error analysis. As lattice results increase in precision,
experimentalists will expect to see full error reporting and
discussion of errors with every lattice calculation. So, with
experimentalists, phenomenologists, and lattice colleagues in mind
lattice results should:
\begin{enumerate}
\item Include a comprehensive table of systematic errors with {\em
every} calculation. Many calculations already have this. An error
budget makes it more straightforward to compare results from
different groups. It is understood that different methods will
have somewhat different lists.
\item Include a statement of whether an error is Gaussian or
non-Gaussian. Errors are often estimates of higher order terms in
a truncated expansion, so the quoted error bar is non-Gaussian.
For the statistical error a distribution could be provided.
\item Report the correlation between individual sources of
systematic error (if such correlation exists).
\item Provide a total systematic error by suitably combining
individual errors. This is redundant and should not replace the
individual error breakdown, but it is certainly convenient.
\end{enumerate}
\subsection{Outlook}
I will begin this section with a few quotes that summarize the
outlook over the next few years.
``Expect to see a growing number of lattice results for gold
plated quantities within the next few years with an ultimate goal
of a few \% errors within five years." A prominent lattice
theorist (2003).
``Prediction is better than postdiction." Every experimentalist
(every time).
``We need high precision experimental results in order to test
lattice QCD, we need CLEO-c for $D$ decays." A prominent lattice
theorist (2003).
``CLEO-c may have a few \% preliminary determination of $f_{D^+}$
as early as the summer conferences in 2005." A CLEO-c
collaboration member (summer, 2004).
A more precise unquenched lattice calculation of $f_{D^+}$ with a
complete error report, released {\em before} the CLEO-c result from the
first full run is announced next summer, would clearly demonstrate
the {\em current} precision of the lattice approach to the
community and give credibility to the goal of a few \% errors.
A similar argument applies to the calculation of the form factors
in $D \rightarrow K/ \pi e \nu_e$ by summer 2005 and $f_{D_s^+}$
and form factors in $D_s^+$ semileptonic decays by summer 2006. It
must be noted, however, that the precision of the CLEO-c results,
and when that precision is reached, depend crucially on the
luminosity performance of CESR-c.
\section{SUMMARY}
This is a special time in flavor physics. The lattice goal is to
calculate to a few percent precision in the $D, B, \Upsilon, {\rm
~and~} \psi$ systems. CLEO-c, and later BES III, is about to
provide few per cent precision tests of lattice calculations in
the $D$ system and in heavy onia, which will quantify the accuracy
for the application of LQCD to the B system. Then BaBar, Belle,
CDF, D0, BTeV, CMS, ATLAS, and LHC-b data, in combination with
LQCD will lead to a few per cent determinations of $|V_{ub}|,
|V_{cb}|, |V_{td}|,$ and $|V_{ts}|$.
To borrow from the title of Ref.~\cite{Davies}: precision LQCD
confronts experiment, but equally, precision experiment confronts
LQCD. The combination of LQCD and CLEO-c has the potential to
maximize the sensitivity of the flavor physics program to new
physics this decade and pave the way for understanding beyond the
SM physics at the LHC in the coming decade.
\section{ACKNOWLEDGEMENTS}
I thank my colleagues on the CLEO Collaboration for allowing me to
include many CLEO results in this talk. I thank Elisabetta
Barberio, Tom Browder, Toru Ijima and Bruce Yabsley (Belle), David
Hitlin, Jeff Richman, Brian Meadows, and David Williams (BaBar),
Daniela Bortoletto, Art Garfinkel and Stefano Giagu (CDF), Vivek
Jain, John Womersley, and Daria Zieminska (D0) and Ed Blucher
(KTeV) for making results and projections from their respective
experiments available. I am grateful to the following
for valuable discussions on the relationship between the lattice
and CLEO-c: Nora Brambilla, Aida El-Khadra, Andreas Kronfeld,
Peter Lepage, Vittorio Lubicz, Paul Mackenzie, Roberto Mussa, Jim
Napolitano, Matthias Neubert, Jon Rosner, Anders Ryd, Kam Seth,
Jim Simone, Sheldon Stone, Matt Wingate, and Jim Wiss. Finally, thanks to the
organizers of Lattice 2004 for a stimulating and superbly run
conference.
\section{Introduction}
Softly broken supersymmetric models contain a fairly large number of
scalar fields not present in the standard model. Their existence leads
to a complicated scalar potential, which might contain undesirable
minima which spontaneously break charge and/or color symmetry, a
situation which can not happen within the Standard Model. The
condition that the ``realistic'' minimum is the global minimum of the
theory can be used to obtain restrictions on the parameter space of
supersymmetric models, as already realized more than 20 years ago
\cite{Frere:1983ag,Claudson:1983et,Nilles:1982mp}. This way a
disadvantage of supersymmetry may turn into a virtue by shedding some
light into the unknown supersymmetry breaking mechanism itself.
Due to the enormous complexity of the full scalar potential in the
minimal supersymmetric extension of the standard model (MSSM) early
papers on this
subject~\cite{Frere:1983ag,Claudson:1983et,Nilles:1982mp,Komatsu:1988mt}
have only analyzed particular, but especially dangerous directions in
field--space. Casas et al.~\cite{casas:1997ze} have presented a more
detailed analysis of this subject. They were able to show that in the
constrained MSSM (CMSSM) with minimal supergravity boundary conditions
strong constraints arise ruling out sizeable parts of the parameter
space~\cite{casas:1997ze}.
Similar studies in R--parity violating versions of the MSSM, however,
have not been published~\footnote{The work of Abel and Savoy
\cite{Abel:1998ie} contains a discussion on the possibility of
lifting flat directions by adding explicit trilinear R--parity
violating terms to the superpotential. However, they discuss the
impact of bilinear terms only briefly. This is our main emphasis.}.
Our main goal is to present a detailed analysis of the
``unbounded--from--below'' (UFB) directions as well as the
charge/colour breaking (CCB) minima in the bilinear R--parity breaking model
(RMSSM)~\cite{Diaz:1997xc}. This model breaks lepton number and
R--parity explicitly through the simplest bilinear terms. The
justification for such emphasis is threefold.
First, it represents the simplest possible scheme of R--parity
violation, a mere six parameter extension of the MSSM. It is therefore
interesting to investigate the ``stability'' of the MSSM against such
an ``innocuous'' perturbation. For this reason we can also call this
model the generalized MSSM where R--parity breaks in the minimal way.
Second, this model is motivated by the fact that it produces the
paradigm for the idea that supersymmetry is the origin of neutrino
mass~\cite{Hirsch:2004he}, leading to a pattern of neutrino
masses~\cite{hirsch:2000ef} that successfully describes current
neutrino data~\cite{Maltoni:foc}. Last, but not least, it represents
the only model of R--parity breaking consistent with a spontaneous
violation of R--parity~\cite{Masiero:1990uj,Romao:1992vu}, where it is
the vacuum, not the fundamental theory, that breaks the symmetry.
In this model the atmospheric neutrino mass scale \cite{Fukuda:1998mi}
is generated at the tree--level, through the mixing of the three
neutrinos with the neutralinos~\cite{Ellis:1984gi}, in an effective
``low--scale'' variant of the seesaw mechanism. In contrast, the solar
mass and mixings needed to account for solar neutrino
data~\cite{Ahmad:2002jz,Eguchi:2002dm} are generated
radiatively~\cite{hirsch:2000ef}.
A very important difference between such a supersymmetric approach to
the origin of neutrino mass and seesaw--type schemes, is that here the
dimension--five operator responsible for (Majorana) neutrino masses is
generated at an accessibly low energy scale -- namely the weak scale.
This makes this model potentially testable by experiment.
In fact it has been shown that such a low--scale scheme for neutrino
masses has the advantage of being testable also ``outside'' the realm
of neutrino physics experiments. Although neutrino properties cannot
be predicted from first principles, interpreting current neutrino data
in this framework implies unambiguous tests of the theory at
accelerator
experiments~\cite{Mukhopad:1998xj,Porod:2000hv,Hirsch:2002ys,Chun:2002rh,Hirsch:2003fe}
which can potentially be used to falsify the model.
This paper is organized as follows. In the next section we will
briefly recall some basics of the discussion on CCB and UFB bounds in
the MSSM. This will serve as a basis for section 3, where we will
discuss new features related to the R--parity violating terms. We show
how the bounds from unbounded--from--below directions have to be
modified, once non--zero bilinear R--parity violating (BRpV) terms are
allowed. We point out the novel possibility to generate a non--zero
vacuum expectation value of the charged Higgs field, albeit in regions
of parameter space which are now excluded by neutrino physics
\cite{Maltoni:foc}. We show that, given current data on neutrino
masses, bilinear R--parity violation can be understood as a small
perturbation of the MSSM. From the point of view of charge breaking
minima the RMSSM is thus as safe (or unsafe) as the MSSM itself. We
will then close with a short summary.
\section{Review of the MSSM results on UFB and CCB}
\noindent
To set up the notation, the superpotential of the MSSM can
be written as
\begin{eqnarray}
W&=&\varepsilon_{ab}\left[
h_U^{ij}\widehat Q_i^a\widehat U_j\widehat H_u^b
+h_D^{ij}\widehat Q_i^b\widehat D_j\widehat H_d^a
+h_E^{ij}\widehat L_i^b\widehat R_j\widehat H_d^a
-\mu\widehat H_d^a\widehat H_u^b
\right].
\label{eq:W}
\end{eqnarray}
Here, $h_U^{ij}$, $h_D^{ij}$ and $h_E^{ij}$ are $3\times 3$ Yukawa
matrices, $\widehat Q$, $\widehat U$ and $\widehat D$ are quark doublet and
singlet superfields and $\widehat L$ and $\widehat R$ are the usual lepton
doublet and singlet fields.
Supersymmetry must be broken and the most general set of soft breaking terms
allowed by the standard model gauge group under the assumption of
lepton number conservation can be written as
\begin{eqnarray}
\label{Vsoft}
{V}_{SB}&\hskip-5mm=\hskip-5mm&
M_Q^{ij2}\widetilde Q^{a*}_i\widetilde Q^a_j+M_U^{ij2}
\widetilde U_i\widetilde U^*_j+M_D^{ij2}\widetilde D_i
\widetilde D^*_j
+M_L^{ij2}\widetilde L^{a*}_i\widetilde L^a_j
+M_R^{ij2}\widetilde R_i\widetilde R^*_j
+\sum_{i=1}^2 m_{H_i}^2 H^{a*}_i H^a_i \cr
&&+\left[
- \ifmath{{\textstyle{1 \over 2}}} \sum_{i=1}^3 M_i\lambda_i\lambda_i
+\varepsilon_{ab}\left(
A_U^{ij}h_U^{ij}\widetilde Q_i^a\widetilde U_j H_u^b
+A_D^{ij}h_D^{ij}\widetilde Q_i^b\widetilde D_j H_d^a
+A_E^{ij}h_E^{ij}\widetilde L_i^b\widetilde R_j H_d^a
\right.\right.\cr
&&\left. \left. \hskip 45mm
-B\mu H_d^a H_u^b
\right) +h.c. \vb{18} \right]
\end{eqnarray}
The Higgs doublets giving mass to the standard model fermions are
\begin{equation}
\label{eq:2}
H_d=\left(
\begin{array}{l}
H_d^0\\
H_d^-
\end{array}
\right),
\qquad
H_u=\left(
\begin{array}{l}
H_u^+\\
H_u^0
\end{array}
\right)
\end{equation}
and the parameters in Eq. (\ref{Vsoft}) are to be understood at some
renormalization scale $Q$ chosen to minimize the effects of the one loop
corrections. This way we can neglect in the analysis the effect of the one
loop radiative corrections \cite{casas:1997ze}.
Without loss of generality, we now consider that the fields take the
following vev's\footnote{Our normalization here for the vev's differs
from Refs. \cite{Diaz:1997xc,hirsch:2000ef} by a factor of
$\sqrt{2}$.},
\begin{equation}
\label{eq:3}
\vev{H_u^+}=0,\quad \vev{H_d^-}=v_{-},\quad \vev{H_d^0}=v_d,\quad
\vev{H_u^0}=v_u
\end{equation}
to obtain
\begin{eqnarray}
\label{eq:4}
V_{\hbox{Higgs}}&=&\left(m^2_{H_u} + \mu^2\right) v_u^2
+\left(m^2_{H_d} + \mu^2\right) \left(v_d^2 +v_{-}^2 \right)
- 2 B \mu v_u v_d
-\frac{1}{2} g^2 v_u^2 v_d^2\nonumber \\[+2mm]
&&+
\frac{1}{8} \left(g^2 +g'^2\right) \left(v_u^4+v_d^4+v_{-}^4+2 v_d^2
v_{-}^2\right)
+\frac{1}{4} \left(g^2 -g'^2\right) \left( v_d^2 + v_{-}^2\right) v_u^2
\end{eqnarray}
This Higgs potential is minimized at $v_{-}=0$. To see this
we note that the potential can be written in the form,
\begin{equation}
\label{eq:5}
V_{\hbox{Higgs}}= C_4 v_{-}^4 + C_2 v_{-}^2 + C_0
\end{equation}
where
\begin{eqnarray}
\label{eq:6}
C_4&=&\frac{1}{8} \left(g^2+g'^2\right)\nonumber \\
C_2&=&\frac{1}{4} \left(g^2-g'^2\right) v_u^2 +\frac{1}{4}
\left(g^2+g'^2\right) v_d^2 + \left(m^2_{H_d} + \mu^2\right)\nonumber \\
C_0&=&\frac{1}{8} \left(g^2+g'^2\right) \left(v_u^2-v_d^2\right)^2
+\left(m^2_{H_u} + \mu^2\right) v_u^2
+\left(m^2_{H_d} + \mu^2\right) v_d^2
-2 B \mu v_u v_d
\end{eqnarray}
Now since $g>g'$ we must have $C_2 >0$, unless $m^2_{H_d} + \mu^2 <0$
\footnote{Casas et al.~\cite{casas:1997ze} assume that only $m^2_{H_u}
+ \mu^2$ can be negative. Even though in mSugra at very large
$\tan\beta$ values $m^2_{H_d} + \mu^2 <0$ can occur in exceptional
cases, we will follow their assumption.}. Therefore the minimum of
the Higgs potential occurs for vanishing vev of the charged Higgs
boson.
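The decomposition of Eq.~(\ref{eq:4}) into Eqs.~(\ref{eq:5}) and (\ref{eq:6}) can be checked mechanically. The following sympy sketch (ours, not part of the original analysis; \texttt{vm} stands for $v_-$ and \texttt{gp} for $g'$) expands the difference and finds it to be identically zero:

```python
# Verify that the Higgs potential Eq. (eq:4) equals
# C4*vm**4 + C2*vm**2 + C0 with the coefficients of Eq. (eq:6).
import sympy as sp

vu, vd, vm, g, gp, mHu2, mHd2, mu, B = sp.symbols(
    "vu vd vm g gp mHu2 mHd2 mu B", real=True)
k = g**2 + gp**2  # shorthand for g^2 + g'^2

V = ((mHu2 + mu**2)*vu**2 + (mHd2 + mu**2)*(vd**2 + vm**2)
     - 2*B*mu*vu*vd - sp.Rational(1, 2)*g**2*vu**2*vd**2
     + sp.Rational(1, 8)*k*(vu**4 + vd**4 + vm**4 + 2*vd**2*vm**2)
     + sp.Rational(1, 4)*(g**2 - gp**2)*(vd**2 + vm**2)*vu**2)

C4 = sp.Rational(1, 8)*k
C2 = (sp.Rational(1, 4)*(g**2 - gp**2)*vu**2
      + sp.Rational(1, 4)*k*vd**2 + (mHd2 + mu**2))
C0 = (sp.Rational(1, 8)*k*(vu**2 - vd**2)**2
      + (mHu2 + mu**2)*vu**2 + (mHd2 + mu**2)*vd**2 - 2*B*mu*vu*vd)

diff = sp.expand(V - (C4*vm**4 + C2*vm**2 + C0))
print(diff)  # -> 0
```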
By using the minimization equations,
\begin{eqnarray}
\label{eq:7}
0&=&-2 B \mu v_d + 2 \left(m^2_{H_u} + \mu^2\right) v_u - \frac{1}{2}
\left(g^2+g'^2\right) \left(v_d^2-v_u^2\right) v_u \nonumber \\[+2mm]
0&=&-2 B \mu v_u + 2 \left(m^2_{H_d} + \mu^2\right) v_d + \frac{1}{2}
\left(g^2+g'^2\right) \left(v_d^2-v_u^2\right) v_d
\end{eqnarray}
one can find the value of the Higgs potential at the realistic minimum,
\begin{eqnarray}
\label{eq:8}
V_{MIN}&=&-\frac{1}{8} \left(g^2+g'^2\right)
\left(v_u^2-v_d^2\right)^2
\end{eqnarray}
Eq. (\ref{eq:8}) will be important to compare with the values of other
(and potentially deeper) minima.
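Eq.~(\ref{eq:8}) follows by eliminating the soft masses with the minimization conditions Eq.~(\ref{eq:7}). A sympy sketch (ours; $a=m^2_{H_u}+\mu^2$, $b=m^2_{H_d}+\mu^2$) makes this explicit:

```python
# Verify Eq. (eq:8): substituting the minimization conditions
# Eq. (eq:7) into the Higgs potential at v_- = 0 gives
# V_MIN = -(g^2+g'^2)/8 (vu^2-vd^2)^2.
import sympy as sp

vu, vd, g, gp, B, mu, a, b = sp.symbols("vu vd g gp B mu a b", real=True)
k = g**2 + gp**2  # g^2 + g'^2

# Eq. (eq:7), with a = m_Hu^2 + mu^2 and b = m_Hd^2 + mu^2
eq1 = -2*B*mu*vd + 2*a*vu - sp.Rational(1, 2)*k*(vd**2 - vu**2)*vu
eq2 = -2*B*mu*vu + 2*b*vd + sp.Rational(1, 2)*k*(vd**2 - vu**2)*vd
sol = sp.solve([eq1, eq2], [a, b])

# Higgs potential at v_- = 0 (the quartic terms of Eq. (eq:4)
# collapse to k/8 (vu^2 - vd^2)^2 there)
V0 = a*vu**2 + b*vd**2 - 2*B*mu*vu*vd \
    + sp.Rational(1, 8)*k*(vu**2 - vd**2)**2
Vmin = sp.simplify(V0.subs(sol))
residual = sp.simplify(Vmin + sp.Rational(1, 8)*k*(vu**2 - vd**2)**2)
print(residual)  # -> 0
```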
Before starting the discussion of the dangerous directions, a word of
caution should be added, namely, that the condition that the realistic
minimum is the global one might actually be too conservative. In fact,
it is possible that the universe resides in a false vacuum which is
stable because the tunneling time into the global minimum is large
with respect to the age of the universe. In this sense, CCB and UFB
constraints on the supersymmetric parameter space are sufficient but
might not be necessary, see for example
\cite{Kusenko:1996jn,Kusenko:1996xt}. However, we will not follow
this line of reasoning any further.
\subsection{UFB directions}
The 'unbounded--from--below' (UFB) directions are those where the
quartic D--terms vanish and some coefficient(s) quadratic in the vev's
are negative. Then the potential at the weak scale
seems to be unbounded from below. However, this is a slight
misnomer, since if one assumes that all soft masses are positive at
the high unification scale, it appears that these dangerous directions
are not really unbounded from below but there exists a true local
minimum at some large scale. It then must be checked that this local
minimum is not deeper than the physical one. As was shown in Ref.
\cite{casas:1997ze} there are three kinds of such directions. The
first and most obvious one corresponds to the D--flat direction where
$|v_u|=|v_d|$, all other vev's being zero. The potential along this
direction reads,
\begin{equation}
\label{eq:10}
V_{UFB-1}=\left(m^2_{H_u} + m^2_{H_d} +2 \mu^2 -2 |B \mu| \right) v_u^2
\end{equation}
and a sufficient condition to avoid developing a deep minimum at large
values of the field is
\begin{equation}
\label{eq:11}
m^2_{H_u} + m^2_{H_d} +2 \mu^2 -2 |B \mu| >0.
\end{equation}
In principle, one should check the depth of the true minimum along the
dangerous direction when this coefficient is negative. For simplicity, we
will stick however to the condition given in Eq. (\ref{eq:11}).
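That the direction $|v_u|=|v_d|$ is indeed D--flat, so that only the quadratic terms of Eq.~(\ref{eq:10}) survive, can be checked with a short sympy sketch (ours; we set $v_-=0$):

```python
# The quartic (D-term) part of Eq. (eq:4) at v_- = 0 vanishes
# identically along vd = vu, leaving the potential of Eq. (eq:10).
import sympy as sp

vu, vd, g, gp = sp.symbols("vu vd g gp", real=True)

quartic = (-sp.Rational(1, 2)*g**2*vu**2*vd**2
           + sp.Rational(1, 8)*(g**2 + gp**2)*(vu**4 + vd**4)
           + sp.Rational(1, 4)*(g**2 - gp**2)*vd**2*vu**2)

dflat = sp.expand(quartic.subs(vd, vu))
print(dflat)  # -> 0: the direction |vu| = |vd| is D-flat
```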
The second dangerous direction corresponds to the case where a slepton
$L_i$ takes a vev $v_i$. Then a combination of $v_u, v_d$ and $v_i$
can cancel the D--term and the potential reads,
\begin{equation}
\label{eq:12}
V_{UFB-2}=\left( m^2_{H_u} + \mu^2 + m^2_{L_i} - \frac{| B
\mu|^2}{m^2_{H_d}+\mu^2 -m^2_{L_i}} \right) v_u^2 -\frac{2
m^4_{L_i}}{g^2+g'^2}
\end{equation}
which constrains the coefficient of the quadratic term as
\begin{equation}
\label{eq:ufb2}
m^2_{H_u} + \mu^2 + m^2_{L_i} - \frac{| B
\mu|^2}{m^2_{H_d}+\mu^2 -m^2_{L_i}} > 0.
\end{equation}
Note that in the case of a universal $m_0$ at the unification scale
the $m_{L_i}$ are usually the smallest soft masses at the weak scale.
Dropping the universality assumption, the bound obtained for $m_{L_i}$,
Eq. (\ref{eq:ufb2}), must be verified for the squark soft masses as
well.
Finally the last UFB direction corresponds to the case where $v_d=0$
but we have a neutral slepton $L_i$ with a nonzero vev, as in the
UFB-2 case. This direction is both D-- and F--flat. The difference
with respect to UFB-2 is that the F--term is canceled by giving vev's
to the charged sleptons. The resulting potential reads
\begin{equation}
\label{eq:ufb3}
V_{UFB-3}=\left( m^2_{H_u} + m^2_{L_i} \right) v_u^2
+ \frac{|\mu|}{h_{e_j}} \left(m^2_{L_i}+ m^2_{L_j}+ m^2_{e_j}\right) v_u
-\frac{2 m^4_{L_i}}{g^2+g'^2}
\end{equation}
Since $m^2_{H_u}$ must be negative in order to break electroweak symmetry and
$m^2_{L_i}$ is small when one assumes universality of the soft terms, the
coefficient quadratic in $v_u$ is generally negative. As shown in
Refs.~\cite{casas:1997ze,Abel:1998ie} in the case of universal soft masses at
the GUT scale, the condition that the minimum along this UFB-3 direction is
not deeper than the physical minimum implies $m_0 > \alpha M_{1/2}$, where
$\alpha$ is a coefficient of ${\cal O}(1)$.
\subsection{CCB minima}
\label{sec:CCBMSSM}
\noindent
For the classical CCB minima, dangerous negative contributions to the
scalar potential are generated by cubic ($A$--type) soft supersymmetry
breaking terms. Therefore these directions cannot be F--flat, but they
are still D--flat. The traditional bound of Ref. \cite{Frere:1983ag}
corresponds to the case where
\begin{equation}
\label{eq:13}
\vev{Q^1}=\vev{H_u^2}=\vev{U}=v
\end{equation}
all other vev's vanishing. This choice cancels the D--term and the
potential reads,
\begin{equation}
\label{eq:14}
V_{CCB}=v^2 \left(3 h_u^2 v^2 + 2 A_u h_u v + m^2_{H_u}+\mu^2+m^2_Q
+m^2_U \right)
\end{equation}
In order to avoid a very deep color and charge breaking minimum we
must make sure that the parenthesis in Eq.~(\ref{eq:14}) never
vanishes, which happens if the corresponding second order equation
cannot have real solutions. This leads to the well known condition,
\begin{equation}
\label{eq:16}
|A_u|^2 < 3 \left( m^2_{H_u}+\mu^2+m^2_Q +m^2_U \right)
\end{equation}
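The condition Eq.~(\ref{eq:16}) is just the requirement of a negative discriminant for the quadratic (in $v$) bracket of Eq.~(\ref{eq:14}); a sympy sketch (ours, with the shorthand \texttt{M2} $= m^2_{H_u}+\mu^2+m^2_Q+m^2_U$):

```python
# The bracket of Eq. (eq:14), P(v) = 3 hu^2 v^2 + 2 Au hu v + M2,
# has no real zero iff its discriminant 4 hu^2 (Au^2 - 3 M2) is
# negative, i.e. iff Au^2 < 3 M2, reproducing Eq. (eq:16).
import sympy as sp

v, hu, Au, M2 = sp.symbols("v h_u A_u M2", real=True)

P = 3*hu**2*v**2 + 2*Au*hu*v + M2
disc = sp.discriminant(P, v)
print(sp.factor(disc))
```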
A more complete and general analysis of this and similarly dangerous
directions can be found in Ref. \cite{casas:1997ze}. Note again, that
the bound given in Eq.(\ref{eq:16}) for $A_u$ must be checked for all
$A$--terms in the general non--universal MSSM.
\section{UFB and CCB in the RMSSM}
The RMSSM is simply the bilinear R--parity violating model, defined by
the following superpotential~\cite{Diaz:1997xc}
\begin{eqnarray}
W&=&W_{MSSM} + \varepsilon_{ab}\epsilon_i\widehat L_i^a\widehat H_u^b
\label{eq:Wrpv}
\end{eqnarray}
and corresponding soft supersymmetry breaking terms,
\begin{eqnarray}
{V}_{SB}&=& V_{MSSM} +
B_i\epsilon_i\widetilde L^a_i H_u^b \,.
\label{eq:Vrpv}
\end{eqnarray}
It is therefore a rather mild extension of the MSSM. In the following
it will be sufficient to consider for simplicity only a one generation
version of the model~\footnote{We do not believe that this
simplification has any impact on the following discussion, since
neutrino oscillation data require $\frac{\epsilon}{\mu}\ll 1$ and
intergenerational effects between different families of leptons due
to BRpV terms scale as $(\frac{\epsilon}{\mu})^2$.}. We are mainly
interested in studying how the appearance of the new terms in the
superpotential (and in $V_{SB}$) changes the conclusions which hold
for the MSSM. Since the MSSM is the limit of the RMSSM when $\epsilon
\to 0$ we expect that the results of the MSSM will hold in that limit.
Note also that the structure of the trilinear terms is not modified,
so conclusions like those of Eq.~(\ref{eq:16}) are expected also to
hold in our case. Defining
\begin{equation}
\label{eq:17}
\vev{H_u^+}=0,\quad \vev{H_d^-}=v_{-},\quad \vev{H_d^0}=v_d,\quad
\vev{H_u^0}=v_u, \quad \vev{L^0}=v', \quad \vev{L^-}=v'_{-}
\end{equation}
one finds for the scalar potential
\begin{eqnarray}
\label{eq:15}
V&=& M^2_{H_u} v_u^2 + M^2_{H_d} \left(v_d^2 + v_{-}^2\right)
+ M^2_{L} \left( v'^2 + v'^2_{-}\right) - 2 B \mu\, v_d v_u + 2 B'
\epsilon\, v_u v' \nonumber\\[+1mm]
&&
+\epsilon^2 \left( v_u^2 + v'^2_{-} + v'^2 \right)
+\mu^2 \left( v_u^2 + v_d^2 +v^2_{-} \right)
-2 \mu \epsilon \left( v' v_d + v_{-} v'_{-} \right)\nonumber\\[+1mm]
&&
+\frac{g^2}{8} \left[
\left( v_u^2-v_d^2-v'^2 +v_{-}^2 +v'^2_{-} \right)^2
+ 4 \left(v_d v_{-} + v' v'_{-} \right)^2 \right] \nonumber\\[+1mm]
&&
+\frac{g'^2}{8} \left( v_u^2-v_d^2-v'^2 -v_{-}^2 -v'^2_{-} \right)^2
\end{eqnarray}
where $ B'$ characterizes the soft supersymmetry and R--parity
violating bilinear term. We note that it is not possible to have a
UFB direction with non--vanishing charged vev's in this potential,
because the D--terms can not be made to vanish for $v_{-}$ and
$v'_{-}$ different from zero. The minimization equations can be found
in the usual way by taking derivatives with respect to the fields
\begin{eqnarray}
\label{eq:18}
0&=&
\left[ 2 \left(M^2_{H_d} + \mu^2\right) -\frac{g^2}{2} \left(
v_u^2 -v_d^2 -v'^2 -v_{-}^2 +v'^2_{-} \right) -\frac{g'^2}{2} \left(
v_u^2 -v_d^2 -v'^2 -v_{-}^2 -v'^2_{-} \right)
\right] v_d \nonumber\\
&&
-\left( 2 \epsilon \mu -g^2 v_{-} v'_{-}\right) v'
-2 B \mu v_u\nonumber\\[+2mm]
0&\hskip-3mm=&\hskip-3mm
\left[ \frac{g^2}{2} \left(
v_u^2 -v_d^2 -v'^2 +v_{-}^2 +v'^2_{-} \right)
+\frac{g'^2}{2} \left(
v_u^2 -v_d^2 +v'^2 -v_{-}^2 -v'^2_{-} \right)
\right] v_u \nonumber\\
&&
+ 2 \left(M^2_{H_u} + \mu^2 + \epsilon^2 \right) v_u +
2 \left( B' \epsilon v' -B \mu v_d \right)\nonumber\\[+2mm]
0&\hskip-3mm=&\hskip-3mm
\left[ 2 \left(M^2_{L} + \epsilon^2\right) -\frac{g^2}{2} \left(
v_u^2 -v_d^2 -v'^2 +v_{-}^2 -v'^2_{-} \right) -\frac{g'^2}{2} \left(
v_u^2 -v_d^2 -v'^2 -v_{-}^2 -v'^2_{-} \right)
\right] v' \nonumber\\
&&
-\left( 2 \epsilon \mu -g^2 v_{-} v'_{-}\right) v_d
+2 B' \epsilon v_u\nonumber\\[+2mm]
0&\hskip-3mm=&\hskip-3mm
\left[ 2 \left(M^2_{H_d} + \mu^2\right) +\frac{g^2}{2} \left(
v_u^2 +v_d^2 -v'^2 +v_{-}^2 +v'^2_{-} \right) -\frac{g'^2}{2} \left(
v_u^2 -v_d^2 -v'^2 -v_{-}^2 -v'^2_{-} \right)
\right] v_{-} \nonumber\\
&&
-\left( 2 \epsilon \mu -g^2 v_d v' \right) v'_{-} \nonumber\\[+2mm]
0&\hskip-3mm=&\hskip-3mm
\left[ 2 \left(M^2_{L} + \epsilon^2\right) +\frac{g^2}{2} \left(
v_u^2 -v_d^2 +v'^2 +v_{-}^2 +v'^2_{-} \right) -\frac{g'^2}{2} \left(
v_u^2 -v_d^2 -v'^2 -v_{-}^2 -v'^2_{-} \right)
\right] v'_{-} \nonumber\\
&&
-\left( 2 \epsilon \mu -g^2 v_d v' \right) v_{-}
\end{eqnarray}
Since we are dealing with a set of five coupled equations, this system
is difficult to solve directly for the vev's. We can, however, use the
following trick: instead of solving for the five vev's, we solve the
equations for the three soft masses squared $M^2_{H_u}$, $M^2_{H_d}$
and $M^2_{L}$~\cite{Romao:1992vu} and for the charged vev's. Using
this approach we find two types of solutions.
Before discussing the general case, however, we first consider the
limit in which the RMSSM is treated as a perturbation of the MSSM. This
is a reasonable approach since the BRpV parameters must be small to
account for the neutrino data~\cite{hirsch:2000ef}. We can therefore
pose the following question. Suppose that in the limit $\epsilon \to
0$ the parameters are such that the MSSM has no UFB directions or CCB
minima, i.e. $v_u\not=0$, $v_d\not=0$ and $v'=v_{-}=v'_{-}=0$.
If we now turn on a small non-vanishing value of $\epsilon$,
what will be the corresponding minimum? To answer this
question in perturbation theory we write
\begin{equation}
\label{eq:23}
v_d=\sum_{i=0}^{\infty} v_d^{(i)}\, \epsilon^i, \
v_u=\sum_{i=0}^{\infty} v_u^{(i)}\, \epsilon^i, \
v'=\sum_{i=0}^{\infty} v'^{(i)}\, \epsilon^i, \
v_{-}=\sum_{i=0}^{\infty} v_{-}^{(i)}\, \epsilon^i, \
v'_{-}=\sum_{i=0}^{\infty} v'^{(i)}_{-}\, \epsilon^i
\end{equation}
Now we substitute these expansions back into the extremum equations,
Eq.~(\ref{eq:18}), and solve order by order in perturbation theory.
The result is as follows,
\begin{eqnarray}
\label{eq:24}
v_d&=&v_d^{(0)} +v_d^{(2)}\epsilon^2 +v_d^{(4)}\epsilon^4 + \cdots\nonumber\\
v_u&=&v_u^{(0)} +v_u^{(2)}\epsilon^2 +v_u^{(4)}\epsilon^4 + \cdots\nonumber\\
v'&=&v'^{(1)} \epsilon +v'^{(3)}\epsilon^3 +v'^{(5)}\epsilon^5 + \cdots\nonumber\\
v_{-}&=& 0\nonumber\\
v'_{-}&=& 0
\end{eqnarray}
where $v_u^{(0)}\, ,v_d^{(0)}$ are the MSSM values for $\epsilon=0$.
This is precisely the solution of type \textbf{I} that we will discuss
shortly. Note that if $\epsilon \not=0$ then also $v'\not=0$. In
fact,
\begin{equation}
\label{eq:25}
v'= \frac{\mu v_d^{(0)}
- B' v_u^{(0)}}{M^2_L- \frac{1}{4} (g^2 + g'^2)
\left(v_u^{(0)}{}^2 - v_d^{(0)}{}^2\right)}\, \epsilon +\cdots
\end{equation}
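As a quick numerical illustration of Eq.~(\ref{eq:25}), the following sketch evaluates the leading-order coefficient of $v'$ at a hypothetical parameter point. All input values below are illustrative assumptions, not fitted quantities.

```python
# Leading-order slepton vev, Eq. (25):
#   v' = [mu*vd0 - B'*vu0] / [M_L^2 - (g^2+g'^2)/4 * (vu0^2 - vd0^2)] * eps
# All numbers are hypothetical illustration inputs, not fitted values.
g2, gp2 = 0.42, 0.12          # g^2 and g'^2 (illustrative)
mu, Bp = 100.0, 100.0         # GeV (illustrative, cf. B = B' = mu = 100 GeV later)
vu0, vd0 = 170.0, 35.0        # GeV, MSSM vevs at epsilon = 0 (illustrative)
ML2 = 200.0**2                # GeV^2, soft slepton mass squared (illustrative)
eps = 0.1                     # GeV, small BRpV parameter

vprime = (mu * vd0 - Bp * vu0) / (ML2 - 0.25 * (g2 + gp2) * (vu0**2 - vd0**2)) * eps
# vprime is O(eps), i.e. tiny compared with vu0, vd0, as Eq. (24) requires.
```

The point of the sketch is only that $v'$ is generated at first order in $\epsilon$ and is correspondingly small.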
So we can formulate the following important result: {\it If we start
with the MSSM parameters such that in the limit $\epsilon
\rightarrow 0$ the minimum has no UFB or CCB problems, then by
turning on perturbatively a small value for $\epsilon$ we get a
correspondingly safe minimum of the RMSSM.} However, as we will now
discuss, in general there are two types of solutions for the minimum
equations.
\subsubsection*{Type I}
This solution corresponds to the case where the charged vev's vanish.
We are then in the situation usually studied \cite{Diaz:1997xc} in the
bilinear R--parity model. We get
\begin{eqnarray}
\label{eq:19}
M^2_{H_d}&=&
\epsilon\, \mu\, \frac{v'}{v_d} - \mu^2 + B\, \mu\, \frac{v_u}{v_d}
+ \frac{g^2 + g'^2}{4} \left(v_u^2 -v'^2 - v_d^2 \right)
\nonumber\\[+2mm]
M^2_{H_u}&=&
-\epsilon^2\, - \mu^2 + B\, \mu\, \frac{v_d}{v_u}
-B'\, \epsilon\, \frac{v'}{v_u}
- \frac{g^2 + g'^2}{4} \left(v_u^2 -v'^2 - v_d^2 \right)
\nonumber\\[+2mm]
M^2_{L}&=&
-\epsilon^2 +\epsilon \left(\mu \frac{v_d}{v'}- B' \frac{v_u}{v'}\right)
+ \frac{g^2 + g'^2}{4} \left(v_u^2 -v'^2 - v_d^2 \right)
\nonumber\\[+2mm]
v_{-}&=&0
\nonumber\\[+2mm]
v'_{-}&=&0
\end{eqnarray}
This corresponds to the neutral Higgs potential that we will discuss
further below. Here we just note that the value of the potential at
the minimum can be shown to be
\begin{equation}
\label{eq:26}
V_{BRpV}= -\frac{g^2+g'^2}{8}
\left(v_u^2-v_d^2-v'^2\right)^2.
\end{equation}
\subsubsection*{Type II}
In the general case we can find the solutions of the minimization
equations in the following way. We start by solving the first three
equations in Eq.~(\ref{eq:18}) for the soft masses. We get,
\begin{eqnarray}
\label{eq:20}
M^2_{H_d}&=& M^2_{H_d}(0) -\frac{1}{4} \left(g^2-g'^2\right)
\left(v_{-}^2+ v'^2_{-}\right) \nonumber\\
M^2_{H_u}&=& M^2_{H_u}(0) -\frac{1}{4} \left(g^2+g'^2\right)
\left(v_{-}^2+ v'^2_{-}\right)
-\frac{1}{2}\, g^2\, \frac{v'}{v_d}\, v'_{-} v_{-}\nonumber\\
M^2_{L}&=&M^2_{L}(0) +\frac{1}{4} \left(g^2-g'^2\right) v_{-}^2
-\frac{1}{4} \left(g^2+g'^2\right) v'^2_{-}
-\frac{1}{2}\, g^2\, \frac{v'}{v_d}\, v'_{-} v_{-}
\end{eqnarray}
where $M^2_{H_d}(0)$, $M^2_{H_u}(0)$ and $M^2_{L}(0)$ are the soft
masses when $v_{-}=v'_{-}=0$ and are given in
Eq.~(\ref{eq:19}). Now we substitute Eq.~(\ref{eq:20}) into the last
two equations in Eq.~(\ref{eq:18}) to obtain,
\begin{eqnarray}
\label{eq:35}
0&\hskip-3mm=\hskip-3mm&
-g^2\left( v'^2 v_{-} - v' v_d v'_{-} +
v_{-}^2 v'_{-} \frac{v'}{v_d} - v_{-} v'^2_{-} \right)
+ 2 \epsilon \mu \left( v_{-}\frac{v'}{v_d} - v'_{-} \right)
+ 2 B \mu v_{-} \frac{v_u}{v_d} + g^2 v_{-} v_u^2\nonumber\\[+2mm]
0&\hskip-3mm=\hskip-3mm&
g^2 \left( v'^2 v_{-}
- v' v_d v'_{-} + v_{-}^2 v'_{-} \frac{v'}{v_d}
- v_{-} v'^2_{-} \right) \frac{v_d}{v'}
- 2 \epsilon \mu \left( v_{-}\frac{v'}{v_d} - v'_{-} \right)
\frac{v_d}{v'} \\
&&
- 2 B' \epsilon v'_{-} \frac{v_u}{v'} + g^2 v'_{-} v_u^2\nonumber
\end{eqnarray}
Multiplying the second of the equations in Eq.~(\ref{eq:35}) by
$v'/v_d$ and adding them one obtains,
\begin{equation}
\label{eq:36}
v'_{-}=\kappa\, v_{-}
\end{equation}
where
\begin{equation}
\label{eq:37}
\kappa=\frac{2 B \mu + g^2 v_d v_u}{2 B' \epsilon - g^2 v' v_u}
\end{equation}
\vspace{1mm}
\noindent
Finally we use Eq.~(\ref{eq:36}) to reduce either one of Eq.~(\ref{eq:35})
to
\begin{equation}
\label{eq:38}
0=v_{-} \Big( D_2\, v_{-}^2 - D_0 \Big)
\end{equation}
where
\begin{eqnarray}
\label{eq:39}
D_2&=& g^2 \left( \kappa^2 - \frac{v'}{v_d}\, \kappa \right)\nonumber\\
D_0&=&g^2 \left( v'^2 -v_d v' \kappa -v_u^2 \right)
- \left( B v_u + \epsilon v'\right) \frac{2 \mu}{v_d}
+ 2 \epsilon \mu \kappa
\end{eqnarray}
Eq.~(\ref{eq:38}) has the trivial solution $v_{-}=0$ which corresponds
to type \textbf{I}, the BRpV solutions. However, if
\begin{equation}
\label{eq:22}
\frac{D_0}{D_2} > 0
\end{equation}
we have a new type of solutions for the minimization equations,
\begin{equation}
\label{eq:40}
v_{-}=\pm \sqrt{\frac{D_0}{D_2}}, \qquad v'_{-}= \kappa\, v_{-}
\end{equation}
As $D_{0}$ and $D_{2}$ do not in general have a well-defined sign,
such solutions can exist for some combinations of the
parameters. We will discuss this later in more detail.
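The chain Eqs.~(\ref{eq:37})--(\ref{eq:40}) can be evaluated directly at any parameter point. The sketch below does this for a hypothetical choice of parameters (with a deliberately exaggerated $\epsilon$); none of the numbers come from a fit, and the point merely illustrates the logic of testing the condition $D_0/D_2>0$ of Eq.~(\ref{eq:22}).

```python
# Hedged numerical sketch of the type-II solution, Eqs. (36)-(40).
import math

g2 = 0.42                       # g^2 (illustrative)
mu = B = Bp = 100.0             # GeV, cf. B = B' = mu = 100 GeV used in Fig. 2
eps = 5.0                       # GeV (exaggerated here for illustration)
vu, vd, vp = 160.0, 60.0, 40.0  # GeV, neutral vevs (illustrative)

kappa = (2 * B * mu + g2 * vd * vu) / (2 * Bp * eps - g2 * vp * vu)   # Eq. (37)
D2 = g2 * (kappa**2 - (vp / vd) * kappa)                               # Eq. (39)
D0 = (g2 * (vp**2 - vd * vp * kappa - vu**2)
      - (B * vu + eps * vp) * 2 * mu / vd
      + 2 * eps * mu * kappa)                                          # Eq. (39)

if D0 / D2 > 0:                 # Eq. (22): charge-breaking extremum exists
    v_minus = math.sqrt(D0 / D2)                                       # Eq. (40)
    vp_minus = kappa * v_minus
else:                           # only the trivial (type I) solution survives
    v_minus = vp_minus = 0.0
```

At this particular illustrative point $D_0/D_2<0$, so only the type I solution exists, in line with the statement that such solutions occur only for some parameter combinations.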
\subsection{UFB Directions}
We have seen before that for the Higgs potential of the RMSSM the UFB
directions can only arise when the charged Higgs vev's vanish,
otherwise it is not possible to cancel the quartic D--terms. The
neutral Higgs potential obtained from Eq.~(\ref{eq:15}) when
$v_{-}=0,v'_{-}=0$ is given by
\begin{eqnarray}
\label{eq:27}
V_{Neutral}&=& \left( M^2_{H_u} +\epsilon^2 +\mu^2 \right) v_u^2
+ \left(M^2_{H_d} +\mu^2 \right) v_d^2
+ \left(M^2_{L} + \epsilon^2 \right) v'^2 \nonumber\\[+1mm]
&&- 2 B \mu\, v_d v_u + 2 B' \epsilon\, v_u v'
-2 \mu \epsilon v' v_d
+\frac{g^2+g'^2}{8}
\left( v_u^2-v_d^2-v'^2\right)^2
\end{eqnarray}
From this equation we see that the D--term vanishes if we impose the
condition
\begin{equation}
\label{eq:28}
v_u^2=v_d^2+v'^2
\end{equation}
To implement this condition it is convenient to write
\begin{equation}
\label{eq:29}
v_d=v_u \cos\theta, \quad v'=v_u\sin\theta
\end{equation}
Then we get
\begin{equation}
\label{eq:30}
V_{Neutral}=B(\theta) v_u^2
\end{equation}
where
\begin{eqnarray}
\label{eq:31}
B(\theta)&=&
\left[\vb{14} M^2_{H_u} +\epsilon^2 +\mu^2
+ \left(M^2_{H_d} +\mu^2 \right) \cos^2\theta
+ \left(M^2_{L} + \epsilon^2 \right) \sin^2\theta \right.\nonumber\\[+1mm]
&&\left.\vb{14} \hskip 3mm
- 2 B \mu\, \cos\theta + 2 B' \epsilon\, \sin\theta
-2 \mu \epsilon \sin\theta \cos\theta \right]
\end{eqnarray}
Therefore the condition for avoiding a UFB direction is that,
\begin{equation}
\label{eq:32}
B(\theta_{\min}) > 0
\end{equation}
where $\theta_{\min}$ is the value of $\theta$ that corresponds to the
minimum of $B(\theta)$. Now consider Eq. (\ref{eq:31}) in the limit
$\epsilon \to 0$ and take the derivative,
\begin{equation}
\label{eq:Btheta}
\frac{dB}{d\theta}= 2\sin\theta \left[-( M^2_{H_d} +\mu^2-M^2_{L})\cos\theta +
B \mu\ \right]
\end{equation}
The right hand side vanishes when $\theta=0$ and when $\cos\theta =
\frac{B\mu}{M^2_{H_d} +\mu^2 - M^2_{L}}$. These two solutions
correspond to the UFB-1 and UFB-2 directions given in
Eqs.~(\ref{eq:11}) and (\ref{eq:ufb2}), respectively, when
$\epsilon=0$.
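The derivative in Eq.~(\ref{eq:Btheta}) can be verified symbolically. The sketch below differentiates $B(\theta)$ of Eq.~(\ref{eq:31}) at $\epsilon=0$, dropping the constant $M^2_{H_u}+\mu^2$ piece (which does not contribute to the derivative), and checks that the quoted expression is reproduced.

```python
# Symbolic check of Eq. (eq:Btheta) in the limit epsilon -> 0.
import sympy as sp

th, mHd2, mL2, mu, B = sp.symbols('theta mHd2 mL2 mu B', real=True)

# B(theta) at epsilon = 0, up to the theta-independent M_Hu^2 + mu^2 term.
Btheta = (mHd2 + mu**2) * sp.cos(th)**2 + mL2 * sp.sin(th)**2 - 2 * B * mu * sp.cos(th)

# Claimed result: dB/dtheta = 2 sin(theta) [-(M_Hd^2 + mu^2 - M_L^2) cos(theta) + B mu]
claimed = 2 * sp.sin(th) * (-(mHd2 + mu**2 - mL2) * sp.cos(th) + B * mu)

assert sp.simplify(sp.diff(Btheta, th) - claimed) == 0
```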
For $\epsilon \not=0$ it does not seem possible to obtain an analytical
expression for $\theta_{\min}$. However, for a given set of parameters
it is always easy to verify whether Eq.~(\ref{eq:32}) holds for
$\theta \in [0, 2 \pi]$. It is also clear from Eq.~(\ref{eq:31}) that
the MSSM condition, Eq.~(\ref{eq:11}), is not enough to ensure that we
are free from UFB directions. This fact is best illustrated by
figure (\ref{fig:1}), which shows a typical example.
\begin{figure}[ht]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.45\textwidth,clip]{Btheta-fig3a.eps}&
\includegraphics[width=0.45\textwidth,clip]{Btheta-fig3b.eps}
\end{tabular}
\caption{$B(\theta)$ as a function of $\theta$ for an example where
$B(\theta_{min})<0$ but $B(0)>0$. The right panel is an enlarged
view of the left one close to the zeros of $B(\theta)$.}
\label{fig:1}
\end{figure}
One can see clearly that a large value of $B(0)$ is not
enough to decide the sign of $B(\theta_{\min})$. However, it is
easy to check numerically whether $B(\theta_{\min})>0$ or not.
Therefore, although we lack a simple analytical formula, the criterion
for avoiding UFB directions is easily implemented.
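A minimal numerical implementation of this check could look as follows: scan $B(\theta)$ of Eq.~(\ref{eq:31}) over $\theta\in[0,2\pi]$ and test Eq.~(\ref{eq:32}). All soft masses and parameters below are hypothetical illustration values.

```python
# Numerical scan of B(theta), Eq. (31), over theta in [0, 2*pi].
import numpy as np

mHu2, mHd2, mL2 = -(80.0**2), 150.0**2, 120.0**2   # GeV^2 (illustrative)
mu, B, Bp, eps = 100.0, 100.0, 100.0, 0.1          # GeV (illustrative)

theta = np.linspace(0.0, 2.0 * np.pi, 20001)
Bth = (mHu2 + eps**2 + mu**2
       + (mHd2 + mu**2) * np.cos(theta)**2
       + (mL2 + eps**2) * np.sin(theta)**2
       - 2 * B * mu * np.cos(theta)
       + 2 * Bp * eps * np.sin(theta)
       - 2 * mu * eps * np.sin(theta) * np.cos(theta))

theta_min = theta[np.argmin(Bth)]
safe = Bth.min() > 0.0          # Eq. (32): no UFB direction along Eq. (28)
```

For these illustrative inputs the minimum of $B(\theta)$ is positive, so this parameter point would be free of the UFB direction of Eq.~(\ref{eq:28}).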
Finally we comment briefly on the direction UFB-3. It can be easily
shown that at large values of the field the potential in direction
UFB-3 is given as
\begin{equation}
\label{eq:ufb3mod}
V_{UFB-3}=\left( m^2_{H_u} + m^2_{L_i} + \epsilon B' \right) v_u^2
+ \cdots
\end{equation}
where the dots stand for irrelevant terms. Since in our notation
$\epsilon B' < 0$ this leads, in principle, to a slightly more
stringent requirement than the one corresponding to the R--parity
conserving MSSM. However, since $\frac{\epsilon}{\mu} \sim {\cal
O}(10^{-(3-4)})$ is required by neutrino oscillation data
\cite{hirsch:2000ef}, this modification is numerically irrelevant.
This is in agreement with the argument presented in
Ref.~\cite{Abel:1998ie}.
\subsection{Nonzero charged Higgs and Slepton Vev's}
We now turn to the solutions of type II. We have already seen in Eqs.
(\ref{eq:38}) - (\ref{eq:40}) that there are potentially dangerous
solutions for the Higgs potential with nonzero vev's for the charged
scalars. These solutions, if they exist, would provide new CCB
solutions different from those already present in the MSSM, as
explained above. As can be seen from Eq.~(\ref{eq:22}) such solutions
can exist if the parameters satisfy the relation $D_0/D_2 >0$, where
the $D_i$ are given in Eq.~(\ref{eq:39}).
Since it does not seem possible to give a strict analytic criterion
relating the condition $D_0/D_2 <0$ (guaranteeing the absence of
unwanted minima) to the parameters of the potential, we have resorted
to a numerical scan of the parameter space. Our approach to find the
minima of the potential was as follows. We always started with a
random set of parameters with zero charged vev's and subject to the
requirement that,
\begin{equation}
\label{eq:41}
v_u^2+v_d^2+v'^2=v^2, \qquad v=\left( 2 \sqrt{2} G_F\right)^{-1/2}= 174.1\,
\mathrm{GeV}
\end{equation}
\noindent Note that with this procedure we should always have,
\begin{equation}
\label{eq:42}
|\eta| =\frac{|v'|}{v} < 1.
\end{equation}
We then search for the global minimum numerically. If we find a minimum
deeper than the realistic minimum but which breaks charge, this part of
parameter space should be discarded.
Two examples are shown in Fig. (\ref{fig:2}).
\begin{figure}[ht]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.45\textwidth,clip]{tb105.eps}&
\includegraphics[width=0.45\textwidth,clip]{tb12.eps}
\end{tabular}
\caption{Range of RMSSM parameters where
nonzero charged vev's for the Higgs and slepton fields are
favoured over the realistic minimum for two examples of
$\tan\beta$, left $\tan\beta=1.05$, right $\tan\beta=1.2$. Here we
fix for convenience $B=B'=\mu=$ 100 GeV. For a discussion see
text.}
\label{fig:2}
\end{figure}
The results shown in Fig. (\ref{fig:2}) can be understood
qualitatively as follows. Starting with the definitions Eqs
(\ref{eq:37}) and (\ref{eq:39}) and taking into account the smallness
of $\frac{\epsilon}{\mu}$ one can show that in the limit $\epsilon \to
0$ we always have $D_2 >0$. On the other hand the condition $D_0>0$
requires
\begin{equation}
\label{eq:condd}
v'^2 > v^2\frac{\tan^2\beta-1}{1+\tan^2\beta} + \frac{2 B \mu}{g}
\frac{\tan^2\beta-1}{\tan\beta}.
\end{equation}
Note that this condition is not strictly valid for $\tan\beta = 1$,
because in this limit we can no longer neglect the terms
proportional to $\epsilon$ in the definitions of $D_0$ and $D_2$. Eq.
(\ref{eq:condd}) shows that charge breaking minima in the limit of
small $\epsilon$ require that $v'$ take up a sizeable
fraction of $v$. This trend is clearly visible in Fig.
(\ref{fig:2}). The figure also illustrates how these solutions
disappear very quickly as $\tan\beta$ increases above $1$.
Although we find it amusing that such solutions exist, we wish to
stress that consistency with neutrino data requires
$\frac{\epsilon}{\mu} \sim {\cal O}(10^{-(3-4)})$ and $\frac{v'}{v}
\sim {\cal O}(10^{-(3-4)})$. We therefore conclude that the RMSSM is
automatically safe from these unwanted minima in those ``physical''
parts of parameter space which account for the neutrino oscillation
data.
\section{Conclusions}
We have studied charge breaking minima and unbounded from below
directions within bilinear R--parity breaking supersymmetry. Such a
``reference model'' is nothing but the simplest broken R--parity
version of the Minimal Supersymmetric Standard Model. We have first
generalized some results obtained previously in the R--parity
conserving MSSM. Subsequently we discussed new ways to generate a
nonzero vacuum expectation value of the charged Higgs and slepton
fields. However, such unwanted solutions occur only in regions of
parameter space which are now excluded by neutrino oscillation data.
In summary it can be said that, given the data on neutrino masses,
bilinear R--parity violation can be understood as a small perturbation
of the MSSM. From the point of view of CCB and UFB directions the
RMSSM is as robust as the R--parity--conserving MSSM: it is equally
safe from unwanted minima in the same portions of parameter space.
\section{Acknowledgments}
This work was supported by Spanish grant BFM2002-00345, by the
European Commission Human Potential Program RTN network
HPRN-CT-2000-00148 and by the European Science Foundation network
grant N.86. M.H. is supported by a MCyT Ramon y Cajal contract.
JCR was supported by the Portuguese
\textit{Funda\c{c}\~ao para a Ci\^encia e a Tecnologia}
under the contract CFIF-Plurianual and grant POCTI/FNU/4989/2002.
We thank Werner Porod for useful discussions.
\section{Introduction}
Arp220 is the nearest ultraluminous infrared galaxy (ULIG), at $z =
0.018$ (corresponding to $d = 74$ Mpc for $H_{0} = 75$km~s$^{-1}$~Mpc$^{-1}$), with
$> 95$ per cent of its total bolometric luminosity emitted at
infrared/submillimetre wavelengths [$L_{\rm ir}(8-1000\mu m) \simeq
1.3\times 10^{12}${\it L}$_\odot$] (Soifer et al. 1984; Sanders et al 1988, 2003).
It is an advanced merger system involving two relatively large spiral
disks, as evidenced by the detection in the optical of two large,
faint, crossed tidal tails (Joseph \& Wright 1985; Sanders et al
1988), with two nuclei separated by $\approx 1$ arcsec ($\approx 300$
pc), as determined from images in the radio (e.g. Becklin \&
Wynn-Williams 1987; Norris 1988; Baan \& Haschick 1995) and
near-infrared (e.g., Graham et al 1990; Scoville et al 2000) bands.
High resolution CO observations imply an extreme nuclear molecular gas
concentration ($\sim 3 \times 10^9${\it M}$_\odot$\ at radii $< 300$pc
corresponding to a mean H$_2$ density of $10^4$-$10^5$ cm$^{-3}$) (e.g.,
Scoville et al 1986; Solomon, Radford, Downes 1990; Scoville et al
1998; Sakamoto et al 1999). This is probably a consequence of disk
gas being funnelled into the central regions of both galaxies during
the merger (e.g., Barnes \& Hernquist 1992). There is evidence for a
galactic-scale outflow, as H$\alpha $ and soft X-ray nebulae with a
size of $\sim 30$ kpc (Armus, Heckman \& Miley 1990; Heckman et al
1996) suggest, which is thought to be driven by a super-starburst (but
see Colina, Arribas \& Clements 2004 for a merger shock
interpretation). The optical spectrum is LINER-like (Veilleux, Kim \&
Sanders 1999), most likely due to shock-heated interstellar gas by the
galactic wind (e.g., Taniguchi et al 1999), while the double nucleus
region is heavily obscured by dust.
Arp220 is often used as a nearby template of luminous star-forming
galaxies at high redshift; verifying the true dominant power
source in Arp220 therefore continues to be of great importance. There is still
no convincing direct evidence, from radio to hard X-ray wavelengths,
for an active nucleus in Arp220. The general assumption has been that
the bolometric luminosity of Arp220 is mostly powered by a starburst,
as perhaps best represented by a view based on ISO mid-infrared
spectral characteristics (e.g. the PAH feature at 7.7 $\mu $m --
Genzel et al 1998; Sturm et al 1996). However, the interpretation of
the ISO data for ULIGs, and Arp220 in particular, has recently been
revised by Spoon et al (2004), who took ice absorption into account
and concluded that the PAH component is rather weak with only moderate
obscuration. They inferred that a major power source, whether a super
star cluster or an AGN, must be deeply enshrouded in dust, similar to
an interpretation previously proposed by Dudley \& Wynn-Williams
(1997). A few authors have suggested that a hidden QSO could be a
major power source for Arp220 (e.g., Sanders et al 1988; Haas et al
2001). If an energetically significant AGN is present in Arp220, it
must be Compton-thick (e.g., Rieke 1988). Hard X-rays constrain the
lower bound of the absorbing column density to be \nH $\sim 10^{25}$
cm$^{-2}$\ (Iwasawa et al 2001). The lack of cold reflection
characterised by a 6.4 keV Fe K line means that the covering factor of
the obscuring matter has to be close to unity.
Recent studies of nearby elliptical galaxies suggest that all large
spheroids contain a supermassive black hole (SMBH: e.g., Richstone et
al 1998). Given that the host galaxy of Arp220 already appears to
have relaxed into an elliptical-like $r^{1/4}$-law profile (e.g.
Wright et al 1990), it seems reasonable to speculate that Arp220 may
also contain a SMBH, perhaps even two, since the two observed nuclei
have yet to merge. Two SMBH have recently been discovered in the
luminous infrared galaxy NGC6240 (Komossa et al 2003), which is also
an advanced merger system similar to Arp220. Given the detection of an
enormous nuclear concentration of molecular gas in Arp220, there is
plenty of material in the nuclear region to feed, and therefore build,
a SMBH, while simultaneously suppressing the amount of radiation
emitted directly from the nuclear source. This is not an implausible
scenario, and a sensitive hard X-ray observation would have a better
chance to catch the faintest sign of such a hidden AGN than
observations at longer wavelengths, as sensitive hard X-ray
observations of NGC4945 (Iwasawa et al 1993; Done et al 1996;
Guainazzi et al 2001) and NGC6240 (Iwasawa \& Comastri 1998; Vignati
et al 1999; Ikebe et al 2000) have shown.
We present here our analysis and interpretations of the public archive
XMM-Newton data on Arp220, with which a strong Fe K line is detected
for the first time. This will hopefully be followed by a longer observation
that should be scheduled sometime during AO-3.
\section{Observations and data reduction}
Arp220 was observed with XMM-Newton on 2002 August 11 and 2003
January 15. The two XMM-Newton observations were carried out in the
Full Window mode and, when combined together, provide a useful
exposure time of 19{\thinspace}ks. We only use the EPIC pn data because of
its high sensitivity in the Fe K band. The data analysis presented
here was performed using the latest version of the standard analysis
package, SAS 6.0. Single and double events from the detector were
selected and the data reduction was carried out following the standard
procedure.
\section{Results}
\subsection{The XMM-Newton EPIC spectrum}
\begin{figure}
\centerline{\includegraphics[width=0.37\textwidth,angle=270,
keepaspectratio='true']{hbsp.ps}}
\caption{
The 2--10 keV band XMM-Newton EPIC pn spectrum of Arp220. }
\end{figure}
The hard X-ray ($\geq 3$ keV) emission in Arp220 is only resolved at
the resolution of the Chandra X-ray Observatory (Clements et al 2002;
and see Section 3.2), and is point-like when viewed with the
XMM-Newton telescope, although a much larger extension of soft X-ray
emission is clearly resolved. As we focus on the hard X-ray emission,
the spectral data are taken from a circular region of a
$25$ arcsec radius, enough to collect most of the hard X-ray
photons. The detailed location of the hard X-ray source is shown in
Section 3.2.
The EPIC pn spectrum is shown in Fig. 1. The soft X-ray emission below
$\sim 2 $keV is mostly due to the extended nebula, within which a wide
range of spectral variations are present between regions, as revealed
by the Chandra data (McDowell et al 2003).
The data analysis presented here is restricted to
the energy range 2.5--10 keV to focus on the hard X-ray emission around
the double nucleus.
A prominent Fe K$\alpha $ line is detected at $3.5\sigma $
significance (Fig. 1). The centroid of the feature is found at
$6.72^{+0.05}_{-0.05}$ keV (in the rest frame; the errors represent
the 90 per cent confidence region for one parameter of interest),
indicating that FeXXV is the major line component. Fitting a single
gaussian gives a line flux of $1.7^{+0.8}_{-0.8}\times 10^{-6}$
ph\thinspace s$^{-1}$\thinspace cm$^{-2}$. The corresponding equivalent width (EW) against the
underlying continuum is $1.9\pm 0.9$ keV. The inclusion of the
gaussian line in the fitted model reduces $\chi^2$ by $\simeq 10$ (23
degrees of freedom). The single gaussian fit does not require
statistically significant broadening, but a multiple line complex in
the 6.5--7 keV range would be more likely.
There are suggestions of other spectral features at 3.3, 3.9, 4.2 and
5.5 keV with their detection being at $2.6\sigma $ or lower. They all
have possible identifications with a highly ionized plasma. Because of
their low significance, we only give results of spectral fitting with
narrow gaussians for those emission features in Table 1, and will not
use them as critical material for further discussion. However, we
point out that they would be an important key to constraining the origin
of the hard X-ray emission, once their detections are confirmed by
better quality data. In particular, radiative recombination continua are
spectral features unique to photoionized gas (Section 4.5).
\begin{table}
\begin{center}
\caption{ Emission line features in the 3--10 keV band. Results are
obtained from fitting each spectral feature with a narrow
gaussian, and the centroid energy is corrected for the galaxy
redshift ($z=0.018$). The fit with these gaussian lines gives
$\chi^2_{\nu}=1.0$ for 17 degrees of freedom. The errors quoted
are the 90 per cent confidence region for one parameter of
interest. RRC stands for radiative recombination continuum, which
is only relevant for photoionized gas. $\dagger $The line flux
limits are obtained by fitting with the energies
fixed for ArXVIII and CaXX, respectively.}
\begin{tabular}{cccc}
$E$ & $I$ & $EW$ & ID \\
keV & $10^{-7}$ph\thinspace s$^{-1}$\thinspace cm$^{-2}$ & keV & \\[5pt]
$3.32\dagger $ & $3.7^{+5.7}_{-3.7}$ & 0.20 & ArXVIII, SXIV RRC \\
$3.87^{+0.06}_{-0.05}$ & $8.9^{+6.7}_{-5.6}$ & 0.55 & CaXIX \\
$4.11\dagger $ & $2.1^{+9.5}_{-2.1}$ & 0.17 & CaXX \\
$5.50^{+0.10}_{-0.26}$ & $4.7^{+5.5}_{-4.1}$ & 0.45 & CaXX RRC \\
$6.72^{+0.10}_{-0.12}$ & $17.4^{+8.1}_{-7.8}$ & 1.85 & FeXX-FeXXVI \\
\end{tabular}
\end{center}
\end{table}
The 2.5--10 keV continuum is very hard: fitting a power-law modified
only by Galactic absorption (\nH $= 4\times 10^{20}$cm$^{-2}$) gives
$\Gamma = 1.2^{+0.4}_{-0.7}$. The total 2--10 keV flux is $1.1\times
10^{-13}$erg~cm$^{-2}$~s$^{-1}$. This value is in agreement with the value
obtained from the Chandra observation (Clements et al 2002), but is
smaller than the BeppoSAX MECS value $1.8\times 10^{-11}$erg~cm$^{-2}$~s$^{-1}$.
As pointed out by Clements et al (2002), the MECS aperture contains
two hard X-ray sources to the south, and the discrepancy with BeppoSAX
is probably due to contamination from these sources. The
corresponding 2--10 keV luminosity is $7\times 10^{40}$erg~s$^{-1}$, and
$L_{\rm 2-10keV}/L_{\rm ir}\simeq 1.5\times 10^{-5}$.
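The quoted luminosity and luminosity ratio follow from simple arithmetic; the sketch below converts the observed 2--10 keV flux to a luminosity at $d=74$ Mpc and compares it with $L_{\rm ir}$.

```python
# Arithmetic check of the quoted numbers: L_X = 4*pi*d^2 * F at d = 74 Mpc.
import math

MPC_CM = 3.0857e24            # cm per Mpc
L_SUN = 3.828e33              # erg/s, solar bolometric luminosity

d = 74.0 * MPC_CM             # cm
flux = 1.1e-13                # erg/cm^2/s, total 2-10 keV flux

L_x = 4.0 * math.pi * d**2 * flux   # ~7e40 erg/s, as quoted
L_ir = 1.3e12 * L_SUN               # erg/s, infrared luminosity of Arp220
ratio = L_x / L_ir                  # ~1.5e-5, as quoted
```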
The possibility that these contaminating sources in the XMM-Newton
beam produce the iron line emission can be ruled out. The nucleus of
Arp220 is the only significant source within the beam in the narrow
band (6--7 keV) Chandra image (see Section 3.2). Given the large EW of
the line (which means that the iron line emission dominates the 6--7 keV
band), the Arp220 nucleus is certainly the iron line source.
\subsection{Chandra imaging}
\begin{figure}
\centerline{\includegraphics[width=0.37\textwidth,angle=270,
keepaspectratio='true']{ki2004fig2.ps}}
\caption{
The Chandra 3--7 keV image of the double nucleus region of Arp220
overlaid by the HST NICMOS 1.6 $\mu $m contour map (Scoville et al
2000). The positions of the double radio nuclei are indicated with
plus symbols in green.}
\end{figure}
As reported previously, the hard X-ray emission is slightly extended
but concentrated around the double nucleus (Clements et al 2002). Most
of the 3--7 keV emission comes from radii within 0.7 kpc and very
little originates beyond 1 kpc. Fig. 2 shows the Chandra 3--7 keV band
image with the HST NICMOS 1.6$\mu $m image superposed. The astrometry
of the Chandra image has been corrected using the latest attitude
correction file, and the HST image has been registered in the manner
discussed in Scoville et al (1998). In this updated registration, the
western nucleus (radio and near-infrared) now coincides with the 3--7
keV peak. Note the positions of the Eastern (radio) nucleus and the
1.6 $\mu$m peak, the displacement of which is presumably due to strong
obscuration (see Scoville et al 2000 for details on the near-infrared
extinction in this region). The centroid of the 6--7 keV band
emission, for which only 12 counts are detected and which is
presumably mostly due to Fe K line emission (see also Clements et al
2002), may be slightly displaced from the Western nucleus to the East,
but this is not conclusive. The Chandra spectrum of the hard X-ray
source is consistent with the spectral model with gaussian lines
fitted to the XMM-Newton spectrum in the 2--5 keV range. At higher
energies, a comparison becomes difficult as the efficiency of
Chandra declines sharply.
\section{The origin of the hard X-ray emission}
\subsection{X-ray binary emission}
It is generally assumed that the 2--10 keV emission in starburst
galaxies is dominated by integrated emission from X-ray binaries, and
its luminosity has been found to be correlated with star formation
rate indicators, e.g., infrared luminosity (Nandra et al 2002;
Ranalli, Comastri \& Setti 2003; Grimm et al 2003). There may be
non-thermal inverse Compton scattered emission of infrared photons by
relativistic electrons (e.g., Moran \& Lehnert 1997), but its
contribution is probably minor. In Arp 220, the hard X-ray emission
is resolved with Chandra and its extension coincides roughly with the
dense molecular disk (e.g., Sakamoto et al 1999) in shape. However,
the detection of the strong Fe K line readily rules out X-ray binaries
as a major source of the 2--10 keV emission because of the spectral
incompatibility.
If the infrared luminosity from Arp220 is entirely due to a starburst,
the implied star formation rate is $\approx 200${\it M}$_\odot$\ yr$^{-1}$
(Kennicutt 1998). With this high star formation rate, emission from
X-ray binaries -- which is estimated by assuming only high-mass X-ray
binaries\footnote{In a young starburst system, as low mass X-ray
binaries are not yet formed, it is appropriate to consider only
high-mass X-ray binaries (Persic \& Rephaeli 2002).}, and by
following Franceschini et al (2003; see also Persic \& Rephaeli 2002;
Persic et al 2004) -- should dominate, or in fact exceed, the observed
luminosity by more than one order of magnitude, even if absorption of
the order of $10^{22}$-$10^{23}$cm$^{-2}$\ in \nH\ is taken into account.
The lack of X-ray binary emission appears to be a general problem with
a starburst interpretation for Arp220, given the good correlation
between the 2--10 keV X-ray binary emission and the infrared
luminosity claimed for star forming galaxies (e.g., Ranalli, Comastri
\& Setti 2003; Grimm et al 2003; Persic et al 2004).
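The order-of-magnitude argument can be reproduced with a simple linear scaling between X-ray binary luminosity and star formation rate. The normalization $L_X\approx 5\times10^{39}\,\mathrm{SFR}$ used below is our rough assumption (of the order quoted in the correlation studies cited above), not the Franceschini et al calibration used in the text.

```python
# Order-of-magnitude version of the X-ray binary deficit argument.
# The 5e39 normalization is an assumed rough scaling, not a fitted relation.
sfr = 200.0                     # Msun/yr, implied if L_ir is all starburst
L_x_expected = 5.0e39 * sfr     # erg/s, expected from high-mass X-ray binaries
L_x_observed = 7.0e40           # erg/s, observed 2-10 keV luminosity

deficit = L_x_expected / L_x_observed  # > 10: more than one order of magnitude
```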
In the following subsections, we discuss two possible origins of the Fe K
line in the context of a starburst, based on the thermal emission
model for the observed X-ray spectrum. Both explanations require a
star formation rate as high as that mentioned above; therefore the lack of
X-ray binary emission remains a problem.
\subsection{Thermal emission model}
With the Fe K line centred at 6.7 keV, it seems tempting to interpret the hard
X-ray emission as thermal emission from hot gas associated with a
starburst. Fitting the 2.5--10 keV data with the collisionally ionized
plasma spectra computed by the MEKAL code (e.g., Kaastra 1992) gives
the following results: The temperature of the gas implied from the fit is
$7.4^{+5.4}_{-3.1}$ keV. The absorption column density is not well
constrained, but the likely value for the best-fit temperature is
$3\times 10^{22}$cm$^{-2}$. The metallicity, which is primarily
determined by the Fe K line strength, is found to be
$2.2^{+3.2}_{-1.4}$ solar (the solar abundance table by Anders \&
Grevesse 1989 is used here).
The thermal emission model accounts for the continuum shape and the Fe
K feature (with $\chi^2=21.5$ for 17 degrees of freedom), but would
leave the possible lower energy features unexplained. The
interstellar gas in a starburst region is expected to be enriched by
core-collapse supernovae (e.g., Type II SNe), which produce
substantial $\alpha$-elements but a relatively small amount of iron.
The twice-solar Fe metallicity required to explain the Fe K feature is
already large, although it may not be surprising for a region with
intense star formation (e.g., Fabbiano et al 2004). A non-solar
abundance ratio between $\alpha $-elements and Fe, as expected for
chemical enrichment by Type II SNe, might be required if the
detection of the high ionization Ar and Ca features is confirmed by
higher quality data. If confirmed, the CaXIX He-$\alpha$ at 3.9 keV
(Table 1) would not be compatible with the single temperature thermal model,
because with a temperature of 7 keV, calcium ions are mostly CaXX.
\subsection{A hot bubble in a starburst region}
According to the starburst-driven superwind model, a hot bubble of
internally shocked wind material with a temperature of several keV
will form in the starburst region (Chevalier \& Clegg 1985), which
eventually breaks away to drive a galactic-scale outflow (e.g.,
Tomisaka \& Ikeuchi 1988). Although such a hot bubble is expected not
to be radiative because of its rarefied interior, and is indeed rarely
observed directly in starburst galaxies (e.g., Suchkov et al 1994;
Strickland \& Stevens 2000; Hoopes et al 2003)\footnote{The presence
of such hot gas has been reported in M82 by Griffiths et al (2000)
and in NGC253 by Pietsch et al (2001), but for NGC253,
photoionization by AGN is proposed by Weaver et al (2002).}, the
temperature of $kT\approx 7$ keV implied from the above thermal
emission model matches the prediction. In fact, the observed
temperature and luminosity agree roughly with the predicted values
from the wind solution by Chevalier \& Clegg (1985) with the mass and
energy input rates estimated by Starburst99 (Leitherer et al 1999) for
the star formation rate of 200 {\it M}$_\odot$\ yr$^{-1}$ (see below) and a high
thermalization efficiency ($\simeq 1$).
As the Chandra image shows, most of the hard X-ray emission in Arp220
comes from within $\approx 0.7$ kpc of the double nucleus. Assuming the thermal
plasma is uniformly distributed within a sphere with a radius of 0.7
kpc, the mean gas density and the total gas mass are derived to be
$n_{\rm gas}\simeq 0.6$ cm$^{-3}$\ and $M_{\rm gas}\simeq 2\times
10^7${\it M}$_\odot$, respectively. The thermal energy contained in the volume is
estimated to be $\approx 3.5\times 10^{56}$ erg. The bolometric
luminosity of the gas of $\approx 1.6\times 10^{41}$erg~s$^{-1}$\ means that
the cooling time of the gas is $\sim 7\times 10^7$ yr. This value is
insensitive to the non-solar abundance ratio, because, at a
temperature of $kT\sim 7$ keV, the cooling is dominated by
bremsstrahlung continuum rather than line emission.
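These figures can be checked with a back-of-envelope calculation; the sketch below (with rounded CGS constants) recomputes the thermal energy and cooling time from the quoted radius, density, temperature and luminosity, and reproduces the quoted values to within rounding.

```python
# Hedged back-of-envelope check of the hot-bubble numbers quoted in the
# text: radius 0.7 kpc, n_gas ~ 0.6 cm^-3, kT ~ 7 keV, L_bol ~ 1.6e41 erg/s.
import math

KPC_CM = 3.086e21          # cm per kpc
KEV_ERG = 1.602e-9         # erg per keV
YR_S = 3.156e7             # seconds per year

r = 0.7 * KPC_CM           # bubble radius, cm
volume = (4.0 / 3.0) * math.pi * r**3
n_gas = 0.6                # particle density, cm^-3 (from the text)
kT = 7.0 * KEV_ERG         # gas temperature in erg

# Thermal energy E = (3/2) n kT V; the text quotes ~3.5e56 erg.
E_thermal = 1.5 * n_gas * kT * volume

# Cooling time = thermal energy / bolometric luminosity (~1.6e41 erg/s).
L_bol = 1.6e41
t_cool_yr = E_thermal / L_bol / YR_S

print(f"E_thermal ~ {E_thermal:.1e} erg")   # of order 4e56 erg
print(f"t_cool ~ {t_cool_yr:.1e} yr")       # of order 7e7-8e7 yr
```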
With the star formation rate of 200 {\it M}$_\odot$\ yr$^{-1}$, the mass injection
rate from OB stars is $\sim 30${\it M}$_\odot$\ yr$^{-1}$, assuming a Salpeter IMF
and a mass loss rate of $10^{-5}$ {\it M}$_\odot$\ yr$^{-1}$, typical for each massive star
(see also Elson, Fall \& Freeman 1989; Heckman et al 1990).
Starburst99 also gives a mass injection rate of $\sim 50${\it M}$_\odot$\
yr$^{-1}$ both from stellar winds and supernovae, assuming continuous
star formation and a starburst age $>10^7$ yr. The above IMF is
assumed to have an upper mass cut-off at 100 {\it M}$_\odot$. If the IMF is
truncated at 30 {\it M}$_\odot$, the estimate is reduced by a factor of 2 or less.
Even so, given the cooling time, the expected mass input is sufficient
to supply and maintain the hot gas.
At a temperature of 7 keV, the sound speed is $c_{\rm s}\approx 800$
km s$^{-1}$. So, the sound-crossing time over the radius of the hard
X-ray emitting region (0.7 kpc), is $8\times 10^5$ yr. Since the sound
crossing time is much shorter than the cooling time, the hot gas,
which will not be static because of its high pressure ($\sim 10^{-8}$
dyne cm$^{-2}$), will drive a superwind.
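The sound-crossing estimate can be reproduced the same way; the sketch below assumes the simple convention $c_{\rm s}\approx\sqrt{kT/m_p}$, which recovers the quoted 800 km s$^{-1}$ (an adiabatic sound speed with mean molecular weight $\mu\approx 0.6$ would be a factor $\sim$1.3 higher).

```python
# Sound speed and sound-crossing time over the 0.7 kpc hard X-ray region.
import math

KEV_ERG = 1.602e-9
M_P = 1.673e-24            # proton mass, g
KPC_CM = 3.086e21
YR_S = 3.156e7

kT = 7.0 * KEV_ERG
c_s = math.sqrt(kT / M_P)              # cm/s; assumed convention, see above
t_cross = 0.7 * KPC_CM / c_s / YR_S    # crossing time over 0.7 kpc, yr

print(f"c_s ~ {c_s / 1e5:.0f} km/s")   # ~820 km/s
print(f"t_cross ~ {t_cross:.1e} yr")   # ~8e5 yr
```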
In terms of the energetics of the gas, heating by SNe is sufficient to
counterbalance the radiative loss. As discussed in the following
section, the expected supernova rate is $\sim 2$ SNe yr$^{-1}$,
implying an energy injection rate of $\sim 6\times 10^{43}$ erg~s$^{-1}$ (or
Starburst99 gives the mechanical luminosity of $\sim 1\times 10^{44}$
erg~s$^{-1}$). The required fraction of the SN energy that goes to heating
is very small, $\eta_{\rm h}\sim 3\times 10^{-3}$. Therefore, given
the moderate soft X-ray nebula luminosity, the bulk of the energy has
to escape in the form of a wind without depositing its energy in the
galactic medium, where it would be radiated. This hot bubble interpretation
appears entirely plausible, apart from the X-ray binary problem
mentioned above.
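The heating fraction quoted above follows from the SN rate and a fiducial $10^{51}$ erg per supernova (an assumed canonical value):

```python
# SN energy budget: 2 SNe/yr at a canonical 1e51 erg each, versus the
# radiated luminosity of the hot gas.
YR_S = 3.156e7
E_SN = 1e51                      # erg per supernova (fiducial assumption)
rate = 2.0                       # SNe per year
L_inject = rate * E_SN / YR_S    # ~6e43 erg/s, as quoted
L_gas = 1.6e41                   # bolometric luminosity of the hot gas
eta_h = L_gas / L_inject         # fraction of SN energy needed for heating
print(f"L_inject ~ {L_inject:.1e} erg/s, eta_h ~ {eta_h:.1e}")  # ~3e-3
```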
\subsection{Luminous radio supernovae}
An alternative origin for the thermal emission is an ensemble of
supernovae. Several radio knots distributed over the nuclear region
have been imaged with VLBI, and are interpreted as very luminous
compact radio supernovae taking place in a dense environment (Smith et
al 1998). They can be luminous X-ray sources, as Type IIn SNe
(Benetti et al 1995) like SN1986J (Houck et al 1998), SN1988Z (Fabian
\& Terlevich 1996) and SN1995J (Fox et al 2000) have X-ray
luminosities of $10^{40}$--$10^{41}$erg~s$^{-1}$. Their X-ray spectra seem
hard enough to match the hard-band spectrum of Arp220, i.e.,
temperatures of 3--10 keV in $kT$ when fitted with a thermal emission
model (as summarised in Fox et al 2000), and at least SN1986J shows
evidence for a strong Fe K line at 6.7 keV (Houck et al 1998). With a
star formation rate of 200 {\it M}$_\odot$\ yr$^{-1}$, the expected supernova rate
is $\sim 2$ SNe yr$^{-1}$, as also estimated by Smith et al (1998). If
1 per cent of the total energy of each SN goes into radiation, and is
emitted at $10^{41}$erg~s$^{-1}$, then the cooling time is $\sim 3$ yr,
which is roughly comparable to the estimated average time between SNe
(however, as the radiative efficiency might be much higher in the
dense environment, these SNe could overproduce the integrated X-ray
luminosity). Therefore it is plausible to have multiple X-ray SNe at
the same time, and to maintain a stable hard X-ray luminosity until
most of the massive stars die out.
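The per-SN cooling-time figure can be checked directly; the snippet below assumes, as in the text, that 1 per cent of a canonical $10^{51}$ erg SN energy is radiated at $10^{41}$ erg s$^{-1}$.

```python
# X-ray lifetime of a single luminous SN under the assumptions in the text.
YR_S = 3.156e7
E_rad = 0.01 * 1e51        # 1 per cent of the canonical SN energy, erg
L_x = 1e41                 # assumed X-ray luminosity, erg/s
t_rad_yr = E_rad / L_x / YR_S
print(f"t_rad ~ {t_rad_yr:.1f} yr")  # ~3 yr, comparable to 1/(2 SNe/yr)
```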
Whether powerful SNe like SN1986J are constantly produced in the
Arp220 nuclear region is questioned by the second VLBI observation
following the results presented in Smith et al (1998) three years
later. Most of the radio knots have faded by 8--50 per cent (some
remain at the same brightness), but no new compact sources have
appeared (Smith et al 1999). The slow decline is not consistent with a
luminous SN model. The SN rate also has to be lowered ($\leq 0.3$ SNe
yr$^{-1}$, Smith et al 1999). Although this estimate is only for the
powerful radio SNe, X-ray luminous SNe are probably closely related.
With these facts, it is unclear whether the above explanation of the
hard X-ray emission with powerful SNe is sustainable.
\subsection{Photoionized gas}
Photoionized gas illuminated by a hidden AGN is still a viable
alternative for Arp220. In fact, if the possible radiative
recombination continuum (RRC) feature is real, it may be the most
likely interpretation. A high ionization parameter, $\xi =
L_{ion}/(nR^2)\sim 10^3$ erg\thinspace cm\thinspace s$^{-1}$, where $L_{ion}$ (erg~s$^{-1}$), $n$
(cm$^{-3}$) and $R$ (cm) are respectively the ionizing luminosity, density and
distance from the source, is implied by the line emission centred at
6.7 keV. The extension of the hard X-ray emission points to the
presence of a larger region of low density interstellar gas with
density of the order of 1--10 cm$^{-3}$. Extended photoionized nebulae
have been found in a number of nearby Seyfert 2 galaxies, some of which
show extended Fe K emission (e.g., NGC4945, Done et al 2003; NGC4388,
Iwasawa et al 2003).
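As a rough consistency check of this geometry, one can invert $\xi = L_{ion}/(nR^2)$ for the radius; the sketch below assumes $L_{ion}\sim 2\times 10^{45}$ erg s$^{-1}$ (roughly half the infrared output, the scaling adopted later in the text), which maps densities of 1--10 cm$^{-3}$ onto sub-kpc scales, consistent with the observed extension.

```python
# Invert the ionization parameter for the distance from the source:
# R = sqrt(L_ion / (xi * n)).
import math

KPC_CM = 3.086e21
xi = 1e3                   # erg cm s^-1, from the line emission
L_ion = 2e45               # erg/s (assumption, see lead-in)
for n in (1.0, 10.0):      # cm^-3, the range quoted in the text
    R = math.sqrt(L_ion / (xi * n))   # cm
    print(f"n = {n:4.0f} cm^-3 -> R ~ {R / KPC_CM:.2f} kpc")
```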
The photoionized gas in AGN appears to have ionization parameters
distributed over a wide range (e.g., Krolik \& Kriss 2001; Kinkhabwala
et al 2002). Although detailed modelling of the Arp220 hard X-ray
spectrum is beyond the scope of this paper, given the quality (and
also the spectral resolution) of the present data, our preliminary
inspection using the photoionization code, XSTAR (Kallman \& Bautista
2001) indicates log $\xi$ in the range of 2.8--3.5 is relevant to the
observed spectral features.
Absorption with a
column density of the order of $10^{23}$cm$^{-2}$\ would be required for
this photoionized spectrum not to dominate the soft X-ray band and to
make the spectrum as hard as is observed.
This photoionization model predicts RRC features at 5--5.5 keV from
CaXIX,XX and at $\sim 9$ keV from FeXXV,XXVI, which should be
relatively isolated so that they could be resolved at the CCD
resolution if the data quality is improved. This could provide a
crucial test for the photoionization model when longer exposure hard
X-ray data are obtained.
Strong Fe K line complex emission, consisting of a 6.4 keV line, which
is usually the strongest as expected from FeI-XVII, and higher energy
lines of FeXXV (6.7 keV) and FeXXVI (6.97 keV), has been observed in a
number of Compton-thick Seyfert 2 nuclei (e.g., NGC1068, NGC6240,
NGC4945). It is clear that the 6.4 keV line and the higher energy
lines originate from different matter because of their large difference
in ionization stages. The narrow line width of the latter naturally
suggests that the line emitting gas is optically thin.
The strong 6.4 keV lines seen in good quality spectra of Seyfert 2
nuclei often show a Compton shoulder, i.e. a weak redward extension to
the line core (e.g., Iwasawa et al 1997; Bianchi et al 2002; Kaspi et
al 2002; see also George \& Fabian 1991; Matt 2002 for theory). This
means that the 6.4 keV line results from reflection from optically
thick matter, which is, in the torus model for the unification scheme,
identified as the visible surface of the inner wall of the obscuring
torus (e.g., Awaki et al 1991; Ghisellini, Haardt \& Matt 1994;
Krolik, Madau \& \.Zycki 1994).
The non-detection of a 6.4 keV line in the Arp220 spectrum implies the lack
of reflection from optically-thick cold matter, and perhaps suggests
that the visibility of the torus inner wall is very limited. The
conditions for an energetically significant AGN to exist in
Arp220, imposed by the hard X-ray observation, are that the obscuring
matter must have a column density larger than \nH $= 10^{25}$cm$^{-2}$\
with a covering factor close to unity (Iwasawa et al 2001). With the
very high covering factor, the inner wall of the obscuring torus is
capped and never visible, which could explain the lack of a 6.4 keV
line. If a tiny fraction of light from a hidden nucleus leaks through
the heavy obscuration, the surrounding low density medium could be highly ionized to
give rise to ``hot'' Fe K line emission.
The covering fraction, $(1-f)$, can be estimated from the Fe K
line flux by comparison with the well-studied Compton-thick AGN NGC1068.
NGC1068 has a large infrared excess with $L_{\rm ir}\simeq 7\times
10^{44}$ erg~s$^{-1}$\ for an assumed source distance of 14.4 Mpc. The 6.8 keV line
flux measured in the Chandra data is $5.5\times 10^{-5}$ph\thinspace s$^{-1}$\thinspace cm$^{-2}$\
(Young, Wilson \& Shopbell 2001; a similar flux was measured by
Iwasawa et al 1997 for the sum of FeXXV and FeXXVI in the ASCA data).
Kinkhabwala et al (2002) estimated $fL_{ion}\approx 10^{43}$erg~s$^{-1}$, by
fitting the detailed photoionization model to the RGS data on NGC1068.
If both NGC1068 and Arp220 are powered by hidden AGN, and the hot Fe K
line emitting region in NGC1068 sees the same ionizing source, then
$fL_{ion}$ for Arp220 is $\sim 0.8\times 10^{43}$erg~s$^{-1}$, and with
$L_{ion}\sim (1/2)L_{\rm ir}$, $f$ is estimated to be $\sim 0.4$ per
cent, which can be compared with $f\sim 3$ per cent for NGC1068
($f<10$ per cent from the Chandra image, Kinkhabwala et al 2002) under
the same assumption. Thus, reducing the opening fraction of the
obscuration further would make the photoionization model consistent
with that for NGC1068.
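The comparison above amounts to $f = fL_{ion}/[(1/2)L_{\rm ir}]$; the sketch below uses the quoted $fL_{ion}$ values, with Arp220's $L_{\rm ir}$ taken as $\sim 10^{12}$ {\it L}$_\odot$ (an assumption consistent with the bolometric luminosity discussed in the next section).

```python
# Opening fraction f from the Fe K constraint, assuming L_ion ~ (1/2) L_ir.
L_SUN = 3.83e33  # erg/s

def opening_fraction(fL_ion, L_ir):
    # f * L_ion is constrained by the line flux; L_ion ~ 0.5 * L_ir.
    return fL_ion / (0.5 * L_ir)

f_arp220 = opening_fraction(0.8e43, 1e12 * L_SUN)   # ~0.4 per cent
f_ngc1068 = opening_fraction(1e43, 7e44)            # ~3 per cent
print(f"Arp220:  f ~ {100 * f_arp220:.1f} per cent")
print(f"NGC1068: f ~ {100 * f_ngc1068:.1f} per cent")
```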
\section{The power source of the far-infrared emission}
While there is no doubt that a starburst is taking place in Arp220,
various mid-infrared characteristics, summarised by Spoon et al (2004,
and references therein), the superwind luminosity inferred from the
H$\alpha $ and soft X-ray nebulae (Suchkov et al 1996; Iwasawa 1999;
also McDowell et al 2003 for an alternative interpretation of merger
shock), and the X-ray binary emission in the hard X-ray band, as
discussed above, are all unusually low relative to the far-infrared
luminosity for starburst galaxies.
The peculiarity of Arp220 is well illustrated by Spoon et al (2004),
who re-examined the ISO mid-infrared spectrum with the knowledge of a
number of ice absorption features across the band. Their spectral
analysis and the higher resolution study by Soifer et al (2002)
demonstrate that the mid-infrared emission in Arp220 consists of
diffuse PAH emission with a moderate amount of absorption and a
NGC4418-like, heavily absorbed continuum. The latter is more likely to
contribute to the far-infrared luminosity of Arp220, although the
above authors note that, if all the far-infrared luminosity is powered
by a hidden AGN, it would not even be visible at mid-infrared
wavelengths.
If, say, 10 per cent of the bolometric luminosity ($\sim 10^{11}${\it L}$_\odot$)
were due to a moderately obscured starburst, the outward
characteristics of such a starburst as described above would then be
in agreement with most other starburst galaxies. For example, the soft
X-ray nebula has the luminosity of $\sim 1\times 10^{41}$erg~s$^{-1}$,
consistent with the predicted luminosity ratio of X-ray and
starburst-powered infrared emission for a superwind nebula (log
(SX/IR) $\approx -3.5$, Leitherer \& Heckman 1999; Strickland \&
Stevens 2000). Also, the X-ray binary emission in the hard X-ray band
approaches the correlation line with the star formation rate for local
starburst galaxies (Ranalli et al 2002; Grimm et al 2002).
If such a lower luminosity starburst were indeed the case, the rest
of Arp220's luminosity would have to be explained by something else.
Spoon et al (2004) suggested a deeply shrouded ultra dense starburst,
in addition to the less obscured starburst, which is responsible for
the diffuse PAH emission. The estimated optical depth based on their
modelling of the ISO spectrum implies that the column density to such
a star cluster would be \nH $\sim 10^{23}$cm$^{-2}$. Since X-rays at
energies higher than a few keV are transparent to this level of
obscuration, luminous hot gas, for instance, associated with the star
cluster would be observable as a hard X-ray excess, which is not
apparent in the data.
An alternative option is an even more deeply buried AGN. If the
photoionized gas interpretation was correct for the detected Fe K
line, then the presence of an AGN would be proved. It is not
straightforward to estimate the intrinsic luminosity of such an
obscured AGN without seeing its transmitted radiation: once the line
of sight Thomson depth exceeds unity (Compton thick), the visibility
of nuclear X-ray emission is largely determined by geometry, e.g., the
covering factor, distribution of the gas density etc., which are not
known. However, the comparison with NGC1068 (Section 4.4) demonstrates
that with $f\sim 0.4$ per cent, the detected Fe K line luminosity is
consistent with the assumption that all the luminosity is powered by
an AGN. Therefore once the photoionization origin for the Fe K line is
confirmed, it would be plausible that an AGN dominates the energetics.
The radio spectrum of Arp220 must then be explained by free-free
absorption, the opacity of which should be larger than those in the
Compton-thick AGN in NGC6240 and NGC4945 (see Fig. 5 in Iwasawa et al
2001).
There are a few interesting infrared luminous objects to compare with
Arp220. NGC4418 is a luminous infrared galaxy with $L_{\rm ir}\simeq
1\times 10^{11}${\it L}$_\odot$, and its mid-infrared spectrum shows no PAH
features, but is shaped by silicate and various ice absorption
features (Spoon et al 2001). Very faint X-ray emission with $L_{\rm
X}\sim 10^{39}$ erg~s$^{-1}$\ has been detected from the centre of NGC4418
with Chandra (Maiolino et al 2003). The luminosity ratio of X-ray and
infrared emission for NGC4418 {\it is even smaller} than that of
Arp220. The hyper-luminous infrared galaxy, IRAS 00182--7112
($z=0.327$), has kinematical evidence for a superwind (Heckman et al
1990), as in Arp220, and its mid-infrared spectrum is classified as
Class 1 by Spoon et al (2002), i.e., it is NGC4418-like. A recent
XMM-Newton observation of IRAS 00182--7112 detected a Fe K line at 6.7
keV with $EW\sim 1.5$ keV on a flat continuum (K. Nandra, priv comm),
reminiscent of Arp220.
These three objects share similar properties, and if they are powered
by stars, they are starbursts for which the widely used relation
between far-infrared (or the translated star formation rate) and hard
X-ray (X-ray binary emission) luminosities does not apply. Their 2--10
keV X-ray luminosities are well below the correlation line of Ranalli
et al (2003) and Grimm et al (2003). The detection of the Fe K line,
in Arp220 in particular, means that their X-ray binary contribution to
the 2--10 keV band must be minor, making them deviate from the
correlation further. Hiding most of the X-ray binaries behind thick
obscuration would require a substantial mass of gas and dust, given the
observed spatial extension (e.g., Arp220). If the formation of X-ray
binaries were suppressed, an unusually large suppressing factor ($\sim
20$ for Arp220 when compared with Grimm et al 2003) would be required.
A heavily obscured AGN may offer a more relaxed solution for the
embedded power sources for all three of these objects. For Arp220, of
course, a substantial starburst ($\sim 10^{11}${\it L}$_\odot$) would still be
required to explain the near-mid infrared light, superwind signatures
and radio supernovae etc.
\section*{Acknowledgements}
The XMM-Newton data presented here were obtained from the XMM-Newton
Science Archive maintained by the Science Operations Centre and the
observations were carried out in the Guaranteed-Time program (PI, B.
Aschenbach). We thank Steve Allen, Steve Smartt, Massimo Ricotti,
Andy Fabian, and Dave Strickland for useful discussion and Paul Nandra
for information on his unpublished result. ASE was supported by NSF
grant AST 00-80881. GM, KI and NT thank PPARC for support.
|
{
"timestamp": "2004-11-19T12:26:37",
"yymm": "0411",
"arxiv_id": "astro-ph/0411562",
"language": "en",
"url": "https://arxiv.org/abs/astro-ph/0411562"
}
|
\section{Introduction}
\label{intro}
The compact elliptical galaxy M32 has been widely used in the past as
template for the study of stellar populations and chemical evolution
of elliptical galaxies (e.g$.$ Freedman 1992 and references
therein). It is very nearby (780\,kpc; Tonry 1991; Freedman \& Madore
1990). It has very high surface brightness at optical wavelengths and
it is of high metallicity ($-$0.2$<$[Fe/H]$<$$+$0.01; Grillmair et
al$.$ 1996).
However, the initial study of the UV properties of this galaxy by
O'Connell et al$.$ (1992) and more recently by Ohl et al$.$ (1998)
cast some doubt on M32 as being a truly typical example of an
elliptical galaxy. These authors, using data from the Shuttle-borne
Ultraviolet Imaging Telescope (UIT), claimed the presence of a strong
FUV-optical color gradient in M32, but inverted with respect to the
gradients observed in the vast majority of elliptical galaxies. While
in regular, luminous elliptical galaxies the inner regions are
slightly bluer than the outer parts (probably suggesting a stronger
UV-upturn in the nuclear regions), for M32 these authors reported the
opposite: a very strong blue trend ($\sim$3\,mag within the effective
radius; $r_{\mathrm{eff}}$) toward outer regions of the galaxy.
Newly obtained GALEX FUV observations now show that the FUV-optical
gradient in M32 ($\Delta$($FUV$--$B$)/$\Delta$$\log$\,(r)=$+$0.15$\pm$0.03) is
in fact very similar to the gradients commonly measured in luminous
elliptical galaxies. This analysis rests on a careful subtraction of
the background emission from the disk of M31 (see e.g$.$ Choi,
Guhathakurta, \& Johnston 2002). We suggest that the strong negative
gradient reported by Ohl et al$.$ (1998) may have been caused by
problems in the density-to-flux calibration of UIT photographic data
at low surface-brightness levels.
\section{Observations}
\label{observations}
GALEX has recently completed a mosaic image of the entire Andromeda
galaxy. This mosaic includes observations of the compact elliptical
galaxy M32 with exposure times of 6138 seconds in the FUV band
($\lambda$=1530\,\AA) and 4808 seconds in the NUV band
($\lambda$=2310\,\AA). The final spatial resolution (FWHM) of the
combined images of M32 used in this Letter was 6.0\arcsec\ and
6.8\arcsec\ for the FUV and for the NUV. The images were flux
calibrated using the GALEX zero points (Morrissey et al$.$ 2004).
In Figures~\ref{fig1}a \& \ref{fig1}b we show a
25\arcmin$\times$25\arcmin\ section of the GALEX FUV and NUV images
centered on M32. It is evident from these figures that significant FUV
and NUV emission from the disk of M31 seriously affects M32 and that
it is complex with a steep NW-SE gradient. The average FUV (NUV)
background associated with the disk of M31 that we measure close to
the position of M32 is 26.0 (25.7) mag\,arcsec$^{-2}$, while the
background observed far from the disk of M31 is much lower, 27.2
(26.7) mag\,arcsec$^{-2}$. Therefore, if we want to derive reliable
surface photometry for M32, detailed modeling of the M31 disk emission
is required.
Finally, we complemented our GALEX observations with archival $HST$
data obtained with the STIS FUV MAMA (GO 9053; PI: T.M$.$ Brown). The
$HST$ image allows us to analyze the innermost 16\arcsec\ (in radius)
of M32 at high spatial resolution ($<$0.1\arcsec).
\section{Analysis}
\label{analysis}
\subsection{Subtraction of the Disk of M31}
The morphology of the disk of M31 both in the FUV and NUV is very
clumpy (see Figures~\ref{fig1}a \& \ref{fig1}b), mostly due to the
distinct contribution of OB associations and HII regions. This makes
the modelling of the disk more complicated than at optical wavelengths
where the light distribution is significantly smoother and can be
reasonably well reproduced by an exponential disk (Peletier 1993; Choi
et al$.$ 2002).
The subtraction of the disk of M31 was carried out in two
steps. First, we removed the unresolved, diffuse background
component. For the purpose of modeling this background component we
masked all the individual clusters, associations, field stars, and M32
itself. Then, we divided the image into boxes of
75\arcsec$\times$75\arcsec\ and fitted a low-order polynomial to the
remaining (un-masked) pixels using the IRAF task {\sc surfit}. We then
subtracted the fitted background from the images and added the mean
value of the modelled sky.
In the second step of subtracting the M31 disk, we removed the
point-sources contribution by modelling the PSF of the GALEX images
using the IRAF task {\sc psf}. We then subtracted the point sources
previously identified by {\sc daofind} using the task {\sc substar}.
The final result from the subtraction of both the unresolved
background and point sources is shown in Figures~\ref{fig1}c \&
\ref{fig1}d for the FUV and NUV images, respectively. A few point
sources in the outer regions of the halo of M32 and residuals from the
point-source subtraction were further masked in order to derive the
surface brightness and color profiles of M32.
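The polynomial background-fitting step can be illustrated with a minimal numpy sketch (the actual reduction used the IRAF tasks {\sc surfit}, {\sc daofind} and {\sc substar}; the data here are entirely synthetic):

```python
# Fit a low-order 2D polynomial to the un-masked pixels of an image,
# illustrating the diffuse-background modelling step described above.
import numpy as np

def fit_background(image, mask, order=2):
    """Fit a 2D polynomial of the given order to pixels where mask is True."""
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx]
    x = x / nx  # normalise coordinates for a well-conditioned fit
    y = y / ny
    # Design matrix with all terms x^i * y^j, i + j <= order.
    terms = [x**i * y**j for i in range(order + 1)
             for j in range(order + 1 - i)]
    A = np.stack([t[mask] for t in terms], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, image[mask], rcond=None)
    return sum(c * t for c, t in zip(coeffs, terms))

# Synthetic test: a smooth gradient plus a masked "galaxy".
ny, nx = 64, 64
yy, xx = np.mgrid[0:ny, 0:nx]
background = 10.0 + 0.05 * xx + 0.02 * yy
image = background.copy()
image[24:40, 24:40] += 50.0          # bright source to be masked
mask = np.ones_like(image, dtype=bool)
mask[24:40, 24:40] = False           # exclude the source from the fit
model = fit_background(image, mask)
print(f"max residual: {np.abs(model - background).max():.2e}")
```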
\subsection{Surface Brightness and Color Profiles}
To compute the FUV and NUV surface brightness profiles of M32 we used
isophotal parameters derived by both Peletier (1993) and Choi et al$.$
(2002) at optical wavelengths. This allowed us to directly compare our
UV surface photometry with that derived in the optical and obtain
self-consistent UV-optical color profiles. Note that in both of the
above papers the contamination from the disk of M31 was explicitly
accounted for. For the sake of comparison we also fitted isophotes to our
final NUV image using the iterative method of Jedrzejewski (1987). We
found very small differences both in ellipticity ($-$0.04$\pm$0.04)
and position angle (2$\pm$6$^{\circ}$) between our best-fitting
isophotes and those of Peletier (1993).
In Figure~\ref{fig2} we show the FUV and NUV surface brightness
profiles (in AB magnitudes) obtained from GALEX observations. The
equivalent isophotal radius in this plot is computed as $\sqrt{a
\times b}$. The $B$-band and UIT FUV surface brightness profiles
published by Peletier (1993) and Ohl et al$.$ (1998), respectively,
are also plotted for comparison.
The best-fitting S\'ersic-law indices of the FUV and NUV surface
brightness profiles shown in Figure~\ref{fig2} are 0.38$\pm$0.01 and
0.26$\pm$0.01, respectively. These values are very similar to what is
expected for a pure de Vaucouleurs profile (0.25). Note that our UV
observations do not reach the larger galactocentric distances where
Graham (2002) reported the presence of an extended exponential disk.
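In the parameterization used here the quoted index is the exponent $\alpha$ in $\mu(r)=\mu_0+k\,r^{\alpha}$ ($\alpha=0.25$ for a de Vaucouleurs profile, $\alpha=1$ for an exponential). A minimal illustration of such a fit on synthetic data follows (no GALEX photometry is reproduced):

```python
# Fit the Sersic-law exponent alpha to a synthetic de Vaucouleurs-like
# surface brightness profile mu(r) = mu0 + k * r**alpha.
import numpy as np
from scipy.optimize import curve_fit

def sersic_mu(r, mu0, k, alpha):
    return mu0 + k * r**alpha

rng = np.random.default_rng(0)
r = np.linspace(2.0, 60.0, 40)                 # radius, arcsec
mu = sersic_mu(r, 16.0, 3.0, 0.25) + rng.normal(0, 0.01, r.size)

popt, pcov = curve_fit(sersic_mu, r, mu, p0=(15.0, 2.0, 0.5))
alpha_fit = popt[2]
print(f"recovered alpha = {alpha_fit:.3f}")    # close to the input 0.25
```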
For the innermost regions of the FUV surface brightness profile of M32
we have used an archival $HST$-STIS image. The background was
estimated by matching the outermost part of the $HST$ profile
(obtained using ground-based optical isophotal parameters) with the
GALEX FUV profile (see Figure~\ref{fig2}).
As seen in Figure~\ref{fig3}a, all the color gradients obtained are
rather flat within one $r_{\mathrm{eff}}$ (32\arcsec; e.g$.$ Choi et
al$.$ 2002). Optical data used in this plot come from Peletier (1993);
however almost identical results are obtained if data from Choi et
al$.$ (2002) are used. It is noteworthy that this behavior is observed
even at distances as close to the galaxy center as 2\arcsec, below
which atmospheric seeing starts to affect ground-based optical
photometry (see Figure~\ref{fig3}b).
\section{Discussion}
\label{discussion}
\subsection{M32 and the Origin of the UV-upturn}
A least-squares fit to the ($FUV$--$B$) color profile between
6\arcsec\ (FWHM) and 32\arcsec\ ($r_{\mathrm{eff}}$) in radius yields
a color gradient of
$\Delta$($FUV$--$B$)/$\Delta$$\log$\,(r)=$+$0.15$\pm$0.03 (see
Figure~\ref{fig3}b). This value is similar to that obtained by Ohl et
al$.$ (1998) for luminous elliptical galaxies ($+$0.5$\pm$0.3), but it
is very different from that obtained for M32 by Ohl et al$.$ (1998)
($\Delta$($FUV$--$B$)/$\Delta$$\log$\,(r)$<$$-$2). First, we checked
whether the difference found arises from the UIT surface photometry
obtained by Ohl et al$.$ (1998) being significantly affected by the
emission of the disk of M31. To check this we derived the same
($FUV$--$B$) color gradient using the background subtraction procedure
described above on the archival Astro-1 (B1 filter;
$\lambda_{\mathrm{eff}}$=1520\,\AA) FUV UIT image. The color profile
obtained is remarkably similar to that obtained by Ohl et al$.$ (1998)
(see Figure~\ref{fig3}b) after being offset to match the Astro-2 (B5
filter; $\lambda_{\mathrm{eff}}$=1615\,\AA) FUV UIT photometry. We
also studied the effects of the wings of the UIT PSF on the
($FUV$--$B$) profile. The maximum impact of this effect on the
($FUV$--$B$) color gradient is found to be $\leq$0.4\,mag within the
central 30\arcsec. The other possibility is that this difference may
be a problem in the density-to-flux calibration of the (photographic)
UIT image at very low surface-brightness levels. However, a detailed
study of the linearity of the UIT data is beyond the scope of this
Letter.
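The gradient measurement itself is a least-squares fit of colour against $\log r$; a minimal sketch on synthetic data with the slope reported in this Letter:

```python
# Least-squares estimate of Delta(FUV-B)/Delta log r between the FWHM
# (6 arcsec) and r_eff (32 arcsec), on synthetic data with slope +0.15.
import numpy as np

rng = np.random.default_rng(1)
r = np.logspace(np.log10(6.0), np.log10(32.0), 20)   # arcsec
color = 6.0 + 0.15 * np.log10(r) + rng.normal(0, 0.01, r.size)

slope, intercept = np.polyfit(np.log10(r), color, 1)
print(f"Delta(FUV-B)/Delta log r = {slope:+.2f}")    # ~ +0.15
```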
The GALEX ($FUV$--$NUV$) color gradient (also sensitive to the strength of
the UV-upturn) is similar to that derived for ($FUV$--$B$). This
confirms that the color gradient derived is real and not an artifact
introduced by the different background-subtraction technique or
because of spatial resolution differences between the UV and optical
data. This is confirmed by the analysis of the archival $HST$-STIS
data, which shows a gradient at the innermost regions of the galaxy
similar to and extending that obtained from GALEX observations alone
(see Figure~\ref{fig3}b).
Despite its very shallow UV-upturn, which in principle could be
explained by emission from post-AGB stars, Brown et al$.$ (2000) have
shown that the UV emission in M32 is dominated by hot HB stars. By
analogy, this suggests that hot HB stars are also responsible for most
of the FUV emission associated with the UV-upturn observed in luminous
elliptical galaxies (see Brown et al$.$ 1997). The ($FUV$--$B$) color
gradient reported in this Letter, similar to that measured in luminous
elliptical galaxies, along with the results of Brown et al$.$ (2000)
suggest that the properties and spatial distribution of the hot HB
stars in M32 are the same as those in luminous elliptical
galaxies. The great advantage here is that M32 is the only object
where individual hot-HB stars have actually been resolved.
Burstein et al$.$ (1988) claimed that elliptical galaxies with larger
Mg$_2$ indices show stronger UV-upturns. This has been interpreted as
resulting from a dependence of mass-loss efficiency and helium
abundance on metallicity (Greggio \& Renzini 1990; O'Connell
1999). However, Rich et al$.$ (2004), using a large sample of
low-redshift red galaxies, do not find any correlation between the
strength of the UV-upturn and the Mg$_2$, D4000, H$\beta$ indices or
the velocity dispersion (see also Deharveng, Boselli, \& Donas
2002). Ohl et al$.$ (1998) also reported a lack of correlation between
the ($FUV$--$B$) gradient and the Mg$_2$-index gradient. These results
suggest the presence of a second parameter (decoupling from the
Fe-peak, helium abundance, age; O'Connell 1999) that could have an
impact even stronger than the metallicity on the evolution of the
UV-upturn.
The suggested presence of a strong negative gradient in ($FUV$--$B$)
color in M32 (Ohl et al$.$ 1998), where the existence of an
intermediate age stellar population (spatially segregated toward the
galaxy center) has been frequently proposed (Grillmair et al$.$ 1996),
had been claimed as an indication that age may play a significant
role in the evolution of the UV-upturn. However, our results in
combination with the lack of structure in the optical-colors and
spectroscopic-index maps of M32 (e.g$.$ del Burgo et al$.$ 2001)
indicate that if this stellar population is present, it is very
smoothly distributed across the body of the galaxy and that the
properties of the hot HB responsible for the UV-upturn are also very
similar at any position in the galaxy.
\subsection{Is M32 a Peculiar Object?}
The two most intriguing differences reported between the properties of
M32 and those of luminous elliptical galaxies had been: (1) the
presence of an intermediate-aged stellar population (e.g$.$ Grillmair
et al$.$ 1996) and (2) the large (inverted) ($FUV$--$B$) color measured
by Ohl et al$.$ (1998). Regarding the former problem we note that many
of the spectral synthesis analyses carried out to date assume a pure
red clump HB morphology, while Brown et al$.$ (2000) have identified a
large population of hot HB stars in M32. With regard to the latter
topic, our results show that the previously reported unusual
($FUV$--$B$) color gradient does not exist and that the UV properties of
M32 are very similar to those of luminous ellipticals.
We conclude that, although M32 is certainly an extreme example in the
sequence of elliptical galaxies in many of its properties and the
possible presence of an intermediate-aged stellar population should
not be ignored, it cannot be considered to be a peculiar object and
its use as a reference object for stellar population synthesis is
justified.
In summary, the analysis of GALEX FUV and NUV imaging data of the
compact elliptical galaxy M32 yields very small (positive) ($FUV$--$B$)
and ($FUV$--$NUV$) color gradients, comparable to values seen in luminous
elliptical galaxies. This result suggests that the properties of the
hot HB stars responsible for the formation of the (very weak)
UV-upturn in M32 are not a strong function of position in the galaxy
and that they are probably similar to hot HB stars in luminous
elliptical galaxies.
\acknowledgments
GALEX (Galaxy Evolution Explorer) is a NASA Small Explorer, launched
in April 2003. We gratefully acknowledge NASA's support for
construction, operation, and science analysis for the GALEX mission,
developed in cooperation with the Centre National d'Etudes Spatiales
of France and the Korean Ministry of Science and Technology. We thank
Robert W. O'Connell and Jean-Michel Deharveng for valuable comments.
\section{Introduction}
Bayesian inference is a powerful principle for modeling and manipulating probabilistic information.
In many cases, Bayesian inference is regarded as the optimal and legitimate rule for inferring such information.
\begin{itemize}
\item Bayesian filters for example are typically regarded as optimal filters~\cite{arulampalam02tutorial},
\item Bayesian networks are particularly powerful tools for modeling uncertain information.
By merging independence priors with logical priors, Bayesian networks are generally associated with Markovian properties, which allow quite efficient computations~\cite{pearlRus,murphy,Dambreville:cepomdp}.
\end{itemize}
Although Bayesian inference is an established principle, it is worth recalling~\cite{jaynes} that it was disputed until the middle of the twentieth century, in particular by the frequentist community.
What established Bayesian inference was chiefly a logical justification of the rule~\cite{cox,jaynes}.
A certain convergence with the frequentist interpretation completed this acceptance.
Cox derived the characteristics of probability and of the Bayesian conditional from hypothesized axioms about the probabilistic system, which were themselves reduced to functional equations.
Typical axioms are:
\begin{itemize}
\item The operation which maps the probability of a proposition to the probability of its negation is an involution (applying it twice recovers the initial probability),
\item The probability of $A\wedge B$ depends only on the probability of $A$ and the probability of $B$ given that $A$ is true,
\item The probability of a proposition is independent of the way it is deduced (consistency).
\end{itemize}
Note that Cox's interpretation has recently been criticized for some imprecision and reconsidered~\cite{debrucq2,Halpern}.
\\[5pt]
In some sense, Cox's justification of the Bayesian conditional is not entirely satisfactory, since it is implicit: it justifies the Bayesian conditional as the operator fulfilling some natural properties, but does not construct a full underlying logic prior to the probability.
The purpose of this paper is to construct an explicit logic for the Bayesian conditional as a conditional logic:
\begin{enumerate}
\item Build a (deterministic) conditional logic, \emph{prior to any notion of probability}.
This logic will extend the classical propositional logic (unconditioned propositions).
It will contain conditional propositions $(\psi|\phi)$ built for any propositions $\phi$ and $\psi$,
\item Given a probability $p$ over the unconditioned propositions, derive the probabilistic Bayesian conditional from an extension $\overline{p}$ of $p$ over the whole logic.
The Bayesian conditional will be recovered by setting $p(\psi|\phi)=\overline{p}\bigl((\psi|\phi)\bigr)$\,.
\end{enumerate}
The construction of an explicit underlying logic provides a better understanding of the Bayesian conditional, but it will also make possible a comparison with other rules for manipulating probabilistic information, based on other logics~\cite{dambreville}.
\\[5pt]
It is known that the construction of such an underlying logic is heavily constrained by Lewis' triviality \cite{lewis,hajek1,fraassen}, which has revealed some critical issues related to the notion of conditional probability.
In particular, Lewis' result implies strong hypotheses about the nature of the conditionals.
Essentially, the conditionals have to be constructed outside the space of unconditioned propositions.
This result has shaped the way the logic of the Bayesian conditional has been investigated.
Many approaches do not distinguish the Bayesian conditional from probabilistic notions.
This is the case of the theory called \emph{Bayesian Logic}~\cite{BayesianLogic}, which is an extension of probabilistic logic programming by means of Bayesian conditioning.
Other approaches, like conditional algebras or conditional logics, result in the construction of conditional operators, which finally arise as abstractions independent of any probability.
However, these logical constructions are still approximations of the Bayesian conditional or are constrained in use.
\\[5pt]
Since Lewis' triviality is a fundamental reference in this work, it is introduced now.
Along the way, different logical approaches to the Bayesian conditional are mentioned, and it is shown how these approaches avoid the triviality.
\paragraph{Lewis triviality.}
Let $\Omega$ be the set of all events, and $\mathcal{M}$ be the set of measurable subsets of $\Omega$.
Let $\mathit{Pr}(\mathcal{M})$ be the set of probability measures on $\mathcal{M}$.
Lewis triviality may be expressed as follows:
\\[5pt]
\emph{Let $A,B\in \mathcal{M}$ with $\emptyset\subsetneq B\subsetneq
A\subsetneq \Omega$\,.
Then, it is impossible to build a proposition $(B|A)\in \mathcal{M}$ such that $\pi\bigl((B|A)\bigr)=\pi(B|A)\stackrel{\Delta}{=}\frac{\pi(A\cap B)}{\pi(A)}$ for any $\pi\in\mathit{Pr}(\mathcal{M})$\,.}
\\[5pt]
Lewis' triviality thus makes it impossible to construct a (Bayesian) conditional operator within the same Boolean space.
\begin{description}
\item[Proof.]
Let $\pi$ be a probability such that $0<\pi(B)<\pi(A)<1$\,; \emph{the existence of $\pi$ is ensured by hypothesis $\emptyset\subsetneq B\subsetneq
A\subsetneq \Omega$}\,.
\\[5pt]
For any propositions $C,D\in\mathcal{M}$\,, define $\pi_C(D)=\pi(D|C)=\frac{\pi(C\cap D)}{\pi(C)}$\,, when $\pi(C)>0$.\\
Lewis' triviality relies on the following computation, derived when $\pi(A\cap C)>0$\,:
\begin{equation}
\label{Eq:DBL:v2:Lewis:1}
\begin{array}{@{}l@{}}\displaystyle\vspace{5pt}
\pi((B|A)|C)=\pi_C((B|A))=\pi_C(B|A)=\frac{\pi_C(A\cap B)}{\pi_C(A)}
\\\displaystyle
\rule{0pt}{0pt}\hspace{100pt}=\frac{\frac{\pi(C\cap A\cap B)}{\pi(C)}}{\frac{\pi(A\cap C)}{\pi(C)}}= \frac{\pi(C\cap A\cap B)}{\pi(A\cap C)}=\pi(B|C\cap A)\;.
\end{array}
\end{equation}
Denote $\sim B=\Omega\setminus B$.\\
$B\subset A$ and $0<\pi(B)<\pi(A)$ imply $\pi(A\cap B)>0$ and $\pi(A\cap \sim B)>0$\,.
\\
Then, it is inferred:
$$
\begin{array}{@{}l@{}}\displaystyle
\frac{\pi(B)}{\pi(A)}=\frac{\pi(A\cap B)}{\pi(A)}=\pi(B|A)=\pi((B|A))=\pi((B|A)|B)\pi(B)+\pi((B|A)|\sim B)\pi(\sim B)
\vspace{3pt}\\\displaystyle
\hspace{15pt}=\pi(B|B\cap A)\pi(B)+\pi(B|\sim B\cap A)\pi(\sim B)=1\times \pi(B)+0\times \pi(\sim B)=\pi(B)\;,
\end{array}
$$
which contradicts the hypotheses $0<\pi(B)$ and $\pi(A)<1$\,.
\item[$\Box\Box\Box$]\rule{0pt}{0pt}
\end{description}
In fact, the derivation~(\ref{Eq:DBL:v2:Lewis:1}) relies on the hypothesis that $\pi((B|A)|C)$ is defined as $\pi_C((B|A))$.
This hypothesis is necessary when $(B|A)\in\mathcal{M}$\,, but could be avoided when $(B|A)\not\in\mathcal{M}$\,.
\\[5pt]
More precisely, when the proposition $(B|A)$ lies outside $\mathcal{M}$, it becomes necessary to build, for any probability $\pi$, its extension $\overline{\pi}$ over the outside propositions; in particular, one defines $\overline{\pi}\bigl((B|A)\bigr)=\pi(B\cap A)/\pi(A)$ for any $A,B\in\mathcal{M}$\,.
But there is no reason to have $\overline{\pi_C}(D)=\overline{\pi}\bigl((D|C)\bigr)$ for $D\not\in\mathcal{M}$\,. Thus, the above triviality argument does not necessarily apply.
\\[5pt]
The property $\overline{\pi_C}\ne\overline{\pi}\bigl((\cdot|C)\bigr)$ is somewhat counter-intuitive.
In particular, it means that conditionals are not conserved by conditional probabilities.
However, it allows the construction of a conditional logic for the Bayesian conditional; our work provides an example of such construction.
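The mechanics of the triviality can be checked numerically. The following sketch (our illustration, with hypothetical numeric values) builds a strictly positive probability on a four-point space with $\emptyset\subsetneq B\subsetneq A\subsetneq \Omega$, verifies the identity $\pi_C(B|A)=\pi(B|C\cap A)$ from the derivation above, and reproduces the total-probability step that would force $\pi(B|A)=\pi(B)$ if $(B|A)$ were an event of $\mathcal{M}$:

```python
# Toy check of Lewis' triviality mechanics (hypothetical numeric values).
Omega = frozenset({0, 1, 2, 3})
B = frozenset({0})            # emptyset ⊊ B ⊊ A ⊊ Omega
A = frozenset({0, 1, 2})
weights = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}   # strictly positive probability

def pi(E):
    return sum(weights[w] for w in E)

def cond(D, C):
    return pi(C & D) / pi(C)  # pi(D|C) = pi(C ∩ D) / pi(C)

notB = Omega - B

# Identity used in the derivation: pi_C(B|A) = pi(B | C ∩ A), when pi(A∩C)>0.
for C in (B, notB):
    lhs = (pi(C & A & B) / pi(C)) / (pi(C & A) / pi(C))
    assert abs(lhs - cond(B, C & A)) < 1e-12

# Total probability over {B, ~B}, *as if* (B|A) were an event of M whose
# probability conditioned on C equals pi(B | C ∩ A):
total = cond(B, B & A) * pi(B) + cond(B, notB & A) * pi(notB)
print(total, pi(B), cond(B, A))
```

The decomposition yields $\pi(B)=0.1$ while $\pi(B|A)=1/6$, exhibiting the contradiction.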
\paragraph{Probabilistic logic and Bayesian logic.}
Probabilistic logic, as defined by Nilsson~\cite{nilss}, has been widely studied in order to model and manipulate uncertain information.
It traces back to the seminal work of Boole~\cite{boole}.
In probabilistic logic, the knowledge, while logically encoded by classical propositions, is expressed by means of constraints on the probability over these propositions.
For example, the knowledge over the propositions $A,B$ may be described by:
\begin{equation}
\label{Eq:DBL:v2:BL:1}
v_1\le p(A)\le v_2\quad\mbox{and}\quad v_3\le p(A\rightarrow B)\le v_4 \;,
\end{equation}
where the $v_i$\,, $1\le i\le 4$\,, are known bounds on the probabilities.
Equations like (\ref{Eq:DBL:v2:BL:1}) turn out to be a set of linear constraints on $p$, when expressed over the generating propositions $A\wedge B,A\wedge \neg B,\neg A\wedge B,\neg A\wedge \neg B$.
It is then possible to characterize all the possible values of $p$ by means of a linear system.
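As an illustration, the feasible range of $p(B)$ under constraints of the form (\ref{Eq:DBL:v2:BL:1}) can be recovered by optimizing over the simplex of the four generating propositions. The sketch below (our illustration; the bounds $v_1,\dots,v_4$ are hypothetical values) uses a brute-force grid in place of a linear-programming solver:

```python
# Probabilistic-logic sketch (hypothetical bounds v1..v4): bound p(B) given
# v1 <= p(A) <= v2 and v3 <= p(A -> B) <= v4, over the generating
# propositions w = (p(A∧B), p(A∧¬B), p(¬A∧B), p(¬A∧¬B)).
v1, v2, v3, v4 = 0.6, 0.8, 0.9, 1.0

step, eps = 0.02, 1e-9
grid = [i * step for i in range(int(1 / step) + 1)]
lo, hi = 1.0, 0.0
for w0 in grid:                       # w0 = p(A∧B)
    for w1 in grid:                   # w1 = p(A∧¬B)
        if w0 + w1 > 1 + eps:
            break
        for w2 in grid:               # w2 = p(¬A∧B)
            w3 = 1 - w0 - w1 - w2     # w3 = p(¬A∧¬B), probabilities sum to 1
            if w3 < -eps:
                break
            pA, pAimpB = w0 + w1, w0 + w2 + w3      # p(A), p(¬A∨B)
            if v1 - eps <= pA <= v2 + eps and v3 - eps <= pAimpB <= v4 + eps:
                lo, hi = min(lo, w0 + w2), max(hi, w0 + w2)
print(f"{lo:.3f} <= p(B) <= {hi:.3f}")
```

With these bounds the program yields $0.5\le p(B)\le 1$; a linear-programming solver would give the same interval exactly, since all constraints are linear in $w$.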
Notice that probabilistic logic by itself does not manipulate conditional probabilities or any notion of independence.
Proposals for extending the probabilistic logic to conditionals have appeared rather early~\cite{adams},
but Andersen and Hooker~\cite{BayesianLogic} introduced an efficient modeling and solution of such problems.
This new paradigm for manipulating Bayesian probabilistic constraints has been called \emph{Bayesian Logic}.
It is not linear.
For example, a constraint like $p(A\wedge B)=p(A)p(B)$ is essentially non-linear.
Constraints involving both conditional and non-conditional probabilities also generate non-linearity.
In~\cite{BayesianLogic}, Andersen and Hooker expose a methodology for solving these non-linear programs.
In particular, the structure of the Bayesian Network is being used in order to reduce the number of non-linear constraints.
\\[5pt]
\emph{Bayesian Logic} is a paradigm for solving probabilistic constraint programs, which involve Bayesian constraints.
Since it does not construct the Bayesian conditional as a strict logical operator, this theory is not affected by Lewis' triviality.
\emph{Bayesian Logic} departs fundamentally from our approach, since \emph{Deterministic Bayesian Logic} intends to build the logic underlying the Bayesian conditional prior to the notion of probability.
\paragraph{Conditional Event algebra.}
In Conditional Event Algebra~\cite{calabrese}, the conditional could be seen as an external operator $(\,|\,)$\,, which maps pairs of unconditioned propositions toward an \emph{external} Boolean space.
There are numerous possible constructions of a CEA.
Some typical properties related to the Bayesian conditional are generally implemented:
\begin{itemize}
\item Inference property: $(a|b\wedge c)\wedge (b|c)=(a\wedge b|c)\,,$
\item Boolean compatibility:
$(a\wedge b|c)=(a|c)\wedge(b|c)\quad\mbox{and}\quad(a\vee b|c)=(a|c)\vee(b|c)\,.$
\end{itemize}
But most CEAs provide conditional rules which are richer than the strict Bayesian conditional, and in particular compute the combination of any pair of conditionals.
\\[5pt]
The counterpart of such nice properties is the necessity to restrict the conditional to unconditioned propositions.
The external space hypothesis is thus fundamental here.
CEAs are practically restricted to only one level of conditioning, and usually avoid any interference between unconditioned and conditioned propositions.
These restrictions are the way, by which CEAs avoid Lewis' triviality.
\paragraph{Conditional logics.}
\emph{Conditional} is an ambiguous word, since its meaning varies with the community.
Even the classical inference, $\phi\rightarrow\psi\equiv \neg\phi\vee\psi$\,, is called \emph{material conditional} by some.
Although classical inference is used systematically by mathematicians, its disjunctive definition makes it improper for some conditions of use.
For example, it is known to be inherently non-constructive, an issue which traces back to the foundation of modern mathematics~\cite{intuitionism}.
\\[5pt]
Material conditional could be particularly improper for describing the logic of human mind.
For example, consider the sentences:
\begin{enumerate}
\item \label{dbl:sentence:1} ``If Robert were in Berlin, then he would be in France''\,,
\item \label{dbl:sentence:2} ``If Robert were in Berlin, then he would be in Germany''\,.
\end{enumerate}
Since Germany and France are two distinct countries, a human will say that sentence~\ref{dbl:sentence:1} is false, while sentence~\ref{dbl:sentence:2} is true.
For a human, moreover, the meaning of the sentences is \emph{independent} of the fact that Robert is in Berlin or not.
Now, interpreting~\ref{dbl:sentence:1} as a material conditional, it happens that this sentence is true when Robert is not in Berlin.
Sentences~\ref{dbl:sentence:1} and~\ref{dbl:sentence:2} should not be actually interpreted as material conditionals.
In fact, they are called \emph{Counterfactual conditionals}, and their truth does not depend on the truth of their hypotheses and conclusions.
The philosophers David Lewis and Robert Stalnaker have done fundamental works on counterfactual conditionals~\cite{Lewis:counterfactuals,stalnaker}.
While defining counterfactual conditionals
(an example of such conditional, VCU, is detailed in section~\ref{Section:Theorem:DBL:sub:2}), they based their model constructions on the \emph{possible world semantics} of modal logic.
Besides, counterfactuals and other related conditionals are deeply connected to the notion of logical modalities~\cite{giordano2}.
\\[5pt]
Actually, if we interpret the Bayesian conditional as a probabilistic conditional proposition, \emph{i.e.} $p(B|A)=p(A > B)$, one derives $p(A)p(A > B)=p(A \wedge B)=p\bigl(A\wedge(A > B)\bigr)$, which means a probabilistic independence between $(A > B)$ and $A$.
So, it is tempting to interpret the Bayesian conditional as a counterfactual.
Stalnaker claimed that it was possible to construct such conditional within the universe of events, so as to match the Bayesian conditional.
Lewis answered this conjecture negatively~\cite{lewis}.
Nevertheless, Lewis proposed an alternative interpretation of the probability $p(A > B)$ of the counterfactual $A>B$, called Imaging~\cite{lepage}, which gives up the strong constraint $p(A > B)=p(B|A)$.
\\[5pt]
In the end, it appears that (counterfactual) conditional logics are, in principle, a nice framework for interpreting the Bayesian inference, but the problem of the triviality has to be overcome.
As already explained in our previous discussion, the triviality should be avoided by constructing the conditionals outside the classical proposition space and extending the probability accordingly.
Now, existing conditional logics, often inspired by VCU, fail to implement some natural properties of the Bayesian conditional.
This is particularly true when considering the negation of propositions.
Since the conditional probability of the negation is obtained as the complement of the conditional probability, \emph{i.e.} $p(\sim B|A)=1-p(B|A)$, it seems natural to hypothesize a similar logical relation, \emph{e.g.} $(\neg\psi|\phi)\equiv\neg(\psi|\phi)$; \emph{notice that this relation is generally implemented by Conditional Event Algebras}.
This relation is not implemented by conditional logics in general (refer to the logic VCU defined in section~\ref{Section:Theorem:DBL:sub:2}).
In particular, the relation \mbox{$\neg(\phi>\psi)\equiv\phi>\neg\psi$} would contradict the axiom $\phi>\phi\mbox{ (Id)}$ which is widely accepted in the literature; refer to deduction (\ref{DBL:VCU:eq1}) in section~\ref{Section:Theorem:DBL:sub:2}.
\paragraph{Contribution.}
\emph{Bayesian logic} does not provide a logical interpretation of the Bayesian conditional, but rather a methodology for solving the program related to a probabilistic Bayesian modeling.
Existing \emph{conditional algebras} and \emph{conditional logics} are restricted or insufficient for characterizing the Bayesian conditional properly.
Our work intends to remedy these limitations by constructing a new conditional logic, denoted \emph{Deterministic Bayesian Logic} (DBL), which is in accordance with the Bayesian conditional.
The conditional operator is defined jointly with a meta-relation of logical independence.
The probabilistic Bayesian inference is recovered from the derived logical theorems and the logical independence.
This process implies an extension of probability from the unconditioned logic toward DBL.
As a final result, a theorem is proved that guarantees the existence of such an extension (Lewis' result is thus avoided).
\\[5pt]
Section~\ref{Section:Def:DBL} is dedicated to the definition of the Deterministic Bayesian Logic.
The languages, axioms and rules are introduced.
In section~\ref{Section:Theorem:DBL}, several theorems of the logic are derived.
A purely logical interpretation of Lewis' triviality is made, and DBL is compared with the known conditional logic VCU.
A model for DBL is constructed in section~\ref{Section:Model:DBL}.
A completeness theorem is derived.
The extension of probabilities over DBL is investigated in section~\ref{Section:Proba:DBL}.
The probabilistic Bayesian inference is recovered from this extension.
The paper is then concluded.
\section{Definition of the logic}
\label{Section:Def:DBL}
The \emph{Deterministic Bayesian Logic} is defined now.
This definition implies a notion of logical independence, which is related to the proof of the propositions.
Typically, the following property holds true for the models of our logic:
\begin{equation}\label{eq:fond:0}\rule{0pt}{0pt}\qquad\begin{array}{@{}l@{}}
\mbox{Assume }\phi\mbox{ and }\psi\mbox{ to be logically independent.}\\
\mbox{Then, if }\phi\vee\psi\mbox{ is a tautology, }\phi\mbox{ is a tautology or }\psi\mbox{ is a tautology}\,.
\end{array}\end{equation}
In this document, we propose a definition based on the \emph{sequent} formalism (a previous, modally embedded definition exists~\cite{Dambreville:DmBL}).
However, although the definition is formalized by means of sequents, it does not retain the rules of sequent calculus~\cite{girard}.
As a gentle introduction to the logic, informal intuitions about the Bayesian inference are given now.
\subsection{Logical relations within Bayesian probability}
\label{Section:Def:DBL:intro}
Here, some typical probabilistic relations are considered, and logical theorems
and axioms are extrapolated from these relations.
{\bf These extrapolations are not justified here;} it is the purpose of the paper to prove the coherence of the whole logic, while this paragraph is only dedicated to the intuitions behind the formalism.
\\[5pt]
The logic of a system may be seen as the collection of behaviors which are common to any instance of this system.
Let us consider the example of probability on a finite \emph{(unconditioned)} propositional space.
For convenience, define $\mathbb{P}$ as the set of strictly positive probabilities over this space, that is, $p\in\mathbb{P}$ is such that $p(\phi)>0$ for any non-empty proposition~$\phi$\,:
$$
\mathbb{P}=\bigl\{p\;\big/\;p\mbox{ is a probability and }\forall\phi\not\equiv\bot,\,p(\phi)>0\bigr\}\,.
$$
Then, the following properties are easily derived for \emph{unconditioned} propositions:
\begin{eqnarray}
&&\label{JYG:1:1} \forall p\in\mathbb{P},\,p(\phi)+p(\psi)=1\quad\mbox{implies}\quad\phi\equiv\neg\psi\;,
\\
&&\label{JYG:1:2} \forall p\in\mathbb{P},\,p(\phi)+p(\psi)\le p(\eta)+p(\zeta)\quad\mbox{implies}\quad\vdash(\phi\vee\psi)\rightarrow(\eta\vee\zeta)\;,
\end{eqnarray}
with corollary:
\begin{equation}
\label{JYG:1:3}\forall p\in\mathbb{P},\,p(\phi)=1\quad\mbox{implies}\quad\vdash\phi\;.
\end{equation}
For any $p\in\mathbb{P}$, define the conditional extension $\overline{p}$ by:
$$
\overline{p}(\psi|\phi)p(\phi)=p(\phi\wedge\psi)\;,
\quad\mbox{for any unconditioned propositions } \phi,\psi\,.
$$
Since results~(\ref{JYG:1:1}) to~(\ref{JYG:1:3}) hold for unconditioned propositions, we extrapolate them to some elementary conditional relations involving $\overline{p}$:
\begin{itemize}
\item It is noticed that $\forall p\in\mathbb{P},\;\overline{p}(\psi|\phi)+\overline{p}(\neg\psi|\phi)=1$\,.
Property~(\ref{JYG:1:1}) could be extrapolated to $\overline{p}(\psi|\phi)$ and $\overline{p}(\neg\psi|\phi)$, and then yields:
$$
\neg(\psi|\phi)\equiv(\neg\psi|\phi)\;.
$$
Of course, although intuitively sound, this relation is not justified mathematically.
This logical relation is implemented in DBL as the axiom b4, and is expressed as a sequent:
\begin{equation}\label{JYG:2:1}
\vdash\neg(\psi|\phi)\leftrightarrow(\neg\psi|\phi)\;.
\end{equation}
\item
It is known that $\forall p\in\mathbb{P},\;\overline{p}(\psi|\phi)+p(\bot)\le p(\neg\phi\vee\psi)+p(\bot)$\,.
Then, the extrapolation of property~(\ref{JYG:1:2}) yields:
\begin{equation}
\label{JYG:2:2}
\vdash(\psi|\phi)\rightarrow(\phi\rightarrow\psi)\;.
\end{equation}
This logical relation is implemented in DBL as the axiom b3.
\item
Similarly, it follows that $\forall p\in\mathbb{P},\;\overline{p}(\psi\vee\eta|\phi)+p(\bot)\le \overline{p}(\psi|\phi)+\overline{p}(\eta|\phi)$.
Then, the extrapolation of~(\ref{JYG:1:2}) yields:
$$
\vdash(\psi\vee\eta|\phi)\rightarrow\bigl((\psi|\phi)\vee(\eta|\phi)\bigr)\,.
$$
Together with~(\ref{JYG:2:1}), it is then deduced:
\begin{equation}
\label{JYG:2:3}
\vdash(\psi\rightarrow\eta|\phi)\rightarrow\bigl((\psi|\phi)\rightarrow(\eta|\phi)\bigr)\;,
\end{equation}
which constitutes a modus ponens for the conditional.
This logical relation is implemented in DBL as the axiom b2.
\item Another interesting relation is:
$$
\vdash\phi\rightarrow\psi\mbox{ implies }\vdash\neg\phi\mbox{ or }\forall p\in\mathbb{P},\;\overline{p}(\psi|\phi)=1\;.
$$
Extrapolating~(\ref{JYG:1:3}), it follows:
$$
\vdash\phi\rightarrow\psi\mbox{ implies }\vdash\neg\phi\mbox{ or }\vdash(\psi|\phi)\;.
$$
This logical relation is implemented in DBL as the axiom b1, and is expressed as a sequent:
\begin{equation}\label{JYG:2:4}
\phi\rightarrow\psi\vdash\neg\phi,(\psi|\phi)\;.
\end{equation}
\emph{Notice that in DBL, sequent $\vdash\phi,\psi$ is not equivalent to $\vdash\phi\vee\psi$\,.}
In fact, this axiom is related to the property~(\ref{eq:fond:0})\,, mentioned previously.
\item From the Bayesian inference, it is known that $\overline{p}(\psi|\phi)=\overline{p}(\psi)$ implies $\overline{p}(\phi|\psi)=\overline{p}(\phi)$\,.
By similar extrapolation, it is then derived:
$$\vdash(\psi|\phi)\leftrightarrow\psi\mbox{ implies }\vdash(\phi|\psi)\leftrightarrow\phi\,.$$
Notice, however, that this relation does not make sense in general, when $\phi$ and $\psi$ are both unconditioned propositions.
This logical relation is implemented in DBL as the axiom b5, and is expressed as a sequent:
\begin{equation}\label{JYG:2:5}
\psi\times\phi\vdash\phi\times\psi\,,
\mbox{ where }
\phi\times\psi=(\phi|\psi)\leftrightarrow\phi\,,
\mbox{ and }
\psi\times\phi=(\psi|\phi)\leftrightarrow\psi\,.
\end{equation}
\end{itemize}
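The probabilistic premises behind these extrapolations are easy to verify numerically. The following sketch (our illustration, using an arbitrary strictly positive distribution over three atoms) checks the complement identity behind b4, the bound behind b3, the subadditivity behind b2, and the symmetry of probabilistic independence behind b5:

```python
import random

random.seed(0)
worlds = range(8)                                   # 3 atoms -> 8 valuations
w = {s: random.uniform(0.1, 1.0) for s in worlds}   # strictly positive weights
Z = sum(w.values())

def p(E):
    return sum(w[s] for s in E) / Z

def cond(psi, phi):
    return p(phi & psi) / p(phi)                    # p(psi|phi)

def atom(i):
    return frozenset(s for s in worlds if (s >> i) & 1)

def neg(E):
    return frozenset(worlds) - E

phi, psi, eta = atom(0), atom(1), atom(2)

# Behind b4:  p(psi|phi) + p(¬psi|phi) = 1
assert abs(cond(psi, phi) + cond(neg(psi), phi) - 1) < 1e-12
# Behind b3:  p(psi|phi) <= p(¬phi ∨ psi)
assert cond(psi, phi) <= p(neg(phi) | psi) + 1e-12
# Behind b2:  p(psi∨eta|phi) <= p(psi|phi) + p(eta|phi)
assert cond(psi | eta, phi) <= cond(psi, phi) + cond(eta, phi) + 1e-12
# Behind b5:  p(psi|phi)=p(psi) iff p(phi|psi)=p(phi); both restate
# p(phi∧psi) = p(phi)p(psi), hence the symmetry.
lr = abs(cond(psi, phi) - p(psi)) < 1e-12
rl = abs(cond(phi, psi) - p(phi)) < 1e-12
assert lr == rl
print("all probabilistic premises verified")
```

Of course, these checks concern only the probabilistic side; the point of the paper is that the extrapolated logical axioms are themselves coherent.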
In fact, these extrapolated axioms imply constraints, when extending the probabilities $p\in\mathbb{P}$ over the conditioned propositions.
This paper intends to prove that these constraints are actually valid, in regard to Lewis' triviality.
It is now time for the logic definition.
\subsection{Language}
Let $\Theta=\{\theta_i/i\in I\}$ be a \emph{finite} set of atomic propositions.
\\[5pt]
The language $\mathcal{L}_C$ of the classical logic related to $\Theta$ is the smallest set such that:
$$\left\{\begin{array}{@{\,}l@{}}
\Theta\subset\mathcal{L}_C
\\[5pt]
\neg\phi\in\mathcal{L}_C \mbox{ and }\phi\rightarrow\psi\in\mathcal{L}_C
\mbox{ for any }\phi,\psi\in\mathcal{L}_C
\end{array}\right.$$
The language $\mathcal{L}$ of the D\emph{eterministic} B\emph{ayesian} L\emph{ogic} related to $\Theta$ is the smallest set such that:
$$\left\{\begin{array}{@{\,}l@{}}
\Theta\subset\mathcal{L}
\\[5pt]
\neg\phi\in\mathcal{L}\;,\ \phi\rightarrow\psi\in\mathcal{L}\mbox{ and }(\psi|\phi)\in\mathcal{L}
\mbox{ for any }\phi,\psi\in\mathcal{L}
\end{array}\right.$$
The following abbreviations are defined:
\begin{itemize}
\item
$\phi\vee\psi=\neg\phi\rightarrow\psi$\,,\quad $\phi\wedge\psi=\neg(\neg\phi\vee\neg\psi)$\quad
and\quad
$\phi\leftrightarrow\psi=(\phi\rightarrow\psi)\wedge(\psi\rightarrow\phi)$\,,
\item $\psi\times\phi=(\psi|\phi)\leftrightarrow\psi$\,,
\item A proposition $\theta_1\in\Theta$ is chosen, and it is then denoted $\top=\theta_1\rightarrow\theta_1$ and $\bot=\neg\top$\,.
\end{itemize}
The operator $\times$ is involved (subsequently) in the definition of the logical independence, though it is not sufficient to characterize this meta-relation by itself.
$\top$ and $\bot$ are idealized notations for the tautology and the contradiction.
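The inductive definitions of $\mathcal{L}_C$, $\mathcal{L}$ and the abbreviations translate directly into a small abstract syntax. The sketch below (our illustration; the encoding and names are ours) represents the formulas and checks that the conditional is exactly what separates $\mathcal{L}$ from $\mathcal{L}_C$:

```python
from dataclasses import dataclass

# Formulas of L: atoms, negation, implication, and the conditional (psi|phi).
@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Neg:
    sub: object

@dataclass(frozen=True)
class Impl:
    left: object
    right: object

@dataclass(frozen=True)
class Cond:          # the conditional (psi | phi)
    psi: object
    phi: object

# Abbreviations, exactly as defined above:
def Or(a, b):  return Impl(Neg(a), b)                  # a∨b = ¬a→b
def And(a, b): return Neg(Or(Neg(a), Neg(b)))          # a∧b = ¬(¬a∨¬b)
def Iff(a, b): return And(Impl(a, b), Impl(b, a))
def times(psi, phi): return Iff(Cond(psi, phi), psi)   # ψ×φ = (ψ|φ)↔ψ

theta1 = Atom("theta_1")
TOP = Impl(theta1, theta1)           # ⊤ = θ1→θ1
BOT = Neg(TOP)                       # ⊥ = ¬⊤

def in_LC(f):
    """L_C is the Cond-free fragment of L."""
    if isinstance(f, Atom):
        return True
    if isinstance(f, Neg):
        return in_LC(f.sub)
    if isinstance(f, Impl):
        return in_LC(f.left) and in_LC(f.right)
    return False                     # Cond lies outside L_C

assert in_LC(BOT)
assert not in_LC(times(Atom("psi"), Atom("phi")))
```

Note that $\psi\times\phi$ is an ordinary formula of $\mathcal{L}$, built from the abbreviations; it is the sequents, introduced next, that turn it into the independence meta-relation.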
\subsection{Sequents}
The set of finite sequences of propositions of $\mathcal{L}$, denoted $\mathcal{L}^\ast$, is defined by:
\begin{equation}\label{DBL:def:sequence:eq1}
\mathcal{L}^\ast=\bigcup_{n=0}^{\infty}\mathcal{L}^n\,,
\end{equation}
where $\mathcal{L}^n$ is the set of $n$-tuples of $\mathcal{L}$.
In particular, $\mathcal{L}^0=\{\emptyset\}$ where $\emptyset$ is the empty sequence.
\paragraph{Notations.}
Subsequently, finite sequences of $\mathcal{L}^\ast$ are denoted without brackets.
\\[5pt]
Given a finite sequence $\Gamma=\gamma_1,\dots,\gamma_n$ of $\mathcal{L}^\ast$, then $\{\Gamma\}=\{\gamma_1,\dots,\gamma_n\}$ is the set of all components of the sequence $\Gamma$\,.
Notice that the set $\{\Gamma\}$ may contain fewer components than the sequence $\Gamma$, since a sequence may repeat the same proposition.
\\[5pt]
Let $\Gamma=\gamma_1,\dots,\gamma_n$ and $\Delta=\delta_1,\dots,\delta_m$ be two finite sequences of $\mathcal{L}^\ast$\,.
Then $\Gamma,\Delta$ is the sequence $\gamma_1,\dots,\gamma_n\;,\;\delta_1,\dots,\delta_m$\,, obtained as a concatenation of $\Gamma$ and $\Delta$.
\paragraph{Definition.}
The set of sequents of $\mathcal{L}$, denoted $\mathbf{Seq}$, is defined as the set of pairs of finite sequences of $\mathcal{L}$:
\begin{equation}\label{DBL:def:sequence:eq2}
\mathbf{Seq}=\mathcal{L}^\ast\times\mathcal{L}^\ast\,.
\end{equation}
\paragraph{Notation.}
Given a subset $X\subset\mathbf{Seq}$ of sequents and a sequent $(\Gamma,\Delta)\in \mathbf{Seq}$, the meta-relation $\Gamma\vdash_X\Delta$ is defined by:
\begin{equation}\label{DBL:def:sequence:eq3}
\Gamma\vdash_X\Delta
\mbox{ if and only if }
(\Gamma,\Delta)\in X\;.
\end{equation}
When $\Gamma=\emptyset$ (resp. $\Delta=\emptyset$), the notation $\vdash_X\Delta$ (resp. $\Gamma\vdash_X$) is used instead of $\Gamma\vdash_X\Delta$.
\\[5pt]
Subsequently are defined the set of sequents deducible in DBL, denoted $\mathcal{B}$, and the set of sequents deducible classically, denoted $\mathcal{C}$.
These sets are defined by means of rules and axioms of construction.
Such axiomatic constructions depart from common sequent calculi, like LK.
\subsection{Rules and axioms}
The sets $\mathcal{B}\subset\mathbf{Seq}$, $\mathcal{B}_\ast\subset\mathbf{Seq}$ and $\mathcal{C}\subset\mathbf{Seq}$ are defined as the smallest subsets of $\mathbf{Seq}$ verifying:
\begin{itemize}
\item
For $X\in\{\mathcal{B},\mathcal{B}_\ast,\mathcal{C}\}$:
\begin{description}
\item[CUT.] $\Gamma\vdash_X\Delta,\phi$ and $\Lambda,\phi\vdash_X\Sigma$ implies $\Gamma,\Lambda\vdash_X\Delta,\Sigma$\,,
\item[STRUCT.]
Assume $\{\Gamma\}\subset\{\Lambda\}\cup\{\top\}$ and $\{\Delta\}\subset\{\Sigma\}\cup\{\bot\}$\,.\\
Then $\Gamma\vdash_X\Delta$ implies $\Lambda\vdash_X\Sigma$\,.
\item[Modus ponens.]$\phi,\phi\rightarrow\psi\vdash_X\psi$\,,
\item[Classical Axioms:]
\item[c1.] $\vdash_X \phi\rightarrow(\psi\rightarrow\phi)$\,,
\item[c2.] $\vdash_X (\eta\rightarrow(\phi\rightarrow\psi))\rightarrow((\eta\rightarrow\phi)\rightarrow(\eta\rightarrow\psi))$\,,
\item[c3.] $\vdash_X (\neg\phi\rightarrow\neg\psi)\rightarrow((\neg\phi\rightarrow\psi)\rightarrow\phi)$\,,
\end{description}
\item For $X\in\{\mathcal{B},\mathcal{B}_\ast\}$:
\begin{description}
\item[b1.] $\phi\rightarrow\psi\vdash_X\neg\phi,(\psi|\phi)$\,,
\item[b2.] $\vdash_X(\psi\rightarrow\eta|\phi)\rightarrow\bigl((\psi|\phi)\rightarrow(\eta|\phi)\bigr)$\,,
\item[b3.] $\vdash_X(\psi|\phi)\rightarrow(\phi\rightarrow\psi)$\,,
\item[b4.] $\vdash_X\neg(\neg\psi|\phi)\leftrightarrow(\psi|\phi)$\,,
\end{description}
\item For $X=\mathcal{B}$:
\begin{description}
\item[b5.] \emph{(logical independence is symmetric)}~:
$\psi\times\phi\vdash_X\phi\times\psi$\,,
\end{description}
\item For $X=\mathcal{B}_\ast$:
\begin{description}
\item[b5.weak.A.]
$\psi\times\neg\phi\vdash_X\psi\times\phi$
and
$\psi\times\phi\vdash_X\psi\times\neg\phi$\,,
\item[b5.weak.B.] $\psi\leftrightarrow\eta\vdash_X(\phi|\psi)\leftrightarrow(\phi|\eta)$\,.
\end{description}
\end{itemize}
$\mathcal{B}$ is the set of sequents deducible in DBL.
$\mathcal{C}$ is the set of sequents deducible classically.
The axioms b5.weak.A and b5.weak.B are actually a weakening of b5 (refer to section~\ref{Section:Theorem:DBL}).
The set $\mathcal{B}_\ast$ is thus related to a weakened version of DBL, denoted DBL$_\ast$.
In fact, DBL$_\ast$ is a quite useful intermediate for the construction of a model of DBL.
It happens that the model of DBL$_\ast$ is constructed directly, while the model of DBL is derived from the model of DBL$_\ast$.
\paragraph{Notations.}
The following meta-abbreviations are defined for $X\in\{\mathcal{C},\mathcal{B},\mathcal{B}_\ast\}$\,:
\begin{itemize}
\item $\phi\equiv_X\psi$ means $\vdash_X\phi\leftrightarrow\psi$.
\end{itemize}
The relation $\equiv_X$ is the logical equivalence related to the deduction system $X$.
\\[5pt]
In order to alleviate the notations, the subscripts $_{\mathcal{B}}$ and $_{\mathcal{B}_\ast}$ are omitted.
In particular, $\vdash$ (resp. $\equiv$) is used instead of $\vdash_{\mathcal{B}}$ or $\vdash_{\mathcal{B}_\ast}$ (resp. $\equiv_{\mathcal{B}}$ or $\equiv_{\mathcal{B}_\ast}$).
\paragraph{Notations relative to $(\cdot|\cdot)$.}
The set $\bigl\{
\eta\in\mathcal{L}
\;\big/\;
\exists\psi\in\mathcal{L},\,\eta\equiv(\psi|\phi)
\bigr\}$ is called the \emph{sub-universe} of $\phi$.
\\[5pt]
The \emph{logical} independence between propositions is a meta-relation expressed from the operator $\times$ and the sequents:
\begin{center}
By definition, $\psi$ is logically independent of $\phi$\,, when $\vdash\psi\times\phi$.
\end{center}
The logical independence and the conditional $(|)$ are thus jointly defined.
\paragraph{Interpretation.}
The construction of the model in section~\ref{Section:Model:DBL} infers the following interpretation of sequent $\Gamma\vdash\Delta$:
\begin{equation}\label{DBL:def:Sequent:interpret:eq1}\rule{0pt}{0pt}\hspace{40pt}\begin{array}{@{}l@{}}
\mbox{If all propositions }\gamma\in\{\Gamma\}\mbox{ are tautologies of the model,}
\\\mbox{then there is a proposition }\delta\in\{\Delta\}\mbox{ which is a tautology of the model.}
\end{array}\end{equation}
\paragraph{Meaning of the rules and axioms.}
Axioms $c\ast$ are well-known minimal axioms of classical logic.
Axioms $b\ast$ have been introduced in section~\ref{Section:Def:DBL:intro}, and are thought to describe the logical behavior of a Bayesian operator.
Axiom \emph{modus ponens} is the modus ponens rule encoded within a sequent formalism.
Rule CUT is the well known \emph{cut} rule for merging sequent proofs.
Rule STRUCT is a structural rule for the sequents, which subsumes weakening, contraction and permutation.
Moreover, it makes it possible to remove $\top$ (resp. $\bot$) from the left (resp. right) side of a sequent.
In particular, STRUCT makes $\Gamma\vdash\Delta$ and $\Gamma,\top\vdash\Delta,\bot$ equivalent.
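As a concrete reading of STRUCT, its side condition is a pair of set inclusions on the components of the sequents. A minimal sketch (our illustration), with sequents represented as pairs of tuples of propositions:

```python
TOP, BOT = "⊤", "⊥"

def struct_step(known, target):
    """Can STRUCT derive target = (Λ,Σ) from known = (Γ,Δ)?
    Side condition: {Γ} ⊆ {Λ} ∪ {⊤} and {Δ} ⊆ {Σ} ∪ {⊥}."""
    (gamma, delta), (lam, sigma) = known, target
    return (set(gamma) <= set(lam) | {TOP}
            and set(delta) <= set(sigma) | {BOT})

# Weakening, contraction and permutation are instances:
assert struct_step((("a",), ("b",)), (("a", "c"), ("b",)))       # weakening
assert struct_step((("a", "a"), ("b",)), (("a",), ("b",)))       # contraction
assert struct_step((("a", "c"), ("b",)), (("c", "a"), ("b",)))   # permutation
# ⊤ may be dropped on the left, ⊥ on the right:
assert struct_step((("a", TOP), ("b", BOT)), (("a",), ("b",)))
# But a genuine premise cannot be dropped:
assert not struct_step((("a", "c"), ("b",)), (("a",), ("b",)))
```

The last assertion illustrates that STRUCT only weakens, contracts, permutes, or removes $\top$/$\bot$; it never discards substantive material.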
\subsection{DBL extends classical logic.}
It is noticed that the \emph{classical logic $C$} is obtained by restricting DBL to the language $\mathcal{L}_C$ and to the deduction rules CUT and STRUCT, the axiom \emph{modus ponens} and the classical axioms $c\ast$ described previously (\emph{cf.} appendix~\ref{proof:clasrestric}).
More precisely, if $\phi$ is a theorem of classical logic, then it is deduced $\vdash_C\phi$.
So, in some common sense, DBL extends classical logic.
However, \emph{the rules of LK} (a common sequent calculus for classical logic) \emph{do not work anymore in our system}, and moreover, there are sequents deduced from LK which cannot be derived from our deduction system.
Examples are provided in appendix~\ref{proof:clasrestric}.
Thus, one has to be careful in the deduction process of classical sequents.
\\[5pt]
For DBL to be properly seen as an extension of $C$, the following properties are desirable:
\begin{itemize}
\item If $\phi\in\mathcal{L}_C$, then $\vdash \phi$ implies $\vdash_C \phi$\,,
\item For any probability $p$ defined over $\mathcal{L}_C$\,, there is a probability $\overline{p}$ over $\mathcal{L}$ which extends $p$ and verifies $\overline{p}(\phi\wedge\psi)=\overline{p}(\phi)\overline{p}(\psi)$, for any $\phi,\psi\in\mathcal{L}$ such that $\vdash\psi\times\phi$ (logical independence)\,.
\end{itemize}
The first property ensures that the DBL axioms do not trivialize classical logic.
The second property ensures that DBL is not just a trivial extension of $C$, and in particular avoids Lewis' triviality.
These results are amongst the main contributions of this paper.
Another main contribution of the paper is that such an extension $\overline{p}$ actually implies the probabilistic Bayesian inference:
$$
\overline{p}\bigl((\psi|\phi)\bigr)p(\phi)=p(\phi\wedge\psi)\,,\mbox{ for any }\phi,\psi\in\mathcal{L}_C\,.
$$
These results are obtained from the model constructed for DBL.
But first, the following section studies the logical consequences of the rules and axioms of DBL.
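As a numerical aside, the identity targeted for $\overline{p}$ is the ordinary law of conditional probability. The following Python sketch checks it on a hypothetical joint distribution over two binary atoms (all names and numbers are illustrative, not part of the formal development):

```python
# Hypothetical joint distribution over two binary atoms (phi, psi):
# a dict mapping truth-value pairs to probability mass.
joint = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

p_phi = sum(m for (v_phi, _), m in joint.items() if v_phi == 1)   # p(phi)
p_phi_and_psi = joint[(1, 1)]                                     # p(phi /\ psi)
p_psi_given_phi = p_phi_and_psi / p_phi                           # pbar((psi|phi))

# The Bayesian inference identity: pbar((psi|phi)) * p(phi) = p(phi /\ psi)
assert abs(p_psi_given_phi * p_phi - p_phi_and_psi) < 1e-12
print(round(p_psi_given_phi, 4))  # -> 0.5714
```

The point of the theorem, of course, is that this identity is forced by the logical axioms rather than postulated.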
\section{Logical theorems and comparison}
\label{Section:Theorem:DBL}
Subsequently, theorems of DBL are derived.
Since both DBL and DBL$_\ast$ are studied, \emph{the possibly needed axioms $b5\ast$ are indicated in brackets.}
\\[5pt]
First of all, it happens that both DBL and DBL$_\ast$ imply the classical tautologies.
In particular, the following property is proved in appendix~\ref{proof:clasrestric}:
\begin{center}
Assume that $\phi$ is a tautology of classical logic.
Then $\vdash_C\phi$ is deduced from the classical subsystem of DBL.
\end{center}
For this reason, the theorems of classical logic are assumed without proof from now on, so that many details in the deductions are left implicit.
\subsection{Theorems}
\label{Section:Theorem:DBL:sub:1}
\emph{The proofs are done in appendix~\ref{proof:logth}.}
The next theorem is proved here as an example.
\subsubsection{The full universe}\label{DBL:theo:1} $\phi\vdash\psi\times\phi$\,.
In particular $(\psi|\top)\equiv\psi$\,.\\[5pt]
Interpretation: a tautology is independent of any other proposition and its sub-universe is the whole universe.
\begin{description}
\item[Proof.]
From axiom b3, it comes $\vdash(\psi|\phi)\rightarrow(\phi\rightarrow\psi)$ and $\vdash(\neg\psi|\phi)\rightarrow(\phi\rightarrow\neg\psi)$\,.\\
Then $\vdash\phi\rightarrow\bigl((\psi|\phi)\rightarrow\psi\bigr)$ and $\vdash\phi\rightarrow\bigl((\neg\psi|\phi)\rightarrow\neg\psi\bigr)$\,, by classical deductions.\\
Applying b4 (with CUT and classical deductions) yields $\vdash\phi\rightarrow\bigl((\psi|\phi)\leftrightarrow\psi\bigr)$.\\
By applying modus ponens and CUT, it follows $\phi\vdash(\psi|\phi)\leftrightarrow\psi$\,.
\\[5pt]
The remaining proof is obvious.
\item[$\Box\Box\Box$]\rule{0pt}{0pt}
\end{description}
\subsubsection{Axioms order}\label{DBL:theo:2} Axiom b5 implies b5.weak.A.
\subsubsection{The empty universe [needs b5.weak.A]}\label{DBL:theo:3} $\neg\phi\vdash\psi\times\phi$\,.
In particular $(\psi|\bot)\equiv\psi$\,.
\subsubsection{Left equivalences}\label{DBL:theo:4}
$\psi\leftrightarrow\eta\vdash\neg\phi,(\psi|\phi)\leftrightarrow(\eta|\phi)$\,.
\\[5pt]
\emph{Corollary [b5.weak.A].} $\psi\leftrightarrow\eta\vdash(\psi|\phi)\leftrightarrow(\eta|\phi)$.
\\[5pt]
\emph{Corollary 2 [b5.weak.A].} $\psi\equiv\eta$ implies $(\psi|\phi)\equiv(\eta|\phi)$.
\\
The proof of Corollary~2 is immediate from the first corollary.
\subsubsection{Sub-universes are classical [b5.weak.A]}\label{DBL:theo:5}
\begin{itemize}
\item $(\neg\psi|\phi)\equiv\neg(\psi|\phi)$\,,
\item $(\psi\wedge\eta|\phi)\equiv(\psi|\phi)\wedge(\eta|\phi)$\,,
\item $(\psi\vee\eta|\phi)\equiv(\psi|\phi)\vee(\eta|\phi)$\,,
\item $(\psi\rightarrow\eta|\phi)\equiv(\psi|\phi)\rightarrow(\eta|\phi)$\,.
\end{itemize}
\subsubsection{Evaluating $(\top|\cdot)$ and $(\bot|\cdot)$ [b5.weak.A]}\label{DBL:theo:6} $\psi\vdash(\psi|\phi)$\,.
In particular $(\top|\phi)\equiv\top$ and $(\bot|\phi)\equiv\bot$\,.
\subsubsection{Inference property}\label{DBL:theo:7}
$(\psi|\phi)\wedge\phi\equiv\phi\wedge\psi$\,.
\\[5pt]
Interpretation: the Bayesian conditional is actually an inference.
\subsubsection{Introspection}\label{DBL:theo:8} $\vdash\neg\phi,(\phi|\phi)$\,.\\[5pt]
Interpretation: a non-empty proposition sees itself as always true. \\
Notice that this property is compliant with $(\bot|\bot)\equiv\bot$\,, itself derived from~\ref{DBL:theo:3}.
\subsubsection{Inter-independence [b5.weak.A]}\label{DBL:theo:9} $\vdash(\psi|\phi)\times\phi$\,.\\[5pt]
Interpretation: a proposition is independent of its sub-universe.
\subsubsection{Independence invariance [b5.weak.A]}\label{DBL:theo:10}
$$
\begin{array}{@{}l@{}}
\psi\times\phi \vdash \neg\psi\times\phi\;,
\\
\psi\times\phi,\eta\times\phi\vdash(\psi\wedge\eta)\times\phi\;,
\\
\psi\leftrightarrow\eta,\psi\times\phi\vdash\eta\times\phi\;.
\end{array}
$$
\subsubsection{Narcissistic independence}\label{DBL:theo:11} $\phi\times\phi\vdash\neg\phi,\phi$\,.
\\[5pt]
Interpretation: a proposition independent of itself is either a tautology or a contradiction.
\subsubsection{Independence and proof [b5.weak.A]}\label{DBL:theo:12}
$\psi\times\phi,\phi\vee\psi\vdash\phi,\psi$\,.\\[5pt]
Interpretation:
when propositions are independent and their disjunction is proved, then at least one proposition is ``proved''.
\subsubsection{Independence and regularity [b5.weak.A]}\label{DBL:theo:13}
$$
\phi\times\eta,\psi\times\eta,(\phi\wedge\eta)\rightarrow(\psi\wedge\eta)\vdash\neg\eta,\phi\rightarrow\psi\;.
$$
Interpretation: unless it is empty, a proposition may be removed from a logical equation, when it appears on both sides and is independent of the equation components.
\\[5pt]
\emph{Corollary.} $\vdash\phi\times\eta$\,, $\vdash\psi\times\eta$\,, $\neg\eta\vdash$ and $\phi\wedge\eta\equiv\psi\wedge\eta$ imply $\phi\equiv\psi$\,.
\\[5pt]
\emph{Corollary 2.} Being given $\psi$ and $\phi$ such that $\neg\phi\vdash$, then $(\psi|\phi)$ is uniquely defined as the solution of equation $X\wedge\phi\equiv\psi\wedge\phi$ (with unknown $X$) which is independent of $\phi$.
\subsubsection{Right equivalences [b5]}\label{DBL:theo:14}
$\psi\leftrightarrow\eta\vdash(\phi|\psi)\leftrightarrow(\phi|\eta)$ (proved with b5 but without b5.weak.B).
\\[5pt]
Interpretation: equivalence is compliant with the conditioning.
\\[5pt]
\emph{Corollary.} Axiom b5 implies b5.weak.B.
In particular, DBL$_\ast$ is weaker than DBL.
\\[5pt]
\emph{Corollary of b5 or b5.weak.B.} $\psi\equiv\eta$ implies $(\phi|\psi)\equiv(\phi|\eta)$.
\\[5pt]
Combined with the classical theorems and~\ref{DBL:theo:4}, this last result implies that the equivalence relation $\equiv$ is compliant with the logical operators of DBL/DBL$_\ast$.
In particular, replacing a sub-proposition with an equivalent sub-proposition within a theorem still yields a theorem.
\subsubsection{Reduction rule [b5]}\label{DBL:theo:15}
Axiom b5 implies $\bigl(\phi\big|(\psi|\phi)\bigr)\equiv\phi$\,.
\subsubsection{Markov Property [b5]}\label{DBL:theo:16}
$$
(\phi_t|\phi_{t-1})\times\phi_1,\dots,(\phi_t|\phi_{t-1})\times\phi_{t-2}
\ \vdash\
\neg\left(\bigwedge_{\tau=1}^{t-1}\phi_{\tau}\right)\;,\;
(\phi_t|\phi_{t-1})\leftrightarrow\left(\phi_t\left|\bigwedge_{\tau=1}^{t-1}\phi_{\tau}\right.\right)
\;.
$$
Interpretation: the Markov property holds, when the conditioning is independent of the past and the past is possible.
\subsubsection{Link between $\bigl((\eta|\psi)\big|\phi\bigr)$ and $(\eta|\phi\wedge\psi)$ [b5]}\label{DBL:theo:17}
It is derived:
$
\bigl((\eta|\psi)\big|\phi\bigr)\wedge\phi\wedge\psi\equiv(\eta|\psi)\wedge\phi\wedge\psi\equiv\phi\wedge\psi\wedge\eta\equiv(\eta|\phi\wedge\psi)\wedge(\phi\wedge\psi)\;.
$\\[5pt]
This is a quite limited result and it is \emph{tempting} to hypothesize the additional axiom ``$\bigl((\eta|\psi)\big|\phi\bigr)\equiv(\eta|\phi\wedge\psi)\quad\mbox{\small$(\ast)$}$''\,.
There is a really critical point here, since axiom~$(\ast)$ implies actually a logical counterpart to Lewis' triviality\,:
\begin{quote}
\emph{Let $\bigl((\eta|\psi)\big|\phi\bigr)\equiv(\eta|\phi\wedge\psi)\quad\mbox{\small$(\ast)$}$ be assumed as an axiom.\\
Then $\vdash\neg(\phi\wedge\psi),\phi\leftrightarrow\psi,\phi\times\psi$\,.}
\end{quote}
Interpretation: if $\phi$ and $\psi$ are not exclusive and not equivalent, then they are independent.
This is untenable and forbids the use of axiom~$(\ast)$.
\subsection{Comparison with the conditional logic VCU}
\label{Section:Theorem:DBL:sub:2}
The axioms of the conditional logic VCU (VCU is an abbreviation for the axioms system)~\cite{Lewis:counterfactuals} are considered here and compared to DBL.
This example is representative of the difference with some other conditional logics.
\emph{Theorems derived in section~\ref{Section:Theorem:DBL:sub:1} are referred to.}
\\[5pt]
The language of VCU involves a counterfactual inference operator $\Box\!\!\rightarrow$ in addition to the classical operators.
This operator is characterized by the axioms Ax.1~to~Ax.6 and the counterfactual rule CR expressed as follows:
\begin{description}
\item[(Ax.1)] $\phi\Box\!\!\rightarrow\phi$\,,
\item[(Ax.2)] $(\neg\phi\Box\!\!\rightarrow\phi)\rightarrow(\psi\Box\!\!\rightarrow\phi)$\,,
\item[(Ax.3)] $(\phi\Box\!\!\rightarrow\neg\psi)\vee(((\phi\wedge\psi)\Box\!\!\rightarrow\xi)\leftrightarrow(\phi\Box\!\!\rightarrow(\psi\rightarrow\xi)))$\,,
\item[(Ax.4)] $(\phi\Box\!\!\rightarrow\psi)\rightarrow(\phi\rightarrow\psi)$\,,
\item[(Ax.5)] $(\phi\wedge\psi)\rightarrow(\phi\Box\!\!\rightarrow\psi)$\,,
\item[(Ax.6)] $(\neg\phi\Box\!\!\rightarrow\phi)\rightarrow\bigl(\neg(\neg\phi\Box\!\!\rightarrow\phi)\Box\!\!\rightarrow(\neg\phi\Box\!\!\rightarrow\phi)\bigr)$\,,
\item[(CR)] Being proved $(\xi_1\wedge\dots\wedge\xi_n)\rightarrow\psi$\,, it is proved $((\phi\Box\!\!\rightarrow\xi_1)\wedge\dots\wedge(\phi\Box\!\!\rightarrow\xi_n))\rightarrow(\phi\Box\!\!\rightarrow\psi)$\,.
\end{description}
It appears that Ax.2, Ax.4, Ax.5, Ax.6 and CR are recovered in DBL.
More precisely, Ax.2 becomes $\vdash(\phi|\neg\phi)\rightarrow(\phi|\psi)$ (derived from theorems).
Ax.4 is exactly b3.
Ax.5 is a subcase of $\phi\wedge\psi\equiv\phi\wedge(\psi|\phi)$ (inference theorem).
Ax.6 becomes
$\vdash(\phi|\neg\phi)\rightarrow\bigl((\phi|\neg\phi)\big|\neg(\phi|\neg\phi)\bigr)$
(derived from theorems).
And CR is recovered in DBL from the fact that \emph{sub-universes are classical}:
$$
\vdash(\xi_1\wedge\dots\wedge\xi_n)\rightarrow\psi\mbox{ implies }
\vdash((\xi_1|\phi)\wedge\dots\wedge(\xi_n|\phi))\rightarrow(\psi|\phi)\,.
$$
Ax.1 has a partial counterpart in DBL, \emph{i.e.} $\vdash\neg\phi,(\phi|\phi)$ (theorem).
However Ax.3 has no obvious counterpart in DBL.
\\[10pt]
Conversely, b3 is clearly implemented by VCU.
It is also noteworthy that Ax.1 and CR, with $n=1$ and $\xi_1=\phi$, imply the rule:
\begin{equation}
\label{DBL:VCU:1}
\mbox{Being proved }\phi\rightarrow\psi,\mbox{ it is proved }\phi\Box\!\!\rightarrow\psi\;,
\end{equation}
which is stronger than b1.
Although b2 is not implemented by VCU, it is easily shown that VCU completed by b4 implies b2.
The fact is that b4 is not implemented by VCU.
Moreover, b5 is related to the notion of logical independence, which is not considered within VCU.
\\[5pt]
We now point out three fundamental distinctions of DBL compared to VCU:
\begin{enumerate}
\item\label{DBL:Point:1} In DBL, the negation commutes with the conditional (b4).
More generally, sub-universes are classical in DBL,
\item\label{DBL:Point:2} In DBL, the deductions on the conditionals are often weakened by the hypothesis that \emph{the condition is not empty}; for example, $\neg\phi$ in rule b1, or theorem $\vdash(\phi|\phi),\neg\phi$\,,
\item DBL manipulates a notion of logical independence of the propositions.
\end{enumerate}
\emph{In fact, point~\ref{DBL:Point:1} (commutation of the negation) makes point~\ref{DBL:Point:2} (deduction weakened by the non-empty condition hypothesis) necessary.}
For example, $\bot\Box\!\!\rightarrow\top$ is derived from (\ref{DBL:VCU:1});
by using both Ax.1 and the negation commutation, it is then deduced:
\begin{equation} \label{DBL:VCU:eq1}
\top\equiv\bot\Box\!\!\rightarrow\bot\equiv\bot\Box\!\!\rightarrow\neg\top\equiv\neg(\bot\Box\!\!\rightarrow\top)\equiv\neg\top\equiv\bot\;,
\end{equation}
which is impossible.
Notice that this deduction is also done in DBL, if we replace the ``weakened'' theorem $\vdash\neg\phi,(\phi|\phi)$ by the ``strong'' theorem $\vdash(\phi|\phi)$\,.
\\[5pt]
This example, based on VCU and DBL, illustrates a fundamental difference between DBL and other conditional logics.
DBL considers $\bot$ as a singularity, and will be cautious with this case when inferring conditionals.
This principle is not just a logical artifact.
In fact, it is also deeply related to the notion of logical independence, as it appears in the proof of theorem~\ref{DBL:theo:12}, \emph{Independence and proof}.
\section{Models}
\label{Section:Model:DBL}
\subsection{Definitions}
\paragraph{Notations of Boolean algebra.}
Being given a Boolean algebra~\cite{Boolean:algebra}, $(\mathbf{B},\cup,\cap,\sim,\emptyset,\Omega)$, the binary operators $\cup$ and $\cap$ are respectively the Boolean addition and multiplication, the unary operator $\sim$ is the Boolean complementation, and $\emptyset$ and $\Omega$ are the neutral elements for $\cup$ and $\cap$ respectively.
Moreover, the order $\subset$ is defined over $\mathbf{B}$ by setting for any $A,B\in \mathbf{B}$:
\begin{center}
$A\subset B$ if and only if $A\cap B=A\;.$
\end{center}
\paragraph{Definition of a conditional model.} A conditional model for DBL (respectively DBL$_\ast$) is a septuplet $\mathbf{M}=(\mathbf{B},\cup,\cap,\sim,\emptyset,\Omega,f)$, where $(\mathbf{B},\cup,\cap,\sim,\emptyset,\Omega)$ is a Boolean algebra, $f:\mathbf{B}\times \mathbf{B}\longrightarrow \mathbf{B}$, and verifying for any $A,B,C\in \mathbf{B}$:
\begin{description}
\item[$\rule{0pt}{0pt}\quad\beta1$.] $A\subset B$ and $A\ne\emptyset$ imply $f(B,A)=\Omega$\,,
\item[$\rule{0pt}{0pt}\quad\beta2$.] $f(B\cup C,A)\subset f(B,A)\cup f(C,A)$\,,
\item[$\rule{0pt}{0pt}\quad\beta3$.] $A\cap f(B,A) \subset B$\,,
\item[$\rule{0pt}{0pt}\quad\beta4$.] $f(\sim B,A)=\sim f(B,A)$\,,
\item[$\rule{0pt}{0pt}\quad\beta5$.] $f(B,A)=B$ implies $f(A,B)=A$\,,\\
(respectively $\beta5w.$ $f(B,A)=B$ implies $f(B,\sim A)=B$)\,.
\end{description}
The objects $\cup,\cap,\sim,\emptyset,\Omega,f$ are a model counterpart of the logical objects $\vee,\wedge,\neg,\bot,\top,(\cdot|\cdot)$.
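As an illustrative sanity check (not part of the paper's construction), the axioms $\beta1$--$\beta4$ and $\beta5w$ can be verified mechanically on the degenerate two-element Boolean algebra, on which the choice $f(B,A)=B$ happens to satisfy them all; the Python encoding is an assumption of this sketch:

```python
from itertools import product

# Degenerate two-element Boolean algebra: B = {empty, Omega}, Omega a one-point set.
OMEGA = frozenset({0})
EMPTY = frozenset()
ALGEBRA = [EMPTY, OMEGA]

def comp(a):                 # Boolean complementation ~
    return OMEGA - a

def f(b, a):                 # candidate conditional operator: ignore the condition
    return b

def is_conditional_model():
    ok = True
    for a, b, c in product(ALGEBRA, repeat=3):
        if a <= b and a != EMPTY:                    # beta1
            ok &= f(b, a) == OMEGA
        ok &= f(b | c, a) <= f(b, a) | f(c, a)       # beta2
        ok &= a & f(b, a) <= b                       # beta3
        ok &= f(comp(b), a) == comp(f(b, a))         # beta4
        if f(b, a) == b:                             # beta5w
            ok &= f(b, comp(a)) == b
    return bool(ok)

print(is_conditional_model())  # -> True
```

On any larger algebra, the choice $f(B,A)=B$ already violates $\beta1$ (take $A=B$ strictly between $\emptyset$ and $\Omega$), so a nontrivial conditional model requires a more careful construction, such as the one carried out in section~\ref{DBL:freemodel:1}.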
\paragraph{Definition of a conditional assignment.}
Let $\mathbf{M}=(\mathbf{B},\cup,\cap,\sim,\emptyset,\Omega,f)$ be a conditional model.
\\[5pt]
An atomic assignment on $\mathbf{M}$ is a mapping $h:\Theta\rightarrow \mathbf{B}$\,.
\\[5pt]
A conditional assignment on $\mathbf{M}$ is a mapping $H:\mathcal{L}\rightarrow \mathbf{B}$ such that:
\begin{itemize}
\item $H(\neg\phi)=\sim H(\phi)$,
\item $H(\phi\rightarrow\psi)=\sim H(\phi)\cup H(\psi)$,
\item $H\bigl((\psi|\phi)\bigr)=f(H(\psi),H(\phi))$
\end{itemize}
for any $\phi,\psi\in\mathcal{L}$\,.
\\[5pt]
The set of all conditional assignments on $\mathbf{M}$ is denoted $\mathcal{H}[\mathbf{M}]$\,.
\begin{proposition}\label{DBL:prop:condass:1}
Let $h$ be an atomic assignment.
Then, there is a unique conditional assignment $\overline{h}$ extending $h$, that is such that $\overline{h}(\theta)=h(\theta)$ for any $\theta\in\Theta$\,.
\end{proposition}
The construction of $\overline{h}$ is obvious.
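For a concrete reading, the structural recursion defining $\overline{h}$ can be transcribed directly. A minimal Python sketch, assuming a hypothetical tuple encoding of formulas and a placeholder operator $f$ on a small powerset algebra (both are assumptions of this illustration, not the paper's construction):

```python
# Formulas as nested tuples: ('atom', name), ('not', p),
# ('imp', p, q), and ('cond', q, p) for the conditional (q | p).
OMEGA = frozenset({0, 1})

def comp(a):                       # Boolean complementation ~
    return OMEGA - a

def f(b, a):                       # placeholder conditional operator
    return b                       # (illustrative choice only)

def extend(h):
    """The conditional assignment H extending the atomic assignment h."""
    def H(phi):
        tag = phi[0]
        if tag == 'atom':
            return h[phi[1]]
        if tag == 'not':
            return comp(H(phi[1]))
        if tag == 'imp':
            return comp(H(phi[1])) | H(phi[2])
        if tag == 'cond':          # H((q | p)) = f(H(q), H(p))
            return f(H(phi[1]), H(phi[2]))
        raise ValueError(tag)
    return H

H = extend({'a': frozenset({0}), 'b': frozenset({1})})
print(H(('imp', ('atom', 'a'), ('atom', 'a'))) == OMEGA)  # -> True
```

Uniqueness is clear from the sketch: the value of $H$ on a formula is determined by its values on strict subformulas.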
\paragraph{Semantics.}
Let $\mathbf{M}$ be a conditional model.
\\[5pt]
Let $(\Gamma,\Delta)\in\mathbf{Seq}$ be a sequent.
Then, the meta-relation $\Gamma\models_{\mathbf{M}}\Delta$ is defined by:
\begin{equation}\label{Section:Model:DBL:sem:eq:2}
\Gamma\models_{\mathbf{M}}\Delta
\mbox{ if and only if }
\forall H\in \mathcal{H}[\mathbf{M}],\;
\left[\;\forall \gamma\in\{\Gamma\},\; H(\gamma)=\Omega\;\right]
\Rightarrow
\left[\;\exists \delta\in\{\Delta\},\; H(\delta)=\Omega\;\right]\;.
\end{equation}
The relation $\Gamma\models_{\mathbf{M}}\Delta$ means that the sequent $(\Gamma,\Delta)$ is true for the model $\mathbf{M}$.
\begin{proposition}\label{DBL:prop:modelsound:1}
Assuming $\Gamma\vdash\Delta$\,, then $\Gamma\models_{\mathbf{M}}\Delta$ for any conditional model $\mathbf{M}$.
\end{proposition}
Proof is done in appendix~\ref{Apx:Proof2Sem:sect}.
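On a finite model, relation~(\ref{Section:Model:DBL:sem:eq:2}) can be decided by brute force over all atomic assignments. The sketch below is restricted to the classical fragment (the formula encoding and the model are illustrative assumptions); note that the LK-derivable sequent $\vdash\phi,\neg\phi$ fails under this semantics, which is consistent with the earlier remark that the rules of LK do not carry over:

```python
from itertools import product

OMEGA = frozenset({0, 1})
ALGEBRA = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]
ATOMS = ['a', 'b']

def ev(phi, h):                    # classical fragment only, for brevity
    tag = phi[0]
    if tag == 'atom':
        return h[phi[1]]
    if tag == 'not':
        return OMEGA - ev(phi[1], h)
    if tag == 'imp':
        return (OMEGA - ev(phi[1], h)) | ev(phi[2], h)
    raise ValueError(tag)

def entails(gamma, delta):
    """Gamma |= Delta: quantify over all atomic assignments h : ATOMS -> ALGEBRA."""
    for values in product(ALGEBRA, repeat=len(ATOMS)):
        h = dict(zip(ATOMS, values))
        if all(ev(g, h) == OMEGA for g in gamma):
            if not any(ev(d, h) == OMEGA for d in delta):
                return False
    return True

a = ('atom', 'a')
print(entails([a], [a]))                 # -> True
print(entails([], [a, ('not', a)]))      # -> False: LK's  |- phi, ~phi  fails here
```

The second sequent fails because the interpretation requires \emph{some} proposition on the right to be a tautology, not merely their disjunction.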
\paragraph{Model construction: purpose.}
Typically, an ultimate goal would be to construct a model for which the deduction system is complete, \emph{i.e.}:
\begin{center}
\mbox{Find }$\mathbf{M}$\mbox{ such that }$\Gamma\models_{\mathbf{M}}\Delta$\mbox{ implies }$\Gamma\vdash\Delta$\,.
\end{center}
This problem is not addressed in this article.
Moreover, it is not clear that conditional models are sufficient to specify the sequents.
\\[5pt]
However, the following completeness result is proved in section~\ref{DBL:freemodel:1}:
\begin{quote}
There is a conditional model $\mathbf{M}$ for DBL$_\ast$ such that $\models_{\mathbf{M}}\phi$ implies $\vdash\phi$ for any $\phi\in\mathcal{L}$.
\end{quote}
This model construction is subsequently applied to construct the probabilistic extension from $\mathcal{L}_C$ to $\mathcal{L}$ for DBL or DBL$_\ast$\,:
\begin{quote}
For any probability $p$ defined over $\mathcal{L}_C$\,, there is a probability $\overline{p}$ over $\mathcal{L}$ which extends $p$ and verifies $\overline{p}(\phi\wedge\psi)=\overline{p}(\phi)\overline{p}(\psi)$, for any $\phi,\psi\in\mathcal{L}$ such that $\vdash\psi\times\phi$\,.
\end{quote}
This result proves that DBL and DBL$_\ast$ fulfill the necessary conditions of a Bayesian logical system.
\subsection{Construction of a free conditional model for DBL$_\ast$}
\label{DBL:freemodel:1}
A free conditional model for DBL$_\ast$ is constructed now.
This model, $\mathbf{M}[\Theta]$, is such that: $\models_{\mathbf{M}[\Theta]}\phi$ implies $\vdash\phi$.
It is constructed as a direct limit of partial models.
These partial models are constructed recursively, based on the iteration of $(|)$ over arbitrary propositions.
\\[5pt]
It is recalled that $\Theta$ is a finite set.
\\[5pt]
The following result about direct limits is needed.
\subsubsection{Direct limit}
\begin{proposition}\label{DBL:proj:lim:1}
Let $K\ge1$ and let $r_k\in\mathrm{I\!N}$ be defined for $1\le k\le K$\,.
Let be given a predicate $\varphi(E,R_1,\dots,R_K)$ defined for any set $E$ and subsets $R_k\subset E^{r_k}$.
For any $n\in\mathrm{I\!N}$, let be defined $(E_n,R_{1,n},\dots,R_{K,n})$ and a mapping $\mu_n:E_n\rightarrow E_{n+1}$ verifying:
\begin{itemize}
\item $R_{k,n}\subset E_n^{r_k}$ for $1\le k\le K$,
\item $\mu_n:E_n\rightarrow E_{n+1}$ is one-to-one,
\item $\mu_n(R_{k,n})\subset R_{k,n+1}$\,,
\item $\varphi(E_n,R_{1,n},\dots,R_{K,n})$ holds true,
\item For any $n\in\mathrm{I\!N}$, there is $m\ge n$ such that $\mu_{m-1}\circ\cdots\circ\mu_n(E_n^{r_K})\subset R_{K,m}$\,.
\end{itemize}
Then there exists $(E_\infty,R_{1,\infty},\dots,R_{K,\infty})$ and a mapping sequence $\nu_n:E_n\rightarrow E_\infty$ for $n\in\mathrm{I\!N}$, such that:
\begin{itemize}
\item $\nu_n$ is one-to-one,
\item $\nu_n=\nu_{n+1}\circ\mu_n$,
\item $\nu_n(R_{k,n})\subset R_{k,\infty}$\,, for any $n\in\mathrm{I\!N}$,
\item $\varphi(E_\infty,R_{1,\infty},\dots,R_{K,\infty})$ holds true,
\item For any $x\in E_\infty$\,, there is $n\in\mathrm{I\!N}$ and $y\in E_n$ such that $\nu_n(y)=x$\,,
\item $R_{K,\infty}=E_\infty^{r_K}$\,.
\end{itemize}
\end{proposition}
\begin{description}
\item[Proof.]
Classical results on \emph{direct limit} give the property, except for the relation $R_{K,\infty}=E_\infty^{r_K}$\,.
\\
Now, let $(x_1,\dots,x_{r_K})\in E_\infty^{r_K}$.
\\
For $1\le k\le r_K$, there is $n_k\in\mathrm{I\!N}$ and $y_k\in E_{n_k}$ such that $\nu_{n_k}(y_k)=x_k$\,.
\\
Let $N\in\mathrm{I\!N}$ be such that $N\ge n_k$ for $1\le k\le r_K$\,.
\\
Then, set $z_k=\mu_{N-1}\circ\dots\circ\mu_{n_k}(y_k)$\,.
\\
It comes $(x_1,\dots,x_{r_K})=\nu_{N}(z_1,\dots,z_{r_K})$\,.
\\
Now, there is $M\ge N$ such that $\mu_{M-1}\circ\cdots\circ\mu_N(E_N^{r_K})\subset R_{K,M}\,.$
\\
Then, it is deduced $(x_1,\dots,x_{r_K})\in\nu_M(R_{K,M})\subset R_{K,\infty}$\,.
\\[5pt]
This last result just proves that $E_\infty^{r_K}\subset R_{K,\infty}$\,.
\item[$\Box\Box\Box$]\rule{0pt}{0pt}
\end{description}
It is noticed that operators and relations over a set $E$ may both be modeled by their graphs, that is, subsets of power products of $E$.
Then, proposition~\ref{DBL:proj:lim:1} is quite general.
In particular, it makes possible the construction of structures with operators and relations as a limit of partially constructed structures.
The following corollary is implied.
\\[5pt]
{\bf Corollary.}
\emph{Let be defined a sequence $(E_n,\ast_1,\dots,\ast_K,\circ)_{n\in\mathrm{I\!N}}$ of algebraic structures, with common algebraic properties, where the operator $\circ$ is defined on subdomains $D_n\subset E_n^r$.
Let $\mu_n:E_n\rightarrow E_{n+1}$ be a one-to-one morphism;
in particular:
\begin{itemize}
\item $\mu_n(D_n)\subset D_{n+1}$\,,
\item $\forall (x_1,\dots,x_r)\in D_n,\, \mu_n(x_1\circ\cdots\circ x_r)=\mu_n(x_1)\circ\cdots\circ \mu_n(x_r)$\,.
\end{itemize}
Assume moreover that:
\begin{equation}\label{eq:hyp:proj:1}
\mbox{for any }n\in\mathrm{I\!N}\mbox{, there is }m\ge n\mbox{ such that }\mu_{m-1}\circ\cdots\circ\mu_n(E_n^r)\subset D_{m}\,.
\end{equation}
Then there is an algebraic structure $(E_\infty,\ast_1,\dots,\ast_K,\circ)$, where $\circ$ is entirely constructed, and one-to-one morphisms $\nu_n:E_n\rightarrow E_\infty$ such that:
\begin{itemize}
\item $\nu_n=\nu_{n+1}\circ\mu_n$,
\item For any $x\in E_\infty$\,, there is $n\in\mathrm{I\!N}$ and $y\in E_n$ such that $\nu_n(y)=x$\,,
\item The structure $(E_\infty,\ast_1,\dots,\ast_K,\circ)$ has the same algebraic properties as the algebras of the sequence.
\end{itemize}}
\begin{description}
\item[Proof.]
It is obtained by applying the proposition to the sequence $(E_n,R_{1,n},\dots,R_{K+2,n})$ and predicate $\varphi$, where:
\begin{itemize}
\item $R_{1,n},\dots,R_{K+1,n}$ are the graphs of the operators $\ast_1,\dots,\ast_K,\circ$\,,
\item $R_{K+2,n}=D_n$\,,
\item $\varphi(E,R_{1},\dots,R_{K+2})$ encapsulates the algebraic properties of the algebras, the functional nature of the graphs, and the domain of definition of $\circ$.
\end{itemize}
\item[$\Box\Box\Box$]\rule{0pt}{0pt}
\end{description}
This corollary is used now for the construction of a model of DBL$_\ast$\,.
\subsubsection{Definition of partial models}
In this section, a sequence $(\mathbf{B}_n,\cup,\cap,\sim,\emptyset,\Omega_n, f_n, b_n, r_n)_{n\in\mathrm{I\!N}}$ and a sequence of one-to-one morphisms $(\mu_n)_{n\in\mathrm{I\!N}}$ are constructed, such that:
\begin{itemize}
\item $(\mathbf{B}_n,\cup,\cap,\sim,\emptyset,\Omega_n)$ is a Boolean algebra,
\item $(\mathbf{B}_n,\cup,\cap,\sim,\emptyset,\Omega_n, f_n)$ is a partial Bayesian model; in particular, $f_n$ is partially constructed,
\item $\mu_n:\mathbf{B}_n\rightarrow \mathbf{B}_{n+1}$ is a one-to-one morphism of Bayesian models,
\item $b_n$ is an element of $\mathbf{B}_n$\,,
\item At step $n+1$, the definition of $f_{n+1}$ is completed, so as to include the domain $\mu_n(\mathbf{B}_n)\times\{\mu_n(b_n),\mu_n(\sim b_n)\}$\,,
\item $r_n:\mathbf{B}_n\rightarrow\mathrm{I\!N}$ is a ranking function; owing to the one-to-one morphisms, $r_n(A)$ indicates the step of construction of $A$.
\end{itemize}
The propositions $b_n$ are chosen in order to make the sequence compliant with proposition~\ref{DBL:proj:lim:1} (more precisely, with hypothesis~(\ref{eq:hyp:proj:1}) of the corollary).
The choice criterion is computed from the ranking function.
\\[5pt]
Then, a Bayesian model is deduced by the direct limit.
\paragraph{Notations and definitions.}
For any $m>n$ and $A\in \mathbf{B}_n$\,, it is defined $A_{[m}=\mu_{m-1}\circ\dots\circ\mu_{n}(A)$\,.
In the case of a subscripted proposition, say $A_k$, the notation $A_{k[m}=\bigl(A_k\bigr)_{[m}$ is used.\\[5pt]
Subsequently, a singleton $\{\omega\}$ may be denoted $\omega$ if the context is not ambiguous.
In particular, the use of the notation $\omega_{[n}$ instead of $\{\omega\}_{[n}$ is systematic.\\[5pt]
The Cartesian product of sets $A$ and $B$ is denoted $A\times B$\,;
the functions $\mathrm{id}$ and $T$ are defined over pairs by $\mathrm{id}(x,y)=(x,y)$ and $T(x,y)=(y,x)$\,;
for a set of pairs $C$, the abbreviation $(\mathrm{id}\cup T)(C)=\mathrm{id}(C)\cup T(C)$ is also used.
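The pair operations $\mathrm{id}$, $T$ and $(\mathrm{id}\cup T)$ transcribe directly; a small Python sketch (the set-of-tuples encoding is an assumption of this illustration):

```python
def T(pair):
    """Swap the components of a pair: T(x, y) = (y, x)."""
    x, y = pair
    return (y, x)

def id_union_T(C):
    """(id  union  T)(C) = id(C) | T(C), for a set of pairs C."""
    return set(C) | {T(p) for p in C}

print(id_union_T({(1, 2), (3, 3)}))  # contains (1, 2), (2, 1), (3, 3)
```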
\paragraph{Initialization.}
Define $(\mathbf{B}_0,\cup,\cap,\sim,\emptyset,\Omega_0, f_0, b_0, r_0)$ by:
\begin{itemize}
\item $\Omega_0=\{0,1\}^{\Theta}$,
\item $\mathbf{B}_0=\mathcal{P}(\Omega_0)$ (\emph{i.e.} the set of subsets of $\Omega_0$),
\item Take $\cup,\cap,\emptyset$ as the set union, set intersection and empty set;
define $\sim$ as the set complement, that is $\sim A=\Omega_0\setminus A$\,,
\item Define $f_0(A,\emptyset)=f_0(A,\Omega_0)=A$ for any $A\in \mathbf{B}_0$\,,
\item Define $r_0(A)=0$ for any $A\in \mathbf{B}_0$\,,
\item Choose $b_0\in\mathbf{B}_0\setminus\{\emptyset,\Omega_0\}$\,.
{\bf It is noticed that $b_{0}\not\in\{\emptyset,\Omega_{0}\}$.}
\end{itemize}
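The initialization is directly computable for a small vocabulary $\Theta$; the sketch below also builds the sets $\xi_\theta=\{\omega\in\Omega_0\,/\,\omega(\theta)=1\}$ used later for the atomic assignment (the index-based encoding of subsets is an assumption of this illustration):

```python
from itertools import product, chain, combinations

THETA = ['p', 'q']                         # a small atomic vocabulary

# Omega_0 = {0,1}^Theta: one bit per atomic proposition
OMEGA0 = [dict(zip(THETA, bits)) for bits in product((0, 1), repeat=len(THETA))]

# B_0 = P(Omega_0): subsets of Omega_0, encoded as frozensets of indices
def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

B0 = [frozenset(s) for s in powerset(range(len(OMEGA0)))]

def xi(theta):
    """xi_theta = the set of omega in Omega_0 with omega(theta) = 1."""
    return frozenset(i for i, w in enumerate(OMEGA0) if w[theta] == 1)

print(len(OMEGA0), len(B0), len(xi('p')))  # -> 4 16 2
```

Any $b_0$ may then be chosen among the $14$ elements of $\mathbf{B}_0\setminus\{\emptyset,\Omega_0\}$.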
\paragraph{Step $n$ to step $n+1$.}
Let $(\mathbf{B}_k,\cup,\cap,\sim,\emptyset,\Omega_k, f_k, b_k, r_k)_{0\le k\le n}$ and the one-to-one morphisms $(\mu_k)_{0\le k\le n-1}$ be constructed.
\\[5pt]
Then, construct the set $I_n$ and the sequences $\Gamma_n(i),\Pi_n(i)|_{i\in I_n}$ according to the cases:
\subparagraph{Case 0.} There is $m<n$ such that $\{b_{m[n},\sim b_{m[n}\}=\{b_n,\sim b_n\}$\,.\\
Let $\nu$ be the greatest of such $m$.
{\bf Notice that the hypothesis $b_n=b_{\nu[n}$ holds by construction~(\ref{bn:construc:eq:1})}.
Then define $I_n=\mu_\nu(b_{\nu})\times\sim \mu_\nu(b_{\nu})$\,,\\
$\Pi_n(\omega,\omega')=f_n(\omega'_{[n},\sim b_{n})\cap\omega_{[n}$ and $\Gamma_n(\omega,\omega')=f_n(\omega_{[n},b_{n})\cap\omega'_{[n}$ for any $(\omega,\omega')\in I_n$\,.
\\[5pt]
Remark: case 0 means that the construction of $f(\cdot,b_n)$ and of $f(\cdot,\sim b_n)$ has already begun over the propositions of $\mathbf{B}_{\nu+1}$.
\subparagraph{Case 1.} Case 0 does not hold;\\
Define $I_n=\{b_n\}$\,, $\Pi_n(i)=i$
and $\Gamma_n(i)=\sim i$ for any $i\in I_n$\,.
\\[5pt]
Remark: case 1 means that $f(\cdot,b_n)$ and $f(\cdot,\sim b_n)$ are constructed for the first time.
\subparagraph{Setting.}
$(\mathbf{B}_{n+1},\cup,\cap,\sim,\emptyset,\Omega_{n+1}, f_{n+1}, b_{n+1}, r_{n+1})$ and $\mu_n$ are defined by:
\begin{itemize}
\item $\mu_n(A)=\bigcup_{i\in I_n} \biggl(\Bigl(\bigl(A\cap\Pi_n(i)\bigr)\times\Gamma_n(i)\Bigr)
\cup \Bigl(\bigl(A\cap\Gamma_n(i)\bigr)\times\Pi_n(i)\Bigr)\biggr)$ for any $A\in \mathbf{B}_{n}$\,,
\item $\Omega_{n+1}=\mu_n(\Omega_n)$\,,
\item $\mathbf{B}_{n+1}=\mathcal{P}(\Omega_{n+1})$\,,
\item Take $\cup,\cap,\emptyset$ as the set union, set intersection and empty set;
define $\sim$ as the set complement, that is $\sim A=\Omega_{n+1}\setminus A$\,,
\item $f_{n+1}(A,\emptyset)=f_{n+1}(A,\Omega_{n+1})=A$ for any $A\in \mathbf{B}_{n+1}$\,,
\item For any $A\in \mathbf{B}_n\setminus\{b_n,\sim b_n,\emptyset,\Omega_n\}$ and any $B\in \mathbf{B}_n$ such that $f_n(B,A)$ is defined, then $f_{n+1}\bigl(\mu_n(B),\mu_n(A)\bigr)$ is defined and $f_{n+1}\bigl(\mu_n(B),\mu_n(A)\bigr)=\mu_n\bigl(f_n(B,A)\bigr)$\,,
\item For any $A\in \mathbf{B}_{n+1}$\,, set
$
f_{n+1}\bigl(A,\mu_n(b_n)\bigr)=(\mathrm{id}\cup T)
\biggl(A\cap\Bigl(\bigcup_{i\in I_n}\bigl(\Pi_n(i)\times\Gamma_n(i)\bigr)\Bigr)\biggr)
$\\[5pt]
and
$
f_{n+1}\bigl(A,\sim\mu_n(b_n)\bigr)=(\mathrm{id}\cup T)
\biggl(A\cap\Bigl(\bigcup_{i\in I_n}\bigl(\Gamma_n(i)\times\Pi_n(i)\bigr)\Bigr)\biggr)
\;,
$
\item Define $r_{n+1}(\mu_n(A))=r_n(A)$ for any $A\in \mathbf{B}_n$ and $r_{n+1}(A)=n+1$ for any $A\in\mathbf{B}_{n+1}\setminus\mu_n(\mathbf{B}_n)$\,,
($r_{n+1}$ just maps each proposition to the first step of its occurrence)
\item Define:
\begin{equation}
\label{bn:construc:eq:0:1}\begin{array}{@{}l@{}}\displaystyle\vspace{4pt}
\widetilde{b}_{n+1}\in\arg\min_{B\in\mathbf{B}_{n+1}}\lambda_{n+1}(B)\;,\mbox{ where:}
\\\rule{0pt}{0pt}\hspace{20pt}\displaystyle
\lambda_{n+1}(B)=\inf\left\{
r_{n+1}(A)+r_{n+1}(B)
\;\left/\;A\in\mathbf{B}_{n+1}\mbox{ and }f(A,B)\mbox{ is undefined}
\right.\right\}\;.
\end{array}\end{equation}
Then, define $b_{n+1}$ by:
\begin{equation}\label{bn:construc:eq:1}
\begin{array}{@{}l@{}}\displaystyle
b_{n+1}=b_{m[n+1} \mbox{ if there is }m\le n\mbox{ such that }\bigl\{\widetilde{b}_{n+1},\sim \widetilde{b}_{n+1}\bigr\}=\bigl\{b_{m[n+1},\sim b_{m[n+1}\bigr\}\;,
\\\displaystyle
b_{n+1}=\widetilde{b}_{n+1} \mbox{ otherwise}.
\end{array}\end{equation}
The purpose of equation~(\ref{bn:construc:eq:0:1}) is to choose $b_{n+1}$ (or its negation) in order to continue the construction of $f$ on the oldest pairs first.
By doing that, the condition~(\ref{eq:hyp:proj:1}) of the direct limit is ensured.
The purpose of equation~(\ref{bn:construc:eq:1}) is to choose $b_{n+1}$ coherently with a possible previous occurrence.
{\bf It is noticed that $b_{n+1}\not\in\{\emptyset,\Omega_{n+1}\}$.}
\end{itemize}
\paragraph{Short explanation of the model.}
In fact, $(\omega,\omega')\in\Pi_n(i)\times\Gamma_n(i)$ should be interpreted as $\omega\wedge(\omega'|\neg b_n)$, while $(\omega',\omega)\in\Gamma_n(i)\times\Pi_n(i)$ should be interpreted as $\omega'\wedge(\omega|b_n)$.
The reader should compare this construction to the proof of completeness in appendix~\ref{Appendix:ProofOfAlmostCompletude} for a better comprehension of the mechanisms of the model.
\subsubsection{Properties of $(\mathbf{B}_{n},\cup,\cap,\sim,\emptyset,\Omega_{n}, f_{n}, \mu_n)_{n\in\mathrm{I\!N}}$}
\label{Omega:properties}
It is proved recursively:
\begin{description}
\item[$\rule{0pt}{0pt}\quad\alpha1$.] $\mu_n:\mathbf{B}_n\rightarrow \mathbf{B}_{n+1}$ is a one-to-one Boolean morphism,
\item[$\rule{0pt}{0pt}\quad\alpha2$.] If $A,B\in \mathbf{B}_n$ and $f_n(B,A)$ is defined, then $f_{n+1}\bigl(\mu_n(B),\mu_n(A)\bigr)=\mu_n\bigl(f_n(B,A)\bigr)$\,,
\item[$\rule{0pt}{0pt}\quad\beta1$.] Let $A,B\in \mathbf{B}_n$ such that $f_n(B,A)$ is defined.
\\Then $A\subset B$ and $A\ne\emptyset$ imply $f_n(B,A)=\Omega_n$\,,
\item[$\rule{0pt}{0pt}\quad\beta2$.] Let $A,B,C\in \mathbf{B}_n$ such that $f_n(B,A)$, $f_n(C,A)$ and $f_n(B\cup C,A)$ are defined.
\\Then $f_n(B\cup C,A)= f_n(B,A)\cup f_n(C,A)$\,,
\item[$\rule{0pt}{0pt}\quad\beta3$.] Let $A,B\in \mathbf{B}_n$ such that $f_n(B,A)$ is defined.
\\Then $A\cap f_n(B,A) = A\cap B$\,,
\item[$\rule{0pt}{0pt}\quad\beta4$.] Let $A,B\in \mathbf{B}_n$ such that $f_n(B,A)$ and $f_n(\sim B,A)$ are defined.
\\Then $f_n(\sim B,A)=\sim f_n(B,A)$\,,
\item[$\rule{0pt}{0pt}\quad\beta5w$.] Let $A,B\in \mathbf{B}_n$ such that $f_n(B,A)$ and $f_n(B,\sim A)$ are defined.
\\Then $f_n(B,A)=B$ implies $f_n(B,\sim A)=B$\,.
\end{description}
\emph{Proofs are given in appendix~\ref{Appendix:MainProof}.}
\subsubsection{Limit}
The corollary of proposition~\ref{DBL:proj:lim:1} applies to the sequence $(\mathbf{B}_{n},\cup,\cap,\sim,\emptyset,\Omega_{n}, f_{n}, \mu_n)_{n\in\mathrm{I\!N}}$\,.
In particular, the condition~(\ref{eq:hyp:proj:1}) is derived from: $$\lim_{n\rightarrow+\infty}\;\;\min_{B\in\mathbf{B}_n}\lambda_n(B)=+\infty\;,$$
which itself is a consequence of~(\ref{bn:construc:eq:0:1}) and the construction process.
\\[5pt]
As a consequence, there is a Bayesian model $\mathbf{M}[\Theta]=(\mathbf{B}[\Theta],\cup,\cap,\sim,\emptyset,\Omega, f)$ and a sequence $(\nu_n)_{n\in\mathrm{I\!N}}$ such that:
\begin{itemize}
\item $\nu_n:\mathbf{B}_n\rightarrow\mathbf{B}[\Theta]$ is a one-to-one morphism of Bayesian model,
\item $\nu_n=\nu_{n+1}\circ\mu_n$,
\item For any $A\in \mathbf{B}[\Theta]$\,, there is $n\in\mathrm{I\!N}$ and $A_n\in \mathbf{B}_{n}$ such that $\nu_n(A_n)=A$\,.
\end{itemize}
\subsubsection{Completeness for the conditional operator}
For any $\theta\in\Theta$\,, define $\xi_\theta\in\mathbf{B}_0$ by $\xi_\theta=\bigl\{(\delta_\tau)_{\tau\in\Theta}\in\Omega_0\,\big/\,\delta_\theta=1\bigr\}$\,.
Then, define the atomic assignment $h:\Theta\rightarrow\mathbf{B}[\Theta]$ by $h(\theta)=\nu_0(\xi_\theta)$ for any $\theta\in\Theta$\,.
Denote by $\overline{h}$ the extension of $h$ to $\mathcal{L}$.
\begin{proposition}\label{DBL:Complet:prop:1}
Let $\phi\in\mathcal{L}_C$\,.
Then, $\vdash_C\phi$ if and only if $\overline{h}(\phi)=\Omega$\,.
\end{proposition}
The proof is immediate, since $\mathbf{B}_0$ is isomorphic to the quotient of $\mathcal{L}_C$ by $\equiv_C$.
\begin{proposition}\label{DBL:Complet:prop:2}
Let $\phi\in\mathcal{L}$\,.
Then, the following assertions are equivalent:
\begin{itemize}
\item $\vdash\phi$ in DBL$_\ast$\,,
\item $\overline{h}(\phi)=\Omega$\,,
\item $\models_{\mathbf{M}[\Theta]}\phi$\,.
\end{itemize}
\end{proposition}
The proof is given in appendix~\ref{Appendix:ProofOfAlmostCompletude}\,.
\\[5pt]
Proposition~\ref{DBL:Complet:prop:2} expresses that $\mathbf{M}[\Theta]$ is complete for the conditional operator.
\begin{proposition}\label{DBL:Complet:prop:3}
Let $\phi\in\mathcal{L}_C$\,, such that $\vdash\phi$ in DBL$_\ast$\,.
Then $\vdash_C\phi$\,.
\end{proposition}
This follows immediately from propositions~\ref{DBL:Complet:prop:1} and~\ref{DBL:Complet:prop:2}.
\\[5pt]
This result proves that DBL$_\ast$ is an extension of classical logic.
Now, proposition~\ref{DBL:Complet:prop:2} implies that DBL$_\ast$ is much more than just classical logic:
\begin{proposition}\label{DBL:Complet:prop:4}
\emph{[Non-distortion property]}
Let $\phi,\psi\in\mathcal{L}_C$.
Assume that $\vdash\phi,\psi$ in DBL$_\ast$.
Then $\vdash_C\phi$ or $\vdash_C\psi$.
\end{proposition}
Interpretation: DBL$_\ast$ does not ``distort'' the \emph{classical} propositions.
More precisely, a derived property like $\vdash\phi,\psi$ would seem to add some knowledge about $\phi$ and $\psi$.
But \emph{non-distortion} says that this is impossible unless there is already trivial knowledge about $\phi$ or $\psi$ within classical logic.
\begin{description}
\item[Proof.]
Assume $\vdash\phi,\psi$\,.\\
Since $\mathbf{M}[\Theta]$ is a model for DBL$_\ast$, it follows that $H(\phi)=\Omega$ or $H(\psi)=\Omega$ for any $H\in\mathcal{H}\bigl[\mathbf{M}[\Theta]\bigr]$\,.\\
In particular, $\overline{h}(\phi)=\Omega$ or $\overline{h}(\psi)=\Omega$ (definition~(\ref{Section:Model:DBL:sem:eq:2})\,)\,.
\\
Since $\phi\in\mathcal{L}_C$ and $\psi\in\mathcal{L}_C$, it follows that $\vdash_C\phi$ or $\vdash_C\psi$\,.
\item[$\Box\Box\Box$]\rule{0pt}{0pt}
\end{description}
Another \emph{non-distortion} property is derived subsequently in the context of probabilistic DBL$_\ast$\,.
\section{Extension of probability}
\label{Section:Proba:DBL}
\subsection{Probability over propositions}
Probabilities are classically defined over measurable sets.
However, this is only one way to model the notion of probability, which is essentially an additive measure of the belief in logical propositions.
Probability can be defined without reference to measure theory, at least when the propositions are countable.
The notion of probability is now presented within a strictly propositional formalism.
Conditional probabilities are excluded from this definition, but the notion of independence is considered.
\vspace{5pt}\\
Intuitively, a probability over a space of logical propositions is a measure of belief which is additive (disjoint propositions add their probabilities) and increasing with the propositions.
This measure should be zeroed for the contradiction and set to $1$ for the tautology.
Moreover, \emph{a probability is a multiplicative measure for independent propositions}.
\paragraph{Definition for classical propositions.}
A probability $\pi$ over $C$\,, the classical logic, is an $\mathrm{I\!R}^+$-valued function such that for any propositions $\phi$ and $\psi$ of $\mathcal{L}_C$\,:
\begin{description}
\item[\rule{0pt}{0pt}$\quad$\emph{Equivalence.}]$\phi\equiv_C\psi$ implies $\pi(\phi)=\pi(\psi)$\,,
\item[\rule{0pt}{0pt}$\quad$\emph{Additivity.}]$\pi(\phi\wedge\psi)+\pi(\phi\vee\psi)=\pi(\phi)+\pi(\psi)$\,,
\item[\rule{0pt}{0pt}$\quad$\emph{Coherence.}]$\pi(\bot)=0$\,,
\item[\rule{0pt}{0pt}$\quad$\emph{Finiteness.}]$\pi(\top)=1$\,.
\end{description}
\subparagraph{Property.}
Coherence and additivity together imply that $\pi$ is increasing:
\begin{description}
\item[\rule{0pt}{0pt}$\quad$\emph{Increase.}]$\pi(\phi\wedge\psi)\le \pi(\phi)$\,.
\end{description}
\begin{description}
\item[Proof.]
Since $\phi\equiv_C(\phi\wedge\psi)\vee(\phi\wedge\neg\psi)$ and $(\phi\wedge\psi)\wedge(\phi\wedge\neg\psi)\equiv_C\bot$, the additivity implies:
$$
\pi(\phi)+\pi(\bot)=\pi(\phi\wedge\psi)+\pi(\phi\wedge\neg\psi)\;.
$$
From the coherence $\pi(\bot)=0$\,,
it is deduced $\pi(\phi)=\pi(\phi\wedge\psi)+\pi(\phi\wedge\neg\psi)$\,.\\
Since $\pi$ is non-negatively valued, $\pi(\phi)\ge \pi(\phi\wedge\psi)$\,.
\item[$\Box\Box\Box$]\rule{0pt}{0pt}
\end{description}
\paragraph{Definition for DBL/DBL$_\ast$.}
In this case, we have to deal with independence notions.\\[5pt]
A probability $P$ over DBL/DBL$_\ast$ is an $\mathrm{I\!R}^+$-valued function which satisfies (replacing $\equiv_C$ by $\equiv$ and $\pi$ by $P$) \emph{equivalence}, \emph{additivity}, \emph{coherence}, \emph{finiteness} and:
\begin{description}
\item[\rule{0pt}{0pt}$\quad$\emph{Multiplicativity.}]$\vdash\phi\times\psi$ implies $P(\phi\wedge\psi)=P(\phi)P(\psi)$\,.
\end{description}
for any propositions $\phi$ and $\psi$ of $\mathcal{L}$\,.
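These axioms can be checked concretely on a finite model. The following sketch is not part of the paper; the representation of propositions as sets of truth assignments, and all names and weights, are illustrative assumptions. It verifies additivity, coherence, finiteness, the derived increase property, and the multiplicativity condition for independent atoms:

```python
from itertools import product

# Finite model: a proposition is represented by the set of truth
# assignments ("worlds") that satisfy it.  Weights are illustrative.
worlds = list(product([0, 1], repeat=2))       # 4 worlds for two atoms p, q
weight = {w: 0.25 for w in worlds}             # non-negative, sums to 1

def pi(prop):
    """Probability of a proposition: total weight of its satisfying worlds."""
    return sum(weight[w] for w in prop)

p = {w for w in worlds if w[0] == 1}           # proposition "p"
q = {w for w in worlds if w[1] == 1}           # proposition "q"
top, bot = set(worlds), set()                  # tautology and contradiction

# Additivity: pi(p and q) + pi(p or q) = pi(p) + pi(q)
assert abs(pi(p & q) + pi(p | q) - (pi(p) + pi(q))) < 1e-12
# Coherence and finiteness:
assert pi(bot) == 0.0 and pi(top) == 1.0
# Derived increase property: pi(p and q) <= pi(p)
assert pi(p & q) <= pi(p)
# With these weights p and q are independent, so the multiplicativity
# condition P(p and q) = P(p) P(q) holds as well.
assert abs(pi(p & q) - pi(p) * pi(q)) < 1e-12
```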
\subsection{Probability extension over DBL$_\ast$}
\label{ProbExt:wDBL}
\begin{proposition}\label{dbl:prob:ext:1}
Let $\pi$ be a probability defined over $C$\,, the classical logic, such that $\pi(\phi)>0$ for any $\phi\not\equiv_C\bot$.
Then, there is a (multiplicative) probability $\overline{\pi}$ defined over DBL$_\ast$ such that $\overline{\pi}(\phi)=\pi(\phi)$ for any classical proposition $\phi\in\mathcal{L}_C$\,.
\end{proposition}
\emph{Remark: this is another non-distortion property, since the construction of DBL$_\ast$ places no constraint on the probabilities of classical propositions.}
\\[5pt]
The proof is given in appendix~\ref{Appendix:Probabilition}.
\\[5pt]
{\bf Corollary.}
Let $\pi$ be a probability defined over $C$\,.
Then, there is a (multiplicative) probability $\overline{\pi}$ defined over DBL$_\ast$ such that $\overline{\pi}(\phi)=\pi(\phi)$ for any $\phi\in\mathcal{L}_C$\,.
\begin{description}
\item[Proof.]
Let $\Sigma=\left\{\left.\bigwedge_{\theta\in\Theta}\epsilon_\theta\;\right/\;\epsilon\in\prod_{\theta\in\Theta}\{\theta,\neg\theta\}\right\}$\,, a generating partition of $\mathcal{L}_C$.\\
For any real number $e >0$\,, define the probability $\pi_e $ over $\mathcal{L}_C$ by:
$$
\forall\sigma\in\Sigma\,,\; \pi_e (\sigma)=\frac{e }{\mathrm{card}(\Sigma)}+(1-e )\pi(\sigma)
\;.
$$
Let $\overline{\pi_e}$ be the extension of $\pi_e$ over DBL$_\ast$ as constructed in appendix~\ref{Appendix:Probabilition}.\\
It is noted in~\ref{AppC:Conclude} that, by construction, there is a rational function $R_\phi$ such that $\overline{\pi_e}(\phi)=R_\phi(e)$ for any $\phi\in\mathcal{L}$\,.\\
Now $0\le R_\phi(e )\le 1$\,;
since $R_\phi(e )$ is rational and bounded, $\lim_{e \rightarrow 0+}R_\phi(e )$ exists.\\
Define $\overline{\pi}(\phi)=\lim_{e \rightarrow 0+}R_\phi(e )$\,, for any $\phi\in\mathcal{L}$.\\
The additivity, coherence, finiteness and multiplicativity are then inherited by $\overline{\pi}$.\\
At last, it is clear that $\overline{\pi}(\sigma)=\pi(\sigma)$ for any $\sigma\in\Sigma$\,.
\item[$\Box\Box\Box$]\rule{0pt}{0pt}
\end{description}
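The perturbation argument in this proof can be sketched numerically. In the following illustrative fragment (the atom weights and the names are hypothetical, not from the paper), $\pi_e$ repairs a probability that vanishes on some non-contradictory atoms, and the limit $e\rightarrow 0^+$ recovers $\pi$ on the generating partition:

```python
# A probability pi over classical propositions may assign zero to a
# non-contradictory atom; the perturbation pi_e repairs this.
# Hypothetical 4-atom generating partition Sigma, illustrative weights.
pi_atoms = [0.5, 0.5, 0.0, 0.0]

def pi_e(e):
    """pi_e(sigma) = e / card(Sigma) + (1 - e) * pi(sigma) for each atom."""
    n = len(pi_atoms)
    return [e / n + (1.0 - e) * p for p in pi_atoms]

# For every e > 0 all atoms carry strictly positive probability, so the
# extension proposition applies to pi_e ...
assert all(p > 0 for p in pi_e(1e-6))
# ... and pi_e is still a probability (it sums to 1):
assert abs(sum(pi_e(1e-6)) - 1.0) < 1e-12
# The limit e -> 0+ recovers pi on the atoms.
assert all(abs(a - b) < 1e-9 for a, b in zip(pi_e(1e-12), pi_atoms))
```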
\subsection{Model and probability extension for DBL}
Let $\mathcal{K}$ be the set of all (multiplicative) probabilities $P$ over DBL$_\ast$ such that $P(\phi)>0$ for any $\phi\not\equiv\bot$\,,
and define the sequences $\mathcal{K}(\phi)=(P(\phi))_{P\in\mathcal{K}}$ for any $\phi\in\mathcal{L}$\,.\\
Then define
$\mathcal{L}_{\mathcal{K}}=\mathcal{K}(\mathcal{L})=\bigl\{\mathcal{K}(\phi)\;\big/\;\phi\in\mathcal{L}\bigr\}\;.$
The space $\mathcal{L}_{\mathcal{K}}$ is thus a subset of ${\mathrm{I\!R}^+}^{\mathcal{K}}$\,.\\
The operators $\neg$, $\wedge$, $\vee$ and $(|)$ are canonically induced on $\mathcal{L}_{\mathcal{K}}$\,:
$$\begin{array}{@{}l@{}}\displaystyle
\neg\mathcal{K}(\phi)=\mathcal{K}(\neg\phi)\,,\
\mathcal{K}(\phi)\wedge\mathcal{K}(\psi)=\mathcal{K}(\phi\wedge\psi)\,,\
\mathcal{K}(\phi)\vee\mathcal{K}(\psi)=\mathcal{K}(\phi\vee\psi)\hspace{50pt}\rule{0pt}{0pt}
\\\rien\hfill\displaystyle
\mbox{and}\quad
\bigl(\mathcal{K}(\psi)\big|\mathcal{K}(\phi)\bigr)=
\mathcal{K}\bigl((\psi|\phi)\bigr)\;.
\end{array}$$
Since any $P\in\mathcal{K}$ satisfies the equivalence property, it follows that $\mathcal{K}(\phi)=\mathcal{K}(\psi)$ when $\phi\equiv\psi$ in DBL$_\ast$.
In particular, $\bigl(\mathcal{L}_{\mathcal{K}},\vee,\wedge,\neg,\mathcal{K}(\bot),\mathcal{K}(\top)\bigr)$ is a Boolean algebra.
\begin{proposition}\label{dbl:prob:ext:2}
$\bigl(\mathcal{L}_{\mathcal{K}},\vee,\wedge,\neg,\mathcal{K}(\bot),\mathcal{K}(\top),(|)\bigr)$ is a conditional model of DBL.
\end{proposition}
\begin{description}
\item[Proof.]
Since $\phi\equiv\psi$ in DBL$_\ast$ implies $\mathcal{K}(\phi)=\mathcal{K}(\psi)$\,, properties $\beta2$, $\beta3$ and $\beta4$ follow from axioms b2, b3 and b4.
\item[\emph{Proof of $\beta1$.}]
Assume $\mathcal{K}(\phi\rightarrow\psi)=\mathcal{K}(\top)$ and $\mathcal{K}(\phi)\ne\mathcal{K}(\bot)$\,.
\\[5pt]
Let $P\in\mathcal{K}$\,.\\
Then $P(\neg\phi\vee\psi)=P(\phi\rightarrow\psi)=P(\top)=1$\,.\\
Now $P(\neg\phi\vee\psi)=1$ implies $P(\phi\wedge\psi)+P(\neg\phi)=1$\,.\\
As a consequence, $P(\phi\wedge\psi)=1-P(\neg\phi)=P(\phi)$.\\
Now, the hypothesis $\mathcal{K}(\phi)\ne\mathcal{K}(\bot)$ implies $\phi\not\equiv\bot$ and then $P(\phi)\ne0$\,.\\
Since $P$ is multiplicative and $P(\phi)\ne0$\,, it follows that $P\bigl((\psi|\phi)\bigr)=P(\phi\wedge\psi)/P(\phi)=1$\,.
\\[5pt]
At last, $\mathcal{K}(\psi|\phi)=\mathcal{K}(\top)$ and, consequently, $\bigl(\mathcal{K}(\psi)\big|\mathcal{K}(\phi)\bigr)=\mathcal{K}(\top)$\,.\\
The model verifies $\beta1$.
\item[\emph{Proof of $\beta5$.}]
Since $P$ is multiplicative for any $P\in\mathcal{K}$\,, $\vdash(\psi|\phi)\times\phi$ and $(\psi|\phi)\wedge\phi\equiv\psi\wedge\phi$ in DBL$_\ast$, it follows that $P\bigl((\psi|\phi)\bigr)P(\phi)=P(\psi\wedge\phi)$ for any $P\in\mathcal{K}$\,.
\\
Now assume $\bigl(\mathcal{K}(\psi)\big|\mathcal{K}(\phi)\bigr)=\mathcal{K}(\psi)$\,, with $\psi\not\equiv\bot$\,.\\
Then $\mathcal{K}\bigl((\psi|\phi)\bigr)=\mathcal{K}(\psi)$, and $P\bigl((\psi|\phi)\bigr)=P(\psi)$ for any $P\in\mathcal{K}$\,.\\
Then $P(\phi)=P(\psi\wedge\phi)/P\bigl((\psi|\phi)\bigr)=P(\psi\wedge\phi)/P(\psi)=P\bigl((\phi|\psi)\bigr)$ for any $P\in\mathcal{K}$\,,
\\
and $\bigl(\mathcal{K}(\phi)\big|\mathcal{K}(\psi)\bigr)=\mathcal{K}\bigl((\phi|\psi)\bigr)=\mathcal{K}(\phi)$\,.\\
Since moreover $(\phi|\bot)\equiv\phi$ and $(\bot|\phi)\equiv\bot$ in DBL$_\ast$, the model verifies $\beta5$\,.
\item[$\Box\Box\Box$]\rule{0pt}{0pt}
\end{description}
{\bf Corollary.}
$\phi\equiv\psi$ in DBL implies $\mathcal{K}(\phi)=\mathcal{K}(\psi)$.
\paragraph{Probability extension.}
The corollary implies that any $P\in\mathcal{K}$ is a (multiplicative) probability over DBL.
Now, the probability extensions constructed in appendix~\ref{Appendix:Probabilition} are also elements of $\mathcal{K}$\,.
As a consequence, proposition~\ref{dbl:prob:ext:1} and its corollary carry over to DBL:
\begin{proposition}\label{dbl:dbl:prob:ext:1}
Let $\pi$ be a probability defined over $C$\,, the classical logic.
Then, there is a (multiplicative) probability $\overline{\pi}$ defined over DBL such that $\overline{\pi}(\phi)=\pi(\phi)$ for any $\phi\in\mathcal{L}_C$\,.
\end{proposition}
\paragraph{Non-distortion.}
\begin{proposition}\label{dbl:dbl:prob:nondist:2}
Let $\phi,\psi$ be classical propositions.
Assume that $\vdash\phi,\psi$ in DBL.
Then $\vdash_C\phi$ or $\vdash_C\psi$.
\end{proposition}
\begin{description}
\item[Proof.]
Notice first that $\mathcal{K}:\phi\mapsto\mathcal{K}(\phi)$ is a conditional assignment by construction.
\\
From proposition~\ref{dbl:prob:ext:2}, $\vdash\phi,\psi$ implies $\mathcal{K}(\phi)=\mathcal{K}(\top)$ or $\mathcal{K}(\psi)=\mathcal{K}(\top)$\,.
\\
It follows $\forall P\in\mathcal{K}\,,\; P(\phi)=1$ or $\forall P\in\mathcal{K}\,,\; P(\psi)=1$\,.
\\
By the probability extension:
$\forall \pi\,,\; \pi(\phi)=1$ or $\forall \pi\,,\; \pi(\psi)=1$\,, where $\pi$ denotes any probability over $C$\,.\\
At last, $\vdash_C\phi$ or $\vdash_C\psi$\,.
\item[$\Box\Box\Box$]\rule{0pt}{0pt}
\end{description}
\subsection{Properties of the conditional}
\paragraph{Bayes inference.}
Assume a (multiplicative) probability $P$ defined over DBL/DBL$_\ast$.
Define $P(\psi|\phi)$ as an abbreviation for $P\bigl((\psi|\phi)\bigr)$\,.
Then:
$$
P(\psi|\phi)P(\phi)=P(\phi\wedge\psi)\;.
$$
\begin{description}
\item[Proof.]A consequence of \mbox{$(\psi|\phi)\wedge\phi\equiv\phi\wedge\psi$} and \mbox{$\vdash(\psi|\phi)\times\phi$}\,.
\item[$\Box\Box\Box$]\rule{0pt}{0pt}
\end{description}
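The Bayes inference identity is easy to verify numerically. A minimal sketch, assuming an illustrative joint distribution over two classical propositions (the numbers are arbitrary, not from the paper):

```python
# Joint probabilities over the four conjunctions of two classical
# propositions phi, psi (illustrative numbers summing to 1).
P_phi_psi, P_phi_notpsi = 0.2, 0.3
P_notphi_psi, P_notphi_notpsi = 0.1, 0.4

P_phi = P_phi_psi + P_phi_notpsi      # P(phi) by additivity
P_cond = P_phi_psi / P_phi            # P(psi|phi), phi not contradictory

# Bayes inference recovered from the logic:
#   P(psi|phi) P(phi) = P(phi and psi)
assert abs(P_cond * P_phi - P_phi_psi) < 1e-12
```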
\paragraph{About Lewis' triviality.}
The previous extension theorems have shown that for any probability $\pi$ defined over $C$\,, it is possible to construct a (multiplicative) probability $\overline{\pi}$ over DBL which extends $\pi$.
This result by itself shows that DBL avoids Lewis' triviality.
But a deeper explanation seems necessary.
\\[5pt]
Assume $\phi\in\mathcal{L}_C$ and define the probability $\pi_\phi$ over $C$ by $\pi_\phi=\pi(\cdot|\phi)$\,.
Let $\overline{\pi_\phi}$ be the extension of $\pi_\phi$ over DBL.
It happens that $\overline{\pi_\phi}\ne \overline{\pi}(\cdot|\phi)$\,, which implies that Lewis' triviality argument no longer applies.
Note that although $\overline{\pi}(\cdot|\phi)$ is a probability over DBL in the classical meaning (it is \emph{additive}, \emph{coherent} and \emph{finite}), it is not necessarily \emph{multiplicative}.
\begin{quote}
Conditional probabilities do not recognize the logical independence and the logical conditioning.
\end{quote}
This limitation is unavoidable: otherwise the derivation~(\ref{Eq:DBL:v2:Lewis:1}) of the triviality would be possible, even if $(\psi|\phi)$ is not equivalent to a classical proposition.
\section{Conclusion}
\label{Fus2004::Sec:8}
In this contribution, the conditional logics DBL and DBL$_\ast$, a slight relaxation of DBL, have been defined and studied.
These logics have been introduced as an abstraction and extrapolation of general probabilistic properties.
DBL and DBL$_\ast$ implement the essential ingredients of the Bayesian inference, including the classical nature of the sub-universe, the inference property and a related concept of logical independence.
\\[5pt]
The logics are coherent and non-trivial.
A model has been constructed for the logic DBL$_\ast$, and completeness results have been derived.
It has been shown that any probability over the classical propositions could be extended to DBL/DBL$_\ast$, in compliance with the independence relation.
Then, the probabilistic Bayesian rule has been recovered from DBL/DBL$_\ast$.
\\[5pt]
There are still many open questions.
For example, it is possible to bring some enrichment to the conditional of DBL, by means of additional axioms.
It is also possible to consider other valuation mechanisms than the probabilities.
As a perspective, many decision systems for manipulating uncertain information could be derived from this principle.
From the strict logical viewpoint, the Deterministic Bayesian Logic also offers some interesting properties.
In particular, the notion of independence in DBL has interesting logical consequences for deductions (\emph{e.g.} regularity with respect to an inference).
This property should be of interest in mathematical logic.
\section{INTRODUCTION}
Much attention has been focused on
vortices in nature,\cite{vortex1}
especially for the quantized vortex in fermionic superfluid and
superconducting systems.\cite{Rev0,Rev1,Rev2}
One of the fundamental physical quantities of the quantized vortex
is the radius of the vortex core.
Kramer and Pesch\cite{KP} have pointed out theoretically that
the radius of the vortex core
decreases proportionally to the temperature $T$
at low temperatures,
a much stronger shrinkage than anticipated
from the temperature dependence of the coherence length.
This anomalously strong shrinking of the vortex core,
the so-called Kramer-Pesch (KP) effect,\cite{KP,huebener}
occurs when the fermionic spectrum of
vortex bound states\cite{caroli,hess89,gygi,schopohl2,rainer,haya96,y-tanaka}
crosses the Fermi level.\cite{volovik93}
The temperature dependence of the vortex core
has been theoretically
investigated in the case of
superconductors.\cite{gygi,PK,brandt,rammer,oka96,golubov97,haya98,m-kato,m-kato2,kato01}
The low-temperature limit of the vortex core radius
was discussed also for dilute Fermi superfluids\cite{elgaroy01-1}
and superfluid neutron star matter.\cite{elgaroy99}
There are several length scales
which characterize the radius of the
vortex core.\cite{gygi,elgaroy99,sonier04-1}
One of them is the coherence length $\xi(T)$.
The pair potential $\Delta(r)$ depressed inside the vortex core
is restored at a distance $r\sim \xi(T)$
away from the vortex center $r=0$.\cite{ovchinnikov}
However,
$\xi(T)$ is almost temperature independent at low temperatures.
Another length scale is related to the slope of the pair potential
at the vortex center, which is defined as
\begin{equation}
\frac{1}{\xi_1}
=\frac{1}{\Delta(r \rightarrow \infty)}
\lim_{r \rightarrow 0} \frac{\Delta(r)}{r}.
\label{eq:KP}
\end{equation}
The KP effect means $\xi_1(T) \propto T $
for $T \to 0$,
while the pair potential $\Delta(r)$ is restored
at a distance $r\sim\xi$ ($\gg\xi_1$) at low temperatures.\cite{ovchinnikov}
Since the spatial profiles of the pair potential $\Delta(r)$
and the supercurrent density $j(r)$
in the vicinity of the vortex center
are related to each other through the low-energy vortex bound states,
the length $\xi_1$ scales with
the distance $r=r_0$ at which $|j(r)|$ reaches
its maximum value.\cite{KP,sonier04-1,sonier00}
Therefore, the KP effect gives rise to
$r_0 \sim \xi_1 \rightarrow 0$ ($T \rightarrow 0$) linear in $T$.
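The definition of $\xi_1$ can be illustrated on a model core profile. The tanh shape of $\Delta(r)$ and all numerical values below are assumptions for illustration only, not results of the paper:

```python
import math

# Illustrative core profile (not from the paper):
#   Delta(r) = Delta_inf * tanh(r / xi1)
Delta_inf, xi1_true = 1.0, 0.35

def Delta(r):
    return Delta_inf * math.tanh(r / xi1_true)

# Definition of the core radius:
#   1/xi1 = (1/Delta(r -> inf)) * lim_{r -> 0} Delta(r)/r,
# estimated here by evaluating the slope at a small radius.
r0 = 1e-6
xi1_est = Delta_inf / (Delta(r0) / r0)
assert abs(xi1_est - xi1_true) < 1e-9
```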
Recently, $\mu$SR experiments were performed
on the $s$-wave superconductor NbSe$_2$
to observe the KP effect.\cite{sonier04-1,sonier00,sonier97,miller}
The experimental data of spin precessions,
which correspond to
the Fourier transformation of
the Redfield pattern
(the magnetic field distribution)
of a vortex lattice,
are fitted by a theoretical formula,\cite{yaouanc}
extracting the information on the spatial profile of
the supercurrent density $j(r)$ around vortices.\cite{sonier04-1,sonier00}
While a shrinking vortex core was observed, it was weaker than
the theoretical expectation of the KP effect for a clean superconductor.
That is,
the observed vortex core radius $r_0$
indeed shrank linearly in $T$,
but extrapolated towards
a finite value in the zero-temperature limit,
indicating a saturation of the KP core shrinkage
at low temperatures.
The measurement of the vortex core radius $r_0$ by $\mu$SR
was also performed for CeRu$_2$ with a similar result.\cite{kadono01}
Since the KP effect is directly connected with
the low-energy vortex bound states,
the energy level broadening due to impurities may give rise to
a modification of the KP effect,\cite{KP}
in particular, a saturation as observed.
Impurities are inevitably present in any solid-state material,
so that such
a saturation effect may well occur.
It even turns out that a rather
small concentration of impurities,
leaving the material moderately clean,
can have a strong influence,
as observed
in experiments.\cite{sonier04-1,sonier00,sonier97,miller,kadono01}
There are several factors which influence
the behavior of vortex core radius.
(i) Impurity effects.\cite{KP}
(ii) The discreteness of the energy levels of
the vortex bound states.\cite{KP,haya98,m-kato,m-kato2}
(iii) Vortex lattice
effects.\cite{sonier04-1,sonier00,golubov94,oka99,miranovic,kogan,sonier97-2}
(iv) Fermi liquid effects $F_1^{\rm s}$.\cite{fogelstrom}
(v) Antiferromagnetic correlations induced inside
the vortex core as suggested for
cuprate superconductors.\cite{kadono04}
(vi) The presence of multiple gaps
in multi-band superconductors.\cite{koshelev,haya}
In this paper,
we investigate
the temperature dependence of
the vortex core radius $\xi_1(T)$ incorporating
the effect of nonmagnetic impurities on the level of the Born approximation,
for both a chiral $p$-wave
and an $s$-wave superconductor.
For this purpose,
we set up a model
of a single vortex in a two-dimensional superconductor, where the chiral
$p$-wave pairing has the form\cite{rice,maeno}
${\bf d}({\bar {\bf k}})={\bar {\bf z}} ({\bar k}_x \pm i {\bar k}_y)$
as, for example, realized in
Sr$_2$RuO$_4$.
The examination of the temperature dependence of $ \xi_1(T) $ suggests
that under certain conditions the chiral $p$-wave state shows a more
robust KP effect against impurities
than an $s$-wave state.
This behavior is connected with the compensation of the intrinsic
phase structure and the vortex phase winding, if chirality and phase winding
are oppositely
oriented.\cite{kato01,volovik99,kato00,kato02,haya02-1,haya03-1,kato03,matsumoto00,matsumoto01,Heeb99}
Therefore, we argue that chiral $p$-wave superconductors might be
better candidates for the experimental observation of the KP effect.
The chiral $p$-wave superconductivity has been proposed for
Sr$_2$RuO$_4$
with rather strong experimental evidence,\cite{rice,maeno}
and more
recently for Na$_x$CoO$_{2}\cdot y$H$_{2}$O.\cite{tanaka,han}
The paper is organized as follows.
In Sec.\ 2, the self-consistent system of equations
for the superconducting order parameter and the impurity self energy
is formulated
on the basis of the quasiclassical theory of superconductivity.
In Sec.\ 3, the systems of
the $s$-wave vortex and the chiral $p$-wave vortex are described.
The numerical results for the vortex core radius $\xi_1(T)$
are shown in Sec.\ 4.
The summary is given in Sec.\ 5.
\section{QUASICLASSICAL THEORY}
Our theoretical analysis of the KP effect will be based on
the quasiclassical theory of
superconductivity,\cite{Eilenberger,kusunose}
which allows us to take inhomogeneous structures such as a vortex into account
and at the same time to deal with impurity scattering
in a straightforward way.
We start with
the quasiclassical Green function
in the presence of nonmagnetic impurities,
\begin{equation}
{\hat g}(i\omega_n,{\bf r},{\bar{\bf k}})=
-i\pi
\pmatrix{
g &
if \cr
-if^{\dagger} &
-g \cr
},
\label{eq:qcg}
\end{equation}
which is the solution of the Eilenberger equation
\begin{equation}
i {\bf v}_{\rm F}({\bar{\bf k}}) \cdot
{\bf \nabla}{\hat g}
+ \bigl[ i\omega_n {\hat \tau}_{3}-{\hat \Delta}-{\hat \sigma},
{\hat g} \bigr]
=0.
\label{eq:eilen}
\end{equation}
Here, ${\hat \Delta}$ is
the superconducting order parameter
\begin{equation}
{\hat \Delta}({\bf r},{\bar{\bf k}})=
\pmatrix{
0 &
\Delta \cr
-\Delta^{*} &
0 \cr
},
\label{eq:SC}
\end{equation}
and ${\hat \sigma}$ denotes the self energy correction
due to impurity scattering
\begin{equation}
{\hat \sigma}(i\omega_n,{\bf r},{\bar{\bf k}})=
\pmatrix{
\sigma_{11} &
\sigma_{12} \cr
\sigma_{21} &
\sigma_{22} \cr
}.
\label{eq:IMP}
\end{equation}
The Eilenberger equation
is supplemented by the normalization condition
${\hat g}(i\omega_n,{\bf r},{\bar{\bf k}})^2
=-\pi^2{\hat 1}$.\cite{Eilenberger,kusunose}
The vector
${\bf r}$ is the real-space coordinates
and
the unit vector
${\bar{\bf k}}$
represents
the direction of
the wave vector
on the Fermi surface.
${\bf v}_{\rm F}({\bar{\bf k}})$ is the Fermi velocity,
$\omega_n=\pi T (2n+1)$ is the fermionic Matsubara frequency
(with the temperature $T$ and the integer $n$),
${\hat \tau}_{3}$ is the third Pauli matrix
in the $2\times2$ particle-hole space,
and
the commutator $[{\hat a},{\hat b}]={\hat a}{\hat b}-{\hat b}{\hat a}$.
We will consider
an isolated single vortex
in extreme type-II
superconductors (Ginzburg-Landau parameter $\kappa \gg 1$),
and therefore
the vector potential
is neglected in the Eilenberger equation (\ref{eq:eilen}).
Throughout the paper, vectors with the upper bar denote unit vectors
and we use units in which $\hbar = k_{\rm B} = 1$.
We now define an alternative impurity self energy ${\hat \Sigma}$ as
\begin{equation}
{\hat \Sigma}(i\omega_n,{\bf r},{\bar{\bf k}})=
\pmatrix{
\Sigma_{\rm d} &
\Sigma_{12} \cr
\Sigma_{21} &
-\Sigma_{\rm d} \cr
}
=
\pmatrix{
\frac{1}{2}(\sigma_{11}-\sigma_{22}) &
\sigma_{12} \cr
\sigma_{21} &
-\frac{1}{2}(\sigma_{11}-\sigma_{22}) \cr
}.
\label{eq:IMP2}
\end{equation}
The original impurity self energy (\ref{eq:IMP}) can be expressed as
\begin{equation}
{\hat \sigma}={\hat \Sigma}
+\frac{\sigma_{11}}{2} {\hat 1}
+\frac{\sigma_{22}}{2} {\hat 1}.
\label{eq:IMP3}
\end{equation}
Hence,
we rewrite the Eilenberger equation (\ref{eq:eilen}) as
\begin{equation}
i {\bf v}_{\rm F}({\bar{\bf k}}) \cdot
{\bf \nabla}{\hat g}
+ \bigl[ i{\tilde \omega}_n {\hat \tau}_{3}-\hat{\tilde \Delta},
{\hat g} \bigr]
=0,
\label{eq:eilen2}
\end{equation}
with the renormalized Matsubara frequency
\begin{equation}
i{\tilde \omega}_n = i\omega_n - \Sigma_{\rm d},
\label{eq:self-w}
\end{equation}
and the renormalized superconducting order parameter
\begin{equation}
\hat{\tilde \Delta}=
\pmatrix{
0 &
\Delta + \Sigma_{12} \cr
- (\Delta^{*} - \Sigma_{21}) &
0 \cr
}.
\label{eq:self-delta}
\end{equation}
We restrict ourselves here to $s$-wave scattering at the impurities.
The single-impurity $t$-matrix\cite{Thuneberg84}
is then calculated as
\begin{eqnarray}
{\hat t}(i\omega_n, {\bf r}) =
v{\hat 1} + N_{0}v
\Bigl\langle {\hat g}(i\omega_n, {\bf r},{\bar {\bf k}}) \Bigr\rangle
{\hat t}(i\omega_n, {\bf r}),
\label{eq:t-matrix}
\end{eqnarray}
where
$v$ is the impurity potential for the $s$-wave scattering channel,
$N_{0}$ is the normal-state density of states at the Fermi level, and
the brackets $\langle \cdots \rangle$ denote
the average over the Fermi surface.
The impurity self energy ${\hat \sigma}$
is given by
\begin{eqnarray}
{\hat \sigma}(i\omega_n, {\bf r})
=
n_{\rm i} {\hat t}(i\omega_n, {\bf r})
=
\frac{n_{\rm i} v}{D} \Bigl[
{\hat 1} + N_{0} v
\Bigl\langle {\hat g}(i\omega_n, {\bf r},{\bar {\bf k}}) \Bigr\rangle
\Bigr],
\label{eq:imp-self}
\end{eqnarray}
where
the denominator is
\begin{eqnarray}
D=1+(\pi N_0 v)^2 \bigl[
\langle g \rangle^2
+ \langle f \rangle
\langle f^{\dagger} \rangle
\bigr],
\label{eq:denomi}
\end{eqnarray}
and $n_{\rm i}$ is the density of impurities.
The scattering phase shift $\delta_0$ is defined by
$\tan \delta_0 = -\pi N_0 v$.
In this paper, we investigate the Born limit
($\delta_0 \ll 1$).
The impurity self energy (\ref{eq:imp-self}) in this limit becomes
\begin{eqnarray}
{\hat \sigma}(i\omega_n, {\bf r})
&=& \nonumber
n_{\rm i} v {\hat 1} + \frac{\Gamma_{\rm n}}{\pi}
\Bigl\langle {\hat g}(i\omega_n, {\bf r},{\bar {\bf k}}) \Bigr\rangle \\
&=&
n_{\rm i} v {\hat 1} + \Gamma_{\rm n}
\pmatrix{
-i \langle g \rangle &
\langle f \rangle \cr
- \langle f^{\dagger} \rangle &
i \langle g \rangle \cr
},
\label{eq:imp-self2}
\end{eqnarray}
where we have defined the impurity scattering rate in the normal state as
$\Gamma_{\rm n}=1/2\tau_{\rm n}=\pi n_{\rm i} N_0 v^2$.
The mean free path $l$ is defined by
$l=v_{\rm F}\tau_{\rm n}=v_{\rm F}/2\Gamma_{\rm n}$.
From Eqs.\ (\ref{eq:IMP2}) and (\ref{eq:imp-self2}),
we obtain the self-consistency equations for ${\hat \Sigma}$ as
\begin{eqnarray}
\Sigma_{\rm d}(i\omega_n, {\bf r})
=
-i \Gamma_{\rm n}
\Bigl\langle g(i\omega_n, {\bf r},{\bar {\bf k}}) \Bigr\rangle,
\label{eq:sigma1}
\end{eqnarray}
\begin{eqnarray}
\Sigma_{12}(i\omega_n, {\bf r})
=
\Gamma_{\rm n}
\Bigl\langle f(i\omega_n, {\bf r},{\bar {\bf k}}) \Bigr\rangle,
\label{eq:sigma2}
\end{eqnarray}
\begin{eqnarray}
\Sigma_{21}(i\omega_n, {\bf r})
=
- \Gamma_{\rm n}
\Bigl\langle f^{\dagger}(i\omega_n, {\bf r},{\bar {\bf k}}) \Bigr\rangle.
\label{eq:sigma3}
\end{eqnarray}
The self-consistency equation for $\Delta$,
called the gap equation, reads
\begin{equation}
\Delta({\bf r},{\bar {\bf k}})
=\pi T g F({\bar {\bf k}})
\sum_{-\omega_{\rm c} < \omega_n < \omega_{\rm c}}
\Bigl\langle F^{*}({\bar {\bf k}}') f(i\omega_n, {\bf r},{\bar {\bf k}}')
\Bigr\rangle,
\label{eq:gap}
\end{equation}
where the cutoff energy is $\omega_{\rm c}$,
the pairing interaction is defined as
$g F({\bar {\bf k}}) F^{*}({\bar {\bf k}}')$ with
the coupling constant $g$ given by
\begin{equation}
\frac{1}{g}
=
\ln\Bigl(\frac{T}{T_{{\rm c} 0}} \Bigr)
+ \sum_{0 \le n < (\omega_{\rm c}/\pi T -1)/2} \frac{2}{2n+1}.
\label{eq:coupling}
\end{equation}
We define $T_{{\rm c} 0}$ as the superconducting critical temperature
in the absence of impurities.
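As a sanity check on the coupling-constant formula, note that the Matsubara sum grows like $\ln(\omega_{\rm c}/T)$ and cancels the explicit $\ln(T/T_{{\rm c}0})$, so $1/g$ is nearly independent of $T$, as it must be for a material constant. A minimal sketch in reduced units (all numerical values are illustrative; `inv_coupling` is a hypothetical helper):

```python
import math

# 1/g = ln(T/T_c0) + sum_{0 <= n < (omega_c/(pi T) - 1)/2} 2/(2n+1)
def inv_coupling(T, T_c0, omega_c):
    n_terms = math.ceil((omega_c / (math.pi * T) - 1.0) / 2.0)
    return math.log(T / T_c0) + sum(2.0 / (2 * n + 1) for n in range(n_terms))

# The Matsubara sum grows like ln(omega_c/T) and cancels ln(T/T_c0), so
# 1/g depends on T only through the cutoff discretization.
a = inv_coupling(T=0.5, T_c0=1.0, omega_c=20.0)
b = inv_coupling(T=0.2, T_c0=1.0, omega_c=20.0)
assert a > 0 and b > 0
assert abs(a - b) < 0.1     # nearly T-independent for fixed T_c0, omega_c
```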
Finally, the system of equations to be solved consists of
the Eilenberger equation (\ref{eq:eilen2}) and
the self-consistency equations for
the impurity self-energies
$\bigl[$Eqs.\ (\ref{eq:sigma1})--(\ref{eq:sigma3})$\bigr]$
and
for the superconducting order parameter
$\bigl[$Eq.\ (\ref{eq:gap})$\bigr]$.
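In the spatially uniform (bulk) limit, where the gradient term vanishes and the Fermi-surface averages become trivial, this self-consistent system reduces to an algebraic fixed point. The following sketch is an illustrative toy, not the paper's numerical method (`renormalize` is a hypothetical helper); it iterates the Born-limit renormalizations for a bulk $s$-wave state, using the standard bulk solution $g={\tilde\omega}_n/\sqrt{{\tilde\omega}_n^2+{\tilde\Delta}^2}$, $f={\tilde\Delta}/\sqrt{{\tilde\omega}_n^2+{\tilde\Delta}^2}$, and recovers Anderson's theorem: frequency and gap share one renormalization factor.

```python
import math

def renormalize(omega_n, Delta, Gamma_n, n_iter=200):
    """Born-limit bulk fixed point:
    w~ = omega_n + Gamma_n * g,  d~ = Delta + Gamma_n * f,
    with g = w~/sqrt(w~^2 + d~^2) and f = d~/sqrt(w~^2 + d~^2)."""
    w, d = omega_n, Delta                 # initial guess: bare values
    for _ in range(n_iter):
        denom = math.sqrt(w * w + d * d)
        w = omega_n + Gamma_n * w / denom
        d = Delta + Gamma_n * d / denom
    return w, d

w_t, d_t = renormalize(omega_n=0.3, Delta=1.0, Gamma_n=0.5)
# Anderson's theorem in the bulk s-wave state: w~/omega_n = d~/Delta,
# so the gap equation is unaffected by nonmagnetic impurities.
assert abs(w_t / 0.3 - d_t / 1.0) < 1e-10
```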
\section{$S$-WAVE AND CHIRAL $P$-WAVE VORTEX SYSTEMS}
In this study,
the system is assumed to be an isotropic two-dimensional conduction layer
perpendicular to the vorticity along the $z$ axis.
In the circular coordinate system we use
${\bf r}=(r\cos\phi,r\sin\phi)$ and
${\bar{\bf k}}
=(\cos\theta, \sin\theta)$.
We assume a circular Fermi surface and
${\bf v}_{\rm F}({\bar{\bf k}})
=v_{\rm F}{\bar{\bf k}}
=(v_{\rm F}\cos\theta,v_{\rm F}\sin\theta)$
in the Eilenberger equations (\ref{eq:eilen}) and (\ref{eq:eilen2}).
The average over the Fermi surface
reads:
$\langle \cdots \rangle = \int_0^{2\pi} \cdots d \theta/2\pi$.
\subsection{Pair Potential}
The single vortex is situated at the origin ${\bf r}=0$.
In the case of the $s$-wave pairing,
the pair potential (i.e., the superconducting order parameter)
around the vortex
is expressed as
\begin{equation}
\Delta_{\rm s}({\bf r})=\Delta_{\rm s}(r) e^{i\phi},
\label{eq:op-s}
\end{equation}
where we can take
$\Delta_{\rm s}(r)$ in the right-hand side
to be real
because of axial symmetry of the system.
The $s$-wave pairing means
$F({\bar {\bf k}})=1$ and the gap equation (\ref{eq:gap}) reads:
\begin{equation}
\Delta_{\rm s}({\bf r})
=\pi T g_{\rm s}
\sum_{-\omega_{\rm c} < \omega_n < \omega_{\rm c}}
\int_0^{2\pi} \frac{d \theta}{2\pi}
f(i\omega_n, {\bf r}, \theta),
\label{eq:gap-s}
\end{equation}
where $g_{\rm s}$ follows Eq.\ (\ref{eq:coupling}).
On the other hand,
in the case of the chiral $p$-wave pairing,
$F({\bar {\bf k}})={\bar k}_x \pm i{\bar k}_y = \exp(\pm i\theta)$.
The pair potential around the vortex
has two possible forms
depending on whether
the chirality and vorticity are parallel or
antiparallel to each other.\cite{kato01,matsumoto01,Heeb99}
Thus, there are two kinds of vortex.
One form is
\begin{eqnarray}
\Delta^{({\rm n})}({\bf r},\theta)
&=& \nonumber
\Delta_{+}^{({\rm n})}({\bf r}) e^{+i\theta}
+ \Delta_{-}^{({\rm n})}({\bf r}) e^{-i\theta} \\
&=&
\Delta_{+}^{({\rm n})}(r) e^{i(\theta-\phi)}
+ \Delta_{-}^{({\rm n})}(r) e^{i(-\theta+\phi)},
\label{eq:op-pm}
\end{eqnarray}
where the chirality (related to the phase ``$+\theta$")
and vorticity (``$-\phi$") are antiparallel
(``negative vortex").
The other form is
\begin{eqnarray}
\Delta^{({\rm p})}({\bf r},\theta)
&=& \nonumber
\Delta_{+}^{({\rm p})}({\bf r}) e^{+i\theta}
+ \Delta_{-}^{({\rm p})}({\bf r}) e^{-i\theta} \\
&=&
\Delta_{+}^{({\rm p})}(r) e^{i(\theta+\phi)}
+ \Delta_{-}^{({\rm p})}(r) e^{i(-\theta+3\phi)},
\label{eq:op-pp}
\end{eqnarray}
where the chirality (``$+\theta$") and vorticity (``$+\phi$") are parallel
(``positive vortex").
We have assumed that $\Delta_{+}^{({\rm n,p})}$ is the dominant component
in both Eqs.\ (\ref{eq:op-pm}) and (\ref{eq:op-pp}),
and that the other component $\Delta_{-}^{({\rm n,p})}$
is a minor one, induced with smaller amplitude inside the vortex core.
The axial symmetry allows us to take
$\Delta_{\pm}^{({\rm n,p})}(r)$
to be real
in each second line of Eqs.\ (\ref{eq:op-pm}) and (\ref{eq:op-pp}).
Far away from the vortex,
the dominant component
$\Delta_{+}^{({\rm n,p})}(r \rightarrow \infty)$
is finite,
while
the induced minor one vanishes,
$\Delta_{-}^{({\rm n,p})}(r \rightarrow \infty) \rightarrow 0$,
namely
\begin{eqnarray}
\Delta^{({\rm n})}(r \rightarrow \infty, \phi;\theta)=
\Delta_{+}^{({\rm n})}(r \rightarrow \infty) e^{i(\theta-\phi)},
\label{eq:op-pm2}
\end{eqnarray}
\begin{eqnarray}
\Delta^{({\rm p})}(r \rightarrow \infty, \phi;\theta)=
\Delta_{+}^{({\rm p})}(r \rightarrow \infty) e^{i(\theta+\phi)}.
\label{eq:op-pp2}
\end{eqnarray}
In the clean limit,
$\Delta_{+}^{({\rm n,p})}(r \rightarrow \infty)
=\Delta_{\rm BCS}(T)$
$\bigl[\Delta_{\rm BCS}(T)$ is the BCS gap amplitude$\bigr]$.
The gap equation (\ref{eq:gap}) reads now:
\begin{equation}
\Delta_{\pm}^{({\rm n,p})}({\bf r})
=\pi T g_{\rm p}
\sum_{-\omega_{\rm c} < \omega_n < \omega_{\rm c}}
\int_0^{2\pi} \frac{d \theta}{2\pi}
e^{\mp i\theta}
f(i\omega_n, {\bf r}, \theta),
\label{eq:gap-p}
\end{equation}
where $g_{\rm p}$ follows Eq.\ (\ref{eq:coupling}).
\subsection{Axial Symmetry and Boundary Condition}
The numerical calculation of
the self-consistent pair potential $\Delta$
and impurity self energy ${\hat \Sigma}$
requires us to restrict ourselves to
a finite spatial region,
which we choose to be axially symmetric with
a cutoff radius $ r_{\rm c} $.
Therefore, it is necessary
to fix
the values of $\Delta$ and ${\hat \Sigma}$
for $r > r_{\rm c}$, outside the boundary $r=r_{\rm c}$,
when solving the Eilenberger equation (\ref{eq:eilen2}).
We set,
for the pair potential (\ref{eq:op-s}) of the $s$-wave vortex,
\begin{equation}
\Delta_{\rm s}(r>r_{\rm c},\phi)=\Delta_{\rm s}(r=r_{\rm c}) e^{i\phi}.
\label{eq:boundary-s}
\end{equation}
For the pair potential (\ref{eq:op-pm})
of the chiral negative $p$-wave vortex, we set
\begin{eqnarray}
\Delta^{({\rm n})}(r>r_{\rm c},\phi;\theta)
=
\Delta_{+}^{({\rm n})}(r=r_{\rm c}) e^{i(\theta-\phi)},
\label{eq:boundary-pm}
\end{eqnarray}
and for the pair potential (\ref{eq:op-pp})
of the chiral positive $p$-wave vortex
\begin{eqnarray}
\Delta^{({\rm p})}(r>r_{\rm c},\phi;\theta)
=
\Delta_{+}^{({\rm p})}(r=r_{\rm c}) e^{i(\theta+\phi)},
\label{eq:boundary-pp}
\end{eqnarray}
while $\Delta_{-}^{({\rm n,p})}(r>r_{\rm c})=0$.
Next we consider
the symmetry property and boundary condition
of the impurity self energy ${\hat \Sigma}$.
In the Eilenberger equation (\ref{eq:eilen2}),
${\hat \Sigma}$ appears in
Eqs.\ (\ref{eq:self-w}) and (\ref{eq:self-delta}) in the form of
$i\omega_n - \Sigma_{\rm d}$,
$\Delta + \Sigma_{12}$,
and
$\Delta^{*} - \Sigma_{21}$.
Owing to the axial symmetry of our model,
an axial rotation of the system
($\phi \rightarrow \phi + \alpha$ and $\theta \rightarrow \theta + \alpha$)
leads to a transformation of the impurity self energies
$\Sigma_{\rm d}$, $\Sigma_{12}$, and $\Sigma_{21}$
in
the same manner as $i\omega_n$, $\Delta$, and $\Delta^{*}$, respectively.
The Matsubara frequency $i\omega_n$ is invariant under the axial rotation,
and therefore $\Sigma_{\rm d}(i\omega_n, r,\phi)$
has no azimuthal $\phi$ dependence. Hence,
we set
\begin{eqnarray}
\Sigma_{\rm d}(i\omega_n, r,\phi)
=
\Sigma_{\rm d}(i\omega_n, r),
\label{eq:Sig-d}
\end{eqnarray}
\begin{eqnarray}
\Sigma_{\rm d}(i\omega_n,r>r_{\rm c})
=
\Sigma_{\rm d}(i\omega_n,r=r_{\rm c}),
\label{eq:boundary-Sig-d}
\end{eqnarray}
in the cases of both the $s$-wave vortex and the chiral $p$-wave vortices.
For the $s$-wave vortex,
the pair potential
$\Delta_{\rm s}({\bf r})$ in Eq.\ (\ref{eq:op-s})
transforms under an axial rotation as
$\Delta_{\rm s}(r,\phi+\alpha)
=\Delta_{\rm s}(r) \exp\bigl[i(\phi+\alpha)\bigr]
=\Delta_{\rm s}(r,\phi) \exp(i\alpha)$,
and
$\Delta_{\rm s}^{*}(r,\phi+\alpha)
=\Delta_{\rm s}^{*}(r,\phi) \exp(-i\alpha)$.
This rotation means that
$\Delta_{\rm s}+\Sigma_{\rm 12}
\rightarrow (\Delta_{\rm s}+\Sigma_{\rm 12}) \exp(i\alpha)$
and
$\Delta_{\rm s}^{*}-\Sigma_{\rm 21}
\rightarrow (\Delta_{\rm s}^{*}-\Sigma_{\rm 21}) \exp(-i\alpha)$.
Thus, the off-diagonal impurity self energies
$\Sigma_{\rm 12,21}(i\omega_n, r,\phi)$
have to possess an azimuthal $\phi$ dependence as
\begin{equation}
\Sigma_{\rm 12}(i\omega_n, r,\phi)
=\Sigma_{\rm 12}(i\omega_n, r)\exp(i\phi),
\label{eq:Sig-12-s}
\end{equation}
\begin{equation}
\Sigma_{\rm 21}(i\omega_n, r,\phi)
=\Sigma_{\rm 21}(i\omega_n, r)\exp(-i\phi).
\label{eq:Sig-21-s}
\end{equation}
We set, for the $s$-wave vortex, the boundary condition as
\begin{equation}
\Sigma_{\rm 12}(i\omega_n, r>r_{\rm c})
=
\Sigma_{\rm 12}(i\omega_n, r=r_{\rm c}),
\label{eq:boundary-Sig-12-s}
\end{equation}
\begin{equation}
\Sigma_{\rm 21}(i\omega_n, r>r_{\rm c})
=
\Sigma_{\rm 21}(i\omega_n, r=r_{\rm c}),
\label{eq:boundary-Sig-21-s}
\end{equation}
because far away from the vortex core
the anomalous Green functions averaged over the Fermi surface,
which appear in Eqs.\ (\ref{eq:sigma2}) and (\ref{eq:sigma3}),
are generally nonzero
and their amplitudes are spatially uniform,
owing to the $s$-wave pairing symmetry.\cite{haya03-1}
For the chiral negative $p$-wave vortex,
the pair potential
$\Delta^{({\rm n})}$
has the following symmetry
properties. For an axial rotation,
the pair potential
$\Delta^{({\rm n})}({\bf r},\theta)$ in Eq.\ (\ref{eq:op-pm}) transforms
as
$\Delta^{({\rm n})}(r,\phi+\alpha;\theta+\alpha)
=\Delta^{({\rm n})}(r,\phi;\theta)$
and
$\Delta^{({\rm n}) *}(r,\phi+\alpha;\theta+\alpha)
=\Delta^{({\rm n}) *}(r,\phi;\theta)$,
namely invariant.
Therefore, we find that
$\Delta^{({\rm n})}+\Sigma_{\rm 12}
\rightarrow \Delta^{({\rm n})}+\Sigma_{\rm 12}$
and
$\Delta^{({\rm n}) *}-\Sigma_{\rm 21}
\rightarrow \Delta^{({\rm n}) *}-\Sigma_{\rm 21}$
and
the off-diagonal impurity self energies
$\Sigma_{\rm 12,21}(i\omega_n, r,\phi)$ are not
$\phi$-dependent:
\begin{equation}
\Sigma_{\rm 12}(i\omega_n, r,\phi)
=\Sigma_{\rm 12}(i\omega_n, r),
\label{eq:Sig-12-pm}
\end{equation}
\begin{equation}
\Sigma_{\rm 21}(i\omega_n, r,\phi)
=\Sigma_{\rm 21}(i\omega_n, r).
\label{eq:Sig-21-pm}
\end{equation}
We set, for the chiral negative $p$-wave vortex, the boundary condition as
\begin{equation}
\Sigma_{\rm 12}(i\omega_n, r>r_{\rm c})
=
0,
\label{eq:boundary-Sig-12-pm}
\end{equation}
\begin{equation}
\Sigma_{\rm 21}(i\omega_n, r>r_{\rm c})
=
0,
\label{eq:boundary-Sig-21-pm}
\end{equation}
because far away from the vortex core
the anomalous Green functions averaged over the Fermi surface
are zero, owing to the $p$-wave pairing symmetry.\cite{haya03-1}
On the other hand, the chiral positive $p$-wave vortex
behaves differently under
axial rotation.
The pair potential
$\Delta^{({\rm p})}({\bf r},\theta)$ in Eq.\ (\ref{eq:op-pp}) transforms
as
$\Delta^{({\rm p})}(r,\phi+\alpha;\theta+\alpha)
=\Delta^{({\rm p})}(r,\phi;\theta)\exp(2i\alpha)$
and
$\Delta^{({\rm p}) *}(r,\phi+\alpha;\theta+\alpha)
=\Delta^{({\rm p}) *}(r,\phi;\theta)\exp(-2i\alpha)$,
such that
$\Delta^{({\rm p})}+\Sigma_{\rm 12}
\rightarrow (\Delta^{({\rm p})}+\Sigma_{\rm 12})\exp(2i\alpha)$
and
$\Delta^{({\rm p}) *}-\Sigma_{\rm 21}
\rightarrow (\Delta^{({\rm p}) *}-\Sigma_{\rm 21})\exp(-2i\alpha)$.
Here, the off-diagonal impurity self energies
$\Sigma_{\rm 12,21}(i\omega_n, r,\phi)$
depend on $\phi$ as
\begin{equation}
\Sigma_{\rm 12}(i\omega_n, r,\phi)
=\Sigma_{\rm 12}(i\omega_n, r) \exp(2i\phi),
\label{eq:Sig-12-pp}
\end{equation}
\begin{equation}
\Sigma_{\rm 21}(i\omega_n, r,\phi)
=\Sigma_{\rm 21}(i\omega_n, r) \exp(-2i\phi).
\label{eq:Sig-21-pp}
\end{equation}
We set, for the chiral positive $p$-wave vortex, the boundary condition as
\begin{equation}
\Sigma_{\rm 12}(i\omega_n, r>r_{\rm c})
=
0,
\label{eq:boundary-Sig-12-pp}
\end{equation}
\begin{equation}
\Sigma_{\rm 21}(i\omega_n, r>r_{\rm c})
=
0,
\label{eq:boundary-Sig-21-pp}
\end{equation}
as in the case of the negative vortex.
\section{VORTEX CORE RADIUS $\xi_{1}(T)$}
The vortex core radius $\xi_1(T)$ defined in Eq.\ (\ref{eq:KP})
is obtained from the spatial profile of the pair potential
calculated self-consistently.
We solve the Eilenberger equation (\ref{eq:eilen2})
by the method of the Riccati parametrization,\cite{schopohl2,schopohl}
and then iterate the calculation until self-consistency is reached
with the self-consistency equations
of the impurity self energies
$\bigl[$Eqs.\ (\ref{eq:sigma1})--(\ref{eq:sigma3})$\bigr]$
and that of the pair potential
for the $s$-wave vortex
$\bigl[$Eq.\ (\ref{eq:gap-s})$\bigr]$
or
for the chiral $p$-wave vortices
$\bigl[$Eq.\ (\ref{eq:gap-p})$\bigr]$.
We use an acceleration method for iterative calculations
to obtain sufficient accuracy.\cite{eschrig1}
When solving the Eilenberger equation (\ref{eq:eilen2}),
we use the boundary conditions for the pair potential
and the impurity self energies described in Sec.\ 3.2.
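The self-consistency cycle described above (solve the transport equation, update $\Delta$ and ${\hat \Sigma}$, repeat) is at heart a fixed-point iteration. Below is a minimal Python sketch using plain linear mixing on a toy scalar gap equation; the function name and the toy update rule are ours for illustration, not the actual Riccati-based scheme with the acceleration method of Ref.\ \onlinecite{eschrig1}.

```python
import math

def solve_self_consistent(update, x0, mix=0.5, tol=1e-10, max_iter=1000):
    """Fixed-point iteration with linear mixing: x <- (1-mix)*x + mix*update(x).
    A toy stand-in for the Eilenberger + gap-equation cycle (not the actual
    acceleration scheme used in the paper)."""
    x = x0
    for _ in range(max_iter):
        x_new = update(x)
        if abs(x_new - x) < tol:
            return x_new
        x = (1.0 - mix) * x + mix * x_new
    raise RuntimeError("self-consistency not reached")

# Toy gap equation Delta = tanh(Delta / t) at reduced temperature t = 0.5
t = 0.5
delta = solve_self_consistent(lambda d: math.tanh(d / t), x0=1.0)
```

In practice the update step solves the Riccati equations for all trajectories and Matsubara frequencies before re-evaluating the gap and self-energy integrals; the mixing (or a more elaborate acceleration) is what makes the outer loop converge.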
We show now the results of $\xi_1(T)$ for the three cases
introduced above.
We set the cutoff energy $\omega_{\rm c}=10 \Delta_0$
and the cutoff length $r_{\rm c}=10 \xi_0$.
Here, $\Delta_0$ is the BCS gap amplitude at zero temperature
in the absence of impurities,
and $\xi_0=v_{\rm F}/\Delta_0$.
The lowest value of the temperature at which $\xi_1$ is computed
is $T=0.02T_{{\rm c}0}$.
We have checked the influence of the finite system size
by comparing the results for $\xi_1$
obtained with
two cutoff lengths,
$r_{\rm c}=10 \xi_0$ and $20 \xi_0$,
in the clean limit.
The results coincide well with each other
below $T \sim 0.7 T_{{\rm c}0}$
(the deviations in $\xi_1$ are less than
$0.003 \xi_0$ and thus negligible),
while slight deviations appear
above $T \sim 0.8 T_{{\rm c}0}$;
these remain below
$0.014 \xi_0$
($T \le 0.9 T_{{\rm c}0}$)
and are almost invisible on plots
with the same scale as in the following figures.
\subsection{$S$-Wave Vortex}
\begin{figure}
\begin{center}
\includegraphics[scale=0.79]{fig1.eps}
\includegraphics[scale=0.79]{fig2.eps}
\end{center}
\caption{
The vortex core radius $\xi_1(T)$ (points)
in the case of the $s$-wave vortex (\ref{eq:op-s})
as a function of the temperature $T$
for several values of the impurity scattering rate $\Gamma_{\rm n}$.
Lines are guides for the eye, except for the solid line in (b).
In the plots, $\xi_1$ and $T$ are normalized by
$\xi_0$ and $T_{\rm c}$, respectively.
Here, $T_{\rm c}$ is the superconducting critical temperature,
$\xi_0$ is defined as $\xi_0=v_{\rm F}/\Delta_0$,
and $\Delta_0$ is the BCS gap amplitude at zero temperature
in the absence of impurities.
(a) $\Gamma_{\rm n} = 0$, 0.1$\Delta_0$, and $\Delta_0$.
(b) $\Gamma_{\rm n} = \Delta_0$ and $10\Delta_0$.
The solid line in (b)
is a plot of
the function,
$1/\tanh \bigl(1.74\sqrt{(T_{\rm c}/T)-1}\bigr)
\approx \Delta_0/\Delta_{\rm BCS}(T)
=v_{\rm F}/\Delta_{\rm BCS}(T)\xi_0$,
which reproduces approximately the temperature dependence
of the clean-limit BCS coherence length.
}
\label{fig:1}
\end{figure}
In Fig.\ 1,
we show the vortex core radius $\xi_1(T)$ for the $s$-wave vortex
as a function of the temperature $T$
for several values of the impurity scattering rate $\Gamma_{\rm n}$.
The critical temperature $T_{\rm c}$
remains unaffected by the nonmagnetic impurities,
namely $T_{\rm c} = T_{{\rm c}0}$ for any values of $\Gamma_{\rm n}$.
At high temperatures,
we see in Fig.\ 1 that
the vortex core radius $\xi_1$ decreases
with the increase of
the impurity scattering rate $\Gamma_{\rm n}$.
This is because
the coherence length $\xi$,
over which the pair potential significantly changes,
shrinks
with the decrease of
the quasiparticle mean free path
($\propto 1/\Gamma_{\rm n}$).\cite{tinkham}
The coherence length $\xi$
is the distance
over which
the pair potential
is restored
far away from the vortex center.
The vortex core radius $\xi_1$ defined
in the vicinity of the vortex center
is dominated
by this $\Gamma_{\rm n}$ dependence
of the coherence length $\xi$ in this temperature regime.
Pronounced $\Gamma_{\rm n}$ dependence
appears also at low temperatures.
For $\Gamma_{\rm n}=0$ (the clean limit),
the vortex core radius $\xi_1$
decreases linearly in $T$,
as expected for the KP effect.
In Fig.\ 1(a), we also show $\xi_1(T)$
for finite values of the impurity scattering rate,
$\Gamma_{\rm n}=0.1 \Delta_0$ and $\Delta_0$.
At low temperatures,
the vortex core radius $\xi_1$
{\it increases}
with the increase of $\Gamma_{\rm n}$, in contrast to
the high-temperature behavior.
This increase of $\xi_1$
indicates the saturation feature of the KP effect
due to impurities.\cite{KP}
The low-temperature vortex core radius $\xi_1$
expands upon introducing impurity scattering but still
remains much smaller than $ \xi $.
For relatively small $\Gamma_{\rm n}$ $(< \Delta_0)$,
the decrease of the coherence length $\xi$
mentioned above
has little influence
on this expansion of the vortex core radius $\xi_1$ ($\ll \xi$).
For larger $\Gamma_{\rm n}$ towards the dirty limit, however,
the decrease of the coherence length $\xi$
begins to influence
the vortex core radius $\xi_1$,
and $\xi_1$ begins to decrease with growing $\Gamma_{\rm n}$
as seen in Fig.\ 1(b).
In Fig.\ 1(b), the solid line displays
the function,
$1/\tanh \bigl(1.74\sqrt{(T_{\rm c}/T)-1}\bigr)$
$\approx \Delta_0/\Delta_{\rm BCS}(T)
=v_{\rm F}/\Delta_{\rm BCS}(T)\xi_0$.
In the dirty case
($\Gamma_{\rm n}=10\Delta_0$,
i.e., the mean free path
$l=v_{\rm F}/2\Gamma_{\rm n}=0.05\xi_0$),
$\xi_1(T)$
behaves like the clean-limit BCS coherence length
$\sim v_{\rm F}/\Delta_{\rm BCS}(T)$
below $T \sim 0.6 T_{\rm c}$,
and
is almost constant at low temperatures.
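The clean-limit temperature dependence quoted here is straightforward to evaluate numerically; a short sketch (the helper name is ours):

```python
import math

def xi_bcs_over_xi0(t_over_tc):
    """Approximate clean-limit BCS coherence length in units of xi0:
    1/tanh(1.74*sqrt(Tc/T - 1)) ~ Delta_0 / Delta_BCS(T),
    the function plotted as the solid line in Fig. 1(b)."""
    return 1.0 / math.tanh(1.74 * math.sqrt(1.0 / t_over_tc - 1.0))

# approaches 1 at low T and grows (eventually diverging) as T -> Tc
for t in (0.2, 0.5, 0.9):
    print(f"T/Tc = {t}: xi/xi0 = {xi_bcs_over_xi0(t):.3f}")
```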
The increase of the vortex core radius $\xi_1(T)$
with increasing $T$
at high temperatures for the dirty case $\Gamma_{\rm n}=10 \Delta_0$,
is more gradual than the temperature dependence of
the clean-limit BCS coherence length
$\bigl($the solid line in Fig.\ 1(b)$\bigr)$.
This behavior is qualitatively consistent with a dirty limit result
reported by Volodin {\it et al.}\cite{golubov97}
Note that
the overall temperature dependence of
$\xi_1$ in the dirty case
is quantitatively different
from that of Ref.\ \onlinecite{golubov97} ($\xi_{\rm eff}$),
which is probably caused by the different definitions of the vortex core radius.
As displayed in Fig.\ 1(a) for the
moderately clean case
($\Gamma_{\rm n}=0.1\Delta_0$,
i.e., $l=5\xi_0$)
and even in the relatively dirty case
($\Gamma_{\rm n}=\Delta_0$,
i.e., $l=0.5\xi_0$),
the vortex core radius $\xi_1(T)$
shrinks
approximately linearly in $T$
with moderate curvature
below $T \sim 0.6 T_{\rm c}$
and saturates towards a finite value in the zero-temperature limit.
This gradual saturation
due to impurities
is
in contrast to
a sudden truncation of the KP effect
which happens
below a certain temperature related to
the discrete energy levels of
the low-lying vortex bound states.\cite{haya98,m-kato}
\subsection{Chiral $P$-Wave Vortices}
\begin{figure}
\begin{center}
\includegraphics[scale=0.79]{fig3.eps}
\end{center}
\caption{
The vortex core radius $\xi_1(T)$ (points)
in the case of
the chiral {\it positive} $p$-wave vortex (\ref{eq:op-pp})
as a function of the temperature $T$
for several values of the impurity scattering rate $\Gamma_{\rm n}$.
Lines are guides for the eye.
In the plot, $\xi_1$ and $T$ are normalized by
$\xi_0$ and $T_{{\rm c}0}$, respectively.
Here, $T_{{\rm c}0}$ is the superconducting critical temperature
in the absence of impurities,
$\xi_0$ is defined as $\xi_0=v_{\rm F}/\Delta_0$,
and $\Delta_0$ is the BCS gap amplitude at zero temperature
in the absence of impurities.
}
\label{fig:2}
\end{figure}
In the chiral $p$-wave superconductors
and generally in unconventional superconductors,
the superconducting critical temperature
decreases in the presence of impurities.
The unit of the temperature $T$ in
Figs.\ 2 and 3 is $T_{{\rm c}0}$
(the superconducting critical temperature in the absence of impurities).
We obtain $\xi_1$
from the dominant components $\Delta_{+}^{({\rm n,p})}(r)$
in Eqs.\ (\ref{eq:op-pm}) and (\ref{eq:op-pp}).
\begin{figure}
\begin{center}
\includegraphics[scale=0.79]{fig4.eps}
\end{center}
\caption{
The vortex core radius $\xi_1(T)$ (points)
in the case of
the chiral {\it negative} $p$-wave vortex (\ref{eq:op-pm})
as a function of the temperature $T$
for several values of the impurity scattering rate $\Gamma_{\rm n}$.
Lines are guides for the eye.
In the plot, $\xi_1$ and $T$ are normalized by
$\xi_0$ and $T_{{\rm c}0}$, respectively.
Here, $T_{{\rm c}0}$ is the superconducting critical temperature
in the absence of impurities,
$\xi_0$ is defined as $\xi_0=v_{\rm F}/\Delta_0$,
and $\Delta_0$ is the BCS gap amplitude at zero temperature
in the absence of impurities.
}
\label{fig:3}
\end{figure}
We show in Fig.\ 2
the vortex core radius $\xi_1(T)$
in the case of the positive vortex
$\Delta^{(\rm p)}$
$\bigl[$Eq.\ (\ref{eq:op-pp})$\bigr]$.
At low temperatures, $\xi_1(T)$ shrinks
approximately linearly in $T$
with moderate curvature
and saturates towards a finite value in the zero-temperature limit,
as in the case of the $s$-wave vortex.
The vortex core radius $\xi_1$ expands
with the increase of $\Gamma_{\rm n}$,
owing to the increase of
the coherence length $\xi$ with the suppression of
the pair potential far away from the vortex core.
Figure 3 displays $\xi_1(T)$ for the negative vortex
$\Delta^{(\rm n)}$
$\bigl[$Eq.\ (\ref{eq:op-pm})$\bigr]$.
In contrast to the cases of the $s$-wave vortex
and the positive vortex,
the vortex core radius
$\xi_1(T)$ strongly decreases
even in the presence of impurities,
and shrinks toward almost zero
for the moderately clean cases
below $\Gamma_{\rm n} \sim 0.2\Delta_0$.
The reason why
the vortex core shrinkage is robust against impurities
can be understood
in the following way.
In the Eilenberger equations (\ref{eq:eilen}) and (\ref{eq:eilen2}),
the term which includes the impurity self energy is written as
\begin{equation}
\bigl[ {\hat \sigma},{\hat g} \bigr]
=
\bigl[ {\hat \Sigma},{\hat g} \bigr]
=
\frac{n_{\rm i} N_0 v^2}{D}\bigl[ \langle{\hat g}\rangle,{\hat g} \bigr],
\label{eq:imp-term}
\end{equation}
where we referred to Eqs.\ (\ref{eq:IMP3}) and (\ref{eq:imp-self}).
Therefore, if the Fermi-surface-averaged Green function
$\langle{\hat g}\rangle$ is equivalent to
the original Green function ${\hat g}$
$\bigl($namely, if $\langle{\hat g}\rangle = {\hat g}$$\bigr)$,
the above term (\ref{eq:imp-term})
becomes zero and the impurity does not play a role
in the Eilenberger equation.
Such a special situation occurs in the negative vortex (\ref{eq:op-pm})
of the chiral $p$-wave phase,
and not in the positive vortex (\ref{eq:op-pp})
or other usual vortices.\cite{haya02-1,haya03-1}
It corresponds to a local restoration of Anderson's theorem
inside the vortex core.\cite{volovik99,kato00,kato02,haya02-1,haya03-1,kato03,matsumoto00}
This negative vortex (\ref{eq:op-pm})
is more favorable energetically
than the positive vortex (\ref{eq:op-pp})
at least in the clean limit,\cite{Heeb99}
and therefore the negative vortex is likely to exist in
the chiral $p$-wave superconductors.
\section{SUMMARY}
We investigated the temperature dependence of
the vortex core radius $\xi_1(T)$ defined in Eq.\ (\ref{eq:KP}),
incorporating
the effect of nonmagnetic impurities in the Born limit.
The isolated single vortex in the isotropic two-dimensional system
was considered
for the $s$-wave pairing symmetry
and the chiral $p$-wave pairing symmetry.
In the case of the $s$-wave vortex (\ref{eq:op-s}),
as seen in Fig.\ 1,
{\it at low temperatures}
the vortex core radius $\xi_1$
{\it increases}
with the increase of the impurity scattering rate $\Gamma_{\rm n}$
up to $\Gamma_{\rm n} \sim \Delta_0$
owing to the saturation of the KP effect due to impurities.
In contrast,
{\it at high temperatures} the vortex core radius $\xi_1$
{\it decreases}
with the increase of $\Gamma_{\rm n}$
owing to the decrease of the coherence length $\xi$ due to impurities.
In the case of
the $s$-wave vortex (\ref{eq:op-s}) of the moderately clean state
($\Gamma_{\rm n} = 0.1\Delta_0$,
i.e., $l = 5\xi_0$)
and of the relatively dirty state
($\Gamma_{\rm n}=\Delta_0$,
i.e., $l=0.5\xi_0$),
as seen in Fig.\ 1(a),
at low temperatures
the vortex core radius $\xi_1(T)$
{\it shrinks approximately linearly in $T$
with moderate curvature}
and saturates towards a finite value in the zero-temperature limit.
This gradual saturation
due to impurities
is in contrast to
a sudden truncation of the KP effect due to the discreteness in
the energy spectrum of the low-lying vortex bound states.\cite{haya98,m-kato}
In the case of the chiral $p$-wave pairing system,
as seen in Fig.\ 2,
for the positive vortex (\ref{eq:op-pp})
the shrinkage of
the vortex core radius $\xi_1(T)$ saturates towards a finite value
in the zero-temperature limit
owing to impurity scattering,
analogous to the $s$-wave vortex.
For the negative vortex (\ref{eq:op-pm}), however,
the local restoration of
Anderson's theorem
inside the vortex core\cite{volovik99,kato00,kato02,haya02-1,haya03-1,kato03,matsumoto00}
yields a KP effect little affected by impurity scattering
and,
as seen in Fig.\ 3,
the vortex core radius $\xi_1(T)$
{\it strongly shrinks
linearly in $T$
at low temperatures
even in the presence of impurities}.
It would naturally be highly desirable to establish
the KP effect experimentally
beyond the present level.
Our analysis shows that impurity scattering, which is
harmful to the KP effect in conventional superconductors,
can under certain
conditions be harmless in a chiral $p$-wave superconductor,
making this kind of
system a good candidate for experimental tests.
On the other hand,
we expect that
the rather weak shrinkage of the vortex core observed
in NbSe$_2$ (Refs.\ \onlinecite{sonier04-1,sonier00,sonier97,miller})
and CeRu$_2$ (Ref.\ \onlinecite{kadono01})
may be partly explained
in terms of
the impurity effect,
i.e.,
a vortex core shrinkage
approximately linear in $T$
with moderate curvature
and its saturation towards a finite value
in the zero-temperature limit $\bigl($Fig.\ 1(a)$\bigr)$.
Rather large extrapolated values of
the vortex core radius at zero temperature
observed experimentally
might be partly attributed to effects of multiple gaps.
Finally,
we mention a multi-gap effect on the KP effect.
Our preliminary results for $r_0 (T)$
in a two-gap model
show that
a contribution from
the Fermi surface with a smaller gap to the total supercurrent density
$j(r)$ around a vortex
makes the position $r_0$, at which $|j(r)|$ has its maximum value,
shift outward away from the vortex center $r=0$,
leading to a finite $r_0$ at zero temperature
in spite of the clean limit.\cite{haya}
Our detailed results for the KP effect in a two-gap superconductor
will be reported elsewhere.
Sr$_2$RuO$_4$ (Refs.\ \onlinecite{maeno,agterberg,kusunose2})
and NbSe$_2$ (Ref.\ \onlinecite{nbse2})
have multiple bands and may be effectively two-gap superconductors.
MgB$_2$ is a typical two-gap
superconductor.\cite{choi02}
Further investigations on the vortex core shrinkage
in terms of the multi-gap effects
are left for future experimental and theoretical studies.
\section*{ACKNOWLEDGMENTS}
We would like to thank N.\ Schopohl, T.\ Dahm, and S.\ Graser
for enlightening discussions.
One of the authors (N.H.) is grateful for the support by
2003 JSPS Postdoctoral Fellowships for Research Abroad.
We acknowledge gratefully financial support from the Swiss Nationalfonds.
\section{INTRODUCTION}
It is not possible to measure the mass composition of primary cosmic rays (PCR) in the
energy range $10^{15} - 10^{20}$~eV by direct methods. One must therefore resort to
indirect methods, in which measurements of different components of the extensive air
shower (EAS) are used for such estimates. These can be characteristics of the
longitudinal or lateral development of the shower in the air. Usually this involves
analyzing the shower components that are most sensitive to the composition and that differ
from each other in the way they are formed and absorbed in the atmosphere, for
example, the charged-particle flux (electrons, muons) or the flux of \^{C}erenkov or
ionization radiation.
\section{ENERGY TRANSFERRED TO THE ELECTROMAGNETIC EAS COMPONENT}
Fig.\ref{fig1} presents the experimental data of the Yakutsk EAS array and calculations by
models with decelerated and moderate dissipation of the energy into the electromagnetic
EAS component: a quasiscaling model (solid line) and QGSJET (dashed line)~\cite{bib1}.
Fig.\ref{fig1} shows both the agreement between the experimental data and the calculations by the
QGSJET model (proton) in the region $E_{0} \ge 3 \cdot 10^{18}$~eV, and the disagreement at $E_{0}
\le 3 \cdot 10^{18}$~eV. The scaling model gives a noticeably greater value of $E_{\mathrm{m}} /
E_{0}$ than the experimental data, which is doubtless also connected with the breaking
of the scaling function in the region of ultra--high energies.
\begin{figure}
\includegraphics[width=7.5cm]{fig1.eps}
\caption{A portion of the energy transferred to the electromagnetic EAS component by
\^{C}erenkov light data at the Yakutsk array.}
\label{fig1}
\end{figure}
The experimental data in Fig.\ref{fig1} are well approximated by an expression of the form:
\begin{eqnarray}
\frac{E_{\mathrm{m}}}{E_{0}} & = & (0.964 \pm 0.011) - (0.079 \pm \nonumber \\
&& \pm 0.005) \cdot E_{0}^{- (0.147 \pm 0.008)}\mathrm{.}
\label{eq1}
\end{eqnarray}
Relation (\ref{eq1}) is of primary importance for the comparison of the estimates of $E_{0}$
obtained at the Yakutsk and Fly's Eye arrays~\cite{bib2}.
The calculations in~\cite{bib2} (see Fig.\ref{fig1}) have been carried out with the QGSJET
model for a primary proton and an iron nucleus. Good agreement with our calculations
is observed in the case of the primary proton. The comparison of the experimental data with
the calculations for the proton and the iron nucleus indicates that the mass
composition of cosmic radiation must differ between the energy region $10^{17} -
10^{18}$~eV and that above $3 \cdot 10^{18}$~eV. At $E_{0} \ge 3 \cdot 10^{18}$~eV the
mass composition is most likely close to a pure proton one.
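As a quick numerical check, the central values of the fit (\ref{eq1}) can be evaluated directly. In this sketch the helper name is ours, and $E_{0}$ must be expressed in the units used in the original fit, which this excerpt does not restate:

```python
def em_fraction(e0):
    """Central values of the fit, Eq. (1): E_m/E_0 = 0.964 - 0.079 * e0**(-0.147).
    e0 must be in the units used in the original fit (not restated here)."""
    return 0.964 - 0.079 * e0 ** (-0.147)

# the fraction grows slowly with energy and tends to 0.964 from below
print(em_fraction(1.0), em_fraction(100.0))
```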
\section{A PORTION OF MUONS WITH $E_{\mathrm{th}} \ge 1$~GeV}
Showers with $E_{0} \ge 10^{18}$~eV and zenith angles $\theta < 60^{\circ}$ have been
selected. It is required that the shower axis lie within the array and that not less than three
muon detectors (one of them at a distance of 1000~m from the shower axis) operate
during the shower. The muon flux density $\rho_{\mu}(1000)$ is taken as the median of the
densities adjusted to $\left<R\right> = 1000$~m according to a mean muon lateral
distribution function. Detectors that operated during the shower
but gave zero readings, i.e. readings below the detector threshold, were also used in the analysis.
The muon flux densities at distances of 300 and 600~m from the shower axis have been
calculated in an analogous way. The results are shown in Figs.\ref{fig2}a and \ref{fig2}b.
Calculations~\cite{bib3} have shown that $\rho_{\mu}(R)$ depends only weakly on the zenith angle in
the interval $\Delta \theta = (0^{\circ} - 60^{\circ})$. Therefore, for the comparison of
calculations with the experimental data we take a mean angle $\theta = 39^{\circ}$. In order to
take into account both the physical fluctuations in the measurement of $\rho_{\mu}(1000)$ and
the methodical ones (apparatus errors and the accuracy of locating the shower axis), we have
considered a confidence interval of one $\sigma_{\mu}^{\mathrm{mean}} = \sigma_{\mu}^{\mathrm{phys}} + \sigma_{\mu}^{\mathrm{meth}}$. In
Fig.\ref{fig2}a and \ref{fig2}b the calculation result is shown by a
\begin{figure}
\includegraphics[width=7.5cm]{fig2-1.eps}
\includegraphics[width=7.5cm]{fig2-2.eps}
\caption{a) Portion of the muons with $E_{\mathrm{th}} \ge 1$~GeV (\%): $\rho_{\mu}(300) / \rho_{\mathrm{s}}(300)$
($\bullet$) and $\rho_{\mu}(600) / \rho_{\mathrm{s}}(600)$ ($\circ$). b) $\rho_{\mu}(1000)$ vs. $E_{0}$ relations for
observed events at $10^{18} - 10^{19}$~eV (squares), $10^{19} - 10^{20}$~eV (dots) and
$10^{20} - 10^{21}$~eV (triangles). Expected $\pm 1 \sigma$ bounds for the distributions
are indicated for proton, iron and gamma--ray primaries by different curves as in the
legend.}
\label{fig2}
\end{figure}
dotted line in the case of the primary proton and by a dashed line for iron nuclei. A
solid line shows the upper and lower limits for the case where EAS with $E_{0} \ge 10^{19}$~eV
are generated by a primary $\gamma$--quantum. The calculation for the primary
$\gamma$--quantum has been taken from~\cite{bib4}. The experimental data are shown in
Fig.\ref{fig2}a by dots for $\rho_{\mu}(300)$ and by circles for $\rho_{\mu}(600)$. In Fig.\ref{fig2}b the showers
in the energy range of $10^{18} - 10^{19}$~eV are shown by squares, the showers with $E_{0}
\ge 10^{19}$~eV by dots, and the showers of maximum energy by
triangles. The comparison of the experimental data presented in Fig.\ref{fig2}a with the calculations by
the QGSJET model for the primary proton and iron nuclei confirms
the hypothesis that a considerable portion of the highest-energy EAS are
formed by protons. Their portion decreases below the energy $10^{18}$~eV. As is seen from
Fig.\ref{fig2}b, the bulk of the points (showers with $E_{0} \ge 10^{19}$~eV) falls into the
interval for the proton. 23 points fall into the zone of superposition of the proton and iron
nuclei, and 19 showers out of 116 fall into the zone of the upper boundary of the calculation for the
primary $\gamma$--quantum. This testifies that there
exists a probability that showers of such energies can be generated by neutral particles
and, in particular, by a primary $\gamma$--quantum. It is then justified to use the
analysis of the arrival directions of showers with $E_{0} \ge 10^{19}$~eV to search for
the sources of the highest-energy cosmic rays. In this case the accuracy of determination of the
arrival angles of such showers must be not worse than $(0.5^{\circ} - 1.5^{\circ})$.
\begin{figure}
\includegraphics[width=7.5cm]{fig3-1.eps}
\includegraphics[width=7.5cm]{fig3-2.eps}
\includegraphics[width=7.5cm]{fig3-3.eps}
\caption{Depth-of-maximum distributions at different fixed energies.}
\label{fig3}
\end{figure}
\section{$X_{\mathrm{max}}$ FLUCTUATIONS}
The large number of \^{C}erenkov detectors operating in individual EAS events, and also
the use of a new version of the QGSJET model, allows us to obtain quantitative estimates of
the mass composition of PCR. For this aim we have compared the experimental data (see
Fig.\ref{fig3}) with the theoretical predictions of the QGSJET model for different primary
nuclei with the use of the $\chi^{2}$ criterion. The value of $\chi^{2}$ has been determined by the
equality
\begin{equation}
\chi^{2}(X_{\mathrm{max}}) = \sum \frac{\left[N_{\mathrm{exp}}(X_{\mathrm{max}}) - N_{\mathrm{theor}}(X_{\mathrm{max}})\right]^2}{N_{\mathrm{theor}}(X_{\mathrm{max}})}
\label{eq2}
\end{equation}
where $N_{\mathrm{exp}}(X_{\mathrm{max}})$ is the experimental number of showers in the $\Delta X_{\mathrm{max}}$ interval,
$N_{\mathrm{theor}}(X_{\mathrm{max}}, A_{i})$ is the analogous number of showers calculated under the assumption
that the mass number of the nucleus is equal to $A_{i}$, and $P(A_{i})$ is the probability
that a shower with the energy $E_{0}$ is formed by a primary particle $A_{i}$.
Then:
\begin{equation}
N_{\mathrm{theor}}(X_{\mathrm{max}}) = \sum_{i = 1}^{n} P(A_{i}) \cdot N_{\mathrm{theor}}(X_{\mathrm{max}}, A_{i})
\label{eq3}
\end{equation}
At the optimal value of $\chi^{2}$, the shape of the experimental
$X_{\mathrm{max}}$ distribution does not, with a definite probability,
contradict the following five-component nuclear composition:
\begin{description}
\item[$\bar E_{0} = 5 \cdot 10^{17}$~eV:] p: $(39 \pm 11)$\%, $\alpha$: $(31 \pm 13)$\%, M:
$(18 \pm 10)$\%, H: $(7 \pm 6)$\%, Fe: $(5 \pm 4)$\%;
\item[$\bar E_{0} = 1 \cdot 10^{18}$~eV:] p: $(41 \pm 8)$\%, $\alpha$: $(32 \pm 11)$\%, M:
$(16 \pm 9)$\%, H: $(6 \pm 4)$\%, Fe: $(5 \pm 3)$\%;
\item[$\bar E_{0} = 5 \cdot 10^{18}$~eV:] p: $(60 \pm 14)$\%, $\alpha$: $(21 \pm 13)$\%, M:
$(10 \pm 8)$\%, H: $(5 \pm 4)$\%, Fe: $(3 \pm 3)$\%.
\end{description}
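The composition fit of equations~(\ref{eq2}) and (\ref{eq3}) can be sketched numerically. The following is a minimal illustration with invented bin counts and a crude grid search over the fractions $P(A_i)$ for three components; it is not the analysis actually performed:

```python
import numpy as np
from itertools import product

# Invented binned X_max counts: experimental data and model predictions
# for three hypothetical primary nuclei (p, alpha, M group).
N_exp = np.array([5., 12., 30., 41., 25., 9.])     # counts per Delta X_max bin
N_A = np.array([
    [1., 5., 20., 45., 35., 16.],                  # p
    [2., 9., 30., 42., 28., 11.],                  # alpha
    [4., 15., 38., 36., 20., 7.],                  # M group
])

def chi2(P):
    """Eq. (2) with N_theor(X_max) = sum_i P(A_i) * N_theor(X_max, A_i)."""
    N_theor = P @ N_A
    return np.sum((N_exp - N_theor) ** 2 / N_theor)

# Crude grid search over the first two fractions (step 0.05); the third
# fraction is fixed by normalisation.
best = min(
    ((p, a) for p, a in product(np.arange(0, 1.05, 0.05), repeat=2) if p + a <= 1),
    key=lambda pa: chi2(np.array([pa[0], pa[1], 1 - pa[0] - pa[1]])),
)
P_best = np.array([best[0], best[1], 1 - best[0] - best[1]])
print(P_best, chi2(P_best))
```

A real fit would use all five components, a proper minimiser and the measured distributions.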
Thus, in the framework of the QGSJET model, one can suppose that the mass composition of
PCR changes in going from the energy range $(5 - 30) \cdot 10^{17}$~eV to the range $(5
- 30) \cdot 10^{18}$~eV. At $E_{0} \ge 3 \cdot 10^{18}$~eV the primary cosmic radiation
consists of $\sim70$\% protons and helium nuclei; the portion of the remaining nuclei in
the region of the second irregularity of the energy spectrum (the ``ankle'')
does not exceed $\sim30$\%. The high content of protons and helium nuclei in PCR in the
region where the ``ankle'' forms is most likely connected with an appreciable
contribution to the overall cosmic-ray flux in the Earth's vicinity from radiation
coming from beyond the limits of our Galaxy.
\section*{Acknowledgements}
This work has been financially supported by RFBR, grant \textnumero02--02--16380, grant
\textnumero03--02--17160 and grant INTAS \textnumero03-51-5112.
\section{Introduction}
\label{sec:introduction}
Cataclysmic variables (CVs) are close binary stars consisting of a red--dwarf
secondary transferring material onto a white dwarf primary via an accretion
disc or magnetic accretion stream. V347~Pup is an example of
a nova--like variable (NL), a class of CV with high mass transfer rates and
no recorded nova or dwarf--nova type outbursts; see \scite{warner95a} for a
comprehensive review of CVs.
A knowledge of the masses of the component stars in CVs is
fundamental to our understanding of the origin, evolution and behaviour of
these systems. Population synthesis models (e.g. \pcite{kolb01}) and the
disrupted magnetic braking model of CV evolution (e.g. \pcite{spruit83};
\pcite{rappaport83}) can be observationally tested only if the number
of reliably known CV masses increases. One of the most reliable ways
to measure the masses of CVs is to use the radial velocity and the rotational
broadening of the secondary star in eclipsing
systems. The radial velocity of the disc emission lines is often an unreliable
indicator of the white dwarf motion because of contamination from, for
example, the bright spot. At present, reliable masses are
known for only $\sim$20 CVs, partly due to the difficulties in measurement (see
\pcite{smith98} for a review).
V347~Pup was identified spectroscopically as a NL by \scite{buckley90}
from the presence of high--excitation emission lines.
Even though V347~Pup emits at X-ray wavelengths
(as the {\em Uhuru} X-ray source 4U 0608--49), the
NL classification was favoured over a magnetic CV class on account of the
negligible polarisation present. The study by \scite{buckley90}
revealed a bright and deeply eclipsing system,
with a spectroscopic and photometric orbital period of 5.57 hrs.
Their measured system inclination and emission--line radial velocity curve,
together with an empirical secondary star mass estimated from the orbital period,
suggested a high primary mass close to the Chandrasekhar limit.
A multiwavelength study by \scite{mauche94} revealed an X-ray
spectral energy distribution similar to many dwarf novae in outburst,
with a likely origin in an extended emission region rather than the boundary layer.
The UV emission lines appear to have a similar origin and, in a later paper
by \scite{shlosman96}, their behaviour in eclipse was successfully modelled
as disc light scattered in a rotating wind.
The presence of an accretion disc in V347~Pup was confirmed by a rotational
disturbance of the optical emission lines through primary eclipse
(\pcite{mauche94}; \pcite{still98}). The latter authors found evidence for
spiral arms and disc overflow accretion, and identified
the low-excitation optical emission profiles as a
composite of emission from the accretion disc and secondary star.
Secondary star absorption lines were found by \scite{diaz99},
who measured the system parameters of V347~Pup using the
radial velocity semi--amplitudes of the primary and secondary stars.
The radial velocity of the optical emission lines in V347~Pup varies
widely in the literature, with published values
of 134 $\pm$ 9 km\,s$^{-1}$ (\pcite{buckley90}), 122 $\pm$ 19 km\,s$^{-1}$
(\pcite{mauche94}), 156 $\pm$ 10 km\,s$^{-1}$, 125 $\pm$ 13 km\,s$^{-1}$
(\pcite{still98}) and 193 $\pm$ 16 km\,s$^{-1}$ (\pcite{diaz99}).
The radial velocities of the UV emission
lines published by \scite{mauche94} ranged between 220 and 370 km\,s$^{-1}$ with
large phase shifts between spectroscopic conjunction and photometric mid--eclipse.
This wide range in values, and the known unreliability of using disc emission
lines in NLs to determine the motion of the white dwarf (e.g. \pcite{dhillon97b}),
makes the determination of system parameters from the secondary star
features alone highly desirable. In this paper, we derive the
system parameters from the radial and rotational velocities of the secondary
star in V347~Pup.
\section{Observations and Reduction}
\label{sec:observations}
During January and December 1998 and January 1999, we obtained optical
spectra of V347~Pup using the Cassegrain spectrograph + SITe1 CCD chip on
the SAAO 1.9-m telescope. Simultaneous photometry was
available for most of the spectra using the SAAO 1.0-m telescope with
the TEK8 CCD chip. See Table~\ref{tab:journal} and its caption for
full details.
On the December 1998 run, we observed 17 spectral type templates ranging from
G7V--M5.5V and telluric stars to remove atmospheric features. We
observed flux standards on both the 1.9-m and 1.0-m telescopes on all nights.
The spectra and images were reduced using standard procedures (e.g.
\pcite{dhillon94}; \pcite{thoroughgood01}). The photometry data were
corrected for the effects of atmospheric extinction by subtracting the
magnitude of a nearby comparison star.
The absolute photometry is accurate to approximately $\pm$0.5 mJy; the
relative photometry to $\pm$0.01 mag. Comparison arc
spectra were taken every $\sim$40 min in order to calibrate the wavelength scale
and instrumental flexure.
The arcs were fitted with fourth--order polynomials with an
rms scatter of better than 0.04\AA.
Where possible, slit losses were then corrected for by multiplying each
V347~Pup spectrum by the ratio of the flux in the spectrum (over the whole
spectral range) to the corresponding photometric flux.
\begin{table*}
{\protect\small
\caption{Journal of observations. During Jan 1998, we used Grating
No.\,4 to give a wavelength range of $\sim$4200--5060\AA\ ($\lambda_{cen}$
= 4610\AA) at 0.99-\AA$\:$ (64 km\,s$^{-1}$) resolution.
Grating No.\,4 was again used on 28 Dec 1998 and 23 Jan 1999
to give a wavelength range of $\sim$4900--5720\AA\ ($\lambda_{cen}$
= 5290\AA) at 0.95-\AA\ (54 km\,s$^{-1}$) resolution.
On 25 and 27 Dec 1998, we used Grating No.\,5 to give a wavelength range of
$\sim$5960--6725\AA\ ($\lambda_{cen}$ = 6330\AA) at 0.88-\AA$\:$
(42 km\,s$^{-1}$) resolution.
Simultaneous photometry for the December 1998 spectra was recorded in the
Johnson--Cousins $V$ and $R$ bands. Photometry was also available during
the January 1998 run in the Str\"{o}mgren $b$ and $y$ filters.
The seeing was around 1.0 arcsec, with photometric conditions, on 25 and 27
Dec 1998 and 23 Jan 1999. On 28 Dec 1998, however, the seeing was poor and
patchy high cloud was present. The seeing varied between 1.0--1.5 arcsec over
the January 1998 run.
The epochs are calculated using the new ephemeris presented in this paper
(equation~\ref{eqn:ephem}).}
\label{tab:journal}
\begin{tabular*}{0.95\textwidth}{lcccr@{.}lr@{.}lcccr@{.}lr@{.}l}
\hline
\vspace{-3mm}\\
\multicolumn{1}{c}{UT Date}
& \multicolumn{1}{c}{1.9-m}
& \multicolumn{1}{c}{No. of}
& \multicolumn{1}{c}{Exposure}
& \multicolumn{2}{c}{Epoch}
& \multicolumn{2}{c}{Epoch}
& \multicolumn{1}{c}{1.0-m}
& \multicolumn{1}{c}{No. of}
& \multicolumn{1}{c}{Exposure}
& \multicolumn{2}{c}{Epoch}
& \multicolumn{2}{c}{Epoch}\\
\multicolumn{1}{c}{}
& \multicolumn{1}{c}{$\lambda_{cen}$ (\AA)}
& \multicolumn{1}{c}{spectra}
& \multicolumn{1}{c}{time (s)}
& \multicolumn{2}{c}{start}
& \multicolumn{2}{c}{end}
& \multicolumn{1}{c}{filter}
& \multicolumn{1}{c}{images}
& \multicolumn{1}{c}{time (s)}
& \multicolumn{2}{c}{start}
& \multicolumn{2}{c}{end}\\
\vspace{-3mm}\\
\hline
\vspace{-3mm}\\
1998 Jan 07 & 4610 & 113 & 200 & 17178&62 & 17179&91 & $b$ & 843 & 30
& 17178&88 & 17179&93 \\
1998 Jan 08 & 4610 & 32 & 200 & 17182&88 & 17183&23 & $b$ & 105 & 30
& 17183&83 & 17184&04 \\
1998 Jan 10 & 4610 & 117 & 200 & 17191&94 & 17192&94 & $y$ & 1379 & 30
& 17191&61 & 17192&96 \\
1998 Jan 11 & 4610 & 129 & 200 & 17195&74 & 17197&25 & $b$ & 1217 & 30
& 17195&81 & 17197&08 \\
1998 Jan 12 & 4610 & 116 & 200 & 17200&09 & 17201&44 & $b$ & 373 & 30
& 17200&14 & 17200&52 \\
1998 Dec 25 & 6330 & 68 & 300 & 18696&47 & 18697&55 & $R$ & 962 & 30
& 18696&32 & 18697&65 \\
1998 Dec 27 & 6330 & 61 & 300 & 18705&18 & 18706&17 & $R$ & 748 & 30
& 18704&95 & 18706&25 \\
1998 Dec 28 & 5290 & 23 & 300 & 18709&63 & 18710&16 & $V$ & 442 & 30
& 18709&50 & 18710&29 \\
1999 Jan 23 & 5290 & 64 & 300 & 18821&30 & 18822&49 & \multicolumn{7}{c}
{no photometry available} \\
\vspace{-3mm}\\
\hline
\end{tabular*}
}
\end{table*}
\section{Results}
\subsection{Ephemeris}
\label{sec:ephem}
The times of mid--eclipse for V347~Pup were determined by
fitting a parabola to the eclipse minima in the photometry data.
A least--squares fit to the 21 eclipse timings listed in
Table~\ref{tab:o-cs} yields the ephemeris:
\begin{equation}
\label{eqn:ephem}
\begin{array}{lr@{.}llr@{.}ll}
T_{\rm mid-eclipse} = & \!\!\!\! {\rm HJD}\,\,2\,446\,836&96176
& \!\!\!\! + \!\!\!\! & 0&231936060 & \!\!\!\!\! E \\
& \!\! \pm \,\, 0&00009 & \!\!\!\! \pm \!\!\!\! &
0&000000006. & \\
\end{array}
\end{equation}
Our new ephemeris has the same zero point and orbital period as that given by
\scite{baptista91}, but with reduced errors on both.
We find no evidence for any systematic variation in the O--C values
listed in Table~\ref{tab:o-cs}.
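As a hedged sketch of the timing step, a parabola can be fitted to points around an eclipse minimum and its vertex taken as the time of mid--eclipse; all numbers below are invented:

```python
import numpy as np

# Invented eclipse bottom: flux around a minimum at +0.003 d, plus noise.
t = np.linspace(-0.02, 0.02, 21)                 # days from approximate minimum
f = 5.0 + 4000.0 * (t - 0.003) ** 2
f += np.random.default_rng(0).normal(0, 0.05, t.size)

# Parabola fit; the vertex -b/(2a) estimates the time of mid-eclipse.
a, b, c = np.polyfit(t, f, 2)
t_mid = -b / (2 * a)
print(round(t_mid, 4))
```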
\begin{table}
{\protect\small
\caption{Times of mid--eclipse for V347~Pup according to
Buckley et al. (1990; B90), Baptista \& Cieslinski (1991; BC91) and this
paper.}
\label{tab:o-cs}
\begin{tabular*}{87mm}{lr@{.}lr@{x}lr@{.}lc}
\hline
\vspace{-3mm}\\
\multicolumn{1}{c}{Cycle} &
\multicolumn{2}{c}{HJD} & \multicolumn{2}{c}{Uncertainty}
& \multicolumn{2}{c}{O--C} &
\multicolumn{1}{c}{Reference} \\
\multicolumn{1}{c}{(E)} &
\multicolumn{2}{c}{at mid--eclipse} & \multicolumn{2}{c}{on HJD}
& \multicolumn{2}{c}{(secs)} & \multicolumn{1}{c}{}\\
\multicolumn{1}{c}{} &
\multicolumn{2}{c}{(2,400,000+)} & \multicolumn{2}{c}{}
& \multicolumn{2}{c}{} &
\multicolumn{1}{c}{} \\
\vspace{-3mm}\\
\hline
\vspace{-3mm}\\
--4 & 46836&0379 & 5&$10^{-4}$ & 335&96 & B90\\
0 & 46836&9621 & 5&$10^{-4}$ & 29&74 & B90\\
39 & 46846&0059 & 5&$10^{-4}$ & --117&69 & B90\\
43 & 46846&9333 & 5&$10^{-4}$ & --147&43 & B90\\
48 & 46848&0930 & 5&$10^{-4}$ & --145&73 & B90\\
56 & 46849&9500 & 5&$10^{-4}$ & --15&13 & B90\\
65 & 46852&0373 & 5&$10^{-4}$ & --25&89 & B90\\
69 & 46852&9651 & 5&$10^{-4}$ & --21&08 & B90\\
78 & 46855&0533 & 5&$10^{-4}$ & 45&92 & B90\\
6177 & 48269&63136 & 1.5&$10^{-4}$ & 48&28 & BC91\\
7583 & 48595&73325 & 1.1&$10^{-4}$ & 30&04 & BC91\\
7587 & 48596&66022 & 9&$10^{-5}$ & --36&85 & BC91\\
17179 & 50821&39199 & 5&$10^{-4}$ & 56&29 & This paper\\
17184 & 50822&55127 & 5&$10^{-4}$ & 21&70 & This paper\\
17192 & 50824&40676 & 5&$10^{-4}$ & 21&83 & This paper\\
17196 & 50825&33428 & 5&$10^{-4}$ & 2&46 & This paper\\
17197 & 50825&56611 & 5&$10^{-4}$ & --6&70 & This paper\\
18697 & 51173&47035 & 1&$10^{-4}$ & 6&20 & This paper\\
18705 & 51175&32562 & 1&$10^{-4}$ & --12&68 & This paper\\
18706 & 51175&55802 & 1&$10^{-4}$ & 26&54 & This paper\\
18710 & 51176&48519 & 1&$10^{-4}$ & --22&21 & This paper\\
\vspace{-3mm}\\
\hline
\end{tabular*}
}
\end{table}
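The linear least--squares ephemeris fit can be illustrated with a subset of the timings in Table~\ref{tab:o-cs}; this is a simplified, unweighted sketch rather than the fit actually performed:

```python
import numpy as np

# Subset of mid-eclipse timings (HJD - 2400000) versus cycle number E,
# taken from Table 2; the full fit used all 21 timings with weights.
E = np.array([-4, 0, 39, 43, 6177, 7583, 7587, 17179, 18697, 18710])
T = np.array([46836.0379, 46836.9621, 46846.0059, 46846.9333,
              48269.63136, 48595.73325, 48596.66022,
              50821.39199, 51173.47035, 51176.48519])

# Straight-line fit T_mid = T0 + P*E: slope = orbital period, intercept = zero point.
P, T0 = np.polyfit(E, T, 1)
O_minus_C = (T - (T0 + P * E)) * 86400.0         # residuals in seconds
print(f"T0 = HJD 24{T0:.5f}, P = {P:.9f} d")
print(np.round(O_minus_C, 1))
```

Even this crude fit recovers the period to well within the quoted uncertainty.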
\subsection{Average spectrum}
\label{sec:average}
The average spectra of V347~Pup, uncorrected for orbital motion, are shown in
Fig.~\ref{fig:av_spec}. In Table~\ref{tab:linewidths}, we list fluxes,
equivalent widths and velocity widths of the most prominent
lines measured from the average spectra.
The Balmer emission lines are broad, symmetric and single--peaked,
instead of the double--peaked profile one would expect from a high inclination
accretion disc (e.g. \pcite{horne86}). This behaviour is characteristic
of the SW~Sex stars (e.g. \pcite{dhillon97b}).
Previous studies of V347~Pup by \scite{buckley90} and \scite{diaz99} agree
with this single--peaked observation; the study by \scite{still98},
however, shows double--peaked low--excitation lines (although this could be
due to the presence of absorption cores).
The HeI $\lambda$6678\AA\ line appears to be composed of a narrow
single--peaked component superimposed upon a broad double--peaked component.
The other HeI emission lines in the wavelength region
centred on $\lambda$4610\AA\ are clearly double--peaked, with the
possible exception of HeI $\lambda$4471\AA. High--excitation
emission is present through HeI $\lambda$4686\AA\ and the CIII/NIII
$\lambda\lambda$4640--4650\AA\ Bowen fluorescence complex.
The secondary
star is clearly visible in the average spectra as absorption
lines of the neutral metals CaI, FeI and MgI, as seen in \scite{diaz99}.
Secondary star features in the SW~Sex stars are not unusual in the longer
period systems, such as BT~Mon (\pcite{sad98b}), AC~Cnc and V363~Aur
(\pcite{thoroughgood04}).
\begin{figure*}
\begin{center}
\includegraphics[width=8cm,angle=-90]{average_all.ps}
\end{center}
\caption{\protect\small
The average spectra for the three wavelength regions; the spectrum centred on
$\lambda$4610\AA\ is an average of all spectra recorded on the Jan 1998 run, and has
not been corrected for slit--losses. The spectrum centred on $\lambda$5290\AA\ is an
average of all data recorded on 28 Dec 1998 and 23 Jan 1999, placed on an absolute
flux scale (as determined from the 28 Dec 1998 photometry and flux standards).
The spectrum centred on $\lambda$6330\AA\ is composed of all data from 25 and
27 Dec 1998 and has been placed on an absolute flux scale. All average spectra
are uncorrected for orbital motion, resulting in smeared spectral features.}
\label{fig:av_spec}
\end{figure*}
\begin{table*}
{\protect\small
\caption{Fluxes and widths of prominent lines in V347~Pup, measured
from the 2 nights' data centred on $\lambda$6330\AA\ and the night of
11 Jan 1998 centred on $\lambda$4610\AA. The full--width half--maximum (FWHM)
velocities were determined from Gaussian fits, whereas the full--width
zero--intensity (FWZI) velocities and their errors have been estimated by eye.
HeII$\:\lambda$4686\AA, CIII/NIII$\:\lambda\lambda$4640--4650\AA$\:$
and HeI$\:\lambda$4713\AA$\:$ are blended, so separate values
of the flux and EW are given (determined from a triple--Gaussian fit) as
well as the combined flux of the three.}
\label{tab:linewidths}
\begin{center}
\begin{tabular*}{0.772\textwidth}
{lcr@{$\:\pm\:$}lr@{$\:\pm\:$}lr@{$\:\pm\:$}lr@{$\:\pm\:$}l}
\hline
\vspace{-3mm}\\
\multicolumn{1}{l}{Line} &
\multicolumn{1}{l}{Date} &
\multicolumn{2}{c}{Flux} &
\multicolumn{2}{c}{EW} &
\multicolumn{2}{c}{FWHM} &
\multicolumn{2}{c}{FWZI}\\
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{} &
\multicolumn{2}{c}{($\times$ 10$^{-14}$}
& \multicolumn{2}{c}{(\AA)} & \multicolumn{2}{c}{(km\,s$^{-1}$)}
& \multicolumn{2}{c}{(km\,s$^{-1}$)}\\
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{} &
\multicolumn{2}{c}{(ergs\,cm$^{-2}$\,s$^{-1}$)}
& \multicolumn{2}{c}{} & \multicolumn{2}{c}{}
& \multicolumn{2}{c}{}\\
\vspace{-3mm}\\
\hline
\vspace{-3mm}\\
H$\alpha$ & 25 Dec 1998 & 16.80&0.04 & 35.1&0.1 & 1100&100 & 3500&300 \\
H$\alpha$ & 27 Dec 1998 & 27.98&0.05 & 36.4&0.1 & 1100&100 & 3600&300 \\
H$\beta$ & 11 Jan 1998 & 42.2&0.1 & 24.6&0.2 & 1000&100 & 2800&300 \\
H$\gamma$ & 11 Jan 1998 & 34.6&0.2 & 16.9&0.3 & 1100&100 & 2600&800 \\
HeI$\:\lambda$4471\AA & 11 Jan 1998 & 8.0&0.1 & 3.8&0.2 & 1150&100 & 1850&200 \\
HeI$\:\lambda$4921\AA & 11 Jan 1998 & 4.23&0.08 & 2.6&0.1 & 1300&100 & 2000&200 \\
HeI$\:\lambda$5015\AA & 11 Jan 1998 & 3.3&0.1 & 2.2&0.2 & 1250&100 & 2000&200 \\
HeI$\:\lambda$6678\AA & 25 Dec 1998 & 1.49&0.02 & 2.92&0.08 & 1250&100 & 1900&200 \\
HeI$\:\lambda$6678\AA & 27 Dec 1998 & 2.40&0.04 & 3.00&0.08 & 1300&100 & 1900&200 \\
CII$\:\lambda$4267\AA & 11 Jan 1998 & 3.8&0.1 & 1.7&0.4 & 900&200 & 1800&600 \\
HeII$\:\lambda$4686\AA & 11 Jan 1998 & 26.9&0.3 & 13.1&0.3 & 1450&150 &
\multicolumn{2}{c}{} \\
CIII/NIII$\:\lambda\lambda$4640--4650\AA & 11 Jan 1998 & 11.7&0.1
& 6.4&0.2 & 1700&150 & \multicolumn{2}{c}{} \\
HeI$\:\lambda$4713\AA & 11 Jan 1998 & 4.8&0.3 & 2.3&0.3 & 1500&300 &
\multicolumn{2}{c}{} \\
HeII + CIII/NIII + & 11 Jan 1998 & 45.5&0.2 & 22.4&0.2
& \multicolumn{2}{c}{} & \multicolumn{2}{c}{} \\
HeI$\:\lambda$4713\AA & \multicolumn{9}{c}{} \\
\hline\\
\end{tabular*}
\end{center}
}
\end{table*}
\subsection{Light curves}
\label{sec:light}
Fig.~\ref{fig:lc} shows the broad--band and emission--line light curves
of V347~Pup. The emission--line light curves were produced by subtracting
a polynomial fit to the continuum and summing the residual flux. All
light curves are plotted as a function of phase following
the ephemeris derived in Section~\ref{sec:ephem}.
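The construction of the emission--line light curves (continuum subtraction followed by summing the residual flux) can be sketched as follows; the spectrum, line window and continuum shape are invented:

```python
import numpy as np

# Invented spectrum: a sloping continuum plus a Gaussian emission line.
lam = np.linspace(6400.0, 6700.0, 300)
spec = (1.0 + 0.001 * (lam - 6400.0)
        + 3.0 * np.exp(-0.5 * ((lam - 6563.0) / 10.0) ** 2))

line = np.abs(lam - 6563.0) < 40.0               # window containing the line
x = lam - 6550.0                                 # centred axis for a stable fit
coeffs = np.polyfit(x[~line], spec[~line], 2)    # continuum fit, line excluded
resid = spec - np.polyval(coeffs, x)             # continuum-subtracted spectrum

dlam = lam[1] - lam[0]
line_flux = resid[line].sum() * dlam             # summed residual line flux
print(round(line_flux, 1))
```

Repeating this per spectrum and plotting `line_flux` against orbital phase gives a light curve like those in Fig.~\ref{fig:lc}.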
The $b$, $y$, $V$ and $R$--band light curves show deep, asymmetrical
primary eclipses with the egress lasting longer than ingress.
Flickering is present in all light curves, as well as an
increase in brightness approaching eclipse in the $b$, $y$ and $V$--bands.
The $b$ and $y$--band data recorded in 1998 Jan show no significant brightness
variations during the run, with out-of-eclipse magnitudes of 13.3 $\pm$ 0.1
in both filters. The eclipse depths are 3.2 and 2.6 mag, respectively.
We measure $R$--band out-of-eclipse magnitudes of 14.00 $\pm$ 0.05 mag on
Dec 25, increasing in brightness to 13.45 $\pm$ 0.10 mag on Dec 27.
The eclipse depth remains roughly the same at 2.1 mag and 2.0 mag,
respectively. In the $V$--band, the out-of-eclipse magnitude is
14.10 $\pm$ 0.10 mag, with an eclipse depth of 2.6 mag.
Photometric out-of-eclipse magnitudes
in the literature range between 13.05--13.28 in $R$ and 13.2--13.58 in $V$
(\pcite{buckley90}, \pcite{mauche94}), suggesting that our observations in
Dec 1998 find V347~Pup around 0.5--1 mag fainter. Long--term
variations in the magnitudes of NLs are not uncommon (e.g.
\pcite{honeycutt01}) and have been observed in other SW~Sex stars (e.g. BH~Lyn,
\pcite{dhillon92}; DW~UMa, \pcite{dhillon94}; PX~And, \pcite{Still95b}).
Low states are often accompanied by the weakening or disappearance
of the high--excitation HeII and CIII/NIII lines, which were unfortunately
not observed in Dec 1998. There is, however, a change in the HeI
$\lambda$6678\AA\ Doppler maps between the two nights' observations,
which is considered in Section~\ref{sec:trailed_spectrum}.
Further evidence that V347~Pup exhibits changes of state is seen in the EWs
of the emission lines between the observed epochs. For example, the EW of H$\beta$
varies between 17.0 $\pm$ 0.6\AA\ (July 1986, \pcite{buckley90}), 62.6 $\pm$ 1.9\AA\
(April 1991, \pcite{mauche94}), 9.8 $\pm$ 0.1\AA\ (Jan 1995, \pcite{still98}) and
24.6 $\pm$ 0.2\AA\ (Jan 1998, this paper), although the high--excitation
CIII/NIII complex has a constant EW between epochs.
We measured the phase half--width of eclipse at the out-of-eclipse
level ($\Delta\phi$)
by timing the first and last contacts of the eclipse and dividing by two.
Our average value of $\Delta\phi$ = 0.110 $\pm$ 0.005 is consistent with
the values of 0.120 $\pm$ 0.011 quoted by \scite{harrop96} and 0.105 $\pm$ 0.005
measured by \scite{buckley90}. We then computed the radius of the accretion
disc in V347~Pup using the geometric method outlined in \scite{dhillon91}.
Combining $\Delta\phi$ with the system mass ratio and inclination derived in
Section~\ref{sec:params} gives an accretion disc radius ($R_D$) of
0.72 $\pm$ 0.09 $R_1$, where $R_1$ is the volume radius of the primary's Roche lobe.
This value is in agreement with the value of $R_D/R_1 \ge 0.82$ quoted by
\scite{harrop96} at the 2$\sigma$ level.
The H$\alpha$ eclipses are similar in shape to the continuum light curves,
but do not appear to be as deeply eclipsed. The H$\beta$ and H$\gamma$
lines exhibit asymmetric eclipses, with ingress longer than egress. This
behaviour is expected from asymmetric disc emission, consistent with the
spiral arms identified in the Doppler maps (Section~\ref{sec:trailed_spectrum}).
The high--excitation HeII + CIII/NIII complex
has a deep and U--shaped eclipse, suggesting an origin close to the white
dwarf. The HeI eclipses are wide with V--shaped minima,
similar to the SW~Sex stars (e.g. \pcite{knigge00}). Note that the HeI
flux is completely eclipsed, indicating an origin in the central
portion of the disc, and not in an extended emission region which is larger
than the secondary star. The HeI $\lambda$4471\AA\ emission line shows a broad
dip in flux around phase 0.4, before climbing to reach a maximum around
phase 0.75, which could be a further signature of the disc asymmetry.
\begin{figure*}
\begin{center}
\begin{tabular}{c}
\multicolumn{1}{l}{(a)}\\
\includegraphics[width=17cm,angle=0]{v347pup_lc_a.ps}\\
\multicolumn{1}{l}{(b)}\\
\includegraphics[width=17cm,angle=0]{v347pup_lc_b.ps}\\
\end{tabular}
\end{center}
\caption{\protect\small
Broad--band and emission--line light curves of V347~Pup recorded in
Dec 1998 and Jan 1999 (a), and Jan 1998 (b); see the panel labels for details.
Note the increase in continuum and emission--line flux
between Dec 25 and Dec 27 1998.}
\label{fig:lc}
\end{figure*}
\subsection{Trailed spectrum \& Doppler tomography}
\label{sec:trailed_spectrum}
We subtracted polynomial fits to the continuum and
then rebinned the spectra onto a constant velocity--interval scale
centred on the rest wavelength of the principal emission lines.
For the data obtained in Jan 1998, we phase--binned all the spectra in
order to boost the signal-to-noise (S/N). Individual spectra were weighted
according to their S/N in order to optimally combine the spectra.
The trailed spectra of H$\alpha$, H$\beta$, HeII $\lambda$4686\AA\ and
HeI $\lambda\lambda$4471\AA, 5015\AA, 6678\AA\ are shown in
Fig.~\ref{fig:trailed}.
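The velocity rebinning step can be sketched as follows; the wavelength grid, line profile and bin size are invented for illustration:

```python
import numpy as np

# Rebin a continuum-subtracted spectrum onto a constant velocity-interval
# scale about a line's rest wavelength (here H-alpha).
c_kms = 299792.458
lam0 = 6562.76                                   # H-alpha rest wavelength (A)

lam = np.linspace(6500.0, 6620.0, 400)           # invented wavelength axis
flux = np.exp(-0.5 * ((lam - lam0) / 8.0) ** 2)  # invented line profile

v = c_kms * (lam - lam0) / lam0                  # velocity of each pixel
v_grid = np.arange(-2000.0, 2001.0, 50.0)        # uniform 50 km/s bins
flux_v = np.interp(v_grid, v, flux)              # linear interpolation rebin

print(v_grid[np.argmax(flux_v)])
```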
Doppler maps were calculated for the principal emission lines
using the modulation Doppler tomography code of \scite{steeghs03}.
This method is an extension to the conventional Doppler tomography technique
(e.g. \pcite{marsh00}), and maps both the constant and variable part of
the line emission using a maximum--entropy regularised fitting procedure
(\pcite{skilling84}). We found that the modulated contribution to the line
emission was weak ($<1$ per cent), and thus our S/N was not sufficient
to detect significant modulation in the accretion disc emission. We
therefore plot in Fig.~\ref{fig:doppler} the corresponding average
Doppler maps only. The reconstructed line profiles are plotted next to the
observed ones in Fig.~\ref{fig:trailed} for comparison. Good fits to the data
were achieved in all cases (reduced $\chi^2 = 1 - 1.4$).
\begin{figure*}
\begin{center}
\includegraphics[width=15cm,angle=0]{newdata.ps}
\end{center}
\caption{
Trailed spectra and data computed from the Doppler maps
(Fig.~\ref{fig:doppler}). The blue data
recorded in Jan 1998 have been phase binned into 200s bins, the
red data recorded in Dec 1998 into 300s bins.
H$\gamma$ has not been shown, as it is very similar in nature to
H$\beta$.}
\label{fig:trailed}
\end{figure*}
The Balmer--line trailed spectra are dominated by a low--velocity component with a
semi--amplitude of $\sim$150 km\,s$^{-1}$, moving from blue to red across
primary eclipse. This is consistent with emission from the irradiated inner
face of the secondary star, which is clearly seen in the corresponding Doppler maps.
In the H$\beta$ map, a second low--velocity emission source is present,
seemingly coincident with the gas stream at a distance of 0.9$L_1$, where
$L_1$ is the distance from the white dwarf to the inner--Lagrangian point.
There is also a weak two--armed disc asymmetry visible in the H$\beta$ emission,
which is much more prominent in the double--peaked HeI emission lines.
Doppler maps of V347~Pup have been produced by
\scite{still98} for data sets recorded in 1987, 1988 and 1995. The two
components described above from the secondary star and the disc are
clearly visible in their maps. The summed H$\beta$ and H$\gamma$ maps
of \scite{still98} show a stronger disc emission and spiral structure than
our Balmer--line maps.
The disc asymmetry is significant and is reminiscent of the two--armed
spiral structures that have been observed in the discs of dwarf novae
during outburst (e.g. \pcite{steeghs01}). We return to these in
Section~\ref{sec:shocks}.
The high--excitation HeII $\lambda$4686\AA\ line is dominated
by emission from the gas stream and bright spot overlayed on a weak accretion
disc with radius $R_D \sim 0.3 - 0.4 L_1$. Note that the HeII $\lambda$4686\AA\
Doppler map shows emission at higher velocities than the low--excitation
lines, demonstrating that the material originates from closer to the white dwarf.
The blue and red HeI emission lines were recorded almost a year apart and
exhibit clear differences in structure. The secondary star emission is
clearly evident in HeI $\lambda$6678\AA\ (Dec 1998), although no
strong HeI $\lambda$4471\AA\ or $\lambda$5015\AA\ emission can be
seen in the Jan 1998 data.
There is also a difference in the HeI $\lambda$6678\AA\ Doppler maps
between the 25th and 27th Dec 1998, which is probably related to the change
in brightness of the system; the 25th Dec 1998 Doppler map has more
enhanced spiral features and weaker secondary star emission than the
27th Dec (note that the average map of these two nights is shown in
Fig.~\ref{fig:doppler}).
During all these epochs, however, the spiral structures were observed,
demonstrating that they are a persistent feature.
\begin{figure*}
\begin{center}
\includegraphics[width=12cm,angle=-90]{finalmaps.ps}
\end{center}
\caption{
Doppler maps of the principal emission lines (H$\gamma$ is not shown,
as it is very similar in nature to H$\beta$). The cross marked on each
Doppler map represents the centre of mass of the system and the open circle
represents the white dwarf. These symbols, the Roche lobe
of the secondary star and the predicted trajectory of the gas stream, have
been plotted using the $K_R$--corrected system parameters summarised in
Table~\ref{tab:params}. The series of points
along the gas stream mark the distance from the white dwarf at intervals
of 0.1$L_1$, ranging from 1.0$L_1$ at the red star to 0.2$L_1$.
Doppler tomography cannot properly account for variable line flux, so
spectra around primary eclipse were omitted from the fits.}
\label{fig:doppler}
\end{figure*}
\subsection{Radial velocity of the white dwarf}
\label{sec:whitedwarf}
We measured the radial velocities of the emission lines in V347~Pup by applying
the double--Gaussian method of
\scite{schneider80}, since this technique is sensitive mainly to the
line wings and should therefore reflect the motion of the white dwarf with
the highest reliability. We tried Gaussians of widths 200, 300 and
400 km\,s$^{-1}$ and we varied their separation from 200 to 3200
km\,s$^{-1}$. We then fitted
\begin{equation}
V=\gamma-K\sin[2\pi(\phi-\phi_0)]
\end{equation}
to each set of measurements, where $V$ is the radial velocity, $K$ the
semi--amplitude, $\phi$ the orbital phase, and $\phi_0$ is the phase at which
the radial velocity curve crosses from red to blue. Examples of
the radial velocity curves measured for the H$\alpha$, H$\beta$,
HeII $\lambda$4686\AA\ and HeI $\lambda$4471\AA\ emission lines
are shown in Fig.~\ref{fig:rvs}. There is clear evidence of rotational
disturbance in the emission lines, where the
radial velocities measured just prior to eclipse are skewed to the red, and
those measured after eclipse are skewed to the blue. This confirms the
detection of a similar feature in the trailed
spectra, and indicates that at least some of the emission must originate in
the disc. There is also evidence of a phase shift in H$\alpha$
and HeII $\lambda$4686\AA, where the
spectroscopic conjunction of each line occurs after photometric mid--eclipse.
This phase shift implies an emission--line
source trailing the accretion disc, such as a bright spot,
and is a common feature of SW~Sex stars (e.g. DW~UMa, \pcite{shafter88};
V1315~Aql, \pcite{dhillon91}; SW~Sex, \pcite{dhillon97b}). There appear
to be no significant phase shifts, however, in the other emission lines.
\scite{buckley90}, \scite{mauche94} and
\scite{diaz99} find no evidence of phase shift in any of their emission lines,
although their errors on $\phi_0$ were much larger.
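Fitting $V=\gamma-K\sin[2\pi(\phi-\phi_0)]$ reduces to linear least squares after expanding the sine; the following sketch uses invented radial-velocity measurements:

```python
import numpy as np

# Invented radial velocities: V = gamma - K sin(2*pi*(phi - phi0)) plus noise.
rng = np.random.default_rng(1)
phi = np.linspace(0.0, 1.0, 40, endpoint=False)
gamma_true, K_true, phi0_true = 30.0, 150.0, 0.02
V = gamma_true - K_true * np.sin(2 * np.pi * (phi - phi0_true))
V += rng.normal(0.0, 5.0, phi.size)

# Expanding: V = gamma + a*sin(2*pi*phi) + b*cos(2*pi*phi), with
# a = -K cos(2*pi*phi0) and b = K sin(2*pi*phi0); solve by least squares.
A = np.column_stack([np.ones_like(phi),
                     np.sin(2 * np.pi * phi),
                     np.cos(2 * np.pi * phi)])
gamma, a, b = np.linalg.lstsq(A, V, rcond=None)[0]
K = np.hypot(a, b)
phi0 = np.arctan2(b, -a) / (2 * np.pi)
print(gamma, K, phi0)
```

In practice, points around primary eclipse are excluded from the fit because of the rotational disturbance.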
\begin{figure}
\begin{center}
\includegraphics[width=8cm,angle=0]{v347pup_radial2.ps}
\end{center}
\caption{\protect\small Radial velocity curves of H$\alpha$, H$\beta$,
HeII $\lambda$4686\AA\ and HeI $\lambda$4471\AA\ using Gaussian
widths of 300 km\,s$^{-1}$ and a separation of 1400 km\,s$^{-1}$.
We omitted the points around primary eclipse during the
fitting procedure (open circles) as these measurements are affected by the
rotational disturbance. The emission lines recorded in Jan 1998 have been
phase--binned into 100 bins for clarity.}
\label{fig:rvs}
\end{figure}
We tried to measure the white dwarf radial--velocity semi--amplitude ($K_W$) using a
diagnostic diagram (\pcite{shafter86}), but with no success.
We therefore attempted to make use of the light--centres method,
as described by \scite{marsh88a}. In the co-rotating co-ordinate system,
the white dwarf has velocity ($0, -K_W$), and symmetric emission, say
from a disc, would be centred at that point. By plotting
$K_x = -K\sin\phi_0$ versus $K_y = -K\cos\phi_0$ for the different
radial velocity fits (Fig.~\ref{fig:centres}), one finds that the points move
closer to the $K_y$ axis with increasing Gaussian separation. A simple
distortion which only affects low velocities, such as a bright spot, would
result in this pattern, equivalent to a decrease in distortion as one measures
emission further into the line wings and therefore more closely representing
the velocity of the primary star. By linearly extrapolating the largest Gaussian
separation on the H$\alpha$ light--centre diagram to the
$K_y$ axis, we measure the radial velocity semi--amplitude of the white
dwarf to be $\sim$180 km\,s$^{-1}$. The large uncertainty in this value
($\sim$ 40 km\,s$^{-1}$), however, and the unsuccessful application of the
technique to the other emission lines, prompted us to proceed with the mass
determination using the secondary star features alone.
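The light--centre construction can be sketched as follows; the $(K, \phi_0)$ pairs are invented, and the phase $\phi_0$ is assumed here to be in cycles:

```python
import numpy as np

# Invented (K, phi0) fits at increasing double-Gaussian separations; as the
# separation grows, the points should approach the K_y axis (K_x = 0).
K    = np.array([210.0, 200.0, 192.0, 186.0, 182.0])   # km/s
phi0 = np.array([0.040, 0.030, 0.022, 0.015, 0.010])   # cycles

Kx = -K * np.sin(2 * np.pi * phi0)
Ky = -K * np.cos(2 * np.pi * phi0)

# Straight-line fit through the largest-separation points, extrapolated
# to K_x = 0; the intercept magnitude estimates K_W.
m, c = np.polyfit(Kx[-3:], Ky[-3:], 1)
K_W = -c
print(round(K_W, 1))
```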
\begin{figure}
\begin{center}
\includegraphics[width=5.4cm,angle=-90]{v347pup_ha_centres_paper.ps}\\
\end{center}
\caption{\protect\small
Light--centres diagram for H$\alpha$. Points are plotted for radial velocity
fits using Gaussians of FWHM = 300 km\,s$^{-1}$, with separations from
900 km\,s$^{-1}$ to 2900 km\,s$^{-1}$ at 100 km\,s$^{-1}$ intervals. The points
move anti--clockwise, towards the $K_x = 0$ axis with increasing Gaussian
separation.}
\label{fig:centres}
\end{figure}
\subsection{Radial velocity of the secondary star}
\label{sec:secondary}
The secondary star in V347~Pup is clearly visible in
Fig.~\ref{fig:av_spec} through absorption lines of
MgI, FeI and CaI.
We compared regions of the spectra rich in absorption lines with
a number of templates with spectral types G7V--M3.5V. A technique
known as skew mapping was used to enhance the secondary features and obtain a
measurement of the radial velocity semi--amplitude of the secondary star ($K_R$).
See \scite{vandeputte03} for a detailed
critique of skew mapping and \scite{thoroughgood04} for a successful
application to AC~Cnc and V363~Aur.
The data centred on $\lambda$5290\AA\ were recorded specifically to
exploit the secondary star
features found between the H$\beta$ and H$\alpha$ lines. Unfortunately,
the presence of weak emission lines (e.g. FeII multiplet 42 at
$\lambda\lambda$4924\AA, 5018\AA\ and 5169\AA, \pcite{mason03})
hampered all efforts to determine a $K_R$ value from these data.
The dominance of the emission lines in the spectra
centred on $\lambda$4610\AA\ also prevented a $K_R$ determination from
these data. The red spectra of V347~Pup centred on $\lambda$6330\AA,
however, allowed us to
study the secondary star through absorption features blueward of H$\alpha$,
such as the CaI $\lambda$6162\AA$\:$ line. Exactly the same conclusion
was reached by \scite{diaz99}.
The first step was to shift the spectral type template stars to correct for
their radial velocities. We then normalized each spectrum by dividing
by a constant and then subtracting a polynomial fit to the continuum.
This ensures that line strength is preserved along the
spectrum. The V347~Pup spectra were normalized in the same way.
The template spectra were artificially broadened to account for both the
orbital smearing of the V347~Pup spectra due to their exposure times ($t_{exp}$),
using the formula
\begin{equation}
V = {{2\pi K_R t_{exp}} \over {P}}
\label{eqn:smear}
\end{equation}
(e.g. \pcite{watson00}), and the rotational velocity of the
secondary ($v \sin i$). Estimated values of $K_R$ and $v \sin i$ were used
in the first instance, before iterating to find the best--fitting values given
in Section~\ref{sec:params}.
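The smearing formula above is a one-line calculation. The sketch below uses $K_R$ and $P$ from this paper; the exposure time is an assumed illustrative value, since the individual $t_{exp}$ are not quoted here.

```python
import math

# Orbital smearing of the secondary-star lines during one exposure,
# V = 2*pi*K_R*t_exp / P (equation in the text).
K_R = 216.0                    # km/s, radial-velocity semi-amplitude
P = 0.231936060 * 86400.0      # orbital period in seconds
t_exp = 300.0                  # s, assumed exposure time (illustrative)

V = 2.0 * math.pi * K_R * t_exp / P
print(f"smearing velocity ~ {V:.1f} km/s")
```

For a 300-s exposure the smearing is of order 20 km\,s$^{-1}$, comparable to the instrumental resolution, which is why the templates must be broadened before cross-correlation.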
Regions of the spectrum devoid of emission lines were then cross--correlated
with each of the templates, yielding a time series of cross--correlation
functions (CCFs) for each template star. The regions used for the
cross--correlation can be seen in Fig.~\ref{fig:residual}.
To produce the skew maps, these CCFs were then back--projected in the
same way as time--resolved spectra in standard
Doppler tomography (\pcite{marsh88b}). If there is a detectable
secondary star, we expect a peak at (0,$K_R$) in the skew map.
This can be repeated for each of the templates, and the final skew map is the
one that gives the strongest peak.
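The back-projection can be sketched as follows: for each trial $(K_x, K_y)$, the CCFs are sampled at the velocities predicted for that orbit and summed over phase. The radial-velocity sign convention assumed here, $V(\phi) = \gamma + K_x\cos 2\pi\phi + K_y\sin 2\pi\phi$, varies between authors, and the CCFs are synthetic Gaussians rather than real data.

```python
import numpy as np

# Synthetic CCFs for a secondary star moving with K_R = 216 km/s.
phases = np.arange(20) / 20.0
v_grid = np.arange(-500.0, 501.0, 10.0)          # CCF velocity lags (km/s)
K_true, gamma = 216.0, 15.0
centres = gamma + K_true * np.sin(2.0 * np.pi * phases)
ccfs = np.exp(-0.5 * ((v_grid[None, :] - centres[:, None]) / 50.0) ** 2)

# Back-project on to a (K_x, K_y) grid.
kx_grid = np.arange(-100.0, 101.0, 5.0)
ky_grid = np.arange(0.0, 301.0, 5.0)
skew = np.zeros((ky_grid.size, kx_grid.size))
for i, ky in enumerate(ky_grid):
    for j, kx in enumerate(kx_grid):
        pred = gamma + kx * np.cos(2.0 * np.pi * phases) \
                     + ky * np.sin(2.0 * np.pi * phases)
        # Sum the CCF value at the predicted velocity over all phases.
        skew[i, j] = sum(np.interp(p, v_grid, c) for p, c in zip(pred, ccfs))

iy, ix = np.unravel_index(np.argmax(skew), skew.shape)
print(f"skew-map peak at (K_x, K_y) = ({kx_grid[ix]:.0f}, {ky_grid[iy]:.0f}) km/s")
```

As expected for a clean detection, the peak lands at $(0, K_R)$ to within the grid spacing.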
The skew maps show well--defined peaks at
$K_y \approx$ 216 km\,s$^{-1}$ -- the skew map of the M0.5V template is
shown in Fig.~\ref{fig:skewmaps} together with the trailed CCFs.
A systemic velocity of $\gamma$ = 15 km\,s$^{-1}$ was applied in order to
shift the skew map peaks onto the $K_x$ = 0 axis (see \pcite{sad98b} for details).
We therefore adopt $\gamma$ = 15 $\pm$ 5 km\,s$^{-1}$ as the
systemic velocity of V347~Pup, in excellent agreement with the values of
16 $\pm$ 10 km\,s$^{-1}$ and 15 $\pm$ 12 km\,s$^{-1}$ measured by
\scite{still98} from the Balmer and HeII $\lambda$4686\AA$\:$ emission lines.
The $\gamma$ velocities from the emission lines shown in Fig.~\ref{fig:rvs}
ranged between 13 km\,s$^{-1}$ and 44 km\,s$^{-1}$.
Other $\gamma$ values measured from optical emission lines
vary widely in the literature (--3 to 60 km\,s$^{-1}$, \pcite{diaz99}; --9 to
159 km\,s$^{-1}$, \pcite{mauche94}).
Our adopted $K_R$ of 216 $\pm$ 5 km\,s$^{-1}$ was derived from the skew map
peak of the best--fitting template found in Section~\ref{sec:params}. This result
encompasses the
$K_R$ values derived from \emph{all} of the template stars to within
the errors, demonstrating that the result is robust to the choice of
template (see Table~\ref{tab:vsini}).
\begin{figure}
\begin{center}
\includegraphics[width=7.5cm,angle=0]{v347_skew+sec.ps}
\end{center}
\caption{\protect\small
Skew maps (top) and trailed CCFs (bottom) of V347~Pup cross--correlated with
an M0.5V dwarf template.}
\label{fig:skewmaps}
\end{figure}
\subsection{Rotational velocity and spectral type of the secondary star}
\label{sec:rotational}
The spectral--type templates were broadened for smearing due to orbital
motion, as before, and rotationally broadened by a range of velocities
(50--240 km\,s$^{-1}$). We then ran an optimal subtraction routine, which
subtracts a constant times the normalized template spectrum from the
normalized average V347~Pup spectrum, adjusting the constant to
minimize the scatter in the residual. (Normalisation was carried out in the
same way as in Section~\ref{sec:secondary}, except that this time the
continua were set to unity.) The scatter is measured
by carrying out the subtraction and then computing the $\chi^2$ between the
residual spectrum and a smoothed version of itself. By finding the value of
rotational broadening that minimizes the $\chi^2$, we obtain an
estimate of both $v \sin i$ and the spectral type of the secondary star.
Note that the $v \sin i$ values of the template stars are much lower than
the instrumental resolution, so do not affect our measurements of
$v \sin i$ for the secondary star.
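The optimal-subtraction step can be sketched as below. The spectra, line parameters, noise level and smoothing scale are synthetic stand-ins, not the paper's data; only the structure of the routine (scan the scaling constant, compare the residual with a smoothed copy of itself) follows the text.

```python
import numpy as np

def optimal_subtract(cv, template, smooth_sigma=3.0,
                     factors=np.linspace(0.0, 1.0, 201)):
    """Optimal subtraction: find the constant c minimizing the chi^2 between
    (cv - c*template) and a Gaussian-smoothed version of itself."""
    x = np.arange(-int(4 * smooth_sigma), int(4 * smooth_sigma) + 1)
    kern = np.exp(-0.5 * (x / smooth_sigma) ** 2)
    kern /= kern.sum()
    best_c, best_chi2 = None, np.inf
    for c in factors:
        resid = cv - c * template
        chi2 = np.sum((resid - np.convolve(resid, kern, mode="same")) ** 2)
        if chi2 < best_chi2:
            best_c, best_chi2 = c, chi2
    return best_c, best_chi2

# Synthetic check: a "CV" spectrum containing 30 per cent of the template's
# absorption features plus noise (all values here are illustrative).
rng = np.random.default_rng(0)
wave = np.linspace(6000.0, 6500.0, 1000)
template = -0.5 * np.exp(-0.5 * ((wave - 6162.0) / 2.0) ** 2)  # one line
cv = 0.3 * template + rng.normal(0.0, 0.005, wave.size)
c, _ = optimal_subtract(cv, template)
print(f"optimal factor ~ {c:.2f}")
```

Repeating this scan over templates broadened by a range of $v \sin i$ values, and recording the minimum $\chi^2$ for each, yields curves like those in Fig.~\ref{fig:vsini}.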
The value of $v \sin i$ obtained using this method varies depending on the
spectral type template, the wavelength region for optimal subtraction,
the amount of smoothing of the residual spectrum in the calculation of
$\chi^2$ and the value of the limb--darkening coefficient used in the
broadening procedure.
The values of $v \sin i$ for all of the templates, calculated using
a limb--darkening coefficient of 0.5 and a residual smoothed with
a Gaussian of FWHM = 15 km\,s$^{-1}$, are listed in Table~\ref{tab:vsini}.
A plot of $\chi^2$ versus $v \sin i$ for each spectral--type template is
shown in Fig.~\ref{fig:vsini}. The spectral type with the lowest
$\chi^2$ value is M0.5V, which agrees with a visual identification
of the best fitting template. \scite{diaz99}, however,
estimate a secondary star spectral type between K0V and K5V, with the
possibility of a later--type subgiant.
A plot of the V347~Pup average spectrum, a broadened M0.5V template spectrum and
the residual of the
optimal subtraction is shown in Fig.~\ref{fig:residual}. The $\chi^2$
for the M0.5V template has a minimum at 130 km\,s$^{-1}$, so
we adopt $v \sin i$ = 130 $\pm$ 5 km\,s$^{-1}$, with the error accounting for
the measurement accuracy and the other variables noted in the
previous paragraph. The error quoted on our adopted value encompasses the
measured $v \sin i$
for all of the templates used in the analysis (except for K3V with
$v \sin i$ = 136 km\,s$^{-1}$).
\begin{figure}
\begin{center}
\includegraphics[width=6.2cm,angle=-90]{v347_residual.ps}
\end{center}
\caption{\protect\small
Orbitally--corrected average spectrum of V347~Pup (top) with the
broadened M0.5V template (middle) and the residuals after optimal
subtraction (bottom). The template spectrum has been multiplied by
the scaling factor found from the optimal subtraction. All of the spectra
are normalised and offset on the plot by
an arbitrary amount for clarity. The wavelength limits shown are those used
for the cross--correlation and optimal subtraction procedures, except for the
region between the dashed lines, which was excluded owing to its few secondary star features.}
\label{fig:residual}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=7.5cm,angle=0]{v347_vsini2.ps}
\end{tabular}
\end{center}
\caption{Determination of $v \sin i$ for V347~Pup
using different spectral--type templates. Degrees of freedom = 699.}
\label{fig:vsini}
\end{figure}
\begin{table}
{\protect\small
\caption{$v \sin i$ values for V347~Pup cross--correlated with
the rotationally broadened profiles of G7 -- M3.5V templates.
Also shown is the factor used to multiply the template star features during
optimal subtraction, and the position of the strongest peak in the
skew maps derived from each template using $\gamma$--velocities
of 0 km\,s$^{-1}$ and 15 km\,s$^{-1}$.}
\label{tab:vsini}
\begin{tabular}{lcr@{$\:\pm\:$}lr@{,}lr@{,}l}
\hline
\vspace{-3mm}\\
\multicolumn{1}{c}{Templates} &
\multicolumn{1}{c}{$v \sin i$} &
\multicolumn{2}{c}{Optimal} &
\multicolumn{2}{c}{($K_x,K_y$)} &
\multicolumn{2}{c}{($K_x,K_y$)} \\
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{at min $\chi^{2}$} &
\multicolumn{2}{c}{factor} &
\multicolumn{2}{c}{$\gamma=0$} &
\multicolumn{2}{c}{$\gamma=15$} \\
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{(km\,s$^{-1}$)} &
\multicolumn{2}{c}{} &
\multicolumn{2}{c}{(km\,s$^{-1}$)} &
\multicolumn{2}{c}{(km\,s$^{-1}$)} \\
\vspace{-3mm}\\
\hline
\vspace{-3mm}\\
G7V & 134 & 0.32&0.05 & (--26&212) & (10&220) \\
G9V & 133 & 0.28&0.04 & (--13&215) & (13&220) \\
K0V & 133 & 0.23&0.03 & (--2&217) & (14&219) \\
K1V & 134 & 0.24&0.03 & (--15&215) & (8&220) \\
K2V & 133 & 0.20&0.03 & (--22&212) & (6&219) \\
K3V & 136 & 0.19&0.03 & (--28&212) & (0&217) \\
K4V & 135 & 0.14&0.02 & (--17&211) & (3&217) \\
K5V & 134 & 0.13&0.02 & (--17&213) & (1&218) \\
K7V & 133 & 0.12&0.02 & (--24&210) & (--3&216) \\
M0.5V & 130 & 0.13&0.02 & (--18&213) & (0&216) \\
M1.5V & 125 & 0.12&0.02 & (--17&213) & (--2&216) \\
M2.5V & 126 & 0.13&0.02 & (--21&213) & (--7&216) \\
M3.5V & 127 & 0.12&0.02 & (--33&213) & (--23&217) \\
\hline\\
\end{tabular}
}
\end{table}
\subsection{The $K_R$ correction}
\label{sec:kcor}
The irradiation of the secondary stars in CVs by the emission regions around
the white dwarf and the bright spot has been shown to influence the measured
$K_R$ (e.g. \pcite{wade88}, \pcite{watson00}). For example, if absorption
lines are quenched on the irradiated side of the secondary, the
centre of light will be shifted towards the back of the star. The measured
$K_R$ will then be larger than the true (dynamical) value.
\scite{diaz99} found evidence for irradiation of the secondary star
in V347~Pup, leading them to apply a correction to their measured
$K_R$ value. This fact, and the presence of Balmer and HeI emission
from the inner face of the secondary star seen in the Doppler maps and
trailed spectra (Section~\ref{sec:trailed_spectrum}), prompted us to look
for similar irradiation effects
in the absorption lines of our data. We applied the following two observational
tests. First, the rotationally broadened line profile would be distorted if
there was a non--uniform absorption distribution across the surface of the
secondary star (\pcite{davey92}). This would result in a non--sinusoidal
radial velocity curve. Second, one would expect a depletion of secondary star
absorption--line
flux at phase 0.5, where the quenched inner--hemisphere is pointed towards
the observer (e.g. \pcite{friend90a}).
The secondary star radial velocity curves were produced by cross--correlating
the V347~Pup spectra with the best--fitting smeared and broadened template
spectra, as described in Section~\ref{sec:secondary}. The
cross--correlation peaks were plotted against phase to produce the radial
velocity curves shown in the lower panel of Fig.~\ref{fig:irrplot}.
There is evidence for an eccentricity in the radial velocity curve compared
with the sinusoidal fit represented by the thin solid line,
although the data are noisy.
The variation of secondary star absorption--line flux with phase for
V347~Pup is shown in the top panel of Fig.~\ref{fig:irrplot}.
These light curves were produced by
optimally subtracting the smeared and rotationally broadened
best--fitting template from the individual CV spectra (with the secondary
radial velocity shifted out) as described in Section~\ref{sec:rotational}.
This time, however, the
spectra were continuum subtracted rather than normalised to ensure that the
measurements were not affected by a fluctuating disc brightness. The constants
produced by the optimal subtraction are secondary star absorption--line fluxes,
correct relative
to each other, but not in an absolute sense.
The dashed lines super--imposed on the light curves represent the variation of
flux with phase for a Roche lobe with a uniform absorption distribution. The
sinusoidal nature is the result of the changing projected area of the
Roche lobe through the orbit. The V347~Pup light curve is clearly not represented
by a uniform Roche lobe distribution as the secondary star absorption--line
flux vanishes between phases 0.4--0.6.
These two pieces of evidence, as well as the disappearance of the CCFs
between phases 0.4--0.6 seen in Fig.~\ref{fig:skewmaps}, suggest that the
secondary star in V347~Pup is
irradiated, and we must correct the $K_R$ values accordingly.
It is possible to correct $K_R$ for the effects of irradiation by
modelling the secondary star flux distribution. In our simple model, we divided
the secondary Roche lobe into 40 vertical slices of equal width from the
$L_1$ point to the back of the star.
We then produced a series of model light curves (using the system parameters
derived in Section~\ref{sec:params}), varying the numbers of
slices omitted from the inner hemisphere of the secondary
which contribute to the total flux.
The model light curves were then scaled to match the observed data, and the
best--fitting model found by measuring the $\chi^2$ between the two.
In all models, we used a gravity--darkening parameter $\beta = 0.08$ and
limb--darkening coefficient $u = 0.5$ (e.g. \pcite{watson00}).
The negative data points around phase 0.5 were set to zero, as the
secondary star absorption line flux disappears at this point.
Once the best--fitting light curve was found,
we produced fake V347~Pup spectra from the model, which were
cross-correlated with a fake template star to produce a synthetic radial
velocity curve.
In the first instance, the synthetic curve mimicked the non-sinusoidal
nature of the observed data, but with a larger semi--amplitude. This was
expected, as the model input parameters used the uncorrected
$K_R$ derived in Section~\ref{sec:params}.
We then lowered $K_R$ and repeated
the process, until the
semi--amplitude of the model and observed radial velocity curves were in
agreement, each time checking the light curve models for goodness of fit.
The resulting $K_R$ was
then adopted as the real (or dynamical) $K_R$ value.
The best--fitting model light curve was produced by omitting
12 slices when fitting the data (reduced $\chi^2$ between model and data =
1.03). The model light curves omitting
11, 12 and 13 slices are shown by the solid lines in Fig.~\ref{fig:irrplot}.
Our final model, which has an input $K_R$ of 198 km\,s$^{-1}$, produces the
radial velocity curve shown as the thick solid line in the lower
panel of Fig.~\ref{fig:irrplot}.
There is good agreement between this and the observed data.
If gravity--darkening and limb--darkening are
neglected, the best--fitting light curve remains the same, but produces a $K_R$
value which is $\sim$ 6 km\,s$^{-1}$ lower.
In summary, we correct the $K_R$ of V347~Pup from 216 km\,s$^{-1}$
to 198 km\,s$^{-1}$. This correction of 18 km\,s$^{-1}$ is exactly the same
as that calculated by \scite{diaz99} using a much simpler approximation,
which changed their measured value of 205 km\,s$^{-1}$ to 187 km\,s$^{-1}$.
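The geometry behind the slice model can be illustrated with a much cruder sketch: approximate the secondary as a sphere sampled by random surface elements, remove the elements facing the white dwarf beyond a cut (the "omitted slices"), and sum the limb-darkened, foreshortened area of what remains at each phase. The sphere, the cut position and the sampling density are all simplifying assumptions; the paper slices the full Roche lobe.

```python
import numpy as np

rng = np.random.default_rng(2)
v = rng.normal(size=(20000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)   # uniform points on unit sphere

x_cut = 0.4                    # elements with x > x_cut are irradiated (assumed)
kept = v[v[:, 0] <= x_cut]     # +x points towards the white dwarf (L1)

incl, u = np.radians(84.0), 0.5
phases = np.arange(50) / 50.0
flux = []
for phi in phases:
    # Observer direction: at phase 0 the secondary is in front (eclipse),
    # so the observer sees its non-irradiated face.
    o = np.array([-np.sin(incl) * np.cos(2 * np.pi * phi),
                  np.sin(incl) * np.sin(2 * np.pi * phi),
                  np.cos(incl)])
    mu = np.clip(kept @ o, 0.0, None)                 # visibility, foreshortening
    flux.append(np.sum(mu * (1.0 - u * (1.0 - mu))))  # linear limb darkening
flux = np.array(flux) / np.max(flux)
print(f"flux at phase 0.5 / phase 0.0 = {flux[25] / flux[0]:.2f}")
```

Even this crude model reproduces the qualitative behaviour in Fig.~\ref{fig:irrplot}: the absorption flux collapses near phase 0.5, when the quenched inner face points at the observer.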
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=8cm,angle=0]{v347pup_irrplot2_v.ps}\\
\end{tabular}
\end{center}
\caption{
Upper panel: Secondary star absorption line light curve with model fits
(solid lines). Model fits are shown for Roche lobes with 11, 12
and 13 slices removed (see text for details). The lower the
line, the more slices removed. The dashed line represents a model
where 0 slices have been removed. The data have been phase--binned into
50 bins to increase S/N.
Lower panel: Measured secondary star radial velocity curve with a sinusoidal fit
(thin solid line) and the best--fitting model fit (thick solid line).
In both panels, the open circles indicate points that have been omitted from
the fits and the data have been folded to show 2 orbital phases.}
\label{fig:irrplot}
\end{figure}
\subsection{The distance to V347~Pup}
\label{sec:distance}
By finding the apparent magnitude of the secondary star from its
contribution to the total light of the system, and estimating its
absolute magnitude, we can calculate the distance ($d$) using the equation:
\begin{equation}
\label{eqn:distance}
{5 \log(d/10) = m_V - M_V - d\;A_V/1000}
\end{equation}
\noindent{where $A_V$ is the visual interstellar extinction in
magnitudes per kpc.}
The mean $R$--band photometric flux of V347~Pup during the recorded spectra
is 8.93 mJy, which we convert to a mean $R$--band
magnitude of 13.8 $\pm$ 0.3. The uncertainty reflects the
change in brightness of the system between 25 and 27 Dec. During this time,
the secondary star contributes 13 $\pm$ 2 per cent of the total light of the
system, assuming an early M spectral type (see Table~\ref{tab:vsini}).
The apparent magnitude of the secondary is therefore $R$ =
16.0 $\pm$ 0.4, which we convert to a $V$--band magnitude of 17.3 $\pm$ 0.4
using a typical $V-R$ value for an early M star from \scite{gray92}.
There are a number of ways of estimating the absolute magnitude of the
secondary star, assuming it is on the main sequence (e.g.
\pcite{patterson84}; \pcite{warner95b}; \pcite{gray92}). We took each
of these into account and adopt an average value of $M_V$ = +8.8 $\pm$
0.5. \scite{mauche94} estimated the extinction to V347~Pup to be
E($B-V$) = 0.05, which results in $A_V = 0.16$ (\pcite{scheffler82}). The
distance to V347~Pup is calculated from equation~\ref{eqn:distance}
to be 490 $\pm$ 130 pc.
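Because the extinction term depends on $d$, equation~\ref{eqn:distance} is implicit in the distance; a fixed-point iteration converges quickly. The sketch below uses the central values from the text.

```python
# Solve 5*log10(d/10) = m_V - M_V - d*A_V/1000 for d (pc) by fixed-point
# iteration; the extinction term on the right depends on d itself.
m_V, M_V = 17.3, 8.8          # apparent and absolute V magnitudes of the secondary
A_V = 0.16                    # extinction in mag per kpc (from E(B-V) = 0.05)

d = 10.0 ** (1.0 + (m_V - M_V) / 5.0)   # first guess: no extinction
for _ in range(20):
    d = 10.0 ** (1.0 + (m_V - M_V - d * A_V / 1000.0) / 5.0)
print(f"d ~ {d:.0f} pc")
```

The iteration settles near 485 pc, consistent with the quoted 490 $\pm$ 130 pc; the quoted error is dominated by the magnitude uncertainties, not the iteration.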
\scite{buckley90} estimate the distance to V347~Pup to be between 174 and 380
pc, based on their measured system inclination and out-of-eclipse magnitude.
\scite{mauche94} use their interstellar reddening measurement and a
mean interstellar hydrogen number density to estimate a distance of 340--590 pc.
Finally, \scite{diaz99} find a distance of 510 $\pm$ 160 pc from the spectral line
depths of the secondary star. Our value is consistent with all
distance estimates in the literature.
\subsection{System parameters}
\label{sec:params}
Using the $K_R$ and $v \sin i$ values found in Sections~\ref{sec:rotational}
and~\ref{sec:kcor} in conjunction with the period determined in
Section~\ref{sec:ephem} and a measurement of the
eclipse full--width at half depth ($\Delta\phi_{1/2}$), we can calculate
accurate system parameters for V347~Pup.
In order to determine $\Delta\phi_{1/2}$, we estimated the flux out of
eclipse (the principal source of error) and at eclipse minimum,
and then measured the full--width of the eclipse half-way between these
points. The eclipse full--width at half-depth was measured to
be $\Delta\phi_{1/2}$ = 0.115 $\pm$ 0.005, in agreement with the eclipse
half--width at half--depth of 0.052 $\pm$ 0.002 measured by \scite{buckley90}
at the 2$\sigma$ level.
We have opted for a Monte Carlo approach similar to \scite{horne93} to
calculate the system parameters and their errors. For a given set
of $K_R$, $v \sin i$, $\Delta\phi_{1/2}$ and $P$, the other system parameters
are calculated as follows.
$R_2/a$ can be estimated because we know that the secondary star fills its
Roche lobe (as there is an accretion disc present and hence mass transfer).
$R_2$ is the equatorial radius of the secondary star and $a$ is the binary
separation. We used Eggleton's formula (\pcite{eggleton83}), which gives the
volume-equivalent radius of the Roche lobe to better than 1 per cent and
is close to the equatorial radius of the secondary star as seen during
eclipse,
\begin{equation}
{{R_2} \over a} = {{0.49q^{2/3}} \over {{0.6q^{2/3} + \ln{(1+q^{1/3})}}}}.
\label{eqn:eggleton}
\end{equation}
The secondary star rotates synchronously with the orbital motion, so
we can combine $K_R$ and $v \sin i$ to get
\begin{equation}
{{R_2} \over a}{(1 + q)} = {{v \sin i} \over {K_R}}.
\label{eqn:synch}
\end{equation}
By considering the geometry of a point eclipse by a spherical body
(e.g. \pcite{dhillon91}), the radius of the secondary can be shown to be
\begin{equation}
\biggl({{R_2} \over a}\biggr)^2 = \sin^2\pi\Delta\phi_{1/2}+
\cos^2\pi\Delta\phi_{1/2}\cos^2i,
\label{eqn:inclin}
\end{equation}
which, using the value of $R_2/a$ obtained using equations~\ref{eqn:eggleton}
and~\ref{eqn:synch}, allows us to calculate the inclination, $i$, of the system.
The geometry of a disc eclipse can be approximated by that of a point eclipse if the
light distribution around the white dwarf is axi--symmetric (e.g. \pcite{dhillon90}).
This approximation is justified given the symmetry of the primary eclipses
in the photometry light curves (Fig.~\ref{fig:lc}).
Kepler's Third Law gives us
\begin{equation}
{{K_R^3P_{orb}}\over{2\pi G}}={{M_1\sin^3i}\over{(1+q)}^2},
\end{equation}
which, with the values of $q$ and $i$ calculated using
equations~\ref{eqn:eggleton}, \ref{eqn:synch} and~\ref{eqn:inclin}, gives the mass
of the primary star. The mass of the secondary star can then be obtained using
\begin{equation}
{M_2} = {q{M_1}}.
\label{eqn:qratio}
\end{equation}
The radius of the secondary star is obtained from the equation
\begin{equation}
{{v \sin i} \over {R_2}} = {{2\pi \sin i} \over P},
\label{eqn:secradius}
\end{equation}
(e.g. \pcite{warner95a}) and the separation of the components, $a$,
is calculated from
equations~\ref{eqn:synch} and~\ref{eqn:secradius} with $q$ and $i$ now known.
The Monte Carlo simulation takes 10\,000 values of $K_R$, $v \sin i$, and
$\Delta\phi_{1/2}$ (the error on the period is deemed to be negligible in
comparison to the errors on $K_R$, $v \sin i$, and $\Delta\phi_{1/2}$),
treating each as being normally distributed about their
measured values with standard deviations equal to the errors on the
measurements. We then calculate the masses of the components, the inclination
of the system, the radius of the secondary star, and the separation of the
components, as outlined above, omitting
($K_R$, $v \sin i$, $\Delta\phi_{1/2}$)
triplets which are inconsistent with $\sin i \leq1$. Each accepted
$M_1,M_2$ pair is then plotted as a point in Figure~\ref{fig:montecarlo},
and the masses and their errors are computed from
the mean and standard deviation of the distribution of these pairs.
We find the component masses of V347~Pup to be $M_1 = 0.63 \pm 0.04M_\odot$
and $M_2 = 0.52 \pm 0.06M_\odot$.
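The parameter chain above (equations~\ref{eqn:eggleton}--\ref{eqn:secradius}) and the Monte Carlo draw can be sketched as follows. The physical constants and the central values are from this paper; the random seed, the bisection solver for $q$ and the exact acceptance test are implementation choices, so the recovered means should only be read as order-of-magnitude checks against Table~\ref{tab:params}.

```python
import numpy as np

G = 6.674e-11                                   # m^3 kg^-1 s^-2
M_SUN, R_SUN = 1.989e30, 6.957e8                # kg, m

def eggleton(q):
    """Volume-equivalent Roche-lobe radius R_2/a (Eggleton 1983)."""
    return 0.49 * q**(2.0 / 3.0) / (0.6 * q**(2.0 / 3.0) + np.log(1.0 + q**(1.0 / 3.0)))

def solve_q(ratio):
    """Solve eggleton(q)*(1+q) = v sin i / K_R for q by bisection."""
    lo, hi = 0.05, 3.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if eggleton(mid) * (1.0 + mid) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(3)
P = 0.231936060 * 86400.0                       # orbital period (s)
n = 10000
K_R = rng.normal(198.0, 5.0, n) * 1e3           # m/s (K_R-corrected)
vsini = rng.normal(130.0, 5.0, n) * 1e3         # m/s
dphi = rng.normal(0.115, 0.005, n)

q = np.array([solve_q(r) for r in vsini / K_R])
r2_a = eggleton(q)
cos2i = (r2_a**2 - np.sin(np.pi * dphi)**2) / np.cos(np.pi * dphi)**2
ok = (cos2i >= 0.0) & (cos2i < 1.0)             # reject unphysical triplets
sini = np.sqrt(1.0 - cos2i[ok])
M1 = K_R[ok]**3 * P * (1.0 + q[ok])**2 / (2.0 * np.pi * G * sini**3) / M_SUN
M2 = q[ok] * M1
R2 = vsini[ok] * P / (2.0 * np.pi * sini) / R_SUN
print(f"M1 = {M1.mean():.2f} +/- {M1.std():.2f} Msun")
print(f"M2 = {M2.mean():.2f} +/- {M2.std():.2f} Msun")
print(f"R2 = {R2.mean():.2f} +/- {R2.std():.2f} Rsun")
```

The accepted ($M_1$, $M_2$) pairs, plotted as points, reproduce the structure of Fig.~\ref{fig:montecarlo}.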
The values of all the system parameters deduced from the Monte Carlo
computation are listed in Table~\ref{tab:params}, including $K_R$--corrected
and non $K_R$--corrected values for comparison. Note that our derived
$K_W$ of 163 $\pm$ 9 km\,s$^{-1}$ is in
remarkable agreement with the $K_W$ values of \scite{still98} who measure
156 $\pm$ 10 km\,s$^{-1}$ using a double--Gaussian convolution of the
Balmer lines, and 166 km\,s$^{-1}$ as the centre of axisymmetric Balmer emission.
The white dwarf mass of 0.63 $\pm$ 0.04$M_\odot$ is consistent
with the average value of ${\overline M}_1 = 0.80\pm0.22M_\odot$ (for
CVs above the period gap) determined by \scite{smith98}.
The empirical relation obtained by \scite{smith98} between mass and radius
for the secondary stars in CVs predicts that if the secondary star in
V347~Pup is on the main-sequence, it should have a radius of
0.54 $\pm$ 0.08 $R_\odot$. Our measured value of 0.60 $\pm$ 0.02 $R_\odot$ (from
equation~\ref{eqn:secradius}) is consistent with this value.
\begin{figure}
\begin{center}
\includegraphics[width=7.8cm,angle=-90]{v347pup_montecarlo3.ps}
\end{center}
\caption{\protect\small
Monte Carlo determination of system parameters for V347~Pup.
Each dot represents
an $M_1,M_2$ pair; the solid
curves satisfy the $v \sin i$ and $K_R$ constraints, and the dashed lines
mark lines of constant inclinations ($i$ = 80$^\circ$, 85$^\circ$ and
90$^\circ$).}
\label{fig:montecarlo}
\end{figure}
\begin{table*}
{\protect\small
\caption[System parameters]{System parameters for V347~Pup. The
Monte Carlo results for corrected and uncorrected $K_R$ values are
shown for comparison. The radial velocity of the white dwarf ($K_W$) has
also been calculated from the secondary star parameters.}
\label{tab:params}
\begin{tabular*}{110mm}
{lr@{$\:\pm\:$}lr@{$\:\pm\:$}lr@{$\:\pm\:$}lr@{$\:\pm\:$}l}
\\
\hline
\vspace{-3mm}\\
\multicolumn{1}{l}{Parameter} & \multicolumn{4}{c}{Non $K_R$--corrected} &
\multicolumn{4}{c}{$K_R$--corrected} \\
\multicolumn{1}{l}{} & \multicolumn{8}{c}{------------------------------------------------------------------------------------} \\
\multicolumn{1}{c}{} &
\multicolumn{2}{c}{Measured} &
\multicolumn{2}{c}{Monte Carlo} &
\multicolumn{2}{c}{Measured} &
\multicolumn{2}{c}{Monte Carlo} \\
\multicolumn{1}{l}{} &
\multicolumn{2}{c}{Value} &
\multicolumn{2}{c}{Value} &
\multicolumn{2}{c}{Value} &
\multicolumn{2}{c}{Value} \\
\vspace{-3mm}\\
\hline
\vspace{-3mm}\\
$P_{orb}$ (d) & \multicolumn{2}{c}{0.231936060} & \multicolumn{2}{c}{} &
\multicolumn{2}{c}{0.231936060} & \multicolumn{2}{c}{} \\
$K_R$ (km\,s$^{-1}$) & 216&5 & 215&5 & 198&5 & 198&5 \\
$v \sin i$ (km\,s$^{-1}$) & 130&5 & 131&5 & 130&5 & 131&5 \\
$\Delta\phi_{1/2}$ & 0.115&0.005 & 0.111&0.003 & 0.115&0.005 & 0.113&0.004 \\
$q$ & \multicolumn{2}{c}{} & 0.73&0.05 & \multicolumn{2}{c}{} & 0.83&0.05 \\
$i^\circ$ & \multicolumn{2}{c}{} & 85.0&2.1 & \multicolumn{2}{c}{} & 84.0&2.3 \\
$K_W$ (km\,s$^{-1}$) & \multicolumn{2}{c}{} & 158&9 &
\multicolumn{2}{c}{} & 163&9 \\
$M_1/M_\odot$ & \multicolumn{2}{c}{} & 0.73&0.05 &
\multicolumn{2}{c}{} & 0.63&0.04 \\
$M_2/M_\odot$ & \multicolumn{2}{c}{} & 0.54&0.06 &
\multicolumn{2}{c}{} & 0.52&0.06 \\
$R_2/R_\odot$ & \multicolumn{2}{c}{} & 0.60&0.02 &
\multicolumn{2}{c}{} & 0.60&0.02 \\
$a/R_\odot$ & \multicolumn{2}{c}{} & 1.72&0.04 &
\multicolumn{2}{c}{} & 1.66&0.05 \\
d (pc) & 490&130 & \multicolumn{2}{c}{} &
490&130 & \multicolumn{2}{c}{} \\
Spectral type & \multicolumn{2}{c}{M0.5 V} & \multicolumn{2}{c}{} &
\multicolumn{2}{c}{M0.5 V} & \multicolumn{2}{c}{} \\
of secondary & \multicolumn{2}{c}{} & \multicolumn{2}{c}{}
& \multicolumn{2}{c}{} & \multicolumn{2}{c}{} \\
$\Delta\phi$ & 0.110&0.005 & \multicolumn{2}{c}{} & 0.110&0.005 &
\multicolumn{2}{c}{} \\
$R_D/R_1$ & \multicolumn{2}{c}{} & 0.72&0.08 & \multicolumn{2}{c}{} & 0.72&0.09 \\
\hline\\
\end{tabular*}
}
\end{table*}
\section{Discussion}
\label{sec:discussion}
\subsection{Spiral arms}
\label{sec:shocks}
Spiral-armed disc asymmetries are evident in the HeI and H$\beta$ Doppler
maps, confirming the findings of \scite{still98} in their H$\beta$ and
H$\gamma$ maps. Similar spiral structures
have been observed in dwarf novae during outburst (e.g. IP~Peg,
\pcite{steeghs97}; U Gem, \pcite{groot01}). Tidally
driven spiral density waves can develop in accretion discs due to the
tidal torque of the mass donor star on the outer disc (\pcite{sawada86},
\pcite{blondin00}, \pcite{boffin01}). Their detection in outburst only
reflects the much stronger tidal effects on the accretion disc when it
increases in size and temperature during outburst, in which case a tidally
induced spiral structure is expected that closely matches the observed
structures (\pcite{armitage98}, \pcite{steeghs99}, \pcite{steeghs01}).
In dwarf novae, these asymmetries decay as the system returns to quiescence,
and the disc cools and shrinks. In order for a similar tidal response to be
responsible for the disc asymmetry in V347 Pup, its disc must be
large and comparable to the tidal radius. We calculate the tidal radius of the
accretion disc to be 0.33$a$ using the
pressureless disc models of \scite{paczynski77} and our new system parameters.
The measured disc radius of $R_D/a$ = 0.28 $\pm$ 0.03 is comparable in size
to the tidal radius, and therefore consistent with a tidal origin for the
observed spiral structure.
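A commonly quoted approximation to Paczynski's (1977) pressureless-disc result is $r_{tide}/a \approx 0.60/(1+q)$, valid for $0.03 < q < 1$; the exact tabulated values differ slightly, so this is only a consistency sketch using the mass ratio from this paper.

```python
# Approximate tidal (last non-intersecting orbit) radius of the disc.
q = 0.83                           # mass ratio from this paper
r_tide = 0.60 / (1.0 + q)
print(f"r_tide/a ~ {r_tide:.2f}")
```

This recovers the 0.33$a$ quoted in the text.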
Our observations show that the spiral structures are clearly visible in the
HeI Doppler maps, but are either weak or non-existent in the Balmer and HeII
maps. This is in contrast to dwarf novae in outburst, which typically show
stronger spiral structures in the HeII and Balmer lines (e.g.
\pcite{marsh00}; \pcite{morales04}). This could be a reflection of
different densities and temperatures in NL discs compared to the discs of
dwarf novae in outburst, or it could
simply be due to a contrast effect where the relative contribution of the
spiral structure is not as high in the HeII and Balmer maps due to the
presence of low--velocity emission.
Note that the impact of such tidally--induced spiral arms on the
angular momentum transport has not been fully established. If they are
associated with hydrodynamical shocks, such as in the
simulations of \scite{sawada86}, their contribution to the angular
momentum transport could be very significant. On the other hand,
\scite{smak01} and \scite{ogilvie02} propose that these disc structures may
reflect tidally thickened areas in the outer disc as it expands
close to its tidal radius. Their enhanced emission is then caused by
irradiation from the accreting white dwarf and regions close to it.
The prospect of testing such basic disc physics with observations
warrants the study of these disc structures in more detail (see also
\pcite{morales04}). With V347 Pup, we have a target that appears to have
a persistent disc asymmetry that is more accessible than the transient
spiral structure observed in dwarf novae.
\subsection{Mass transfer stability}
\label{masstrans}
The mass ratio of a CV is of great significance, as it
governs the properties of mass transfer from the secondary to the white
dwarf primary. This in turn governs the evolution and behaviour
of the system.
The secondary star responds to mass loss on two timescales. First, the
star returns to hydrostatic equilibrium on the dynamical timescale, which is
the sound--crossing time of the region affected. Second, the star settles
into a new thermal equilibrium configuration on a thermal timescale.
The two timescales upon which the secondary responds to mass loss lead to
two types of mass transfer instability. If, upon mass loss, the dynamical
response of the secondary is to expand relative to the Roche lobe, mass
transfer is dynamically unstable and mass transfer proceeds on the
dynamical timescale. \scite{politano96} made an analytic fit
to the models of \scite{hjellming89} to give the limit of dynamically stable
mass loss, plotted as the solid line in Fig.~\ref{fig:qplot}. Dynamically
stable mass transfer can occur if the CV lies below this line.
This limit is important for low mass secondary stars ($M_2 < 0.5M_\odot$), as
they have significant convective envelopes that tend to expand adiabatically
in response to mass loss (\pcite{dekool92}).
Thermally unstable mass transfer is possible if the dynamic response
of the star to mass loss is to shrink relative to its Roche lobe
(i.e. mass transfer is {\em dynamically} stable).
This occurs at high donor masses ($M_2 > 0.8M_\odot$) when the
star has a negligible convective envelope and its adiabatic response
to mass loss is to shrink (e.g. \pcite{dekool92}; \pcite{politano96}).
Mass transfer then
initially breaks contact and the star begins to settle into its new thermal
equilibrium configuration. If the star's thermal equilibrium radius is
now bigger than the Roche lobe, mass transfer is again unstable, but
proceeds on the slower, thermal timescale.
The limit of thermally--stable mass transfer can be
found by differentiating the main--sequence mass--radius relationship
given in \scite{politano96}.
Thermally--stable mass transfer can occur if the CV appears below the dotted
line plotted in Fig.~\ref{fig:qplot}.
The limit for dynamically stable mass transfer is important in the case of
V347~Pup owing to the low secondary star mass. Fig.~\ref{fig:qplot} shows
that the system is just consistent with the limit at the 1$\sigma$ level.
The mass transfer stability limits, however, are true only for
ZAMS stars, whereas the secondary stars in CVs are expected to have undergone
some evolution. The loss of the outer envelope,
for example, would result in a larger than normal helium to hydrogen ratio and
affect the star's response to mass loss. For instance, DX~And, which lies
outside the limit, has been shown to have an evolved companion (\pcite{drew93}).
Three pieces of evidence tentatively suggest that the secondary star in
V347~Pup is evolved.
First, V347~Pup falls outside the limit for dynamically
stable mass transfer (although it is consistent at the 1$\sigma$ level). Second,
the measured radius is at the upper limit for a main--sequence companion
of the same mass (\pcite{smith98}). Third, the secondary star mass and
spectral type measured for V347~Pup are closer to the evolved models of
\scite{kolb01} than the ZAMS models.
\begin{figure}
\begin{center}
\includegraphics[width=8.4cm,angle=-90]{q_plot.ps}
\end{center}
\caption{\protect\small
Critical mass ratios for mass transfer stability. The dotted line represents
the condition for thermal instability; the solid line represents the
condition for dynamical instability (Politano 1996).
Both curves assume the star is initially in thermal equilibrium.
Mass ratios and secondary masses from the compilation of
\protect\scite{smith98}, \protect\scite{north00}, \protect\scite{watson03},
and \protect\scite{thoroughgood04} are overplotted.
The mass ratios and secondary star masses of V347~Pup
determined in this paper are also plotted.}
\label{fig:qplot}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
\begin{enumerate}
\item{We have measured the radial and rotational velocities of the secondary
star in V347~Pup in order to calculate the component masses and other
system parameters. The secondary star radial velocity is affected by
irradiation from the emission regions around the primary, which we correct
for using a model. We find the component masses in V347~Pup to be
$M_1$ = 0.63 $\pm$ 0.04 $M_\odot$ for the white dwarf primary and
$M_2$ = 0.52 $\pm$ 0.06 $M_\odot$ for the M0.5V secondary star.
V347~Pup shows many of the characteristics of the SW~Sex stars,
exhibiting single--peaked emission lines, high--velocity S--wave
components and phase--offsets in the radial velocity curves.}
\item{V347~Pup lies outside the theoretical limit for dynamically
stable mass transfer in ZAMS stars, but is just consistent at the
1$\sigma$ uncertainty level. This piece of evidence, together with a
secondary star radius at the upper limit for a main--sequence
companion of the same mass, suggests that the secondary star in
V347~Pup may be evolved. Additionally, the secondary star mass and
spectral type measured for V347~Pup are closer to the evolved models of
\scite{kolb01} than the ZAMS models.}
\item{The presence of spiral arms in the accretion disc, first noted
by \scite{still98}, has been confirmed. Consistent with this, we find that
the measured accretion disc radius is close to the tidal radius computed from
the pressureless disc models of \scite{paczynski77}. The persistent
spiral arms seen in this bright novalike make it an excellent candidate in which
to study these features, in contrast to the transient spiral structures observed in
dwarf novae.}
\end{enumerate}
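The mass ratio $q=M_2/M_1$ used to place V347~Pup in Fig.~\ref{fig:qplot} follows from the component masses above by standard error propagation. A minimal sketch, assuming independent Gaussian uncertainties (not the authors' actual pipeline):

```python
import math

def ratio_with_error(m2, dm2, m1, dm1):
    """Mass ratio q = M2/M1 with first-order (Gaussian) error propagation."""
    q = m2 / m1
    dq = q * math.sqrt((dm2 / m2) ** 2 + (dm1 / m1) ** 2)
    return q, dq

# Component masses from the conclusions, in solar masses
q, dq = ratio_with_error(0.52, 0.06, 0.63, 0.04)
print(f"q = {q:.2f} +/- {dq:.2f}")  # q = 0.83 +/- 0.11
```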
\section*{\sc Acknowledgements}
TDT is supported by a PPARC studentship; CAW is supported by PPARC
grant number PPA/G/S/2000/00598; SPL is supported by PPARC.
DS acknowledges a Smithsonian Astrophysical Observatory Clay Fellowship.
\bibliographystyle{mnras}
\section{Introduction}
The study of radiative $K_L$ decays can give
valuable information on the kaon structure.
It allows a good
test of theories describing hadron interactions and decays,
like Chiral Perturbation Theory (ChPT).
Here we
present a study of the radiative
$K_{e3}$ decay.
There are two distinct photon components in radiative $K^0_{e3}$ decays:
inner bremsstrahlung (IB) and direct emission.
$K^0_{e3}$ decays are mainly sensitive to the IB component because of
the small electron mass $m_e$.
A large contribution to the rate,
dominated by the IB amplitude, comes from the
region of small
photon energies $E_{\gamma}^*$
and angles $\theta_{e\gamma}^*$ between the charged lepton
and the photon, with both $E_{\gamma}^*$ and $\theta_{e\gamma}^*$
measured in the kaon rest frame.
The $K^0_{e3\gamma}$ amplitude has an infrared singularity
at $E_{\gamma}^*\rightarrow0$ and a collinear singularity at
$\theta_{e\gamma}^*\rightarrow0$ when $m_e = 0$.
For
this measurement and the corresponding theoretical evaluation, we exclude
these regions by the restrictions
$E_{\gamma}^*>30$~MeV and $\theta_{e\gamma}^*>20^\circ$.
Two different theoretical approaches to evaluating the branching ratio
have been used. Current-algebra techniques together with the
Low theorem were applied
by Fearing, Fischbach and Smith (called FFS hereafter) \cite{FFS,FFS_new} and by
Doncel \cite{Doncel}.
ChPT calculations were performed in \cite{Holstein}, \cite{ChPT} and
are being continuously improved \cite{ChPT_new}, \cite{Gasser}.
The ratio of the $K^0_{e3\gamma}$ to $K^0_{e3}$ decay probabilities,
applying the
standard cuts on $E_{\gamma}^*$ and $\theta_{e\gamma}^*$, is
predicted to be between $0.95\%$ and $0.99\%$.
The amounts of direct emission in these calculations differ and are
roughly estimated
to be between 0.1\%
and 1\% of the size of the IB component.
Two experimental measurements of the $K^0_{e3\gamma}$ branching ratio
have been published.
The NA31 experiment obtained $Br(K^0_{e3\gamma},E_{\gamma}^*>30~ \rm{MeV},
\theta_{e\gamma}^*>20^\circ)/Br(K^0_{e3})=(0.934\pm0.036^{+0.055}_{-0.039}) \%$
\cite{NA31}.
The KTeV experiment gave a compatible value of the ratio
$Br(K^0_{e3\gamma},E_{\gamma}^*>30~ \rm{MeV},\theta_{e\gamma}^*>20^\circ)/Br(K^0_{e3})=(0.908\pm0.008^{+0.013}_{-0.012}) \%$ \cite{KTeV}.
However, this value does not agree well with
theoretical predictions.
\section{Experimental setup}
The NA48 detector was designed for a measurement of direct CP
violation in the $K^0$ system. Here we use data from a
dedicated run in September 1999
where a $K_L$ beam was produced by 450 $\rm{GeV}/c$ protons from the
CERN SPS incident on a beryllium target.
The decay region is located
120 m from the $K_L$ target after three
collimators and sweeping magnets. It is
contained in an evacuated tube, 90 m long, terminated by a thin
($3\cdot 10^{-3} X_0$) kevlar window.
The detector components relevant for this measurement include
the following:
The {\bfseries magnetic spectrometer} is designed to measure the momentum of
charged particles with high precision. The momentum resolution is given by
\begin{equation}
\frac{\sigma(p)}{p} = \left( 0.48 \oplus 0.009 \cdot p \right) \%
\end{equation}
where $p$ is in $\rm{GeV}/c$. The spectrometer consists of four drift chambers (DCH), each with 8 planes of sense wires
oriented along the projections $x$,$u$,$y$,$v$, each
one rotated by 45 degrees with respect to the previous one.
The spatial resolution achieved per projection is
$\rm{100~ \mu m}$ and the time resolution is $\rm{0.7~ns}$.
The volume between
the chambers is
filled with helium at near-atmospheric pressure. The spectrometer
magnet is a dipole
with a field integral of 0.85 Tm and is placed after the first two chambers.
The distance between the first and last chamber is 21.8~m.
The {\bfseries hodoscope} is placed downstream of the last drift chamber. It
consists of two planes of scintillators segmented in horizontal and vertical
strips and arranged in four quadrants. The signals are used
for a fast coincidence of two charged particles in the trigger. The time
resolution from the hodoscope is $200~\rm{ps}$ per track.
The {\bfseries electromagnetic calorimeter} (LKr) is a quasi-homogeneous calorimeter based on liquid krypton,
with tower read-out. The 13212 read-out cells have cross sections of
$2 \times 2$~cm$^2$.
The electrodes extend from the front to the back
of the detector in a small angle accordion geometry.
The LKr calorimeter measures the energies of the $e^{\pm}$ and $\gamma$ quanta by
gathering the ionization from their electromagnetic showers.
The energy resolution is
\begin{equation}
\frac{\sigma(E)}{E} =
\left(\frac{3.2}{\sqrt{E}}\oplus\frac{9.0}{E}\oplus0.42\right) \%
\end{equation}
where $E$ is in GeV, and the time resolution for showers with
energy between 3 GeV and 100 GeV is $500~\rm{ps}$.
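Both resolution parametrizations combine their terms in quadrature (the meaning of the $\oplus$ symbol). A small sketch, with the numerical constants transcribed from the two formulas above:

```python
import math

def quad_sum(*terms):
    """Combine resolution terms in quadrature (the meaning of the ⊕ symbol)."""
    return math.sqrt(sum(t * t for t in terms))

def sigma_p_over_p(p):
    """Spectrometer momentum resolution in percent; p in GeV/c."""
    return quad_sum(0.48, 0.009 * p)

def sigma_e_over_e(e):
    """LKr energy resolution in percent; E in GeV."""
    return quad_sum(3.2 / math.sqrt(e), 9.0 / e, 0.42)

# A 100 GeV/c track is measured to about one percent:
print(f"{sigma_p_over_p(100):.2f} %")  # 1.02 %
```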
The {\bfseries muon veto system} (MUV) consists of three planes of
scintillator counters, shielded by iron walls of 80~cm thickness.
It is used to reduce the $K_L \rightarrow \pi^{\pm}\mu^{\mp}\nu$
background.
Charged decays were triggered with a two-level trigger system. The trigger
requirements were two charged particles in the scintillator hodoscope
or in the drift chambers coming from the vertex in the decay region.
A more detailed description of the NA48 setup can be found elsewhere
\cite{NA48}.
\section{Analysis}
\subsection{Event selection }
The data sample consisted of about 2 TB of data from 100
million triggers, with approximately equal amounts recorded
with alternating spectrometer magnet polarities.
These data are the same which were used for the measurement of the
$K_{e3}$ branching ratio \cite{BR}.
The following selection criteria were applied to the
reconstructed data
to identify $K_{e3}$ decays and to reject background,
keeping in mind the
main backgrounds to $K_{e3}$, which are
$K_L \rightarrow \pi^{\pm}\mu^{\mp}\nu$ ($K_{\mu3}$) and
$K_L \rightarrow \pi^+\pi^-\pi^0$ ($K_{3\pi}$):
- Each event was required to contain exactly two tracks,
of opposite charge,
and a reconstructed vertex in the decay region.
To form a vertex, the
closest distance of approach
between these tracks had to be less than 3~cm.
The decay region was defined by
requirements that the vertex had to be between 6 and 34~m from
the end of the last collimator
and that the transverse distance between the vertex and the
beam axis
had to be less than 2~cm.
These cuts were passed by 35 million events.
- The time difference between the tracks was required to be
less than $6~\rm{ns}$.
To reject muons,
only events with
both tracks inside the detector acceptance and without
in-time hits in
the MUV system were used.
For the same reason only
particles with a momentum
larger than 10~GeV
were accepted.
In order to
allow a clear separation of pion and
electron showers,
we required the distance between the entry points of the
two tracks at the front face of the LKr Calorimeter
to be larger than 25~cm. As a result 14 million events remained.
- For the identification of electrons and pions, we used the ratio of the
measured cluster energy, $E$, in the LKr calorimeter associated to a
track to the momentum, $p$, of this track as measured in the magnetic
spectrometer. The ratio $E/p$ for
a sample of 75 000 pion tracks,
selected by requiring the other track of a 2-track event to be an
electron with $E/p > 1.02$,
is shown in fig. \ref{eovp}. As a cross-check pion samples from
$K_{2\pi}$ and $K_{3\pi}$ decays were
selected giving similar results.
Also shown in the figure is the distribution
for 450 000 electron tracks which are selected
from 2-track events where the
other track is a pion, with $0.4<E/p<0.6$.
For the selection of $K_{e3}$ events, we require one track to have
$0.93 < E/p < 1.10$ (electron) and the other track to have
$E/p < 0.90$ (pion). 11.7 million events were accepted.
\begin{figure}[h]
\begin{center}
\epsfig{file=eovp_bw.eps,width=8.0cm,height=8cm}
\end{center}
\caption{Distribution of the ratio of the shower energy $E$
reconstructed by the LKr and the momentum $p$ reconstructed by
the spectrometer, for pions (dotted) and electrons (line)
from $K_{e3}$ events (see text).}\label{eovp}
\end{figure}
- In order to reduce background from $K_{3\pi}$ decays,
we required the
quantity
\begin{equation}
{P_0^{\prime}}^2=\frac{(m_K^2 -m_{+-}^2 -m_{\pi^0}^2)^2-
4(m_{+-}^2m_{\pi^0}^2+m_K^2p_{\bot}^2)}{4(p_{\bot}^2+m_{+-}^2)}
\end{equation}
to be less than $-0.004 (\rm{GeV}/c)^2$.
In the equation above, $p_{\bot}$ is the transverse momentum of the two
track system (assumed to consist of two charged pions)
relative to the $K_L^0$ flight
direction and $m_{+-}$ is the
invariant mass of the charged system.
The variable ${P_0^{\prime}}^2$ is positive
if the charged particles are pions from the decay $K_{3\pi}$, and its
distribution has a maximum at zero.
The cut removes $(98.94\pm0.03)\%$
of $K_{3\pi}$
decays and $(1.03\pm0.02)\%$ of
$K_{e3}$ decays as estimated with the Monte Carlo simulation (sect. 3.3).
After this cut, we were left with
11.4 million $K_{e3}$ candidate events.
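For illustration, the discriminant ${P_0^{\prime}}^2$ can be evaluated directly from the measured quantities. A sketch in GeV units; the $K_L$ and $\pi^0$ masses are assumed PDG values, not quoted in the text:

```python
# PDG mass values (GeV/c^2) -- an assumption; the paper does not list them
M_K = 0.497611    # K_L
M_PI0 = 0.134977  # pi0

def p0_prime_sq(m_pm, p_t):
    """Kinematic discriminant P0'^2 in (GeV/c)^2.
    m_pm: invariant mass of the charged pair (pi+ pi- hypothesis), GeV/c^2;
    p_t:  transverse momentum of the pair w.r.t. the K_L direction, GeV/c."""
    num = (M_K**2 - m_pm**2 - M_PI0**2)**2 \
          - 4 * (m_pm**2 * M_PI0**2 + M_K**2 * p_t**2)
    return num / (4 * (p_t**2 + m_pm**2))

# A K3pi-like configuration gives a non-negative value ...
assert p0_prime_sq(0.3, 0.0) > 0
# ... while large transverse momentum drives it below the -0.004 cut:
assert p0_prime_sq(0.3, 0.2) < -0.004
```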
The neutrino momentum in $K_{e3}$ decays
is not known and
the kinematic reconstruction of the kaon momentum from the measured
track momenta leads to a two-fold ambiguity
in the
reconstructed kaon momentum.
We call the solution with the larger energy the ``first
solution''.
In order to measure the kaon momentum spectrum, we selected
events in which both solutions for the kaon momentum
lie in the same bin of width 8 GeV. We call these $4\cdot10^5$ events
``diagonal events''.
The last selection criterion was the
requirement that each of the two solutions for the kaon energy
had to be in the energy range (60,180) GeV.
As a result of this selection,
$5.6 \cdot 10^6$
fully reconstructed $K_{e3}$ events were selected from the total sample.
These selected events include radiative $K_{e3}$ events.
For the selection of $K_{e3\gamma}$ events,
the following additional requirements were made:
The distance between the $\gamma$ cluster and the pion track in
LKr had to be larger than 55 cm in order to
allow a clear separation of the $\gamma$ cluster from
pion clusters.
As is shown in fig. \ref{piclusters} the hadron showers
can extend over lateral distances of up to 60 cm
from the track entry point in LKr.
\begin{figure}[h!]
\begin{center}
\epsfig{file=pi_clusters.eps,width=8.0cm,height=8cm}
\end{center}
\caption{Transverse distance between the pion entry point in
LKr and the position of a cluster
induced by pion interactions with matter; the pions here are selected from
$K^0_L\rightarrow\pi^{+}\pi^{-}$ decays where the
entry points of the two tracks in the
LKr calorimeter are at least 80 cm from each other; clusters have a minimum
energy of 4~GeV}\label{piclusters}
\end{figure}
After the requirements $E_{\gamma}^*>30~\rm{MeV}$ and
$\theta_{e\gamma}^*>20^\circ$ (for both solutions of the kaon energy),
22 100 events survived.
To distinguish the $\gamma$ from the electron cluster we required the
transverse distance
between the $\gamma$ cluster candidate and the electron track in
LKr to be greater than 6 cm.
The electromagnetic
transverse rms shower width in LKr is 2.2~cm.
An event was rejected if
the $\gamma$ cluster candidate was less than
16 cm away from the beam axis, because of the beam
hole in the LKr calorimeter.
We also rejected events
with a $\gamma$ cluster candidate with energy below 4~GeV because the
energy resolution deteriorates below this threshold.
Finally an event was
rejected if the $\gamma$ was not in-time (more than $6~\rm{ns}$
time difference)
with the associated cluster(s).
These cuts provided a sample of
19 117 $K_{e3\gamma}$ candidates.
\subsection{Backgrounds}
The amount of background was evaluated
using a Monte Carlo simulation for other kaon decays.
The background to $K_{e3\gamma}$ events
is small and comes from three sources -
$K_{3\pi}$ and
$K_L \rightarrow \pi^{0}\pi^{\pm}e^{\mp}\nu$ ($K_{e4}$)
decays as well as
$K_{e3}$ decays with an accidental photon. The $K_{3\pi}$ background was
reduced
by the cut on the variable
${P^{\prime}_0}^2$ and the electron identification through the $E/p>0.93$
condition.
Variations of these cuts have a negligible effect, since the
probability to misidentify a pion as
an electron is only 0.57\% (fig. \ref{eovp}), and
the ${P^{\prime}_0}^2$ distribution is well reproduced by
the MC simulation.
The estimated number of background
events was $40^{+60}_{-40}$ events.
The $K_{e4}$
background was evaluated to be $80\pm 40$ events
from the measured
branching ratio and the calculated acceptance for these decays.
The contamination from $K_{e3}$ decays with an accidental photon
was estimated using
the distribution of the time difference between the $\gamma$ cluster candidate
and the (average) time of the other cluster(s).
The number of events in the two control regions
$(-25,-10)~\rm{ns}$ and $(10,25)~\rm{ns}$ was extrapolated to the signal
region $(-6,6)~\rm{ns}$.
The final number for this source
of background was estimated to be $20^{+40}_{-20}$ events,
assuming a flat distribution.
All backgrounds to $K_{e3\gamma}$
add up to $140\pm82$ events or $0.7\%$ of the total $K_{e3\gamma}$
sample of 19117 events.
The main background to the normalization channel $K_{e3}$ arises from
$K_{3\pi}$ and
$K_{\mu3}$ decays.
The estimations were made as in
the case of $K_{e3\gamma}$.
All the background decays together
gave a $K_{e3}$ signature in less than $9 \cdot 10^{-5}$ of the
cases ($<500$ events). This percentage is negligible compared to
background sources in $K_{e3\gamma}$ decay.
\subsection{Monte Carlo Simulation}
\begin{figure}[t]
\begin{center}
\epsfig{file=e_nucms_bw.eps,width=8.0cm,height=8cm}
\end{center}
\caption{Reconstructed neutrino energy in the
centre-of-mass system; upper panel:
experimental data distribution; lower panel: ratio of data to MC,
normalized to unity, with a linear fit}\label{e_nu}
\end{figure}
\begin{figure}[h]
\begin{center}
\epsfig{file= e_gcms_1_bw.eps,width=8.0cm,height=8cm}
\end{center}
\caption{First solution for $E_{\gamma}^*$; upper panel:
experimental data distribution; lower panel: ratio of data to MC,
normalized to unity, with a linear fit}\label{e1}
\end{figure}
\begin{figure}[h]
\begin{center}
\epsfig{file= theta_eg_1_bw.eps,width=8.0cm,height=8cm}
\end{center}
\caption{First solution for $\theta_{e\gamma}^*$; upper panel:
experimental data distribution; lower panel: ratio of data to MC,
normalized to unity, with a linear fit}\label{cos1}
\end{figure}
In order to calculate the geometrical and kinematical acceptance of the
NA48 detector, a GEANT-based simulation
was employed \cite{NA48}. The kaon momentum spectrum from sect. 3.1 was
implemented into the MC code.
The radiative corrections (virtual and real) were taken into account
by modifying the PHOTOS \cite{PHOTOS} program
package
in such a way as to reproduce the experimental data. This was achieved
by weighting the angular distribution $\theta^*_{e\gamma}$ in the
centre-of-mass frame
such as to fit the experimental data (model independent analysis).
With this procedure the MC and experimental data showed good agreement.
As an example,
the distributions of
the neutrino
energy, $\gamma$ energy
and
$ \theta_{e\gamma}^*$ (first solutions)
in the centre-of-mass frame are presented in
figures \ref{e_nu}, \ref{e1},
and \ref{cos1}
respectively.
The upper
plots of the figures show
the experimental data
distributions and
the lower show the ratio of the data and the MC spectra, normalized to
unity.
The plots represent data with the negative
magnet polarity and after the $K_{e3\gamma}$ selection.
The MC data were treated exactly in the
same way as the experimental data and were used for acceptance calculations.
The acceptance for $K_{e3\gamma}$ is
$\epsilon(K_{e3\gamma})=(6.08\pm0.03)\%$
as compared to the $K_{e3}$ acceptance
$\epsilon(K_{e3})=(17.28\pm0.01)\%$.
\subsection{Reconstruction and analysis technique}
We used the ``diagonal events''
to measure the kaon momentum spectrum from $K_{e3}$ decays.
However, as this reduces the data sample significantly,
for the analysis of the branching ratio
the problem was dealt with in another way.
In the $K_{e3}$ selection it was required that both solutions were
in the range (60,180)
GeV.
Further, in the $K_{e3\gamma}$ selection,
events were rejected if at least one of the two solutions for $E_\gamma^*$ was less
than $30$ MeV or at least one of the two solutions for $\theta_{e\gamma}^*$
was less than $20^\circ$.
The same procedure was used for selecting MC events when calculating
the acceptance.
An important issue is radiative corrections. Only the inclusive rate
($K_{e3\gamma}$ plus any number of radiative photons) is finite and calculable.
In our selection we have required only one hard $\gamma$ satisfying
$E_{\gamma}^*>30~\rm{MeV}$ and
$\theta_{e\gamma}^*>20^\circ$.
In this way, events with one ``hard'' $\gamma$ and
any number of soft photons are included in the final selection. Events with two or more hard photons
are rejected. This loss has to be taken into account by MC in the
calculation of the corresponding acceptance. In order to check the MC we
have
compared the number of $\gamma$ clusters in the LKr calorimeter predicted
by the MC with that in the experimental data. A slight difference
has been observed, leading to a small correction of 0.05\% to the branching
ratio.
We
take this into account by a correction factor $C_M=0.9995$ to the
branching ratio.
Additionally we have
reanalysed our data, requiring at least one hard photon, i.e.\
accepting any number of additional photons. This is the inclusive rate, which
is finite and can be calculated. The result for $R$ agreed within
0.2\% with the analysis requiring exactly one hard photon.
The trigger efficiency was measured to be
$(98.1\pm0.1)\%$ for
$K_{e3}$ decays and $(98.1\pm0.6)\%$ for $K_{e3\gamma}$ decays.
On the basis of 19117 $K_{e3\gamma}$ candidates with an estimated
background of 140 $\pm$ 82 events
and 5.594 million $K_{e3}$ events (including
additional photons) after background
subtraction, and using the calculated acceptances,
the branching ratio was computed from
the relation:
\begin{equation}
R=Br(K^0_{e3\gamma},E_{\gamma}^*>30~ \rm{MeV},\theta_{e\gamma}^*>20^\circ)/
Br(K^0_{e3})=
{N(K_{e3\gamma})Acc(K_{e3})\over N(K_{e3})Acc(K_{e3\gamma})} \cdot C_M
\end{equation}
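Inserting the combined numbers quoted above (18977 background-subtracted $K_{e3\gamma}$ candidates, $5.594\cdot10^6$ $K_{e3}$ events, the two MC acceptances and $C_M$) reproduces the central value; a quick numerical check, ignoring uncertainties:

```python
# Event counts and MC acceptances quoted in the text (combined sample)
n_ke3g = 19117 - 140     # Ke3gamma candidates, background subtracted
n_ke3 = 5.594e6          # Ke3 events
acc_ke3g = 0.0608        # MC acceptance for Ke3gamma
acc_ke3 = 0.1728         # MC acceptance for Ke3
c_m = 0.9995             # multi-photon correction factor C_M

r = (n_ke3g / n_ke3) * (acc_ke3 / acc_ke3g) * c_m
print(f"R = {100 * r:.3f} %")  # R = 0.964 %
```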
The result from
9361 $K_{e3\gamma}$ events and 2.728 million $K_{e3}$ events for
positive magnet
polarity was $R=(0.953\pm0.010)\%$ and from 9616 $K_{e3\gamma}$
events and 2.866
million $K_{e3}$ events for negative polarity, $R=(0.975\pm0.010)\%$,
where the
errors are statistical. We now turn to the systematic uncertainties.
\subsection{Systematic uncertainties}
Our investigation of possible systematic errors showed that the
biggest uncertainty comes from the kaon momentum spectrum. In order to
determine the influence of this factor, we reconstructed the
experimental kaon momentum distribution from
$K \rightarrow \pi^+\pi^-$
and $K \rightarrow \pi^+\pi^-\pi^0$ decays and implemented these spectra
in the MC simulation.
The shape of the spectrum for the three decays is shown in fig.
\ref{specs}.
The systematic error from the momentum
spectrum was estimated by taking the 3 different momentum spectra and
calculating the effect of this variation on the acceptance ratio of $K_{e3}$
and $K_{e3\gamma}$. It resulted in a relative uncertainty of
$(~^{+6}_{-3}) \cdot 10^{-3}$.
The stability of the result upon the various cuts used
in the $K_{e3\gamma}$ selection was also investigated.
The cuts were varied in between values
which rejected no more than 10\% of the events. The biggest
fluctuations in the
branching ratio estimation
were taken as systematic errors, and all the
errors were added in quadrature
with a relative result of $\pm 5 \cdot 10^{-3}$.
Uncertainties in accidental photon events and in other
background contributions are dominated by statistics and are not amongst
the largest of the systematic errors ($(~^{+2}_{-1}) \cdot 10^{-3}$ and
$(~^{+4}_{-3}) \cdot 10^{-3}$, respectively).
The influence of the $K_{e3}$ selection cuts
on the final result was estimated as in the case of the $K_{e3\gamma}$
selection cuts.
The quadratic addition of
all these relative errors from variations of individual selection cuts
yielded an inclusive relative error of $\pm 5 \cdot 10^{-3}$.
The value of the form-factor $\lambda_+$ in the $K_{e3}$ decay was
varied between 0.019 and 0.029.
The largest fluctuation was taken as a relative systematic error of
$\pm 1 \cdot 10^{-3}$.
Our estimate of the systematic errors is summarized in Table \ref{syst}.
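The total in Table~\ref{syst} is the quadratic sum of the individual contributions, taken separately for the positive and negative errors. A quick check, in units of $10^{-3}$:

```python
import math

# Relative systematic uncertainties in units of 1e-3 (rows of Table 1):
# K_L spectrum, Ke3gamma selection, accidentals, background, Ke3 selection, form factor
plus  = [6, 5, 2, 4, 5, 1]
minus = [3, 5, 1, 3, 5, 1]

total_plus  = math.sqrt(sum(x * x for x in plus))
total_minus = math.sqrt(sum(x * x for x in minus))
print(f"+{total_plus:.1f} / -{total_minus:.1f}  (x 1e-3)")  # +10.3 / -8.4
```

The sums come out to about $+10.3$ and $-8.4$, consistent with the quoted total of $^{+11}_{-9}$ after rounding up.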
\begin{figure}[t!]
\begin{center}
\epsfig{file=specs_3f.eps,width=8.0cm,height=8cm}
\end{center}
\caption{Kaon momentum distribution obtained from $K_{e3}$ (line),
$K_{2\pi}$ (open squares) and
$K_{3\pi}$ (circles) decays. Arbitrary units on the $y$-axis}\label{specs}
\end{figure}
\section{Results and conclusion}
\begin{table}[t]
\begin{center}
\mbox{
\begin{tabular}{||l|l||}
\hline
Source &$\Delta R/R$ \\
\hline
$K_L$ spectrum&$~^{+6}_{-3}\cdot 10^{-3}$ \\
$K_{e3\gamma}$ selection & $\pm 5\cdot 10^{-3}$ \\
$\gamma$ accidentals&$~^{+2}_{-1}\cdot 10^{-3}$ \\
Background uncertainties&$~^{+4}_{-3}\cdot 10^{-3}$ \\
$K_{e3}$ selection &$\pm 5\cdot 10^{-3}$ \\
Form-factor uncertainties &$\pm 1\cdot 10^{-3}$ \\
\hline
TOTAL & $~^{+11}_{-~9}\cdot 10^{-3}$\\
\hline
\end{tabular}
}
\end{center}
\caption{Relative systematic uncertainties in the branching ratio}\label{syst}
\end{table}
\begin{figure}[b!]
\begin{center}
\epsfig{file=ke3g_compar_bw.eps,width=8.0cm,height=8cm}
\end{center}
\caption{Theoretical and experimental results for the radiative $K_{e3}$
branching ratio. The two lower entries in the plot are theoretical results.}\label{rescompar}
\end{figure}
The results are based on
$18977$ $K_{e3\gamma}$ and $5.594 \cdot 10^6$ $K_{e3}$ events.
We obtain the following
value for the branching ratio including the systematic error:
\begin{equation}
R=(0.964\pm0.008^{+0.011}_{-0.009})\%= (0.964^{+0.014}_{-0.012})\%
\end{equation}
Figure \ref{rescompar} shows this branching ratio compared to
theoretical
and experimental results. The authors of ref. \cite{Gasser} have undertaken a
serious effort to estimate the theoretical uncertainties in $R$,
while for the earlier theoretical values, this error is not known.
These authors obtain $R = (0.96\pm0.01)\%$.
It appears
that our experimental result agrees well with the theoretical calculations
\cite{FFS_new}, \cite{Doncel}, including the most recent one \cite{Gasser}.
However our result is at variance with a recent experiment with similar
statistical sensitivity \cite{KTeV}. Our measurement, with
a 1.5\% precision, therefore confirms the validity of calculations based
on chiral perturbation theory.
\section{Acknowledgement}
We would like to thank Drs. J\"urg Gasser,
Bastian Kubis and Nello Paver for fruitful
discussions and for communicating to us their result in ref.~\cite{Gasser} prior to
publication. We also thank the technical staff of the participating
institutes and computing centres for their continuing support.
\section{Introduction}
Two measures $\mu$ and $\nu$ defined on the family of Borel subsets of
a topological space $X$ are said to be {\it homeomorphic} or
{\it topologically equivalent} provided there exists a homeomorphism
$h$ of $X$ onto
$X$ such that $\mu$ is the image measure of $\nu$ under $h$:
$\mu = \nu h^{-1}.$ This means $\mu(E) = \nu(h^{-1}(E))$ for each
Borel subset $E$ of $X$.
One may be interested in the structure of
these equivalence classes of measures or in a particular equivalence
class. For example, a probability measure $\mu$ on $[0,1]$ is
topologically equivalent to Lebesgue measure if and only if $\mu$ gives
every point measure 0 and every non-empty open set positive measure.
(The distribution function of $\mu$ is a
homeomorphism on $[0,1]$ witnessing this equivalence.)
This is a special case of a result of
Oxtoby and Ulam \cite{OU}, who characterized those probability
measures $\mu$ on finite dimensional cubes $[0,1]^n$ which are
homeomorphic to Lebesgue measure. For this to be so, $\mu$
must give points measure 0, non-empty open sets positive measure, and
the boundary of the cube measure 0. Later Oxtoby and Prasad \cite{OP} extended
this result to the Hilbert cube. These results have been
extended and applied to various manifolds. The book of Alpern and
Prasad \cite{AP} is an excellent source for these developments. Oxtoby
\cite{O} also
characterized those measures on the space of irrational numbers in $[0,1]$
which
are homeomorphic to Lebesgue measure.
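The distribution-function argument can be checked numerically. As a hypothetical example, take the measure $\mu$ on $[0,1]$ with density $3x^2$: its distribution function $F(x)=x^3$ is a homeomorphism of $[0,1]$, and the image measure of $\mu$ under $F$ assigns each interval its length, i.e.\ it is Lebesgue measure:

```python
def cdf(x):
    """F(x) = x**3: distribution function of the measure with density 3x^2,
    a homeomorphism of [0, 1] onto itself."""
    return x ** 3

def cdf_inv(y):
    return y ** (1.0 / 3.0)

def mu(a, b):
    """mu([a, b]) for the measure with density 3x^2."""
    return cdf(b) - cdf(a)

# Image measure under F: mu(F^{-1}([a, b])) = mu([a^{1/3}, b^{1/3}]) = b - a,
# i.e. F pushes mu forward to Lebesgue measure.
a, b = 0.2, 0.7
assert abs(mu(cdf_inv(a), cdf_inv(b)) - (b - a)) < 1e-12
```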
It is natural to ask
what measures are homeomorphic to Lebesgue measure on ${\mathcal C}=\{0,1\}^N$,
the Cantor space, where by Lebesgue measure we mean Haar measure or
infinite product measure $\mu({1/2})$ resulting from fair coin
tossing. The topology on ${\mathcal C}$ is the standard product topology;
we will use as basic open (actually clopen) sets for this topology
the sets $\langle e \rangle$ for all finite sequences $e$ from $\{0,1\}$,
where $\langle e \rangle$ is the set of infinite sequences in ${\mathcal C}$
which begin with the finite sequence $e$. (These basic clopen
sets are sometimes called cylinders.) We will say that the
{\it length} of a basic clopen set $\langle e \rangle$ is the
length of the finite sequence $e$.
It turns out that the Cantor space is more rigid than
$[0,1]^n$ for measure homeomorphisms -- it is not true
that a measure $\nu$ on ${\mathcal C}$ which gives points measure $0$ and
non-empty open sets positive measure is equivalent to Lebesgue
measure. In fact, even among the product measures the only one which
is equivalent to Lebesgue measure is Lebesgue measure itself. To
describe the situation let us use the following notation. For each
number $r$, $0 \leq r \leq 1$, let $\mu(r)$ be the infinite product
measure determined by coin tossing with probability of success $r$.
Consider the equivalence relation on $[0,1]$, $r \sim_{top} s$ if and
only if $\mu(r)$ is topologically equivalent to $\mu(s)$. (We will
sometimes abuse terminology by saying that $r$ is topologically
equivalent to $s$.)
It turns
out that this equivalence relation is closely related to an
algebraic/combinatorial
relation. To explain this, we make the following definition.
\begin{definition}
\label{binreddef}
Let \ $0 < r,s < 1$. The number $s$ is said to be
binomially reducible to $r$ provided
\begin{equation}\label{brfmla}
s = \sum_{i=0}^n \ a_i \ r^i(1-r)^{n-i},
\end{equation}
where $n$ is a non-negative integer and each $a_i$ is an integer with $0
\leq a_i \leq \binom {n}{i}$.
\end{definition}
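For small $n$ the condition in Definition~\ref{binreddef} can be tested by exhaustive search over the admissible coefficients $a_i$. The following sketch (exponential in $n$ and relying on a floating-point tolerance, so a toy illustration only) checks binomial reducibility up to a given level:

```python
from itertools import product
from math import comb, isclose

def binomially_reducible(s, r, max_n=6, tol=1e-9):
    """Brute-force check: is there an n <= max_n and integers
    0 <= a_i <= C(n, i) with s = sum_i a_i r^i (1-r)^(n-i)?"""
    for n in range(max_n + 1):
        # Cylinder sizes r^i (1-r)^(n-i) at level n
        sizes = [r ** i * (1 - r) ** (n - i) for i in range(n + 1)]
        for coeffs in product(*(range(comb(n, i) + 1) for i in range(n + 1))):
            if isclose(sum(a * c for a, c in zip(coeffs, sizes)), s, abs_tol=tol):
                return True
    return False
```

For instance, $5/9 = r^2 + (1-r)^2$ is binomially reducible to $r=1/3$, whereas $1/2$ is not, since every such sum for $r=1/3$ is a fraction whose denominator is a power of $3$.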
The numbers $r^a(1-r)^b$ for integers $a,b\ge 0$ will be referred
to as {\it cylinder sizes} for $r$; they are the measures of the
basic clopen sets under $\mu(r)$. So the right side of (\ref{brfmla})
is the general form for the measure of a clopen set under $\mu(r)$.
Let us note some basic facts about reducibility to be used later.
(See \cite{Ma} for this and further background information.)
Note that if $s$ is
binomially reducible to $r$, so is $1-s$ (change $a_i$ to $\binom
{n}{i} - a_i$). If
$s_1$ and $s_2$ are reducible to $r$, so is $s_1s_2$.
(If $s_1 =
\sum_{i=0}^n a_ir^i(1-r)^{n-i}$ and $s_2 =
\sum_{j=0}^m b_jr^j(1-r)^{m-j}$, then $s_1s_2 =
\sum_{i=0}^n \sum_{j=0}^m a_ib_jr^{i+j}(1-r)^{n+m-i-j}
=\sum_{k=0}^{n+m}
(\sum_{i+j=k} a_ib_j)r^{k}(1-r)^{n+m-k}$ and $ \sum_{i+j=k} a_ib_j
\leq \sum_{i+j=k} \binom {n}{i}\binom{m}{j} = \binom {n+m}{k}$.)
Hence, if $s$ is binomially reducible to $r$, so is $s^a(1-s)^b$ for
any $a,b \geq 0$. Also, it is known that $\mu(s)$ is continuously reducible
to or is a continuous
image of the measure $\mu(r)$ (i.e., $\mu(s) = \mu(r) \circ g^{-1}$ for
some continuous $g: {\mathcal C} \to {\mathcal C}$) if and only if $s$ is binomially
reducible to $r$ \cite{Ma}. Thus, we have another natural equivalence relation
on $[0,1]$.
\begin{definition}
\label{bineqdef}
Let \ $0 < r,s < 1$. Then $r$ is binomially
equivalent to $s$, denoted $r \approx s$, provided $r$ is binomially
reducible to $s$ and $s$ is binomially reducible to $r$, or,
equivalently, each of the measures $\mu(r)$ and $\mu(s)$ is a
continuous image of the other.
\end{definition}
Among several still unsolved problems concerning these relations
is the following.
\begin{problem}[{\cite[Problem 1065]{Ma}}]
Is it true that the product measures $\mu(r)$ and $\mu(s)$ are homeomorphic if
and only if each is a continuous image of the other, or, equivalently,
each of the numbers $r$ and $s$ is binomially reducible to the other?
\end{problem}
(Note: After this paper was first circulated, the above problem
was solved in the negative by Austin~\cite{Au}.)
One can think of this problem in the following way. Suppose we have
$\mu(s) = \mu(r)\circ g^{-1}$ and $\mu(r) = \mu(s) \circ h^{-1}$, where
the maps $g$
and $h$ are continuous. Is there some
sort of Cantor-Bernstein or back-and-forth argument for the
Cantor set which, given $g$ and $h$,
produces not just a one-to-one onto map, but a
homeomorphism taking $\mu(r)$ to $\mu(s)$?
Many cases of this problem have already been settled.
Let us note that, for a given $n$,
the functions $r^i(1-r)^{n-i}$ for $0 \leq i \leq n$
are linearly independent polynomials, since their trailing terms
(i.e., nonzero terms of least degree) have distinct degrees. Therefore,
$\sum_{i=0}^na_ir^i(1-r)^{n-i}$ as in Definition~\ref{binreddef}
is a polynomial of degree $> 1$ unless
it is $0$ (when $a_i = 0$ for all $i$), $1$ (when $a_i = \binom {n}{i}$),
$1-r$ ($a_i =
\binom {n-1}{i}$),
or $r$ ($a_i = \binom {n-1} {i-1}$). Therefore, if $r$ and $s$ are binomially
reducible to each other, $r\neq s$, and $r \neq 1-s$, then $s = P(r)$
and $r = Q(s)$ where $\deg(P), \deg(Q)>1$, so $r = Q\circ P(r)$ and
$\deg(Q\circ P)
> 1$. Thus, $r$ is algebraic. Also, in this case, $r$ and $s$ have
the same algebraic degree. Moreover, $r$ is an algebraic integer
if and only if $s$ is. Huang~\cite{H} showed that if $r$ is an algebraic
integer of
degree 2, and $r \approx s$, then $r = s$ or $r = 1-s$.
In fact, Navarro-Bermudez~\cite{NB} showed that if $r$ is rational or
transcendental and $r\approx s$, then $r =s$ or $r = 1-s$.
We gather these facts in the following theorem.
\begin{theorem}[various authors] For $r$ rational, transcendental,
or an algebraic integer of degree
$2$, the $\sim_{top}$ equivalence class containing $r$ and the $\approx$
equivalence class containing $r$ are both equal to $\{r,1-r\}.$
\end{theorem}
On the other hand, it is known that for every $n \geq 3$, there are
algebraic integers $r$ of degree $n$ such that the $\approx$
equivalence class containing $r$ has at least 4 elements \cite
{H}. (In fact, Pinch \cite{P} showed that, if $n = 2^{k+1}$, then there is an
algebraic integer $r$ of degree $n$ with at least $2k$ distinct numbers
binomially equivalent to it.) The simplest
of these is the solution of
$$
r^3 + r^2 -1 =0
$$
lying in the open interval $(0,1)$. For this value of $r$, it turns out
that $s = r^2 \approx r$, and Navarro-Bermudez and Oxtoby \cite {ONB}
proved that
$r\sim_{top} s$ via a simple homeomorphism. Until now this has been the
only nontrivial example of
topologically equivalent product measures.
The purpose of this paper is to present a new condition under which
binomially equivalent numbers are topologically equivalent. First, we
define a condition
called ``refinable'' on numbers in $[0,1]$. Next, we show that if $r$
and $s$ are
binomially equivalent and both $r$ and $s$ are refinable, then the
measures $\mu(r)$ and $\mu(s)$ are homeomorphic. Finally, we apply our
condition to the root $r$ of
$$
r^4 +r -1 = 0,
$$
with $r$ between 0 and 1, and to $s = r^2$. We show both $r$ and $s$
are refinable
and $r$ and $s$ are binomially equivalent. Thus, $\mu(r)$ and $\mu(s)$
are topologically equivalent via a very non-trivial homeomorphism.
Any cylinder size $r^a(1-r)^b$ can be split into two cylinder sizes
$r^{a+1}(1-r)^b$ and $r^a(1-r)^{b+1}$. Either or both of these
can be split in the same way, and so on. After finitely many
steps, one has partitioned the original cylinder size into finitely
many cylinder sizes. We will call a partition obtained in this
way a {\it tree partition} of $r^a(1-r)^b$ (named after the
representation of the Cantor space as the set of paths through a
complete infinite binary tree). A tree partition corresponds
to a partition of a basic clopen set in ${\mathcal C}$ into basic clopen subsets.
Note that any tree partition
can be split further by steps as above to yield a new tree
partition in which
all the final cylinder sizes have
the same length, say $a+b+n$; in this final partition the
cylinder size $r^{a+i}(1-r)^{b+n-i}$ will occur $\binom ni$ times.
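This uniform-length refinement is easy to simulate. As an illustration only, the sketch below represents a cylinder size $r^a(1-r)^b$ by its exponent pair and splits every piece $n$ times, recovering the stated multiplicities $\binom{n}{i}$:

```python
from math import comb
from collections import Counter

def split_to_length(a, b, n):
    """Tree-split r^a (1-r)^b until every piece has length a + b + n."""
    pieces = Counter({(a, b): 1})
    for _ in range(n):
        nxt = Counter()
        for (p, q), mult in pieces.items():
            nxt[(p + 1, q)] += mult  # the r-half of each split
            nxt[(p, q + 1)] += mult  # the (1-r)-half
        pieces = nxt
    return pieces

pieces = split_to_length(1, 2, 3)
# r^{1+i}(1-r)^{2+3-i} occurs C(3, i) times, as stated above.
assert all(pieces[(1 + i, 5 - i)] == comb(3, i) for i in range(4))
```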
On the other hand,
one may be able to partition a cylinder size for $r$
into a finite collection of smaller cylinder sizes (whose sum is the
original cylinder size; repetitions are allowed)
in a way which is not a tree partition. For instance, one can partition
the cylinder size $1$ into $\{r^3,(1-r)^3,r(1-r),r(1-r),r(1-r)\}$
(or, written more briefly, $\{r^3,(1-r)^3,3r(1-r)\}$; we will treat
a positive integer coefficient of a cylinder size as a multiplicity).
For specific values of $r$, there may be many more such partitions.
Recall the definition of refinement:
given partitions $P$ and $P'$ of the same set, we say that $P'$ is
a {\it refinement} of $P$ (or $P$ has been {\it refined} to $P'$)
if every member of $P$ is a union of members of $P'$. The corresponding
definition for partitions of a number (e.g., a cylinder size) is:
$P'$ is a refinement of $P$ if one can write $P'$ as the union
(respecting multiplicities) of collections $S_t$ for $t \in P$ such
that, for each $t$ in $P$, the sum of $S_t$ is $t$.
\begin{definition}
A number $r$ is refinable provided every partition of a cylinder size
for $r$ into smaller cylinder sizes
can be refined to a tree partition.\end{definition}
An equivalent definition in symbols: $r$ is refinable iff, for every
true equation of the form
$$
r^a(1-r)^b = r^{c_1}(1-r)^{d_1} + r^{c_2}(1-r)^{d_2}+ \ldots
+r^{c_m}(1-r)^{d_m},
$$
there exist $n \geq 0$ and nonnegative integers $p_{ij}$ for $0 \leq
i \leq n$, $1 \leq j \leq m$ such that
$$
\binom {n}{i} = p_{i1}+p_{i2}+\ldots+p_{im}
$$
for $0\leq i \leq n$ and
$$
r^{c_j}(1-r)^{d_j} = \sum_{i=0}^n p_{ij}r^{a+i}(1-r)^{b+n-i}
$$
for $1 \leq j \leq m.$
We note that in this definition $a$, $b$, $c_j$, and $d_j$ are assumed to be
nonnegative integers, but the definition would be equivalent if we
allowed them to be arbitrary integers.
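These symbolic conditions can be verified mechanically for a proposed witness $(p_{ij})$. As an illustration (a Python sketch with 0-indexed columns, not part of the formal development), the partition $\{r^3,(1-r)^3,3r(1-r)\}$ of the cylinder size $1$ mentioned earlier is refined by the depth-3 tree partition, each copy of $r(1-r)$ splitting as $r^2(1-r)+r(1-r)^2$:

```python
from math import comb, isclose

def check_witness(r, a, b, parts, n, p):
    """Check the refinability conditions: row i of p sums to C(n, i), and
    column j reconstructs r^{c_j}(1-r)^{d_j} from the depth-(a+b+n)
    tree pieces r^{a+i}(1-r)^{b+n-i}."""
    if any(sum(p[i]) != comb(n, i) for i in range(n + 1)):
        return False
    return all(
        isclose(r**c * (1 - r)**d,
                sum(p[i][j] * r**(a + i) * (1 - r)**(b + n - i)
                    for i in range(n + 1)))
        for j, (c, d) in enumerate(parts))

# Partition of cylinder size 1 into {r^3, (1-r)^3, 3 r(1-r)}:
parts = [(3, 0), (0, 3), (1, 1), (1, 1), (1, 1)]
p = [[0, 1, 0, 0, 0],   # (1-r)^3 goes to the second part
     [0, 0, 1, 1, 1],   # one r(1-r)^2 to each copy of r(1-r)
     [0, 0, 1, 1, 1],   # one r^2(1-r) to each copy of r(1-r)
     [1, 0, 0, 0, 0]]   # r^3 goes to the first part
assert check_witness(0.3, 0, 0, parts, n=3, p=p)
```

Since this particular partition is an identity in $r$, the check succeeds for every $r$ in $(0,1)$.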
We briefly compare this
notion to that of a ``good'' measure as introduced by Akin \cite{Ak}.
A probability measure $\mu$ on the Cantor space is good if,
whenever $U,V$ are clopen sets with $\mu(U) < \mu(V)$, there
exists a clopen subset $W$ of $V$ such that $\mu(W) = \mu(U)$.
We state a few facts without proof here.
If a product measure $\mu(r)$ is good, then $r$ is refinable.
If $r$ is transcendental, then $r$ is refinable, but $\mu(r)$ is
not good. If $r$ is rational and $r\neq 1/2$, then $r$ is not
refinable and hence $\mu(r)$ is not good.
Refinability is useful because of the following result.
\begin{theorem}
\label{newthm}
If $0 < r,s < 1$, $r$ and $s$ are binomially equivalent, and each of $r$
and $s$ is refinable,
then the measures $\mu(r)$ and $\mu(s)$ are homeomorphic.
\end{theorem}
\begin{proof} We construct partitions $P_n$ and $Q_n$ of ${\mathcal C}$ into
clopen sets for $n = 0,1,2,\ldots$ and bijections $\pi_n:P_n\to
Q_n$ satisfying the following properties:
\begin{enumerate}
\item $P_{n+1}$ is a
refinement of $P_n$ and $Q_{n+1}$ is a refinement of $Q_n$,
\item each member of $P_{2n-1}$ and each member of $Q_{2n}$ is a basic
clopen set of length $\geq n$,
\item for any $X \in P_n$ we have
$\mu(s)(\pi_n(X)) = \mu(r)(X)$, and
\item if $X\in P_{n+1}$ and $X
\subseteq X' \in P_n$, then $\pi_{n+1}(X) \subseteq \pi_n(X')$.
\end{enumerate}
Given the above sequence, define $f:{\mathcal C}\to {\mathcal C}$ by: for each
$\alpha \in {\mathcal C}$, let $X_n$ be the unique member of $P_n$ containing
$\alpha$ and let $f(\alpha)$ be the unique element of $\bigcap_n
\pi_n(X_n).$ It is straightforward to verify that $f$ is a well-defined
homeomorphism of ${\mathcal C}$ ($f^{-1}$ is defined by an analogous method from
$Q_n$ to $P_n$), and $f(X) = \pi_n(X)$ for all $X\in P_n,$ so that
$\mu(s)(f(X))=\mu(r)(X)$ for $X\in \bigcup_nP_n$. Since every clopen set
$A$ is a finite disjoint union of sets each in $\bigcup_n P_n$, $f$ maps
$\mu(r)$ to $\mu(s)$.
We build $P_n$, $Q_n$, and $\pi_n$ by a back-and-forth recursive
construction. Let
$P_0 = Q_0 = \{{\mathcal C}\}$ with $\pi_0({\mathcal C}) = {\mathcal C}$. Given
$P_{2n},Q_{2n},\pi_{2n},$ let $P_{2n+1}$ be a refinement of $P_{2n}$
into basic clopen sets of length $\geq n+1$. Fix $Y\in Q_{2n}$, a basic
clopen set, say of $\mu(s)$-measure $s^a(1-s)^b$. Now,
$\pi^{-1}_{2n}(Y)\in P_{2n}$ is a union of basic clopen sets
$X_1,...,X_k \in P_{2n+1}$, each having $\mu(r)$-measure $r^p(1-r)^q$
for some integers $p,q$, and these measures add up to $s^a(1-s)^b$.
Since $r$ is binomially reducible to $s$, so is each $r^p(1-r)^q$. Thus,
each $\mu(r)(X_j)$ can be expressed as a finite sum of numbers
$s^c(1-s)^d$. Putting these together for all such $X_j$'s, we get a
list of numbers $s^{c_1}(1-s)^{d_1}, s^{c_2}(1-s)^{d_2},\ldots,
s^{c_m}(1-s)^{d_m}$ with sum $s^a(1-s)^b$. Since $s$ is refinable, $Y$
can be partitioned into clopen sets $\hat{Y}_1,\ldots,\hat{Y}_m$ with
$\mu(s)(\hat{Y}_i) =s^{c_i}(1-s)^{d_i}$ for each $i$. But the list above
was obtained by
joining the lists for the individual $X_j$'s together; hence, we can
combine the $\hat{Y}_i$'s to get clopen sets $Y_1, Y_2,\ldots,Y_k$, still
forming a
partition of $Y$, such that $\mu(s)(Y_j) = \mu(r)(X_j)$, for each $j$.
Put these sets $Y_j$ into $Q_{2n+1}$, letting $\pi_{2n+1}(X_j) = Y_j$.
Once this is done for all $Y\in Q_{2n}$, we will have the desired
partition $Q_{2n+1}$ and map $\pi_{2n+1}$.
We have finished refining the partition on the $P$ side; it is now
the partition on the $Q$ side that needs to be refined next.
So let $Q_{2n+2}$ be a refinement of $Q_{2n+1}$ into basic clopen sets
of length $\geq n+1$, and apply the above procedure with $r$ and
$s$ interchanged to get $P_{2n+2}$ and $\pi_{2n+2}$ (the map from
$Q_{2n+2}$ to $P_{2n+2}$ will be $\pi_{2n+2}^{-1}$). This will
complete the back-and-forth recursive step.
\end{proof}
To prove that a number $r$ is refinable, it suffices to show the following.
Given any two finite multisets (sets whose elements can have
multiplicity greater than $1$) $A$ and $B$ of cylinder sizes for $r$ such
that $\sum A = \sum B$, one can transform $A$ and $B$ to a common
multiset $C$, where for $B$ the allowed transform steps are arbitrary
splits (meaning replace a cylinder size $x$ with any collection of
cylinder sizes that add to $x$), while for $A$ the only allowed steps
are tree splits (meaning replace $x$ with $xr$ and $x(1-r)$). If
$A$ is a singleton, $A = \{r^a(1-r)^b\}$, and $B$ and $C$ are as just
described, then $C$ will be a tree partition of $r^a(1-r)^b$ which is
a refinement of the partition $B$.
[Note that we only need the case where $A$ is a singleton, but
proofs that one can transform $A$ and $B$ to $C$ as above will usually
work even when $A$ is an arbitrary multiset.]
Equivalently, one can
transform $A$ and $B$ to a common $C'$ where one gets from $B$ to $C'$
by arbitrary splits and one gets from $A$ to $C'$ by a sequence of tree
splits followed by a sequence of merges (meaning replace a subcollection
of the current multiset by its sum, which we may or may not require to
be a cylinder size). This is because, if $C$ is the multiset obtained from
$A$ by the tree splits alone, then $C$ is obtained from $C'$ by
arbitrary splits
(a split is the inverse of a merge) and hence from $B$ by arbitrary splits.
In fact, it will suffice to transform $A$ to $C'$ by any sequence of
merges and tree splits in any order, because a merge followed by a
tree split is equivalent to one or more tree splits followed by a merge
(if $\sum_ix_i = x$, then $\sum_ix_ir = xr$ and
$\sum_ix_i(1-r) = x(1-r)$). Define a ``tree move'' to be a merge or a
tree split.
This is the method we will use in the refinability proofs to follow:
given $A$ and $B$, transform $A$ to $A'$ by tree moves and
$B$ to $B'$ by splits, and show that $A' = B'$ (this is the common
multiset $C'$).
We will describe such transformations as built up out of simple steps.
For instance, suppose we have cylinder sizes $a_1$, $a_2$, $b_1$, $b_2$,
and $b_3$, and we demonstrate how to transform $\{a_1,a_2\}$ into
$\{b_1,b_2,b_3\}$ using, say, tree moves. (We may write this more
briefly as ``one can get from $a_1,a_2$ to $b_1,b_2,b_3$ by tree
moves.'') Then, for any multiset $A$, we can also transform
$A \cup \{a_1,a_2\}$ into $A \cup \{b_1,b_2,b_3\}$ using tree moves.
(Here $\cup$ is multiset union, where multiplicities are added.)
Such a transformation can then be used as part of further transformations.
Also, if one can get from $a_1,a_2$ to $b_1,b_2,b_3$ by tree
moves, then for any $c,d$ one can get from $r^c(1-r)^da_1,r^c(1-r)^da_2$
to $r^c(1-r)^db_1,r^c(1-r)^db_2,r^c(1-r)^db_3$ by tree moves (just multiply
every cylinder size involved by $r^c(1-r)^d$). All of our
transformations of multisets will be sum-preserving.
Our next theorem shows that there are non-trivial examples of refinable
numbers.
\begin{theorem}\label{selmerref}
If $r$ is the positive root of $x^n +x-1= 0$, where $n >1$ and $n
\not\equiv 5 \pmod 6$, then $r$ is refinable.
\end{theorem}
\begin{proof}
By a theorem of Selmer \cite{S}, the trinomial $x^n+x-1$ is
irreducible when $n
\not\equiv 5 \pmod 6$. Hence, $r$ is algebraic of degree $n$.
Let $A$ and $B$ be multisets of cylinder sizes for $r$ with the same
sum. Since $1-r = r^n$, we have
$$
r^a(1-r)^{b+1} = r^{a+n}(1-r)^b.$$
The replacement of $r^a(1-r)^{b+1}$ by $r^{a+n}(1-r)^b$ can be
thought of as both a trivial merge and a trivial split. Therefore, it
can be applied repeatedly to both $A$ and $B$ to produce new multisets
$A''$ and $B''$ containing only powers $r^i$, $i \geq 0$.
Next note that the replacement in a multiset
\begin{equation}\label{refine1}
r^a \rightarrow r^{a+1}, r^{a+n}
\end{equation}
is a split which is obtainable by tree moves (split $r^a$ to $r^{a+1}$
and $r^a(1-r)$ and merge $r^a(1-r)$ to $r^{a+n}$), so it can be
applied on both sides. Let $k$ be the largest exponent such that
$r^k$ occurs in $A''$ or $B''$. By repeatedly applying the
replacement (\ref{refine1})
to each $r^a$ with $a \leq k-n$, we can get from $A''$ and
$B''$ to $A'$ and $B'$ consisting entirely of powers $r^a$ with
$k-n+1 \leq a \leq k$. These $n$ powers of $r$ are linearly
independent over the rationals because $1,r,\ldots,r^{n-1}$ are
(since $r$ is algebraic of degree $n$). So
the only way for $A'$ and $B'$ to have the same sum is to have
$A' = B'$. This completes the proof.
\end{proof}
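The reduction in this proof is entirely effective and can be prototyped. The sketch below is illustrative only: it represents cylinder sizes as exponent pairs $(a,b)$ for the Selmer root of $x^n+x-1$ and normalizes a multiset exactly as in the proof; the window bound $k$ must be chosen large enough to cover both multisets being compared.

```python
from collections import Counter

def normal_form(cyls, n, k):
    """Normalize a multiset of cylinder exponent pairs (a, b) for the
    Selmer root r of x^n + x - 1: use 1 - r = r^n to get pure powers
    r^a, then split r^a -> r^{a+1} + r^{a+n} until every exponent lies
    in the window [k - n + 1, k]."""
    powers = Counter()
    for (a, b), mult in Counter(cyls).items():
        powers[a + b * n] += mult      # r^a (1-r)^b = r^{a+bn}
    while min(powers) <= k - n:
        a = min(powers)
        m = powers.pop(a)
        powers[a + 1] += m             # split r^a into r^{a+1} and r^a(1-r),
        powers[a + n] += m             # then rewrite r^a(1-r) as r^{a+n}
    return powers

n, k = 4, 5                            # r^4 + r - 1 = 0, so 1 - r = r^4
A = [(0, 0)]                           # the single cylinder size 1
B = [(2, 0), (1, 1), (0, 1)]           # r^2 + r(1-r) + (1-r) = 1
assert normal_form(A, n, k) == normal_form(B, n, k)
```

Since the surviving powers $r^{k-n+1},\ldots,r^k$ are linearly independent over the rationals, equality of normal forms is exactly equality of sums, as in the proof.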
If $r$ is the positive root of $x^n+x-1 = 0$, and $s = r^d$ where
$d$ is a divisor of $n$, then $r \approx s$,
because $r = 1 - s^{n/d}$ (recall that, if $t$ is binomially reducible
to $r$, then so is $1-t$). For most $n$, the preceding theorem shows
that $r$ is refinable. Hence, to show that $r \sim_{top} s$,
it will suffice to show that $s$ is refinable.
In our next theorem we prove this for the special case when $n = 4$.
\begin{theorem}
If $r,s \in (0,1)$, $s = r^2$, and $r = 1-s^2$, then $r$ and $s$ are
refinable and the measures $\mu(r)$ and $\mu(s)$ are topologically equivalent.
\end{theorem}
\begin{proof}
We have $r = 1-r^4$, so $r \approx s$ (as noted above),
$r$ is refinable (by Theorem \ref{selmerref}), and
$r$ and $s$ are algebraic of degree $4$ (by Selmer's theorem
mentioned previously and the fact that binomial equivalence
preserves degree). Also, we have
\begin{equation}\label{eqaprime}
s = (1-s^2)^2 = (1-s)^2(1+2s+s^2).
\end{equation}
Now, from the cylinder size $1$, one can get to
\begin{equation}\label{eqa}
s^2,(1-s)^2,s^2(1-s),2s(1-s)^2,s^3(1-s),s^2(1-s)^2
\end{equation}
by tree splits (this corresponds to partitioning ${\mathcal C}$ into
$\langle 1,1 \rangle$, $\langle 0,0 \rangle$,
$\langle 1,0,1 \rangle$, $\langle 1,0,0 \rangle$,
$\langle 0,1,0 \rangle$, $\langle 0,1,1,1 \rangle$,
and $\langle 0,1,1,0 \rangle$), and then to
\begin{equation}\label{eqb}
s,s^2,s^2(1-s),s^3(1-s)
\end{equation}
by a merge of the second, fourth, and sixth terms in (\ref{eqa}),
using (\ref{eqaprime}); of course,
one can also get from $1$ to
(\ref{eqb})
by an arbitrary split. As noted earlier, this implies that
one can get from $s^a(1-s)^b$ to
$$
s^{a+1}(1-s)^b,s^{a+2}(1-s)^b,s^{a+2}(1-s)^{b+1},s^{a+3}(1-s)^{b+1}
$$
by an arbitrary split or by tree moves.
Next, one can get from $s$ to
\begin{eqnarray}\label{eqcprime}
s^3,s(1-s)^2,2s^2(1-s)^2,2s^4(1-s),s^3(1-s)^2,
\qquad \qquad \nonumber \\ \hfill
s^3(1-s)^3,s^5(1-s)^2,s^4(1-s)^3
\end{eqnarray}
by tree splits --- multiply the steps from $1$ to (\ref{eqa}) by $s$,
and then use three successive tree splits to replace the
cylinder size $s^3(1-s)$ with
$$s^4(1-s),s^3(1-s)^3,s^5(1-s)^2,s^4(1-s)^3.$$
One can then get from (\ref{eqcprime}) to
\begin{align}\label{eqbprime}
s^3,2s(1-s)^2,s^2(1-s)^2,2s^4(1-s),s^5(1-s)^2
\end{align}
by merging the four terms of $s(1-s)^2$ times (\ref{eqb}) into the single cylinder size $s(1-s)^2$. But we have
$$
s = r^2 = (1-s^2)^2 = (1-s)^2 + 2s(1-s)^2 + s^2(1-s)^2,
$$
and $s$ is also equal to the sum of the terms in (\ref{eqbprime})
since all of our moves are sum-preserving,
so we must have
$$(1-s)^2 = s^3 + 2s^4(1-s) + s^5(1-s)^2
$$
and hence we can get from (\ref{eqbprime}) to
\begin{align}\label{eqc}
(1-s)^2,2s(1-s)^2,s^2(1-s)^2,
\end{align}
by a merge.
So, we can get from
$s$ to (\ref{eqc}) by tree moves as well as by a split.
Next, we can get from $1$ to $s,(1-s)$ by a tree split and then to
\begin{equation}\label{eqd}
s,s(1-s),s^2(1-s),s^2(1-s)^2,s^3(1-s)^2
\end{equation}
by a (\ref{eqb}) move. (Here ``by a (\ref{eqb}) move'' is short for
``by replacing a cylinder size $t$ with $t$ times (\ref{eqb}), which
is a split and can also be accomplished by tree moves.'' In this case
$t = 1-s$.)
Finally, we can get from $1$ to $s,(1-s)$ by a tree
split, then to
$$
(1-s),(1-s)^2,2s(1-s)^2,s^2(1-s)^2
$$
by a (\ref{eqc}) move, then to
\begin{equation}\label{eqe}
(1-s),3s(1-s)^2,2s^2(1-s)^2,s^2(1-s)^3,s^3(1-s)^3
\end{equation}
by a (\ref{eqb}) move.
Now let $A$ and $B$ be multisets of cylinder sizes with the same sum.
First use (\ref{eqc}) moves repeatedly on both multisets
to get rid of all cylinder sizes $s^a(1-s)^b$ with
$a > b+1$. Then use (\ref{eqb}) moves to get rid of all $s^a(1-s)^b$ with
$a<b$. So only numbers $s^a(1-s)^b$ with $a =b$ or $a = b+1$ occur in
the new multisets. Let $k$ be the largest exponent $a$ which occurs.
If either multiset contains a cylinder size $s^b(1-s)^b$ such that
$b < k-1$, then we can use a (\ref{eqd}) move to replace it
with cylinder sizes with larger exponents; similarly, we can use
a (\ref{eqe}) move to replace a cylinder size $s^{b+1}(1-s)^b$
such that $b < k-2$. (Both of these steps yield new cylinder
sizes $s^{a'}(1-s)^{b'}$ with $a'=b'$ or $a'=b'+1$.)
By
performing these steps as many times as possible,
one can change all cylinder sizes to one
of the
following five cylinder sizes:
\begin{eqnarray*}
s^{k-1}(1-s)^{k-2},\, s^{k-1}(1-s)^{k-1},\, s^k(1-s)^{k-1},
\qquad \qquad \\ \hfill
s^k(1-s)^k, \text{ or }s^{k+1}(1-s)^k.
\end{eqnarray*}
Then one can use tree splits on the first two of these five and
(\ref{eqc}) moves on the resulting occurrences of
$s^k(1-s)^{k-2}$ to reduce everything
to the cylinder sizes
$$
s^k(1-s)^{k-1},\,s^{k-1}(1-s)^k,\, s^k(1-s)^k, \text{ or }s^{k+1}(1-s)^k.
$$
Let $A'$ and $B'$ be the final multisets using these cylinder sizes
only. These four sizes are linearly independent over the
rationals. (One can verify this directly by noting that from $s$, $(1-s)$,
$s(1-s)$, $s^2(1-s)$ one can get $1$, $s$, $s^2$, $s^3$ as linear
combinations, or
one can just notice that our argument shows that the multisets
$\{1\}$, $\{s\}$, $\{s^2\}$, $\{s^3\}$ can all be reduced to these four forms
using the same $k$. We are using here that $s$ has algebraic degree $4$.)
Therefore, since $A'$ and $B'$ are both multisets of these numbers and
$\sum A' = \sum A =\sum B = \sum B'$, we must have $A' = B'$. So $s$
is refinable, as desired. Finally, using Theorem \ref{newthm} we see that
$\mu(r)$
and $\mu(s)$ are homeomorphic.
\end{proof}
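The algebraic identities used in this proof can be double-checked numerically for the actual root (a sanity check only, computing the root by bisection to machine precision):

```python
from math import isclose

def bisect_root(f, lo, hi, tol=1e-14):
    """Root of f on [lo, hi] by bisection, assuming f(lo) < 0 < f(hi)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

r = bisect_root(lambda x: x**4 + x - 1, 0.0, 1.0)
s = r * r
assert isclose(r, 1 - s * s, abs_tol=1e-10)                     # r = 1 - s^2
assert isclose(s, (1 - s)**2 * (1 + 2*s + s*s), abs_tol=1e-10)  # identity (3.1)-type
assert isclose((1 - s)**2,
               s**3 + 2 * s**4 * (1 - s) + s**5 * (1 - s)**2,
               abs_tol=1e-10)                                   # the merge identity
```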
\begin{corollary}
If $0 < r < 1$ and $r = 1-r^4$, then there are at least $4$ product
measures topologically equivalent to $\mu(r)$.
\end{corollary}
Let us mention some other results which can be proven using the
techniques of this paper.
It is not hard to show that any transcendental number is
refinable, although this is not useful for proving topological
equivalence.
One can show that, if $r$ is the root of $r^3 +r^2 -1 = 0$ in $(0,1)$
and $s = r^2$, then $r$ and $s$ are both refinable. This gives
another proof of the theorem of Navarro-Bermudez and Oxtoby. However,
it is simpler to produce the homeomorphism as they did. We have
verified that, if $r$ is the positive root of $r^6+r=1$,
then $r$, $r^2$, and $r^3$ are all refinable.
Thus, there are at least
six numbers in $(0,1)$ topologically equivalent to $r$.
We have also verified that the positive numbers $r$ and $s$
given by $s = r^4$ and $r = 1 - s^2$ are refinable.
A number of problems remain open. One of these is the problem
stated after Definition~\ref{bineqdef}, which can now
be restated as follows.
\begin{problem}[{\cite[Problem 1065]{Ma}}]
Are $\sim_{top}$ and $\approx$ the same equivalence
relation on $[0,1]$?
\end{problem}
As noted earlier, the above problem
has been solved in the negative by Austin~\cite{Au}.
In connection with this problem, we note that there are relatively
simple examples of two
probability measures on the Cantor space
each of which is a continuous image of the other but there is no
homeomorphism taking one to the other. One such
example is $\mu(1/2)$ and $\nu$, where $\nu$ is
obtained from $\mu(1/2)$ by multiplying the measure of
any subset of the left half of the Cantor space by $3/2$
and multiplying the measure of any subset of the right half
of the Cantor space by $1/2$. (Equivalently, $\nu$ can be
described as the disjoint sum of $(3/4) \mu(1/2)$ and
$(1/4) \mu(1/2)$.) The reason is that the half of measure $3/4$ has no
clopen subset of measure $1/2^n$, since the measures of its clopen subsets all have numerators
divisible by $3$, whereas every clopen set with
positive $\mu(1/2)$-measure has a clopen subset of measure $1/2^n$
for all sufficiently large $n$.
\begin{problem}[{\cite[Problem 1067]{Ma}}]
Is there an infinite $\sim_{top}$ equivalence class?
Is there an infinite $\approx$ equivalence class?
\end{problem}
\begin{problem}
Is every number in a non-trivial $\sim_{top}$ equivalence
class refinable? (Are the corresponding measures good in Akin's sense?)
\end{problem}
The corresponding question about nontrivial $\approx$ equivalence classes
has a negative answer, by Theorem~\ref{newthm} combined with Austin's result.
Regarding this problem, we note that the number $1/3$ is not
refinable. This is because the partition $1/3 + 1/3 + 1/3$ of $1$
cannot be refined to a tree partition of $1$ since in any such tree
partition only one number will have odd numerator. In fact, one can
show that no rational $r$ in $(0,1)$ other than $1/2$ is refinable.
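The parity obstruction for $r = 1/3$ can also be checked experimentally: in every tree partition of $1$, the pieces are $(1/3)^a(2/3)^b = 2^b/3^{a+b}$, and exactly one of them (the one with $b = 0$) has odd numerator. A small randomized Python check, for illustration only:

```python
import random
from fractions import Fraction

def random_tree_partition(max_splits=30):
    """A random tree partition of cylinder size 1, as (a, b) exponent pairs."""
    leaves = [(0, 0)]
    for _ in range(random.randrange(1, max_splits)):
        a, b = leaves.pop(random.randrange(len(leaves)))
        leaves += [(a + 1, b), (a, b + 1)]
    return leaves

def odd_numerators(leaves):
    """Pieces (1/3)^a (2/3)^b = 2^b / 3^(a+b); the numerator is odd iff b = 0."""
    return sum(1 for a, b in leaves if b == 0)

random.seed(0)
for _ in range(200):
    part = random_tree_partition()
    assert sum(Fraction(1, 3)**a * Fraction(2, 3)**b for a, b in part) == 1
    assert odd_numerators(part) == 1
```

Since $1/3 + 1/3 + 1/3$ would require at least three pieces of odd numerator, no tree refinement exists.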
A particular case of interest in the preceding problem is the
remaining ``Selmer-like'' reals, and the numbers binomially
equivalent to Selmer or Selmer-like reals, where the Selmer reals
and the Selmer-like reals are the positive numbers $r$ satisfying
an equation $x^n+x-1=0$ where $n \not\equiv 5 \pmod 6$
or $n \equiv 5 \pmod 6$, respectively.
We already saw that the Selmer reals are refinable;
it turns out that one can show that the Selmer-like
reals are refinable as well, and the corresponding measures $\mu(r)$
are good in both cases. But it remains open whether the numbers binomially
equivalent to them are refinable (and hence topologically equivalent
to them).
{\bf Acknowledgement} The authors would like to thank Mike Keane for
his useful comments and information.
\section{Introduction}
One of the interesting developments made possible by {\it Chandra} observations is the detection of abundant
extragalactic point sources. The large observed samples now available promise to improve our
understanding of their formation and interaction with their host stellar environments. In galaxies
with young stellar populations, it is generally thought that bright
X-ray point sources are young X-ray binaries (XRBs) \citep{KIL02, FAB01, FW05}, associated
with the large amount of ongoing star
formation. This interpretation is mainly based on the measured X-ray luminosity and the
spectral and temporal variability characteristics of the point sources. For a detailed
review of the formation and evolution of XRBs see \citet{LVDK05}, and for a recent
discussion of the temporal properties see \citet{SES03}. Optical and
infrared observations, most prominently with the {\it Hubble} Space
Telescope, reveal massive, young clusters, often referred to as super star clusters. They range in
mass from $\sim$10$^4$\,M$_\sun$ to $\sim$10$^7$\,M$_\sun$ \citep{SG01, HAR01}, and in age from
just $\sim$1\,Myr to $\sim$50\,Myr, with the majority at $\sim10-20$\,Myr (mostly due to
photometric selection). These clusters are thought to be young analogs of globular clusters and
may be responsible for most of the massive stars in the field of their host galaxies \citep{TRE01}.
Thus, one may expect a concentration of XRBs in or near these clusters.
Recently, \citet{KAA04} (hereafter K04) have studied three starburst galaxies (M82, NGC1569, and
NGC5253) each containing a significant number of young star clusters and point X-ray sources.
Indeed, they do find a statistically significant relationship between the two types of objects: XRBs
are preferentially found within distances of $\sim$30$-$100\,pc from their nearest cluster, but
there is a clear lack of XRBs found coincident with the clusters. There are obvious observational
biases, the most important being that the true parent cluster is unknown; the distances quoted are
only those to the {\it nearest} cluster, not necessarily the parent cluster. Still, the XRB spatial
distribution relative to the clusters and their association seems significant and characteristic of
a non-random sample distribution (see \S\,3 of K04 for the relevant statistical analysis). It is
worth noting that similar results have been found in observations of the Antennae \citep{ZEZ02}.
In this {\em Letter}, our goal is to model in a self-consistent manner the population of binaries in
the cluster potential (assumed static for simplicity). We track the kinematic evolution of compact
object binaries in the absence of dynamical interactions and we follow their X-ray luminosity
($L_{X}$) evolution. We focus on two specific, testable points of comparison between the published
observations and our calculations: the average number of XRBs per cluster, and the median distance
of XRBs from their parent cluster (or nearest cluster in the observations). For point sources
brighter than $\sim$10$^{36}$\,erg\,s$^{-1}$, K04 have shown that the median distance from a cluster
is $\sim$100\,pc, with an average of $\lesssim$1 XRB per cluster (See Table 1 in K04 for the
specifics regarding each of the three galaxies considered). Here we show that these two quantities
can be calculated theoretically and the results appear consistent with the observations for the
range of cluster masses and ages relevant to the clusters observed, when considering only the
supernova kicks imparted to XRBs and their motion in cluster potentials. Guided by the $L_{X}$
sensitivity limits of the observations, we focus our analysis on XRBs with
$L_{X}\geq$5$\times$10$^{35}$\,erg\,s$^{-1}$, although the formation and evolution of XRBs to
lower $L_{X}$ ranges is included in our models.
In \S\,2 we describe the model methods used and how they are applied in our simulations. In \S\,3 we
describe our main results and compare them to the observations presented by K04. We discuss our
conclusions in \S\,4.
\section{Theoretical Modeling}
To generate the necessary stellar populations for the modeled clusters, we use the population
synthesis program {\it StarTrack} (developed by \citet{BKB02}; Belczynski et al.\ 2004, to be
submitted). We generate and evolve a population of binaries under a given set of conditions, such
as the initial mass function (IMF), supernova kick distribution, common envelope efficiency, etc.
With the resultant evolutionary parameters of the binaries at the time of the compact-object
(neutron star or black hole) formation, we place them in a cluster potential and track their motion
and X-ray luminosity as a function of time. In so doing, we ultimately generate a complete
evolutionary picture of the X-ray binaries in association with their parent cluster. As noted
already, we do not account for any stellar interactions in these young clusters, as our goal is to
examine whether supernova kicks alone can account for the observed spatial distribution of XRBs
relative to their parent clusters.
{\it StarTrack} is a sophisticated Monte Carlo population synthesis code that has been recently
updated to carefully account for binary mass-transfer phases and $L_{X}$ calculation. We account for
various phases of mass and angular momentum losses, and have implemented an integrated tidal
evolution method that is calibrated against observations of Galactic high-mass XRBs and of
circularization in open clusters. Some key features for this investigation include: (i) the
determination of the post-core-collapse systemic velocity for compact object binaries, and (ii) the
detailed calculation of the mass transfer rate between binary components, calibrated against
calculations with a stellar evolution code. Systemic velocities are a key to the proper
determination of the orbital trajectory, which is one of the primary concerns for this work. Also,
given the sensitivity of the observations to $L_{X}$, the mass transfer rate becomes a critical
factor in determining whether or not a given XRB is relevant to the K04 observations at any point in
its lifetime. For the $L_{X}$ determination we further apply a bolometric correction to the
theoretical value, to account for {\em Chandra}'s sensitivity band and the typical XRB spectra. This
bolometric correction is dependent on the system parameters (neutron star or black hole accretor,
wind accretion or Roche-lobe overflow) and assumptions about the typical spectra of different
sources, derived empirically from Galactic observations (Maccarone 2003, private communication;
\citet{PZDM04}). Specifically, for wind accretors we adopt bolometric corrections of 0.15 and 0.7, while for persistent disk sources
and for transient sources in outburst we adopt 0.5 and 0.7 (for neutron star and black hole accretors,
respectively).
It is well known that population synthesis calculations require a significant number of input
parameters. Since our main goal in this {\em Letter} is a proof-of-principle study, it is not
necessary to fully explore the parameter space. Instead we choose to consider a reference model with
parameter assumptions that are considered typical for binary evolution calculations (see model A in
\citet{BKB02}). We also consider a small set of other models where we vary the initial mass function
of binary primaries, as this most significantly affects the relative contribution of XRBs with
neutron stars and black holes that acquire systematically different systemic velocities.
The systemic velocities are determined by the natal kicks imparted during the formation of the
compact object. Neutron star natal kick magnitudes are drawn from the distribution derived from
\citet{ACC02}, based on current pulsar kinematics. It consists of two
Maxwellians with $\sigma = 90$ and $500$\,km\,s$^{-1}$ and relative weights of 2:3, respectively. Black hole natal kicks are linearly
scaled from the neutron star
kicks according to the compact object mass relative to a typical neutron star mass of 1.44\,M$_\sun$.
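As an illustration of the kick prescription just described, the following Python sketch samples the quoted two-component Maxwellian (this is not the {\it StarTrack} implementation; the $\sigma$ values in km\,s$^{-1}$ and the 2:3 weighting are those stated above):

```python
import math
import random

def sample_kick(sigma1=90.0, sigma2=500.0, w1=0.4):
    """Draw a natal kick speed (km/s) from the two-component Maxwellian:
    weight 0.4 on sigma = 90 km/s and 0.6 on sigma = 500 km/s (i.e., 2:3)."""
    sigma = sigma1 if random.random() < w1 else sigma2
    # A Maxwellian speed is the magnitude of an isotropic 3-D Gaussian.
    vx, vy, vz = (random.gauss(0.0, sigma) for _ in range(3))
    return math.sqrt(vx * vx + vy * vy + vz * vz)

random.seed(1)
kicks = [sample_kick() for _ in range(20000)]
mean = sum(kicks) / len(kicks)
# Mean of a single Maxwellian is 2*sigma*sqrt(2/pi); the mixture mean follows.
expected = 2 * math.sqrt(2 / math.pi) * (0.4 * 90.0 + 0.6 * 500.0)
assert abs(mean - expected) / expected < 0.05
```

Under the linear mass scaling described above, a black hole kick would then presumably be the sampled speed reduced by the factor $1.44\,$M$_\sun/M_{\rm BH}$.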
For the orbital evolution we use the calculated post-core-collapse systemic velocities of XRB
progenitors and combine them consistently with a Plummer model for the gravitational potential of
the model cluster with a given half-mass radius, $R_{1/2}$. To determine the spatial distribution of
the XRB progenitors right after compact-object formation (which generally occurs immediately prior
to the X-ray phase), we assume that the number density of stars is proportional to the mass density.
Initial systemic velocities consistent with the Plummer potential are also generated (see
\citet{AAR74}). We then apply the calculated post-core-collapse systemic velocities randomly
oriented with respect to the initial cluster velocities. We follow the motion of the binary as a
function of time and correlate position with the X-ray luminosity evolution calculated with the
binary evolution code.
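The orbital evolution step can be sketched as follows. This is a minimal illustration, not the authors' integrator: it assumes a leapfrog scheme, units of pc, M$_\sun$, and Myr, and the standard relation $R_{1/2} \simeq 1.305\,a$ between the half-mass radius and the Plummer scale length $a$; the initial position and kick are arbitrary example values.

```python
import numpy as np

G = 4.498e-3  # gravitational constant in pc^3 Msun^-1 Myr^-2

def plummer_accel(pos, m_cl, a):
    # Acceleration in a Plummer potential Phi(r) = -G M / sqrt(r^2 + a^2)
    return -G * m_cl * pos / (np.dot(pos, pos) + a * a) ** 1.5

def integrate_orbit(pos, vel, m_cl, a, dt, n_steps):
    # Leapfrog (kick-drift-kick): follow the post-supernova binary's motion
    pos, vel = pos.copy(), vel.copy()
    acc = plummer_accel(pos, m_cl, a)
    traj = [pos.copy()]
    for _ in range(n_steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = plummer_accel(pos, m_cl, a)
        vel += 0.5 * dt * acc
        traj.append(pos.copy())
    return np.array(traj)

# 5e4 Msun cluster with R_1/2 = 10 pc; Plummer scale a = R_1/2 / 1.305
a = 10.0 / 1.305
orbit = integrate_orbit(np.array([1.0, 0.0, 0.0]),
                        np.array([0.0, 3.0, 0.0]),  # ~3 pc/Myr systemic kick
                        5e4, a, 0.01, 10000)        # 100 Myr of evolution
radii = np.linalg.norm(orbit, axis=1)
```

For these example parameters the binary is bound (the central escape speed is $\sim$7.7\,pc\,Myr$^{-1}$), so the orbit oscillates within a few pc of the cluster center.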
The number of binaries modeled for a given cluster is directly proportional to the mass of that
cluster once the IMF index, mass-ratio-distribution parameters, and binary fraction are chosen. We
adopt a flat mass-ratio distribution and a binary fraction equal to unity in order to represent an
upper limit on the number of binaries, and therefore on the total number of X-ray sources. We
consider cluster masses in the range $10^4$\,M$_\sun - 10^6$\,M$_\sun$, with IMF indices of 2.35
and 2.7. The specific parameters of our simulations are shown in Table~1.
\begin{deluxetable}{cccccc}
\tabletypesize{\scriptsize}
\tablecaption{Parameters for model runs}
\tablewidth{0pt}
\tablehead{
\colhead{Model} & \colhead{\# of} & \colhead{Mass} &\colhead{IMF} & \colhead{R$_{1/2}$}\\
& \colhead{MC runs} & \colhead{(M$_\sun$)} & \colhead{index} & \colhead{(pc)}
}
\startdata
A & 1000 & 5$\times$10$^4$ & 2.35 & 10\\
B & 1000 & 5$\times$10$^4$ & 2.7 & 10\\
C & 100 & 5$\times$10$^5$ & 2.35 & 10\\
D & 100 & 5$\times$10$^5$ & 2.7 & 10\\
E & 7 & 5$\times$10$^6$ & 2.35 & 10\\
J & 500 & 5$\times$10$^4$ & 2.35 & 1
\enddata
\end{deluxetable}
It is interesting to note that although orbits are calculated typically for few to several hundred
Myr, binaries are X-ray sources for only a small part of their orbit and they are {\em bright}
(i.e., $L_X \geq 5\times10^{35}$\,erg\,s$^{-1}$) X-ray sources for an even smaller part. Each
system is evolved individually in a static cluster potential, and thus no interactions or cluster
evolution is allowed in the present analysis since our goal is to examine the effect of the
supernova kicks. This may not be a well justified assumption in general. Nevertheless, the clusters
relevant to our study are very young (few Myrs to $\sim10-20$\,Myr), so significant cluster
evolution is not expected, except for possibly in the most massive and most compact (small half-mass
radius) clusters.
We note that statistical effects play a significant role, especially in the low mass
($\sim$10$^4$M$_\sun$) clusters. Typically, no more than one XRB is bright enough to be seen in
these clusters and the position can vary significantly across the cluster for each different
simulation. Therefore, we consider a large number of Monte Carlo realizations for each parameter set
(cluster mass, half-mass radius, and IMF index). The lower mass clusters have the smallest number
of initial binaries and hence require the most realizations. We chose the number of realizations for
each cluster mass so that our results averaged over the many realizations remained unaffected at the
5\% level.
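The stopping criterion described above can be implemented as a simple running-mean check. This is an illustrative sketch, not the authors' procedure; `draw` is a hypothetical stand-in for one full cluster realization, here mocked with a Poisson count of bright XRBs.

```python
import numpy as np

def realizations_needed(draw, tol=0.05, batch=50, max_n=5000, seed=0):
    # Keep adding batches of Monte Carlo realizations until the running
    # mean changes by less than `tol` (fractionally) between batches.
    rng = np.random.default_rng(seed)
    samples = list(draw(rng, batch))
    prev = np.mean(samples)
    while len(samples) < max_n:
        samples.extend(draw(rng, batch))
        cur = np.mean(samples)
        if abs(cur - prev) <= tol * abs(prev):
            return len(samples), cur
        prev = cur
    return len(samples), np.mean(samples)

# Hypothetical stand-in: each realization yields a Poisson bright-XRB count
n, mean = realizations_needed(lambda rng, k: rng.poisson(0.2, k).tolist())
```

Lower-mass clusters, with fewer bright sources per realization, naturally require more batches before the running mean stabilizes.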
\section{Results \& Discussion}
Our calculations have yielded a wealth of results. Knowing both the trajectory and X-ray luminosity
of such a wide range of objects can help us understand XRB formation in young clusters both
statistically and in a system-by-system sense. Here, we focus on the statistical results where the
clusters are investigated as a grand average; this is more appropriate when comparing with large
populations of clusters (as analyzed in K04). Specifically, we focus on the statistical averages of
the two quantities quoted observationally in K04: the median distance of XRBs from the nearest
(parent in the models) cluster, and the mean number of bright XRBs per cluster (within 1000\,pc).
In Figure~1 (top) we plot the model average number of XRBs per cluster as a function of distance
from the parent clusters, each of 5$\times$10$^4$ M$_\sun$ and for a variety of cluster ages. These
ages are within the range of estimates for the observed clusters. To take into account the
uncertainties in the age estimates (typically a few Myr), we use an age-snapshot method, based on
which we determine the average number of XRBs as a function of radius for a specific ``instant'' in
time, and then average these results over each consecutive ``instant'' within the cluster age
estimate and its error.
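The age-snapshot averaging can be sketched as below. This is an illustrative reconstruction with hypothetical inputs: `radii_by_time` holds each stored snapshot's bright-XRB distances, and the radius grid and example numbers are not from the paper.

```python
import numpy as np

def age_snapshot_average(radii_by_time, times, age, age_err):
    """Average cumulative XRB counts N(<r) over all snapshots falling
    within the cluster age estimate and its error (hypothetical helper)."""
    times = np.asarray(times)
    keep = (times >= age - age_err) & (times <= age + age_err)
    snaps = [np.sort(r) for r, k in zip(radii_by_time, keep) if k]
    grid = np.logspace(0, 3, 50)              # 1 pc .. 1000 pc
    # N(<r) per snapshot via a sorted-array count, then the snapshot mean
    return grid, np.mean([np.searchsorted(s, grid) for s in snaps], axis=0)

# Three snapshots of bright-XRB distances (pc) at 5, 10, 15 Myr
grid, n_avg = age_snapshot_average(
    [np.array([1.0, 10.0]), np.array([5.0, 50.0]), np.array([100.0])],
    [5.0, 10.0, 15.0], age=10.0, age_err=5.0)
```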
\begin{figure}
\begin{center}
\includegraphics[width=6cm,height=17cm]{f1.eps}
\caption{Average number of XRBs seen within a given distance from their
parent clusters for the specific cluster ages listed. See Table~1 for the specifications of
the model parameters. All data shown here are calculated with an $L_X$ cutoff of 5$\times$10$^{35}$\,erg\,s$^{-1}$.}
\end{center}
\end{figure}
It is evident from Figure~1 that the XRB spatial distributions have a dramatic time dependence. For
``young'' clusters, the average XRB number per cluster rises to a maximum rapidly and very few XRBs
are found at large distances. This is primarily because even the unbound XRBs have not had enough
time to move away from their parent clusters. The median systemic velocity of the XRB systems is
$\sim$2$-$3\,pc\,Myr$^{-1}$, which limits the distance any XRB can reach. For older clusters, the
average XRB number exhibits a fairly slow increase with distance, up to 2\,kpc and sometimes
beyond. This
can potentially create a pollution effect and lead to difficulty in identifying the true
parent cluster in observations.
It is also evident that at certain ages XRBs are distinctly more numerous than at others. For
example, in Figure~1, the 5$\times$10$^4$\,M$_\sun$ (top) clusters with
$L_{X}\geq$5$\times$10$^{35}$\,erg\,s$^{-1}$ show more XRBs at 10\,Myr than at any other time in the
clusters' evolution. We also find that this peak age is dependent on the $L_{X}$ cut-off. Fully
exploring these dependencies could allow us to derive general conclusions about XRB populations
dependent only on the average ages of the young cluster population.
In Figure~1 (middle and bottom) we present our results for clusters of 5$\times$10$^5$ and
5$\times$10$^6$\,M$_\sun$, respectively. Note that the behavior is similar for all masses, except
that the average number of XRBs at a given radius scales with the mass of the cluster almost
linearly. This is due to the direct relationship between the number of binary systems modeled and
the cluster mass.
\begin{deluxetable*}{ccccccccccccc}
\tabletypesize{\footnotesize}
\tablecaption{Mean XRB Number ($\overline{N}_{XRB}$) and Median\tablenotemark{a} XRB distance from cluster center (R$_{median}$)} \tablewidth{0pt}
\tablecolumns{13}
\tablehead{
\colhead{} & \multicolumn{2}{c}{5 Myr} & \multicolumn{2}{c}{10 Myr} & \multicolumn{2}{c}{15 Myr} &
\multicolumn{2}{c}{20 Myr} & \multicolumn{2}{c}{25 Myr} & \multicolumn{2}{c}{50 Myr} \\
\colhead{Model} & \colhead{$\overline{N}_{XRB}$} & \colhead{R$_{median}$} &
\colhead{$\overline{N}_{XRB}$} & \colhead{R$_{median}$} &
\colhead{$\overline{N}_{XRB}$} & \colhead{R$_{median}$} &
\colhead{$\overline{N}_{XRB}$} & \colhead{R$_{median}$} &
\colhead{$\overline{N}_{XRB}$} & \colhead{R$_{median}$} &
\colhead{$\overline{N}_{XRB}$} & \colhead{R$_{median}$} \\
& & (pc) & & (pc) & & (pc) & & (pc) & & (pc) & & (pc)
}
\startdata
A & 0.19 & 10.5 & 0.23 & 17.5 & 0.18 & 44.5 & 0.18 & 54.5 & 0.19 & 58.5 & 0.04 & 146.5 \\
B & 0.14 & 10.5 & 0.18 & 17.5 & 0.12 & 46.5 & 0.12 & 67.5 & 0.16 & 76.5 & 0.04 & 104.5 \\
C & 2.00 & 9.5 & 2.55 & 15.5 & 1.72 & 27.5 & 1.87 & 46.5 & 2.26 & 51.5 & 0.41 & 80.5 \\
E & 20.5 & 16.5 & 24.3 & 17.5 & 17.3 & 18.5 & 18.1 & 24.5 & 20.4 & 26.5 & 6.00 & 29.5 \\
J & 0.18 & 1.5 & 0.25 & 2.5 & 0.18 & 13.5 & 0.16 & 31.5 & 0.21 & 40.5 & 0.04 & 54.5
\enddata
\tablenotetext{a}{Only XRBs within 1000\,pc are used to calculate the values listed here in order to
compare with K04.}
\end{deluxetable*}
We calculate the median distance and mean number of XRBs with
$L_{X}\geq$5$\times$10$^{35}$\,erg\,s$^{-1}$ within 1000\,pc (Table~2), in order to compare
appropriately with K04.
{\em Mean number of XRBs per cluster:} We find the theoretical mean XRB number per cluster to vary
significantly from $\sim 0.1$ to $\sim 10$, depending on the cluster mass. Therefore, it is
possible to reproduce the results in K04 by taking contributions from a number of clusters of
different masses. Two of the three galaxies discussed in K04 (M82 and NGC~5253) have a mean number
of observed XRBs of $\sim 1$ per cluster, while NGC~1569 seems to have a very small number of XRBs
(only $\simeq 0.25$ per cluster). This difference would point towards NGC~1569 having, on average,
smaller-mass clusters, even though outliers at high masses can still exist. A difficulty in the
comparison arises because the properties of the clusters in these galaxies are difficult to
determine observationally. Those with measured masses are skewed to higher masses
($\gtrsim$1$\times$10$^5$\,M$_\sun$) and younger ages ($\lesssim$15\,Myr) simply because they are
selected photometrically (Gallagher 2004, private communication). Therefore, developing a proper
theoretical cluster distribution for comparison is rather challenging without further observational
studies of the cluster populations.
{\em Median distance of XRBs from the cluster:} Our results (Table 2) indicate a strong dependence
of the median XRB distance on the age and a moderate dependence on the cluster mass. For clusters
with a half-mass radius of $10$\,pc and masses $\lesssim5\times10^5$\,M$_\sun$, median distances
reach values of $30-100$\,pc (similar to those observed) at ages of $15$\,Myr and older. Only very
massive clusters of $\sim 5\times10^{6}$\,M$_{\sun}$ reach such distances later at $\sim 50$\,Myr.
These ages and moderate masses are consistent with the current observational estimates, although
massive and older clusters are also present in the photometrically selected clusters in K04
(Gallagher 2004, private communication).
It should also be noted that, for the highest cluster mass we consider (5$\times$10$^6$ M$_\sun$),
even the oldest clusters seem to show more binaries than what is observed. This clearly implies that
starbursts are not dominated by such massive clusters, and this is not surprising. However, these
more massive clusters may also be affected by dynamical cluster evolution and stellar interactions
leading to more binary disruptions and ejections. Thus we would expect the average number of XRBs per
cluster to decrease at all ages and distances.
It should be noted that we have assumed a binary fraction of unity, and therefore the mean XRB
numbers could be overestimated. This is true also because projection effects have not been taken
into account, and our numbers represent the radial distance the XRBs have traveled. Also, we note
that changes in the power law IMF index of the cluster produce noticeable, but largely insignificant
changes in the cluster profiles. For example, changing the IMF index from 2.35 to 2.7 decreases the
average number of binaries at or about the 10\% level for each timestep. This effect may become
more important, especially for very steep IMFs, such as those for clusters proposed by \citet{KW03}
where the index can go as high as 3.2. Lastly, changes in the half-mass radius of the cluster
dramatically change the median XRB distance for a given mass. Very small values (model J in
Table~2) tend to limit XRB ejection, as the potential is deeper. In these tight clusters, it is
likely that dynamics will play a non-negligible role, depending on their age.
\section{Conclusions}
With detailed population simulations of XRBs and a simple treatment of gravitational potentials of
young clusters we have shown that the significantly low XRB numbers per cluster observed in
starbursts can be explained as being largely due to supernova kicks imparted to XRBs at
compact-object formation that lead to XRB ejection from the cluster potential, as heuristically
suggested by observational studies (K04; \citet{PZDM04}). Derived XRB median distances are also
consistent with current estimates of cluster masses and ages, although a more direct comparison
requires more detailed observational constraints of the cluster properties.
This work opens many possible avenues in which to continue this study, some of which include an in
depth look into the systematics generated by our stellar evolution code, such as how our results
change with a broader range of masses and IMFs, as well as additional stellar evolution parameters
such as the common envelope efficiency. We also intend to look at the detailed populations created,
and search for specific correlations between types of XRBs, their ages, and positions in the
clusters. Still further, we have largely ignored the low-luminosity XRBs in this analysis. This
population may indeed be detectable, if present in large enough quantities, as diffuse emission.
And lastly, it is possible that for the more massive, compact, and older clusters, dynamics play a
non-negligible role in the XRB evolution. We hope to extend our modeling to include dynamical
considerations such as this in the near future.
\acknowledgments
We are grateful to J.\ Gallagher, T.\ Maccarone, and P.\ Kaaret for useful discussions. This work was
supported in part by a Packard Foundation fellowship and a NASA Chandra Award to V.\ Kalogera.
\clearpage
\section{Introduction}
Human languages are grouped into families, like the Indo-European
languages, which may all have arisen from one common original
language. For example, ancient Latin split into Portuguese,
Spanish, French, Italian, Romanian and other languages during the
last two millennia. On the other hand, many of the present
languages are spoken only by a relatively small number of people
and are in danger of extinction \cite{science,sutherland}.
In this way languages are similar to biological species. We
thus try to simulate languages using methods similar to the
modelling of speciation \cite{eigen,pmco}.
A language for us can be a human language (including Fortran,
...), a sign language, a system of bird songs, a human alphabet,
or any other system of communication. We simulate it by a
string of 8, 16 or 30 bits and define languages as different if
they differ in at least one bit. The position of the bit in
the string plays no role, in contrast to the Penna ageing model
from which program elements are taken \cite{book}.
\section{Model}
We start with one person, i.e. $N(t=0) = 1$,
speaking language zero (all bits are
zero). Then at each iteration $t$ all $N(t)$ living people
are subject to a Verhulst death, i.e. they die with probability
$N(t)/K$ where $K$ in biology is often called the carrying
capacity and incorporates the limitations of food and space.
Each survivor produces one offspring at each iteration which
uses the same bitstring apart from one random mutation (bit
changed from 0 to 1 or from 1 to 0) which happens with a
probability $p$ per person (or $p/8$ per bit if the language has
8 bits). Usually, all bit-strings are assumed to be equally fit,
in contrast to typical biological models \cite{eigen,pmco}.
Also at each iteration, each individual can switch
from its present language to another randomly selected one,
with probability $$(2N(t)/K)(1-x^2)$$
where $x$ is the fraction of all people speaking the present
language of that individual. The first factor, which approaches
unity for long times, ensures that at the beginning with a low
population density there is not yet much competition between
languages, while in the later stationary high population
the less spoken languages are in danger of extinction. The
exponent two takes into account that normally two people
communicate with each other; thus the survival probability of
a language is proportional to the square of the number of people
speaking it.
(The final population is $K/2$ and not $K$ since we determine the Verhulst
probability $y = N(t-1)/K$ at the beginning of iteration $t$ and leave it at
that value for the whole iteration. The Verhulst deaths thus reduce the
population by a factor $1-y$, and if each of the survivors has $b$ offspring,
the population is multiplied by another factor $1+b$. For a stationary
population, these two factors have to cancel: $(1-y)(1+b) = 1$, giving
$y = b/(1+b) = 1/2$ for our choice $b=1$.)
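The complete update rule (Verhulst deaths, reproduction with mutation, and size-dependent switching) can be reproduced in a short simulation. This is an independent re-implementation for illustration only; in particular, drawing the switching target uniformly from all $2^{\rm bits}$ languages is one reading of ``another randomly selected one'', and the parameter values are arbitrary examples.

```python
import numpy as np

def simulate(K=10000, p=0.16, bits=8, t_max=300, seed=1):
    rng = np.random.default_rng(seed)
    lang = np.zeros(1, dtype=np.int64)       # one founder, language zero
    for _ in range(t_max):
        y = len(lang) / K                    # Verhulst factor, fixed per step
        lang = lang[rng.random(len(lang)) > y]          # Verhulst deaths
        kids = lang.copy()                              # one offspring each
        mut = rng.random(len(kids)) < p                 # mutate with prob p
        kids[mut] ^= 1 << rng.integers(0, bits, mut.sum())  # flip one bit
        lang = np.concatenate([lang, kids])
        # Switching with prob (2N/K)(1 - x^2), x = fraction speaking
        # the individual's present language
        counts = np.bincount(lang, minlength=1 << bits)
        x = counts[lang] / len(lang)
        switch = rng.random(len(lang)) < (2 * len(lang) / K) * (1 - x**2)
        lang[switch] = rng.integers(0, 1 << bits, switch.sum())
    return lang

pop = simulate()
```

Consistent with the derivation above, the population settles near $K/2$; histogramming `np.bincount(pop)` then gives the language-size distribution analyzed in the next section.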
\section{Results}
For an eventual stationary population of ten million at $t=1000$, as a
function of increasing mutation rate $p$, a sharp transition
was observed between a dominance regime at low and a smooth
distribution at high mutation rates $p$, Fig.1:
i) For low $p$,
one language, usually the one with all bits zero, contains
nearly all individuals, and the mutant languages differing
from the dominant one by one bit only contain most of the rest.
This behaviour is hardly realistic except for alphabets.
ii) For
high mutation rates $p$, on the other hand, no language contains
a large fraction of the population, and the distribution of
language sizes (measured as the number of people speaking it)
is roughly log-normal with higher statistics for small
languages. This result agrees well with reality
\cite{sutherland}.
In Fig.1, part a shows the drastic difference between dominance (+) and
smooth distribution ($\times$, stars), part b the slow approach to a symmetric
log-normal distribution with increasing mutation rate. (We bin the
number of people speaking one language into powers of two, lumping
together all languages spoken by 33 to 64 people, for example.)
In the dominance regime i) of low $p$, the number $L(t)$ of
languages first increases from unity towards about $10^2$
and then decreases again to about a dozen (not counting languages with
less than 10 speakers). In the smooth regime
ii) of high $p$ the number $L$ of languages first increases and
then reaches a plateau, which may even equal the maximal
number $M = 2^8$ or $M = 2^{16}$ for 8 or 16 bits, respectively.
Also for a fixed mutation rate as a function of the final
population $K/2$ we see a change from the dominance regime at
low populations to a smooth distribution at high populations,
Fig.2. For very large populations a rather narrow distribution
of language sizes develops, i.e. the whole population is
distributed about equally among the surviving languages. Fig.3
shows for an intermediate population a power law on the
small-size side of the histogram, and a parabola-like curve, meaning a
log-normal distribution in this log-log plot, for large
language sizes.
A simple scaling law, seen in Fig.4, predicts the behaviour of
the number $L$ of languages as a function of the maximum possible
number $M$ of languages and the final population $N_\infty
\simeq K/2$:
$$ L/M = f(M/N_\infty) \quad .$$
The scaling function $f(z)$ equals unity for small $z$ and
decays as $1/z$ for large $z$. This means that for a population
much larger than the possible number of languages, each language
possibility is realized, while in the opposite limit each small
group of individuals speaks its own language. Therefore we expect
this simple scaling law to be valid also for longer bit-strings
than the 8 and 16 bits simulated here. (32 bits allow for 4096 Mega
languages, requiring too much computer memory in our program;
30 bits still worked.)
We also modified the model to take into account the influence of a ``superior''
language on another, like the many words of French origin in the German
language. With some probability $q$, at the moment of a mutation the new
value of a bit is not the opposite of the old value (as done above) but is
the value of the corresponding bit in the superior language. We define
as superior language the bit-string having one everywhere except for a zero
in the left-most position, i.e. 127 for 8 and 32767 for 16 bits. The larger
$q$ is (in the smooth regime of large $p = 0.48$ per individual),
the higher is the fraction of samples ending with the superior language
as the largest one. About half of the samples have the superior language
as the numerically strongest one if $q \simeq 0.02$ for 8 and 0.2 for 16
bits. If for 16 bits we take 127 instead of 32767 as the superior language,
the results do not change much. (These probabilities hold for 10 million people
and are appreciably larger, 0.05 and 0.34, for one million.)
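The modified mutation step can be sketched as follows; a minimal illustration assuming 8-bit strings and the superior language 127 defined above, with the target bit chosen elsewhere (with probability $p$ per person, as in the basic model).

```python
import numpy as np

SUPERIOR = 127  # 01111111: ones everywhere except the left-most bit (8 bits)

def mutate(lang, bit, q, rng):
    # With probability q the mutated bit takes the superior language's
    # value at that position; otherwise it flips as in the basic model.
    if rng.random() < q:
        sup_bit = (SUPERIOR >> bit) & 1
        return (lang & ~(1 << bit)) | (sup_bit << bit)
    return lang ^ (1 << bit)

rng = np.random.default_rng(0)
example = mutate(0, 3, q=0.0, rng=rng)   # plain flip of bit 3 gives 8
```

For $q$ near unity every mutation pulls a bit-string toward 127, which is why increasing $q$ raises the fraction of samples ending with the superior language as the largest one.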
\section{Discussion}
Our model is more microscopic than the previous ones known to us
\cite{strogatz,finland} in that individuals are born, give
birth, and die, instead of being lumped together into one
differential equation. It also is more realistic since we allow
for numerous languages instead of only two. For the latter
choice, we would have to reduce our bit-string to a single bit,
with $M = 2$ and thus $M/N \ll 1$, corresponding to the left
part of Fig.4. There we observe $L = M$, that means both
languages survive. In \cite{strogatz} only one language
survived since one was assumed to be superior compared to the
other. We, on the other hand, regarded all languages as
intrinsically equally fit, except for the last paragraph.
\bigskip
We thank P.M.C. de Oliveira for suggesting to simulate languages, and the
J\"ulich supercomputer center for JUMP time.